To Create Trusted Agentic AI, Seek Community-Driven Innovation


AI has moved from experimentation to high-stakes adoption. Across industries, competitive pressure and rising user expectations are driving leaders to embed AI into core workflows to increase automation, improve efficiency and accelerate delivery. Enter: agentic AI, systems that can think, plan and act independently.
However, leaders also recognize that this independence introduces new attack surfaces, operational risks and governance challenges. Some level of caution is healthy, especially since Gartner predicts that, by 2029, 50% of successful attacks on AI agents will exploit access-control weaknesses through direct or indirect prompt injection.
This leads to a fork in the road: Do organizations build walls around agentic AI, or open doors to broader collaboration?
As with any transformative technology, such as Linux or Kubernetes, building better, more secure AI agents requires community-driven innovation. By drawing on contributors across hyperscalers, startups, financial services, healthcare, government and more, open development delivers broad, diverse peer review and rapid risk discovery. It also distributes oversight across global engineering communities rather than concentrating responsibility within a single vendor.
As agents take on critical applications, this model of collaboration becomes essential. There is no doubt that AI agents will become powerful tools; the real question is how to ensure organizations can trust them.
Scrutiny over secrecy
Closed systems tend to magnify small errors. Minor problems can cascade when an agent receives incomplete context, misinterprets permissions or interacts with unstable infrastructure. If the design, retrieval pipelines and operational reasoning behind the agent are opaque, tracing a failure to its source becomes slower and more difficult.
When building agentic systems, always lead with the assumption that vulnerabilities will surface, data may not be ready for the agent and real-world conditions will differ from expectations. No technology is perfect, and there will be gaps. In a closed environment, however, discovery and repair are often slower because visibility into the system's internals and tooling is limited.
Open development removes some of these barriers. More contributors mean more testing across more scenarios, deeper peer review of architectural decisions and faster discovery of vulnerabilities. Organizations often assume that transparency increases exposure, but experience shows that widely reviewed systems reveal problems quickly, before they become systemic. In open ecosystems, issues can be publicly documented, collaboratively investigated and mitigated by contributors with diverse domain expertise. That collective response strengthens resilience and reduces long-term operational risk.
Trust starts with the data layer
The discussion around agentic AI often focuses on model capabilities such as reasoning, planning and tool use. But in production systems, trust depends more on the data and retrieval layer than on the model itself.
Agents operate in context, and if the retrieval, analysis and observability systems that supply that context lack accuracy, recency or traceability, agents can produce incorrect output, take the wrong actions or corrupt workflows. Often, failures attributed to the AI are actually caused by gaps in retrieval quality, permission visibility or system telemetry.
These challenges are driving engineering teams to integrate agent workflows directly into production search, observability and analytics platforms. Logs, metrics, traces, structured data and semantic search pipelines increasingly serve as the shared operational foundation of AI agents.
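As a minimal sketch of what that instrumentation can look like, the snippet below wraps a hypothetical agent tool call in an OpenTelemetry span and attaches the agent's decision context as span attributes. The tool name, attribute keys and `run_tool` helper are illustrative assumptions, not part of any specific platform's API.

```python
# Minimal sketch: wrapping an agent tool call in an OpenTelemetry span
# so the decision and its inputs become inspectable telemetry.
# Assumes `pip install opentelemetry-api`; with no SDK configured,
# the API falls back to a no-op tracer, so this runs as-is.
from opentelemetry import trace

tracer = trace.get_tracer("agent.telemetry.sketch")

def run_tool(tool_name: str, query: str) -> str:
    """Hypothetical tool executor, standing in for a real integration."""
    return f"result of {tool_name} for {query!r}"

def call_tool_with_telemetry(tool_name: str, query: str, reason: str) -> str:
    # One span per tool call: what was invoked, with what input, and why
    # the agent chose it. This is the decision telemetry that lets
    # operators reconstruct not just what the agent did, but why.
    with tracer.start_as_current_span("agent.tool_call") as span:
        span.set_attribute("agent.tool.name", tool_name)
        span.set_attribute("agent.tool.query", query)
        span.set_attribute("agent.decision.reason", reason)
        try:
            result = run_tool(tool_name, query)
            span.set_attribute("agent.tool.result_length", len(result))
            return result
        except Exception as exc:
            span.record_exception(exc)  # failures become traceable, too
            raise

if __name__ == "__main__":
    print(call_tool_with_telemetry(
        "search_logs", "checkout errors last hour",
        reason="user asked why payments are failing",
    ))
```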
Modern AI stacks increasingly treat retrieval, analytics and visualization as primary control layers rather than supporting components. By combining semantic and keyword retrieval, using a proven, integrated vector database, enforcing fine-grained access controls and instrumenting agent workflows with logs, traces and decision telemetry, teams can see not only what an agent produced, but why it produced it. This architectural visibility lets developers validate baseline data, set drift tolerances, reproduce failures and keep refining orchestration logic as workloads scale. Ultimately, trusted agents come not only from model development, but from infrastructure that makes every content source, query method and automated action testable and auditable.
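To make the hybrid retrieval idea concrete, here is a self-contained sketch that blends a keyword-overlap score with a vector-similarity score and logs the evidence behind each ranking. The tiny corpus, the toy bag-of-words embedding and the 50/50 weighting are illustrative assumptions; a production system would use a real vector database and tuned weights.

```python
# Sketch: hybrid retrieval combining keyword overlap with vector
# similarity, logging the evidence behind each ranking decision.
import logging
import math

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("hybrid-retrieval")

DOCS = {
    "doc1": "agent access control and prompt injection defenses",
    "doc2": "kubernetes cluster autoscaling configuration guide",
    "doc3": "tracing agent decisions with logs and telemetry",
}

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words 'embedding' standing in for a real model."""
    counts: dict[str, float] = {}
    for token in text.lower().split():
        counts[token] = counts.get(token, 0.0) + 1.0
    return counts

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    q_vec = embed(query)
    scored = []
    for doc_id, text in DOCS.items():
        kw = keyword_score(query, text)
        vec = cosine(q_vec, embed(text))
        score = 0.5 * kw + 0.5 * vec  # assumed equal weighting
        # Decision telemetry: record *why* each document ranked as it did.
        log.info("doc=%s keyword=%.3f vector=%.3f hybrid=%.3f",
                 doc_id, kw, vec, score)
        scored.append((doc_id, score))
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

if __name__ == "__main__":
    print(hybrid_search("agent telemetry and logs"))
```

Logging both component scores alongside the final ranking is what turns the retrieval layer into a control layer: when an agent acts on a bad document, the record shows whether keyword matching, vector similarity or the weighting itself was at fault.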
It is clear that trustworthy agentic AI will not come from hiding behind proprietary walls. It will come from building systems that are transparent, inspectable and continuously improved by the professional community. Community-driven innovation ensures that the infrastructure agents depend on, including retrieval pipelines, observability systems and more, can be widely tested and collaboratively improved, delivering agentic AI that organizations can truly rely on.



