Introduction
In the first blog of this series, we explored the rise of agentic AI—systems that don’t just follow instructions to set their own goals, adapt to changing environments, and act autonomously in complex scenarios. This shift marks a profound change in AI: from reactive tools to proactive agents capable of independent decision-making.
But what does agentic AI look like in practice? How are these systems being deployed today, and what new security risks do they introduce? In this post, we’ll bring agentic AI to life by examining concrete use cases across industries, with a special focus on cybersecurity. We’ll also unpack the unique challenges and risks that come with empowering machines to act on their own.
From concept to reality: Agentic AI across industries
Imagine a security system that doesn’t just wait for alerts, but works autonomously for hours or even days: hunting threats, adapting its defenses, and coordinating responses across teams and technologies. This is the promise of agentic AI. Unlike legacy tools bound by static rules and constant human oversight, agentic systems can scan for anomalies, prioritize vulnerabilities, and take action, even creating new task-specific agents and collaborating via Teams or email.
Cybersecurity
Agentic AI transforms incident response. These agents never need breaks and can immediately flag any abnormal activity, analyze the situation, isolate affected network, and deploy countermeasures such as blocking malicious traffic or restoring clean backups. Throughout this they can keep human teams informed through familiar channels like email or collaboration platforms. They even prepare documentation for review. Key differentiators for these systems is their ability to retain memory across sessions, enabling continuous learning and faster, smarter responses. They can manage multiple incidents at once, scaling their response beyond what a human team could handle. For businesses, this translates to less downtime and lower financial risk; for governments, it means critical infrastructure is better protected; and for individuals, it ensures personal data and services are safeguarded with greater speed and reliability.
Case study: Kontent.ai
One compelling real-world scenario comes from Kontent.ai, which integrates AI agents into security operations to automate response to phishing and spam. By leveraging agentic AI, Kontent.ai’s system ingests suspicious email reports, cross-references threat indicators against internal and public intelligence sources, and suggests classifications such as phishing or spam. The AI then replies to users with a structured analysis and recommended next steps, such as blocking a sender or escalating to the SOC team. The impact is significant: responses are delivered in minutes rather than hours, actions remain consistent across time zones, and analyst fatigue is reduced by automating repetitive tasks. Crucially, Kontent.ai enforces strict guardrails sandboxed evaluation runs, enumerated permissions, human-in-the-loop gates for escalations, and transparent provenance for every AI-generated response. This ensures that while agentic AI accelerates and standardizes security workflows, human accountability remains central.
Healthcare
Healthcare is another domain where agentic AI is revolutionizing practice. These systems, operate as proactive teammates – monitoring patient data, interpreting lab results, and tracking subtle changes in real time. They can autonomously trigger early interventions. For example, an agentic AI might recommend a change in medication, alert a nurse to a patient’s deteriorating status, or even order additional diagnostic tests. In some hospitals, agentic AI coordinates care across teams, ensuring that the right specialists are notified and that resources are allocated efficiently. This autonomy is especially valuable in environments where staff are stretched thin.
Diplomacy
Finally, agentic AI has great potential to be used in diplomacy, where the ability to rapidly analyze information and coordinate responses is increasingly vital. Diplomatic challenges are increasingly complex, involving dozens of stakeholders, rapid information flows, and the constant threat of escalation. Agentic AI is poised to become a critical tool in this environment. Imagine a scenario where multiple countries are responding to a fast-moving cyber incident. Traditionally, diplomats and crisis managers would rely on manual coordination, phone calls, and slow-moving protocols to share information and agree on a response. With agentic AI, autonomous systems can monitor global data feeds, flag emerging threats, and instantly alert relevant parties of the evolving situation. These agents could help orchestrate multi-state responses, ensuring that information is shared securely, resources are allocated efficiently, and escalation pathways are triggered only when absolutely necessary. Another powerful use is its ability to mediate sensitive negotiations. Autonomous agents can analyze vast amounts of data, social media sentiment, economic indicators, historical precedents, and provide diplomats with actionable insights in real time. They can simulate negotiation outcomes, identify potential sticking points, and even recommend compromise solutions based on the interests of all parties. This doesn’t replace human judgment or in person diplomacy, but it augments it, allowing diplomats to focus on the nuances of relationship-building and trust.
The security risks of autonomy
The introduction of new technologies consistently brings opportunities and risks – and agentic AI systems are no exception. Some risks are familiar, while others are novel.
Expanded access and attack surface
To function effectively, these systems require access to many of the same resources as human users: databases, email platforms, collaboration tools such as Teams and Slack, cloud services, and internal applications. They must be able to read, analyze, and, when necessary, modify files, records, and communications. For instance, an agentic AI assisting with cybersecurity may need permission to examine network logs, user accounts, and incident response tools.
Granting such access makes agentic AI part of the organizational attack surface. These systems operates at machine speed, often across multiple systems simultaneously and sometimes without direct oversight. This heightened capability means that software errors, misconfigurations, or breaches can result in rapid and far-reaching consequences. As a result, organizations face a more complex risk environment where misuse, unintended actions, and excessive reliance on AI are significant concerns.
Misuse and manipulation
Misuse is not hypothetical. Threat actors are increasingly attempting to manipulate instructions, poison knowledge sources, and override AI behavior through prompt injection attacks. These exploit hidden instructions—like invisible text—to bypass safeguards and for example leak sensitive data. Microsoft’s AI Red Team recently demonstrated this through a memory poisoning attack on an AI email assistant.
Oversight and governance challenges
Loss of human oversight is another challenge. Traditional “human-in-the-loop” models are insufficient. As agentic systems proliferate, organizations need scalable governance mechanisms, such as automated escalation channels and “phone-home” alerts, to detect and respond to emergent behaviors. Without robust oversight, agents may make decisions that humans cannot easily audit or reverse, or overwhelm reviewers with thousands of requests.
Vulnerabilities and escalation risks
Systemic vulnerabilities are a growing concern, especially in critical infrastructure. If an agent is compromised, its autonomy can be weaponized to disrupt operations, manipulate data, or propagate attacks across networks. Technical solutions like fingerprinting, watermarking, and identity verification are essential to protect broader systems from rogue or adversarial agents.
Finally, escalation and crisis governance present unique challenges. Agentic AI can accelerate decision cycles to “zero seconds,” outpacing traditional escalation frameworks. Without built-in delays, auditability, and escalation pathways, autonomous agents could trigger unintended consequences in high stakes environments. Guardrails such as enumerated authorities, strict permissions systems, and outcome multiplicity checks are critical.
What’s next
Agentic AI is already transforming capabilities across industries. Its autonomy offers unprecedented speed, scale, and adaptability, but also introduces new risks. Acceleration of its adoption requires both vision and caution. In the next post of in this series, we’ll explore reasons for optimism and offer policy guidance for a future where agentic AI serves as a force for good. By building trust, transparency, and robust guardrails today, we can harness the power of agentic security while safeguarding our most critical systems and values.