
Cybersecurity threats escalate as agentic AI rises

Nov 20, 2025, 1:45 PM


  • High-profile breaches like the Jaguar Land Rover attack illustrate significant vulnerabilities in organizations.
  • Experts advocate for a shift in mindset from 'if' cyberattacks will occur to 'when', emphasizing preparedness.
  • Adopting a comprehensive and proactive cybersecurity strategy is essential to mitigate risks and ensure effective recovery.

Story

In the wake of notable cybersecurity breaches, the discussion around enhancing protection strategies has intensified. Countries and organizations are increasingly acknowledging their vulnerabilities after significant incidents such as the Jaguar Land Rover cyberattack, described as one of the most damaging in British history. Experts from Deloitte, Workhuman, and BearingPoint stress the urgency of adopting a zero-trust framework, and the shift from viewing cyberattacks as mere possibilities to treating them as certainties is critical for organizations striving for resilience. They advocate proactive strategies that encompass prevention, mitigation, and recovery to guard against increasingly sophisticated threats.

The emergence of agentic AI has added yet another layer of complexity to the cybersecurity landscape. While the technology can streamline operations, organizations must tread carefully: agentic systems can misinterpret commands or be manipulated through prompt-injection attacks. Experts like Liam Farrell underscore the need for a cautious approach when deploying AI systems that handle sensitive data, and as companies navigate these challenges they must ensure that essential protective measures are in place, following comprehensive guidelines that are too often overlooked. Claire Wilson emphasizes the importance of recognizing cybersecurity as a strategic risk rather than a purely technical issue; a proactive stance includes understanding the impact a cyber incident can have on customer trust and reputation.

By grounding cybersecurity protocols in a robust, layered defense strategy, organizations can enhance their ability to withstand attacks and respond effectively. Continuous improvement and adaptation to emerging threats are imperative for maintaining a strong cybersecurity posture, and in this evolving landscape, compliance with best practices is paramount.
Despite the availability of detailed guidelines for system hardening and cloud security, many organizations fail to implement them adequately. This inconsistency contributes to recurring vulnerabilities that jeopardize security. The experts collectively agree that the resilience of an organization hinges on its ability to adapt to the changing threat environment and embrace advanced technologies that comply with strong regulatory standards. Therefore, a commitment to cybersecurity is not merely reactive but foundational to sustaining trust and safety in the digital age.
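The prompt-injection risk described above is often mitigated by never letting model output trigger a privileged action directly. A minimal sketch, assuming a hypothetical agent that proposes tool calls as name/argument pairs (the tool names and policy tiers here are illustrative, not from any specific product):

```python
# Hypothetical allow-list gate for an agentic AI system: every tool call the
# model proposes is checked against an explicit policy before execution, so
# instructions injected via untrusted content cannot invoke arbitrary tools.

ALLOWED_TOOLS = {"search_docs", "summarize", "read_public_page"}
PRIVILEGED_TOOLS = {"send_email", "delete_record", "transfer_funds"}

def gate_tool_call(tool_name: str, argument: str) -> str:
    """Return 'allow', 'require_human', or 'deny' for a proposed tool call."""
    if tool_name in ALLOWED_TOOLS:
        return "allow"          # low-risk, read-only tools run automatically
    if tool_name in PRIVILEGED_TOOLS:
        return "require_human"  # high-impact actions need explicit sign-off
    return "deny"               # unknown (possibly injected) tools are refused

print(gate_tool_call("search_docs", "zero trust"))      # allow
print(gate_tool_call("send_email", "x@example.com"))    # require_human
print(gate_tool_call("wipe_backups", "all"))            # deny
```

The design choice is defense in depth: even if a prompt injection succeeds in steering the model, the gate limits what the compromised output can actually do.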

Context

The impact of agentic AI on cybersecurity is a subject of growing concern and interest as the capabilities of artificial intelligence advance significantly. Agentic AI refers to systems that can operate autonomously and exhibit goal-directed behavior without continuous human oversight. In the field of cybersecurity, these systems promise to revolutionize the way threats are detected, managed, and mitigated. However, the rise of such technologies also presents a myriad of challenges and risks that must be understood and addressed. While agentic AI can enhance the speed and accuracy of threat detection through real-time analysis of vast datasets, it also introduces new vulnerabilities that adversaries can exploit.

One significant advantage of agentic AI in cybersecurity is its ability to learn from historical data and identify patterns indicative of potential security breaches. These systems can conduct predictive analysis to detect and respond to anomalies much faster than human operators could. Furthermore, agentic AI can automate routine security tasks, such as patch management and incident response, reducing the time that security teams spend on manual interventions. Such automation allows human cybersecurity professionals to focus their expertise on more complex, high-level strategic challenges. Nonetheless, reliance on AI systems introduces the risk of algorithmic bias and false positives, which could distract from legitimate threats or lead to unnecessary alerts.

Moreover, the deployment of agentic AI in cybersecurity raises ethical questions and concerns regarding accountability. As these systems evolve, understanding who is liable for decisions made by autonomous agents during security incidents becomes critical. In addition, the potential for adversarial attacks—where hackers manipulate AI systems to malfunction or produce inaccurate results—poses another layer of complexity.
Cybercriminals could develop techniques to bypass AI-driven defenses by exploiting the weaknesses in their algorithms, leading to a growing cat-and-mouse game between attackers and defenders in the digital realm. Organizations must stay ahead by continually updating their strategies and reinforcing their AI systems to mitigate these threats. Ultimately, the impact of agentic AI on cybersecurity is a double-edged sword. While it offers opportunities for enhanced security protocols and efficient threat management, the associated risks cannot be ignored. The need for robust policies and governance surrounding the use of AI in cybersecurity will be paramount in ensuring that these technologies are developed and deployed responsibly. As we move forward, collaboration between researchers, policymakers, and technology developers will be essential to harness the full potential of agentic AI while safeguarding against its inherent risks, fostering an environment where innovation can thrive alongside security.

2026 All rights reserved