AI in cybersecurity

Rethinking AI in cybersecurity: what’s real, what’s not and what comes next

Artificial intelligence is transforming every corner of business, from software development to customer experience, and cybersecurity is no exception. But with change comes confusion, and the rise of generative and agentic AI has sparked a new wave of myths, fears and misconceptions.

While AI can supercharge both attackers and defenders, it’s not inherently dangerous or magical. It’s a force multiplier that amplifies intent, for better or worse. Understanding the reality behind the hype is critical to building a resilient and responsible security posture in the age of intelligent systems.

Below are some of the most common myths about AI in cybersecurity, along with the truths that matter and the actions organizations can take to adapt.

Myth: AI is evil and taking over the world


Truth: AI is a tool, and its impact depends on who wields it

Like a hammer, AI can build or destroy. It’s all about intent. In the right hands, AI accelerates innovation, improves defense and boosts productivity. In the wrong hands, it enables more scalable, targeted and efficient attacks. The technology itself is neutral; the outcomes depend on how and why it’s used.

What you can do: Define authorized use cases and governance

Start with business intent. Define where and how AI will be used, who owns the outcomes, and what ethical or operational boundaries apply. Governance shouldn’t just manage risk — it should enable responsible experimentation. Align policies, guardrails and oversight with your organization’s goals, risk tolerance and security posture.

Myth: We know where the risks lie


Truth: The attack surface is expanding — fast

Every new LLM integration, whether via code, APIs or chat, creates another potential entry point. Emerging threats like prompt injection, over-permissive access and data leakage are evolving rapidly. Defenders must keep pace by expanding visibility, investing in expertise and building operational depth in AI security.

What you can do: Improve visibility and control

As one CISO put it, “You can’t protect what you can’t see.” Map your AI footprint continuously. What models do you use? What data can they access? Where do outputs go? Inventory AI systems, monitor for shadow integrations and enable logging to detect misuse or anomalies. Visibility isn’t optional. It’s foundational.

Myth: Our cyber insurance has us covered


Truth: AI is reshaping the economics of cybercrime

Cybercrime has evolved into a global business that weighs cost, reward and efficiency. Generative and agentic AI lower the cost of attack and increase the return on investment for criminals, leading to more frequent and sophisticated campaigns. The economic equation is shifting, and defenders must adapt accordingly.

What you can do: Reassess risk and readiness

Review your cyber insurance coverage and risk models through the lens of AI-driven threats. Confirm how AI-related incidents are classified and whether policy exclusions apply. Adjust your controls and investments to account for new types of risk and the faster pace of AI-enhanced attacks.

Myth: AI makes cyberattacks unstoppable


Truth: Defenders can still fight back

Attackers may use AI to scale their operations, but defenders are getting smarter, too. From faster threat detection to predictive analysis and automated response, AI is strengthening security capabilities across the board. The advantage goes to organizations that learn, adapt and integrate AI into their defensive playbooks.

What you can do: Integrate AI into core security operations

The same rigor applied to critical systems must now extend to AI tools and their supply chains. Treat AI as part of your living environment, not an add-on. Include AI systems in your penetration tests, risk reviews and incident response plans. Train employees on AI use policies and escalation paths.

Myth: Private LLMs are inherently more secure


Truth: Private doesn’t always mean safer

Owning your infrastructure can feel safer, but that’s not always true. The same debate played out during the rise of cloud computing. Today, major AI providers like Amazon, OpenAI, and Google invest billions to secure their platforms, far outpacing what most internal teams can achieve. Security isn’t about where your data sits; it’s about how well it’s protected.

What you can do: Evaluate based on capability, not control

Assess whether to host AI internally or use a third-party provider based on maturity, compliance and scale. Conduct due diligence on providers’ controls and certifications, and ensure contracts reflect your organization’s expectations for privacy and protection.


How is artificial intelligence changing cybersecurity today?

AI is transforming cybersecurity by automating threat detection, improving response times and enhancing predictive analytics. However, it is also empowering attackers by enabling faster, more targeted and scalable attacks. The result is an evolving cyber landscape where both offense and defense are increasingly AI-driven.

As we embrace more AI tools, what are the biggest cyber risks for our organization?

The main risks include expanded attack surfaces, data leakage, prompt injection vulnerabilities and lack of visibility into AI usage. Each new integration — especially of generative AI tools — introduces new entry points that require governance, monitoring and clear accountability.

How can we shore up our defenses against an AI-driven cyberattack?

The best approach is three-pronged:

  1. Inventory all existing AI/ML tools, integrations and data access, and add to the list as new tools are deployed or integrated. Enable logging, monitoring and alerting across all attack surfaces so you can react quickly to any unusual activity.
  2. Adopt AI-enabled defense tools and predictive detection. In addition to strengthening basic cyber hygiene and patching, you should consider all threats from an AI perspective and ask yourself, “If attackers were using AI, what might be different?” You need to be able to detect and respond to threats at machine speed.
  3. Review and rethink your policies around governance, risk and control as they relate to AI usage. Ensure contracts, insurance policies and internal use cases account for today’s AI realities.

For help getting started, reach out to our cybersecurity practice.

Can I leverage AI to make cyberattacks unstoppable?

The reality? Probably not. While defenders should leverage AI for faster detection, anomaly monitoring and automated mitigation, remember that attackers are also using AI to operate more efficiently. Your best bet is to integrate AI into your security operations and continuously update defenses to help stay ahead of AI-driven threats.

Are private large language models (LLMs) safer than public ones?

Not necessarily. Security depends less on where the model runs and more on how it’s managed. Major AI providers often invest far more in platform security than individual organizations. Due diligence, strong controls and clear contracts matter more than infrastructure ownership.

Will cyber professionals be replaced by AI?

It’s not likely. AI automates repetitive tasks and accelerates analysis, but it can’t replace human judgment, ethics and contextual decision-making. We predict that AI will create a greater need for skilled professionals who can manage and secure systems responsibly.