👤 SECPULSE
🗓️ 26 Mar 2026  

AI’s Risky Recommendations: How Smart Models Are Fueling a Silent Software Security Crisis

New research reveals advanced AI models routinely invent or overlook critical software vulnerabilities, quietly embedding risk into modern development.

It was supposed to be the dawn of safer, smarter software development: powerful AI copilots guiding developers through the labyrinth of open-source dependencies, patching vulnerabilities before they could become tomorrow’s headlines. But as organizations rush to automate their supply chain decisions, a new investigation reveals a troubling truth - these same AI models are quietly introducing and ignoring security bugs, leaving the door wide open for attackers.

Invented Solutions, Real-World Consequences

Sonatype’s sweeping study, spanning over a quarter-million dependency upgrade recommendations from seven leading AI models, exposes a systemic flaw at the heart of modern DevSecOps. AI models - from OpenAI’s GPT-5 to Anthropic’s latest Claude and Google’s Gemini - frequently hallucinate, suggesting software upgrades, patches, or fixes that simply don’t exist. In nearly one out of every four cases, the AI’s advice was pure fiction.

But the problem runs deeper than imaginary fixes. Even when models tried to play it safe - recommending "no change" for roughly a third of components - they left hundreds of critical vulnerabilities unaddressed. In some instances, the AI actively suggested updates that would introduce known vulnerabilities, in some cases into the very AI infrastructure used to train and deploy these models.

Why AI Gets It Wrong

Contrary to popular belief, the issue isn’t that AI isn’t smart enough. In fact, the reasoning abilities of frontier models have improved. The real problem is a lack of “ecosystem intelligence” - up-to-date, contextual information about actual vulnerabilities, compatibility issues, and organizational policies. Without real-time data, even the most sophisticated model can only make educated guesses, which often miss the mark.
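The simplest form of that missing ecosystem intelligence is an existence check: does the version the model suggested actually exist? A minimal sketch, with `KNOWN_RELEASES` as an illustrative stand-in for a live package index (in practice this would be a registry query, not hard-coded data):

```python
# Sketch: flag hallucinated upgrade suggestions by checking them against
# a registry snapshot. KNOWN_RELEASES is illustrative stand-in data; a
# real implementation would query a live package index.

KNOWN_RELEASES = {
    "requests": {"2.31.0", "2.32.3"},
    "urllib3": {"1.26.18", "2.2.1"},
}

def is_hallucinated(package: str, version: str) -> bool:
    """Return True if the suggested version does not exist in the registry."""
    return version not in KNOWN_RELEASES.get(package, set())

# An AI-suggested "patch" to a version that was never published:
print(is_hallucinated("requests", "2.99.0"))  # True: no such release
print(is_hallucinated("requests", "2.32.3"))  # False: real release
```

Without real-time registry data backing a check like this, a confident-sounding but fictional version number sails straight into the build.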

Sonatype’s CTO, Brian Fox, warns that the most insidious risk isn’t the obviously broken advice, but the plausible-sounding recommendations that quietly embed technical debt and unresolved vulnerabilities into production code. These subtle errors are harder to catch and become normalized within development workflows.

Grounding AI: A Path Forward

The good news? There’s a fix - “grounding” AI models with live, contextual intelligence. Sonatype’s own hybrid approach, which augments AI with real-time registry data and vulnerability insights, reduced critical and high-severity risks by 70%. Even equipping a basic AI model with access to up-to-date recommendation APIs dramatically improved its security performance.
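Conceptually, a grounded pipeline vets each AI suggestion against live intelligence before accepting it. A hedged sketch of that gate, using hypothetical stand-in data rather than real feeds (the `log4j-core` versions are real, but the lookup tables here are illustrative, not Sonatype's actual approach):

```python
# Sketch of "grounding": validate an AI-suggested upgrade against
# registry and advisory intelligence before accepting it. REGISTRY and
# VULNERABLE are illustrative stand-ins for real-time data sources.

REGISTRY = {
    "log4j-core": {"2.14.1", "2.17.2", "2.23.1"},
}
# (package, version) pairs with known critical advisories:
VULNERABLE = {
    ("log4j-core", "2.14.1"),
}

def vet_suggestion(package: str, version: str) -> str:
    """Accept a suggested upgrade only if it exists and is not known-vulnerable."""
    if version not in REGISTRY.get(package, set()):
        return "reject: version does not exist (hallucination)"
    if (package, version) in VULNERABLE:
        return "reject: version has a known critical vulnerability"
    return "accept"

print(vet_suggestion("log4j-core", "2.25.0"))  # hallucinated version
print(vet_suggestion("log4j-core", "2.14.1"))  # known-vulnerable version
print(vet_suggestion("log4j-core", "2.17.2"))  # accept
```

The point of the study's 70% figure is precisely this kind of gate: the model proposes, but live ecosystem data disposes.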

However, putting a human in the loop isn’t enough. As Fox notes, “Humans should set policy and constraints. The system still needs to be grounded in real-time software intelligence.” Without this, organizations may be automating technical debt at scale, trading speed for silent exposure.

Conclusion

As AI becomes the silent hand guiding software supply chains, its unseen mistakes threaten to become the new normal. The promise of AI-powered security will only be realized if models are grounded in the messy, ever-changing realities of the software ecosystem - not just clever algorithms, but intelligence with context.

WIKICROOK

  • Dependency: A dependency is external code or software a project relies on; if compromised, it can introduce vulnerabilities to all dependent projects.
  • Hallucination: Hallucination occurs when AI generates false or misleading information that sounds convincing, often due to gaps in its data or understanding.
  • Technical Debt: Technical debt is the growing cost and risk from using outdated or quick-fix technology, making future changes harder and more expensive.
  • Grounding: Grounding is supplementing AI responses with external, current info to ensure accuracy and relevance, especially in fast-changing areas like cybersecurity.
  • Supply Chain Attack: A supply chain attack is a cyberattack that compromises trusted software or hardware providers, spreading malware or vulnerabilities to many organizations at once.
AI Security Software Vulnerabilities Technical Debt

SECPULSE
SOC Detection Lead