June 25, 2025 | Dark Reading
Generative AI Exacerbates Software Supply Chain Risks
Malicious actors are exploiting AI-fabricated software components — presenting a major challenge for securing software supply chains.
Generative artificial intelligence (GenAI) might be good at drafting business emails, but it is dangerously bad at writing software code. Malicious actors are exploiting AI-fabricated software components, which presents a major challenge for securing software supply chains. New transparency requirements are urgently needed to address this issue.
Despite its virtues, GenAI is prone to “hallucinations,” in which models produce plausible-sounding but inaccurate or misleading information. A recent study by university researchers found that 16 different large language models (LLMs), the AI systems that absorb large volumes of written content and learn patterns in how words and phrases relate in order to predict what text will follow, all confidently suggest software components that do not exist. Nefarious actors take advantage of these hallucinations by publishing malware under names that match the fake AI-generated components, so that unsuspecting developers download malicious code.
Like many busy professionals, software developers are increasingly relying on AI assistants in their workflows. Too often, however, they accept AI-suggested components without verifying them against the official documentation published by the open source projects themselves. Where developers once consulted detailed guides to understand and safely implement packages, many now skip this step, at the risk of introducing serious flaws into their software.
Since AI hallucinations are predictable, malicious actors have a low-effort, high-reward opportunity to pass off their packages as legitimate. First, the attacker repeatedly prompts AI models with common coding questions to see which package names the models tend to hallucinate. Once they identify a nonexistent package name that the AI frequently recommends, the attacker registers a malicious package under that exact name on a public repository, one of the websites that host free, downloadable code for developers.
When an unsuspecting software developer receives the AI’s suggestion, they will find a piece of software matching the name the AI recommended and install it without considering that it might contain malware or hidden vulnerabilities.
These packages could contain credential-stealing malware that gives attackers access to users’ logins. The packages could also contain cryptomining software to run on a company’s servers and make money for the attackers. The packages could even provide a nation-state actor persistent access for long-term espionage.
Malicious Actor Attacks Increasing
In the past year alone, malicious actors have uploaded hundreds of thousands of malicious packages to open source software repositories. This exploitation of the open source ecosystem, combined with the frequency of hallucinations and the confidence with which LLMs present their suggestions, increases the risk that developers will fall victim to this scheme. This threatens the integrity of local and global software supply chains.
These supply chains are vulnerable to this type of corruption because transparency measures and vetting mechanisms are, at best, inconsistently implemented. For example, only 20% of organizations use software bills of materials (SBOMs) to itemize the components in their software. Without an ingredient list, organizations have limited visibility into the composition and security of their software.
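For readers unfamiliar with the format, the short Python sketch below shows roughly what such an ingredient list looks like: it reads an SBOM written in the CycloneDX JSON layout (one widely used SBOM format) and prints each itemized component. The sample components, versions, and package URLs are illustrative placeholders, not data from any real product.

import json

# A tiny inline SBOM standing in for a real file such as sbom.json.
# Field names follow the CycloneDX JSON layout; the entries are illustrative.
SAMPLE_SBOM = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "requests", "version": "2.32.3",
     "purl": "pkg:pypi/requests@2.32.3"},
    {"type": "library", "name": "flask", "version": "3.0.3",
     "purl": "pkg:pypi/flask@3.0.3"}
  ]
}
"""

def list_components(sbom_text: str) -> None:
    # Walk the SBOM and print each ingredient of the software product.
    sbom = json.loads(sbom_text)
    for component in sbom.get("components", []):
        print(f"{component['name']} {component['version']} "
              f"({component.get('purl', 'no purl recorded')})")

if __name__ == "__main__":
    list_components(SAMPLE_SBOM)

Even a list this simple gives defenders something to search when a malicious package name surfaces; without it, answering "do we ship this component?" means auditing every build by hand.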
Efforts to encourage or require the use of SBOMs and vulnerability disclosures are valuable but do not solve the whole problem. Neither of these tools accounts for how AI models themselves can introduce vulnerabilities. These tools provide transparency into part of the software supply chain, but they do not assess the trustworthiness, training-data provenance, or risk profiles of AI models.
Indeed, organizations that use AI often lack automated systems to validate AI-generated recommendations, leaving developers to verify package authenticity by hand, an error-prone step that busy developers can easily skip.
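As a minimal sketch of what such automated validation could look like, the Python script below checks an AI-suggested package name against PyPI's public JSON API before anything is installed: a name that does not resolve is likely hallucinated, and a name that resolves but is very new or has few releases deserves extra scrutiny. The thresholds and example package names are illustrative assumptions, not recommended values.

import json
import urllib.error
import urllib.request
from datetime import datetime

MIN_AGE_DAYS = 90   # illustrative threshold: very new packages deserve extra scrutiny
MIN_RELEASES = 3    # illustrative threshold: single-release packages are a weak signal

def check_package(name: str) -> str:
    # Query PyPI's JSON API for metadata about the suggested package.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"'{name}' does not exist on PyPI -- likely hallucinated"
        raise

    # Collect upload timestamps across all released files.
    uploads = [
        f["upload_time"]
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        return f"'{name}' exists but has no uploaded files -- treat as suspect"

    first_upload = min(datetime.fromisoformat(t) for t in uploads)
    age_days = (datetime.now() - first_upload).days
    release_count = len(data.get("releases", {}))

    if age_days < MIN_AGE_DAYS or release_count < MIN_RELEASES:
        return (f"'{name}' exists but is only {age_days} days old with "
                f"{release_count} release(s) -- verify against official docs")
    return f"'{name}' looks established ({release_count} releases, {age_days} days old)"

if __name__ == "__main__":
    # Hypothetical AI-suggested dependency names to vet before running pip install.
    for suggestion in ["requests", "definitely-not-a-real-package-xyz"]:
        print(check_package(suggestion))

A check like this does not prove a package is safe, but wiring it into continuous integration or a pre-install hook removes the burden of remembering to verify every AI suggestion manually.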
Clear Requirements for AI Transparency Needed
To address this issue, the software development ecosystem needs clear requirements for AI transparency and a dedicated risk-disclosure framework. This framework should document key attributes such as training data sources, model versions, known limitations, and security features. Existing guidance from the National Institute of Standards and Technology (NIST) on supply chain risk management and secure software development offers a valuable model. It can inform the creation of new standards for AI provenance, traceability, tamper resistance, and version control.
Similarly, just as NIST frameworks define roles and responsibilities across developers, integrators, and users in the software supply chain, AI governance should establish distinct responsibilities for AI model creators, deployers, end users, and the organizations that oversee these systems. When issues arise, this would enable clearer attribution of accountability — making it easier for federal agencies, organizations, and individuals to identify where lapses in transparency or oversight occurred. This mirrors the governance principles used in securing the software supply chain.
Recent research also suggests that AI hallucinations often follow model-specific patterns, and that with the right prompts or internal checks, models can potentially identify and filter out their own faulty outputs. Over time, this self-monitoring ability could help models improve their reliability. Encouraging the development of self-regulating AI while simultaneously enforcing transparency standards would strike a balanced approach, combining carrots and sticks to help restore trust in AI systems.
Taken together, these measures would go far toward securing software supply chains, better enabling developers to embrace the promise of generative AI while carefully navigating its risks.
Dr. Georgianna “George” Shea serves as chief technologist of the Transformative Cyber Innovation Lab (TCIL) at the Foundation for Defense of Democracies (FDD). Elaine Ly is an intern for the Center on Cyber and Technology Innovation at FDD.