The Rise of Slopsquatting: A New Software Supply Chain Threat 

By Rahul Mehta
Product Marketing Analyst
June 12, 2025

As organizations increasingly embrace AI-powered coding tools to accelerate development and reduce engineering overhead, a new threat is emerging at the intersection of generative AI and open-source software (OSS): slopsquatting. This novel software supply chain vulnerability exploits hallucinated package names generated by AI models — a subtle but potent attack vector that thrives in AI-assisted development environments. This blog is for Security, DevOps, DevSecOps, and Engineering teams seeking to harness AI’s benefits without compromising software integrity. 

What is Slopsquatting?

Slopsquatting targets consistently hallucinated package names generated by large language models (LLMs). When developers rely on these models to write code, import statements for plausible-sounding but non-existent packages — like aws-helper-sdk or fastapi-middleware — can be silently introduced, making it easy for attackers to inject malicious payloads. In a large-scale analysis of 16 leading code-generating LLMs — covering over 576,000 AI-generated Python and JavaScript samples — researchers uncovered hundreds of thousands of fictitious package names. If registered by bad actors in public repositories, these hallucinations can serve as backdoors into your enterprise’s systems. Just one developer installing a malicious “slop” package could compromise your organization’s pipeline, production systems, or internal infrastructure.

  • 21.7% of packages generated by open-source LLMs were hallucinated.
  • 58% of hallucinated packages appeared repeatedly, making them easy and predictable targets.
  • AI is already responsible for generating 25% or more of new code at leading tech firms, and that number is growing rapidly.
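Because hallucinated names recur predictably, one practical mitigation is to screen AI-suggested dependencies against an internal allowlist before they ever reach an install command. The sketch below is illustrative only: the allowlist contents and the flagged package names (taken from the examples above) are hypothetical, and a real deployment would back this with a curated registry rather than a hard-coded set.

```python
# Minimal sketch: screen AI-suggested dependencies against an internal
# allowlist before `pip install`. The allowlist below is a hypothetical
# example, not a recommended set of packages.

APPROVED_PACKAGES = {"requests", "fastapi", "boto3", "numpy"}

def vet_dependencies(suggested):
    """Split suggested package names into approved and flagged lists."""
    approved, flagged = [], []
    for name in suggested:
        # Normalize the way package indexes do: lowercase, dashes.
        normalized = name.strip().lower().replace("_", "-")
        (approved if normalized in APPROVED_PACKAGES else flagged).append(normalized)
    return approved, flagged

# "aws-helper-sdk" and "fastapi-middleware" are the plausible-sounding
# hallucinations mentioned above; neither is on the allowlist.
ok, suspect = vet_dependencies(["requests", "aws-helper-sdk", "fastapi-middleware"])
```

A gate like this can run in CI or as a pre-commit hook, so a hallucinated name fails the build instead of reaching a developer workstation.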

How to Plan

Traditional OSS security practices like maintaining a Software Bill of Materials (SBOM), enforcing dependency scanning, and using vetted sources are foundational — but they don’t account for a new reality: Generative AI is now creating the vulnerabilities. This new slopsquatting threat highlights how hallucinated package names can bypass even rigorous OSS controls. Security and DevOps teams must now evolve their playbook, planning not just for prevention, but for containment and resilience in a post-exploitation world. This includes: 

  • Network Segmentation:

    Once inside, attackers rely on lateral movement. By isolating components and development environments into fine-grained network zones, you limit the blast radius of a breach. Microsegmentation ensures that a compromise in one component does not provide access to your broader production environment or critical assets, such as databases isolated in a DMZ. It’s one of the most effective ways to contain the damage from an unexpected slopsquatting payload.
  • Data Loss Prevention:

    Slopsquatting packages may be designed to quietly exfiltrate data once installed. Built-in DLP controls within a SASE platform can monitor how open-source components interact with sensitive assets — such as source code, credentials, and customer data — and block unauthorized transfers in real time. Security solutions that combine DLP with ZTNA and CASB capabilities can detect suspicious outbound flows, helping to contain the damage even if malicious code is already running.
  • Access Controls & Least Privilege:

    Enforcing strong access controls is essential for both preventing slopsquatting threats and limiting damage if exploitation occurs. Hallucinated packages often slip in during development or integration — stages where many users typically have broad permissions. To reduce this risk, organizations should both tighten user access and integrate authenticated, mature code & registry scanners into the development pipeline. Tightening access to OSS installation and deployment through Role-Based Access Controls (RBAC), combined with strict least privilege enforcement, ensures only essential personnel and systems can fetch or execute packages. If a malicious package makes it through, limited permissions can prevent privilege escalation and contain the threat before it spreads.
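One way to operationalize the access-control point above is to force all package fetches through an authenticated internal index, so hallucinated names that were never vetted simply fail to resolve. A minimal sketch of a pip configuration, assuming a hypothetical internal mirror at internal-pypi.example.com:

```ini
# Hypothetical pip.conf forcing installs through a curated internal
# index; the URL below is an example placeholder.
[global]
index-url = https://internal-pypi.example.com/simple/
# Deliberately no extra-index-url: that avoids silently falling back
# to the public registry, which is where slopsquatted packages live.
```

Combined with RBAC on who may publish to the internal index, this keeps unvetted names out of both development and deployment environments.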

Final Thoughts

Open-source software has always walked a fine line — fueling innovation and agility while simultaneously introducing risk. Generative AI amplifies both sides of that equation. Tools meant to accelerate development are now creating entirely new vulnerabilities, including hallucinated attack surfaces that legacy controls can’t detect. Security leaders must move beyond prevention and operate with the assumption that compromise is inevitable. 
