27 Apr 2026 · 6 minute read

As SpaceX deal looms, Cursor partners with Chainguard to secure open-source dependencies in AI-built code

Cursor has spent the past week in headlines after confirming a partnership with SpaceX that could eventually lead to a $60 billion acquisition. The deal, for now, centres on training more capable coding models using SpaceX’s compute infrastructure.
Alongside that push on model performance, however, Cursor is now addressing a separate issue: the reliability of the code those models produce.
Cursor has partnered with Chainguard, which provides verified open-source packages, to route dependencies through its curated repositories, aiming to reduce the risk of compromised components entering AI-built applications.
The announcement lands as AI coding tools push more software into production with less human review, raising questions about how much of that code can be trusted.
Supply chain risks in the agentic era
The partnership addresses a problem developers know all too well. Modern applications depend heavily on open-source libraries and container images, most of which are pulled from public registries such as npm, PyPI, and Docker Hub.
Those registries operate on openness, with limited checks in place. Developers — and now AI agents — often install dependencies without knowing who built them or whether they have been tampered with.
Recent incidents have underlined the risk. In March, projects such as Trivy, LiteLLM, Telnyx, and Axios were compromised, with attackers using poisoned packages to steal credentials and spread malware.
For teams using AI-generated code, the exposure increases. Agents can select and install dependencies automatically, making trust decisions at a pace that outstrips manual review.
As Chainguard co-founder and CEO Dan Lorenc put it, generating code is becoming routine — checking its integrity is where the pressure now sits.
“AI agents are making dependency decisions at a scale and speed no security team can manually review,” he wrote in a blog post. “As organizations adopt agentic development, the biggest blocker is no longer how fast code can be generated – it’s whether that code can be trusted.”
A curated path for dependencies
Under the partnership, Cursor users can pull libraries and container images from Chainguard’s repository instead of public registries. The company says its catalogue includes millions of vetted library versions across Python, JavaScript, and Java, along with thousands of minimal container images.
The filtering process is strict. Chainguard builds packages only from publicly available source code and avoids components that rely on install-time scripts — a common vector for hidden payloads. If a package cannot be traced back to a verifiable source, it doesn’t make the cut.
The goal is to narrow the attack surface without changing how developers work. Projects can be migrated through a simple prompt inside Cursor, after which dependencies are swapped out behind the scenes.
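Mechanically, swapping dependency sources usually amounts to repointing the package manager's default registry at the curated mirror. A minimal sketch, assuming standard npm and pip configuration files; the URLs are placeholders, not real Chainguard endpoints:

```ini
# .npmrc — route npm installs through a curated mirror instead of the public registry
registry=https://registry.curated.example/

# pip.conf — likewise for Python packages
[global]
index-url = https://pypi.curated.example/simple/
```

Because the change sits in configuration rather than in application code, existing `npm install` and `pip install` workflows continue to work unchanged, which is what allows the swap to happen "behind the scenes".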
“Recent supply chain attacks showcased how bad actors are working to manipulate the public tools and registries we’ve historically relied on to consume open source,” said Brian McCarthy, a senior executive at Cursor. “With agents writing the majority of code at top businesses around the world, new tools to help ensure the code is trusted and the ability to review and monitor at speed creates a safer paradigm.”
Why this matters for AI-built software
The partnership reflects a broader industry-wide shift in how software is produced and protected. AI coding tools are no longer limited to suggesting snippets; they are assembling full applications, including the dependencies those applications rely on.
That changes the risk profile considerably. The bottleneck is no longer writing code, but confirming that every component, including third-party packages, is safe to run in production.
Without stronger controls, a single compromised dependency can expose sensitive data or halt development while teams investigate and rotate credentials. Incidents tied to supply chain attacks can take days or weeks to unwind.
Other companies are approaching the same problem from different angles. Among them is Tessl, which recently introduced security scoring for open-source packages in its registry, using data from Snyk to help developers assess risk before pulling in dependencies.
By inserting a verification layer into the dependency pipeline, Chainguard and Cursor are trying to address that weak point directly. The approach doesn’t eliminate the risk entirely, but it narrows the range of unknowns by limiting what can enter a project in the first place.
For Cursor, the move also reflects the expectations of larger customers, particularly as it draws attention from companies such as SpaceX. As AI coding tools edge further into enterprise use, assurances around security are likely to carry as much weight as speed or capability.
More by Paul Sawers
