These Hackers Launched A ‘Nightmare’ Attack On AI Developers
On Tuesday, the FBI’s Cyber Division issued a critical alert that a hacker crew had breached two hugely popular developer tools, creating a security disaster for millions of AI creators. Now the crew, known as TeamPCP, tells Forbes over encrypted chat that it used AI to turbocharge the attacks. It’s an early example of how tools that are supposed to secure AI software are themselves vulnerable to hackers who are speeding up and enhancing their attacks with AI.
“It’s a nightmare scenario for the cyber community, not just in the case of open source, but the rise of AI agents has made one of the most expensive parts of offensive cyber much cheaper than it used to be,” says Ben Hirschberg, CTO and cofounder at Israeli cybersecurity company Armo.
TeamPCP’s first major target was Trivy, a popular open source security scanner used by as many as 10,000 companies to look for weaknesses in their software before it’s released. TeamPCP used an AI agent to trick the security tool into handing over a key to its GitHub account. Then the hackers used that access to release malicious versions of Trivy.
Aqua, the $1 billion cybersecurity company that manages the Trivy open source project, said in security advisories that it was investigating and improving access controls. The commercial versions of Trivy, used by paying Aqua customers, were developed outside of GitHub and were unaffected.
The Trivy hack led to TeamPCP’s second scalp. One Trivy user was LiteLLM, an open source AI “proxy” or “gateway” which allows app makers to easily incorporate multiple large language models like GPT-5 or Claude into one piece of software; it’s been downloaded by 95 million users. The hackers told Forbes that they found the keys to LiteLLM’s code publishing platform, which they used to release infected versions to the general public.
The breaches could have been more damaging if a user hadn’t quickly discovered the LiteLLM hack, which made their computer crash. LiteLLM’s leadership is now in clean-up mode, and has brought in Google’s Mandiant division to investigate the breach. “Our technical teams are working with the utmost urgency to secure our infrastructure and ensure the continued protection of our community,” says Ishaan Jaffer, LiteLLM’s CTO and cofounder. The startup is backed by Y Combinator and has raised $2 million since its founding in 2023.
Forbes verified it was talking to a spokesperson for the group after they posted a direct message from TeamPCP’s X account. They then posted a blog on the group’s dark web site containing a unique string of characters shortly after telling Forbes they would do so. Both the X account and website had previously been linked on the hackers’ Telegram page, further confirming the spokesperson as a TeamPCP member.
The spokesperson, using the name T00001B, says TeamPCP is a loose-knit collective of teenagers and young adults who couldn’t find paying work, so they turned to cybercrime. To make money, they sell access to victims’ networks. The buyers can then either launch ransomware attacks or steal information that they can later monetize. TeamPCP also sometimes takes a cut of any ransom, or extorts a company directly.
T00001B declined to name what tool the group had used to identify the vulnerability in Trivy. But they confirmed they'd used Anthropic’s Claude to build some components that helped the malware spread across infected systems. They didn’t use Claude for finding additional vulnerabilities, though Anthropic’s AI has become increasingly adept at doing that.
Cybercrime groups are increasingly turning to AI to help them code up attacks. In recent months, Bloomberg reported that hackers used Claude to breach Mexican government agencies. In November Anthropic said Chinese spies had used its AI for a “large-scale cyberattack executed without substantial human intervention.”
But such AI-generated hacks often work because of an age-old problem: poor security. T00001B says many AI developers appeared to be blindly downloading tools like LiteLLM, believing that the open source community had made them safe. “This kind of blew my mind,” they said, adding that any well-funded company should build their own features and not rush to use others’ code to get a product out quickly.
“This attack wasn’t highly sophisticated at all but it was initially effective for this reason,” T00001B says. “Nobody expected this to snowball as hard as it did.”
Ben Read, a cyber researcher at Wiz (which was just acquired by Google for $32 billion), says the hacks are classic examples of a supply chain cyberattack, in which hackers breach one company and infect its tools so they can hack an entire customer base. “This is the supply chain, compromises are not uncommon,” Read said. “You need to have a playbook for when this happens.” He said companies should make sure they aren’t leaking secrets like keys, and audit every version of software before releasing it.
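The checks Read describes can be automated before each release. Below is a minimal, illustrative sketch of a pre-publish secret scan in Python; the rule names and regexes are simplified examples, not a complete rule set (production scanners such as Gitleaks or TruffleHog maintain far larger ones):

```python
import re

# Illustrative patterns only; real scanners ship much larger,
# regularly updated rule sets.
SECRET_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for each suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    # A fabricated token shaped like a GitHub personal access token.
    sample = "token = 'ghp_" + "a" * 36 + "'"
    print(find_secrets(sample))
```

Run against every file in a release artifact, a failing (non-empty) result would block the publish step in CI, which is one way to keep a leaked publishing key from ever reaching a public registry.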
The group has also made headlines for another piece of malware, one that wipes any system it infects that’s located in Iran. T00001B says the hack was partly to test its malicious code—though much of Iran is without internet. It was, they added, “also for the lulz.”
