What Anthropic’s Leak Means For The Coming Wave Of ‘Dark Code’
Not all coding problems are AI-generated. On Tuesday, Anthropic said a leak triggered by a “human error” exposed 500,000 lines of source code underlying its marquee AI coding assistant Claude Code. The company said the leak didn’t affect sensitive customer data or its AI models, but it did reveal some details about how Anthropic built the tool’s user interface. By Wednesday, Anthropic had used copyright takedown requests to pull down 8,000 copies posted to GitHub.
It’s a bad look for Anthropic. Claude Code, which had reached $2.5 billion in run-rate revenue as of February, has helped the company gain an edge over its rivals and boost its business.
Worse, it’s Anthropic’s second data mishap of the week. On Monday, the company accidentally published a blog post announcing its next big model update: Claude Mythos, The Information reported.
Although the Claude Code incident was human error, such security lapses are hardly isolated. Data labeling startup Mercor confirmed that it was hit by a security incident linked to open source project LiteLLM, TechCrunch reported. Extortion group Lapsus$ claimed it had gained access to data labelers’ emails, phone numbers and resumes as well as source code, but it remains unclear whether that data was actually compromised. These types of attacks are bound to become more prevalent as AI agents write code and ship software, Sarah Guo, founder of AI-focused VC firm Conviction, wrote in a post on X.
Guo calls it “dark code.” Before AI, when all code was written slowly and deliberately by humans, programmers were forced to deeply understand the systems they were building. Now that AI agents write code at breakneck speed, no one fully understands it or the decisions the agent made, making it difficult to pinpoint why a data leak or security incident happened. Because coding agents select tools and execute plans in real time, documentation of their reasoning and the steps they take can disappear, leaving no reliable record for verifying the system is secure. Plus, thanks to AI, non-technical employees like product managers or marketers can produce complex software, bypassing traditional security checks.
“Shipping before you fully understand what you've built isn't a character flaw. Today, it's how you compete,” she wrote.
A gaggle of cybersecurity startups is jumping on the opportunity to defend systems against malicious threats. Many, like Depthfirst, which just raised $120 million in funding, are using agents to fight security vulnerabilities created by other AI agents.
Now let’s get into the headlines.
OpenAI closed its historic $122 billion funding round, which values the AI titan at $852 billion, making it the largest-ever private tech financing. The round includes backing from Amazon ($50 billion), Nvidia ($30 billion) and SoftBank ($30 billion), as well as individual investors through banks and global institutions. The company will also be included in publicly traded ETFs managed by Ark Invest, allowing retail investors to get exposure to OpenAI for the first time.
Some experts noted that much of that money appears to be returning to the companies that wrote the checks in one way or another. OpenAI plans to spend $100 billion over the next eight years on compute infrastructure from AWS. Its models run on Nvidia’s chips and infrastructure. For Amazon and Nvidia, “it is a way to convert their dormant cash pile into revenue, which Wall Street loves,” Dan Taylor, a professor of accounting at Wharton, wrote on X.
The ChatGPT maker is now generating $2 billion in revenue every month, a milestone it reached four times faster than either Meta or Alphabet. The company expects ChatGPT to grow to 1 billion weekly active users soon.
But that rapid growth has come at a cost. OpenAI has had to kill several products (most recently its popular video sharing app Sora) along the way. Here’s a full list of all the deals and products that haven’t materialized.
Oracle is laying off tens of thousands of workers, CNBC reported. The tech giant has faced investor pressure over the vast sums of debt it has raised to fund its AI investments, including building data centers for OpenAI, combined with dwindling cash flow. In February, Oracle said it plans to raise $45 billion to $50 billion in debt and equity in 2026. Cutting the jobs should free up as much as $10 billion in cash flow for its capital expenditures. The company’s shares have plunged 57% since reaching their peak of $345.72 last September, Forbes reported.
Why This AI Law Firm Is Ditching The Billable Hour
Ross Weiser isn’t like most lawyers. Rather than spending hours digging through documents and responding to client emails, on a typical day he oversees a swarm of AI agents. After they’ve combed through contracts and marked them up with comments and suggestions, he’ll review the agents’ work, addressing legal nuances they missed and making sure they haven’t made anything up. Then Weiser works with a team of AI researchers, suggesting tweaks to prompts and explaining why one AI-generated answer is better than another from a legal standpoint.
Six months ago, Weiser joined Crosby, an entirely new type of law firm where a suite of AI agents and 30 lawyers collaborate to speed up reviews of commercial contracts like services agreements, data processing agreements and NDAs. The gig is wildly different from Weiser’s previous job as an associate at storied law firm Sullivan & Cromwell. Though that firm rolled out ChatGPT to help with legal work, he found the chatbot unhelpful for complex tasks and frustrating to work with.
“It felt like with some more prompting, maybe I could get it to give me what I want, but I didn't have the time for that because you’ve got to bill your hours and you've got deadlines,” he tells Forbes.
Now there’s no need to bill hours at all. Instead, Crosby charges by the contract, which its AI agents can review in a matter of hours instead of days or weeks, with a human lawyer to do a final check. The idea is to align the firm's financial incentives with those of its clients: closing deals faster. “This is I think the most dramatic change for lawyers in a hundred years,” says CEO Ryan Daniels, a former lawyer who worked as in-house counsel for over a decade at multiple AI startups. He cofounded Crosby in September 2024 with ex-Ramp engineering manager John Sarihan.
The nascent law firm is already serving about 100 clients, including buzzy AI startups like Cursor, Clay, ListenLabs, Rogo and Cognition, as well as massive companies like real estate firm Tishman Speyer ($64.4 billion in assets under management). Its agents have reviewed 13,000 contracts to date, and revenue has grown about 400% since October.
At its core, Crosby is upending the way traditional law firms work. Under the prevailing hourly billing model, lawyers tediously track their work in six-minute increments and send multiple invoices for each contract, a setup Daniels calls “taxing.” Instead, Crosby charges anywhere from $250 to $1,000 per contract, roughly $10 to $50 per page depending on length.
Read the full story on Forbes.
Entrepreneur Matt Cortland created AI agents using synthetic voice startup ElevenLabs, then directed them to call 3,000 bartenders across Ireland to ask the cost of a pint of Guinness, Fortune reported. He then turned to Claude Code to build “Guinndex,” a real-time consumer price index for a pint of the stout across the island. Now Cortland can compare his local pub’s €7.80 ($8.93) pint against others sold across the country.
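The article doesn’t describe how Guinndex works under the hood, but once the agents’ phone-call results are collected, the index itself is simple arithmetic. Here’s a minimal, hypothetical sketch; every pub name and price below is invented for illustration:

```python
from statistics import median

# Hypothetical sketch: the article doesn't describe Guinndex's implementation.
# All pub names and prices below are invented for illustration.

def pint_index(prices: dict[str, float]) -> dict[str, float]:
    """Summarize collected pint prices (in euros) into a simple index."""
    values = sorted(prices.values())
    return {"min": values[0], "median": median(values), "max": values[-1]}

def compare_to_index(local_price: float, prices: dict[str, float]) -> float:
    """Percent difference between a local pub's pint and the national median."""
    national_median = median(prices.values())
    return round((local_price - national_median) / national_median * 100, 1)

sample = {"The Long Hall": 6.50, "Mulligan's": 7.10, "Temple Bar": 9.50}
stats = pint_index(sample)                # {'min': 6.5, 'median': 7.1, 'max': 9.5}
premium = compare_to_index(7.80, sample)  # 9.9 (% above the median pint)
```

In practice, the hard part isn’t this summary step but extracting clean prices from thousands of voice-call transcripts, which the sketch skips entirely.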
