The Pentagon’s AI Contract Scuffle Exposed A Danger To Businesses
Since President Donald Trump’s second term began, he’s had a tight relationship with Big Tech. At his inauguration, tech titans including Mark Zuckerberg, Jeff Bezos, Elon Musk and Google CEO Sundar Pichai sat behind him. Trump has traveled internationally with Nvidia’s Jensen Huang, OpenAI’s Sam Altman, Apple CEO Tim Cook and Microsoft CEO Satya Nadella. And Amazon, Apple, Meta, Microsoft, Google and Nvidia are all donors to Trump’s planned $300 million White House ballroom.
Meanwhile, many of Trump’s actions have been controversial since the first day of his second term. Few companies have stood up to policies they might not agree with—including immigration crackdowns, the elimination of DEI programs, tariffs, major changes to U.S. energy strategy and military action. So when Anthropic—a major player in the growing AI space—issued red lines it would not cross for a defense contract, including limits on how the military could use its AI, people throughout the business and tech community noticed.
Most AI contracts are not nearly as consequential as the one Anthropic was negotiating with the Pentagon, but any new AI project can fail if a company’s data is not prepared. Many companies dive into AI projects without a data foundation that can handle them. I spoke with Mike Meyer, CIO of Clari + Salesloft, about how to get going on AI—even if there’s also data work you need to do at the same time. An excerpt from our conversation appears later in this newsletter.
This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.
The past week saw the expansion of two wars. One was the U.S. and Israel beginning armed conflict against Iran. The other was between the federal government, Anthropic and OpenAI.
The Pentagon sought unrestricted access to Anthropic’s AI tools for military purposes. Anthropic demurred, saying it didn’t want its models to be used for mass domestic surveillance or fully autonomous weapons. Nearly 800 people—many of whom are Google and OpenAI employees—signed a letter last week supporting Anthropic’s red lines, and urging other companies to limit how their technology can be used for military purposes.
The deadline for Anthropic and the Defense Department to finalize their negotiated agreement was 5:01 p.m. Friday. Absent a deal, the government could either cancel its $200 million contract with Anthropic or commandeer the company’s technology against its will. No agreement was reached, and Defense Secretary Pete Hegseth said he would deem Anthropic a supply chain risk to national security, meaning anyone doing business with the U.S. military will be barred from using Anthropic’s technology.
Hours later, OpenAI reached an agreement with the Pentagon to provide AI for classified military systems. OpenAI CEO Sam Altman said in a social media post that the company secured prohibitions on domestic mass surveillance and autonomous weapons systems in its agreement. But he quickly walked that back, writes Forbes’ Thomas Brewster. AI and policy experts have pointed out that OpenAI’s arrangement—which allows its AI to be used for lawful purposes—has huge privacy loopholes. The Pentagon’s interpretation of some security and surveillance laws has enabled the much-criticized bulk collection of U.S. citizens’ information, and the OpenAI agreement did not appear to prohibit using AI on that data and other information about Americans—one of Anthropic’s red lines.
The way the deal played out has tilted the fortunes of the two AI companies for the time being. Anthropic’s Claude has overtaken OpenAI’s ChatGPT as the most-downloaded app in Apple’s App Store since the weekend. Earlier this week, Altman told OpenAI employees that he stands by the deal but feels “terrible for subjecting” them to the fallout. Meanwhile, Anthropic cofounder and CEO Dario Amodei has been lauded as one of the very few leaders in the business and tech sector who stood up to the Trump Administration, writes Forbes’ Richard Nieva.
Forbes senior contributor Michael Posner writes that this episode exposes a dangerous risk for any U.S. business. If a business disagrees with any aspect of the Trump Administration’s plans, the government’s actions toward Anthropic show it can not only cancel the company’s contracts but also threaten it with further sanctions, like deeming it a national security risk. However, the business and praise Anthropic is receiving in response to its decision may prove that defying Trump can be worth the risk.
Last week, before the news of the Pentagon contract, OpenAI announced a $110 billion funding round, the largest private capital raise for any company in history. The round values the company at $730 billion, and includes money from notable AI companies: $50 billion from Amazon and $30 billion from Nvidia.
While on the surface this fundraising signals optimism about OpenAI, a bigger issue lies just beneath it. The funding is really a set of long-term supply contracts, writes Forbes contributor Renana Ashkenazi—a supply chain deal structured as a venture round.
For Amazon, OpenAI will expand its deal with AWS, giving the web services provider exclusive third-party distribution rights for OpenAI Frontier, the company’s enterprise agent platform. OpenAI is also committing to buying 2 gigawatts’ worth of Amazon’s custom Trainium chips. Forbes senior contributor Janakiram MSV writes that Amazon has funding deals with other AI companies, so this one deal doesn’t do much toward minting “winners” in AI—though deals to run different AI tools on different platforms could create contracting headaches for CIOs.
The funding seems to be circular, MSV writes: Amazon invests $50 billion in OpenAI, which in turn commits to spend $138 billion on Amazon chips and hardware over eight years. The same goes for Nvidia, which received a commitment from OpenAI to use its data centers—at an estimated cost of $35 billion per gigawatt of capacity. Where, then, does any company come out ahead? Ashkenazi writes that this funding round is essentially a closed loop.
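The circularity is easiest to see as rough arithmetic. A minimal sketch using the figures reported above—treating them as simple offsetting cash flows is an illustrative assumption, not disclosed accounting:

```python
# Rough sketch of the "closed loop" money flows described above.
# Figures come from the reporting; netting them against each other
# is an illustrative assumption, not any company's actual accounting.

amazon_investment = 50    # $B Amazon puts into OpenAI's round
amazon_commitment = 138   # $B OpenAI commits to Amazon chips/hardware over 8 years

nvidia_investment = 30    # $B Nvidia puts into the round
cost_per_gw = 35          # $B estimated cost per gigawatt of data center capacity

# OpenAI is committed to send Amazon far more than Amazon invested:
net_to_amazon = amazon_commitment - amazon_investment
print(f"Net committed to Amazon beyond its investment: ${net_to_amazon}B")  # $88B

# Even a single gigawatt of Nvidia-backed capacity costs more than
# Nvidia's entire investment in the round:
print(cost_per_gw > nvidia_investment)  # True
```

On these (assumed) nettings, the investments function less like capital and more like a discount on future supply contracts, which is Ashkenazi’s point.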
This week, Apple announced a new MacBook model with a much lower price tag. The MacBook Neo will cost $599—a steep drop from the second-cheapest model, the $1,099 MacBook Air. The Neo has a 13-inch liquid retina display, up to 16 hours of battery life, a 1080p FaceTime HD camera, storage capacity of 256 GB or more, and “AI capabilities.”
So how did Apple get the price down so far? There are a few specs the company cut back on, writes Forbes senior contributor David Phelan. It starts with the processor and RAM: the new computer runs on the A18 Pro chip, which also powers the iPhone 16, and has just 8GB of RAM, half the 16GB on regular MacBooks. There are some physical changes as well. The trackpad lacks haptic feedback, the laptop charges through a regular USB-C connector, there is no Touch ID sensor on the keyboard to unlock the computer, the keyboard is not backlit, and the screen is about a half-inch smaller than the standard MacBook’s.
One thing the Neo has that other MacBooks don’t: a more diverse color palette, available in silver, blush, citrus and indigo.
How To Get An AI Project Moving When The Foundation Still Needs Work
Companies are trying to move forward with AI initiatives as quickly as possible, but many of them aren’t quite ready. Mike Meyer, CIO of AI revenue software provider Clari + Salesloft, said that while just over half of companies think their data and infrastructure are AI-ready, he believes only about half of those actually are.
I talked to him about how a CIO can get through this disconnect. This conversation has been edited for length, clarity and continuity.
Can a company that is already moving forward with AI—and realizing that there are things that need to be fixed on the data layer—make those fixes and continue progress?
Meyer: It’s like a classic engineering or product mindset. If you think about the Agile framework, we don’t need to boil the ocean with our first release of an AI initiative. What we need to do is ensure that we’re solving pain at each stage of the journey and providing something valuable to our end users.
V1 of an AI initiative may be a single use case served: You get some lessons learned, gather some data about the usage of the product, make some minor enhancements. Then V2 becomes the next iteration, where you’re addressing some of the areas where AI wasn’t possible before because the data or the process wasn’t in place for it. Now it’s possible.
How can a company solve these problems?
The mantra that I have is: Don’t say that AI needs to happen. We need to figure out what problem you’re experiencing: What is the pain? What’s the metric that you’re missing? What levers do you not have in place to be able to move your ability to hit that KPI? Define that first. The answer may not be AI.
If it is AI, then depending on the business problem, there are quick wins that can be implemented to solve some of those problems.
It starts with having a viewpoint on the problem and the pain and then defining the solution from there, versus finding a shiny solution that doesn’t really solve the problem.
How can a CIO who is intimately familiar with the data and foundational issues that need to be corrected temper expectations among other executives for quick ROI?
You really have to, as a CIO, spend a lot of time with your executive partners to understand what are their goals, what are they trying to achieve? If you come to the table and say, ‘Hey, this isn’t possible, we just have to wait a year,’ that’s not going to be a very popular opinion.
What you have to be able to do is offer up some of the interim solutions of how we could improve the data model or improve whatever the sort of underlying issue is, make those iterative improvements and get us closer to the end goal.
What does a CIO need to do to make sure that problems are addressed and fixed one time so everything can be built on top of them?
Some of it is cultural. The way that you build your CRM, your motion around how you manage data, all of those things need to be tightly controlled. You’ve got to have good version control, and then you’ve got to have the right tooling. There are tools out there that can provide the quick wins that you need to be able to say, ‘Maybe we don’t have the data team or the revenue operations team to manage this, but we’ve got this tool that’s going to be our engine to make sure that we have clean data forever.’
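As a concrete illustration of the kind of automated “clean data” guardrail Meyer describes, here is a minimal, hypothetical validation gate over CRM-style records. The field names and rules are assumptions made for the sketch, not a reference to any specific tool:

```python
# Minimal sketch of an automated data-quality gate for CRM records.
# Field names and rules are hypothetical; real tooling (or a data team)
# would version-control and enforce rules like these centrally.
import re

RULES = {
    "email": lambda v: bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", v or "")),
    "account_id": lambda v: bool(v),  # required, non-empty
    "deal_stage": lambda v: v in {"prospect", "qualified", "closed_won", "closed_lost"},
}

def validate(record: dict) -> list[str]:
    """Return the names of fields that fail their rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

# A record with a malformed email fails on exactly that field:
bad = validate({"email": "no-at-sign", "account_id": "A-42", "deal_stage": "qualified"})
print(bad)  # ['email']
```

Running a gate like this on every write is one way to make a data fix stick, rather than cleaning the same records again before each new AI release.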
Banking and retail technology company Diebold Nixdorf hired Andy Zosel as its new executive vice president and chief product and technology officer, effective March 3. Zosel joins the firm from Zebra Technologies where he worked as senior vice president and general manager of intelligent automation, and he fills a newly created role.
Financial data and software firm FactSet appointed Kate Stepp as chief AI officer and Bob Stolte as chief technology officer, effective March 2. Stepp previously worked in the CTO role, and joined FactSet in 2022. Stolte has worked in leadership roles for Citi and JPMorgan Chase.
Asset management firm Victory Capital selected Molly Weiss to be its new chief technology officer and head of digital innovation. Weiss most recently worked at Envestnet Financial Technology where she was group president of wealth platforms.
There is a big difference between alignment and agreement. When people are aligned, they are on the same page about the overall direction of a decision or action, but are comfortable challenging it to find the best possible solution. Here’s how to show that you value alignment above agreement.
Giving feedback to employees is an important part of leadership, but some phrases that sound good can backfire. Here are seven things you shouldn’t say—and suggestions for what to tell employees instead.
Which university sold a regional campus to Amazon Data Services for $427 million last week?
A. Harvard University
B. Fordham University
C. Stanford University
D. George Washington University
See if you got the answer right here.
