Your Biggest Cyber Risk Isn’t What You Think
Technology and the stock market seem to operate in separate universes. Looking at the stock market this week, one might think it’s a bad time for tech. Market indexes are all down, and tech is providing no relief. Nvidia’s stock is down 2.9% this week, and Microsoft, Google, Meta and Amazon are all trading at lower levels now than when markets opened on Monday.
The stock market masks the fact that Nvidia just projected $1 trillion in annual revenue this year—a development one might expect to excite investors across tech, considering that Nvidia’s primary product is the infrastructure underlying most AI. And even without the ongoing war in Iran, the skyrocketing gas prices it brings, and dispiriting economic indicators, including a freeze in interest rates, tech stocks would likely still be relatively down. Wall Street tends to second-guess AI companies’ revenues and projections, questioning whether developments will translate into real increases in business. In Nvidia’s case, the company has consistently outperformed expectations in the recent past—though as the year progresses, we’ll see if it’s set the bar too high.
One thing that impacts every company in every sector is cybersecurity. Remedio CEO Tal Kollender says many companies are overlooking significant risks by focusing on weaknesses in big systems while forgetting about smaller vulnerabilities across systems, applications and AI. I spoke with her about how to find these weak points and shore them up. An excerpt from our conversation is later in this newsletter.
This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.
Nvidia’s GTC 2026 wraps up today, and it’s been a week full of announcements about the next chapter of AI from the company that’s synonymous with its infrastructure. But the biggest announcement wasn’t truly about new products. At his keynote address, CEO Jensen Huang unveiled the next step for Nvidia: A five-layered full-stack platform approach to AI, showing that the company will own every section of the AI stack, writes Forbes senior contributor Janakiram MSV.
At the bottom is Nvidia’s compute substrate, anchored by its Vera Rubin platform with its seven specialized chips, five rack-scale systems and an integrated supercomputer built for AI workloads. Above that is the networking and data acceleration layer, with NVLink 6 and Spectrum-X. The agent runtime layer is in the middle, represented by the new enterprise NemoClaw stack. Next is the open model ecosystem through the Nemotron Coalition, covering six frontier model families. At the top is the AI factory design layer, with the DSX reference architecture and Omniverse Digital Twin blueprint.
MSV writes that this is the logical next step for Nvidia. The world’s most valuable company became dominant with its GPUs for AI training infrastructure. But as AI develops, the larger need is in inference at scale and always-on agentic systems generating myriad tokens—and Nvidia wants to run the entire ecosystem.
This ecosystem is built on both Nvidia's own technology and other providers' technologies. Forbes’ Phoebe Liu talked to Groq cofounder and CEO Jonathan Ross about the specialized AI chip company’s deal with Nvidia, announced last Christmas Eve: $20 billion to license Groq’s product and hire most of its staff. Groq’s chips will be part of the bottom layer of Nvidia’s full-stack platform, integrated with Vera Rubin GPUs.
Nvidia’s revenues have grown exponentially as the AI boom has remade technology. In the last three fiscal years alone, they’ve grown about 700%, finishing FY 2026 with a record $215.9 billion. In his keynote address, Huang forecast $1 trillion in revenue for the current fiscal year, buoyed by demand for all of Nvidia’s infrastructure.
ARTIFICIAL INTELLIGENCE
By now, we know the extreme benefits and drawbacks of OpenClaw: An effective autonomous AI agent that operates on a personal computer and can act on all the data and processes it can reach—but that broad access also puts personal data at risk. OpenClaw wasn’t designed with security in mind. Some users have come to their computers to find that their agents moved private data to places where hackers could easily access it, deleted important files and emails, and ran malicious code—disguised as downloadable “skills” or “learned” from other agents—to steal information or money.
Docker, an open-source platform that creates isolated computing environments for app development, recently launched a more secure version of the OpenClaw agent called NanoClaw. Creator Gavriel Cohen told Forbes senior contributor Gil Press that NanoClaw’s fundamental principle is creating barriers around agents: each agent is contained in its own environment with a limited area in which to take action, stopping bad agent decisions before they become huge problems—and keeping agents from interacting with other agents, which can make problems exponentially worse.
“The right approach isn’t better permission checks or smarter allowlists,” Cohen told Press. “It’s architecture that assumes agents will misbehave and contains the damage when they do.”
Because of its isolation—especially from other agents—NanoClaw is secure and safe by default, Cohen told Forbes senior contributor John Koetsier. Isolation doesn’t mean that NanoClaw can’t do things, though. Docker says it can search the web and do research for you. It can build complex software. It can connect to messaging apps, run scheduled tasks and work with LLMs and other AI platforms. There are also downloadable skills for NanoClaw, like email reading and management and voice transcription.
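Cohen’s containment principle can be illustrated with the standard isolation flags Docker already exposes. This is a minimal sketch, not NanoClaw’s actual launch command; the image name “nanoclaw” and the workspace path are hypothetical stand-ins.

```shell
# Sketch: run an agent in a locked-down container so bad decisions
# stay contained. The "nanoclaw" image name is hypothetical.
docker run --rm \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --network none \
  --memory 512m \
  --pids-limit 100 \
  -v "$PWD/agent-workspace:/workspace" \
  nanoclaw
# --read-only         the agent cannot modify its own filesystem
# --cap-drop ALL      drop all Linux capabilities
# --network none      no network access, so no agent-to-agent chatter
# -v ...:/workspace   the one writable directory the agent may touch
```

A real deployment would punch controlled channels through this wall (for web search or messaging, say), but the point of the architecture is that the default posture is deny-everything, so misbehavior is limited to one disposable sandbox.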
While NanoClaw is popular among developers now, with more than 24,300 stars on GitHub to date, it’s part of a still-developing slate of more secure versions of OpenClaw-type agents. Another one—NemoClaw—was among Nvidia’s announcements at GTC 2026 this week.
TECHNOLOGY + INNOVATION
As computer-based AI models continue to improve, developers are also working on AI for the physical world: Robots that can operate machinery, work on assembly lines, perform physical labor in manufacturing facilities and warehouses, and do housework.
But physical AI is different from computer-based AI, which is measured in tokens and the massive amounts of compute needed to process requests. There’s less to see in physical AI at this point—nobody is mass-producing not-quite-ready AI-powered robots—but developers are trying a different approach, writes Forbes senior contributor John Koetsier. Instead of racking up tokens and compute, several efforts aim to “teach” tasks to AI-powered robots the same way humans learn them. AI pioneer Yann LeCun recently raised $1 billion for this sort of effort. Before that, researchers at Imperial College London published a paper demonstrating that this approach “taught” AI robots 1,000 different real-world manipulation tasks in a day. Some of the robot training came from a single human demonstration of the task.
Koetsier spoke with Edward Johns, Imperial College’s robotics lab director, about their approach to robot training. It involves breaking actions into several reusable pieces, such as learning to align a robotic hand with an object—a first step in many actions a robot would perform. By separating larger tasks into smaller pieces, the robot reuses prior learning, making it faster to “learn” full actions.
How To Address The Cybersecurity Threats In Your Own System
Cybersecurity strategies are often based on shoring up big systems, which Remedio CEO Tal Kollender compares to locking the front door. But the back door—risks from systems layered on top of systems, AI platforms and customizations, and vibe-coded tools—is often wide open. Many companies are unaware of these risks, and their teams’ responsibilities are too distributed for anyone to notice.
I spoke to Kollender—a former hacker who is familiar with finding unrealized weaknesses—about how CISOs can identify these vulnerabilities and make their enterprise security stronger. This conversation has been edited for length, clarity and continuity.
Where do you see the biggest cybersecurity vulnerabilities and threats?
Kollender: About 90% of all cybersecurity attacks, once they get in, abuse something called misconfiguration. Misconfiguration can be any kind of default setting or human error or whatnot, but it helps them move laterally.
What we see recently is that when people write their own AI agents, they don’t think about security first. They think about: Let’s be more efficient. Let’s make sure that this and that works. And they don’t really pay attention. Close to 90% of all organizations adopt AI in one way or another, and they all say, ‘We have our own limitations.’ No, you don’t have your own limitations if you’re only trying to close [security] on the network level.
AI agents today are so sophisticated because they can create their own code and be malicious without you knowing. You cannot really protect it, and when you find it, it’s already too late.
From the enterprise level, what can a company do to protect against some of these threats?
It is 100% being proactive: not only seeing the risk, but also addressing the risk.
So many vendors say, ‘We have AI just to say we have AI because that’s the new trend.’ We understand it, but we want to stop the AI threat. And it’s not recklessness on purpose. You install OpenClaw or Claude Code and you manage to do pretty much anything, and you don’t understand that the basic configurations when you install it on your device are risky enough that it can go wrong no matter what you do.
It’s not only misconfiguration, but also the remediation on top of that, because you need to shrink the attack surface before someone else comes in—by then, it’s already too late.
The approach of proactive, autonomous or even auto-remediation is something that helps so many customers. It shifts them a little bit into a better position. The security team, along with the infrastructure and networking teams, all need to be one unit, not saying, ‘No, I’m not doing remediation. Let me take you to the other team who is [working] on the remediation.’ They need to have the same agenda to help your organization protect against the next threat actor.
What advice would you give to a CISO looking at this landscape and wanting to secure their enterprise?
You have enough visibility tools, you have enough things that will tell you what is wrong, but you don’t have enough actionability. And today, when you see things, you send them to another team to fix, and usually you don’t get results after that because it’s hard. It’s a lot of manual effort: processes, scripting, and people are afraid to break stuff. They need a proactive approach—instead of only seeing alerts, actually fixing them and working together as one unit.
Software and cloud platform provider ServiceTitan appointed Abhishek Mathur as its chief technology and product officer. Mathur joins the company from Figma, where he worked as senior vice president of software engineering, and has also held leadership roles at Meta and Microsoft.
Safety and security solutions firm Pavion selected Keith Ikels as its chief information officer. Ikels joins the company from Pape-Dawson Engineers, where he held the same role, and before that worked at Sirius Computer Solutions.
Cancer diagnostics company Veracyte hired Kevin Haas as its chief development and technology officer, effective March 24. Haas most recently worked as chief technology officer at Myriad Genetics, and fills a newly created leadership role.
AI is supposed to help workers do more, but it often leads them to spend more time on less important tasks—partly because of the interactive nature of AI chatbots. Here are ways companies have integrated AI into their operations more effectively, both to get tasks done and boost efficiency.
Today’s leaders know how to handle big displays of disagreement, but they may not be as in tune with smaller ones—eye rolls, sighs, dismissive tones of voice. These smaller gestures still carry weight, and can add up to larger issues on your team. Here’s what to look for, and how to intervene.
