
AI can tank teams’ critical thinking skills. Here’s how to protect yours


AI is transforming how teams work. But it’s not just the tools that matter. It’s what happens to thinking when those tools do the heavy lifting, and whether managers notice before the gap widens.

Across industries, there’s a common pattern. AI-supported work looks polished. The reports are clean. The analyses are structured. But when someone asks the team to defend a decision, not summarize one, the room goes quiet. The output is there, but the reasoning isn’t owned.

For David, the COO of a midsize financial services firm, the problem surfaced during quarterly planning. Multiple teams presented the same compelling statistic about regulatory timelines, one that turned out to be wrong. It had come from an AI-generated summary that blended outdated guidance with a recent policy draft. No one had checked it. No one had questioned it. It simply sounded right.

“We weren’t lazy,” David told us. “We just didn’t have a process that asked us to look twice.”


Through our work advising teams navigating AI adoption (Jenny as an executive coach and learning and development designer, Noam as an AI strategist), we have seen a clear distinction: there are teams where AI flattens performance and teams where it deepens it. The difference isn't whether AI is allowed. It's whether judgment is designed back into the work.

The good news is that teams can adopt practices that shift them from producing answers to owning decisions. This way of working doesn't slow things down. It moves performance to where it actually matters, and it protects the judgment that no machine can replace.

1. The Fact Audit: Question AI’s Output

AI produces fluent language. That’s exactly what makes it dangerous. When output sounds authoritative, people stop checking it. It’s a pattern often called workslop: AI-generated output that looks polished but lacks the substance to hold up under scrutiny. In contrast, critical thinking strengthens when teams learn to treat AI as unverified input, not a final source.


