Humanity's Report Card: How Bad Is It, Really?
Something is wrong and we all feel it.
We have a war in the Middle East, an attention economy that profits from our hatred, and rapidly evolving AI that will alter the course of civilization. We doomscroll through it all, Left and Right alike, each side certain the other will be the end of us.
We are understandably worried about the world our children will inherit. But how worried should we actually be? Steven Pinker tells us things are better than ever. Jonathan Haidt warns that smartphones are harming an entire generation. Tristan Harris argues that AI puts humanity at an inflection point that may determine whether we thrive or self-destruct.
It is time we understand how bad things really are.
I used five independent AI systems in what I call a blind roundtable: Claude, ChatGPT, Gemini, Grok, and DeepSeek, each in a fresh chat with no shared context. Working with them, I identified five dimensions for evaluating whether humanity is thriving: Meeting Basic Needs, Planetary Harmony, Unity and Compassion, Human Flourishing, and Wisdom with Power. Then I asked the AIs to grade us.
But something was missing: if humanity triggers an existential catastrophe, all other grades are irrelevant. So I added a sixth question: What are the statistical odds that humanity avoids extinction within the next 50 years?
I repeated the experiment with each new generation of AI. Five waves over eight months, August 2025 through March 2026. Twenty-five independent assessments across evolving architectures and training data.
The results were sobering and remarkably stable. Overall grades clustered between C-minus and D-plus across all 25 assessments. The mean survival odds held steady at approximately 67%. The worst category was unanimous: Planetary Harmony received a D to F in every single assessment. And the core diagnosis was unanimous, expressed in different words every time but pointing at the same failure: an existential…
