Trump’s Federal Guidelines for AI May Turbocharge Climate Denial and Racist Bias
Picture this: You ask an AI to show you images of judges, and it depicts only 3 percent as women — even though 34 percent of federal judges are women. Or imagine an AI that’s more likely to recommend harsh criminal sentences for people who use expressions rooted in Black vernacular cultures. Now imagine that same AI instructed to ignore climate impacts or treating Russian propaganda as credible information.
This isn’t science fiction. The bias problems are happening right now with existing AI systems. And under President Trump’s new artificial intelligence policies, all these problems could get much worse — while potentially handing the U.S.’s tech leadership to China.
The Trump administration’s AI Action Plan, released alongside executive orders on July 23, 2025, doesn’t just strip federal AI guidelines of bias protections. It eliminates references to diversity, climate science, and misinformation from the National Institute of Standards and Technology’s AI Risk Management Framework, a document that has become one of the most widely used AI governance guidelines globally.
The administration demands that AI models used by the federal government be “objective and free from top-down ideological bias.” But there’s a catch: This standard comes from an administration whose leader made 30,573 documented false or misleading claims during his first term, according to Washington Post fact-checkers. The result could be AI systems that ignore climate science, amplify misinformation, and become so unreliable that global customers choose Chinese alternatives instead.
The irony runs deep. While claiming to eliminate bias, Trump’s policies could embed it even more firmly into the AI systems that increasingly shape American life — from hiring decisions to law enforcement to health care.
Research shows that AI bias can actually be worse than real-world bias. When Bloomberg tested an AI image generator on common occupations, the results were stark: Prestigious, higher-paid professionals appeared almost exclusively as white and male, while lower-paid workers were depicted as women and people of color. The AI’s racial and gender sorting exceeded the differences that actually exist in our world.
Fast food workers, for example, were shown with darker skin tones 70 percent of the time by the AI — but in reality, 70 percent of fast food workers in the United States are white.
The consequences go far beyond images. Research published in Nature found that large language models were significantly more likely to…
© Truthout
