
The 3 AI Problem: How Chinese, European, and American Chatbots Reflect Diverging Worldviews

18.12.2025

Recent studies have shown that some of today’s most widely used large language models (LLMs) echo the values of Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. A 2024 paper demonstrated that GPT’s answers to the World Values Survey cluster with those of English-speaking Protestant societies, emphasizing self-expression values such as LGBTQ rights, environmental protection, and individual autonomy. In contrast, GPT diverged from opinions common in countries such as the Philippines or Nigeria, suggesting a heavy imprint of Western, educated users and training data.

This raises an important question: Do Chinese AIs carry a different worldview?

China is a non-WEIRD country, and its LLMs have advanced rapidly: models like DeepSeek and Qwen now reach global audiences. Their spread has geopolitical implications, especially given China’s approach to information governance. Upon release, DeepSeek drew attention for avoiding references to the Tiananmen protests, a reminder of Chinese censorship norms. Researchers later confirmed that DeepSeek delivered highly official-sounding answers when sensitive geopolitical topics were raised, sometimes phrased in a style resembling Chinese government statements. These patterns were especially visible in Mandarin and on politically charged questions such as protest participation.

Nevertheless, other analyses found an unexpected nuance: DeepSeek frequently adopted socially liberal positions in areas without a defined official narrative, behaving similarly to Western models on issues such as immigration, human rights, and individual freedoms. This suggests a mixed ideological profile shaped by training data but constrained by political guardrails.

A European model, Mistral, complicates this landscape further. Despite its EU origin, it avoids the left-leaning tilt that larger Western models sometimes display. Earlier research suggested that ideological bias increases with model size; as a smaller model emphasizing efficiency and customizability, Mistral often produced more balanced results than its American counterparts. But its...

© The Diplomat