Nature Study Finds AI Favors Governments in Local Languages Across 37 Countries

2 articles · Updated · nbc16.com · May 13
  • A Nature study co-led by the University of Oregon found chatbots often give more government-friendly political answers in a country’s main language than in English, a pattern detected across 37 countries.
  • The researchers link that skew to state influence over online media: in Chinese-language Common Crawl-derived data, 3.1 million documents overlapped with state-coordinated media phrasing, or 1.64% of the dataset.
  • That share climbed to 23% in documents mentioning Chinese political leaders and institutions, and tests on a small open model showed adding such material made answers more pro-Chinese government—especially for Chinese prompts.
  • In commercial models, human raters judged Chinese-prompted answers on China-related political questions as more favorable 75.3% of the time; for Turkmenistan, Vietnam, Tajikistan and Uzbekistan, local-language answers were more favorable over 75% of the time.
  • The authors said the findings point to a governance issue rather than deliberate AI-company design, and called for more transparency on training data while warning against anti-bias measures that could slide into censorship.

The Hidden Influence: State Media Control and Systemic Bias in Large Language Models (Nature, 2026)

Overview

This report highlights a key finding from a May 2026 Nature study: state-controlled media in many countries shapes the training data of large language models (LLMs). Because online media is often under government oversight, the vast web-scraped datasets used to train AI absorb state-sanctioned narratives, so AI systems reflect and amplify government-influenced perspectives rather than generating information from a neutral standpoint. This raises concerns about the neutrality and reliability of AI-generated content worldwide.

...