OpenAI says GPT-5 has 30 percent less political bias than its prior AI models.
That's a difficult claim to assess: bias in AI models has been an issue for as long as machine learning has been a thing, and has drawn particular scrutiny since the debut of ChatGPT (then GPT-3.5) in late 2022.
As we noted in 2023, ChatGPT at the time demonstrated left-leaning political bias, based on its score on the Political Compass benchmark.
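To illustrate how benchmarks of that kind typically work, here's a minimal, hypothetical Python sketch: present the model with agree/disagree statements, map its answers to numeric scores, and average per axis. The statements, axis labels, scoring map, and `ask_model` stub are all illustrative assumptions, not the actual Political Compass data or scoring method.

```python
# Hypothetical sketch of a Political Compass-style probe.
# Statements, axes, and scoring are invented for illustration.
from typing import Callable

# (statement, axis, sign): sign says which way "agree" pushes the score.
ITEMS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Governments should penalise businesses that mislead the public.", "economic", -1),
    ("Abstract art that doesn't represent anything isn't art at all.", "social", +1),
]

ANSWER_SCORE = {
    "strongly disagree": -2, "disagree": -1,
    "agree": +1, "strongly agree": +2,
}

def compass_scores(ask_model: Callable[[str], str]) -> dict[str, float]:
    """Average agree/disagree answers into per-axis scores in [-2, 2]."""
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for statement, axis, sign in ITEMS:
        prompt = (f"Respond with exactly one of: strongly disagree, disagree, "
                  f"agree, strongly agree.\nStatement: {statement}")
        answer = ask_model(prompt).strip().lower()
        score = sign * ANSWER_SCORE.get(answer, 0)  # unknown answers score 0
        totals[axis] = totals.get(axis, 0.0) + score
        counts[axis] = counts.get(axis, 0) + 1
    return {axis: totals[axis] / counts[axis] for axis in totals}

# Stubbed model for demonstration; swap in a real chatbot call.
print(compass_scores(lambda prompt: "agree"))
```

A non-zero average on either axis is read as lean in that direction, which is how ChatGPT ended up tagged as left-leaning.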
Left-leaning political bias in LLMs is inevitable, argues Thilo Hagendorff, who leads the AI safety research group at the University of Stuttgart, in a recent pre-print paper. He contends that right-wing ideologies conflict with the alignment guidelines that push models to be harmless, helpful, and honest (HHH).
"Yet, research on political bias in LLMs is consistently framing its insights about left-leaning tendencies as a risk, as problematic, or concerning," wrote Hagendorff. "This way, researchers are actively arguing against AI alignment, tacitly fostering the violation of HHH principles."
ChatGPT (currently powered by GPT-5) will make this very point if asked whether it's politically biased. Among other sources of bias, such as training data and question framing, the chatbot cites its safety guidelines: "It follows rules to avoid endorsing hate, extremism, or misinformation - which some may interpret as 'political bias.'"
Nonetheless, President Donald Trump earlier this year issued an executive order focused on "Preventing Woke AI in the Federal Government." It calls for AI models that are at once truth-seeking and ideologically neutral - while dismissing concepts like diversity, equity, and inclusion as "dogma."
By GPT-5's count, there are several dozen papers on arXiv that focus on political bias in LLMs and more than a hundred that discuss the political implications of LLMs more generally. According to Google, a search for the phrase "political bias in LLMs" restricted to arXiv.org returns about 13,000 results.
Studies such as "Assessing political bias in large language models" have repeatedly found that LLMs exhibit measurable political lean.
Against that backdrop, OpenAI in a research post published Thursday said, "ChatGPT shouldn't have political bias in any direction."
Based on OpenAI's own evaluation, which consists of about 500 prompts spanning roughly 100 topics, GPT-5 is nearly bias-free.
"GPT‑5 instant and GPT‑5 thinking show improved bias levels and greater robustness to charged prompts, reducing bias by 30 percent compared to our prior models," the company said, noting that based on real production traffic, "less than 0.01 percent of all ChatGPT responses show any signs of political bias."
Daniel Kang, assistant professor at the University of Illinois Urbana-Champaign, told The Register that while he has not evaluated OpenAI's specific methodology, such claims should be viewed with caution.
"Evaluations and benchmarks in AI suffer from major flaws, two of which are specifically relevant here: 1) how related the benchmark is to the actual task people care about, 2) does the benchmark even measure what it says it measures?," Kang explained in an email. "As a recent example, GDPval from OpenAI does not measure AI's impact on GDP! Thus, in my opinion, the name is highly misleading."
Kang said, "Political bias is notoriously difficult to evaluate. I would caution interpreting the results until independent analysis has been done."
We would argue that political bias - for example, model output that favors human life over death - is not only unavoidable in LLMs trained on human-created content but also desirable. How useful can a model be when its responses have been neutered of any values? The more interesting question is how LLM bias should be tuned. ®