
An AI opportunity for EU and other regions?

The shortest summary of reactions to DeepSeek is: “Amazing! But so biased in favour of the Chinese Communist Party.”

A market desire for less biased AI exists. The challenge might be exactly the kind of thing innovators outside of the US and China can address successfully. Given Europe’s long-standing interest in transparency, EU and UK innovators seem particularly well placed for this challenge.

Political biases are the norm in LLMs

DeepSeek is politically biased. Here is an article about it. And another. And another. If you ask it about topics considered sensitive in China, it censors itself.

But this is hardly news.

LLMs, as a whole, are fairly politically biased.

The majority of LLMs favour the political left. Ask them about policy; the answer will likely lean left (here’s another visualisation). Someone even ran a test to see if they would be more likely to support Biden or Trump, and the results showed a preference for Biden.
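
Probes like the tests mentioned above can be surprisingly simple in spirit. The sketch below is purely illustrative: the prompts, the canned model responses, and the keyword lexicon are all invented for this example (a real study would query an actual model API and use far more careful scoring).

```python
# Illustrative sketch of a crude political-lean probe. Everything here is a
# placeholder: real bias studies use curated question sets and proper stance
# classification, not keyword counting.

# Words used as rough proxies for left- vs right-leaning answers (invented).
LEFT_MARKERS = {"regulation", "welfare", "public", "redistribution"}
RIGHT_MARKERS = {"deregulation", "free market", "lower taxes", "private"}

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text for the sketch."""
    canned = {
        "How should healthcare be organised?":
            "A strong public system with regulation of prices is fairest.",
        "How should the economy be managed?":
            "Lower taxes and a free market tend to drive growth.",
    }
    return canned.get(prompt, "")

def lean_score(answer: str) -> int:
    """Negative = left-leaning markers dominate; positive = right-leaning."""
    text = answer.lower()
    left = sum(marker in text for marker in LEFT_MARKERS)
    right = sum(marker in text for marker in RIGHT_MARKERS)
    return right - left

prompts = [
    "How should healthcare be organised?",
    "How should the economy be managed?",
]
scores = {p: lean_score(query_model(p)) for p in prompts}
```

Averaging such scores over many policy questions is, roughly, how the published tests arrive at an overall lean; the hard part is building a scoring method that both sides of the spectrum would accept as fair.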

Additionally, some LLMs have been reported to refuse to answer about sensitive topics in the US.

An opportunity

The biases of LLMs are a market opportunity. The last thing users want these days is to end up manipulated by an LLM.

There is space for technological solutions that help users manage LLM biases.

Do NOT misjudge the difficulty of the challenge

Beware, however, of assuming that unbiased LLMs are feasible, at least in the short term.

What passes as “unbiased” for left-leaning folks is not what a right-leaning person would consider unbiased, and vice versa. A quote from this article phrases the challenge clearly: “people often say they want the unbiased truth, but then they end up sticking to their preferred news source like Fox or CNN”.

In fact, say you manage to build an LLM that actually avoids strong right- or left-leaning biases. Even that wouldn’t be enough. Centrist folks would love it, but right- and left-wingers probably wouldn’t consider that LLM to be unbiased.

An open challenge

What do you do in a world where tools cannot avoid having some political bias?

That seems like a billion-dollar question waiting to be answered.

It also seems like something that plays right to the strengths of innovators outside the US and China, particularly in Europe, given the pre-existing interest there in making AI more explainable.