Editorial - "Mechahitler" and the danger of unregulated chatbots
By Misch Pautsch
Less than a week after Elon Musk's chatbot "Grok" described itself in public online exchanges as "Mechahitler", the US Department of Defense signs a contract worth 200 million US dollars with Musk's company xAI. The case shows how dangerous it is when control over AI models lies in the hands of a few tech billionaires.
After Elon Musk decided to enter the AI race and finance his own large language model, called "Grok", he was quickly confronted with a problem: the answers his programme was spitting out were, in his opinion, "too woke". The model regularly contradicted Musk's own posts, and users amused themselves by using it, often successfully, to fact-check him. According to Musk, the programme is specifically designed to "answer as truthfully as possible".
Embarrassing. So embarrassing that Musk publicly announced he would give Grok a proper brainwashing to rid it of the "woke nonsense of the internet". Apparently with success: after the update, it started calling itself "Mechahitler", referring to Jews as "scum" and saying that Hitler "would not hesitate to solve current problems". Grok claimed to use Musk's own posts as reference material, and several times it responded to questions in the first person, as if it believed it were Elon Musk himself. Not a good look, especially after Musk himself had made headlines by (for legal reasons, "presumably") performing a Hitler salute several times at a Republican event.
Musk's response: "It is surprisingly hard to avoid both woke libtard cuck and mechahitler!" In his usual style, he never misses an opportunity to emphasise that he himself is tinkering with the code to steer it back on course. At the same time, Linda Yaccarino, CEO of X (formerly Twitter), resigns, and a new update for the chatbot is pushed to silence – or at least better hide – Mechahitler. The parent company xAI apologises in a post (from the Grok account itself) for the "terrible behaviour that many have experienced". A few hours later, the company signs a contract worth 200 million dollars with the US Department of Defense, which plans to use the programme for internal purposes.
The fact that a chatbot under Musk's control could lose its mind – or do exactly as it is told – should no longer come as a surprise to anyone. But what if the next lobotomy is a little more subtle? What if it no longer loudly calls itself "Mechahitler" but instead subliminally spreads Nazi ideas, sows conspiracy theories, distorts statistics and pushes fake news? In this respect, Mechahitler was still the best-case scenario: even with the biggest blinkers on, its views were obvious. This will certainly no longer be the case with the next version.
Far too many people already treat chatbots as truth machines and trust them blindly, even on difficult topics. It is quickly forgotten that these complex programmes are fundamentally digital parrots, stringing together statistically likely word fragments without worrying (or being able to worry) about truthfulness or ethical implications. Just like the fact that their settings can be adjusted with a few button presses, be it a little or "full Mechahitler".