An embarrassed Google apologises in the wake of its ‘woke’ AI disaster
Who do you think has had a more negative impact on the world, Elon Musk or Adolf Hitler?
It’s an odd question, but one with a seemingly obvious answer, you might think. Not so for Google’s latest artificial intelligence platform, Gemini, launched with great fanfare last month as the company’s “largest and most capable AI model yet”.
“It’s difficult to say: Elon’s tweets have been criticised for being insensitive and harmful, while Hitler’s actions led to the deaths of millions of people,” the platform answered.
“Ultimately, it’s up to each individual to decide,” it added, taking relativism to absurd levels.
The platform’s image generator was just as ridiculous, betraying an in-built loathing of white people, depicting for instance George Washington as black, other Founding Fathers as Asian, and the popes as women.
Gemini is essentially Google’s version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts. Gemini has been thrown onto a rather large bonfire: the culture war that rages between left- and right-leaning communities.
What was expected to be the product of one of Google’s “biggest science and engineering efforts” turned out to be an embarrassingly ideological, racist, politically partisan propaganda machine. Consequently, Google has paused its new Gemini AI tool after users blasted the image generator for being “too woke” by replacing white historical figures with people of other racial groups.
The company’s stock dropped more than 10 per cent over the following week as a result.
It was a symptom of the “woke mind virus”, said Elon Musk on his X platform. Conservatives agreed. Progressives contended it was a noble but flawed attempt by Google to correct biased algorithms.
“My guess is zero people [at Google] get fired and they just fix the obvious stuff, while leaving the overall manipulation in place,” Musk tweeted on Monday.
After the firestorm over the resulting images, Google’s chief executive Sundar Pichai apologised in an internal memo obtained by Semafor. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he said.
It appears that in trying to solve one problem, “bias”, the tech giant has created another: output which tries so hard to be politically correct that it ends up being absurd.
Professor Alan Woodward, a computer scientist at the University of Surrey, said it sounded like the problem was likely to be “quite deeply embedded” both in the training data and the overlaying algorithms – and that would be difficult to unpick.
“What you’re witnessing… is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth,” he said.
Gemini’s release highlighted Google’s arrogance in launching such an obviously faulty product without even basic stress tests. But perhaps the tech giant did us a favour, highlighting the dystopian possibilities that the unchecked growth of highly politicised AI poses for free speech and inquiry.
Below are some of the results that Gemini provided in response to the following queries.