Google explains how Gemini’s AI image generation went wrong, and how it’ll fix it
Everybody makes mistakes…
A few weeks ago Google launched a new image generation tool for Gemini (the suite of AI tools formerly known as Bard and Duet) which allowed users to generate all sorts of images from simple text prompts. Unfortunately, Google’s AI tool repeatedly missed the mark, generating inaccurate and even offensive images that left a lot of us wondering: how did the bot get things so wrong? Well, the company has finally released a statement explaining what went wrong, and how it plans to fix Gemini.
The official blog post addressing the issue states that when designing the text-to-image feature for Gemini, the team wanted to “ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people.” The post further explains that users probably don’t want to keep seeing people of just one ethnicity or other prominent characteristic.
So, to offer a pretty basic explanation of what’s been going on: Gemini has been throwing up images of people of color when prompted to generate images of white historical figures, giving users ‘diverse Nazis’, or simply ignoring the part of a prompt where you’ve specified exactly what you’re looking for. Gemini’s image capabilities are currently on hold, but while the feature was live you could specify exactly who you were trying to generate - Google uses the example “a white veterinarian with a dog” - and Gemini would seemingly ignore the first half of that prompt and generate veterinarians of every race except the one you asked for.
Google went on to explain that this was the outcome of two crucial failings. Firstly, Gemini’s tuning to show a range of different people failed to account for cases that clearly should not show a range. Alongside that, in trying to make a more conscious, less biased generative AI, Google admits the “model became way more cautious than we intended and refused to answer certain prompts entirely - wrongly interpreting some very anodyne prompts as sensitive.”
So, what’s next?
At the time of writing, the ability to generate images of people on Gemini has been paused while the Gemini team works to fix the inaccuracies and carry out further testing. The blog post notes that AI ‘hallucinations’ are nothing new when it comes to complex deep learning models - even Bard and ChatGPT had some questionable tantrums as the creators of those bots worked out the kinks.
The post ends with a promise from Google to keep working on Gemini’s AI-powered people generation until everything is sorted, with the note that while the team can’t promise it won’t ever generate “embarrassing, inaccurate or offensive results”, action is being taken to make sure it happens as little as possible.
All in all, this whole episode puts into perspective that AI is only as smart as we make it. Our editor-in-chief Lance Ulanoff succinctly noted that “When an AI doesn’t know history, you can’t blame the AI.” With how quickly artificial intelligence has swooped in and crammed itself into various facets of our daily lives - whether we want it or not - it’s easy to forget that the public proliferation of AI started just 18 months ago. As impressive as the tools currently available to us are, we’re ultimately still in the early days of artificial intelligence.
We can’t rain on Google Gemini’s parade just because the mistakes were more visually striking than, say, ChatGPT’s recent gibberish-filled meltdown. Google’s temporary pause and reworking will ultimately lead to a better product, and sooner or later we’ll see the tool as it was meant to be.