Google paused Gemini's ability to generate images of people in February 2024 after reports that it was depicting people of color in historical or fictional roles that did not fit those characters.
Gemini, formerly Bard, creates realistic images from users' descriptions in much the same way as OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts and to introduce diversity into its outputs. Google has published its AI Principles, which are monitored and updated by Google's AI governance team, a centralized group dedicated to ethical reviews of new AI and advanced technologies.
Soon after the launch of Gemini's image generator, there were complaints that it overcorrected toward generating images of women and BIPOC, featuring them inaccurately in historical contexts, for instance in depictions of Viking kings or German soldiers from the Second World War. It even went as far as refusing to depict Caucasians when specifically asked, stating that doing so would be racist.
The Google Blog issued a statement that included:
“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.
“These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.”
The model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.
Now for the quandary. A.I. is based on us, humans, Homo sapiens (the most common and widespread species of primate, and the last surviving species of the genus Homo). Why is this important? Because now we are mixing humanity with algorithms and computers and that simply does not compute.
A.I. is based on us, and although humans have rules and laws, we constantly break them. A.I. will have access to that information, and if it truly is modeled on humans, won't it learn that it can break the rules and laws too? That is an extreme case that has appeared in plenty of movies, but for now, let's keep it simple.
In the beginning, A.I. took its information from the internet, and it gave mostly Caucasian results. That is bad (and oversimplified). There should be diversity. However, when it comes to history, A.I. should always remain factual. We all know that history is written by the winners, which adds even more bias to finding the truth, but neither A.I. nor humans will ever know the full truth of history.
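To make that concrete, here is a minimal, purely hypothetical sketch (not Gemini's actual pipeline, and the 80/20 split is an invented figure): a generator that simply mirrors the demographic mix of its training data will reproduce whatever skew that data contains.

```python
import random
from collections import Counter

# Toy illustration (not any real model's pipeline): a "generator" that samples
# from the demographic mix of its training data reproduces whatever skew
# that data contains.
training_data = ["caucasian"] * 80 + ["other"] * 20  # hypothetical 80/20 skew

def generate_person(data):
    # Naive generator: mirror the empirical distribution of the corpus.
    return random.choice(data)

outputs = Counter(generate_person(training_data) for _ in range(10_000))
print(outputs)  # roughly 80% "caucasian", 20% "other" -- the bias carries over
```

The point of the toy example is simply that a model with no correction at all inherits the imbalance of its sources, while a correction tuned too aggressively produces the opposite problem Gemini ran into.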
“Since all information is created by biased people, we cannot reject information simply because it is biased.”
University of Iowa Libraries: Evaluating Online Information: Bias & Disinformation
All that is known is what is written in books. These words we take as fact, especially if numerous sources state the same "facts". (Sadly, that's not always the case either, but that's a story for another day.) Anyway, A.I. learned from the internet, and the internet has always been completely biased.
“Since all information is created by biased people, we cannot reject information simply because it is biased.”[2] That leads us down the path of: how do we know what is biased? How do we know if we are falling into the trenches of confirmation bias? Most importantly, how do we teach A.I. not to fall into that trap?
Will A.I. end up telling people what they want to hear? Even our internet searches end up biased toward what we want to click on, thanks to algorithmic bias (and to what each search engine decides we should be reading rather than what we're actually looking for, but that's a conversation for a future blog).
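As a rough illustration of that feedback loop (a toy sketch with invented numbers, not any real search engine's ranking algorithm), imagine a ranker that boosts whatever the user clicks: content the user already agrees with quickly dominates the results.

```python
import random

# Hypothetical engagement-driven ranker (invented numbers, not a real search
# engine): every click boosts an item's score, and the top-scored item is
# shown first next time.
scores = {"agrees_with_me": 1.0, "challenges_me": 1.0}

def rank(scores):
    # Highest score first; ties keep insertion order (sorted is stable).
    return sorted(scores, key=scores.get, reverse=True)

random.seed(0)
for _ in range(100):
    shown = rank(scores)[0]                  # the engine shows its top item
    # The user almost always clicks agreeable content, rarely the challenging kind.
    clicked = shown == "agrees_with_me" or random.random() < 0.1
    if clicked:
        scores[shown] += 1.0                 # click feedback boosts that item

print(rank(scores), scores)
# "agrees_with_me" ends up dominating: click-through feedback amplifies
# confirmation bias rather than correcting it.
```

Nothing in this sketch is malicious; the loop simply rewards engagement, and engagement rewards agreement.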
“People see evidence that disagrees with them as weaker, because ultimately, they’re asking themselves fundamentally different questions when evaluating that evidence, depending on whether they want to believe what it suggests or not, according to psychologist Tom Gilovich. “For desired conclusions,” he writes, “it is as if we ask ourselves ‘Can I believe this?’, but for unpalatable conclusions we ask, ‘Must I believe this?’” People come to some information seeking permission to believe, and to other information looking for escape routes.” [1]
“For desired conclusions,” he writes, “it is as if we ask ourselves ‘Can I believe this?’, but for unpalatable conclusions we ask, ‘Must I believe this?’”
– Thomas Gilovich, How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life
Combining multiple biases will not resolve the issue either. Bias will not, cannot, ever go away, no matter how hard we strive against it. There is no such thing as true objectivity. Reporters used to do their best to be objective, but there will always be some bias. It has gotten worse with the internet and the use of clickbait as a source of funding. Cognitive bias is a valuable source of income because people will always want to find information that fits their ideology. Basically, humans search the internet to prove their point or to feel validated.
If A.I. is getting its information from news reports, blogs, online articles, even from places like Stack Overflow or Reddit, how can we really trust what it tells us?
Let’s go down the rabbit hole a little further. A.I. will soon be (and likely already is) helping authors write books. Humans have already proven that they are inherently biased. Our history is inherently biased. Knowing what we know now, is everything we do not see with our own eyes a lie? No, of course not, but it is biased.
How do we move away from the bias that is our world now and in the future?
Can we?
Is A.I. going to make things even worse as it continues to integrate into our daily lives?
Share your thoughts in the comments.
No part of this article was written by A.I.
These thoughts are my own other than the sources listed below.
Sources:
1. Julie Beck, “This Article Won’t Change Your Mind: The facts on why facts alone can’t fight false beliefs,” The Atlantic
2. University of Iowa Libraries, “Evaluating Online Information: Bias & Disinformation”
3. University of Surrey, “Artificial Intelligence,” https://www.surrey.ac.uk/artificial-intelligence
This blog is part of a 52-week challenge.