
(Over)correcting bias in AI

Dries De Rydt

Last week, Google had some explaining to do after people discovered that its Gemini AI tool generated racially diverse Nazis and other historically inaccurate image depictions. Many people asked how this could happen, but it’s actually not surprising. Bias in AI is not new, and this was a (poor) attempt by Google to counter long-standing racial and gender bias problems in AI, resulting in an overcorrection. Why does Google have to (over)correct bias in the first place? In this article, we discuss where bias in AI comes from and what we can do to correct it.

Artificially intelligent, but biased, models

Modern machine learning algorithms set themselves apart from traditional ones primarily through their capacity to generalize and identify patterns. These patterns enable the system to handle new data and overlook minor differences. For example, AI models easily differentiate between cats and dogs — a task virtually impossible with only human-crafted rules. Deep neural networks can discern complex patterns in training data, something that would take months to emulate with classical algorithms through rule-setting, often without consistent results. However, this strength in pattern recognition can be a double-edged sword, as AI can also detect and integrate biases present in the training data without any hesitation.

Understanding bias in AI

Consider a cat/dog AI model trained on cat images with watermarks and dog images without. The model might wrongly associate watermarks with cats, a spurious correlation stemming from the data rather than from reality. This example highlights the crucial role of training data quality; the same mechanism can surface far subtler, and potentially more harmful, patterns.
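To make the mechanism concrete, here is a small, purely synthetic sketch (not an experiment from this article): a logistic regression latches onto a hypothetical “watermark” feature that happens to coincide with the cat label during training, and then falls apart once that shortcut disappears at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# 1 = cat, 0 = dog; the "watermark" feature (column 0) only appears on cat photos in training
y_train = rng.integers(0, 2, n)
watermark = y_train
traits = rng.normal(loc=y_train[:, None] * 0.3, scale=1.0, size=(n, 2))  # weak "real" image traits
X_train = np.column_stack([watermark, traits])

model = LogisticRegression().fit(X_train, y_train)

# At test time the shortcut disappears: no watermarks at all
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([np.zeros(n), rng.normal(y_test[:, None] * 0.3, 1.0, (n, 2))])

print("train accuracy:", model.score(X_train, y_train))  # near perfect, thanks to the shortcut
print("test accuracy :", model.score(X_test, y_test))    # drops sharply without the watermark
print("learned weights:", model.coef_)                    # the watermark weight dominates
```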

But what happens when the patterns in our data are not the ones we desire? No data scientist intends to build an unethical model — hopefully — but good intentions can't compensate for biased data.

Attributes a model can use

Imagine a recruitment firm using an AI system to identify suitable candidates. Given that in the US, about 58% of management positions are held by men (source), the model might unintentionally favor men if it has access to gender data. This was infamously illustrated by Amazon’s hiring tool, which reinforced existing biases.

Additionally, the model could infer that individuals from predominantly white areas or educational backgrounds are better fits for the job, simply reflecting the profile of current employees. This bias could occur without directly using race or gender as explicit attributes. This is exactly why the explainability of models remains important, even when using modern, deep networks.

If you develop such models, there are concrete ways to mitigate this. For hiring, revealing which sentences most influenced a candidate’s score can be insightful. Alternatively, scoring candidates solely on a normalized skill set, instead of a black-box language model, might be a better approach; a minimal sketch of that idea follows below.
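As a rough illustration of the second option, here is a minimal sketch of scoring on an explicit, normalized skill set with a transparent linear model, so each candidate’s score can be broken down into per-feature contributions. The feature names, data, and outcomes are entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

# Hypothetical skill features and past review scores
feature_names = ["years_experience", "python_skill", "sql_skill", "communication"]
X = np.array([
    [5, 4, 3, 4],
    [2, 5, 4, 3],
    [8, 2, 2, 5],
    [1, 3, 5, 2],
], dtype=float)
y = np.array([0.8, 0.7, 0.75, 0.5])

scaler = StandardScaler().fit(X)
model = Ridge(alpha=1.0).fit(scaler.transform(X), y)

# Break a new candidate's score down into per-skill contributions
candidate = np.array([[3, 5, 3, 4]], dtype=float)
contribs = scaler.transform(candidate)[0] * model.coef_

for name, c in zip(feature_names, contribs):
    print(f"{name:>18}: {c:+.3f}")
print(f"{'predicted score':>18}: {model.intercept_ + contribs.sum():.3f}")
```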

Data you source

Every dataset carries some form of bias, and where it comes from has a major effect on the fairness of the resulting model. For instance, Clarifai’s attempt to build a content moderation system failed because it misclassified images of people of color as pornographic. The error stemmed from unrepresentative training data: the safe content came from stock photos featuring predominantly white individuals, while the explicit content was scraped from adult websites, rendering the model ineffective.

Another instance is Tay, Microsoft’s chatbot that turned racist after exposure to countless Twitter posts, and more recently, ChatGPT. Initially, ChatGPT could easily be tricked into making controversial statements (see below). OpenAI’s response involved extensive human labeling by teams in Kenya to mitigate this. Sourcing data online is cheap, but curating and cleaning it is costly, which is how bias slips in.

[Image: example of ChatGPT being tricked into making a controversial statement]

Consider Midjourney: uploading an image of a woman and prompting it as a tram driver or police officer often results in a transformation into a man, illustrating this bias. On the other side of the spectrum sits Google Gemini, whose overcorrection against these specific biases created an entirely different problem.

Google’s way of handling bias wasn’t the best. So how can you actually address it? Start by critically examining your data. A diverse team of data scientists can provide critical insights and help ensure that datasets are representative and biases are addressed. The more diverse your team is, the more likely it is that these issues get investigated. A first sanity check could look like the sketch below.
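What does “critically examining your data” look like in practice? One modest starting point is comparing group representation in your dataset against a reference distribution and checking label balance per group. The column names and reference shares below are placeholders, not real statistics.

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "male", "female", "male", "female", "male"],
    "label":  [1, 1, 0, 1, 0, 0],
})

reference = {"male": 0.49, "female": 0.51}   # placeholder population shares

observed = df["gender"].value_counts(normalize=True)
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    flag = "  <-- check representation" if abs(share - expected) > 0.10 else ""
    print(f"{group:>8}: {share:.0%} in data vs {expected:.0%} expected{flag}")

# Also check label balance per group: a skew here can leak straight into the model
print(df.groupby("gender")["label"].mean())
```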

Most of the examples given here are based on obvious underlying patterns in the data. But what about more subtle patterns? More and more, people turn to AI tools like ChatGPT for inspiration, advice, and answers to their questions. But what if, in those answers, certain values are promoted more than others? How will the use of AI tools influence a user’s behavior? This is still an under-researched topic, and one that impacts people globally as the adoption of AI assistance increases.

Measuring model performance

High accuracy does not equate to fairness: even a model that performs well can still be biased. Models often score well on biased test sets because the skewed representation in the training data is usually present in the test data as well. One practical countermeasure is to report metrics per demographic group, as sketched below.
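The idea is simply to disaggregate your evaluation: compute accuracy (or any other metric) per group instead of a single overall number. The predictions and group labels below are made up for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Placeholder predictions and demographic metadata
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("overall accuracy:", accuracy_score(y_true, y_pred))
for g in np.unique(group):
    mask = group == g
    print(f"group {g} accuracy:", accuracy_score(y_true[mask], y_pred[mask]))
```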

A risk of randomly sampling from your data is that you might miss entire demographics or segments. A famous example is Google’s early face detection, which failed to accurately identify people of color. This only came to their attention when one of the developers noticed it through manual testing, a stark reminder of the need for inclusive testing. And it’s not a thing of the past: recent studies show that AI services from major cloud providers still have significantly higher error rates for people of color and for older people. Stratifying your held-out data, as sketched below, is one simple way to make sure small groups show up in your test set at all.
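A modest safeguard is to stratify the train/test split on a demographic column so that small groups cannot be dropped by chance. The column names and group sizes below are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "feature": range(100),
    "label":   [i % 2 for i in range(100)],
    "group":   ["A"] * 90 + ["B"] * 10,   # a 10% minority group
})

# Stratify on the demographic column so every group is represented in the test set
train, test = train_test_split(df, test_size=0.2, stratify=df["group"], random_state=42)
print(test["group"].value_counts())   # group B is guaranteed to appear
```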

So, what now?

As data scientists, we need to consider the implications of AI bias and continually scrutinize our datasets, refine our models, and uphold high standards of fairness. By doing so, we can ensure AI serves as a tool for empowerment and progress, not discrimination. As managers, we need to set KPIs per demographic group rather than look only at broad metrics such as accuracy on an unbalanced dataset. Lastly, we need to put together diverse teams that provide critical insights and help build unbiased systems, while making sure corrections don’t go too far, as in Google Gemini’s case. We can try to remove bias today and in the future, but we cannot erase the real history of race and gender discrimination.

Even though Galileo Galilei could not have imagined what we can do with AI today, his wisdom still applies: measure what is measurable, and make measurable what is not so.

