Avoiding “The Other-Race Effect”
Humans are more likely to misidentify faces of other races (ethnic backgrounds) than faces of their own. You might have experienced this yourself when visiting a foreign country and feeling that everyone looks alike. This is actually a form of unconscious bias and goes by many names, including “the other-race effect”, “cross-race effect”, “cross-race bias”, “other-race bias” and “own-race bias”. There is a large body of cognitive and social research on this phenomenon (check out Google or Wikipedia). It is a topic that interested me and prompted me to investigate it further while I was working on my PhD research on human facial analysis algorithms in Japan.
Racial biases usually involve the wider topics of racism and favouritism, which relate to higher levels of our cognitive process and more complex social factors. However, simply identifying familiar vs unfamiliar faces, without any further decision or judgement, relies on a more fundamental cognitive process and strips away much of that social and cultural complexity. And here we can find another interesting similarity between Artificial Intelligence (AI) and the decision process of our minds, one we should learn from.
When I first moved to Japan, I often misrecognised Japanese faces, but I eventually got a lot better: after living there for eight years, my mistake rate had dropped dramatically. Similarly, Japanese people who did not interact with me closely easily confused my face with my friend's (who I thought looked clearly different!). Interestingly, when I moved to Australia – a more multicultural country with people from all around the world – this misidentification rate was much lower, and I was almost never mistaken for my friend. This is simply because most people in Australia have been exposed to a greater diversity of faces, and our cognitive system is trained on what it sees. When we haven't seen many faces from an ethnicity with a quite different visual appearance, our brain starts categorizing them as one group. Rather than building enough recognition power to identify those faces holistically (as we do within our own race), it relies on a few obviously different, but not necessarily efficient, features – which increases the chance of mistakes when we try to distinguish faces within that other race.
AI algorithms are not very different from us in this respect. If we do not train them on sufficiently diverse sample data, they can easily make similar mistakes and embed unwanted biases in their decisions without us noticing. There are well-documented cases where embedded bias has led to serious harm, such as the COMPAS recidivism risk tool or Optum's health-risk algorithm.
“Diversity and Inclusion” has been an important and hot topic in HR and workforce discussions for a while, but its impact goes beyond an HR program, or even a business strategy viewed through the HR lens. With the increasing use of AI in a wide range of technologies, diversity can have a direct and real effect on the success or failure of AI-based solutions that impact our businesses and lives.
I’ve used this known phenomenon in the human cognitive system to open a conversation about the lessons we can learn; however, there is a lot more to delve into, which deserves a separate discussion. For example, it is important to note that both the “data” and the “logic” can be flawed by a lack of diversity. Additionally, we need to pay attention to more dimensions of diversity in data sets than just race or gender – such as other demographics, socio-economic factors, cultural and physical aspects, time and location, and so on. And we need good measures in place to at least identify the bias before working on eliminating it. A good starting point is to make sure that the team designing the solution is itself a diverse group of people.
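One of the simplest of those “good measures” is just breaking a model's error rate down by group and comparing the numbers. Here is a minimal sketch of such an audit; the group labels, identity IDs and results below are entirely made up for illustration, not data from any real system.

```python
# Minimal per-group error audit: compare misidentification rates across
# demographic groups. All records below are hypothetical examples.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples.
    Returns a dict mapping each group to its misidentification rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical face-matching results from a model trained mostly on
# group-"A" faces: it makes 1 error in 4 on group A, 2 in 4 on group B.
results = [
    ("A", "id1", "id1"), ("A", "id2", "id2"),
    ("A", "id3", "id3"), ("A", "id4", "id5"),
    ("B", "id6", "id6"), ("B", "id7", "id8"),
    ("B", "id9", "id7"), ("B", "id10", "id10"),
]
rates = error_rate_by_group(results)
print(rates)  # {'A': 0.25, 'B': 0.5}
```

A gap like the 25% vs 50% above is exactly the kind of disparity that goes unnoticed when only an overall accuracy figure is reported; catching it is the precondition for fixing it.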
Organisations leading the development and adoption of AI-based solutions need to pay close attention to such details to address these valid concerns, and ultimately benefit from more accurate decisions and a better experience for us all.