
Sexist, racist and suffering from tunnel vision: three criticisms of AI
Recommendations from search engines, customised advertising or traffic counts: The use of AI is diverse. It can have very positive effects, but also very negative ones. I'll tell you where the opportunities and risks of artificial intelligence lurk.
I'm currently writing an article about eGPUs and once again need Windows 10 to do so. After installation, Cortana greets me. I feel like I've heard her voice a thousand times, even though I don't use the Microsoft voice assistant in my everyday life. This time I'm puzzled. Why do the smart assistants all speak in a female voice by default?
Sexist AI?
You can now also select male voices. But it's taken a few years to get here. While searching for explanations, I came across a study by the EU. It states that IT technologies are largely developed by men. Women are still massively underrepresented in IT professions in Europe.
The topic has already been discussed. According to official statements from developers, we humans find female voices more pleasant and trust-inspiring than male ones. That may be true. But I still don't think it's right that, at least in the early days, I couldn't choose the assistant's voice. Why? Because it suggests that assistants are female and that the feminine is to be considered subordinate. The feminine, represented by the assistant's female voice, is there to serve the owner. The assistant thus finds herself in a hierarchically subordinate role: she is the weaker of two links.
You may be thinking to yourself that this is not a big deal, as it only affects the voice output of the assistant.
Apropos: Have you noticed anything?
We only refer to the assistants in the masculine form: we don't speak of a (female) assistant, but of a (male) voice assistant. We refer to the intelligent part of the technology in the masculine, even though it speaks to us in a female voice. Linguistically, we thus draw a line between the technology as masculine and intelligent on the one hand, and the thing doing the talking, the serving voice, as feminine on the other.
It is this systematic subordination of the feminine that needs to be scrutinised in the development of AI. Otherwise, the image of the female as hierarchically subordinate is not only cemented in our minds, but also unconsciously adopted by artificial intelligence. I'm not claiming that male developers are sexist across the board. But we are all shaped by certain images and ideas, often without being aware of it; we don't reflect on them. In a professional field with as high a proportion of men as IT, certain prejudices are bound to be more prevalent.
The example shows that AI can indeed be influenced and is therefore never truly free of prejudice. If developers are aware of this fact and work in diverse teams, meaning teams made up of people of different genders, ages and backgrounds, it can be taken into account. This way, AI can actually become less biased.
Filter bubbles
The internet is free. Anyone can access the information they want. At least, that's the theory. In practice, social media and search engine algorithms get in the way. Social media AI only shows users content that matches their interests: if I like something, similar content is shown to me. Filtering the content narrows my field of vision: I only ever see content that matches my interests.
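This feedback loop can be sketched in a few lines of Python. What follows is a toy content-based filter with invented item names and tags, not any platform's actual algorithm: it ranks unseen items by how much they overlap with what the user has already liked, so every like narrows what surfaces next.

```python
from collections import Counter

# Toy content feed: each item is tagged with topics (all data invented).
ITEMS = {
    "clip_a": {"politics", "news"},
    "clip_b": {"politics", "opinion"},
    "clip_c": {"gaming", "tech"},
    "clip_d": {"tech", "reviews"},
    "clip_e": {"cooking"},
}

def recommend(liked_ids, k=3):
    """Rank unseen items by tag overlap with everything the user liked."""
    # Build the user's interest profile from their liked items.
    profile = Counter()
    for item_id in liked_ids:
        profile.update(ITEMS[item_id])

    def score(item_id):
        # More shared tags with past likes = higher rank.
        return sum(profile[tag] for tag in ITEMS[item_id])

    unseen = [i for i in ITEMS if i not in liked_ids]
    return sorted(unseen, key=score, reverse=True)[:k]

# A user who liked one political clip gets political content ranked first.
print(recommend({"clip_a"}))
```

The bubble effect is visible even in this toy: nothing the scorer has never seen tagged alongside a past like can ever climb to the top of the list.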
For example, if I follow Steve Bannon on Twitter, I mainly receive recommendations for right-wing political content. If I follow DigitalFoundry on YouTube, on the other hand, I mainly receive recommendations for game tech news. My horizons are narrowed. Fake news sends its regards.
What is a disadvantage on the one hand can be an advantage on the other. If I become interested in other content, the social media or search engine algorithms open up new perspectives for me. I get into new topics more quickly, connect with new people and broaden my horizons.
Cultural implications
This point is a consequence of filter bubbles. Like most other streaming providers, Spotify also has a recommendation function. This has a major impact on who listens to what.
Let's assume that, thanks to big data, Spotify knows more about us than just what we listen to. Based on our age, gender and origin, Spotify then gives us specific recommendations drawn from what other users with similar characteristics listen to. As a man in his mid-30s, a native of Switzerland and a passionate gamer, I'm most likely to be recommended metal. An African American living in the Bronx mainly receives recommendations for hip hop.
Racial profiling sends its regards.
You can tell: I also have a problem with us being forced into a certain mould. And that's exactly what the AI does in this example: it tells us what music we should listen to based on certain characteristics. There is so much to discover. And who likes being pigeonholed? Personally, I find the world much more interesting when not everyone conforms to a stereotype.
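The pigeonholing in the Spotify thought experiment boils down to a very simple rule. Here's a deliberately crude sketch with invented data (nothing here reflects Spotify's real system): reduce a user to a couple of demographic attributes and recommend whatever the majority of "similar" users listen to.

```python
from collections import Counter

# Invented listening log: (age_group, country, genre) per play event.
LISTENING_LOG = [
    ("30s", "CH", "metal"),
    ("30s", "CH", "metal"),
    ("30s", "CH", "jazz"),
    ("20s", "US", "hip hop"),
    ("20s", "US", "hip hop"),
    ("20s", "US", "pop"),
]

def recommend_genre(age_group, country):
    """Recommend the most common genre among users with matching attributes."""
    counts = Counter(
        genre for a, c, genre in LISTENING_LOG
        if a == age_group and c == country
    )
    return counts.most_common(1)[0][0] if counts else None

# The stereotype is baked in: identical profiles always get the same genre,
# regardless of individual taste.
print(recommend_genre("30s", "CH"))
```

That last point is exactly the problem: the sketch never asks what the individual actually likes, only what people "like them" have listened to before.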
Conclusion
AI in itself is neither good nor evil. But it harbours opportunities and risks. The direction it takes depends on how it is developed and utilised. It can help us to be more non-judgemental and experience diversity in all its facets. However, it can also lead to homogenisation, reinforce existing prejudices and stereotype certain groups and individuals.
We still decide which direction AI takes - at least for now. To do this, however, we need to reflect on existing social circumstances and incorporate them into the development of AI. Diverse teams can also help to keep AI as value-free as possible.


From big data to big brother, cyborgs to sci-fi. All aspects of technology and society fascinate me.