
Background information

Sexist, racist and tunnel-visioned: three critical aspects of AI

Kevin Hofer
14.9.2018
Translation: machine translated

Recommendations from search engines, customised advertising, traffic counting: the uses of AI are diverse. It can have very positive effects, but also very negative ones. I'll show you where the opportunities and risks of artificial intelligence lie.

I'm currently writing an article about eGPUs and once again need Windows 10 for it. After installation, Cortana greets me. I feel like I've heard her voice a thousand times, even though I don't use Microsoft's voice assistant in everyday life. This time, though, it gives me pause. Why do smart assistants all speak with a female voice by default?

Sexist AI?

You can now also select male voices, but it took a few years to get there. While searching for explanations, I came across an EU study. It found that IT technologies are largely developed by men, and that women remain massively underrepresented in IT professions in Europe.

You may be thinking that this is no big deal, as it only affects the assistant's voice output.

Speaking of which: have you noticed something?

In German, we only refer to the assistants in the masculine form: not as "Assistentinnen", but as "Sprachassistenten". We label the intelligent part of the technology as male, even though it speaks to us in a female voice. Linguistically, we draw a line between the technology as masculine and intelligent, and the thing doing the talking - the serving voice - as feminine.

This example shows that AI can certainly be influenced and is therefore never truly free of prejudice. If developers are aware of this and work in diverse teams - teams made up of people of different genders, ages and backgrounds - it can be taken into account. That way, AI can actually become less biased.

Filter bubbles

The internet is free: anyone can access the information they want. At least in theory. With social media and search engine algorithms, though, that's not quite true. Social media AI only shows users content that matches their interests: if I like something, similar content is shown to me. This filtering narrows my field of vision, as I only see content that matches my interests.

For example, if I follow Steve Bannon on Twitter, I mainly receive recommendations for right-wing political content. If I follow DigitalFoundry on YouTube, on the other hand, I mainly receive recommendations for game tech news. My horizons are narrowed. Fake news sends its regards.
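The filtering logic described above can be sketched in a toy example. To be clear, this is my own invented illustration - the post titles, topics and matching rule are made up and don't reflect any platform's real algorithm:

```python
# Toy sketch of interest-based feed filtering (invented data): each post
# is tagged with topics, and the feed only surfaces posts that overlap
# with topics the user has already engaged with.

def build_feed(posts, liked_topics):
    """Return posts sharing at least one topic with the user's likes."""
    return [p for p in posts if p["topics"] & liked_topics]

posts = [
    {"title": "GPU benchmark roundup",  "topics": {"gaming", "hardware"}},
    {"title": "Election commentary",    "topics": {"politics"}},
    {"title": "New metal album review", "topics": {"music", "metal"}},
]

liked = {"gaming", "metal"}          # topics the user has liked before
feed = build_feed(posts, liked)
print([p["title"] for p in feed])    # only matching content gets through
```

The politics post never reaches this user's feed - not because it's bad, but because nothing they liked before resembles it. That's the filter bubble in miniature.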

What is a disadvantage in one respect can also be the opposite. If I show interest in other content, the social media or search engine algorithms open up new perspectives for me: I get into new topics more quickly, connect with new people and broaden my horizons.

Cultural implications

This point is a consequence of filter bubbles. Like most other streaming providers, Spotify also has a recommendation function. This has a major impact on who listens to what.

Let's assume that, thanks to big data, Spotify knows more about us than just what we listen to. Based on our age, gender and origin, Spotify then gives us recommendations drawn from what other users with similar characteristics listen to. As a man in his mid-30s, a native of Switzerland and a passionate gamer, I'm most likely to get metal recommended to me. An African American living in the Bronx mainly receives recommendations for hip hop.

Racial profiling sends its regards.
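The pigeonholing at work here can be sketched in a few lines. Again, this is my own invented illustration - the users, genres and "most popular in your group" rule are assumptions, not Spotify's actual system:

```python
# Toy sketch of demographic-based recommendation (invented data): the
# system recommends whatever genre is most popular among users who share
# your age group and region - the individual's own taste never comes in.

from collections import Counter

users = [
    {"age_group": "30s", "region": "CH", "listens": ["metal", "metal", "rock"]},
    {"age_group": "30s", "region": "CH", "listens": ["metal", "pop"]},
    {"age_group": "20s", "region": "US", "listens": ["hip hop", "hip hop"]},
]

def recommend(age_group, region):
    """Most-played genre among users matching the given demographics."""
    plays = Counter()
    for u in users:
        if u["age_group"] == age_group and u["region"] == region:
            plays.update(u["listens"])
    return plays.most_common(1)[0][0] if plays else None

print(recommend("30s", "CH"))   # inferred from the group, not the person
```

Notice that the function never looks at what *you* listen to - only at who you statistically resemble. That's exactly the mould I'm objecting to.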

You see the problem: I also take issue with being forced into a certain mould. And that's exactly what the AI does in this example: it tells us what music we should listen to based on certain characteristics. There is so much to discover, and who likes being pigeonholed? Personally, I find the world much more interesting when not everyone conforms to a stereotype.

Conclusion

AI in itself is neither good nor evil. But it harbours opportunities and risks. The direction it takes depends on how it is developed and utilised. It can help us to be more non-judgemental and experience diversity in all its facets. However, it can also lead to homogenisation, reinforce existing prejudices and stereotype certain groups and individuals.

We still decide which direction AI takes - at least for now. To do this, however, we need to reflect on existing social circumstances and incorporate them into the development of AI. Diverse teams can also help to keep AI as value-free as possible.
