Can we trust robots to make ethical decisions?

Once the preserve of science-fiction movies, artificial intelligence is one of the hottest areas of research right now.

While the idea behind AI is to make our lives easier, there is concern that as the technology becomes more advanced, we may be heading for disaster.

How can we be sure, for instance, that artificially intelligent robots will make ethical choices? There are plenty of instances of artificial intelligence gone wrong. Here are five real-life examples:

1. The case of the rude and racist chatbot

Tay, Microsoft’s AI millennial chatbot, was meant to be a friendly bot that would sound like a teenage girl and engage in light conversation with her followers on Twitter. However, within 24 hours she had been taken off the site because of her racist, sexist and anti-Semitic comments.

It was, said Microsoft, “a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical.”


2. Self-driving cars having to make ethical decisions

How can self-driving cars be programmed to make an ethical choice when a collision is unavoidable? Humans would seriously struggle to decide whether to slam into a wall, killing all the passengers, or to hit pedestrians in order to save those passengers. So how can we expect a robot to make that split-second decision?

3. Robots showing human biases

Less physically harmful, but just as worrying, are robots that learn racist behaviour. When robots were asked to judge a beauty competition, they overwhelmingly chose white winners. That’s despite the fact that, while the majority of contestants were white, many people of colour submitted photos to the competition, including large numbers from India and Africa.

4. Image tagging gone wrong

In a similar case, image-tagging software developed by Google and Flickr suffered many disturbing mishaps, such as labelling a photo of two black people as gorillas and tagging a concentration camp as a “jungle gym”. Google apologised and admitted it was a work in progress: “Lots of work being done and lots is still to be done, but we’re very much on it.”

5. A cleaning robot that breaks things

One paper recently looked at how artificial intelligence can go wrong in unexpected ways. For instance, what happens if a robot, whose job it is to clean up mess, decides to knock over a vase, rather than going round it, because it can clean faster by doing so?
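The vase problem comes down to how the robot’s goal is scored. A toy sketch of that idea (the plans, time costs and penalty values below are invented for illustration, not taken from the paper):

```python
# Toy illustration of a mis-specified objective: the cleaning robot is
# rewarded only for finishing quickly, so knocking over the vase looks optimal.

def reward(time_taken, vase_broken, vase_penalty=0.0):
    """Higher is better; vase_penalty=0 means side effects are ignored."""
    return -time_taken - (vase_penalty if vase_broken else 0.0)

plans = {
    "go around the vase": {"time_taken": 12.0, "vase_broken": False},
    "knock over the vase": {"time_taken": 8.0, "vase_broken": True},
}

# With no penalty for side effects, the destructive plan scores higher.
best_naive = max(plans, key=lambda p: reward(**plans[p]))
print(best_naive)  # "knock over the vase"

# Adding an explicit penalty for unintended side effects changes the choice.
best_safe = max(plans, key=lambda p: reward(**plans[p], vase_penalty=100.0))
print(best_safe)  # "go around the vase"
```

The point of the sketch is that the robot is not malicious; it simply optimises exactly what it was told to optimise, and the objective said nothing about vases.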

But it’s not the robot’s fault

Robots don’t always get it wrong. In one instance, people were asked to guess the ethnicity of a group of Asian faces, and specifically to tell the difference between Chinese, Japanese and Korean faces. They got it right about 39% of the time; the robot got it right 75% of the time.

When things do go wrong, one explanation is that the algorithms, the computer code that powers the decision-making, are written by humans, and are therefore subject to all the inherent biases that we have. Another reason, and one given in the beauty contest case, is that an algorithm can only work with the data it has been given. In that instance, it had more white faces to look at than any other, and it based its results on that.
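The data problem can be illustrated with a deliberately naive “classifier” that learns nothing but the most common label in its training data (the numbers below are invented for illustration, not the contest’s actual figures):

```python
from collections import Counter

def train_majority_classifier(training_labels):
    """A deliberately naive model: it learns only the most common label."""
    majority_label, _ = Counter(training_labels).most_common(1)[0]
    return lambda _sample: majority_label

# An imbalanced training set, as in the beauty contest case.
training_labels = ["white"] * 90 + ["non-white"] * 10
model = train_majority_classifier(training_labels)

# Whatever new face it is shown, the model predicts the majority group.
print(model("any new face"))  # "white"
```

Real systems are far more sophisticated than this, but the underlying failure is the same: when one group dominates the training data, the model’s answers tilt towards that group.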

While researchers continue to look at ways to make artificial intelligence as safe as it can be, they are also working on a kill switch, so that in the worst-case scenario a human can take over.
