Will computers ever be conscious?


Are computers conscious?

It gets really exciting when it comes to the question of whether a computer can develop consciousness and free will. There is a nice anecdote about this: when IBM's computer "Watson" beat the best contestants ever on the American television quiz "Jeopardy" in February 2011, a German newspaper asked the German Research Center for Artificial Intelligence (DFKI) how long it would take before computers were telling people what to do. The dry, and perhaps not entirely serious, answer from one scientist was: "It will take forever. At least ten years."

In reality, computers are already telling people what to do, or are simply acting on their own. High-frequency trades, for example, which are executed on stock exchanges within milliseconds, have long been feasible only by machine. And no artificial intelligence is even at work there. Gartner analyst Fenn wonders how humans are ever supposed to recognize when a computer is acting consciously, and when an AI system is revealing goals and emotions of its own in a way that humans can understand. Here, the Turing test, devised 64 years ago, is enjoying a revival.

Pulling the plug made difficult

We have known since the Tamagotchi days of the 90s that people develop an emotional relationship with computers if their behavior appears somehow "human". Dutch scientists have also found in an experiment that people find it difficult to cut the power to a "robot cat" when it begs for mercy. The more intelligent and lovable the cat appeared, the greater the scruples people had about flipping the switch. So imagination knows no bounds as to what will happen once AI-controlled robots are all around us.

In the film "Her", the operating system Samantha can essentially only speak, see and "think". But those skills alone already allow a computer to carry out extensive analyses of a person's emotional state. A team led by Marian Bartlett from the University of California set up a test for this: a computer had to assess pain signals from the facial expressions of test subjects. The videos of the subjects were also shown to human judges. The humans were wrong in half of the cases when deciding which subjects were only simulating pain, whereas the computer was right 85 percent of the time.
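The comparison in the study boils down to plain classification accuracy: the fraction of genuine-versus-faked judgments that match the ground truth. A minimal Python sketch illustrates how such hit rates are computed; the labels below are invented for demonstration only and are not the study's data.

```python
# Hypothetical illustration of the accuracy metric behind the pain study.
# All labels below are invented; the real study used video recordings.

def accuracy(predictions, truth):
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)

# Ground truth: True = genuine pain, False = simulated pain.
truth = [True, False, True, True, False, False, True, False, True, False]

# Invented guesses, roughly matching the reported pattern:
# humans near chance level, the machine clearly above it.
human_guesses   = [True, True, False, True, True, False, False, False, False, False]
machine_guesses = [True, False, True, True, False, True, True, False, True, False]

print(f"human accuracy:   {accuracy(human_guesses, truth):.0%}")    # 50%
print(f"machine accuracy: {accuracy(machine_guesses, truth):.0%}")  # 90%
```

On these toy labels the human judges land at chance level while the machine does markedly better, mirroring the 50-versus-85-percent gap reported in the study.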

Gartner analyst Fenn sees artificial intelligence and intelligent machines as the defining trends of the coming decade. This inevitably raises the question of how companies will design their processes and make decisions in a future where people and computers work side by side and cooperate. Many challenges remain before such a constellation becomes reality. In any case, Fenn trusts that it will still be the human being who makes the decisions in the future. Provided she is not mistaken.