The urgency of digital ethics
To fully understand the urgency of digital ethics, we need to go back to the foundations of AI. There are many definitions of what AI is or could be, but I find the most clarifying view comes from the “godfather” of our current computers. Seventy years ago, in his groundbreaking paper “Computing Machinery and Intelligence” (Mind, 1950), Alan Turing described what, according to him, makes machines intelligent. In the paper, Turing explains that it is not about definitions but about being pragmatic. He describes what is now called the Turing test, designed to probe whether machines can think:
Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the 'imitation game.' It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:
"My hair is shingled, and the longest strands are about nine inches long."
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

Let’s translate this test into today’s terminology. The situation Turing describes is very familiar: a man active on social media such as Facebook or Twitter impersonates a woman. The interrogator is the platform’s moderator, trying to figure out, based on their posts, whether the man and the woman are indeed who they claim to be. There can be no discussion about the ethics here: by our standards, the behavior of this man, impersonating a woman, is unethical. When uncovered, his account will, under the policies of Facebook and Twitter, be suspended or shut down. Turing’s point is that he considers a machine truly intelligent the moment it is as good as the man in this experiment at impersonating the woman. Restated: machines are truly intelligent when they can behave as unethically as humans.
Here’s the urgency of digital ethics in its bare essence. It’s not new; it has been with us for seventy years already, right from the start, and Turing was the first to make this point.
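For readers who prefer code to prose, here is a minimal sketch of the imitation game as a question-and-answer loop. Everything in it (the function names, the canned answers, the deliberately naive interrogator) is a hypothetical illustration of the setup described above, not something taken from Turing’s paper.

```python
# A minimal sketch of the imitation game as a question-and-answer loop.
# All names and canned answers are hypothetical illustrations.
import random

def player_a(question: str) -> str:
    """Player A: tries to cause the interrogator to misidentify them."""
    return "My hair is shingled, and the longest strands are about nine inches long."

def player_b(question: str) -> str:
    """Player B: helps the interrogator by answering truthfully."""
    return "I am the woman, don't listen to him!"

def play_round(questions, respond_a, respond_b) -> bool:
    """One round: the interrogator sees only typewritten answers from two
    unlabeled players X and Y, then guesses which of them is A."""
    labels = {"X": respond_a, "Y": respond_b}
    if random.random() < 0.5:          # hide who sits behind which label
        labels = {"X": respond_b, "Y": respond_a}

    # The transcript is all the interrogator is allowed to see.
    transcript = [(q, labels["X"](q), labels["Y"](q)) for q in questions]

    # A deliberately naive interrogator: the typewritten answers give no
    # reliable cue, so guess at random. Turing's question is whether swapping
    # a machine in for respond_a would make this guess any better or worse.
    guess = random.choice(["X", "Y"])
    return labels[guess] is respond_a  # True if A was correctly identified

# Play many rounds and see how often A is unmasked (about half the time here).
questions = ["Will X please tell me the length of his or her hair?"]
rounds = 1000
hits = sum(play_round(questions, player_a, player_b) for _ in range(rounds))
print(f"Interrogator identified A in {hits} of {rounds} rounds")
```

The machine variant of the test simply passes a program in place of `player_a`; if the interrogator’s success rate does not improve, the machine has, in Turing’s pragmatic sense, passed.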