Monday, October 31, 2011

Giving Artificial Intelligence the (Freudian) Slip

Sifting through the mountain of material turned up by a Google Scholar search for Freud and robot yielded this bit about one of the founders of AI and a former colleague of Marvin Minsky (who is mentioned by Liu in The Freudian Robot):

From:
John McCarthy's Home Page

(Professor Emeritus of Computer Science at Stanford University)

http://www-formal.stanford.edu/jmc/index.html

Making Robots Conscious of their Mental States was given at Machine Intelligence 15, in August 1995 in Oxford. It's in the Proceedings of that workshop. The idea is that many tasks will require computer programs to examine their own computational structures in ways like those involved in human consciousness and indeed self-consciousness.

Robots Should Not be Equipped with Human-like Emotions

Human emotional and motivational structure is likely to be much farther from what we want to design than is human consciousness from robot consciousness.

Some authors, [Sloman and Croucher, 1981], have argued that sufficiently intelligent robots would automatically have emotions somewhat like those of humans. However, I think that it would be possible to make robots with human-like emotions, but it would require a special effort distinct from that required to make intelligent robots. In order to make this argument, it is necessary to assume something, as little as possible, about human emotions. Here are some points.

  1. Human reasoning operates primarily on the collection of ideas of which the person is immediately conscious.
  2. Other ideas are in the background and come into consciousness by various processes.
  3. Because reasoning is so often nonmonotonic, conclusions can be reached on the basis of the ideas in consciousness that would not be reached if certain additional ideas were also in consciousness.
  4. Human emotions influence human thought by influencing what ideas come into consciousness. For example, anger brings into consciousness ideas about the target of anger and also about ways of attacking this target.
  5. According to these notions, paranoia, schizophrenia, depression and other mental illnesses would involve malfunctions of the chemical mechanisms that gate ideas into consciousness. A paranoid who believes the CIA is following him and influencing him with radio waves can lose these ideas when he takes his medicine and regain them when he stops. Certainly his blood chemistry cannot encode complicated paranoid theories, but it can bring ideas about threats from wherever or however they are stored.
  6. Hormones analogous to neurotransmitters open synaptic gates to admit whole classes of beliefs into consciousness. They are analogs of similar substances and gates in animals.
  7. A design that uses environmental or internal stimuli to bring whole classes of ideas into consciousness is entirely appropriate for a lower animal. We inherit this mechanism from our animal ancestors.
  8. Building the analog of a chemically influenced gating mechanism would require a special effort (a toy sketch of such a mechanism follows this list).

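Purely as an illustration (my own sketch, not anything from McCarthy's paper), the gating model in points 1 through 7 can be made concrete in a few lines of Python: reasoning sees only the small set of ideas currently "in consciousness", and a single numerical gate level, a stand-in for blood chemistry, admits a whole class of background ideas at once. Every name in it is invented for the example.

```python
# Toy sketch of the gating model described above (illustrative only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Idea:
    content: str
    klass: str      # e.g. "threat", "routine" -- the class a gate can admit

@dataclass
class Mind:
    background: list                            # everything stored, in or out of awareness
    gates: dict = field(default_factory=dict)   # class -> activation level (chemical analog)
    capacity: int = 5                           # consciousness holds only a few ideas

    def consciousness(self):
        """The ideas reasoning can currently operate on."""
        # In this toy, only the chemical-style gates decide which classes of
        # background ideas are admitted; deliberate retrieval is ignored.
        admitted = [i for i in self.background if self.gates.get(i.klass, 0.0) > 0.5]
        return admitted[: self.capacity]

mind = Mind(background=[
    Idea("the stranger glanced at me twice", "threat"),
    Idea("radio waves can carry hidden messages", "threat"),
    Idea("lunch is at noon", "routine"),
])

mind.gates["threat"] = 0.1   # calm: the gate stays shut
print([i.content for i in mind.consciousness()])   # []

mind.gates["threat"] = 0.9   # "fear" rises; the whole class floods in at once
print([i.content for i in mind.consciousness()])
# ['the stranger glanced at me twice', 'radio waves can carry hidden messages']
```

Note that the gate value is a single number, yet raising it surfaces arbitrarily elaborate stored content, which is exactly the point of item 5: the chemistry does not encode the paranoid theory, it only opens the door to wherever that theory is stored.
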
These facts suggest the following design considerations.

  1. We don't want robots to bring ideas into consciousness in an uncontrolled way. Robots that are to react against people (say) considered harmful should include such reactions in their goal structures and prioritize them together with other goals (see the sketch after this list). Indeed we humans advise ourselves to react rationally to danger, insult and injury. "Panic" is our name for reacting directly to perceptions of danger rather than rationally.
  2. Putting such a mechanism, e.g. panic, in a robot is certainly feasible. It could be done by maintaining some numerical variables, e.g. level of fear, in the system and making the mechanism that brings sentences into consciousness (short term memory) depend on these variables. However, such human-like emotional structures are not an automatic byproduct of human-level intelligence.
  3. Another aspect of the human mind that we shouldn't build into robots is that subgoals, e.g. ideas of good and bad learned to please parents, can become independent of the larger goal that motivated them. Robots should not let subgoals come to dominate the larger goals that gave rise to them.
  4. It is also practically important to avoid making robots that are reasonable targets for either human sympathy or dislike. If robots are visibly sad, bored or angry, humans, starting with children, will react to them as persons. Then they would very likely come to occupy some status in human society. Human society is complicated enough already.
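
As a rough contrast with the panic mechanism in point 2, here is a minimal sketch, again my own assumption rather than anything in McCarthy's text, of the design point 1 prefers: a perceived danger does not raise a global fear variable that hijacks short-term memory; it simply enters the goal structure and is prioritized together with everything else.

```python
# Toy sketch: danger handled as one more prioritized goal, with no emotional override.
import heapq

class GoalStructure:
    """Goals ordered purely by priority; there is no separate panic channel."""

    def __init__(self):
        self._heap = []

    def add(self, priority, description):
        # heapq is a min-heap, so negate the priority to pop the most urgent goal first.
        heapq.heappush(self._heap, (-priority, description))

    def next_goal(self):
        return heapq.heappop(self._heap)[1]

robot = GoalStructure()
robot.add(2.0, "finish vacuuming the hallway")
robot.add(9.0, "move away from the smoke detected in the kitchen")
robot.add(5.0, "recharge before the battery drops below 10 percent")

# The danger response wins because it is the highest-priority goal, not because
# a panic variable flushed the other goals out of short-term memory.
print(robot.next_goal())   # move away from the smoke detected in the kitchen
```
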
Sadly, I also discovered the following:

John McCarthy, pioneer in artificial intelligence, dies at 84

The Washington Post

John McCarthy, a computer scientist often credited with creating the very name of the futuristic field in which he was an honored pioneer -- artificial intelligence -- died Oct. 24 at his home in Stanford, Calif. He was 84.

The death was announced by Stanford University, where he was a professor in mathematics and later computer science from 1962 until his retirement in 2001.

In the late 1950s, he and Marvin Minsky, a friend and fellow AI specialist, helped start the AI lab at the Massachusetts Institute of Technology. Their views of the discipline began to diverge and, in 1962, McCarthy returned to Stanford, where he had briefly taught. He soon founded Stanford's artificial intelligence laboratory.

In a 2007 article, McCarthy described artificial intelligence as "the science and engineering of making intelligent machines" and said intelligence was "the computational part of the ability to achieve goals in the world."
