next up previous
Next: Consequences Up: Artificial Intelligence and Consciousness Previous: A brief history of

Controversy.

In this chapter, I will discuss the views of several people involved in the field from a philosophical point of view. I will start by giving views opposing the possibility of computers or robots ever attaining intelligence or consciousness, namely those of Penrose, Dreyfus and Toulmin (there are others, but I believe these three give a reasonably broad overview). Then I will discuss the theories of Dennett, who argues for the (theoretical) possibility of creating AI.

Hubert Dreyfus (MIT) is one of the people who has criticized the whole idea of AI almost from the beginning. His paper 'Alchemy and Artificial Intelligence' (1964) was the first published critique of the research being carried out. His criticism strikes at the heart of the research: he denies that reality has a rational structure. According to Dreyfus, reality is partly shaped by human needs and actions. Knowledge has not only a rational structure but an intuitive one as well. His analysis shows that research into AI rests on four basic assumptions, namely:

  1. Biological assumption: intelligent organisms are symbol processors; that is, we represent knowledge in an abstract way, just as computers do (in zeros and ones).
  2. Psychological assumption: human intelligence is simply a matter of manipulating symbols according to formal rules; the content of the information is irrelevant to the way it is processed by those rules.
  3. Epistemological assumption: all knowledge can be formalized, in other words put into context-independent formal rules and thus stored in a computer.
  4. Ontological assumption: reality as known by humans has a structure that can be formalized, since it is built up from series of independent facts that can all be known.

Dreyfus claims that the first two assumptions have no empirical proof. Neither psychology nor neurophysiology has been able to show that we are symbol-processing machines. It might well be that we process information as icons (a representation that resembles the content of the information and so is not abstract). The second assumption has not been proven either: it might well be that processing depends on the content of the information. Dreyfus' main criticism, however, concerns the third assumption. Perhaps one could approximately describe human knowledge on a computer, but the computer would never be able to reproduce this knowledge or apply it. The capability to use knowledge in relevant situations (reasoning, action and communication) is something that computers lack, in contrast to human beings: the computer lacks common sense. As an example, Dreyfus gives the interpretation of language, which is very difficult to do by formal rules alone; one needs intuition (does 'hot' mean 'very warm' or 'spicy'?). This is also connected to the fourth assumption. Dreyfus argues that in a reality that is fully structured, built up out of separate series of independent facts, a computer could get by without common sense. But actual reality is not structured that way; its connections are grasped for a great part through intuition.

Instead of the above assumptions, Dreyfus states that humans do not apply formal rules or use internal representations. He claims that intelligence is situated, meaning it depends on the situation one is in. This is a phenomenological perspective: the description of the world is not completely objective, but filled with feelings, instincts, scents and so on. Nor is it completely subjective, since the world changes with all our activities. In any case, many parts of our behavior work automatically, without going through a cognitive process. On top of that, Dreyfus states that intelligence is located not only in the brain but also in the rest of the body (sensorimotor intelligence), and that it is created only by use of that body.

It is clear that Dreyfus' objections are mainly directed at the development of weak AI. Some of the points he made have been used to create newer versions of artificial intelligence, like the ones created by Rodney Brooks. Dreyfus strongly opposes any claim that cellular automata (CA) or networks describe intelligence of any kind whatsoever, since they are based on assumptions that he claims are false.
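
The cellular automata mentioned above can be illustrated by a minimal sketch; the choice of an elementary one-dimensional automaton and of rule 110 is mine, for illustration only, and not taken from Dreyfus or the text:

```python
# Illustrative sketch: a one-dimensional elementary cellular automaton
# (here Wolfram's rule 110). The rule number and grid width are
# arbitrary choices for this demo.

def step(cells, rule=110):
    """Apply one synchronous update to a row of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        # Neighborhood: left, self, right (wrapping at the edges).
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (me << 1) | right   # value 0..7
        new.append((rule >> index) & 1)           # look up the rule bit
    return new

def run(width=31, steps=8):
    """Evolve from a single live cell and return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

A system like this is rule-governed in exactly the sense Dreyfus objects to: every state follows mechanically from the previous one, with no role for content or intuition.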

Roger Penrose attacks research into AI from a completely different angle (although some of his conclusions are similar). In fact he does more than that: he gives a completely new vision of how to reach a description of consciousness and intelligence. According to Penrose, AI has so far been treated in a highly classical-mechanical way: algorithms (software) 'run' our intelligence and consciousness (hardware). Using Gödel's theorem, he then points out that algorithms cannot create consciousness. Gödel's theorem states that in every algorithmic system used to model reality, or every formal system used as a criterion for truth, there is at least one true statement that cannot be proven within that system. This external statement is called the Gödel proposition.
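
A symbolic statement of the (first) incompleteness theorem Penrose invokes may help; the notation ($F$, $G_F$) is conventional and not taken from the text:

```latex
% F   : any consistent formal system strong enough to express arithmetic.
% G_F : the Goedel proposition of F ("this statement is not provable in F").
F \text{ consistent and } F \supseteq \text{arithmetic}
  \;\Longrightarrow\;
  \exists\, G_F \;:\; F \nvdash G_F
  \;\text{ and }\;
  F \nvdash \neg G_F .
```

That is, $G_F$ is true but can be neither proven nor refuted inside $F$; recognizing its truth requires stepping outside the system.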

This means that one cannot tell whether something is true or false by using algorithms alone; an external view is needed. Penrose claims that consciousness is exactly that external view and that without it, it would be impossible to claim truth. An artificial brain built entirely out of algorithms would not be able to have this external view (by definition!) and would simply not be conscious. Put differently: in the end the artificial brain would lack the capability of validating the information transfer between algorithms (similar to the lack of 'common sense' described by Dreyfus).

The next step Penrose takes is to include quantum mechanics in the description of consciousness. Could it be that by seeing reality in a quantum-mechanical way, one could make statements about the validity-checking capability of consciousness? In quantum mechanics, reality is described by the Schrödinger equation, whose solution is a superposition of possible states. Through observation, one of the possible states becomes reality; however, no statement is made about how the 'true' state is chosen. Since the macroscopic reality each of us observes is not a superposition of states (unless one believes in the theory of coexisting realities), it seems that something essential is missing in quantum mechanics: a non-linear part (the Schrödinger equation is linear) that reduces the state vector, i.e. chooses one of the possible states. So quantum mechanics fails to fully explain reality: just as classical mechanics gives a good description of macroscopic reality, quantum mechanics gives a good approximation on the quantum level, but no more than that. The reason, according to Penrose, is that in quantum mechanics the dynamical part (the Schrödinger equation) and the boundary conditions are separated; in an accurate description, both should be integrated from the start.
He calls this not-yet-discovered theory the correct quantum gravity theory (CQG; it should also include relativistic effects and thus bridge the gap between relativity theory and quantum theory). This theory would be physically objective in the sense that it does not depend on consciousness. Only then could we try to understand the true nature of our consciousness: the universe is controlled by physical laws that allow consciousness to exist.
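
The superposition and state-vector reduction Penrose refers to can be written in standard quantum-mechanical notation (the symbols here are conventional, not taken from Penrose's text):

```latex
% Schroedinger equation: linear, deterministic evolution of the state,
i\hbar \,\frac{\partial}{\partial t}\lvert\psi(t)\rangle
  = \hat{H}\,\lvert\psi(t)\rangle ,
% whose general solution is a superposition of possible states:
\lvert\psi\rangle = \sum_n c_n\,\lvert\phi_n\rangle ,
\qquad \sum_n \lvert c_n\rvert^2 = 1 .
```

On observation the state vector is reduced to a single term $\lvert\phi_k\rangle$ with probability $\lvert c_k\rvert^2$; this reduction is the non-linear step that the linear Schrödinger equation itself does not describe.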

Another criticism of AI is raised by Stephen Toulmin. According to Toulmin, the research has never gotten out of the Cartesian Theater, in the sense that a line is still drawn between mind and body. AI researchers see the mind, or rather consciousness, as an entity that can be constructed using one single formula. Toulmin claims that consciousness is built up out of several parts. An important part is the cultural and historical background of our society, which is largely neglected in AI research. People have different kinds of consciousness depending on several factors; applying the same stimuli to different people has different effects. But some stimuli can have approximately the same effect on groups of people, as long as those groups have a shared background, for example race. According to Toulmin, algorithm-based AI tackles only one of the parts that make up consciousness, and not even the most important one. It is an oversimplification.

Daniel C. Dennett (Tufts University, MIT) is one of the scholars who is convinced that theoretically there is no problem in constructing artificial intelligence, although he thinks it highly unlikely that we will ever make a robot that is conscious the way humans are, since doing so might involve techniques that are too costly and perhaps not even interesting (what would be the point of creating more 'human' beings in such a difficult manner?). In his various writings he describes four arguments used against the possibility of making artificial intelligence, all of which can easily be dismissed. I will state each of them, with Dennett's response, before going into more complex matters:

  1. Old-fashioned dualism: robots are purely material things, and consciousness requires immaterial mind-stuff. Dennett argues that such a thought is simply a human desire to be 'special'. There is, however, no reason for it and no proof whatsoever.
  2. Vitalism: robots are inorganic, and consciousness can only exist in an organic brain. According to Dennett, biochemistry has shown that the powers of organic compounds are themselves mechanistically reducible and hence mechanistically reproducible at one scale or another in alternative physical media. It might be that organic material is needed to obtain, for example, enough speed or flexibility, although Dennett finds that unlikely. Even so, why would a robot necessarily be inorganic? It could well have some organic parts.
  3. Origin essentialism: a mind should be natural and born. In response, Dennett makes a comparison with, for example, wine. If one could make, in some laboratory experiment, exactly the same wine as produced from a vineyard in France, would there be any difference? No, there is no distinction whatsoever. One could argue that there is some mystical difference, but that would be highly 'chauvinistic' (as Dennett puts it). An exact copy of a Rembrandt painting would still be a counterfeit, but intrinsically it would be the same. To extend the examples to human beings: an exact copy of yourself would not legally be you, but it would have the same human rights.
  4. A robotic brain will always be too simple: one could never construct such a brain for technical or cost reasons. Even if this were so, and it were the only reason that we could not construct an artificial brain, the attempt would still teach us a lot. Besides, many body parts have already been reconstructed artificially (the heart, for instance) that have a much simpler structure than the real organ; this kind of complexity is not what matters.

After putting these matters aside, Dennett has investigated some of the more serious problems with AI. According to him, there are basically two:

The first problem relates to the common argument that making AI or a robot (Dennett usually speaks in terms of robots) implies that we, as people, are deterministic machines. Dennett claims that determinism and free will are not contradictory. To show this, he starts out by distinguishing determinism from fatalism. Determinism means that what happens is fixed by the steps that are taken, so it does matter which steps are taken. Fatalism, in contrast, means that something will happen no matter what steps are taken. In a robot built on the structures described in the first part of this paper (a 'complex' cellular automaton), it is determinism that plays a role, not fatalism. The common view of free will is that one has acted freely only if one could have done otherwise, that is, if we have real opportunities; in this sense determinism and free will are incompatible. Dennett, however, says that 'could have done otherwise' is not the decisive factor, but deliberation: we are free because we deliberate. Thus free will does not contradict determinism. Stated slightly differently: determinism does not imply predictability. Consider, for example, two lotteries: in one, the tickets are sold first and afterwards a number is drawn completely at random; in the other, the winning number is picked beforehand, put in a safe, and only then are the tickets sold. In the latter case the outcome is determined, yet the winner is as unpredictable as in the former case.
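
Dennett's two-lottery example can be sketched as a small simulation (the function names and the fixed seed are illustrative assumptions, not from the text): to a ticket holder both schemes look identical, even though in the second the outcome is fixed before a single ticket is sold.

```python
import random

def lottery_draw_after(tickets, rng):
    """Sell the tickets first, then draw the winner at random."""
    return rng.choice(tickets)

def lottery_draw_before(tickets, rng):
    """Pick the winning number first (sealed in a safe), then sell tickets.
    The outcome is determined before anyone buys a ticket."""
    sealed_winner = rng.choice(tickets)   # fixed in advance
    # ... tickets are sold; nothing that happens afterwards changes it ...
    return sealed_winner

if __name__ == "__main__":
    tickets = list(range(1, 101))
    rng = random.Random(42)               # a seed makes the demo repeatable
    # From a buyer's point of view both lotteries are indistinguishable:
    # each ticket wins with probability 1/100 under either scheme.
    print(lottery_draw_after(tickets, rng))
    print(lottery_draw_before(tickets, rng))
```

The sealed draw is deterministic in Dennett's sense (the winner is already settled), yet no buyer can predict it any better than in the open draw, which is the point of the example.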

However, creating such a robot (a deterministic deliberator) does not make it conscious, even though it has free will. To obtain consciousness it needs not only pattern recognition applied to its environment, but also pattern recognition applied to itself: the iterative process of thought (as stated by D. Hofstadter). According to Dennett, this is something that must be learned; in other words, recognition must be taught. The basis for the ability to learn, however, is something that can be constructed artificially. Also, this kind of information gathering makes our consciousness non-deterministic, since determinism ignores the information-gathering process.

Dennett has constructed an ordering of consciousness based on ideas first stated by Von Neumann. Portions of the universe have the property of being designed to resist their own dissolution (a tree, a chair, or a simple cellular automaton). Some of these have the further property of being caused to have reliable expectations about what will happen next, and hence have the capacity to control things; they have a free will ('lower' animals, a deterministic deliberator, the mechanical creatures of Rodney Brooks of MIT, chess computers). Some of these have the capacity for significant self-improvement through learning (higher animals, young children). And finally, some of these have the open-ended capacity (requiring a language of self-description) for radical self-evaluation: self-control, talent, decision making, interests, values, self-definition (adult people, a truly conscious robot). Summarizing: using the techniques and ideas developed from Von Neumann to Brooks, one can construct a deterministic deliberator that in principle has the capacity to learn and recognize, and in doing so the ability to attain consciousness.



zegers
Wed Jul 12 11:26:51 JST 2000