
A brief history of research into AI.

In many ways our present society is still strongly influenced by the Aristotelian belief that 'bodies exist only for the sake of the soul': organisms can be distinguished from their inanimate surroundings by the possession of a soul. Even for many non-religious people this is a powerful idea, namely that our moral faculties and our dignity are products of something we cannot grasp, and that there is a strict line between body and soul. In the 17th century a seemingly major change took place in the way people regarded the world. A more mechanical view took over, proclaimed by people like Descartes, Leibniz and Newton. From that period on, physical processes were believed to be fully describable by mathematical laws. It was believed that we could fully understand the physical world, and doubts about whether we would ever gain insight into the way things work (raised, for example, by Copernicus) were put aside. Still, one vital distinction was maintained between humans and all other life forms: the possession of a soul. Apart from the soul, life was seen as a mechanical process, an automaton. Animals, for example, were seen as a class of automata that could be fully described in terms of physical laws. Steps were taken, if only for amusement, to recreate such automata. Jacques de Vaucanson produced an artificial duck (1738) that would swim, splash, eat and digest food; a second version made by Rechsteiner in 1844 convinced many that some kind of intelligence had been created. The mechanical way of viewing the world was reinforced by the revelations of Newtonian physics, and the industrial revolution and the laws of thermodynamics extended the domain of science further into previously mystical areas. Yet the barrier between body and soul still existed, and it would take another revolution to overcome this way of thinking. This dualism in thinking about life, in the sense of the distinction above, is now often referred to as the Cartesian Theater: humans are the knowing, soul-possessing entities in a world full of mechanically describable automata, and possess an élan vital, a divine component of our total selves.

With the 'discovery' of quantum mechanics, it became clear that the way we perceive the world, including ourselves, cannot be decoupled from the processes we want to describe. To understand the full scope of what is going on, it is necessary to understand how we, as a single entity and not as two separable parts (the mechanical and the 'soul'), process what we see. According to John Von Neumann, this could only mean that we as humans are fully part of the (mechanical) world and that all life is a reconstructable concatenation of events and interactions. No mysticism or randomness plays a role; logic is the only basis. There is no clear distinction between entities that possess intelligence and consciousness and those that don't. Instead there is a gradual scale based on the ability to process, distribute and use information. Information used in a dynamical system complex enough to reproduce and bear offspring could match the mental functions of any living creature. Intelligence and consciousness belong to a complex class of automata. Von Neumann used terms that would later become central to chaos theory to clarify this idea: complexity is the source of life. Just as non-linear dynamics can turn simple behavior into complex patterns, information processing in a sufficiently complex network would create intelligence. He stressed the point of self-organization: 'Life wants to happen', as he put it. Certain natural tendencies ('rules of the universe') encourage phenomena that can be described as intelligent or as having consciousness (Von Neumann used the term 'life'), such as self-reproduction and self-reflection.

Von Neumann was in many ways ahead of his time, and many of his ideas later took a more practical path. He is widely regarded as the founding father of research into artificial intelligence. Although AI research only really took off after the invention of the electronic digital computer, Von Neumann's ideas formed its basis (in fact, the standard design of computers is still referred to as the Von Neumann architecture). He earned this title by being one of the first to step away from Cartesian thinking and to understand the implications of doing so.

During the early days of the computer era, in the late 60's, John Horton Conway (University of Cambridge) started investigating Cellular Automata (CA). He created a world consisting of cells (like the squares of a chessboard) that are either occupied ('alive') or open ('dead'). The state of each cell is decided by the situation in its surroundings in the previous time step. The mathematical rules are very simple: each cell has eight neighbors. A living cell survives into the next time step if either two or three of its neighbors are also alive; if there are more it dies of overcrowding, if there are fewer it dies of exposure. A dead cell comes alive only if exactly three of its neighbors were alive in the previous cycle. (We know this as Conway's Game of Life, and it remains one of the more popular screensavers.) Conway realized that this system in fact contained all the characteristics of life as described by Von Neumann: information was passed between neighbors and distributed to the offspring, and the system could become enormously complex, creating an increasingly rich dynamical system. Still, the system would organize itself based on simple rules and self-reproduce. For example, seemingly random patterns would ignite so-called gliders that passed on information. Much work has been done on CA in this manner since then (and still is: just type 'Game of Life' into a search engine). The whole study of CA, mainly conducted at the Massachusetts Institute of Technology (MIT), was regarded as rather obscure by the rest of the scientific community. Opinions would only change in the early 80's.
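
In modern terms these rules translate almost directly into a short program. The sketch below (in Python; the grid size and the glider starting pattern are arbitrary illustration choices, not part of Conway's definition) repeatedly applies the rules described above:

import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a 2-D array of 0s and 1s."""
    # Count the eight neighbors of every cell (with wrap-around edges for simplicity).
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A live cell survives with two or three live neighbors;
    # a dead cell comes alive with exactly three live neighbors.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(int)

# A 'glider', one of the self-propagating patterns mentioned above.
grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(40):
    grid = life_step(grid)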

Around that time Stephen Wolfram (Institute for Advanced Study) started his own investigation into CA. It became clear to him that not enough effort had been put into the practical consequences of the research. His idea was that one could use CA to make significant statements about complex systems in general: starting with very simple rules, one can get out something very complicated, even seemingly random. He used a one-dimensional counterpart of Conway's Game of Life (which had the advantage of only a limited number of possible rule sets, so he could backtrack what had happened) to show that biological reality could be described by a few simple rules. The claim was not that the CA actually accounted for such phenomena, but the simulations gave insight into how physical (biological) reality, which seems to behave randomly or chaotically, could be described by a few very simple rules.
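
As an illustration of how little is needed to get such behavior, here is a minimal sketch of a one-dimensional, two-state cellular automaton of the kind Wolfram studied (the choice of rule 30, the number of cells and the number of steps are arbitrary illustration values):

def elementary_ca(rule_number, width=79, steps=40):
    """Evolve a one-dimensional, two-state cellular automaton.

    Each cell's next state depends only on itself and its two neighbors,
    so a rule is fully specified by 8 bits: there are only 256 of them."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    cells = [0] * width
    cells[width // 2] = 1                     # a single live cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = [rule[(cells[(i - 1) % width] << 2) |
                      (cells[i] << 1) |
                      cells[(i + 1) % width]]
                 for i in range(width)]
        rows.append(cells)
    return rows

# Rule 30 is trivial to state, yet its output looks effectively random.
for row in elementary_ca(30):
    print(''.join('#' if c else '.' for c in row))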

This idea was extended by Craig Reynolds, who attempted to describe the flocking of birds by a few simple rules for each simulated bird ('boid'):

- keep a minimum distance from other birds and from obstacles (separation);
- try to match the speed and direction of nearby birds (alignment);
- try to move toward the perceived center of the nearby flock (cohesion).
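
A minimal sketch of these three rules follows (in Python; the weights, the neighborhood radius and the time step are arbitrary illustration values, not Reynolds' original parameters):

import numpy as np

def boids_step(pos, vel, dt=0.1, radius=2.0, w_sep=1.5, w_align=1.0, w_coh=1.0):
    """One update of a simple boids-style flock; pos and vel are (N, 2) arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        separation = -offsets[near].sum(axis=0)       # move away from close neighbors
        alignment = vel[near].mean(axis=0) - vel[i]   # match neighbors' velocity
        cohesion = offsets[near].mean(axis=0)         # move toward the local center
        new_vel[i] += dt * (w_sep * separation + w_align * alignment + w_coh * cohesion)
    return pos + dt * new_vel, new_vel

# A small random flock; with moderate weights the boids gradually cluster and align.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(30, 2))
vel = rng.uniform(-1, 1, size=(30, 2))
for _ in range(200):
    pos, vel = boids_step(pos, vel)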

The result was quite astonishing: if the 'strength' parameters of these forces were set between certain limits, the behavior of the simulated flock really looked like true flocking. Similar methods were used by the Unit of Theoretical Behavioral Ecology at the Free University of Brussels to describe the collective behavior of social insects such as ants, with similar success. Their underlying idea was to extend the theories of self-organization put forward by Ilya Prigogine. Whereas traditionally it was believed that the complexity of collective behavior could be traced back to complex behavior in individuals, the Belgian group tried to prove that in fact very simple rules could describe the apparent chaos. Their successes marked the start of merging the study of adaptive processes in nature with the study of artificial systems.

The next step was taken by Christopher Gale Langton (Logic of Computers Group, Michigan). After studying Von Neumann's work, he set out to construct CA that not only reproduced, but also represented informational states that could be passed on to the next generation and that could undergo evolution. He did so by creating loops that had two types of cells:

- 'genotype' cells, carrying the information (the instructions circulating inside the loop) that is copied into each offspring;
- 'phenotype' cells, forming the body of the loop, built up by executing those instructions.

Evolution would take place through mutations in the genotype, which were passed on to the phenotype and thus to the offspring. His success in constructing these self-reproducing loops made him believe that the forces of nature can be reproduced in machines: rule-based structures such as language can hold the keys to reproducing beings. Connecting his own work with the work done on describing collective behavior, he realized that the essential ingredient was the amount of information flow. No 'overseer' was necessary; the behavior works bottom-up. If enough information was passed around, a workable 'universe' could be constructed. He quantified the ability to pass around information in the so-called λ parameter (which can be read as one of the strength parameters in the rules describing a CA). If λ is very low, no information can be passed around; these are systems in which simply nothing happens. If λ is slightly higher, one enters the region of periodic behavior; Langton compared this regime with, for example, the growth of crystals. If λ is very high, however, information is passed around so freely (chaotically) that life cannot be supported: in a sense such a system has no memory. But in between lies a region where complexity reaches a maximum, and according to Langton this is exactly the region in which intelligence (or 'life', in his own words) is situated. So intelligence is a property of the organization of matter, rather than a property of matter organized in a certain way.
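
λ is usually defined (following Langton) as the fraction of entries in a CA's rule table whose output is something other than the quiescent ('dead') state. A minimal sketch of that calculation (the rule 30 table below is only an example):

def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda: the fraction of rule-table entries whose output
    is anything other than the quiescent ('dead') state."""
    outputs = list(rule_table.values())
    return sum(1 for s in outputs if s != quiescent) / len(outputs)

# Example: the 8-entry rule table of the elementary CA 'rule 30'.
rule_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
print(langton_lambda(rule_30))   # 0.5 -- half of all transitions produce a live cell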

Langton's ideas were worked out further by Stuart Kauffman (Santa Fe Institute), who expanded the research into the field of networks. He thought that the non-reductionist route Langton had taken would be the best way to understand systems far more complex than CA. A system that has not a few variables but a few hundred thousand, all influencing each other, cannot be diagrammed; not every connection can be mapped out. And this is exactly the problem in, for example, molecular biology, where biological reality was seen as programmed circuitry. Kauffman, however, suggested that it was the self-organizing power of such systems that produced the manageable order. He tested this hypothesis using a network of 100 nodes (already far too many states to calculate exhaustively). At first the system seemed to behave in a completely random manner, but after a certain time the network would settle into a loop of states. This happened independently of the starting parameters of the system; in the language of chaos theory, such a loop is a periodic attractor. The precise construction of the network did not really seem to matter: the local reactions and the passing around of information seemed to imply that order should arise. This became, for example, the starting point for research into pre-biotic soups, the stuff that produced the first complex molecules on earth. His ideas also form a basis for thinking about the world-wide networks created through the internet.
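
A network of the kind Kauffman studied can be sketched in a few lines (the node count, the number of inputs per node and the random seeds below are arbitrary illustration choices). Started from a random state, it soon falls onto a repeating cycle of states, the periodic attractor mentioned above:

import random

def random_boolean_network(n_nodes=20, k_inputs=2, seed=0):
    """Build a random Boolean network: each node reads k other nodes and
    updates its state through its own randomly chosen Boolean function."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k_inputs) for _ in range(n_nodes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k_inputs)] for _ in range(n_nodes)]

    def step(state):
        return tuple(tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                     for i in range(n_nodes))
    return step

step = random_boolean_network()
state = tuple(random.Random(1).randint(0, 1) for _ in range(20))

seen = {}
t = 0
while state not in seen:              # iterate until some state repeats
    seen[state] = t
    state = step(state)
    t += 1
print(f"cycle of length {t - seen[state]} reached after {seen[state]} steps")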

The procedures described above have since been used with more or less success. It is clear, however, that no higher-order intelligence can be described in these terms, let alone consciousness. Still, the points made are vital for the construction of more developed systems (usually referred to as 'strong AI', in contrast to the 'weak AI' described above). Until the late 1980's, the way robots were believed to work was based on the reductionist point of view: the totality of the behavior was cut up into pieces, and every piece did some processing according to a set of rules programmed by humans. When a signal reached the robot through its sensors, action would only take place after passing through a so-called 'bottleneck of cognition': perception, modeling, planning, task execution and motor control. Rodney Brooks (MIT), influenced by the results of the study of weak AI, believed instead that there should be only two steps: perception and action. He thought up the following scheme: instead of going through the bottleneck of cognition, information from all sensors should be processed in parallel, without passing through a program that interprets and handles the signal. The behavior would then emerge from the continuous series of actions. In this manner the robot could also be placed in real-life surroundings, instead of the specially designed surroundings of, for example, industrial robots, where the right boundary conditions have to be created to make the cognition possible. Brooks and his team set out to do this using only analog signals; no computers were involved. It was the start of the creation of mechanical insects. Since then many people have started working in the same manner, and whole worlds full of such mechanical insects have been created. Put together, these creatures behave like Conway's cellular automata. Prigogine's concept of dissipative structures, namely that self-organizing power arises from the responses of a system in rough equilibrium to outside disturbances, was put into practice: social behavior seemed to emerge. Giving these mechanical creatures certain basic needs, like food in the form of battery power or companionship (very simple rules, as in the case of CA and its later versions), made their society similar to those of social insects, like ants.
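
Brooks' creatures were built from analog hardware rather than programmed, but the layered 'sense directly to act' idea itself can be sketched in software (the sensor names and the three behaviors below are invented purely for illustration): each behavior works on the raw sensor readings, and a higher-priority behavior simply overrides the ones beneath it.

def avoid(sensors):
    """Highest-priority behavior: turn away from anything too close."""
    if sensors['obstacle_distance'] < 0.3:
        return ('turn', 'left')
    return None

def seek_charger(sensors):
    """Middle behavior: head for the charger when the battery runs low."""
    if sensors['battery'] < 0.2:
        return ('goto', 'charger')
    return None

def wander(sensors):
    """Default behavior: always produces an action."""
    return ('move', 'forward')

# Behaviors conceptually run in parallel on the raw sensor data; a higher
# layer subsumes (overrides) the output of the layers below it.
LAYERS = [avoid, seek_charger, wander]

def act(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(act({'obstacle_distance': 1.0, 'battery': 0.9}))   # ('move', 'forward')
print(act({'obstacle_distance': 0.1, 'battery': 0.9}))   # ('turn', 'left')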

This is basically where the more applied research into AI stands at this moment. But as we will see in the next chapter, it has had major consequences, and these will lead to the next steps toward obtaining Artificial Intelligence and Consciousness. At the same time, many new doubts have arisen which cannot be dismissed so easily; they will also be discussed in the next chapters.


