Saturday, May 28, 2011

Ants as Brains: Emergence

Introduction

Questions concerning the nature of thought characterise the history of our development as human beings. In his Meditations on First Philosophy (1641), René Descartes divided body and Mind. Following 300 years of intellectual development during the Age of Enlightenment, the notion of a thinking machine was posited by Turing's simple question: can machines think? Modern cognitive theory has examined many models of Mind, predominantly based around the idea of computing machines.

In this essay I shall argue that an ant nest contains all of the necessary and sufficient criteria to be considered a model of Mind. I shall begin by summarising modern approaches to the idea of thought and intentionality, looking at some of the earlier psychological developments and showing how these grew into the predominantly Symbol-processing hegemony of the mid to late 20th Century. I shall then touch briefly on the reassessment of the Standard Social Science Model (Tooby & Cosmides, 1992) and show how Evolutionary Psychology proposed an alternative approach.

Having laid a base for understanding ideas related to thought and intentionality, I shall look at the predominant and diametrically opposed theories concerning models of the Mind and examine them in the light of two constraints, the first being that any model of the Mind should be compatible with the Evolutionary evidence concerning adaptability, and the second that such a model should take into account the flexibility and universal nature of behaviour. Following Wells (1996) I will propose a simple Adaptationist model that fulfils both criteria.

The third section of the essay will argue that the Adaptationist model has much in common not only with standard machine or computer architecture but also with the humble ant's nest, and will draw comparisons between neurons and ants. I will draw briefly on the notions of Emergence Theory and Ant Algorithms to illustrate my points, and argue that the ant nest is an adequate model of Mind, fulfilling all of the constraints noted previously. Finally, I will summarise the discussion.

Understanding Thought

In order for us to understand thought, I have chosen to look at it from the perspective of problem solving and learning. For a long time it was thought that learning was one of the things that differentiated humans from the rest of the animal kingdom. In 1911, Edward Thorndike published work showing that there was more to it than that. Experiments on cats and other animals showed that, given a simple task to perform in order to receive food, animals tested over a number of trials began reducing the time necessary to perform the task. This reduction came about through what he called "trial and error, and accidental success", a phrase most often reduced to "trial and error".

Thorndike noted that through satisfying certain requirements, animals were able to learn, particularly when they practiced the action many times, a finding characterised as the learning curve. Work by Pavlov and the Behaviourists Watson & Skinner developed these ideas and led to an understanding that learning occurs not when the stimulus and reward simply appear together, but when there is some discrepancy between an expected coincidence and what actually happens. If the Mind makes a prediction error, expecting a reward after a stimulus and not getting it, or vice versa, then the Mind must change its expectations: it must learn. Subsequent work has found that this pattern of learning related to conditioning and surprise is ubiquitous in nature.
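To make the prediction-error idea concrete, the sketch below follows the spirit of the Rescorla-Wagner rule: the association between stimulus and reward changes only in proportion to the surprise on each trial. The learning rate and reward values are invented purely for illustration.

```python
# A minimal sketch of prediction-error learning, in the spirit of the
# Rescorla-Wagner rule: the association strength V changes only in
# proportion to the difference between the reward received and the
# reward predicted. (Learning rate and reward values are illustrative.)

def update_association(v, reward, learning_rate=0.1):
    """Return the new association strength after one conditioning trial."""
    prediction_error = reward - v          # surprise: actual minus expected
    return v + learning_rate * prediction_error

v = 0.0                                    # no expectation before training
for trial in range(20):
    v = update_association(v, reward=1.0)  # the stimulus is always rewarded
    print(f"trial {trial + 1:2d}: association = {v:.3f}")

# The association rises quickly at first and levels off as the prediction
# error shrinks -- the familiar shape of the learning curve.
```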

It was once a very commonly held belief that the Mind was nothing more than an empty slate, written on by repeated patterns of reward and punishment. As Thomas Aquinas commented, there is nothing in the intellect which was not previously in the senses. A model of Mind would thus be nothing more than a set of learned rules: in situation x, do action y. However, in examining this idea, Harlow (1958) showed that baby monkeys did in fact have fairly well developed instincts. Given a choice between a wire-frame surrogate mother which provided food and a cloth mother which did not, Behaviourist theory predicts that the monkey would go directly for that which provided food. Instead, the monkeys clearly preferred the cloth mother and used the wire mother only to feed.

Later work by Mineka et al (1986: cited in Ridley, 2003) at the University of Wisconsin investigated the instinctual fear of snakes in lab-reared and wild-reared Rhesus monkeys. Reared in the lab, the animals had no prior exposure to snakes. The psychologists showed the monkeys a videotape of wild-reared monkeys reacting with horror to snakes. Within 24 minutes, the lab monkeys acquired a fear of snakes. The psychologists then edited fake flowers, a toy snake, a toy rabbit, and a toy crocodile into the video. Tests later showed that after 40 to 60 seconds of exposure to each object, the monkeys feared only the toy snakes and crocodiles.

Through these and many other studies it seems that we can see the Mind as a combination of learned and instinctual behaviours: how, though, does the Mind work? In 1949, Donald Hebb suggested that:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Together with the error-correction learning rule proposed in the late 1950s by Frank Rosenblatt, comprising the notion that simple weightings and error procedures can induce learning, a view of the Mind as a connected architecture of perceptrons, intended to mirror neurons in a simple way, was introduced and came to be known as Connectionism. By manipulating Symbols according to simple rules, these networks mimicked real-world states and could provide convincing evidence that a computer could behave like a Brain.
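As a concrete illustration of this kind of learning, the sketch below trains a single perceptron with a simple error-correction rule on a toy task; the AND task, learning rate, and number of epochs are chosen purely for illustration.

```python
# A minimal sketch of a single perceptron trained with an error-correction
# rule: a weight is nudged only when the unit's output disagrees with the
# target, and only for inputs that were active -- an echo of Hebb's idea
# that jointly active units strengthen their connection.
# (The AND task, learning rate and epoch count are illustrative.)

def step(activation):
    return 1 if activation > 0 else 0

# Toy task: learn logical AND from its truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias term
rate = 0.1       # learning rate

for epoch in range(20):
    for (x1, x2), target in data:
        output = step(w[0] * x1 + w[1] * x2 + b)
        error = target - output       # prediction error for this example
        w[0] += rate * error * x1     # weights change only when input and
        w[1] += rate * error * x2     # error are present together
        b += rate * error

print(w, b)   # a set of weights that implements AND
```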

The problem with Connectionism is that, as the contemporary thinker Steven Pinker commented, it is rather like stone soup: the more vegetables one adds, the better it tastes. While it is true that the Brain is open to learning, the more semantic content one builds into such a network, the more its syntax seems to make sense. This problem, that of meaning or understanding, has dogged all attempts to build models of the Mind. Alan Turing's famous Turing Test (1950) suggested that the problem of whether a machine can think could be answered by whether said machine were able to convince an interrogator that it were human solely through its answers to questions. The test has proved remarkably durable.

As we ended the 20th Century, then, models of Mind had been built on a range of foundations and theories, some of which were touched on above. Models were built on the idea of the general purpose computer, or von Neumann machine: a set of tasks and a set of data related to these tasks. Predominantly, models used Symbols to represent meaning or semantics. In their 1992 paper, The Psychological Foundations of Culture, John Tooby and Leda Cosmides argued that the idea of Mind as a number of content-independent or domain-general mechanisms which had no connection with any Evolutionary or psychological foundation was radically defective. They called this set of theories the Standard Social Science Model, and established the basic principles of what would thenceforward be known as Evolutionary Psychology.

Models of the Mind

The notion of programmability is fundamental to modern computing. A program is a series of instructions that is stored in memory and executed by the processor; it specifies the functional relationship between the input a machine receives and the output it produces. The ability to program the machine is equivalent to an ability to change this relationship, a point worth noting, since it highlights the huge range of useful purposes to which a general purpose machine can be put. A second general point is that complex computing operations can be performed by constructing complex internal models of the environment: the program rarely interacts directly with the environment but rather through some interpretive layer.
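As a small, hypothetical illustration of the point, assuming nothing beyond a standard Python interpreter: the same machine realises two different input-output relationships simply because it is given two different programs.

```python
# A tiny, hypothetical illustration of programmability: the same machine
# (here, the Python interpreter) realises two different input-output
# relationships purely because it is handed two different programs.

def program_a(x):
    return x + 1      # one functional relationship between input and output

def program_b(x):
    return x * 2      # after "reprogramming": a different relationship,
                      # running on exactly the same machine

print(program_a(3), program_b(3))   # prints: 4 6
```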

In looking at and understanding the role of computers with respect to models of the Mind, there is one other aspect that is critical to our understanding: the architecture of processors and the relationship between processors and programs. A processor is a special-purpose device designed to carry out a specific set of instructions; these can be simple procedures such as addition and subtraction but can also contain the logical engines that make computers the powerful machines they are. These processors interact indirectly with the external environment through coded input and output. On the other hand, programs are most often sets of operations that are composed as sequences of basic instructions. They encode representations of the external world, and are executed by a central processor.

If we are to build a functioning model of the Mind, we have two clear constraints. Firstly, the Mind is a product of biological Evolution. This constraint stands in direct opposition to any dualism between body and Mind, which cannot be maintained without introducing the problems of solipsism. The second constraint is that the Mind is capable of immense behavioural flexibility, including apparently indefinitely complex information processing. Wells (1996) refers to these two constraints as the Evolutionary constraint and the universality constraint, and notes that combining them both in a theory of cognitive architecture is difficult because they appear to be mutually incompatible:

The Evolutionary constraint leads in the direction of special-purpose mechanisms, and thus, in the direction of task-specific behaviour rather than universality; whereas the universality constraint leads in the direction of general-purpose mechanisms, and thus, in the direction of maximal behavioural flexibility but away from the space of designs that seem plausible given the Evolutionary constraint.

If human cognitive architecture is the result of Evolution, then as Tooby & Cosmides (1992) note, any given theory must be capable of explaining how we have solved the myriad problems that presented themselves over the Evolutionary timeframe. In their paper, cited above, the pair provide a substantial list of problems that evolving man solved, including such things as capturing animals, mating, and cooperating. Typically, the argument given for how Evolution has solved these problems is through the selection of increasingly specialized mechanisms: the human eye, for example.

The key issue here is that while such specialized mechanisms are extremely effective at solving specialized problems, they do so only by forfeiting an ability to address a more general class of problems; that is, they fulfil the Evolutionary constraint, but not the universality constraint. Another problem is how Adaptations may combine: for example, in an action or behaviour that combines both vision and movement.

Newell & Simon (1976) claimed that Symbols lie at the root of all intelligent action, going on to claim that "a physical-Symbol system has the necessary and sufficient means for general intelligent action". This base, supported by the commonly held notion that human mental representations or Symbols are of the same kind as the representations used by computers, was extended to refer to all kinds of universal computational system and thus, by definition, Symbol systems can be said to fulfil the universality constraint.

Being universal machines, Symbol systems are also programmable: Symbol structures are both programs and representations of objects and events in the external environment. This view of Symbol structures as modifiers of the input-output relationship leads to the inherently attractive view of mental representations as structures in some kind of human machine language. However, herein lies a significant issue. A representation is always a representation for someone, a danger which leads to an infinite regress. Searle (1980) saw this as a critical problem for Artificial Intelligence. In his famous Chinese Room argument, Searle suggested that for there to be any intentionality in a Symbol system, at some level there had to be an entity capable of understanding the Symbols; otherwise, they would have no meaning.

We have seen that the Symbol systems approach satisfies the universality constraint. It remains unclear whether there is any level of compatibility with the Evolutionary constraint. The concern here is driven by the efficiency of a generalized mechanism versus a specific mechanism in solving a specific problem, combined with what we know of the pressures driving Evolution. Put bluntly, an organism equipped with a mechanism that avoids snakes will, over time, be more likely to survive than one whose generalized mechanism requires proof of the snake's danger before running away.

Additionally, Wells (1996) cites Conrad's 1985 work on biological computation, which formalised the study of computational systems along the lines of programmability, efficiency and adaptability. The key message is as follows: unlike in the human Brain, small changes in the structure of a program can lead to massive changes in behaviour, or even loss of function. The Brain is gradually transformable, that is to say it only changes behaviour a small amount given a small structural change, up to and including the destruction of large parts of it; this tends not to be the case in program-based systems. What is more, there is a concern about the relationship between the system and its inputs: a system operating without a program has no intermediary between it and the outside world, whereas a programmable system needs its inputs to be coded into a form that the processor can deal with.

It is clear that neither a Symbol-processing nor a strictly Evolutionary approach satisfies the constraints proposed. This is not to say, however, that there are not aspects of both that seem essentially correct: the idea of an aggregate of Adaptations working in parallel is not inherently flawed, but is hard to make universal. If, instead of a programmable Symbolic memory and a processor rather like a computer, we assume the Brain to be like a processor, we may be able to take a step forward:

one should think of the set of Adaptations that Evolutionary psychologists consider to be the basis of cognitive architecture as the instruction set of a processor designed by Evolution ... The evolved processor is, among other things, a Symbol processor par excellence, but the Symbol structures it possesses are external (Wells, op. cit.)

By combining the evolved part of the Brain, a collection of specific mechanisms, with a processor possessing the power to carry out Symbolic instructions such as reading a map or cooking from a recipe, we have a view to a cognitive architecture that encompasses external Symbolic artefacts with which the thinker interacts. Memory, then, consists of a combination of external Symbol storage such as books, computer records and so on, and internal states which may have been adapted to serve a specific mechanism and subsequently have taken on a generalised function (Sherry & Schachter, 1987: cited in Wells, 1996).
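One way to picture this architecture, purely as a sketch, is as a small fixed interpreter whose instruction set stands in for the evolved mechanisms, operating on a Symbol structure that lives outside it. The instruction names and the "recipe" below are invented for illustration and are not drawn from Wells.

```python
# An illustrative toy of the Adaptationist picture: the "processor" has a
# small, fixed instruction set (standing in for evolved mechanisms), while
# the program -- the Symbol structure -- lives outside it, here as a plain
# list of instructions, much as a written recipe does.
# The instruction names and the recipe are invented for illustration.

INSTRUCTION_SET = {
    "GRASP": lambda state, arg: state + [f"holding {arg}"],
    "MOVE":  lambda state, arg: state + [f"moved to {arg}"],
    "PLACE": lambda state, arg: state + [f"placed {arg}"],
}

def run(external_symbols):
    """Execute an external Symbol structure using the fixed instruction set."""
    state = []
    for op, arg in external_symbols:
        state = INSTRUCTION_SET[op](state, arg)
    return state

# The "recipe" exists outside the processor and can be rewritten at will;
# the processor itself never changes.
recipe = [("GRASP", "pan"), ("MOVE", "stove"), ("PLACE", "pan on stove")]
print(run(recipe))
```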

Ant Nest as Brain

In examining models of the Mind, we considered two key constraints: that any model should be compatible with the Evolutionary evidence concerning adaptability, and that it should take into account the flexibility and universal nature of behaviour. The Adaptationist model proposed a Mind comprising a number of specialized mechanisms making up the instruction set of a Symbol processor, combined with an external world incorporating Symbolic representations. It is my view that an ant's nest provides a model of Mind along these lines that is only quantitatively, and not qualitatively, different from a human Brain.

In this model, I propose that the ants themselves function as a combination of neurons and synapses, and also work to bring sensory information into the overall nest. Thus the information gathered by a single ant and communicated to another may over time influence the behaviour of the nest, in much the same way that the presence of a heat source may influence a human's behaviour depending on its proximity and power. In order to make this model convincing, we need to consider each of the aspects of our Adaptationist model of Mind in turn and assess whether an ant-nest model of Mind could be compelling.

We begin by looking directly at the Evolutionary constraint and understanding the idea of specific mechanisms suited to specific tasks. In all ants, development takes the ant through a number of changes in behaviour, which define what are called temporal castes. Thus, behaviour changes from caring for the queen, to digging & nest work, and finally to foraging and defence. In some ants, these changes can correspond to physical changes, with the soldier or major ants being significantly larger in size than the minor ants.

These different behaviour types are evolved mechanisms for coping with different requirements. If we see the nest as the model for the Mind, different means of interacting with the world dependent on requirement represent evolved mechanisms for dealing with incoming information: where we see danger in the movement of a snake, so the ant nest reacts to unexpected shaking with furious activity, which in some species may also herald the arrival of soldier castes to protect the nest. The different behaviour of different ant species dependent on Evolutionary environment is further evidence in this case.

In addition to this, there are many documented cases of symbiotic relationships with ants, perhaps the most common being that with aphids, which secrete a sweet liquid called honeydew. Normally this is allowed to fall to the ground, but around ants it is kept for them to collect. The ants in turn keep predators away and will move the aphids around to better feeding locations. It is my view that this behaviour can again be seen as evidence of the advanced nature of the ant nest and its suitability as a model of the Mind.

Our second area of concern when considering the ant nest and Adaptationist model together is that of the general purpose nature of the processor. We have already seen how the application of ants to the Travelling Salesman Problem resulted in solutions equivalent to those of other general purpose heuristic mechanisms, and it is my position that this evidences the ability of such a model to satisfy the universality constraint.
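For readers who want the flavour of such a heuristic, the sketch below shows the tour-construction step of a simple ant-colony approach to a tiny Travelling Salesman instance: each artificial ant chooses its next city with probability weighted by pheromone strength and inverse distance. The city coordinates and parameters are invented for illustration, and a real implementation would also update the pheromone map after each tour.

```python
import math
import random

# A sketch of the tour-construction step in a simple ant-colony heuristic
# for a tiny Travelling Salesman instance. City coordinates and parameters
# are invented for illustration; a full algorithm would also deposit and
# evaporate pheromone after each tour.

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def dist(a, b):
    return math.hypot(cities[a][0] - cities[b][0], cities[a][1] - cities[b][1])

# Start with a uniform pheromone level on every directed edge.
pheromone = {(i, j): 1.0
             for i in range(len(cities))
             for j in range(len(cities)) if i != j}

def build_tour(alpha=1.0, beta=2.0):
    """One ant builds a tour, choosing each next city with probability
    proportional to pheromone**alpha times (1/distance)**beta."""
    current = random.randrange(len(cities))
    tour = [current]
    unvisited = set(range(len(cities))) - {current}
    while unvisited:
        candidates = list(unvisited)
        weights = [pheromone[(current, j)] ** alpha * (1.0 / dist(current, j)) ** beta
                   for j in candidates]
        current = random.choices(candidates, weights=weights)[0]
        tour.append(current)
        unvisited.remove(current)
    return tour

print(build_tour())   # one candidate tour; many ants over many iterations refine it
```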

The third and most difficult area to consider is the ability of the processor to handle Symbols, that is, to deal with semantic content. In order to avoid the problems faced by the Symbol-systems approach, the Adaptationist model proposed that the Symbols processed were those in the outside world. This view presupposes that, for the human Mind to process said Symbols in a way consistent with meaning, the Symbols themselves must possess some form of significance; that is, they are assumed to be the product of some Mind or Minds. All information presented to the Mind which is not of such a form would not be processed by the Symbol-processing aspect but rather by the appropriate mechanism.

Interpreting the idea of semantic content in this way leads us to the view that semantic content is nothing more than an aspect of the environment that has been changed in such a way as to impart a message. Ant communication is primarily through chemicals called pheromones. For instance, when a forager finds food, it will leave a trail along the ground on its way home, which in a short time other ants will follow. When they return home they will reinforce the trail, bringing other ants, until the food is exhausted, after which the trail is not reinforced and so slowly dissipates. We recall that this was the method of interacting with the environment which was used in Ant Algorithms, and with good reason. I suggest that this level of semantic content, together with the other aspects of the model considered, is strong evidence for the conclusion that an ant nest is an adequate model of Mind.
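The deposit-and-evaporate dynamic described above can be sketched in a few lines; the evaporation rate, deposit size, and the point at which the food runs out are all illustrative assumptions rather than measurements of real ants.

```python
# A small sketch of the trail dynamic described above: paths that ants use
# receive a pheromone deposit, while every path evaporates a little at each
# time step. Rates and the moment the food runs out are illustrative
# assumptions, not measurements of real ants.

def tick(trails, used_paths, deposit=1.0, evaporation=0.1):
    """Advance the trail map by one time step."""
    for path in trails:
        trails[path] *= (1.0 - evaporation)             # all trails fade
    for path in used_paths:
        trails[path] = trails.get(path, 0.0) + deposit  # reinforced by use
    return trails

trails = {"route to food": 0.0, "unused route": 0.0}
for step in range(30):
    used = ["route to food"] if step < 15 else []  # food exhausted at step 15
    trails = tick(trails, used)

print(trails)   # the trail decays once it stops being reinforced
```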

Summary & Conclusion

I began this essay with a proposal to examine the claim that an ant nest is an adequate model of Mind. In order to examine this question, I first reviewed at a high level the notion of thought and some of the psychological history that relates to it, looking at specific examples related to the understanding of instinct versus learned behaviour. I talked about the development of Connectionism and commented on the reassessment of the SSSM (Tooby & Cosmides, 1992) which gave rise to the field of Evolutionary Psychology.

I developed these ideas further in the second section of the essay, in which I looked at the twin constraints for any model of Mind, these being the universality constraint and the Evolutionary constraint. I showed how predominant models cannot in truth satisfy both, and following Wells (1996) proposed a model of Mind in which the evolved Adaptations served as the instruction set for a highly efficient Symbol processor, these Symbols residing in the external world. This model satisfied both criteria.

Building on this model of Mind, I demonstrated how many of the characteristics of an ant nest have clear parallels in the theoretical model, and used this similarity to suggest the ant nest as a model of Mind. Based on the evidence presented and the research done, I therefore maintain that an ant nest has all of the necessary and sufficient criteria to model human thought & intentionality, and as such can be said to differ from the human Brain only in quantitative terms.

Author: Stephen Levy writes for Dispatx Art Collective.

Dispatx Art Collective was created in 2004 by Oliver Luker, Vanessa Oniboni and David Stent. We work with collaborating artists to develop ideas and display works related to specific themes.

The website functions as a rigorous concept-space for the exploration of these ideas and is used both for the exhibition of completed works and as a focus for the exploration and advancement of collective projects.



Keywords: Emergence, Adaptation, Psychology, Evolution, Mind, Brain, Symbol
