Historically, the neurological system has frequently been modelled as if it were a technological artefact. The brain, for example, has been modelled as a factory production line and, more recently, as a computer, with inputs being “processed” in one section or module to create outputs that then become inputs for other sections or modules; nerve fibres have been modelled as if they were the wires of a telephone network, carrying “messages” from one region to another[1]; and memory has been modelled as if it were a storage location from which information is “retrieved”, or “accessed”, and acted upon, first as if it were an office filing system, and latterly as if it were a place in a computer: “information is stored in memory”.
Using technology as a model for the neurological system is an example of semiotic generalisation[2], in the sense used in previous chapters: meanings evolved in one context spread into another, where they are proffered for selection. But given that neurological systems are phenotypic products of biological evolution, while technological systems are phenotypic products of semiotic evolution, a more self-consistent and parsimonious approach is to apply biological models to phenomena deemed to be biological systems, as Edelman (1989) has done with his Theory of Neuronal Group Selection (TNGS).[3] Biological models of biological phenomena are more likely to survive long-term semiotic selection than other models, if only because they are smaller innovations, just as smaller genetic innovations are more likely to survive biological selection. Selection against the technological model of the brain occurs, inter alia, every time a specialist in the field decides that the cost of the approach exceeds its benefits in terms of experiential consistency.[4]
Footnotes:
[1] Where this model is used, there is often a failure to distinguish between information as the flow of electro-chemical difference in neural circuits and information in the sense of categories of experience, which arise from a substrate of brain activity in individuals situated in ecosystems that include social-semiotic contexts.
[2] In the field of cognitive linguistics, this mapping of the relations of a ‘source’ domain (here: technology) onto a ‘target’ domain (here: neurology) is known as conceptual metaphor. The mapping here is within the larger mapping of ‘an organism is a machine’: ‘a brain is a computer (that processes inputs, such as language)’, ‘a brain region is a processing unit’, ‘nerve fibres are communication lines’, and so on.
[3] Compare Einstein’s maxim that the best model of a duck is a duck, and if possible, the same duck.
[4] See the discussion of ‘truth’ later in this chapter.