Historically, the neurological system has frequently been modelled as if it were a technological artefact. For example, the brain has been modelled as a factory production line, and more recently as a computer, with inputs being “processed” in one section or module, creating outputs which then become inputs for other sections or modules; nerve fibres have been modelled as if they were the wires of a telephone network, carrying “messages” from one region to another[1]; and memory has been modelled as if it were a storage location from which information is “retrieved”, or “accessed”, and acted upon, first as if it were an office filing system, and latterly as if it were a place in a computer: “information is stored in memory”.
Using technology as a model for the neurological system is an example of semiotic generalisation[2], in the sense used in previous chapters, namely: meanings evolved in one context spread into another, where they are proffered for selection. But given that neurological systems are phenotypic products of biological evolution, while technological systems are phenotypic products of semiotic evolution, a more self-consistent and parsimonious approach would be to apply biological models to phenomena deemed to be biological systems, as Edelman (1989) has done with his Theory of Neuronal Group Selection (TNGS).[3]

Biological models of biological phenomena are more likely to survive long-term semiotic selection than other models, if only because they are smaller innovations, just as smaller genetic innovations are more likely to survive biological selection. Selection against the technological model of the brain occurs, inter alia, every time a specialist in the field decides that the cost of the approach exceeds its benefits in terms of experiential consistency.[4]
Footnotes:
[1] Where this model is used, there is often a failure to distinguish between information as the flow of electro-chemical difference in neural circuits and information in the sense of categories of experience arising from a substrate of brain activity in individuals in ecosystems that include social-semiotic contexts.
[2] In the field of cognitive linguistics, this mapping of the relations of a ‘source’ domain (here: technology) onto a ‘target’ domain (here: neurology) is known as conceptual metaphor. The mapping here is within the larger mapping of ‘an organism is a machine’: ‘a brain is a computer (that processes inputs, such as language)’, ‘a brain region is a processing unit’, ‘nerve fibres are communication lines’, and so on.
[3] Compare the maxim, sometimes attributed to Einstein, that the best model of a duck is a duck, and if possible, the same duck.
[4] See the discussion of ‘truth’ later in this chapter.