Saturday, 18 February 2012

Brain: Biology Or Technology?

Historically, the neurological system has been frequently modelled as if it were a technological artefact. For example, the brain has been modelled as if it were a factory production line, and more recently as a computer, with inputs being “processed” in one section or module, creating outputs which then become inputs for other sections or modules; nerve fibres have been modelled as if they are the wires of a telephone network, carrying “messages” from one region to another[1]; and memory has been modelled as if it were a storage location from which information is “retrieved”, or “accessed”, and acted upon, first as if it were an office filing system, and latterly as if it were a place in a computer: “information is stored in memory”.

Using technology as a model for the neurological system is an example of semiotic generalisation[2], in the sense used in previous chapters, namely: meanings evolved in one context spread into another where they are proffered for selection. But given that neurological systems are phenotypic products of biological evolution and technological systems are phenotypic products of semiotic evolution, a more self-consistent and parsimonious approach would be to apply biological models to phenomena deemed to be biological systems, as Edelman (1989) has done with his Theory of Neuronal Group Selection (TNGS).[3] Biological models of biological phenomena are more likely to survive long-term semiotic selection than other models, if only because they are smaller innovations — just as smaller genetic innovations are more likely to survive biological selection. Selection against the technological model of the brain occurs, inter alia, every time a specialist in the field decides that the cost of the approach exceeds its benefits in terms of experiential consistency.[4]


[1] Where this model is used, there is often a failure to distinguish between information as the flow of electro-chemical difference in neural circuits and information in the sense of categories of experience arising from a substrate of brain activity in individuals in ecosystems that include social-semiotic contexts. 

[2] In the field of cognitive linguistics, this mapping of the relations of a ‘source’ domain (here: technology) onto a ‘target’ domain (here: neurology) is known as conceptual metaphor. The mapping here is within the larger mapping of ‘an organism is a machine’: ‘a brain is a computer (that processes inputs, such as language)’, ‘a brain region is a processing unit’, ‘nerve fibres are communication lines’, and so on. 

[3] Compare Einstein’s maxim that the best model of a duck is a duck, and if possible, the same duck. 

[4] See the discussion of ‘truth’ later in this chapter.

Brain Organisation

On the model presented here, the brain is organised as a supervenience hierarchy, such that higher levels of organisation emerge from interactions at lower levels of organisation during development and experience. There is no interaction between levels; for example, higher levels of organisation do not control lower levels (‘downward causation’) any more than a clock controls the molecules on which it supervenes as a level of organisation, because levels of organisation are complementary perspectives on the same phenomenon. On the TNGS model, the notion of ‘control’ is better reinterpreted in terms of selectional interaction between systems at the same level of organisation, if only because it is a ‘category error’ to map a hierarchy of degrees of control onto a hierarchy of organisational levels.

In considering interactions between systems at the same level of organisation, each system is necessary but not sufficient for the function it performs, just as a specific gene, such as one “for” iris colour, is necessary but not sufficient for its function (phenotypic expression); a gene only functions in the context of (the functions of) other genes, and its function is distinguished by contrast with the functions of other genes in the genome. Similarly, neurological functions are carried out in the context of other functions and each function is distinguished by contrast with those other functions. Absence or disruption of a specific function results from the absence or disruption of a necessary condition for its performance, as the absence or mutation of a gene results in the absence or variation of its phenotypic expression. By identifying loss of brain function with localised anatomical damage, some have argued that those functions are carried out in those areas, as if such areas are sufficient for the function. However, as brain imaging shows, even for something as “simple” as reciting digits, neural activity is distributed over many regions of the brain, and the precise locations of activity vary from one individual to the next.

The Categorisable And The Categorising Are Distinct

A domain that can be categorised is distinct from any categorising of it.[1] Categories are not “out there” to be discovered, but are established through the interaction of recognition systems with a categorisable domain, which potentially includes the categorising processes themselves. The categorisable domain is potential, the categorising is a process. 


[1] Models don’t “construct reality” — they are organisations of categorisations of the categorisable. All models are organisations of categorisations, not of the categorisable.

The Perceivable Is Unlabelled

Gardner (1970: 227): 
The realisation that the world by itself contains no signs — that there is no connection whatever between things and their names except by way of a mind that finds the tags useful — is by no means a trivial philosophic insight. 

On this model, the perceivable[1] world isn’t labelled for categories, but it is of survival value to organisms to categorise the world in some ways rather than others. The perceivable world favours (selects) some ways of categorising over others, varying for species, but it does not follow from this that it contains categories independent of a categorising process. 

The contexts in which organisms are embedded are categorisable; they are recognition potential. From the perspective of categorisation, such domains are potential: they have, for example, the potential to kill, to end the categorising. They are a flux of varying probability. 

Categorising arises from interactions between, on the one hand, domains that can be categorised, that involve difference (information), and on the other, systems that can categorise.[2] Domains that can be categorised are differentiable: they have the potential to be differentiated by a categorising system. To experience a perceivable context is to differentiate it. The perceivable is both “experienceable” (categorisable and “act-upon-able”) and experienced (categorised and acted-upon).


[1] Note that using the word phenomenon for ‘a perceivable’ would be confusing here. The (nontechnical) meaning is ‘a fact, occurrence, or circumstance observed or observable’ (Macquarie Dictionary 1992:1329), while Kant distinguishes phenomenon: ‘a thing as it appears to and is constructed by us’, from noumenon ‘a thing in itself’ (ibid).

[2] Strictly speaking, perceiving is an interaction between the universe of difference (in general) being perceived and a part of that difference (perceived by the modeller to be) organised as a categoriser and doing the perceiving.

The Participants In The Categorisation Process

The categorisation process is a systematic interaction between: 

(1) the perceivable: what can be detected and categorised; 

(2) a means of detecting the categorisable, specifically: light in the case of visual perception[1]; and 

(3) a recognition system that can categorise what it can detect through sensory modalities like vision, hearing, smell, taste, touch. 

The distinction between (1) and (2) does not hold for all sensory modalities. For example, in the case of touch, there is no intermediary between the perceivable surface and the tactile sensory detectors. The same is true for taste, where there is no intermediary between the perceivable chemicals and the gustatory sensory detectors. In the case of smell, there is no intermediary between the perceivable chemicals and the olfactory sensory detectors, but the perceivable chemicals may emanate from a source that is not otherwise directly perceivable. This is also the case for hearing: there is no intermediary between the perceivable air compression waves and the auditory sensory detectors, but the perceivable air compression waves may emanate from a source that is not otherwise directly perceivable.

Crucially, the distinction does hold for the primary modality of humans: vision. Visual perception does involve an intermediary between the perceivable and the visual sensory detectors. What makes contact with the sensory modalities, photons, is distinct from the perceivable being categorised visually, but is the means by which the perceivable is detected. The one exception here is the visual perception of light sources, which patterns like smell and hearing, where the perceivable light emanates from a source that may not be otherwise directly perceivable. Visual perception is both atypical and the primary modality through which humans categorise the perceivable, which gives unique status to the rôle of photons in human experience.[2]

Failure to make this distinction between the visible and the visual means of perceiving the visible has resulted in confusions like “colour (unlike other properties) exists only in the head of the observer”. Colour perception involves the detection and categorisation of difference (categorised by other means as light frequencies) reflecting off and refracting through the visible, and depends, inter alia, on the light frequencies emitted by the source and the molecular arrangement of the visible. 


[1] A similar example in some species is the use of echo location, where the perceiver emits the radiation that reflects off surfaces in its vicinity.

[2] More of which later.

Perceiving As Correlating Difference

The general principle of a recognition system is the if…then relation: if event x in the recognisable domain, then event y in the recognition system. Recognition by neurological systems involves matching categorisables with neuronal activities that categorise. For perception, this means correlating difference (information) outside the system with difference (information) inside the system. The identity of any categorising activity within the system is given, therefore, by contrast to the other categorising activities within the system. Each categorising has no meaning without reference to other categorising.

How Difference Is Correlated

In the selectionist model of biological evolution proposed by Darwin and Wallace, Nature selects some variants (potentials) at the expense of others, depending on how well each functions in the contexts in which they have to function. These variants are the ones most likely to "happen again" in the next generation. In the selectionist model of brain function proposed by Edelman, the TNGS, in perceptual categorisation, the perceivable selects some variant neural events (potentials) at the expense of others. Selection involves the strengthening of synaptic connections between neurons in groups in maps, thereby increasing the probability that such configurations will fire again. These neural variants are the ones most likely to "happen again" in the next generation of neural firing in response to specific sensory detections of difference. 
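The selectionist principle can be illustrated, purely as a toy sketch and not as part of the TNGS itself, with a simulation in which variant firing patterns compete, the ‘perceivable’ (here a fixed stimulus) selects the best-matching variants, and selection strengthens their weights so that they are more likely to fire again. All names and numbers here are invented for the illustration.

```python
import random

random.seed(0)

# A toy population of variant "firing patterns": each pattern is a tuple of
# bits, and each carries a synaptic weight that biases how likely it is to
# fire again (duplicates collapse in the dict; that is fine for the sketch).
variants = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(20)]
weights = {v: 1.0 for v in variants}

stimulus = (1, 0, 1, 1, 0, 0, 1, 0)  # the "perceivable" difference detected

def match(pattern, stim):
    """How well a firing pattern correlates with the detected difference."""
    return sum(p == s for p, s in zip(pattern, stim))

for generation in range(50):
    # Patterns fire with probability proportional to their current weights...
    fired = random.choices(list(weights), weights=list(weights.values()), k=5)
    # ...and the stimulus selects: the best-matching firing is strengthened,
    # increasing the probability that this configuration will fire again.
    best = max(fired, key=lambda v: match(v, stimulus))
    weights[best] *= 1.5

winner = max(weights, key=weights.get)
print(winner, match(winner, stimulus))
```

On this sketch the stimulus, not an instructing teacher, does the selecting: no category is copied into the population; pre-existing variation is merely differentially amplified.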

This model can be understood in terms of the grammatical concept of ergativity: each perceptual categorising process is actualised through a medium, a brain as neurological recognition system, and caused by an external agent, a domain that can be detected and categorised.[1]

Note, by the way, that this is in stark contrast to an understanding based on the grammatical concept of transitivity. On such a model, a categorising process carries through from an actor, an external domain, to a goal, a brain as neurological recognition system. That is to say, categories flow into brains from the outside. Edelman (1992) labels this position instructionism since it involves seeing categorisation as a process whereby the outside instructs the brain, and he opposes it to selectionism, pointing out that instructionism is Lamarckism applied to the brain.[2] In the Lamarckian model of biological evolution, properties flow from the world into genomes, such that acquired characteristics can be inherited by offspring. That is, a coded world acts directly on a specific genome and changes it. 

There are probably many reasons why the instructionist/transitive model should have been previously favoured in modelling the brain. For example, it is more obviously recognisable from visual experience: we see that things move from one location to another.[3] Selectionism is more subtle, since it requires a generational timescale to observe the effects of selection in a visible domain. The view of human-as-agent may also have made the acceptance of the phenomenon-as-agent model less probable. 


[1] Phenomena make us sense: see, hear, smell, taste, feel (perception); phenomena make us feel (affect); phenomena make us think (cognition); phenomena make us want (desideration).

[2] Instructionism fails to explain, for example, why a patient of (Oliver) Sacks, ‘Virgil’, who receives sight for the first time at age 50 cannot make sense of what he sees.

[3] Cf Lakoff’s (1987) source-path-goal schema.

Categorising As Meaningful For An Organism

According to the TNGS, categorising occurs ‘on value’. That is, categorising neuronal systems are linked with value systems — cholinergic and aminergic — of the hedonic centres and limbic system, whose functions include homeostasis and appetites.[1] This means that categorising processes occur in the context of the current state of the organism, and the effect of this is to make categorising meaningful — to matter — to the categorising organism.


[1] Through this linkage, inherited value systems bias the categorising process in ways that have been of advantage to ancestors; the value systems of those who do not survive to reproduce are not passed down the generations of a biological lineage.

Brain Function Is Organised By The Recognisable

The brain as recognition system is organised by the domains that it recognises. The domains that it recognises are both outside and inside the body. The outside, which can include the meaningful expressions of others, is recognised via sensory sheets which detect external difference. The inside includes domains both outside and inside the brain. Outside the brain includes the musculo-skeletal system, which it detects via the peripheral nervous system, and homeostatic systems, to which it is connected via the limbic system. Inside the brain includes all the processes involved in recognising domains both inside the body but outside the brain and outside the body — an ability that varies across animal species.

The recognition process, as the selection of variants by the domains being recognised, can be understood as the brain adapting to those domains: to the ecological context of the body (which includes the behaviours of other bodies), to the somatic context of the brain, and to the brain’s own recognition processes. Just as “Nature” selects genetic potential-for-development in the evolution of a species, “Nature” selects neurological potential-for-behaviour in the evolution of a neurological system embedded in the body of an organism in its lifetime.

No Images, Representations Or Symbols Inside Heads

On this model, there are no images in the brain (or mind[1]) — there are neuronal firing patterns. But some neuronal firing patterns correlate with differentiations of the visually perceivable. The firing of such patterns correlates with experiencing the differentiation of the visible, with having visual experiences.[2] Regenerations of past firing patterns can result in visual experiences in the absence of the originally experienced perceivable. By generating portions of different past visual experiences as an integrated whole, it is possible to create new visual experiences that have not previously been experienced — to imagine. And such simulations can be expressed through the skeleto-muscular system as pictorial images that others can experience.

More generally, to be congruent[3], there are no symbols or representations (things) in brains[4]; there are cells and tissues and the functions they perform (processes). Neurological systems make symbols and representations possible, but only through skeleto-muscular action, as perceivable expressions which can be categorised and recategorised and re-expressed, and so on. Lamb (2005) has expressed this point clearly in modelling language with respect to neurological systems:
Likewise, if we consider production of speech, no one has ever found any evidence at all, neurological or otherwise, to support an hypothesis that it operates by the use of symbols represented somehow in the brain. The more realistic alternative is to suppose that what is internal is not symbolic representations of words or morphemes or the like, but the means of producing such forms (as speech or writing). 


[1] Sometimes the word ‘mind’ is used in these contexts rather than ‘brain’. However, since the mind is not detectable as a (material) thing, it cannot be construed congruently as a place in the material universe that science models, and it is thus incongruent to speak of ‘in the mind’. See the discussion of ‘mind-as-process’ later in the chapter.

[2] See Edelman & Tononi (2000: 202-3).

[3] Those who might claim to be speaking metaphorically (eg Hofstadter, Ramachandran) do not provide a congruent reformulation.

[4] The idea that symbols exist in brains is consistent with the instructionist model and with naïve realism: the belief that the world consists of true or real categories, and these are represented (accurately or not) in brains.


Semiotic systems are both a means of modelling the categorisable and one domain within the modelling. A model of semiosis is part of modelling the categorisable semiotically. For example, a semiotic model of the categorisable may divide the categorisable into two distinct domains: the material and the semiotic.

To Refer To The Categorisable Is To Use Categorisations

To refer to the categorisable is to use specific categorisations of it, to express a specific model of it. For example, to refer to the perceivable as ‘the environment’ (or ‘context’ or ‘the perceivable’) is to categorise it within a larger model of meaning-making. Because of this, there is no “pre-theoretical” position that can be adopted on any subject, though some stances may be modelled as “pre-theoretical” for social-semiotic purposes. 

Further, no categorisables are ineffable[1], since any categorisable can be modelled and semiosis is modelling. However, some categorisables are modelled as being ineffable for social-semiotic purposes. 


[1] Cf Wittgenstein.

Meaning As A Process That Occurs In The Perceivable World

In modelling the categorisable, such as two domains: the material and the semiotic, semiosis is modelled as a process that goes on in the perceivable universe, in the same sense that galaxy formation and supernova explosions are processes that go on in the perceivable universe. Asking and answering the question ‘Why is there something instead of nothing?’ are processes that go on in the perceivable universe. Making meaning of the perceivable universe is something that organisms do, part of the universe recognising itself. Meaning is not of the categorisable domain of which meaning is made, nor does it transcend the domain of interactions (the perceivable universe) in which it occurs.

Models As Systems Of Relations

Physics is not events, but observations; relativity is the understanding of the world, not as events, but as relations.[1]
Smolin (1996: 289-90):
Indeed, for me the most important idea behind the developments of twentieth-century physics and cosmology is that things don’t have intrinsic properties at the fundamental level; all properties are about relations between things. This is the basic idea behind Einstein’s general theory of relativity, but it has a longer history; it goes back at least to the seventeenth-century philosopher Leibniz, who opposed Newton’s ideas of space and time because Newton took space and time to exist absolutely, while Leibniz wanted to understand them as arising only as aspects of the relations among things. For me, this fight between those who want the world to be made out of absolute entities and those who want it to be made only out of relations is a key theme in the story of the development of modern physics. Moreover, I’m partial. I think Leibniz and the relationalists were right, and that what’s happening now in science can be understood as their triumph.
To model is to systematise categorisations, the valeur of each categorisation being defined by its relations to other categorisations. Individual categories are necessary but not sufficient: the function of each depends on the function of others, just as the function of a neuronal group depends on the functions of other neuronal groups, and the function of a gene depends on the functions of other genes. The process of categorising (analysis) distinguishes individual units, but these are not categorised without reference to what they differ from. Information is difference, in relation to other difference. 


[1] The Ascent Of Man episode 7: The Majestic Clockwork.

Modelling Is Relating

Like all semiosis, modelling, including scientific modelling, involves relating categories of experience to each other with various degrees and scopes of consistency. Mathematical equations do this by relating measurements, including changing quantities, to each other. In formal systems, such as geometry, new unknown relations are reasoned from known relations, thereby expanding the system of relations by making the implicit explicit.
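The expansion of a formal system of relations by making the implicit explicit can be sketched computationally, for instance as the transitive closure of a set of known relations. This is an invented illustration, not a claim about any particular formal system; the relation pairs are hypothetical.

```python
def transitive_closure(relations):
    """Derive every relation implied by chaining the known relations.

    `relations` is a set of (a, b) pairs read as 'a relates to b' for some
    transitive relation (e.g. 'is longer than'); the pairs below are
    hypothetical examples.
    """
    closure = set(relations)
    while True:
        derived = {(a, d)
                   for (a, b) in closure
                   for (c, d) in closure
                   if b == c and (a, d) not in closure}
        if not derived:       # nothing further is implicit: fully explicit
            return closure
        closure |= derived    # previously implicit relations made explicit

# Known relations: AB is longer than CD, and CD is longer than EF.
known = {("AB", "CD"), ("CD", "EF")}
print(transitive_closure(known))  # now also contains ("AB", "EF")
```

The derived pair was already implicit in the known relations; the procedure merely expands the system by making it explicit.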

The Metafunctional Dimensions Of Modelling Semiotically

The conditions of modelling the categorisable semiotically can be understood in terms of the metafunctional dimensions of meaning-making: the ideational (which includes the experiential and the logical), the interpersonal and the textual.

Firstly, models are located in the space of ideational variation. Models are organised through experiencing the categorisable.[1] Categorisable difference selects the categories that may or may not be organised into a model of the categorisable. So models depend on the recognition functionality of the neurological system and the prosthetic technologies that extend its recognition abilities, and they depend on the specific experience trajectories of modellers. 

Secondly, models are located in the space of interpersonal variation. Models are organised by values that bias the orientation of modellers to different categories of experience. Limbic system functions that have been of adaptive value to ancestors invest the categories (that are selected by categorisable difference) with positive or negative value, and the complexification of categories within individuals — through the categorisation of categories and their differentiation through semiotic interactions with others — is the complexification of categorial values. Models are motivated organisations of categorisations. 

Thirdly, models are located in the space of textual variation. Models are organised through selective attention to value-categories: focusing on some categories as relevant, and filtering out others as irrelevant.[2] Some value-categories are prominent threads in the weaving together of meanings, while others are thin or absent. Because modelling occurs through a perspectival lens, it is both enabling and disabling: a model is conditional on the assumptions on which it is organised.[3] Models are ‘ways of seeing’ (Berger 1972). 


[1] The ‘categorisable’ includes categorisations of the categorisable, categorisations of categorisations of the categorisable, and so on.

[2] This relates to Pike’s notion that a theory is like a window that only faces in one direction. 

[3] The influence of one’s perspectival lens on categorising was shown by a seminal psychology experiment by Rosenhan (reported in Slater 2004) in which subjects faking their way into mental institutions were not detected by most psychiatrists.

Metafunctional Consistency In Meaning-Making

One aspect of “truth”[1] is consistency in meaning-making, and given the metafunctional dimensions of meaning-making, this entails metafunctional consistency in meaning-making: experiential, logical, interpersonal and textual.[2] To be ideationally consistent in meaning-making is to be consistent both in the representation of experience and in the logical relations said to obtain between representations of experience.[3] To be interpersonally consistent in meaning-making is to be consistent in the values given to — the stance taken on — ideational meanings. To be textually consistent in meaning-making is to be consistent in what is attended to as relevant with regard to ideational and interpersonal meanings. Different consistencies — including tensions between construals of experience, values inherent in construals, and attentions paid to construals — create diversity in modelling.


[1] The word ‘truth’ is a noun formed from the adjective ‘true’, which construes it metaphorically as an abstract thing in itself rather than congruently as a description of relations between things. 

[2] From interpersonal and textual perspectives, ideational distinctions are a means of elaborating what is valued and focussed upon. 

[3] The word ‘real’ is often used to mean ideationally true, but it is often also extended to mean that the specific categorisations of the description exist as properties of the perceivable, independent of the modelling framework.

‘Consistency’ Means ‘Mutual Fit’

Consistent meaning-making is meaning-making that fits in the context of other meaning-making. Metafunctionally, this is:

(1) experiential meaning-making fitting in the context of other experiential meaning-making: construals of experience fitting other construals of experience;

(2) logical meaning-making fitting in the context of other logical meaning-making: logical relations (between construals of experience) fitting other logical relations (between construals of experience);

(3) interpersonal meaning-making fitting in the context of other interpersonal meaning-making: valuings (of construals of experience) fitting other valuings (of construals of experience); and

(4) textual meaning-making fitting in the context of other textual meaning-making: saliences (of valuings and construals of experience) fitting other saliences (of valuings and construals of experience).

The Variable Scope Of Semiotic Consistency

Consistency is a gradable property, varying from as low a value as ‘not being inconsistent’ to ‘being in harmony’ to as high a value as ‘being wholly consistent’, and models may vary in terms of consistency within some domain of categorising. The “truth” of a model depends on the degree to which it fits other models. The scope within which meanings may be consistent varies from the very local to the more global. For example, ideational construals may be consistent within or across[1] fields, within or across tenors, within or across modes. Interpersonal values may be consistent within or across fields, within or across tenors, within or across modes. Textual saliences may be consistent within or across fields, within or across tenors, within or across modes.


[1] This book is an attempt to establish some degree of ideational consistency across fields.

The Phylogenetic Function Of Semiotic Consistency

Metafunctional consistency is the selection principle in the evolution of modelling. Models are selected for (adopted, used, believed) and selected against (rejected or ignored) on the basis of ideational, interpersonal and textual consistency. There is a biological basis to the origin of this process to the extent that consistent modelling, in terms of construing experiences and assessing their relative value and relevance, increases the survival and reproductive prospects of the modellers.[1]

The evolution of models is a process of each fitting, adapting to, other models (via their perceivable expressions) with which they interact (are correlated).[2] Models are adaptations to other models.[3] As each model adapts to others, the environment to which other related models adapt changes, so that the evolution of models is a continual pursuit of the moving target of fitting. 

So the sense in which the categorisable selects certain models over others is as follows. The categorisable (which includes categories of the categorisable) selects certain categorisings over others, and each model is built from categorisings that fit each other, in a process of each model adapting to similarly constructed models. 


[1] Note that natural selection favours not just genes for phenotypic traits but also genes in organisms whose learnt behaviours result in more offspring, even though those behaviours are not the direct expression of genes. Cf Baldwin Effect. 

[2] Just as, in biological evolution, gene complexes adapt (via phenotypes) to other gene complexes with which they interact. 

[3] Even though a model is not the model, models are not arbitrary in the sense that any is as good as the next, since not all models equally fit others with which they are correlated (interact).

Aligning With Specific Metafunctional Consistencies

Meaning potential thus involves a web of different networks of metafunctional consistencies: different construals, different values, different foci of attention. Meaning-makers variously align (consistently or inconsistently[1]) with different networks of consistency within the overall web of variant consistencies. Those who share a specific network of consistency potentially form a community of ‘like-minded’ individuals with a ‘common interest’: a community formed around a way of construing experience, a community formed around a way of valuing a construal, a community formed around a way of grading the relative importance of construals and values.[2] Since each individual can align (consistently or inconsistently) with multiple networks, each can belong to multiple communities, “us”, and disassociate from multiple communities, “you” or “them”.


[1] There is, of course, the question of consistency between what is said (semiotic behaviour) and what is done (non-semiotic behaviour). 

[2] If each variant consistency is located along three dimensions: the ideational, interpersonal and textual, individuals that align with specific consistencies can be located at different points in the metafunctional space defined by those dimensions; communities correspond to clusters in that space.
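The picture in footnote 2 can be sketched as a toy illustration (all individuals, coordinates and the radius threshold are invented): each meaning-maker is a point in a three-dimensional metafunctional space, and communities emerge as clusters of nearby points.

```python
from math import dist

# Each individual is a point (ideational, interpersonal, textual):
# the coordinates are invented alignments with particular consistencies.
individuals = {
    "A": (0.1, 0.2, 0.1),
    "B": (0.2, 0.1, 0.2),   # close to A: the same community
    "C": (0.9, 0.8, 0.9),
    "D": (0.8, 0.9, 0.8),   # close to C: a different community
}

def communities(points, radius=0.5):
    """Greedily group individuals whose positions lie within `radius`."""
    groups = []
    for name, p in points.items():
        for group in groups:
            if any(dist(p, points[m]) <= radius for m in group):
                group.append(name)
                break
        else:  # no existing cluster is near enough: found a new community
            groups.append([name])
    return groups

print(communities(individuals))  # [['A', 'B'], ['C', 'D']]
```

Since each individual could equally be scored against several networks of consistency at once, membership in multiple communities would correspond to proximity to multiple clusters.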

Facts And Objectivity

The term ‘fact’ is a label for a model or proposition whose ideational consistency is regarded as certain by an individual or community.[1] That is, the term ‘fact’ expresses an interpersonal stance toward an ideational construal. Similarly, the expression ‘just a theory’ is a label for a model whose ideational consistency is regarded as less certain by an individual or community. The scale of certainty is a dimension within the interpersonal system of modalisation: the semantic space between ‘yes’ and ‘no’ (Halliday & Matthiessen 2004: 616-25). 

On the other hand, the term ‘dogma’ is a label for a model or proposition whose ideational consistency is regarded as obligatory by an individual or community. The scale of obligation is a dimension within the interpersonal system of modulation: the semantic space between ‘do!’ and ‘don’t’ (ibid). Importantly, terms like ‘fact’, ‘only a theory’ and ‘dogma’ express a relation between a model or proposition and an individual or community. 

The terms ‘subjective’ and ‘objective’ similarly express an interpersonal orientation toward metafunctional consistency. Objective and subjective are ways of presenting propositions.[2] To present a proposition as ‘objective’ or ‘value-free’ or ‘reality’ is itself a value-laden claim about the proposition, one whose subjectivity is sometimes disguised by being in tune with the shared values of a community, and whose interpersonal function is to remove the negotiability of the proposition. 


[1] Since facts are what we are certain of, they are propositions we believe to be true. 

[2] The subjective or objective orientations may be expressed explicitly or implicitly.

The Generation Of Variant Models

Variation (for selection) is created by recombining what has gone before into novel arrangements. This includes novel arrangements of meaning and novel arrangements of meaning with regard to the context in which they function. The biological analogues of these are, on the one hand, genetic mutation and recombination, and on the other hand, the diffusion of species to new habitats, such as the expansion of plants, invertebrates and vertebrates from aquatic into terrestrial environments.

(1) Recombining Potentials 

New variants of a model can be created by recombining meanings within the model.[1] If the degree of recombination is sufficient, the result may be a model that can no longer be considered a variant of the original. In science, this can happen when students rebel against a research tradition and build a model on the basis of new questions. As Bronowski[2] urges:
It’s important that students bring a certain ragamuffin irreverence to their studies. They’re not here to worship what is known, but to question it.

(2) Recombining Potential And Context: Generalisation 

One way to create a new model is to take an existing one and spread it to a new context. This is a process of generalisation in the sense that the scope or range of the model is expanded: from the specific functional context in which it evolved into other contexts where other models may have already evolved. A recent example of this is Edelman’s selectionist approach to neuroscience, selectionism having been successful as a model in immunology as well as evolutionary biology. Another example in neuroscience is the technological model of brain function that was mapped across from the field of computer science. A more ancient example is the mapping of body parts onto the landscape, such that rivers, for instance, have heads, mouths and arms. This body-environment mapping was reversed in mediæval Europe, as when the four elements, earth, fire, air and water, were mapped onto the body as the four humoral fluids, black bile, yellow bile, blood and phlegm, respectively, yielding the four temperaments: melancholic, choleric, sanguine and phlegmatic.[3]


[1] This is easier said than done, of course. Models are basins of attraction: once a model gains currency, it drags other attempts to theorise into it. 

[2] The Ascent Of Man episode 11: Knowledge Or Certainty

[3] Bartlett (2001: 204): 
Mediæval thinking was dominated by the theory of ‘correspondences’, derived not from Christian revelation but from the idea, Greek in origin, that an explanation, to be part of the divine plan, had to be economical, symmetrical and æsthetically satisfying. What it ignored was the Greeks’ readiness to test a hypothesis by observation and experiment.

The Selection Of Variant Models

Kœstler[1] (1968: 64):
However, even if European philosophy were only a series of footnotes to Plato, and even though Aristotle had a millenium stranglehold on physics and astronomy, their influence, when all is said, depended not so much on the originality of the teaching, as on a process of natural selection in the evolution of ideas. Out of a number of ideological mutations, a given society will select that philosophy which it unconsciously feels to be best suited for its need.

Monod (1971/1997: 165-6):
This selection [of ideas] must necessarily operate at two levels: that of the mind itself and that of performance.
The performance value of an idea depends upon the change it brings to the behaviour of the person or the group that adopts it. The human group upon which a given idea confers greater cohesiveness, greater ambition, and greater self-confidence thereby receives from it an added power to expand which will insure the promotion of the idea itself. Its capacity to “take”, the extent to which it can be “put over” has little to do with the amount of objective truth the idea may contain. The important thing about the stout armature a religious ideology constitutes for a society is not what goes into its structure, but the fact that this structure is accepted, that it gains sway. So one cannot well separate such an idea’s power to spread from its power to perform.
The “spreading power” — the infectivity, as it were — of ideas, is much more difficult to analyse. Let us say that it depends upon preexisting structures in the mind, among them ideas already implanted by culture, but also undoubtedly upon certain innate structures which we are hard put to identify. What is very plain, however, is that the ideas having the highest invading potential are those that explain man by assigning him his place in an immanent destiny, in whose bosom his anxiety dissolves.


[1] Kœstler (1979: 523ff) thought of the evolution of ideas as a continuation of biological evolution, involving mutation, selection, survival-value, and adaptation to a ‘period’s intellectual milieu’.

Controlling Variation: Institutions As Model Reproduction Nurseries

Depew & Weber (1996: 395):
Controlling what counts as phenomena is a real function of research traditions…
In the evolution of models, a specific model is selected every time it is used. One general means of influencing the selection of a model is the teaching of specific variants in institutions so that they are reproduced widely across a population and passed on to succeeding generations of model users. As semiotically organised social systems, institutions also differentially reward model users, depending on the model they use. By conferring social status and/or material wealth on the users of their favoured models, institutions make the use of the model more likely in the social domains under their control.

Selection And Certainty

A specific model is selected every time it is used, and usage implies some degree of belief in the model. Belief can range from full acceptance to a suspension of disbelief, and so can be described as a scale of certainty.[1] This corresponds to the previously discussed grammatical system of modalisation (eg Halliday 1985), the region of uncertainty between ‘yes’ and ‘no’, as set out in the table below.


value	subjective	objective
high	I know that…	it’s certain that…
median	I believe that…	it’s probable that…
low	I suspect that…	it’s possible that…

Table 11.1 The Dimensions Of Modalisation

The table shows that statements of personal belief, from the high value ‘I know’ to the median value ‘I believe’ to the low value ‘I suspect’, are subjective metaphorical variants that separate the proposition in question from the modalisation value. The modalisation (probability) is rendered as a mental process and the proposition as a metaphenomenon projected from that mental process.[2]

Thus, from this perspective, belief in a model is an interpersonal stance towards that model. As such, belief, and therefore the selection of models, involves the users of models acting upon each other in ways that either increase or decrease the certainty of potential users towards specific models, and therefore, the probability that such models will be used.


[1] As Halliday (1994: 362) points out: ‘we only say we are certain when we are not.’

[2] See Halliday & Matthiessen (2004: 440-1).

Acting On Each Other: Modulation And Modalisation

By persuading others we convince ourselves.

Because the selection of a model is the usage of that model, any action that affects the probability that a specific model will be used or discarded amounts to selection pressure in the evolution of models. Selection pressure can be understood in terms of varying proportions of modulation and modalisation. 

At the modulation end of this scale, potential users are more or less obliged by others to use one model in preference to another.[1] This may depend on the power relations between the obligators and the obligated, but the obligation can come from solidarity pressures within a community of peers as well as from those with the greater — for example: institutional — authority. Such models are at risk of becoming dogmas. 

At the modalisation end of this scale, in communities where individuals are free to choose, the varying uncertainty of potential users towards the range of models on offer makes the choice of model usage negotiable, arguable. In this case, the action taken is to increase or decrease the certainty of others with regard to specific models. The means of doing so ranges from providing evidence[2] consistent with the model, such as interpretations of experimental results, to the rich panorama of fallacious argumentation. These include, for example, what Dennett (1993: 401) refers to as Philosopher’s Syndrome: mistaking a failure of imagination for an insight into necessity[3] (‘I can’t imagine how x could be, therefore y must be true’), what Dawkins (1995: 70) refers to as the Argument From Personal Incredulity (‘I can’t believe x, therefore y must be true’), and the many and various explicit and implicit forms of Argumentum Ad Hominem (attacking the person instead of critiquing the argument).[4]


[1] My certainty (modalisation) is your obligation (modulation).

[2] When belief without evidence is valued, it is termed ‘faith’; when it is not valued, it is termed ‘credulity’.

[3] An example of Philosopher’s Syndrome in the field of linguistics is the so-called ‘poverty of the stimulus’ argument intended to “justify” the simplistic (Platonist) claim that language must be “innate”.

[4] The well-documented list of these seems endless, but one further family of arguments that exploits user uncertainty in relation to models deserves mention here: the subtle shift from semiotic consistency-as-arbiter (see further in the text) to “I am the arbiter of the evidence you find”.

Metafunctional Consistency And Selection

The probability that a model will be selected by potential users is higher if it is assessed to be consistent with other ideational construals, interpersonal values and/or textual foci of which the potential users are most certain. As already discussed, model building, like all semiosis, is a matter of relating (fitting) specific meanings, qualities and quantities, to each other. While the rôle of ideational consistency is usually explicit in arguments about alternative models, the rôle of interpersonal and textual consistency is usually left implicit. The three types will be discussed here in turn.

Ideational Consistency As Overtly Influencing The Probability Of Selection

The scope of assessing ideational consistency can be restricted to a specific field of modelling, or can extend across fields. It has two components, logical consistency and experiential consistency, both of which are necessary for ideational consistency, but neither of which is sufficient in itself. That is, to be ideationally consistent, a model must be both logically consistent, internally and in relation to other models within the scope of meaning-making being assessed, and consistent in modelling experience, in its own terms and in relation to other models within the scope of meaning-making being assessed.[1]

The scientific method is an attempt to restrict the selection of models to only those that are ideationally consistent. In the modelling of the simplest systems, the principal means of influencing selection — by increasing user certainty in a specific model — is the use of experiments that test the experiential consistency of those variants believed to be logically consistent. It is only in modelling the simplest of systems, or the simplest domains of systems, that the range of potential interpretations of experimental results is narrowed from many to manageably few. 

Experiments are ostensibly designed to falsify hypotheses, not to verify them.[2] Indeed, for Popper, a model must be falsifiable to be considered scientific.[3] Failing to falsify an hypothesis is failing to demonstrate an inconsistency between it and other models of which potential users are reasonably certain. 


[1] The point that a model has to fit with those of its time is exemplified by the case of Aristarchus of Samos (310-230 BC), who proposed a heliocentric system of what we now call the solar system. It took 18 centuries for it to begin to fit, beginning with the work of Copernicus (1473-1543), then Kepler (1571-1630), Galileo (1564-1642), and Newton (1642-1727).

[2] A common example of the misrepresentation of the experimental method is the situation where an experiment whose design does not provide a means of falsifying a model (or its rivals) is interpreted as a verification of the said model.

[3] A strict application of this principle would probably destroy most of what is regarded as scientific, including whole disciplines.

Interpersonal And Textual Consistency As Covertly Influencing The Probability Of Selection

A model is not just an ideational construal of categorisable experience. Each construal involves, to various degrees, specific (interpersonal) values, rather than others, and specific (textual) foci of attention, rather than others. Because of this, arguing to influence the selection of models also involves the assessment of both the consistency of the values of the model with the values held by the potential users, and the consistency of the attentions emphasised in the model with what the potential users regard as deserving attention. 

Selection for interpersonally consistent values can result in an ideationally consistent model being rejected by potential users who judge it to be of negative interpersonal value — that is, to be inconsistent with the values they judge positively. For example, in the sciences, those who value reductionism or determinism negatively, especially with regard to modelling aspects of humanity, will be motivated to reject any model they judge to be reductionistic or deterministic. However, because arguments against scientific models are ostensibly required to be couched in terms of ideational meaning, the (interpersonal) motivation for the argument, and the (interpersonal) value at the nub of the dispute, will often be left unstated. 

On the other hand, selection for interpersonally consistent values can result in an ideationally inconsistent model being selected by potential users who judge it to be of positive interpersonal value — that is, to be consistent with the values they judge positively. If such judgements are made by practitioners of high standing in the particular field, the probability that others will also select the model is increased, since it increases the certainty (confidence) in the model in the community. In extreme cases, where the interpersonal values of a specific model are thought to depend fundamentally on the ideational construal, the model will be held dogmatically. 

Selection for textually consistent foci of attention can be exemplified by the oscillation in Western philosophy and science between emphasising the rôle of the mind (reason) in modelling, as in the rationalism of Descartes, and emphasising the rôle of the body (senses) in modelling, as in the empiricism of Locke. Different foci of attention in different ideational construals are also likely to be associated with different interpersonal values. This can be exemplified by the different values associated with focussing on either ‘Nature’ or ‘Nurture’ in modelling behaviour. This further enables the rejection of ideationally consistent models and the selection of ideationally inconsistent models.[1]

The evolutionary direction of ideational construals and textual attentions is thus guided by interpersonal values; the evolutionary direction of ideational construals and interpersonal values is guided by textual attentions; the evolutionary direction of interpersonal values and textual attentions is guided by ideational construals. 


[1] Note that metafunctional consistency as a selection principle explains why demonstrably false models of phenomena arise and are maintained, even amongst those who value the scientific method.

Social Immunological Systems

There are two principal ways that a model can be selected against: it can be largely ignored or it can be attacked. If a model is ignored, it goes extinct through being infertile: it leaves no descendants. If a model is attacked, it goes extinct through being killed off. If a model is ignored, it is regarded as being inconsistent with models about which users are more certain, but as being no threat to those models. If a model is attacked, it is perceived as being both inconsistent with, and a threat to, those models used with more certainty. Such attacks can be understood as (social-semiotic) immune responses by the community organised by the models perceived to be under threat, as will be explained below.

Communities As Bodies Organised By Shared Construals, Values And Attentions

Communities of users of shared construals-values-attentions emerge from social-semiotic interactions between potential users of models in a population. Bonding through the shared values and foci of attention of specific construals of experience reinforces both group identity and the social identity of individuals within the group. It provides group cohesion and co-operation, creating a community of ‘us’ as an integrated ‘self’. Just as shared biological potential groups individuals as kin, as family, shared semiotic potential, in general, groups individuals together as kith, friends and acquaintances. But importantly, local bonding of ‘us’, the ‘self’, also defines the ‘not-us’ as the ‘not-self’, as the ‘other’. To bond is to exclude.[1] The self-other distinction is itself a continuum rather than a binary opposition: a scale that extends from ‘first person us’ to ‘second person you’ (those we talk to) to ‘third person them’ (those we talk about).[2]


[1] Male bonding, for example, is female exclusion. 

[2] Compare the 20th Century political distinction (made by the ‘West’) of the 1st World (the ‘West’), the 2nd world (the ‘East’), and the 3rd world (the rest).

Semiosis As Social Immunological Process

If words are to enter men's minds and bear fruit, they must be the right words shaped cunningly to pass men's defences and explode silently and effectually within their minds.
J.B. Phillips

The specific construals, values and foci of attention that organise a specific community can be understood as functioning as a type of immunological system of that social body. That is, semiotic construals, values and foci are a means of extending immunological function beyond the confines of an individual body to the confines of a social body. This can be explained as follows. 

Immunological systems in bodies recognise molecules as either ‘self’ or ‘other’ with regard to the systems of the body. Those that are categorised as ‘self’ are allowed, while those that are categorised as ‘other’, antigens, trigger an immune response in the system: the molecular shape of the antigen selects the production of multiple antibodies whose molecular shape fits into the antigen (and any of its copies) in order to neutralise it as a threat to the body. 

Similarly, semiotic systems in social bodies recognise the construals, values and foci of attention in models as either ‘self’ or ‘other’ with regard to systems of the social body.[1] Those that are categorised as ‘self’ are allowed, while those that are categorised as ‘other’ trigger an immune response in the social-semiotic system: the metafunctional ‘shape’ (meanings) of the model selects the production of multiple ‘anti-texts’ (negative critiques) whose metafunctional ‘shape’ fits into the offending model (and any of its ‘copies’) in order to neutralise it as a threat to the social body (by undermining certainty in the offending model). 

Because the probability that a model will be selected is dependent on the recognition of it as ‘self’ or ‘other’ — in terms of consistency with the construals, values and attentions that potential users are most certain of — there is the possibility of misrecognition. On the one hand, ‘other’ construals, values and/or attentions can be misrecognised as ‘self’, and so not attacked. On the other hand, ‘self’ construals, values and/or attentions can be misrecognised as ‘other’ and consequently attacked with anti-texts. 
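The self/other recognition just described, including its tolerance for partial mismatch, can be sketched as a toy matcher. The community, its 'self' profiles and the tolerance value below are invented for illustration and carry no theoretical weight:

```python
# Toy sketch of social-immunological recognition: a community "recognises"
# an incoming model as self or other by comparing its metafunctional
# profile against the profiles the community is most certain of.
# Profiles and the tolerance threshold are invented for illustration.

SELF_PROFILES = [
    {"construal": "selectionist", "value": "parsimony", "focus": "system"},
]

def shared_features(profile, candidate):
    """Count how many metafunctional 'features' of the candidate fit a self profile."""
    return sum(profile[k] == candidate.get(k) for k in profile)

def recognise(candidate, tolerance=2):
    """Self if the candidate shares at least `tolerance` features with some
    self profile; otherwise other, triggering the production of 'anti-texts'."""
    if any(shared_features(p, candidate) >= tolerance for p in SELF_PROFILES):
        return "self: allowed"
    return "other: anti-texts produced"

# Two of three features shared: recognised as self despite a variant focus.
print(recognise({"construal": "selectionist", "value": "parsimony", "focus": "process"}))
# No features shared: recognised as other and attacked.
print(recognise({"construal": "instructionist", "value": "authority", "focus": "process"}))
```

The tolerance parameter is where misrecognition lives: set it too low and 'other' models pass as 'self'; set it too high and variant 'self' models are attacked as 'other'.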


[1] Note that descriptions like ‘theory-neutral’ just mean ‘not inconsistent with the construals, values and/or foci of attention of the specific community’.

Evolution As Adapting To Fit To Changing Contexts

On this model, then, the evolution of models of the experienceable world is a process of models adapting to each other in the contexts in which they function. Empirical science, for example, is not a matter of approximating the categories of Nature and their relations, quantitative and qualitative, because Nature has no categories independent of systems that categorise it. Science involves Nature selecting models of phenomena, these evolving as the context of model building changes — as a consequence of the evolving models and the technologies they engender. 

So, there is no end to ‘the march of science’. Over time, models are expanded — elaborated, extended and enhanced — providing greater delicacy of description and functionality for those who use them. As each model changes, it potentially changes the context in which other models function, triggering continuous cascades of change through systemically related models. The rate of evolution varies, in part, with the degree of selection pressure brought about by changes in its environment. In as much as this process takes time, models are variably adapted to past contexts.[1]

The ability of any model to evolve[2] at a given period of its history varies with its capacity to generate potentially useful variation, the “raw material” that is shaped by selection. For example, the evolution of particle physics is partially dependent on the concomitant evolution of the technological devices used to observe subatomic events. 


[1] It may also be that the increasing numbers of practitioners potentially increases the inertia of a discipline, ceteris paribus.

[2] Evolvability can be thought of as the potential of a lineage to exploit evolutionary time for adaptive purposes.

Evolution Vs Revolution

The evolution of models involves change within lineages and their speciation on the basis of differences in ideational construals, in interpersonal values, and/or in textual foci, creating different communities of users. A so-called revolution — Kuhn’s paradigm shift — occurs when one of the evolving variants used by a small marginalised community comes to be used (selected) by a significant number of practitioners — or in the short term, the significant practitioners — of a field. That is, revolution is lineage replacement: the (relatively rapid) supplanting of one lineage by another through a significant change in the proportion of potential users adopting the model. 

Ancestral Models

If the evolution of models involves a speciation of fields of meaning-making, then the more ancient the time period, the less differentiated such fields were. Out of all the present-day strands of meaning-making, the most ancient models that survive from the lineages of preliterate times are those preserved in religious traditions. The religious lineage might be seen as an original all-embracing trunk from which all other branches of modelling, including the arts and sciences, later diverged. As a unified system, it originally functioned not only as ‘religion’, but as ‘science’, ‘philosophy’, ‘history’, ‘law’, and the rest, for the small community who reproduced it, with variation, generation after generation.

The Categorisable Modelled As Social System

The most ancient models that have survived can be understood in terms of generalisation: the model of a social system of meaning-makers is mapped onto other domains of experience[1], a process that entails the personification of the categorisable.[2]

The model that is mapped is a social network of meaning-makers, each comprising a social being, a person, embedded[3] behind a perceivable surface[4] — itself one way of construing individual participants in socio-semiotic systems.[5] The distinction between the embedded being and the outer surface models them as separable entities. 

This model is mapped onto the perceivable environment, construing all as social participants in — and causes of — natural processes. This mapping results in such construals as persons embodied by other animals, such as serpents, by trees, by springs, rivers, and seas, by mountains, by the sun, the moon and the earth itself.[6]


[1] See Durkheim (1915/76). 

[2] In cognitive linguistics, such mappings of relations are modelled in terms of conceptual metaphor. 

[3] This largely involves the use of what cognitive linguistics terms the container schema. 

[4] In some traditions, the surface functions as the boundary of the domain of such persons, and to pass beyond the surface is to enter that domain. The Lascaux caves, which contain some of the earliest known paintings in Europe, are thought to have been such a domain for that community; in the Celtic tradition, to enter a sidhe, ‘fairy-mound’, is to enter such an ‘otherworld’. 

[5] This ‘homunculus’ model, of course, continues with such concepts as spirit, ghost, soul, and in some usages: psyche, mind and consciousness. 

[6] The planet as fertile female figured largely in some agricultural traditions, since fertile soil would produce new life some time after having seed implanted in it. In the Celtic islands off the coast of western Europe, the ancient earth-mound, Newgrange, and stone circle, Stonehenge, are aligned to receive the rays of the sun within them at dawn on the winter and summer solstices, respectively, possibly as a soil fertilising process. The concept continues with such ideas as panspermia and Fred Hoyle’s suggestion that comets seed planets with (the molecules necessary for) life.

Origins Of The Model

(1) The Reasoning Behind The Generalisation 

Social semiotic systems evolve from categorising the behaviours of interactants — body movements and their products — as meaningful. Semiotic beings, persons, have meaning potential: the potential to express meaning. Since other categorisables are also meaningful, have potential meaning, they can be categorised as meaning-makers: semiotic beings with meaning potential, giving rise to a model of the environment as a network of persons.[1] Such personifications of nature therefore express meanings which provide the means of learning how to fit into the natural environment. Nature is construed as a resource of meaning potential (a ‘font of knowledge’) for the community to learn from. 

(2) The Genesis Of The Model 

Clues to the genesis of this model can be seen in current ontogenesis. Firstly, when a semiotic system reaches a threshold level of complexity, in terms of the number of ungrouped choices, it becomes simplified, and so made more efficient, through a process of generalisation. An example of this, in the ontogenesis of English, is the generalisation of the regular past tense morpheme to all verbs, including irregular verbs, by children who had previously used the irregular forms. This is then followed by the gradual differentiation of exceptions to that generalisation. The genesis of the ‘all-as-social-system’ model can be similarly understood as a generalisation that occurred when the model of the perceivable reached a level of complexity that induced simplification through regularisation. The depersonalising of the environment that has occurred since those times can be understood as the gradual differentiation of exceptions to that generalisation. 
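The regularisation-then-differentiation pattern described above can be sketched as a toy rule system: first the regular past-tense morpheme is over-generalised to every verb, then irregular exceptions are gradually carved back out. The lexicon and exception list are illustrative only, not a model of any particular child's development:

```python
# Stage 1: the over-generalised rule applies the regular morpheme to all verbs.
def regular_past(verb):
    """Apply the regular past-tense morpheme -ed to any verb."""
    return verb + ("d" if verb.endswith("e") else "ed")

# Irregular forms the child gradually (re)differentiates as exceptions.
IRREGULARS = {"go": "went", "run": "ran", "see": "saw"}

# Stage 2: exceptions learned so far override the general rule.
def past_tense(verb, exceptions_learned=frozenset()):
    """Produce a past-tense form given the exceptions differentiated so far."""
    if verb in exceptions_learned:
        return IRREGULARS[verb]
    return regular_past(verb)

# Over-generalisation stage: even irregular verbs get the regular morpheme.
print(past_tense("go"))                             # goed
# After differentiation: the learned exception overrides the rule.
print(past_tense("go", exceptions_learned={"go"}))  # went
print(past_tense("walk", exceptions_learned={"go"}))  # walked
```

The simplification is visible in the code: one rule replaces a list of ungrouped verb-by-verb choices, and complexity is then reintroduced only where the exceptions demand it.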

Secondly, the personification of nonhuman domains is a pervasive feature of adult–child interaction. In many cultures, adults personify other species for children in stories such as nursery tales and fables; they personify bears and trains as toys; they personify the sun and moon in drawings; and they personify comforting adult behaviours as the Tooth Fairy, the Easter Bunny and Santa Claus, for example.[2] For their part, children map themselves onto nonhuman participants, as when they imagine themselves as lions, elephants and so on. Crucially, children value the process of personification: they delight in it, which in turn rewards and encourages adults who engage in personification in their interactions with them. If the positive valuing of personifying is biologically heritable, this suggests that the biasing of semiotic behaviour is a feature of the feedback system between parent and child that guides parenting behaviour in inexperienced adults and increases the survival prospects of children. 

The biological value may have been as little as slightly more children surviving in some generations from simply assuming that all noises are caused by personifications of ‘ever-present danger’. But inventing dangerous persons[3] is an extremely efficient way of managing the behaviour of curious children, since there is no need for adults to answer myriad questions — fear does all the work for them.[4]


[1] In a later model, in mediæval Europe, the natural world was interpreted as expressions of the meaning of a personified creator. That is, nature was modelled as the text of a semiotic being.

[2] Mapping the familiar and human onto the alien and inanimate has the interpersonal value of providing comfort.

[3] Interestingly, in raising the bonobo Kanzi in an English-speaking environment, Sue Savage-Rumbaugh invented an unseen dangerous monster as part of his behaviour management.

[4] Just as fear does all the work for politicians trying to manage a curious electorate.