55. Fear Of Being “Reduced”

Results without causes are much more impressive.[1]

The practice of analysing complex phenomena in terms of their parts risks the interpretation that such phenomena are ‘nothing but’ those parts. This erroneous interpretation is most explicitly signalled by the use of words such as ‘just’, ‘only’, ‘merely’ or ‘simply’. Grammatically, these words function as mood adjuncts of intensity (subcategory: counterexpectancy: limiting). That is, their meaning is ‘contrary to expectations, the phenomenon is limited to x’. So, when applied to construals of humanity, these meanings trigger the fear that, despite what anyone thinks, humans are limited to anatomy, chemicals, atoms, or whatever. As argued here, the constituency model is limited to static structures and, because of this, is insufficient to deal with organisations that emerge dynamically through interaction, and which therefore cannot be understood solely in terms of constituents.

The covert assumption that makes this misinterpretation threatening is the false belief that lower levels of organisation are somehow more “real” or “true” than higher levels — a view that can be held by both proponents and opponents of reductionist thinking. However, lower-level models are not more “real”. Like all models, they are semiotic descriptions, distinct from what is being modelled. Lower-level models are (just, only, merely, simply) more self-consistent — and so more reliable — than higher-level models, because they describe simpler systems and therefore require less time to evolve in the history of modelling.

This fear is related to the fear that any model of a phenomenon reduces it to that explanatory description.[2] (Again, this is to confuse what is modelled with the model of it.) The reduction of ignorance is misconstrued as the reduction of value.[3] This assumption equates positive value with ignorance. Positively valued ignorance is called ‘mystery’.


Footnotes:

[1] Sherlock Holmes in The Stockbroker’s Clerk by Conan Doyle.

[2] This is related to the fear of being uninteresting. As Hofstadter (1980: 621) observes: 
This is, it seems to me, a general principle: you get bored with something not when you have exhausted its repertoire of behaviour, but when you have mapped out the limits of the space that contains its behaviour. 
[3] Humanity modelled is not humanity explained away.
