Language, cognition and alien math (Part I)

July 13, 2020. Linguistic relativism is the notion that language shapes, or even determines, our view of reality. Here, I introduce a variant called cognitive relativism, and consider whether humans could simulate bats. This will prepare us (hopefully) to explore whether aliens can do our math homework!

Linguistic relativism in brief

He who learns a new language, acquires a new soul.

Czech proverb

If a lion could talk, we could not understand him.

Ludwig Wittgenstein

Language, it seems, is tied to our view of reality. Two famous and contested examples: the Inuit have more root words for snow than English does, while the Hopi language reflects a different conception of time. (The former seems better attested than the latter.) The claim that language affects our perception of reality is called linguistic relativism. “Strong” linguistic relativism states that your language determines your reality. “Weak” relativism claims that language merely influences your reality. Perception is downstream either way.

Perhaps the sharpest version of the hypothesis is that colour words reflect cultural differences in the perception of colour. This ties into a 19th-century predilection for making cognitive judgments on the basis of style. For instance, the rarity of colour words in the epics of Homer led the British politician William Gladstone to write that

…the organ of color and its impressions were but partially developed among the Greeks of the heroic age.

Pomposity aside, there is a scientific thesis here. In fact, experiments by cognitive linguists suggest that colour terminology does lead to small differences in performance on tasks involving the memorisation and categorisation of colour. Colour terminology is correlated with something. But rather than taking these experiments as proof of relativism, I think the best way to understand them is to relinquish the false dichotomy between relativism and universalism altogether.

Language, culture and cognition

Would anyone really be surprised to learn that there are more words for snow in Inuktitut than in Amharic? This seems about as shocking as the existence of the periodic table. I would expect Inuit hunters to outperform their Ethiopian counterparts in tasks involving fine-grained perception of snow, since their survival literally depends on it. Similarly, the periodic table does not embody a mere “agreement that holds throughout the speech community”; rather, it encodes facts about the world that chemists care to pay attention to. I use the word “encode” deliberately, since words tokenise relevant differences in a way that aids compression. It takes fewer bytes to say things, and it lowers the cognitive burden of storage and manipulation, so both reasoning and communication become easier.
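To make the compression point concrete, here is a minimal sketch in Python (purely illustrative and not from the post; the snow terms and the reporting scenario are hypothetical): giving a recurring distinction its own word is like assigning it a short code, so repeated descriptions cost fewer bytes.

    # Toy illustration: a dedicated word acts as a short code for a distinction
    # the speech community cares about. The lexicon below is purely illustrative.
    import zlib

    # Someone repeatedly reporting the same snow distinctions.
    observations = ["falling snow", "packed snow", "wind-drifted snow", "packed snow"] * 25

    # Without dedicated vocabulary: spell out each distinction as a phrase.
    phrases = " ".join(observations)

    # With dedicated vocabulary: a single word per distinction.
    lexicon = {"falling snow": "qanik", "packed snow": "aniu", "wind-drifted snow": "upsik"}
    words = " ".join(lexicon[o] for o in observations)

    print(len(phrases.encode()), len(words.encode()))    # raw sizes in bytes
    print(len(zlib.compress(phrases.encode())),          # sizes after generic compression
          len(zlib.compress(words.encode())))

The gap persists even after generic compression, since the dedicated terms are shorter to begin with; the point is only that vocabulary acts as a codebook, not that any particular language is “better”.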

Reference is only one aspect of language. Our uniquely human drives and needs are expressed in culture. Culture, in turn, is marvellously woven through with language games of family, hierarchy, art, courtship, religion, custom, protocol, and so forth. Language and culture are not synonymous, as the founding relativist Edward Sapir points out:

It is easy to show that language and culture are not intrinsically associated. Totally unrelated languages share in one culture; closely related languages—even a single language—belong to distinct culture spheres.

Sure. But show me a culture without register, slang, or other forms of sociolinguistic variance and local colour, and I will show you a sloppy lexicologist.

Culture and language partly adapt to and create each other, co-evolving to serve cognitive and social demands. If we split these forces apart and decide on some fixed order of influence, the cart will be before the horse, whichever order we choose. Linguistic universalism, relativism’s opposite number, seems to me to make the same error of unidirectional generalisation. A mnemonic, for instance, is a way of shaping cognition with language, and education is (in part) the mnemonic of a whole structure of life, a process of enculturation and cognitive bootstrapping via the written and spoken word. Clearly, language can influence perception and thought.

Cognitive universalism and relativism

But that influence can’t happen in just any old way. Brains are brains, obeying the Hebbian maxim that “neurons that fire together, wire together”. If linguistic relativism is a claim that language changes our brains in any way other than Hebbian learning (or a related, neuronally grounded learning mechanism), then it is probably wrong. Similarly, if universalism claims that language cannot change our brains in a Hebbian fashion, it is wrong. And what is Hebbianly possible is a question for neuroscientists rather than linguists. The only way for perception to be fundamentally different—rather than the result of Hebbian training—is for brains to magically change as we cross the boundaries of culture, race or speech community. Cognitive universalism is the empirically defensible claim that this does not happen. Different people have different brains, certainly, but the variation between individuals is greater than the variation between cultures. Brains in Peru form some plane of cognitive variance; a brain in Spain falls mainly on this plane.

Language and culture clearly have effects on how we operate. In fact, they can be viewed as “operating systems” interfacing between the IO of reality (including the reality of our fellow featherless bipeds) and our relatively invariant cognitive hardware. These operating systems differ not only because reality varies (sub-Saharan Africa is not Nunangat), but because the choice of interface is radically non-unique.

If cognitive universalism within the species is a biological fact, between species we would expect a form of cognitive relativism to hold. A dog snuffling around in the undergrowth, a bat navigating by sonar, and a human piloting an F-15 experience the world in different ways; each “sees” things the others cannot, and is afforded different possibilities for action as a result. As with language, we can distinguish two forms of cognitive relativism. The strong form states that your brain (or cognitive architecture) determines your reality, with the upshot that different animals will have incommensurable experiences. (This is like the claim that some things are untranslatable.) The weak version replaces “determines” with “influences”.

I think the weak version is uncontroversially true, so for the rest of this post, we will focus on the strong form, and incommensurability in particular. An immediate objection is that a gnat is simply not neurally equipped to understand human experience, so of course our experience is incommensurable to it. But what about the other way round? Can we “simulate” life as a gnat, which is evidently much lower on the cognitive ladder? More generally, can any creature simulate any other creature? This leads to a sharp version of strong cognitive relativism: two organisms will have (mutually) incommensurable experiences unless they have the same sort of brain.

The inner life of bats

The most famous argument for incommensurability is Thomas Nagel’s essay “What is it like to be a bat?”. Nagel is really interested in showing that there is an interior aspect to consciousness that cannot be captured by reductive explanation; in a slogan, mind is different from brain. But his argument relies on incommensurability to suggest there is “something it is like” to be a bat which humans cannot simulate. This “something” is mind. As he puts it:

Even without the benefit of philosophical reflection, anyone who has spent some time in an enclosed space with an excited bat knows what it is to encounter a fundamentally alien form of life… [B]at sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine. This appears to create difficulties for the notion of what it is like to be a bat.

There is no real argument here (or in the rest of the paper, for that matter), just the bald assertion that echolocation “appears to create difficulties” for getting inside a bat’s head. Can we do better?

The Baltic German biologist Jakob von Uexküll provides one promising approach. Uexküll wanted to understand how organisms model and interact with their environments, and thereby create meaning. He used the term umwelt (German for “environment”) for a synthesis of factors,

…constituted by a more or less broad series of elements, ‘carriers of significance’ or ‘marks’ which are the only things that interest the animal.

I think this is an instructive philosophy. If we carefully consider the “marks” available to the bat, learn about cave topography, feeding habits and social structure, experiment with human echolocation, and so on—in other words, become practical ethologists—the imaginative obstructions do not seem fundamental. This closely parallels Daniel Dennett’s notion of “heterophenomenology” in his reply to Nagel, so we will adopt this term for ethologists who try to experience the inner life of other animals. Although there is no conceivable sequence of Hebbian changes that will turn me into a bat, with enough effort I could maybe simulate one.

Functional gestalts and heterophenomenological mecha

But is it so easy to simulate a bat? A bat, unlike a human, knows how to operate all of its interlocking cognitive machinery seamlessly and in real time. The different components form a “functional gestalt”, defined precisely by all the parts operating together at once; in fact, it seems reasonable to identify this gestalt with the umwelt itself. There is no magic or phenomenology here, just an organism competent at living. And the competent organism “knows how” while the ethologist “knows that”. You might be the world’s greatest expert on bat heterophenomenology, but to learn how to drive the batmobile, you probably need to spend some time in the driver’s seat.

To get driving practice, you would have to be immersed in a bat-like environment and obliged to use bat-like interfaces to solve bat-like problems, for a suitably long period of time. (I think “bat-like” is good enough to reproduce the functional aspect of this gestalt; to adopt the functionalist motto, we simply want to capture the relationships between outputs and inputs, so analogues will do.) Thus, we are led to imagine a “heterophenomenological arcade”, offering a full range of non-human inner lives in the form of, say, full-body VR immersion tanks.

To get a maximally faithful mirroring of an organism’s IO, we could wire it up to one of these arcade units, so instead of an arcade, we get a “heterophenomenological mecha”. The bat tank, for instance, could translate sensory inputs from a real bat into human-readable form, and then translate the human operator’s decisions into actions performed by the bat. (I think this is a horrifically unethical thing to do, but it’s a gedanken so I don’t feel so bad.) If the human can successfully drive the batmobile with enough training, navigating the umwelt in a way comparable to a native user, I would say that the bat’s experience is human-commensurable. But if there is no way to produce human-readable functional analogues, or to train the human to successfully solve bat problems, then we have an empirically grounded claim of cognitive incommensurability.

This is like the “linguistic mecha” of moving to a country where one is obliged to speak a second language and operate within a different set of cultural norms. Doing this successfully indicates fluency. The fact that this sort of fluency is fairly common provides strong evidence against linguistic incommensurability. (One could argue that a fluent non-native doesn’t have the true “interior” phenomenological experience that a native speaker has, but the point of introducing functional criteria is to get away from this sort of unfalsifiable mysticism.) But returning to the problem at hand, I guess that, until we make heterophenomenological umwelt mechas, we won’t know how strongly relativist to be about cognition.

Next time

So finishes Part I. I realise this post is rather discursive and open-ended, so perhaps I will clean it up in future.

Although Nagel would like to claim that there is something “qualitative” the bat simulation misses, his argument for phenomenological difference is based on a functional difference, so this seems like a reasonable place to start! If you like, we have explored the special case of functional incommensurability. But this is all I need for my end goal (to be reached in later posts) of exploring the likelihood of “mathematical universality”, i.e. that radically different forms of intelligent life will be able to do each other’s math homework. On our journey to alien mathematics, I’ll probably consider functional aspects of sentience, telepathy, convergent evolution, and Big Thoughts about the nature of physical reality. Stay tuned for the next instalment!
