The Language Conspiracy

How are we really special?


“Reification is part of normal usage of natural language (just like metonymy for instance), as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy. According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or ‘concrete’ reality: ‘There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the “Fallacy of Misplaced Concreteness.”’”

(Source: https://en.wikipedia.org/wiki/Reification_(fallacy))

Roy Harris talks about a “Language Myth” (Harris, 1980) when describing the modern (particularly Western) view of “language”. But maybe a better way to think about it is as a “Language Conspiracy”. Conspiracies are absolutely fascinating phenomena in that something that might very well be seen by one group as a perfectly normal set of activities and states of affairs is seen by others as dastardly actions most foul, devised by a cabal of nefarious masterminds in the birthing of their ungodly designs.

One absolutely key aspect is intention and the associated concept of agency. Now intention and agency are subjects that have been dealt with ad nauseam over the millennia, in pretty much any culture that has people with enough time for sustained navel contemplation (which is all of them, though not necessarily all the time!). My take is this: there is no such thing as “agency in nature”, but we are unable to understand anything without thinking in agentive terms. Humans are social beings, and our capacity to apprehend and comprehend the world is founded on thinking in terms of agents. Basically, there is no such thing as agency, so we invented it.

James Paul Gee has a concept he calls the “Transacting Swarm” (Gee, 2020), which I believe is a very fertile one for thinking about agency and particularly how that relates to the idea of language. Essentially, the model/theory/metaphor is that there are multiple layers of interacting elements that cohere at various levels. So quarks cohere to form atoms, atoms to form molecules, …, cells to form organs, organs to form bodies, bodies to form communities, communities to form nations, etc.

There are some interesting things to note about these:

  • what the levels are is not given by nature; it is laid over the system by us. This means that the units of the swarm, at least as we are able to comprehend them, are invented by us as constituent parts. The universe didn’t create atoms; the universe just has wobbles in space-time. We are the ones interpreting these things as units, as things. This is due to many factors, including our dominant perceptual capabilities, other physical accidents of evolution, historical accident, etc.

But we can’t understand the world like that; we can only think in terms of agents. Some have talked about modern notions of God (in the West at least) as a “God of the gaps”. Basically, if we don’t yet have a good “scientific” theory for something, God can take care of it; otherwise (S)He isn’t required anymore. As we “evolve” and develop “better” scientific theories, God continues to live, but only in the gaps between these theories. If only it were that simple. In many ways science is not at all about “filling in the gaps”; all it is doing is putting a new lick of paint on the old delusions.

Humans understand the world through the prism of their physical senses, which are then made sense of through culture (narrative, Discourse, chronotope, or whatever other term is popular at the moment). The problem is that, because there isn’t really anything there to begin with, and obviously that isn’t going to motivate us to action much, we need to make something of the wobbles in space-time. So we inject form, mechanism and agency everywhere. Because the form, mechanism and agency aren’t really there in the first place, it isn’t a massive deal that we invent stuff that puts agency where there is none. Vast proportions of the members of our species have at least some belief in agentive beings that we cannot see or hear in the typical sense of the words, many of whom have physical (and spiritual) capabilities that vastly surpass our own. Many people also believe these agentive beings care about us (about what we think, say or do, anyway), and can and do intervene on our behalf on a semi-regular basis. Even the most dedicated positivist atheist’s rational rigour can fail in front of Lady Luck sometimes, even if he remedies that quickly afterwards.

So if we agree that even things like atoms aren’t “really there”, at least in some sense, then what is the harm of talking about “language” and “languages” as if they are “really there”? After all, pretending that atoms are “really there” allows us to do some enormously useful stuff, so why not also pretend for “language(s)”? The key question to ask is what the “useful stuff” for “languages” might be. Thinking in terms of languages as “things” and of humans as having a special (and unique) “capacity” for language might be useful. For what, then?

The problem with modern language scholarship is that a huge proportion of the so-called “language scientists” act as if thinking about language in these terms (which, we should agree, is a collective hallucination from the outset) is neutral in terms of the externalities it brings onto the scene. Every metaphor, theory, position or decision has advantages and disadvantages, costs and benefits. In order to understand any real-world phenomenon, we necessarily need to abstract away a certain number of factors that, we claim at least, are “unimportant” for understanding or predicting with the theory. A model of the performance of a football team is unlikely to include parameters for the spin of the electrons in the ball. The ball still has electrons though, whether we include them in our theory or not. So any theory that we could ever tractably propose is going to “sanitise” the world. The reality is that this sanitisation is not “neutral” or “objective”, no matter how much we proclaim our scientific credentials, no matter the university we work in or the publications we have amassed. We abstract away the details that are unimportant for helping us do useful stuff. Again, deciding what is “useful” is where the history, culture and ideology come in. Why don’t they include spin in the football performance model? Who cares that it doesn’t significantly improve the monetary benefit from bets placed using the model? The ball’s electrons definitely have spin! No one can deny it!!! We can happily ignore electron spin in the context of a team’s performance model because it doesn’t help us do anything useful. At least, we can’t yet think of anything useful it lets us do. It might later; we just don’t know.

Anyone who has seriously studied “language” and “languages” in the real world very quickly realises that both are actually quite crass abstractions. If a language is necessarily shared, then surely we need to find a number of actual people who share it. The problem is that it is very easy to show that no two human beings share exactly “the same language”. People’s vocabularies, grammars, sets of expressions and pronunciations always differ quite significantly, even for identical twins who grew up in close proximity. It also won’t do to simply find the minimal shared set of elements, as that would require us to exclude a large number of things we really want to keep, particularly where we have generational change.

Worse still, we need to come to an agreement on whose speech we will include and whose we won’t. It turns out that, without exception, who gets included and who doesn’t is based on deeply political and ideological choices. Does an immigrant’s speech get included? What about when they immigrated at age 3? 5? 8? 15? Even within groups of “natives”, does “ghetto” speech get included? It turns out that even for so-called “corpus-based approaches” like the University of Birmingham and Collins’ COBUILD dictionary, some speakers and some speech get left out of “the language”. Because essentially every human has an idiolect, if we are going to make an abstraction out of a set of these idiolects then we need to decide what to leave out and what to put in. Who gets in and who doesn’t is very much controlled by the political and ideological stances of the project leaders and funders of any such endeavour. What happens to people who don’t make the cut? Do they “only speak idiolect”? There is a very common refrain when discussing these issues in linguistic circles: a language is a dialect with an army and a navy. And here we start getting to the crux of the matter…

So what is the idea of “languages” really good for? Languages are fictions. They demonstrably don’t “exist in the real world”, or at least not in any way that common folk would be happy to pay people to study as objectively isolatable objects. They are abstractions that we construct from an ideological standpoint, deciding who and what to put in and what to leave out.

So forcing people to learn a “particular language” is a very, very powerful and ideologically driven mechanism for creating potentially ever larger Transacting Swarms. Not only does this allow for the creation of ever larger Transacting Swarms, it also very significantly increases the numbers and possible permutations of “sub-Swarms”. There is no “language” there, though; what we are actually doing is first creating (via grammars and dictionaries that crystallise a particular set of semiotic habits)