Could language models change language?
How auto-complete might terraform our linguistic reality.
In The Alignment Problem, Brian Christian argues that many new tools rooted in machine learning have the potential to reshape our experience of the world. It’s easy to think of examples that already have, at least to a degree: recommender systems, search engines, and even automated recruiting tools1.
The writer Donald MacKenzie makes a similar argument in An Engine, Not a Camera: financial models don’t just reflect the state of financial markets––they actively alter them.
There’s a common thread here: models of reality don’t exist in a vacuum. They produce outputs, whether directly or through human intervention, and these outputs can terraform the world they were initially designed to model.2
But what happens when the world in question is our linguistic reality?
From auto-complete to language generation
If you’ve ever used Google Search, you’ve probably encountered some form of auto-complete. As you type, the search box begins to suggest completions. Sometimes these suggestions are comically wrong, but quite often, they’re pretty accurate. How does this work?
Here’s Google’s description:
Autocomplete predictions reflect real searches that have been done on Google. To determine what predictions to show, our systems look for common queries that match what someone starts to enter into the search box but also consider:
The language of the query
The location a query is coming from
Trending interest in a query
Your past searches
For any given word or phrase that you’ve typed, Google’s system tries to predict the next word or phrase that you’ll use––based on trillions and trillions of past Google searches. (Google Search apparently handles roughly 3-5 billion searches per day.)
You can probably play this game pretty well too. Consider the following sentence fragment:
She likes her coffee with cream and ______
If you’re like me, you probably guessed something like “sugar” to fill in the blank there. You probably didn’t guess an anomalous noun like “dog”; it’s even less likely that you guessed another part-of-speech altogether, like “walked”. In fact, this sentence––and others like it––was used in a now-classic 1980 study by Kutas & Hillyard, which ended up launching a decades-long research program into the so-called “N400 component”3.
My point is that language is relatively predictable. Given a particular word (e.g., “the”), some words (and parts of speech) are much more likely to follow than others. This basic observation is really the foundation of auto-complete. In recent years, auto-complete has gotten a whole lot better because of neural language models (or NLMs): gigantic neural networks, trained on hundreds of billions of words, which predict upcoming words with surprisingly high accuracy.
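To make that concrete, here’s a toy sketch of the kind of next-word statistics that auto-complete rests on. The corpus and counts below are invented purely for illustration; real systems use vastly more data and far richer models than a simple bigram table.

```python
# A minimal sketch of the idea behind auto-complete: estimate which word is
# most likely to follow a given context, using counts from a tiny, made-up
# corpus. Real systems draw on far more data and far more signals.
from collections import Counter, defaultdict

corpus = [
    "she likes her coffee with cream and sugar",
    "he takes his coffee with cream and sugar",
    "they like their tea with lemon and honey",
]

# Count which word follows each word (a bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, curr in zip(words, words[1:]):
        next_word_counts[prev][curr] += 1

def predict_next(word):
    """Return candidate next words, ranked by relative frequency."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

print(predict_next("and"))   # [('sugar', 0.666...), ('honey', 0.333...)]
```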
These models––like GPT-3––can also be used to generate longer passages. Researchers prompt the model with a word or string of words, then use the model to produce an upcoming word, then another upcoming word, then another, and so on. Each time, the model’s predicted word can be used to make predictions about the next word. This is what enables these models to produce apparently sophisticated prose and even poetry (examples here).
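For the curious, here’s a minimal sketch of that generate-one-word-at-a-time loop, assuming the Hugging Face transformers package and the small public gpt2 checkpoint are available. Production systems use much larger models and cleverer sampling strategies, but the basic loop––predict a word, append it, predict again––is the same.

```python
# A hedged sketch of autoregressive generation: repeatedly predict the most
# likely next token and feed it back in as context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "She likes her coffee with cream and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits           # scores for every vocabulary item
        next_id = logits[0, -1].argmax()           # greedy: take the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```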
I’m going to gloss over the question of whether these models really “understand” language; others have discussed this in much more detail, along with other ethical issues around the deployment of NLMs, and I think that topic merits its own post. The question I’m interested in here is: how might the widespread application of these models change language?
How would this work?
Before I dive into the potential mechanisms and effects of this change, I want to provide a brief illustrative example.
Imagine you’re using Gmail to compose an email to a colleague. Gmail, like many other applications, has a predictive text feature. While you’re typing, Gmail makes suggestions about which word (or words) might come next. For the most part, perhaps you ignore these suggestions. In other cases, perhaps Gmail predicts the word you wanted to use perfectly well, so you select its suggestion.
But sometimes, the word Gmail predicted isn’t exactly what you were thinking, but it’s close enough to the word you wanted to use. Maybe you were thinking of writing “Your presentation today was really fascinating”, but Gmail predicted “interesting”. This isn’t enough to substantively change the meaning of your email, but it does have a subtly different tone. “Interesting” is a little more boring, a little less excited, than “fascinating”.
And let’s say that––again, only sometimes––you go with the suggested word instead. As soon as your word choice is changed by Gmail’s suggestion, it fits the definition of these models altering our language use. Sure, this isn’t particularly groundbreaking, but imagine this happening again and again: every time someone thinks to use the word “fascinating”, Gmail suggests the word “interesting” instead.
Now consider the fact that you don’t always know exactly what you want to say before you say it. I’ve certainly had the experience of figuring out the words I want to use as I speak or write. Auto-complete mediates that process. Depending on your perspective, you might think of this mediation as a kind of “augmentation” or as “interference”. My goal in this article isn’t to argue one way or the other––it’s just to convince you that, in principle, this kind of mediation is possible.
How does this connect to language change?
Or so you might be wondering.
My claim––which is certainly not original to me––is that the ways we use language end up shaping language itself. There are a bunch of flavors of this claim in the Linguistics literature, but they all share a few core insights:
Language is used for communication.
The ways we use language reflect constraints: on producing language, understanding language, and so on.
Thus, what we call a “language” is the product of these local constraints that operate during communicative interactions.
From this perspective, the roots of language change––variation over time in a language’s features––can be located (at least in part) in these constraints.
As always, the devil is in the details, and linguists and cognitive scientists have proposed a plethora of different constraints or “selection pressures”––many of them, rather unhelpfully, at different levels of analysis––that might be operating at any given time and that might be responsible for any given change (be it a change in grammar, phonology, meaning, or something else).
But despite the difficulty inherent in explaining these changes, I think there’s a lot to like about the basic insights I’ve laid out above. There’s something quite intuitive, perhaps even obvious, about it: of course a “language” is, in some sense, the product of a bunch of things that people say and how they say them.
What happens when it’s not people saying those things?
But first: writing
There’s at least one other case of a dominant communication technology possibly shaping language (and language change): writing.
Writing is, of course, thousands of years old, so we don’t often think of it as a technology. But languages were spoken or signed for tens of thousands of years before writing came along, so it certainly represents a technological innovation of sorts.
Moreover, some have claimed that writing slows down the pace of language change. One way it might do this is by increasing metalinguistic awareness: the ability to consciously reflect on the nature of language (e.g., “what is a word?”). And in fact, there’s some empirical evidence that literacy is connected to our ability to recognize “words” as units of speech. According to this argument, increased metalinguistic awareness might freeze certain linguistic practices in place.
How would this work?
Consider the standard view of how language evolution operates. At any point in time, there’s a pool of variation in how people use any given language: they pronounce words a little differently, use slightly different grammar or word choices, and so on. And as in biological evolution, variation is the engine of selection––those “mutations” are what give rise to potential changes at the level of the language itself.
Writing, however, might cause speakers to view certain linguistic entities (like a word) as relatively fixed and stable. Because the concept of a word is tethered to its written form, that psychological fixedness could influence the rate at which, say, the pronunciation of the word changes––specifically, it might cause it to change more slowly.
This is an interesting prediction, but I don’t know whether this is true (and if so, how true and in which circumstances). It would be very hard to assess empirically: doing so would require some measure of how quickly languages change without writing, and then comparing languages with writing to those without. This is challenging because languages without writing are (as one might expect) much less well-documented, particularly in terms of their historical records.
But I mention this example for two reasons:
To illustrate that there are other, older technologies that may also play a role in language change.
To illustrate a potential mechanism––that of freezing a language in place.
Now, onto NLMs.
How NLMs might change language
Imagine a world in the not-so-distant future––maybe 5-6 years down the line––where more and more of our communicative interactions are mediated in some way by something akin to predictive text. Maybe, by that time, we’re even dealing with predictive text 2.0: we tell Gmail roughly what we want to say––including the tone and inferences we want a reader to draw––and it produces a draft for us to review4. And further, maybe many more of our interactions are digital in the first place, so there are more opportunities for these models to be deployed.
How might the widespread use of such a technology shape our language?
Here are some possible sketches of that future, presented as distinct “hypotheses”.
H0: They don’t.
This is, in essence, the null hypothesis. This hypothesis claims that NLMs won’t have a discernible impact on language itself; or perhaps, to make it a little more viable, that this impact will be very marginal compared to the base rate of change.
Note that this is an outcome, not a mechanism. There might be a couple different paths to this destination, so to speak:
Maybe NLMs don’t get that much better, or NLMs don’t ever see widespread deployment––they’re confined to technologies like Gmail.
Maybe NLMs are actually pretty good, but they are continually retrained on new data, so they never end up “freezing” a particular language system in place––they’re always incorporating new linguistic innovations that people generate.
In (1), NLMs have ~0 impact because they’re not used very much. This is kind of cheating, because the scenario I described above asserted that they were more widely used.
(2) is a little more interesting. Here, the claim is that NLMs don’t end up “freezing” a linguistic system in place (as some argue writing does) because they keep up with the pace of language change. Maybe they lag behind the most innovative users, but because they’re always retrained, they’re never given an opportunity to fossilize a language system too much.
It’s important to note that the implicit assumption of (2) is that the counterfactual scenario is one in which NLMs slow language change; (2) is asserting that retraining the NLMs frequently will prevent this from happening.
H1: They slow down language change.
This hypothesis claims that widespread deployment of NLMs will slow down the pace of language change––ultimately homogenizing language to some degree.
Recall that NLMs generate language by predicting the most likely word in a given context. If I write “salt and ___”, the most likely word is probably “pepper”. Similarly, if “store” is more likely than “library” in “He went to the ____”, then an NLM will (usually) assign higher probability to “store”.
Importantly, if the same language model is used to predict the language of many different users, then its probability distribution over words in similar contexts will be pretty much the same for everyone. That is, the NLM will always be attracted to similar sets of words in a given context, which means that across a large number of samples, it will reinforce the patterns it observed in its training data. Hence: homogenization.
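Here’s a toy illustration of that dynamic, with invented probabilities: if every user gets suggestions from the same model, and the model always offers its single most likely continuation, then everyone who types the same context gets nudged toward the same word.

```python
# A toy illustration of the homogenization worry (H1). The contexts and
# probabilities below are made up purely for illustration.
suggestion_model = {
    "salt and":           {"pepper": 0.92, "vinegar": 0.05, "sand": 0.03},
    "your talk was very": {"interesting": 0.60, "fascinating": 0.25, "odd": 0.15},
}

def suggest(context):
    """Greedy suggestion: always offer the single most probable next word."""
    dist = suggestion_model[context]
    return max(dist, key=dist.get)

# Every user who reaches this context is offered the same continuation,
# reinforcing whatever pattern dominated the training data.
for user in ["alice", "bob", "carol"]:
    print(user, "->", suggest("your talk was very"))
# alice -> interesting
# bob -> interesting
# carol -> interesting
```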
Depending on how much NLMs pervade society, we might observe a range of effects:
If NLMs are primarily used for things like emails (e.g., work communication), then we might observe a kind of “bifurcation” of that language. Some linguistic registers––those involving NLMs––are homogenized, while others are left free to change at whatever pace they wish. This is actually pretty common among the world’s languages (e.g., many have a “literary” variety).
If NLMs are used for pretty much all communication––emails, journalism, texting, etc.––we would expect a proportionally larger effect. Here, perhaps language change is slowed to a crawl (again, relative to what we might otherwise expect).
H2: They speed it up––and take language to some weird places.
So far, I’ve been comparing NLMs to writing. But there’s a key difference: NLMs are generative. Writing is a conduit for language users, but NLMs are––or at least can be––language “users” themselves. And so, this hypothesis claims, NLMs might end up innovating novel linguistic expressions as well––and perhaps in ways totally different from how human linguistic innovation usually works.
The likelihood of this scenario probably depends on whether NLM-mediated communication is managed by a single, centralized model, or whether different copies of this model are paired to different humans, which then “grow” or adjust according to each person’s idiolect. Let’s call this a Personalized Language Model (or PLM). It’s conceivable that these PLMs will come to embody the idiosyncrasies of an individual’s linguistic usage––and perhaps, in the right circumstances, develop idiosyncrasies of their own.
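To make the centralized-versus-personalized distinction a little more concrete, here’s a rough, purely hypothetical sketch: a shared base model whose predictions get re-weighted toward the words a particular user actually tends to produce. A real PLM would presumably be fine-tuned or otherwise adapted on the user’s own text; this is just the shape of the idea, with invented numbers.

```python
# A very rough sketch of the centralized-vs-personalized distinction.
# BaseLM stands in for one shared model; PersonalizedLM re-weights its
# predictions toward words the individual user tends to use. The interface
# and probabilities are hypothetical, purely for illustration.
from collections import Counter

class BaseLM:
    """One shared model: the same distribution for every user."""
    def next_word_distribution(self, context):
        return {"interesting": 0.6, "fascinating": 0.25, "wild": 0.15}

class PersonalizedLM:
    """A per-user copy that drifts toward that user's own word choices."""
    def __init__(self, base, user_history):
        self.base = base
        self.user_counts = Counter(user_history.split())

    def next_word_distribution(self, context):
        dist = dict(self.base.next_word_distribution(context))
        for word in dist:
            # Boost words this user has actually used; a real PLM would
            # instead be fine-tuned or otherwise adapted on their text.
            dist[word] *= 1 + 2 * self.user_counts[word]
        total = sum(dist.values())
        return {w: p / total for w, p in dist.items()}

base = BaseLM()
plm = PersonalizedLM(base, "that was wild , honestly wild and fascinating")
print(plm.next_word_distribution("your talk was very"))
# "fascinating" and "wild" now edge out the generic "interesting"
```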
Some might remember the reinforcement learning agents in a Facebook experiment that “invented” their own language. Left to their own devices, PLMs might take language to some weird places.
Exactly how “weird” language gets will depend, in turn, on how strong a constraint human comprehension imposes on the use of these PLMs. Obviously, at the end of the day, humans need to communicate with each other––so the limits of PLM-induced language change are to some extent the limits of human cognitive and communicative capacities. But if PLMs are used to assist with comprehension as well, some of those limits might be obviated or side-stepped.
Below, I sketch out two scenarios representing these possibilities. The key difference between them is whether PLMs end up being used to help with comprehension processes as well as production.
Scenario 1: Not too weird
In this scenario, PLMs perform something qualitatively similar to auto-complete (i.e., what they do now). They help us compose emails, but not read them.
If this is the situation, I predict that PLMs might inject some interesting and surprising noise into our linguistic systems, but this won’t deviate too far from the state-space of attested languages. After all, we still need to be able to understand what they’re producing.
Scenario 2: Very weird
In this scenario, PLMs do help us read our emails––and our texts, and news articles, and more. Each PLM is equipped with a “summary” feature, which produces personalized summaries tailored to our precise linguistic preferences and world knowledge. Given a news article (also, incidentally, produced by a PLM), our PLM will read that article and generate a version of it most conducive to our understanding5.
Similarly, imagine we want to send a message to our friend. First, our PLM helps us compose that message. Then, the message is sent to our friend’s PLM; that PLM adjusts the message into a form suitable for our friend’s idiosyncratic language use.
This is not unlike using Google Translate to mediate communication between speakers of different languages. Only in this case, both language users speak the “same” language. It’s just that they never have to encounter different idiolects of that language, because their PLM “translates” all incoming messages into their own idiolect.
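Schematically, the pipeline might look something like the sketch below––with the caveat that rewrite_in_idiolect is a hypothetical stand-in for whatever model call would actually do the rewriting, not a real API, and a real PLM would carry a much richer profile than a word map.

```python
# A schematic of the Scenario 2 pipeline: the sender composes a message,
# and the recipient's PLM re-renders it in the recipient's own idiolect
# before they read it. Everything here is a hypothetical illustration.

def rewrite_in_idiolect(text, lexicon):
    """Hypothetical: swap words for the recipient's preferred equivalents."""
    return " ".join(lexicon.get(word, word) for word in text.split())

sender_draft = "your presentation was fascinating"

# Each user's PLM carries a profile of their own usage; here, just a word map.
recipient_lexicon = {"fascinating": "interesting", "presentation": "talk"}

delivered = rewrite_in_idiolect(sender_draft, recipient_lexicon)
print(delivered)  # "your talk was interesting"
```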
This creates a weird incentive for speakers: they no longer have to produce language that’s directly comprehensible by other human speakers. As long as the PLMs are able to fluidly translate between idiolects, speakers can just produce whatever linguistic expression is easiest for them to produce, and the PLMs handle the rest.
Now, there’s some evidence that this is partly what speakers already do: as Ferreira (2008) notes, language production is hard. Therefore, speakers might opt for what’s easiest for them to produce, as opposed to tailoring their expression to something that’s easier for comprehenders to understand.
On the other hand, clearly language production involves some audience design. For example, we often speak to children differently than adults. Similarly, if we’re trying to be polite, we might craft our message in a way that doesn’t offend our interlocutor. Or to make things even more extreme: we don’t simply go around saying “ba ba ba”, because even if we knew that what we intended was “Yesterday I read an excellent paper about sound change”, our comprehender has no way of knowing. Human language is subject to the constraint of being interpretable by human comprehenders.
But the advent of PLMs might remove that pressure. At the very least, any given linguistic expression just needs to be comprehensible by a PLM, such that the PLM can tailor the expression to its human partner––thus reducing the need for human speakers to engage in audience design. And this might make language very odd indeed6.
Scenario 3: Weirdest
If you believe in Artificial General Intelligence, then it’s not outside the realm of possibility that different artificial agents will be communicating primarily with each other, not with humans. If this is true, it’s most similar to the “language” created by those Facebook agents described above––humans aren’t even in the loop at all.
I hesitated to include this scenario––not because I think AGI is impossible, but because I’m not sure whether those AGIs would be using something like human language to communicate in the first place. But if NLMs are the first step towards AGI, then perhaps human language ends up being a kind of “stepping stone” on the way to whatever communication system these AGIs develop. That system could presumably look quite different from any human language we’ve ever observed.
Taking stock
These hypotheses are extremely speculative. My goal here was not to provide evidence for one hypothesis or another, but rather to illustrate the range of possible futures––inspired, to some degree, by Holden Karnofsky’s work on the most important century.
But I have tried to enumerate the full space of possibilities as best I can. My hope is that understanding these possible outcomes is the first step toward understanding which outcome is more or less likely. If NLMs never see widespread use, then H0 is clearly the most likely outcome. But if we start to see the emergence of something like a Personalized Language Model, then hopefully the scenarios I described in H2 will provide a template for understanding how that might play out.
1. Since scrapped by Amazon after it was revealed that the system displayed a clear bias against female applicants.
2. Note that this is true of more than just machine learning: even a map designed by a cartographer is, of course, a simplification of the territory––and importantly, that simplification might lead the people who use that map to take certain routes, i.e., changing their experience of the landscape.
3. A subject for another post.
4. It’s worth noting that this is really not so different from ways that people prompt GPT-3 and other language models; or, to change the domain somewhat, DALLE-2.
5. Now, what “understanding” means here is left open for debate.
6. It’s also an interesting question what exactly we’d call “language” in this scenario. Is it what individual writers or speakers produce? Is it the output of a PLM or all the PLMs combined together?