Autocomplete Presents the Best Version of You
Type the phrase “In 2019, I’ll …” and let your smartphone’s keyboard predict the rest. Depending on what else you’ve typed recently, you might end up with a result like one of these:
In 2019, I’ll let it be a surprise to be honest.
In 2019, i’ll be alone.
In 2019, I’ll be in the memes of the moment.
In 2019, I’ll have to go to get the dog.
In 2019 I will rule over the seven kingdoms or my name is not Aegon Targareon [sic].
Many variants on the predictive text meme—which works for both Android and iOS—can be found on social media. Not interested in predicting your 2019? Try writing your villain origin story by following your phone’s suggestions after typing “Foolish heroes! My true plan is …” Test the strength of your personal brand with “You should follow me on Twitter because …” Or launch your political career with “I am running for president with my running mate, @[3rd Twitter Suggestion], because we …”
Gretchen McCulloch is WIRED’s resident linguist. She’s the cocreator of Lingthusiasm, a podcast that’s enthusiastic about linguistics, and her book Because Internet: Understanding the New Rules of Language is coming out in July 2019 from Penguin.
In eight years, we’ve gone from Damn You Autocorrect to treating the strip of three predicted words as a sort of wacky but charming oracle. But when we try to practice divination by algorithm, we’re doing something more than killing a few minutes—we’re exploring the limits of what our devices can and cannot do.
Your phone’s keyboard comes with a basic list of words and sequences of words. That’s what powers the basic language features: autocorrect, where a sequence like “rhe” changes to “the” after you type it, and the suggestion strip just above the letters, which contains both completions (if you type “keyb” it might suggest “keyboard”) and next-word predictions (if you type “predictive” it might suggest “text,” “value,” and “analytics”). It’s this predictions feature that we use to generate amusing and slightly nonsensical strings of text—a function that goes beyond its intended purpose of supplying us with a word or two before we go back to tapping them out letter by letter.
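To make the next-word part of that concrete, here's a toy predictor in Python: a bare-bones bigram counter that remembers which words have followed which, and suggests the most common followers. This is a deliberately simplified sketch of the idea, nothing like the models actually shipping on phones, and the tiny training corpus is invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which words follow which word in a training corpus."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word, k=3):
    """Return up to k words most often seen after `word` --
    the toy equivalent of the suggestion strip."""
    return [w for w, _ in following[word.lower()].most_common(k)]

model = train_bigrams(
    "predictive text is fun and predictive text is strange "
    "and predictive models power the suggestion strip"
)
print(predict_next(model, "predictive"))  # ['text', 'models']
```

A real keyboard layers a personal word list, longer context, and heavy filtering on top of counts like these, but the core move is the same: given what you just typed, rank what tends to come next.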
The basic reason we get different results is that, as you use your phone, words or sequences of words that you type get added to your personal word list. “For most users, the on-device dictionary ends up containing local place-names, songs they like, and so on,” says Daan van Esch, a technical program manager of Gboard, Google’s keyboard for Android. Or, in the case of the “Aegon Targareon” example, slightly misspelled Game of Thrones characters.
Another factor that helps us get unique results is a slight bias toward predicting less frequent words. “Suggesting a very common word like ‘and’ might be less helpful because it’s short and easy to type,” van Esch says. “So maybe showing a longer word is actually more useful, even if it’s less frequent.” Of course, a longer word is probably going to be more interesting as meme fodder.
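One way to model van Esch's point is to rank candidates not by raw frequency but by keystrokes saved. The scoring function and the example frequencies below are invented for illustration; real keyboards combine many more signals:

```python
def rank_suggestions(candidates, typed=""):
    """Rank candidate completions by frequency weighted by keystrokes
    saved -- a rough proxy for the idea that a long, rarer word can be
    a more useful suggestion than a short, common one.
    `candidates` maps each word to its (made-up) frequency."""
    def score(item):
        word, freq = item
        saved = max(len(word) - len(typed), 0)  # keystrokes the user avoids
        return freq * saved
    return [w for w, _ in sorted(candidates.items(), key=score, reverse=True)]

# "and" is four times as frequent, but "analytics" saves seven keystrokes
print(rank_suggestions({"and": 200, "analytics": 50}, typed="an"))
# ['analytics', 'and']
```

Under this scoring, the frequent-but-short "and" loses to the rarer "analytics," which is exactly the kind of longer, more distinctive word that makes good meme fodder.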
Finally, phones seem to choose different paths from the very beginning. Why are some people getting “I’ll be” while others get “I’ll have” or “I’ll let”? That part is probably not very exciting: The default Android keyboard presumably has slightly different predictions than the default iPhone keyboard, and third-party apps would also have slightly different predictions.
Whatever their provenance, the random juxtaposition of predictive text memes has become fodder for a growing genre of AI humor. Botnik Studios writes goofy songs using souped-up predictive keyboards and a lot of human tweaking. The blog AI Weirdness trains neural nets to do all sorts of ridiculous tasks, such as deciding whether a string of words is more likely to be a name from My Little Pony or a metal band. Darth Vader? 19 percent metal, 81 percent pony. Leia Organa? 96 percent metal, 4 percent pony. (I’m suddenly interpreting Star Wars in quite a new light.)
The combination of the customization and the randomness of the predictive text meme is compelling the way a BuzzFeed quiz or a horoscope is compelling—it gives you a tiny amount of insight into yourself to share, but not so much that you’re baring your soul. It’s also hard to get a truly terrible answer. In both cases, that’s by design.
You know how when you get a new phone and you have to teach it that, no, you aren’t trying to type “duck” and “ducking” all the time? Your keyboard deliberately errs on the conservative side. There are certain words that it just won’t try to complete, even if you get really close. After all, it’s better to accidentally send the word “public” when you meant “pubic” than the other way around.
This goes for sequences of words as well. Just because a sequence is common doesn’t mean it’s a good idea to predict it. “For a while, when you typed ‘I’m going to my Grandma’s,’ Gboard would actually suggest ‘funeral,’” van Esch says. “It’s not wrong, per se. Maybe this is more common than ‘my Grandma’s rave party.’ But at the same time, it’s not something that you want to be reminded about. So it’s better to be a bit careful.”
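The cautious behavior described above can be sketched as a filter over the suggestion strip: block sensitive words from being predicted unless the user has already typed nearly all of the word. This is an assumption-laden illustration, not Gboard's actual logic, and the blocklist is invented:

```python
SENSITIVE = {"funeral", "died", "pubic"}  # illustrative blocklist

def filter_predictions(predictions, typed_so_far=""):
    """Drop sensitive words from the suggestion strip unless the user
    has clearly committed to one by typing most of its letters."""
    kept = []
    for word in predictions:
        committed = (word.startswith(typed_so_far)
                     and len(typed_so_far) >= len(word) - 2)
        if word not in SENSITIVE or committed:
            kept.append(word)
    return kept

print(filter_predictions(["party", "funeral", "house"]))       # ['party', 'house']
print(filter_predictions(["funeral"], typed_so_far="funera"))  # ['funeral']
```

The asymmetry is the point: the filter costs almost nothing when it's wrong (you type a couple more letters) and saves real distress when it's right.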
Users seem to prefer this discretion. Keyboards get roundly criticized when a sexual, morbid, or otherwise disturbing phrase does get predicted. It’s likely that a lot more filtering happens behind the scenes before we even notice it. Janelle Shane, the creator of AI Weirdness, experiences lapses in machine judgment all the time. “Whenever I produce an AI experiment, I’m definitely filtering out offensive content, even when the training data is as innocuous as My Little Pony names. There’s no text-generating algorithm I would trust not to be offensive at some point.”
The true goal of text prediction can’t be as simple as anticipating what a user might want to type. After all, people often type things about sex or death—according to Google Ngrams, “job” is the most common noun after “blow,” and “bucket” is very common after “kick the.” But I experimentally typed these and similar taboo-but-common phrases into my phone’s keyboard, and it never predicted them straightaway. It waited until I’d typed most of the letters of the final word, until I’d definitely committed to the taboo, rather than reminding me of weighty topics when I wasn’t necessarily already thinking about them. With innocuous idioms (like “raining cats and”), the keyboard seemed more proactive about predicting them.
Instead, the goal of text prediction must be to anticipate what the user might want the machine to think they might want to type. For mundane topics, these two goals might seem identical, but their difference shows up as soon as a hint of controversy enters the picture. Predictive text needs to project an aspirational version of a user’s thoughts, a version that avoids subjects like sex and death even though these might be the most important topics to human existence—quite literally the way we enter and leave the world.
We prefer the keyboard to balance raw statistics against our feelings. Sex Death Phone Keyboard is a pretty good name for my future metal band (and a very bad name for my future pony), but I can’t say I’d actually buy a phone that reminds me of my own mortality when I’m composing a grocery list or suggests innuendos when I’m replying to a work email.
The predictive text meme is comforting in a social media world that often leaps from one dismal news cycle to the next. The customizations make us feel seen. The random quirks give our pattern-seeking brains delightful connections. The parts that don’t make sense reassure us of human superiority—the machines can’t be taking over yet if they can’t even write me a decent horoscope! And the topic boundaries prevent the meme from reminding us of our human frailty. The result is a version of ourselves seen through the verbal equivalent of an Instagram filter, eminently shareable on social media.