How Censorship Is Creating A New TikTok Language
For as long as there’s been censorship, humans have found ways to navigate around it. They might use legal loopholes, subterfuge or any number of other tactics that allow people to say what they want without it catching the attention of prying eyes. One of the most common is simply playing with language in a way that circumvents certain rules. The interplay of language and censorship has a long history.
Today, censorship comes in many forms. Many people’s main experience of it comes from social media companies attempting to crack down on unsavory material. There are any number of reasons for this kind of censorship, and it’s a divisive issue. In pure economic terms, though, advertisers tend to avoid platforms filled with hate speech, death threats or pornography. The modern problem is that it’s impossible for a large social media company to review every post manually, so algorithms and automation step in to moderate.
In their attempts to moderate their content, TikTok and other social media sites tend to catch plenty of innocuous things in their nets. To get around this, people are changing the way they speak online to avoid having their posts suppressed. The practice is so common that the language of censorship is extending beyond the internet. How is this happening, and will it change language as a whole?
What Does Social Media Censorship Look Like?
The censorship that happens on social media comes in a few different forms, and it doesn’t always conform to what we think of when we hear “censorship.” There are a few ways that something might get suppressed on social media.
- Outright deletion or banning. This is the clearest example of censorship, when a social media site takes down a piece of content entirely.
- Appending warnings or bleeping. In these cases, the content stays up, but there is some sort of warning on it that might make it harder to view. TikTok, for example, censors swear words in its auto-generated speech-to-text function, even though the swear words are audible. Twitter allows users to self-censor images by turning on a “sensitive content warning” that people have to click through to see the content. There are also warnings like the ones Instagram appends to any content related to coronavirus, which prompt users to learn more about the virus from a trusted source. These don’t necessarily all count as “censorship” per se, but they are ways for social media companies to affect the content on their platforms.
- Suppressing the content’s visibility. Lastly, and most important for this article, social media companies will quietly suppress content. This isn’t just a conspiracy by people unhappy that their content isn’t performing well; many companies’ community guidelines explicitly state that content may be harder to find if it is considered harmful. The content is findable, but users will have to go out of their way to search for it specifically.
The main problem for users attempting to avoid censorship is that it’s not entirely clear what constitutes “harmful” content. Some categories are obvious — nudity, death threats — but social media companies don’t have a list of words you shouldn’t use or anything like that.
If a TikTok doesn’t do well, it’s never really clear whether it’s being suppressed for some reason or it’s simply a bad post. Recently, the Washington Post tried to design a TikTok that would get suppressed by filling it with terms the journalists thought were being flagged by the app. The experiment failed: the video performed very well instead. Trying to guess what will succeed and what won’t is a frustrating, intrinsic part of being online today.
How Creators Are Getting Around Suppression
In an unclear environment, creators still find workarounds so that they can talk about certain things without their posts being suppressed. To be clear, these tactics are often used by well-meaning people who want to be able to talk about topics like sex, death, racism, homophobia and more online without it being hidden from people. There are certainly bad actors who use workarounds to spread bigotry and hate, but here we’re focusing on those who are simply trying to produce content relevant to their lives.
As tactics for playing with language and censorship go, misspellings are relatively new, though they’ve been deployed since at least the early days of the internet. While a human moderator can easily spot that “seggs” is just a way of writing “sex,” an algorithm cannot (unless it’s been trained to look for “seggs” too). A quick way to communicate something, then, is simply to misspell it. A user doesn’t need to learn a new code; they can figure out what’s being referred to from a few context clues.
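To see why a misspelling works, consider a minimal sketch of the kind of word-list filter described above. The term list and function name here are hypothetical, purely for illustration; real moderation systems are far more sophisticated, but the basic blind spot is the same.

```python
# Hypothetical word-list filter: flags a post if any term on the
# list appears as a whole word. Not TikTok's actual system.
FLAGGED_TERMS = {"sex", "dead"}

def is_flagged(post: str) -> bool:
    """Return True if any flagged term appears as a word in the post."""
    words = post.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

print(is_flagged("let's talk about sex"))    # True: the exact word is caught
print(is_flagged("let's talk about seggs"))  # False: the misspelling slips through
```

A human reads “seggs” effortlessly, but to a literal string match it’s just a different word, so the post passes until someone adds the variant to the list.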
A lot of the misspellings involve things like replacing the letter “o” with the number “0” or replacing an “s” with a dollar sign, but they also tend to evolve over time. One example on TikTok is “le dollar bean,” which means “lesbian.” This went through an extra layer of translation because at first people would use “le$bian,” which TikTok’s text-to-speech automation pronounced kind of like “le dollar bean.” This evolution might be because common misspellings are likely to get picked up by algorithms after a while, meaning the language people use will get more and more impenetrable over time.
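The arms race described above can be sketched in the same hypothetical style: a filter that normalizes the common single-character swaps (“0” for “o,” “$” for “s”) will catch the first generation of workarounds like “le$bian,” but not the next-generation term that replaced it. Again, the names and substitution table are illustrative assumptions, not any platform’s real implementation.

```python
# Hypothetical filter that undoes common character swaps before matching.
SUBSTITUTIONS = str.maketrans({"0": "o", "$": "s", "3": "e", "1": "i"})
FLAGGED_TERMS = {"lesbian"}

def is_flagged(post: str) -> bool:
    """Normalize leetspeak-style swaps, then check the flagged-term list."""
    normalized = post.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in FLAGGED_TERMS)

print(is_flagged("le$bian creators"))         # True: the "$" swap is undone
print(is_flagged("le dollar bean creators"))  # False: the newer term escapes
```

Each time the filter learns a substitution, users move one step further from the original spelling, which is exactly why the terminology keeps drifting toward the impenetrable.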
The use of emoji has changed a lot about the way we communicate. There’s the kind of emoji we use to augment a message, like saying “Hello 😊,” and there are also emoji that stand in for words. There are obvious, literal cases, like using ❤️ to mean “love” or 🔥 to mean “fire,” but there are also countless ways people are being more creative in how they deploy emoji. The 🌈 emoji, for example, can easily be deployed to talk about LGBTQ topics, and the 🌽 emoji is often used to refer to the lewd content that rhymes with “corn.” It’s a pictographic game that can be hard to parse if you’re not given lots of context.
Humans have always used euphemisms to talk around difficult topics. There are countless ways in English alone to talk about death — kicking the bucket, buying the farm, passing away — and most predate the internet by decades, if not centuries. But TikTok has given birth to another: “unalive,” which is a way to mention death, murder or suicide without getting flagged for violent content.
There are many other euphemisms in place, all of which are going in and out of style at any given time as language and censorship interact. Calling the coronavirus pandemic “the panini,” referring to nipples as “nip nops” and talking about homophobia as “cornucopia” all have gone in and out of fashion on TikTok. While it may sound silly to use somewhat infantile euphemisms when trying to tackle serious subjects, creators say that it’s the only way they can mention these topics without ruining their content’s performance.
Language On The Internet Vs. Language In Real Life
An oft-quoted line is that “the internet is not real life,” but as we get deeper into the digital age, it has started to feel less true. While you might once have laughed at the idea of someone saying “LOL” out loud, it’s now not that strange. In the moment, though, it’s notoriously difficult to predict how much long-term change will happen because of the interplay of language and censorship.
For the most part, internet language tends to get dated pretty quickly. One day people are unironically saying “doggo” and “pupper,” the next it’s considered incredibly uncool. It’s not too dissimilar from how slang works in general, with one generation’s cool lingo becoming the next’s punching bag. The internet speeds this process along by accelerating a word’s adoption, followed by its overuse.
Because these words are specifically designed to evade censorship, the turnover in terminology is even higher. It doesn’t take much effort for, say, TikTok to decide it no longer wants to promote content containing “unalive” and to add that word to the list of terms its algorithms watch for. When that happens, creators have to figure out yet another way to talk about death if they want a lot of people to see their posts.
People will always find ways to use language that gets around auto-censorship. Whether anyone will be saying “cornucopia” or “nip nops” a decade from now is anyone’s guess, though it’s probably unlikely. Still, certain terms could stand the test of time if they fill a specific need. If social media companies keep cracking down, the terminology will keep evolving. No matter what barriers to communication are put in place, language (and the people who use it) will find a way through.