Any resemblance to 1984’s Ministry of Truth is pure coincidence.

This is the story of Li An, a pseudonymous former employee at ByteDance, as told to Protocol’s Shen Lu.

My job was to use technology to make the low-level content moderators’ work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content.
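The article does not say how that similarity search was implemented. As a minimal sketch, one common approach is perceptual hashing: fingerprint each clip's frames, then look for fingerprints within a small Hamming distance. All names and thresholds below are illustrative assumptions, not ByteDance's actual system.

```python
# Hedged sketch: near-duplicate clip lookup via average-hash fingerprints.
# A frame is modeled as a 2D list of grayscale values (0-255).

def average_hash(frame):
    """Hash one frame into a bit tuple: each bit records whether a pixel
    is brighter than the frame's mean, so re-encoding or light edits
    barely change the hash."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def find_similar(query_hash, database, max_distance=2):
    """Return ids of clips whose stored hash is within max_distance bits."""
    return [clip_id for clip_id, h in database.items()
            if hamming(query_hash, h) <= max_distance]
```

A real pipeline would hash many sampled frames per clip and use an index rather than a linear scan, but the matching idea is the same: a moderator "throws a clip into the database" and gets back clips whose fingerprints nearly coincide.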

When I was at ByteDance, we received multiple requests from the bases (the company’s regional content moderation hubs) to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. (…) We eventually decided not to do it: We didn’t have enough Uyghur language data points in our system, and the most popular livestream rooms were already closely monitored.

Streamers speaking ethnic languages and dialects that Mandarin-speakers don’t understand would receive a warning to switch to Mandarin. (…)

The truth is, political speech comprised a tiny fraction of deleted content. Chinese netizens are fluent in self-censorship and know what not to say. (…) We mostly censored content the Chinese government considers morally hazardous — pornography, lewd conversations, nudity, graphic images and curse words — as well as unauthorized livestreaming sales and content that violated copyright.

But political speech still looms large. What Chinese user-generated content platforms most fear is failing to delete politically sensitive content that later puts the company under heavy government scrutiny. It’s a life-and-death matter. (…) ByteDance does not have strong government relationships like other tech giants do, so it’s walking a tightrope every second.

Many of my colleagues felt uneasy about what we were doing. Some of them had studied journalism in college. Some were graduates of top universities. They were well-educated and liberal-leaning. We would openly talk from time to time about how our work aided censorship. But we all felt that there was nothing we could do.

When it comes to day-to-day censorship, the Cyberspace Administration of China would frequently issue directives to ByteDance’s Content Quality Center (内容质量中心), which oversees the company’s domestic moderation operation: sometimes over 100 directives a day. The center would then task different teams with applying the specific instructions both to ongoing speech and to past content, which had to be searched to determine whether it could stand.

During livestreaming shows, every audio clip would be automatically transcribed into text, allowing algorithms to check the transcript against a long, constantly updated list of sensitive words, dates and names, and to run it through natural language processing models. Algorithms would then assess whether the content was risky enough to require individual monitoring.
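The screening step described above can be sketched as a simple blocklist match over the transcript, with a stand-in risk score where the real pipeline would consult an NLP model. The terms and threshold below are invented placeholders, not the actual list, which the article says was long and constantly updated.

```python
# Hedged sketch: flag a livestream transcript for human monitoring.
# SENSITIVE_TERMS and the scoring rule are illustrative assumptions.

SENSITIVE_TERMS = {"placeholder_term_1", "placeholder_term_2"}

def screen_transcript(text, terms=SENSITIVE_TERMS):
    """Match the transcript against the blocklist and decide whether
    the stream needs individual monitoring."""
    hits = [t for t in terms if t in text]
    # In the real pipeline, NLP models would also score context and
    # near-matches; a bare hit count stands in for that risk score here.
    risk = len(hits)
    return {"hits": hits, "flag_for_human_review": risk > 0}
```

Around sensitive anniversaries, the list itself would change, which is why the account stresses that it was "constantly updated": the matching logic stays fixed while the term set is swapped out.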

Around politically sensitive holidays, such as Oct. 1 (China’s National Day), July 1 (the birthday of the Chinese Communist Party) or major political anniversaries like the anniversary of the 1989 protests and crackdown in Tiananmen Square, the Content Quality Center would generate special lists of sensitive terms for content moderators to use.

Influencers enjoyed some special treatment — there were content moderators assigned specifically to monitor certain influencers’ channels in case their content or accounts were mistakenly deleted. Some extremely popular influencers, state media and government agencies were on a ByteDance-generated white list, free from any censorship — their compliance was assumed.

It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.