An Interview with a former TikTok policy manager
Marika Tedroff, also known as Malla, is interested in the internet, culture, and policy. Her experience ranges from VC-related work all the way to managing policy at TikTok, which she recently left. Most recently, she worked on TikTok's monetization policy team, crafting the moderation guidelines that make up some of TikTok's global policies. Last month, she wrote a post about her experiences at TikTok and why moderation is so hard. One theme that arose is the disconnect between how users view content moderation and the work that happens behind the scenes. We had a chat to dig into this issue further and get her thoughts on the future of Trust & Safety more generally. Here's an extract from our fascinating discussion, which covered decision-making at a globally important platform, the role of social media in society, and our musings on what online platforms might look like both with no moderation at all and in a heavily moderated environment.
IPPOLITA: In your post, you highlight that moderation decision-making is "all about binary choices, which is hard because content isn't binary, so the systems, processes and structures" used by platforms "make little sense." More transparency is needed around the "how's and why's" of content moderation, so that users understand this space better. So my question is: in what ways do you think the industry can shed light on the "how's and why's" of moderation systems for users? And do you think this disconnect is also a question of media literacy?
I absolutely think it's a question of media literacy, among other things. The space in general is complicated, and it relies heavily on everybody using their own critical judgement to assess what they're seeing and why things are happening. And in relation to transparency, I think there's a huge gap between the impression users get of how things work and how they actually work.
But we also have to remember that there are many reasons why not everything is super transparent, reasons that ultimately help protect users.
At the same time, I do think it is important that users understand there are operational and product issues behind these challenges; it's not as straightforward as many seem to think. People might attribute issues to value judgements or to the targeting of specific posts or accounts, but this is not what I experienced when I was working there; rather, ops issues played a key role. It's complex.
Overall, I don't think transparency alone, or even communicating the 'why' and 'how', will fix any problems. It's perhaps more about addressing users and their concerns and worries, because there are a lot of assumptions about how things work that are not always rooted in reality. People make these assumptions when they don't have enough information and data about 'how' and 'why' things are happening. This is true for most companies and industries.
IPPOLITA: You describe how, without moderators' work, users would not actually want to spend much time on these platforms, because they wouldn't be what they are today. But from a policy perspective, why is content classification such a hard task? Do you ever feel like you are in charge of regulating the 'world of information'? And how do you deal with the responsibility of having to decide what users do and don't see?
Yeah, it is super hard, because ultimately you have to represent a lot of different voices. There are a lot of elements to consider: legal restrictions, regulatory factors, public pressures, etc. For example, public pressure is why I wrote about this. There are a lot of discussions online about how social media platforms are harmful for young women's body image. Obviously it's not against the law to upload a video saying, "you should eat 500 calories a day to look hot." But then young women feel awful about themselves because of everything they see online, and obviously platforms could do something about it, right?
However, you always have this other side of it, which is: "people should be able to post whatever they want". And not every person will interpret a video about eating 500 calories a day as something that is harmful or triggering.
It's very much about how users perceive it. And it's hard to make that distinction as a platform, which is supposed to be a neutral place, because the minute you take a [policy] stance, you're also saying "this is good", "this is bad", and "this is what society should and shouldn't be". But ultimately, on a day-to-day basis, I didn't feel this kind of pressure, because that whole mission feels separate from what you're doing day to day, which is smaller tasks and making sure everything works.
However, balancing those legal requirements, the public pressure, what users do and don't want, and where we draw the line between being responsible and not silencing users is really hard, because not everything can fit into this box. So you always have to weigh these risks internally in your mind:
What is the harm of people seeing this versus the harm of people being silenced because of it? Which voice should carry the most weight? Should we listen to all the parents who are concerned about their children spending too much time on social media? Should we listen to what users want to publish? Should we listen to the business needs?
There are all these distinct voices, and a lot of noise. As a policy person you have to write something that meets all these different requirements, balancing these various needs.
I always envisioned it as standing on a line over the water: you're trying not to fall to either side, you know? You need to balance the weight.
IPPOLITA: To what extent were these policy challenges unique to TikTok? What role do you think the design of a specific platform plays into how content is created and disseminated?
I mean, I don't know. I hadn't worked at any other big company before that, so I'm not sure which things were TikTok-specific and which relate to big companies in general. But I think what is unique about TikTok, and other platforms that are increasingly moving in this direction, is the mix of content types and formats. For example, we now have video, live streaming and more, and all these different formats bring new moderation challenges. Pictures are easier, because you can quickly make an assessment. When there's video, there are all kinds of things and more nuances to account for. You need to think about how it is presented. What voice are they using in the video? What caption goes with it? Maybe the caption is the thing that reveals the video is actually sarcastic, or maybe it's the audio in the background that reveals that. How do we moderate live videos, when something is happening as we speak and no one can review it before it goes live? This is something I thought about a lot: how much more difficult it is when there is video involved.
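To make the multi-signal point concrete, here is a minimal Python sketch of the kind of logic involved. It is purely illustrative and not TikTok's actual system; the signal names, phrases and decision outcomes are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    """Hypothetical per-video signals a reviewer or model might look at."""
    caption: str
    audio_transcript: str
    frame_labels: list[str]   # e.g. ["person", "food", "text_overlay"]
    is_live: bool

# Illustrative phrase list only; real policies are far more nuanced.
RISKY_PHRASES = {"eat 500 calories a day"}

def assess(video: VideoSignals) -> str:
    """Toy decision: no single signal is enough; context across signals matters."""
    combined_text = f"{video.caption} {video.audio_transcript}".lower()
    phrase_hit = any(phrase in combined_text for phrase in RISKY_PHRASES)
    sarcasm_cue = "joke" in combined_text or "satire" in combined_text

    if video.is_live:
        # A live stream cannot be fully reviewed before viewers see it.
        return "route_to_live_monitoring"
    if phrase_hit and not sarcasm_cue:
        return "send_to_human_review"
    return "allow"
```

The point of the sketch is that the caption, the audio and the visual labels have to be read together, because any one of them in isolation can flip the interpretation.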
As for platforms: as TikTok grew over the last year (it's still new, which makes it crazy to think how quickly it exploded), it also became clear that it is playing a more prominent role in the reporting of events. I think that challenge might have been unique to TikTok because of its rapid growth.
IPPOLITA: You mentioned that you had more to say about "why it's hard to fix 'obvious' problems such as misinformation and scams." Misinformation is clearly a major issue. What are your thoughts on how social media platforms can tackle this?
Yeah, misinformation, fraud and all these things are very difficult for many reasons. I think two of the main drivers are:
a) Facts are increasingly expressed as opinions. It's very hard to say that something is misinformation when someone is just talking about their experience, even though it's presented in a way that is not really factual.
b) Bad actors never flash their red lights and say, "hey, I'm a fraudulent bad actor and I'm spreading misinformation".
For platforms, it's very hard to strike the balance between being proactive and reactive. For instance, a proactive measure would be the platform deciding before an election: "we're going to fact-check specific keywords because we know that misinformation will increase during an election". A reactive approach would be reviewing, after the election, everything that users have reported as misinformation.
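As a rough illustration of that distinction (not a description of any platform's real pipeline), the two approaches can be thought of as different entry points into the same review queue; the watchlist terms and data shapes below are assumptions made for the example.

```python
from collections import deque

# Hypothetical watchlist configured ahead of a known event (proactive).
ELECTION_WATCHLIST = {"ballot", "polling station", "results were rigged"}

review_queue = deque()  # items awaiting fact-checkers

def proactive_scan(post: dict) -> None:
    """Queue a post for fact-checking before anyone has reported it."""
    text = post["text"].lower()
    if any(term in text for term in ELECTION_WATCHLIST):
        review_queue.append({"post": post, "source": "proactive_keyword_match"})

def reactive_report(post: dict, reporter_id: str) -> None:
    """Queue a post only after a user has reported it."""
    review_queue.append({"post": post, "source": f"user_report:{reporter_id}"})
```

The trade-off Malla describes sits in the gap between the two: proactive scanning can catch content before harm spreads but requires knowing in advance what to look for, while reactive review only sees what users notice and report.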
So, finding the path forward is really hard, and I'm not sure what the future will look like. It is a very, very difficult problem, because these cases are hard to identify and it's hard to know when to do what when you're trying to work out what is actually true. It's especially difficult when events are unfolding: fact-checking something that is happening as we speak is really, really hard.
IPPOLITA: Definitely. And I think misinformation in particular is quite a tricky area because, you know, it's about the 'truth' of something, which is hard to define... Despite these challenges, do you feel optimistic about the future of online safety?
I am generally optimistic. It has become an important topic, and it wasn't one some years ago. It is becoming a core part of what people think about when they build companies. I've seen a lot of new startups in the social media and live chat space incorporating policy and community guidelines from the outset. So I think that, long-term, we will see a positive effect from this. But I also think that the platforms and products we're building are, at times, facilitating the very problems we are trying to fix, which means we will never really reach a perfect state.
IPPOLITA: Do you have an ideal, 'dream' solution in mind, and if so, what would it be?
I don't have a dream solution, but I think it would be cool to see a new social media platform where not everybody is allowed to post content. A place where we put more emphasis on verifying the account and the user at the beginning of the funnel, rather than on the content. It would be interesting to see how that impacts safety, content and integrity challenges. Of course, the utility and use cases would differ from traditional social media apps, but given that most people don't actually post content, it might still be interesting if it results in higher quality content for users.
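As a hedged sketch of what "verifying at the beginning of the funnel" could mean in practice, the check below happens on the account before anything is posted, rather than on the content afterwards; the field and function names are hypothetical.

```python
class PostingNotAllowed(Exception):
    """Raised when an unverified account tries to publish."""

def submit_post(user: dict, content: str) -> dict:
    # In this model the gate is on the account, not on the content itself.
    if not user.get("identity_verified", False):
        raise PostingNotAllowed("Account has not completed verification.")
    # Content moderation could still run afterwards, but the pool of
    # people who can post at all is smaller and more accountable.
    return {"author": user["id"], "content": content, "status": "published"}
```

The open question, which comes up next, is who decides which accounts get verified in the first place.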
IPPOLITA: How would you decide who has the right to post?
I don't know, this is just an idea. There are many different verification processes and approaches to explore. In general, I would love to see more transparency for users (including how things work and what happens to their content) and more communication with users, because otherwise it just creates a lot of frustration and false information going around. And I always think that better understanding will lead to better outcomes for everybody.
IPPOLITA: I agree... the same way that in school people learn about history, the government and laws, I think schools should incorporate learning about platforms and the internet, because ultimately they are part of our society today. For example, students should be equipped with knowledge of how a recommendation system works.
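For readers who have never looked inside one, here is a deliberately tiny example of the core idea behind an engagement-based recommendation system: score candidate items by predicted interest and show the highest-scoring ones first. The scoring rule and weights are assumptions for illustration, not any platform's actual ranking.

```python
def rank_feed(candidate_videos: list[dict], watched_topics: set[str]) -> list[dict]:
    """Toy ranking: favour topics the user has engaged with before."""
    def score(video: dict) -> float:
        topic_match = 1.0 if video["topic"] in watched_topics else 0.0
        # Blend personal history with overall popularity; the weights are arbitrary here.
        return 0.7 * topic_match + 0.3 * video["avg_watch_time"]

    return sorted(candidate_videos, key=score, reverse=True)

# A user who mostly watches fitness content keeps seeing fitness content first.
feed = rank_feed(
    [{"topic": "fitness", "avg_watch_time": 0.6},
     {"topic": "news", "avg_watch_time": 0.9}],
    watched_topics={"fitness"},
)
```

Even this toy version shows the feedback loop being described: what you engaged with yesterday shapes what you are shown today.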
IPPOLITA: Do you have any final thoughts on this?
Yeah. And just this general idea of knowing that "I'm really shaped by this app" and being reminded of that constantly, because it's so easy to forget. We look at these types of things when we're tired and in bed, and I don't think we really realise how much impact they have on the way we shape our worldviews.
For more posts like these, follow us. Stay tuned for more interviews and discussions with people working across the Trust & Safety and Brand Safety space.
At Unitary we build technology that enables safe and positive online experiences. Our goal is to understand the visual internet and create a transparent digital space.
For more information on what we do you can check out our website or email us at contact@unitary.ai.