Transparent Automated Content Moderation (TACo)

Online political discussions are increasingly perceived as negative, aggressive, and toxic. This is worrying, because exposure to toxic content undermines trust and fosters cynicism, contributing to a polarized society. Defining which "toxic" content should be regulated online, and how, is therefore one of the most pressing challenges for researchers today, because such definitions can inform automated content moderation systems that support healthy political conversations on a global scale. However, the available research on toxic content and its moderation is elite-driven and imposes top-down definitions of what is "good" or "bad" on users. This has resulted in biased content moderation models and has damaged the reputation of those who have implemented them. More importantly, a top-down approach removes agency from citizens at a time when many already feel they have too little influence on their daily information intake. The TACo Project therefore proposes a novel user-centric approach to automated content moderation. We (a) conduct exploratory social science research to learn what citizens themselves want from automated content moderation. We then (b) develop toxicity detection systems and automated moderation infrastructures based on this knowledge, testing them for validity and reliability. Finally, we (c) conduct experiments to test whether what citizens "want" truly benefits them, examining the effects of these technological affordances on both citizens and human content moderators.

The TACo Project, funded by the WWTF (Vienna Science and Technology Fund), is an interdisciplinary collaboration between data science (Technische Universität Wien) and political communication (Universität Wien) researchers, led by Univ.-Prof. Dr. Sophie Lecheler (PI, Universität Wien) and Univ.-Prof. Dr. Allan Hanbury (PI, TU Wien). The project pursues a user-centric approach to automated content moderation by studying user agency in online content regulation, developing toxicity detection systems and automated content moderation infrastructures, and testing their attitudinal, behavioral, and emotional effects on citizens and human content moderators.

Principal Investigators

Sophie Lecheler (Universität Wien)

Allan Hanbury (TU Wien)

Funding: WWTF Digital Humanism Call