arXiv:2404.17985

Detection of Conspiracy Theories Beyond Keyword Bias in German-Language Telegram Using Large Language Models

Published on Apr 27, 2024

Abstract

AI-generated summary

Supervised fine-tuning with BERT-like models and prompt-based approaches with GPT models both detect conspiracy theories in German Telegram messages effectively, with GPT-4 showing the strongest zero-shot performance.

The automated detection of conspiracy theories online typically relies on supervised learning. However, creating the respective training data requires expertise, time, and mental resilience, given the often harmful content. Moreover, available datasets are predominantly in English and often keyword-based, introducing a token-level bias into the models. Our work addresses the task of detecting conspiracy theories in German Telegram messages. We compare the performance of supervised fine-tuning approaches using BERT-like models with prompt-based approaches using Llama 2, GPT-3.5, and GPT-4, which require little or no additional training data. We use a dataset of ~4,000 messages collected during the COVID-19 pandemic, without the use of keyword filters. Our findings demonstrate that both approaches can be leveraged effectively: for supervised fine-tuning, we report an F1 score of ~0.8 for the positive class, making our model comparable to recent models trained on keyword-focused English corpora. We demonstrate our model's adaptability to intra-domain temporal shifts, achieving F1 scores of ~0.7. Among the prompting variants, the best-performing model is GPT-4, which achieves an F1 score of ~0.8 for the positive class in a zero-shot setting when equipped with a custom conspiracy theory definition.
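To make the two setups concrete: below is a minimal sketch of the prompt-based, zero-shot variant, assuming the OpenAI Python SDK. The conspiracy-theory definition in the prompt is a hypothetical placeholder; the paper's actual custom definition is not reproduced on this page.

```python
# Minimal zero-shot sketch (assumption: OpenAI Python SDK, openai>=1.0).
# The definition below is an illustrative placeholder, not the paper's wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEFINITION = (
    "A conspiracy theory is the belief that a covert group of actors is "
    "secretly coordinating to bring about an event or condition, typically "
    "against the common good."
)

def classify_message(message: str) -> str:
    """Return 'yes' if GPT-4 judges the German Telegram message to express
    a conspiracy theory, otherwise 'no'."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": DEFINITION
                + "\nAnswer with exactly one word: 'yes' if the following "
                  "message expresses a conspiracy theory, otherwise 'no'.",
            },
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content.strip().lower()
```

And a correspondingly minimal sketch of the supervised fine-tuning variant, assuming the Hugging Face transformers and datasets libraries. The checkpoint deepset/gbert-base and the toy examples are stand-ins, since the abstract names neither the exact BERT-like model nor the data.

```python
# Minimal fine-tuning sketch (assumptions: transformers, datasets;
# "deepset/gbert-base" stands in for the unspecified BERT-like German model).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "deepset/gbert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

# Hypothetical labelled messages (1 = conspiracy theory, 0 = other).
train = Dataset.from_dict({
    "text": ["Beispielnachricht eins ...", "Beispielnachricht zwei ..."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=256)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ct-detector", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```

The positive-class F1 scores quoted in the abstract correspond to sklearn.metrics.f1_score(y_true, y_pred, pos_label=1) computed on held-out messages.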

