Resources & Publications

All members of the network can share their recent work on media bias here.
Resources
The network's most recent models are published on Hugging Face (a minimal loading sketch follows this list)
[Benchmark, GitHub] MBIB – the first Media Bias Identification Benchmark Task and Dataset Collection
[Dataset, GitHub] BABE – Bias Annotations By Experts
[Scale/Questionnaire to measure bias perception] Do You Think It’s Biased? How To Ask For The Perception Of Media Bias (a set of tested questions for assessing media bias perception, suitable for use in any bias-related research)
[Dataset, Zenodo] MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics
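The models above can be loaded directly with the Hugging Face transformers library. The following is a minimal sketch; the model ID is a placeholder, not a real identifier, so substitute the actual ID from the group's Hugging Face page before running.

# Minimal loading sketch, assuming the transformers library is installed.
# "mediabiasgroup/your-model-id" is a placeholder, not a real model ID --
# substitute the ID listed on the group's Hugging Face page.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mediabiasgroup/your-model-id",  # placeholder
)
print(classifier("The senator's reckless plan would devastate the economy."))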
Publications
2024
Horych, Tomas; Wessel, Martin; Wahle, Jan Philip; Ruas, Terry; Wassmuth, Jerome; Greiner-Petter, Andre; Aizawa, Akiko; Gipp, Bela; Spinde, Timo
MAGPIE: Multi-Task Analysis of Media-Bias Generalization with Pre-Trained Identification of Expressions Proceedings Article
In: "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation", 2024.
Tags: dataset, multi-task learning, Transfer learning
@inproceedings{Horych2024a,
title = {MAGPIE: Multi-Task Analysis of Media-Bias Generalization with Pre-Trained Identification of Expressions},
author = {Tomas Horych and Martin Wessel and Jan Philip Wahle and Terry Ruas and Jerome Wassmuth and Andre Greiner-Petter and Akiko Aizawa and Bela Gipp and Timo Spinde},
url = {https://aclanthology.org/2024.lrec-main.952},
year = {2024},
date = {2024-02-01},
urldate = {2024-02-01},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation},
abstract = {Media bias detection poses a complex, multifaceted problem traditionally tackled using single-task models and small in-domain datasets, consequently lacking generalizability. To address this, we introduce MAGPIE, a large-scale multi-task pre-training approach explicitly tailored for media bias detection. To enable large-scale pre-training, we construct Large Bias Mixture (LBM), a compilation of 59 bias-related tasks. MAGPIE outperforms previous approaches in media bias detection on the Bias Annotation By Experts (BABE) dataset, with a relative improvement of 3.3% F1-score. Furthermore, using a RoBERTa encoder, we show that MAGPIE needs only 15% of fine-tuning steps compared to single-task approaches. We provide insight into task learning interference and show that sentiment analysis and emotion detection help learning of all other tasks, and scaling the number of tasks leads to the best results. MAGPIE confirms that MTL is a promising approach for addressing media bias detection, enhancing the accuracy and efficiency of existing models. Furthermore, LBM is the first available resource collection focused on media bias MTL.},
keywords = {dataset, multi-task learning, Transfer learning},
pubstate = {published},
tppubtype = {inproceedings}
}
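For readers unfamiliar with the multi-task setup the abstract describes, here is an illustrative sketch of the shared-encoder, per-task-head pattern; this is not the authors' MAGPIE implementation, and the task names and label counts are invented for illustration.

# Illustrative multi-task sketch (PyTorch + transformers), not the MAGPIE code:
# one shared RoBERTa encoder with one linear classification head per task.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBiasModel(nn.Module):
    def __init__(self, task_num_labels):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        # One head per task; all tasks share the encoder parameters.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token representation
        return self.heads[task](cls)

# Hypothetical task set; the actual LBM collection comprises 59 bias-related tasks.
model = MultiTaskBiasModel({"media_bias": 2, "sentiment": 3, "emotion": 6})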
2023
Spinde, Timo; Richter, Elisabeth; Wessel, Martin; Kulshrestha, Juhi; Donnay, Karsten
What do Twitter comments tell about news article bias? Assessing the impact of news article bias on its perception on Twitter Journal Article
In: Online Social Networks and Media, vol. 37-38, pp. 100264, 2023, ISSN: 2468-6964.
Tags: Hate speech detection, media bias, Sentiment analysis, Transfer learning
@article{SPINDE2023100264,
title = {What do Twitter comments tell about news article bias? Assessing the impact of news article bias on its perception on Twitter},
author = {Timo Spinde and Elisabeth Richter and Martin Wessel and Juhi Kulshrestha and Karsten Donnay},
url = {https://www.sciencedirect.com/science/article/pii/S246869642300023X},
doi = {10.1016/j.osnem.2023.100264},
issn = {2468-6964},
year = {2023},
date = {2023-01-01},
journal = {Online Social Networks and Media},
volume = {37-38},
pages = {100264},
abstract = {News stories circulating online, especially on social media platforms, are nowadays a primary source of information. Given the nature of social media, news is no longer just news but is embedded in the conversations of users interacting with it. This is particularly relevant for inaccurate information or even outright misinformation because user interaction has a crucial impact on whether information is uncritically disseminated or not. Biased coverage has been shown to affect personal decision-making. Still, it remains an open question whether users are aware of the biased reporting they encounter and how they react to it. The latter is particularly relevant given that user reactions help contextualize reporting for other users and can thus help mitigate but may also exacerbate the impact of biased media coverage. This paper approaches the question from a measurement point of view, examining whether reactions to news articles on Twitter can serve as bias indicators, i.e., whether the way users comment on a given article relates to its actual level of bias. We first give an overview of research on media bias before discussing key concepts related to how individuals engage with online content, focusing on the sentiment (or valence) of comments and on outright hate speech. We then present the first dataset connecting reliable human-made media bias classifications of news articles with the reactions these articles received on Twitter. We call our dataset BAT – Bias And Twitter. BAT covers 2,800 (bias-rated) news articles from 255 English-speaking news outlets. Additionally, BAT includes 175,807 comments and retweets referring to the articles. Based on BAT, we conduct a multi-feature analysis to identify comment characteristics and analyze whether Twitter reactions correlate with an article’s bias. First, we fine-tune and apply two XLNet-based classifiers for hate speech detection and sentiment analysis. Second, we relate the results of the classifiers to the article bias annotations within a multi-level regression. The results show that Twitter reactions to an article indicate its bias, and vice versa. With a regression coefficient of 0.703 (p<0.01), we specifically present evidence that Twitter reactions to biased articles are significantly more hateful. Our analysis shows that the news outlet’s individual stance reinforces the hate-bias relationship. In future work, we will extend the dataset and analysis, including additional concepts related to media bias.},
keywords = {Hate speech detection, media bias, Sentiment analysis, Transfer learning},
pubstate = {published},
tppubtype = {article}
}
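To make the analysis in the abstract concrete, here is a minimal sketch of the kind of multi-level regression it describes, relating per-comment hate-speech scores to article bias ratings with news outlets as the grouping factor. The column names and file path are assumptions for illustration, not the authors' actual pipeline.

# Sketch of a multi-level (mixed-effects) regression in the spirit of the BAT
# analysis. Column names and the CSV path are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bat_comments.csv")  # hypothetical export of the BAT dataset

# Random intercept per news outlet; fixed effect of the article's bias rating
# on the hate-speech score of the comments it received.
model = smf.mixedlm("hate_score ~ bias_rating", data=df, groups=df["outlet"])
result = model.fit()
print(result.summary())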