Resources
Our most recent models are published on Hugging Face (see the loading sketch after the resource list below).
[Benchmark, GitHub] MBIB – the first Media Bias Identification Benchmark Task and Dataset Collection
[Dataset, Huggingface] Anno-lexical (Lexical bias)
[Dataset, GitHub] BABE – Bias Annotations By Experts
[Dataset, Paper] BAT – Bias And Twitter
[Scale/Questionnaire to measure bias perception] Do You Think It’s Biased? How To Ask For The Perception Of Media Bias (a set of tested questions for assessing media bias perception, usable in any bias-related research)
[Dataset, Zenodo] MBIC – A Media Bias Annotation Dataset Including Annotator Characteristics
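For quick experimentation, resources hosted on the Hugging Face Hub can be loaded directly with the datasets and transformers libraries. The following is a minimal sketch in Python; the repository IDs used here (mediabiasgroup/BABE and mediabiasgroup/bias-classifier) are placeholders, so substitute the actual IDs listed on the group's Hugging Face page.

# Minimal sketch: load a bias-annotated dataset and run a bias classifier.
# The repository IDs are placeholders; replace them with the actual IDs
# from the Hugging Face organization page.
from datasets import load_dataset
from transformers import pipeline

# Load an expert-annotated bias dataset (placeholder repo ID).
babe = load_dataset("mediabiasgroup/BABE", split="train")
print(babe[0])  # one annotated sentence with its bias label

# Classify a new sentence with a pre-trained bias model (placeholder repo ID).
classifier = pipeline("text-classification", model="mediabiasgroup/bias-classifier")
print(classifier("The corrupt politicians once again ignored the will of the people."))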
Publications
2025
Spinde, Timo; Wu, Fei; Gaissmaier, Wolfgang; Demartini, Gianluca; Echizen, Isao; Giese, Helge
Enhancing media literacy: The effectiveness of (Human) annotations and bias visualizations on bias detection Journal Article
In: Information Processing & Management, vol. 62, no. 6, pp. 104244, 2025, ISSN: 0306-4573.
Tags: Language processing, media bias, News literacy, Text perception
@article{SPINDE2025104244,
title = {Enhancing media literacy: The effectiveness of (Human) annotations and bias visualizations on bias detection},
author = {Timo Spinde and Fei Wu and Wolfgang Gaissmaier and Gianluca Demartini and Isao Echizen and Helge Giese},
url = {https://www.sciencedirect.com/science/article/pii/S0306457325001852},
doi = {10.1016/j.ipm.2025.104244},
issn = {0306-4573},
year = {2025},
date = {2025-01-01},
journal = {Information Processing & Management},
volume = {62},
number = {6},
pages = {104244},
abstract = {Marking biased texts effectively increases media bias awareness, but its sustainability across new topics and unmarked news remains unclear, and the role of AI-generated bias labels is untested. This study examines how news consumers learn to perceive media bias from human- and AI-generated labels and identify biased language through highlighting, neutral rephrasing, and political orientation cues. We conducted two experiments with a teaching phase exposing participants to various bias-labeling conditions and a testing phase evaluating their ability to classify biased sentences and detect biased text in unlabeled news on new topics. We find that, compared to the control group, both human- and AI-generated sentential bias labels significantly improve bias classification (p < .001), though human labels are more effective (d = 0.42 vs. d = 0.23). Additionally, among all teaching interventions, participants best detect biased sentences when taught with biased sentence or phrase labels (p < .001), while politicized phrase labels reduce accuracy. The effectiveness of different media literacy interventions remains independent of political ideology, but conservative participants are generally less accurate (p = .011), suggesting an interaction between political inclinations and bias detection. Our research provides a novel experimental framework for assessing the generalizability of media bias awareness and offers practical implications for designing bias indicators in news-reading platforms and media literacy curricula.},
keywords = {Language processing, media bias, News literacy, Text perception},
pubstate = {published},
tppubtype = {article}
}