
Characterizing and Detecting Video Misinformation

Building a taxonomy from real-world examples

Research into detecting misinformation in videos remains sparse and mostly focused on deepfakes. We aim to change that by providing an overview of the different types of video misinformation, along with a dataset built from real-world examples.

As more and more people rely on social media as a major source of information, misinformation has become a pressing issue. Video-based misinformation is especially powerful because it is often perceived as more trustworthy than text, images, or audio alone. Yet little research has been conducted on video misinformation detection, and most existing efforts target deepfake detection. Crucially, the detection of other kinds of misinformation videos, such as those created through selective editing, out-of-context quoting, or reuse of unrelated footage, remains largely unexplored.

To close this gap, our team is developing a taxonomy that captures how misinformation videos are created, both technically and semantically. Building on this taxonomy, we characterize what makes these videos misleading, and we will soon release an annotated dataset of real-world examples. With the taxonomy and dataset together, we want to enable future research on currently under-explored types of misinformation videos.
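To give a feel for what such annotations might look like, here is a minimal sketch of a single dataset entry. The field names and label values are purely illustrative assumptions, not the actual schema of our (not yet released) dataset:

```python
from dataclasses import dataclass, field


@dataclass
class VideoAnnotation:
    """Hypothetical annotation record for one misinformation video.

    Field names and label values are illustrative only and do not
    reflect the final dataset schema.
    """
    video_id: str                  # identifier of the annotated video
    technique: str                 # technical manipulation, e.g. "selective_editing"
    semantic_type: str             # semantic category, e.g. "out_of_context_quote"
    description: str = ""          # free-text rationale written by the annotator
    evidence_urls: list[str] = field(default_factory=list)  # debunking sources


# Example: footage from an unrelated event reused to support a false claim.
example = VideoAnnotation(
    video_id="example-0001",
    technique="footage_reuse",
    semantic_type="unrelated_event",
    description="Old protest footage presented as coverage of a current event.",
)
```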

Looking ahead, we plan to develop models that detect the kinds of video misinformation that require semantic cues, such as footage reuse or selective editing.
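As a rough sketch of what a simple footage-reuse baseline could involve (this is a generic perceptual-hashing approach, not the semantic models mentioned above, and it assumes opencv-python, Pillow, and imagehash are available), one might compare sampled frames of a suspect video against known source footage:

```python
# Minimal sketch: flag shared footage by comparing perceptual hashes of frames.
import cv2
import imagehash
from PIL import Image


def frame_hashes(path: str, every_n_frames: int = 30) -> list[imagehash.ImageHash]:
    """Compute perceptual hashes for frames sampled from a video file."""
    hashes = []
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return hashes


def shares_footage(suspect_path: str, reference_path: str, max_distance: int = 8) -> bool:
    """Return True if any sampled frame pair differs by at most max_distance bits."""
    reference_hashes = frame_hashes(reference_path)
    return any(
        h - r <= max_distance
        for h in frame_hashes(suspect_path)
        for r in reference_hashes
    )
```

A baseline like this can catch near-identical reused clips, but it says nothing about whether the reuse is misleading in context, which is exactly where semantic cues come in.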