DOI
10.5703/1288284318531
Description
Misinformation and rumors during public health crises significantly affect lives and increase mistrust in government and public health institutions. The circulation and amplification of rumors often worsen disease transmission and impede effective interventions. Historically, rumor spread has been a recurring issue in public health emergencies. For example, during the Ebola outbreak, misinformation about the virus and its treatment created deep mistrust of health workers. Similarly, in the COVID-19 pandemic, widespread misinformation fueled public panic and vaccine hesitancy. Previous research has primarily identified social media as a channel for rumor dissemination and focused on automatic rumor detection for fact-checking using a binary true/false approach. However, there remains a gap in understanding the emotional dynamics and psychological drivers underlying rumor spread. This research introduces a Large Language Model (LLM)-augmented framework that leverages Generative Pre-trained Transformers (GPT) in alignment with established psychological theories of rumor. Using a COVID-19 rumor dataset from Sierra Leone, we identify and visualize the emotional undertones that drive rumor diffusion during public health crises. We focus on three core emotional categories (fear, aggression, and wish) and examine their temporal variations and co-occurrence patterns. A user study was conducted to evaluate GPT's performance in emotion identification by comparing its predictions against participant ratings. Results reveal that the framework effectively uncovers key emotional drivers of rumor spread and provides empirical evidence of GPT's reliability in emotion recognition. This work contributes a novel methodological approach to misinformation analysis and offers data-driven insights for designing effective communication and intervention strategies in under-resourced regions.
Emotion-Driven Rumor Detection and Analysis: An LLM (Large Language Model)-Augmented Approach
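For readers who want a concrete picture of the GPT-based emotion labeling the description mentions, the sketch below shows one hypothetical way to ask a GPT model to tag a rumor with fear, aggression, or wish. It is a minimal illustration only, assuming the OpenAI Python SDK (openai>=1.0) and an API key in the environment; the model name, prompt wording, and example rumor are assumptions for illustration, not the authors' actual pipeline.

# Hypothetical sketch: label a rumor's dominant emotional undertone with GPT.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

EMOTIONS = ["fear", "aggression", "wish"]  # the three categories named in the description

def classify_rumor_emotion(rumor_text: str) -> str:
    """Ask the model which of the three rumor emotions dominates the text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study's exact GPT version is not given here
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify public-health rumors by their dominant emotional "
                    "undertone. Answer with exactly one of: " + ", ".join(EMOTIONS) + "."
                ),
            },
            {"role": "user", "content": rumor_text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    # Fall back to an explicit marker if the reply is not one of the expected labels.
    return label if label in EMOTIONS else "unclassified"

if __name__ == "__main__":
    # Illustrative example text, not drawn from the study's dataset.
    print(classify_rumor_emotion("Drinking hot salt water cures the virus overnight."))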