A new study, “Multi-Stakeholder Disaster Insights from Social Media Using Large Language Models,” led by Loris Belcastro and published on March 30, 2025, explores how generative AI can enhance disaster response by extracting actionable insights from social media for diverse stakeholders.
The study highlights social media’s critical role in disaster response: it enables affected users to share feedback in real time. Current methods, however, struggle to automate and customize the analysis of this data for diverse stakeholders such as emergency services and media outlets. The paper introduces a methodology that leverages large language models (LLMs) to automate, aggregate, and customize the processing of user-reported issues from social media during disasters. This approach bridges the gap between raw data and actionable insights, improving the coordination of relief efforts, resource distribution, and communication strategies.
Leveraging Generative AI for Disaster Response: A New Frontier in Crisis Management
Social media has become an indispensable tool for real-time information sharing in the wake of disasters. Users post updates on everything from collapsed buildings to service outages, creating a wealth of data that can inform emergency response efforts. However, translating this raw data into actionable insights remains a challenge. Enter large language models (LLMs), which are now being harnessed to automate the classification, aggregation, and customization of social media content during crises.
This approach combines encoder-based models like BERT for precise multi-dimensional classification—identifying elements such as sentiment, emotion, geolocation, and topic—with decoder-based models like ChatGPT for generating human-readable reports tailored to specific stakeholders. By bridging the gap between raw user feedback and stakeholder-specific insights, this methodology aims to improve coordination among emergency services, media outlets, and other decision-makers during disasters.
The proposed methodology begins with the collection of social media posts from disaster-affected areas. These posts are then processed using encoder-based models like BERT, which classify the content across multiple dimensions. This classification enables the identification of critical issues such as infrastructure damage or service outages, providing a structured foundation for further analysis.
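The classification step above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the keyword rules stand in for fine-tuned BERT classifiers (one per dimension), and the dimension labels are assumptions drawn from the dimensions the article mentions (sentiment, topic, geolocation).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClassifiedPost:
    text: str
    topic: str                # e.g. "infrastructure", "utilities", "other"
    sentiment: str            # e.g. "negative", "neutral"
    location: Optional[str]   # geolocation attached to the post, if any

# Toy keyword rules standing in for per-dimension BERT classifiers;
# a real system would run each post through a fine-tuned model instead.
TOPIC_KEYWORDS = {
    "infrastructure": ["collapsed", "bridge", "building", "road"],
    "utilities": ["power", "outage", "water", "gas"],
}
NEGATIVE_CUES = ("damage", "collapsed", "outage", "help")

def classify(text: str, location: Optional[str] = None) -> ClassifiedPost:
    """Assign a topic and sentiment label to one social media post."""
    lowered = text.lower()
    topic = "other"
    for label, words in TOPIC_KEYWORDS.items():
        if any(w in lowered for w in words):
            topic = label
            break
    sentiment = "negative" if any(w in lowered for w in NEGATIVE_CUES) else "neutral"
    return ClassifiedPost(text=text, topic=topic, sentiment=sentiment, location=location)

post = classify("Power outage across the east side, no water either", location="East District")
print(post.topic, post.sentiment)  # utilities negative
```

The structured output (topic, sentiment, location per post) is what makes the later aggregation and report-generation steps possible.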
Once classified, the data is fed into decoder-based models like ChatGPT, which generate detailed, stakeholder-specific reports. For example, emergency responders might receive reports highlighting immediate risks, while utility companies could be informed about specific service outages. This dual approach not only enhances the efficiency of disaster response but also ensures that information is presented in a way that facilitates rapid decision-making.
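The stakeholder-specific generation step can be illustrated with a simple prompt-assembly routine. This is a sketch under our own assumptions: the stakeholder profiles and prompt wording are invented for illustration, and in practice the resulting prompt would be sent to a decoder model such as ChatGPT rather than printed.

```python
# Hypothetical stakeholder profiles: each maps to the topics it cares about
# and the angle the generated report should emphasize.
STAKEHOLDERS = {
    "emergency_responders": {
        "topics": {"infrastructure"},
        "focus": "immediate risks to life and safety",
    },
    "utility_companies": {
        "topics": {"utilities"},
        "focus": "location and extent of service outages",
    },
}

def build_report_prompt(stakeholder: str, classified_posts: list) -> str:
    """Assemble a prompt asking a decoder LLM for a stakeholder-specific report.

    Each classified post is a dict with "text", "topic", and optional "location"
    keys, mirroring the output of the classification step.
    """
    profile = STAKEHOLDERS[stakeholder]
    relevant = [p for p in classified_posts if p["topic"] in profile["topics"]]
    lines = "\n".join(
        f"- [{p.get('location') or 'unknown location'}] {p['text']}" for p in relevant
    )
    return (
        f"Summarize the following disaster-related posts for "
        f"{stakeholder.replace('_', ' ')}, emphasizing {profile['focus']}:\n{lines}"
    )

posts = [
    {"text": "Bridge on 5th Ave has collapsed", "topic": "infrastructure", "location": "Downtown"},
    {"text": "No power in the east side", "topic": "utilities", "location": "East District"},
]
print(build_report_prompt("utility_companies", posts))
```

Filtering by topic before prompting keeps each stakeholder's report focused: the utility company sees only the outage reports, not the collapsed bridge.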
Compared to standard approaches, this methodology demonstrates superior performance in terms of accuracy and relevance. By leveraging the real-time processing capabilities of LLMs, it enables early detection of sub-events and the automatic generation of detailed reports—critical tools during crises when time is of the essence.
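Early detection of sub-events, mentioned above, typically amounts to spotting a sudden spike of posts about one topic. The sliding-window threshold below is our own simplification for illustration, not the paper's method:

```python
from collections import Counter

def detect_sub_events(timestamped_topics, window=60, threshold=5):
    """Flag topics whose post count within any `window`-second span reaches
    `threshold`. `timestamped_topics` is a list of (seconds, topic) pairs."""
    events = set()
    ordered = sorted(timestamped_topics)
    for t0, _ in ordered:
        # Count posts per topic in the window starting at this post.
        counts = Counter(topic for t, topic in ordered if t0 <= t < t0 + window)
        for topic, n in counts.items():
            if n >= threshold:
                events.add(topic)
    return events

# Five "utilities" posts within 60 seconds trip the detector;
# a single "infrastructure" post does not.
stream = [(t, "utilities") for t in range(0, 50, 10)] + [(5, "infrastructure")]
print(detect_sub_events(stream))  # {'utilities'}
```

A production system would stream posts incrementally rather than rescan the list, but the thresholding idea is the same.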
At the heart of this innovation is the integration of multi-dimensional classification with generative reporting. Encoder-based models like BERT excel at understanding linguistic context, allowing for precise categorization of social media posts across multiple dimensions. This classification is essential for identifying patterns and prioritizing issues during a crisis.
Once classified, decoder-based models like ChatGPT take over, transforming the structured data into human-readable reports. These reports are not only comprehensive but also tailored to the specific needs of different stakeholders. For instance, a report generated for emergency responders might emphasize immediate risks, while one for city planners could focus on long-term infrastructure repair.
This combination of classification and generation ensures that information is accurate and actionable, enabling more efficient coordination among response teams. By automating this process, the methodology reduces the time and resources required to analyze social media data during disasters, ultimately improving outcomes for affected communities.
The integration of generative AI into disaster response represents a significant leap forward in crisis management. This methodology addresses critical gaps in current systems by leveraging large language models to automate the classification and generation of insights from social media. It enhances the efficiency of emergency response and ensures that information is presented in a way that facilitates rapid decision-making.
As disasters become more frequent and severe, the need for innovative solutions like this becomes increasingly urgent. By harnessing the power of generative AI, we can transform how we respond to crises, ultimately saving lives and reducing the impact of disasters on communities worldwide.
More information
Multi-Stakeholder Disaster Insights from Social Media Using Large Language Models
DOI: https://doi.org/10.48550/arXiv.2504.00046
