Title: Weakly Supervised Multimodal Fusion of Remote Sensing and Social Media Data for Urban Flood Mapping
Abstract: Accurate identification of urban flood extent is essential for disaster response and risk management. However, owing to data heterogeneity and spatial bias, existing data-fusion studies rarely use social media to guide remote sensing annotation for flood mapping. In this article, we propose a weakly supervised multimodal fusion framework that integrates spatially relocated social media data with remote sensing features to guide the generation of accurate flooded/non-flooded pseudo labels. Based on this weakly supervised information, we train a deep learning model that takes pre- and post-event spectral bands and spectral indices as inputs. To evaluate the proposed model, the 2017 Hurricane Harvey flood in Houston, USA, is taken as a case study. For flood mapping, the proposed model achieves a precision of 89.75%, a recall of 81.83%, an F1 score of 84.94%, a mean intersection over union (mIoU) of 74.98%, an overall accuracy (OA) of 97.51%, and a kappa coefficient of 0.8612. It outperforms both unsupervised and conventional supervised methods and maintains high segmentation accuracy in complex environments such as dense urban areas and major roads. We conclude that the proposed framework better fuses remote sensing and social media data, supporting accurate flood mapping and disaster management.
Index Terms: Flood mapping, social media, multimodal data, data fusion, remote sensing.
DOI: 10.1109/JSTARS.2026.3680591



