BENCHMARKING DEEP LEARNING MODELS FOR IMPROVED STREAMFLOW PREDICTION

    Liu, Jiangtao, Civil and Environmental Engineering, The Pennsylvania State University, 201 Old Main, State College, PA, 16802, liujiangtao3@gmail.com; Shen, Chaopeng, The Pennsylvania State University; O’Donncha, Fearghal, IBM Research; Song, Yalan, The Pennsylvania State University; Zhi, Wei, Hohai University; Beck, Hylke, King Abdullah University of Science and Technology.

    Accurate streamflow prediction is essential for effective river management, flood mitigation, and ecological restoration. Deep learning models, especially Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM), have shown notable success in predicting streamflow dynamics. Recently, the attention-based Transformer architecture has emerged as a promising alternative due to its effectiveness in capturing long-term dependencies. However, the relative performance of Transformer-based models compared to LSTM in hydrological forecasting remains unclear. In this study, we systematically benchmark 11 Transformer variants against LSTM models across diverse streamflow prediction scenarios using large-scale basin datasets, including CAMELS and global streamflow observations. Our results indicate that while LSTM models excel in regression and memory-dependent streamflow prediction tasks, Transformer models demonstrate superior performance in more complex forecasting tasks, particularly for longer prediction horizons. This benchmarking effort provides clear guidance on the applicability and effectiveness of advanced deep learning models for hydrological predictions, informing future efforts toward improved streamflow forecasting and management.
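    For readers unfamiliar with the recurrent architecture referenced above, a single LSTM cell step can be sketched in plain NumPy. This is a minimal illustration of the gating mechanism that gives LSTMs their memory of past forcings, not the benchmarked implementation; all array sizes and variable names here are illustrative assumptions.

    ```python
    import numpy as np

    def lstm_step(x, h_prev, c_prev, W, U, b):
        """One LSTM cell step.

        x: input vector (e.g. meteorological forcings such as precipitation)
        h_prev, c_prev: previous hidden and cell states
        W, U, b: stacked gate parameters (input, forget, output, candidate)
        """
        hidden = h_prev.shape[0]
        z = W @ x + U @ h_prev + b                     # stacked pre-activations, shape (4*hidden,)
        i = 1 / (1 + np.exp(-z[:hidden]))              # input gate: how much new info to store
        f = 1 / (1 + np.exp(-z[hidden:2*hidden]))      # forget gate: how much basin memory to keep
        o = 1 / (1 + np.exp(-z[2*hidden:3*hidden]))    # output gate
        g = np.tanh(z[3*hidden:])                      # candidate cell update
        c = f * c_prev + i * g                         # cell state carries long-term memory
        h = o * np.tanh(c)                             # hidden state feeds the streamflow regression head
        return h, c

    # Illustrative sizes: 5 forcing variables, 8 hidden units
    rng = np.random.default_rng(0)
    n_in, n_hid = 5, 8
    W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
    U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
    b = np.zeros(4 * n_hid)
    h = c = np.zeros(n_hid)
    for t in range(10):                                # unroll over a short forcing sequence
        h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
    ```

    The multiplicative forget gate `f` is what lets the cell state retain hydrologically relevant memory (e.g. soil moisture or snowpack signals) across long sequences, which is the "memory-dependent" behavior the abstract contrasts with the Transformer's attention mechanism.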

    Streamflow Forecasting, Deep Learning, Hydrological Modeling