1.
Mid- to Long-Term Runoff Prediction Based on Deep Learning at Different Time Scales in the Upper Yangtze River Basin. Water 2022. [DOI: 10.3390/w14111692] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Indexed: 12/07/2022]
Abstract
Deep learning models are essential tools for mid- to long-term runoff prediction, yet the influence of the input time lag and the output lead time on their predictions has received little study. Based on 290 schemas, this study specified different time lags with sliding windows and predicted runoff with RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory), and GRU (Gated Recurrent Unit) models at five hydrological stations in the upper Yangtze River during 1980–2018 at daily, ten-day, and monthly scales. Because different models have different optimal time lags, multiple time lags were analyzed to determine the relationship between the time interval and the accuracy of runoff prediction. The results show that the optimal time-lag settings for the RNN, LSTM, and GRU models at the daily, ten-day, and monthly scales were 7 days, 24 ten-day periods, 27 ten-day periods, 24 ten-day periods, 24 months, 27 months, and 21 months, respectively. Furthermore, as the time lag increased, the simulation accuracy stabilized beyond a certain time lag at all time scales. Increased lead time was linearly related to decreased NSE for daily and ten-day runoff prediction, whereas no significant linear relationship between NSE and lead time appeared for monthly prediction; choosing the smallest lead time therefore gave the best results at every time scale. The RMSE of the three models showed that RNN was inferior to LSTM and GRU in runoff prediction, and none of the three models could accurately predict extreme runoff events at any time scale. This study highlights the influence of time-lag setting and lead-time selection on mid- to long-term runoff prediction in the upper Yangtze River basin. Researchers are advised to evaluate the effect of the time lag before using deep learning models for runoff prediction and, to obtain the best prediction, to choose the shortest lead time as the output at each time scale.
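The time-lag and lead-time setup described in the abstract can be sketched as a sliding-window transform over the runoff series: each training sample takes `time_lag` past values as input and the value `lead_time` steps ahead as the target. This is a minimal illustration, not the paper's code; `make_supervised` and its arguments are assumed names.

```python
import numpy as np

def make_supervised(series, time_lag, lead_time):
    """Build (X, y) pairs from a runoff series (illustrative sketch):
    each sample uses `time_lag` past steps to predict the value
    `lead_time` steps ahead."""
    X, y = [], []
    last = len(series) - lead_time       # last valid window start
    for t in range(time_lag, last + 1):
        X.append(series[t - time_lag:t])  # input window of past values
        y.append(series[t + lead_time - 1])  # target `lead_time` ahead
    return np.array(X), np.array(y)

# e.g. a daily series with the paper's optimal daily time lag of 7 days
series = np.arange(100, dtype=float)     # stand-in for observed runoff
X, y = make_supervised(series, time_lag=7, lead_time=1)
print(X.shape, y.shape)  # (93, 7) (93,)
```

The resulting `X` would feed an RNN/LSTM/GRU as a sequence of length `time_lag`; varying `time_lag` and `lead_time` over a grid is what produces the many schemas compared in the study.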
2.
FedGCN: Federated Learning-Based Graph Convolutional Networks for Non-Euclidean Spatial Data. Mathematics 2022. [DOI: 10.3390/math10061000] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 02/01/2023]
Abstract
Federated Learning (FL) can combine multiple clients for training while keeping client data local, which makes it a good way to protect data privacy. Many excellent FL algorithms exist; however, most of them can only process data with regular structure, such as images and videos, and cannot process non-Euclidean spatial data, i.e., irregular data. To address this problem, we propose a Federated Learning-Based Graph Convolutional Network (FedGCN). First, we propose a Graph Convolutional Network (GCN) as the local model of FL. Based on the classical graph convolutional neural network, TopK pooling layers and fully connected layers are added to this model to improve its feature-extraction ability; furthermore, to prevent the pooling layers from losing information, cross-layer fusion is used in the GCN, giving FL an excellent ability to process non-Euclidean spatial data. Second, a federated aggregation algorithm based on an online adjustable attention mechanism is proposed: a trainable parameter ρ is introduced into the attention mechanism, and the aggregation method assigns a corresponding attention coefficient to each local model, which reduces the damage that inefficient local model parameters cause to the global model and improves the fault tolerance and accuracy of the FL algorithm. Finally, experiments on six non-Euclidean spatial datasets verify that the proposed algorithm not only has good accuracy but also a certain degree of generality, and it performs well with different graph neural networks.
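The attention-weighted aggregation step described above can be sketched as a coefficient-weighted average of client parameters. This is a minimal NumPy illustration under assumed names (`aggregate`, `scores`); the paper's actual attention-score computation and the training of ρ are not reproduced, only the idea that softmax coefficients scale each local model's contribution.

```python
import numpy as np

def aggregate(local_weights, scores, rho=1.0):
    """Attention-style federated aggregation (illustrative sketch).

    local_weights: one list of parameter arrays per client.
    scores: one raw attention score per client (how this score is
        learned is the paper's contribution; here it is just an input).
    rho: scaling factor, standing in for the trainable parameter ρ.
    """
    s = np.asarray(scores, dtype=float) * rho
    alpha = np.exp(s - s.max())   # numerically stable softmax
    alpha /= alpha.sum()          # attention coefficients, sum to 1
    # weighted average of each parameter tensor across clients
    return [sum(a * w[k] for a, w in zip(alpha, local_weights))
            for k in range(len(local_weights[0]))]

# three clients, each holding one 2-element weight vector
clients = [[np.ones(2)], [2 * np.ones(2)], [3 * np.ones(2)]]
print(aggregate(clients, scores=[0.0, 0.0, 0.0])[0])  # equal weights -> [2. 2.]
```

With unequal scores, a client judged inefficient receives a small coefficient, limiting the damage its parameters can do to the global model, which is the fault-tolerance property the abstract claims.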
3.
Li X, Yang X, Li X, Lu S, Ye Y, Ban Y. GCDB-UNet: A novel robust cloud detection approach for remote sensing images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107890] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Indexed: 10/19/2022]