1. Calvo-Bascones P, Sanz-Bobi MA. Advanced Prognosis methodology based on behavioral indicators and Chained Sequential Memory Neural Networks with a diesel engine application. Comput Ind 2023. DOI: 10.1016/j.compind.2022.103771
2. Fu Y, Cao H, Chen X, Ding J. Task-incremental broad learning system for multi-component intelligent fault diagnosis of machinery. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108730
3. Chin CS, Zhang R. Noise modeling of offshore platform using progressive normalized distance from worst-case error for optimal neuron numbers in deep belief network. Soft Comput 2021. DOI: 10.1007/s00500-020-05163-5
Abstract
Noise prediction is important for crew comfort on an offshore platform such as an oil drilling rig, yet deep neural network learning for oil drilling rigs has not been widely studied. In this paper, a deep belief network (DBN) whose last layer is initialized with a trained DBN (named DBN-DNN) is used to model the sound pressure level (SPL) in the compartments of the oil drilling rig. The method finds an optimal number of hidden neurons in the restricted Boltzmann machine of each hidden layer progressively, using a normalized Euclidean distance from the worst possible error. The dataset used for the experimental results is obtained from vibroacoustic simulation software such as VA-One and from actual site measurements. The results show that, for output parameters such as spatial SPL, average spatial SPL, structure-borne SPL and airborne SPL, the testing root mean square error improves to around 20% compared with randomly assigning the number of neurons for each hidden layer. The testing RMSE of the output parameters is also improved compared with a multi-layer perceptron, sparse autoencoder, Softmax, self-taught learning and extreme learning machine.
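The neuron-selection rule described in this abstract can be illustrated with a short sketch. This is not the authors' code: the candidate layer sizes, the layer_error stand-in, and the exact normalization are assumptions used only to show the idea of progressively fixing each hidden layer's size at the candidate whose error lies furthest, in normalized terms, from the worst-case error.

```python
# Minimal sketch (assumed, not the paper's implementation): progressive,
# layer-by-layer selection of hidden-neuron counts using a normalized
# distance from the worst-case error among the candidates.
import numpy as np

rng = np.random.default_rng(0)

def layer_error(layer_sizes, data):
    """Hypothetical stand-in for training a stacked RBM/DBN with the given
    hidden-layer sizes and returning its reconstruction/validation error."""
    # Toy surrogate: error shrinks with capacity but is noisy.
    capacity = sum(layer_sizes)
    return 1.0 / (1.0 + 0.01 * capacity) + 0.01 * rng.standard_normal()

def pick_layer_sizes(data, n_layers=3, candidates=(16, 32, 64, 128, 256)):
    """Greedily choose each hidden layer's size by maximizing the
    normalized distance of its error from the worst candidate's error."""
    chosen = []
    for _ in range(n_layers):
        errors = np.array([layer_error(chosen + [c], data) for c in candidates])
        worst = errors.max()
        # Normalized distance from the worst-case error: 0 = worst candidate.
        distance = (worst - errors) / (np.linalg.norm(worst - errors) + 1e-12)
        chosen.append(candidates[int(np.argmax(distance))])
    return chosen

if __name__ == "__main__":
    X = rng.standard_normal((200, 40))   # illustrative data only
    print("selected hidden-layer sizes:", pick_layer_sizes(X))
```

Under these assumptions, the normalization simply rescales the error gap for each layer so that the selection criterion is comparable from one hidden layer to the next as the network is grown progressively.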