1
Omole EO, Adeyefa EO, Apanpa KI, Ayodele VI, Amoyedo FE, Emadifar H. Unveiling the Power of Implicit Six-Point Block Scheme: Advancing numerical approximation of two-dimensional PDEs in physical systems. PLoS One 2024; 19:e0301505. PMID: 38753696; PMCID: PMC11098417; DOI: 10.1371/journal.pone.0301505.
Abstract
In the era of computational advancements, harnessing computer algorithms to approximate solutions of differential equations has become indispensable, owing to its unparalleled productivity. The numerical approximation of partial differential equation (PDE) models is crucial in modelling physical systems, driving the need for robust methodologies. In this article, we introduce the Implicit Six-Point Block Scheme (ISBS), a collocation-based method for the second-order numerical approximation of ordinary differential equations (ODEs) derived from one- or two-dimensional physical systems. The methodology transforms the governing PDEs into a system of ordinary differential equations by employing ISBS to handle the spatial derivatives while a central difference scheme handles the temporal or y-derivatives. The convergence properties of ISBS are rigorously analyzed within the framework of linear multistep methods. The numerical results obtained with ISBS agree closely with the theoretical solutions. Additionally, we compute absolute errors across various problem instances, demonstrating the robustness and efficacy of ISBS in practical applications. Furthermore, a comparative analysis with existing methods from the recent literature highlights the superior performance of ISBS. Our findings are substantiated by illustrative tables and figures, underscoring the potential of ISBS to advance the numerical approximation of two-dimensional PDEs in physical systems.
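The semi-discretization strategy this abstract describes can be illustrated with a minimal method-of-lines sketch. The ISBS itself is not reproduced here: a standard second-order central difference in one direction and a Crank-Nicolson step in the other stand in for it, on a hypothetical test problem (the heat equation u_t = u_xx on [0,1] with exact solution e^{-pi^2 t} sin(pi x)); the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def heat_mol(nx=50, nt=200, T=0.1):
    """Method of lines for u_t = u_xx, u(0,t)=u(1,t)=0, u(x,0)=sin(pi*x)."""
    dx = 1.0 / nx
    dt = T / nt
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)  # initial condition
    # Second-difference matrix on the interior nodes (Dirichlet BCs).
    m = nx - 1
    L = (np.diag(np.full(m - 1, 1.0), -1)
         - 2.0 * np.eye(m)
         + np.diag(np.full(m - 1, 1.0), 1)) / dx**2
    # Crank-Nicolson: (I - dt/2 L) u_next = (I + dt/2 L) u_current.
    A = np.eye(m) - 0.5 * dt * L
    B = np.eye(m) + 0.5 * dt * L
    for _ in range(nt):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return x, u

x, u = heat_mol()
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
```

With this grid the computed profile matches the exact decayed sine mode to well under one percent, which is the behaviour the block-scheme approach is competing against.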
Affiliation(s)
- Ezekiel Olaoluwa Omole
- Department Physical Sciences, Mathematics Programme, College of Pure and Applied Sciences, Landmark University, Omu-Aran, Kwara State, Nigeria
- SDG 4: Quality Education Research Group, Landmark University, Omu-Aran, Nigeria
- Emmanuel Olusheye Adeyefa
- Mathematics Department, Faculty of Science, Federal University Oye-Ekiti, Oye-Ekiti, Ekiti State, Nigeria
- Femi Emmanuel Amoyedo
- Department Physical Sciences, Mathematics Programme, College of Pure and Applied Sciences, Landmark University, Omu-Aran, Kwara State, Nigeria
- SDG 4: Quality Education Research Group, Landmark University, Omu-Aran, Nigeria
- Homan Emadifar
- Department of Mathematics, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India
- MEU Research Unit, Middle East University, Amman, Jordan
- Department of Mathematics, Hamedan Branch, Islamic Azad University, Hamedan, Iran
2
Lu Y, Zhang S, Weng F, Sun H. Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine. Front Comput Neurosci 2023; 17:1120516. PMID: 36968294; PMCID: PMC10033520; DOI: 10.3389/fncom.2023.1120516.
Abstract
In this study, we investigate a new neural network method for solving Volterra and Fredholm integral equations, based on the sine-cosine basis function and the extreme learning machine (ELM) algorithm. Combining the ELM algorithm with sine-cosine basis functions, the improved model is designed for several classes of integral equations. The network consists of an input layer, a hidden layer, and an output layer, in which the hidden layer is eliminated by using the sine-cosine basis function. Because the ELM algorithm assigns the hidden-layer biases and input weights automatically, without iterative tuning, the model complexity is greatly reduced and the computation is accelerated. The problem of finding the network parameters is thereby converted into solving a set of linear equations. One advantage of this method is that it yields good numerical solutions not only for Volterra integral equations of the first and second kind but also acceptable solutions for Fredholm integral equations of the first and second kind and for Volterra-Fredholm integral equations. Another advantage is that the algorithm provides the approximate solution of several kinds of linear integral equations in closed form (i.e., continuous and differentiable), so the solution can be evaluated at any point. Several numerical experiments on various types of integral equations illustrate the reliability and efficiency of the proposed method; the results verify that it achieves very high accuracy and strong generalization ability.
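The mechanism this abstract describes, a fixed sine-cosine basis in place of a tuned hidden layer, with the expansion weights obtained from a single linear least-squares solve, can be sketched for a Fredholm equation of the second kind. This is a hypothetical reconstruction, not the authors' code; the kernel, the test problem (exact solution u(x) = x), and all names are my own.

```python
import numpy as np

def solve_fredholm(K, f, n_basis=10, n_colloc=40, n_quad=100):
    """Approximate u(x) = f(x) + integral_0^1 K(x,t) u(t) dt
    with u(x) ~ sum_k c_k phi_k(x) over a sine-cosine basis."""
    def basis(x):
        # Basis columns: 1, sin(k*pi*x), cos(k*pi*x) for k = 1..n_basis.
        cols = [np.ones_like(x)]
        for k in range(1, n_basis + 1):
            cols.append(np.sin(k * np.pi * x))
            cols.append(np.cos(k * np.pi * x))
        return np.column_stack(cols)

    x = np.linspace(0.0, 1.0, n_colloc)   # collocation points
    t = np.linspace(0.0, 1.0, n_quad)     # quadrature nodes
    w = np.full(n_quad, 1.0 / (n_quad - 1))  # trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5

    Phi_x = basis(x)
    Phi_t = basis(t)
    Kxt = K(x[:, None], t[None, :])       # kernel evaluated on the grid
    # Collocated equation: (Phi_x - Kxt @ diag(w) @ Phi_t) c = f(x).
    A = Phi_x - Kxt @ (w[:, None] * Phi_t)
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    # Closed-form approximant: evaluable (and differentiable) anywhere.
    return lambda s: basis(np.atleast_1d(s)) @ c

# Test problem with exact solution u(x) = x:
#   K(x,t) = x*t, f(x) = 2x/3, since x*int_0^1 t^2 dt = x/3.
u = solve_fredholm(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
```

The one linear solve replacing iterative weight training is the "extreme learning" part; the closed-form output is why the solution can be read off at any point, as the abstract notes.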
Affiliation(s)
- Yanfei Lu
- School of Electronics and Information Engineering, Taizhou University, Zhejiang, Taizhou, China
- Shiqing Zhang
- School of Electronics and Information Engineering, Taizhou University, Zhejiang, Taizhou, China
- Futian Weng
- Data Mining Research Center, Xiamen University, Fujian, Xiamen, China
- Hongli Sun
- School of Mathematics and Statistics, Central South University, Hunan, Changsha, China
- *Correspondence: Hongli Sun
3
Evolving deep convolutional neural network by hybrid sine-cosine and extreme learning machine for real-time COVID-19 diagnosis from X-ray images. Soft Comput 2023; 27:3307-3326. PMID: 33994846; PMCID: PMC8107782; DOI: 10.1007/s00500-021-05839-6.
Abstract
The COVID-19 pandemic has significantly affected the life and health of communities worldwide. Early detection of infected patients is effective in fighting COVID-19, and radiology (X-ray) images are perhaps the fastest way to diagnose patients. Deep Convolutional Neural Networks (CNNs) are therefore applicable tools for diagnosing COVID-19-positive cases. However, the complicated architecture of a deep CNN makes real-time training and testing challenging. This paper proposes replacing the last fully connected layer with an Extreme Learning Machine (ELM) to address this deficiency. Because stochastic tuning of the parameters in the ELM's supervised section makes the final model unreliable, the sine-cosine algorithm is utilized to tune the ELM's parameters and maintain network reliability. The designed network is benchmarked on the COVID-Xray-5k dataset, and the results are verified by a comparative study with a canonical deep CNN and with ELMs optimized by cuckoo search, genetic algorithm, and whale optimization algorithm. The proposed approach outperforms these benchmarks with a final accuracy of 98.83% on the COVID-Xray-5k dataset, a relative error reduction of 2.33% compared to the canonical deep CNN. Even more critically, the designed network's training time is only 0.9421 ms, and the overall detection test time for 3100 images is 2.721 s.
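The sine-cosine algorithm (SCA) used here to tune the ELM is a standard population-based metaheuristic. A minimal, self-contained sketch on a generic objective (the sphere function, standing in for the actual ELM tuning loss) looks roughly like this; all parameter values and names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sca(obj, dim, n_agents=20, n_iter=200, lb=-5.0, ub=5.0, a=2.0, seed=0):
    """Sine-cosine algorithm: agents move toward the best-so-far point
    along sine/cosine-modulated steps whose amplitude r1 decays to zero."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    fitness = np.array([obj(x) for x in X])
    best = X[np.argmin(fitness)].copy()
    for t in range(n_iter):
        r1 = a * (1.0 - t / n_iter)  # exploration -> exploitation
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.random(dim)
            # Half the moves use sine, half cosine, per coordinate.
            step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) \
                   * r1 * np.abs(r3 * best - X[i])
            X[i] = np.clip(X[i] + step, lb, ub)
        fitness = np.array([obj(x) for x in X])
        if fitness.min() < obj(best):
            best = X[np.argmin(fitness)].copy()
    return best

# Sphere function as a stand-in objective; minimum at the origin.
best = sca(lambda v: float(np.sum(v**2)), dim=3)
```

Because SCA only needs objective evaluations, it can tune the ELM's otherwise randomly assigned parameters without gradients, which is the reliability fix the abstract describes.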
5
Ma M, Yang J, Liu R. A novel structure automatic-determined Fourier extreme learning machine for generalized Black–Scholes partial differential equation. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2021.107904.
6
Schiassi E, Furfaro R, Leake C, De Florio M, Johnston H, Mortari D. Extreme theory of functional connections: A fast physics-informed neural network method for solving ordinary and partial differential equations. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.06.015.
7
Jacobi Neural Network Method for Solving Linear Differential-Algebraic Equations with Variable Coefficients. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10543-5.
8
Dwivedi KD, Rajeev. Numerical Solution of Fractional Order Advection Reaction Diffusion Equation with Fibonacci Neural Network. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10513-x.
9
Haweel MT, Zahran O, El-Samie FEA. Adaptive Polynomial Method for Solving Third-Order ODE With Application in Thin Film Flow. IEEE Access 2021; 9:67874-67889. DOI: 10.1109/access.2021.3072944.
10
Solution of Ruin Probability for Continuous Time Model Based on Block Trigonometric Exponential Neural Network. Symmetry (Basel) 2020. DOI: 10.3390/sym12060876.
Abstract
The ruin probability is used to determine the overall operating risk of an insurance company. Modeling risks through the characteristics of the historical data of an insurance business, such as premium income, dividends, and reinvestments, usually produces an integro-differential equation satisfied by the ruin probability. However, the distribution function of the claim inter-arrival times is complicated, which makes an analytical solution of the ruin probability difficult to find. Therefore, based on the principles of artificial intelligence and machine learning, we propose a novel numerical method for solving the ruin probability equation. The initial asset u is used as the input vector and the ruin probability as the only output. A trigonometric exponential function is proposed as the projection mapping in the hidden layer, establishing a block trigonometric exponential neural network (BTENN) model with a symmetric structure. The trial solution is constructed to satisfy the initial value condition, and the connection weights are optimized by solving a linear system using the extreme learning machine (ELM) algorithm. Three numerical experiments were carried out in Python. The results show that the BTENN model can obtain the approximate solution of the ruin probability under the classical risk model and the Erlang(2) risk model at any time point. Compared with existing methods such as Legendre neural networks (LNN) and trigonometric neural networks (TNN), the proposed BTENN model has higher stability and lower deviation, which demonstrates that it is feasible and superior to use a BTENN model to estimate the ruin probability.
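The trial-solution-plus-ELM pattern this abstract describes, a trial form that satisfies the initial condition by construction, with the expansion weights found from one linear solve, can be sketched on a hypothetical toy problem (y' = -y, y(0) = 1, exact solution e^{-u}) standing in for the ruin-probability equation. The trigonometric basis below is a stand-in for the paper's block trigonometric exponential basis; all names are my own.

```python
import numpy as np

def solve_ode(n_basis=10, n_colloc=80):
    """ELM-style collocation for y' = -y, y(0) = 1 on [0, 1].
    Trial form y(u) = 1 + u*N(u) satisfies y(0) = 1 automatically,
    where N(u) = sum_k c_k phi_k(u) over a trigonometric basis."""
    u = np.linspace(0.0, 1.0, n_colloc)
    k = np.arange(1, n_basis + 1) * np.pi
    phi = np.hstack([np.ones((n_colloc, 1)),
                     np.sin(np.outer(u, k)), np.cos(np.outer(u, k))])
    dphi = np.hstack([np.zeros((n_colloc, 1)),
                      k * np.cos(np.outer(u, k)), -k * np.sin(np.outer(u, k))])
    # Residual of y' + y = 0 with y = 1 + u*N is linear in c:
    #   (phi + u*dphi + u*phi) c = -1   at each collocation point.
    A = phi + u[:, None] * dphi + u[:, None] * phi
    c, *_ = np.linalg.lstsq(A, -np.ones(n_colloc), rcond=None)

    def y(s):
        s = np.atleast_1d(s).astype(float)
        P = np.hstack([np.ones((s.size, 1)),
                       np.sin(np.outer(s, k)), np.cos(np.outer(s, k))])
        return 1.0 + s * (P @ c)

    return y

y = solve_ode()
```

The initial condition is exact by construction, and the differential equation's residual is minimized in one least-squares solve rather than by iterative training, which is the core of the BTENN/ELM approach.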
11
Zhao X, Zhang Z, Bi X, Sun Y. A new point-of-interest group recommendation method in location-based social networks. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-04979-4.