In this series of lectures, I will report on recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). I will give an overview of the convergence analysis of the FNM, covering error estimates (with or without numerical quadrature) as well as training algorithms for solving the relevant optimization problems. I will present theoretical results that explain the success as well as the challenges of PINN and the FNM when trained by gradient-based methods such as SGD and Adam. I will then present new classes of training algorithms that provably achieve, and numerically exhibit, the asymptotic convergence rate of the underlying discretization (which gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report competitive numerical results of CNN and MgNet using an activation function with compact support for image classification.
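As context for the gradient-based training mentioned above, the following is a minimal, hypothetical sketch (not the author's code) of a PINN for the 1D Poisson problem -u''(x) = π² sin(πx) on (0,1) with homogeneous Dirichlet boundary conditions, trained with Adam on a least-squares residual loss; the network width, learning rate, and collocation sampling are illustrative assumptions.

```python
# Hypothetical PINN sketch: -u''(x) = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0.
import torch

torch.manual_seed(0)

# Small fully connected network u_theta(x)
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual -u''(x) - f(x), with u'' computed by automatic differentiation."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    return -d2u - f

x_bdry = torch.tensor([[0.0], [1.0]])
for step in range(5000):
    x_int = torch.rand(128, 1)                      # interior collocation points
    loss = pde_residual(x_int).pow(2).mean() \
         + model(x_bdry).pow(2).mean()              # PDE residual + boundary penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In such a setup, Adam typically reduces the loss quickly at first but stalls well before the accuracy of the underlying discretization is reached, which is the behavior the lectures contrast with the new training algorithms.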