Towards domain decomposition of large neural networks
Neural networks are attracting attention in many engineering fields, particularly in design optimization, where they are used to build surrogate models for high-dimensional regression problems. However, neural networks are global approximators and therefore struggle to capture local nonlinearities unless they are given a large number of trainable parameters, which slows down training. The idea is to mitigate these drawbacks by introducing a domain decomposition method. The domain decomposition splits the global neural network into multiple local neural networks that approximate smaller, simpler local subdomains with fewer parameters. This simplification comes at the cost of additional interface constraints between the local predictions, which are incorporated into the local loss functions via the method of Lagrange multipliers. The method is showcased on a 1D beam example split into two subdomains, demonstrating that it yields an improved approximation of continuity across the interface.
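As an illustration only (not the authors' implementation), the following minimal PyTorch sketch shows the general idea described above: two local networks (`net1`, `net2`) fit synthetic data on two subdomains of a 1D problem, while a continuity constraint at the shared interface point is enforced through a Lagrange multiplier `lam` updated by dual ascent. The network sizes, the `reference` function standing in for the beam response, and the step size `rho` are all illustrative assumptions.

```python
# Minimal sketch: two local MLPs on [0, 0.5] and [0.5, 1] trained against
# synthetic data, with a C0 interface constraint handled by a Lagrange
# multiplier updated via dual ascent. All specifics are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp(width=32):
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, 1))

# Hypothetical reference solution standing in for the 1D beam response.
def reference(x):
    return torch.sin(3.0 * torch.pi * x) * x**2

# Local training data for the two subdomains; x_if is the shared interface point.
x1 = torch.linspace(0.0, 0.5, 50).unsqueeze(1)
x2 = torch.linspace(0.5, 1.0, 50).unsqueeze(1)
x_if = torch.tensor([[0.5]])
y1, y2 = reference(x1), reference(x2)

net1, net2 = make_mlp(), make_mlp()
lam = torch.zeros(1)   # Lagrange multiplier for the interface constraint
rho = 1.0              # dual-ascent step size (assumed value)
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)
mse = nn.MSELoss()

for step in range(5000):
    opt.zero_grad()
    gap = net1(x_if) - net2(x_if)          # continuity residual at the interface
    # Each local loss carries its own data misfit plus the shared Lagrangian term.
    loss = mse(net1(x1), y1) + mse(net2(x2), y2) + (lam * gap).sum()
    loss.backward()
    opt.step()
    with torch.no_grad():                  # dual ascent on the multiplier
        lam += rho * (net1(x_if) - net2(x_if)).squeeze()

print(f"interface gap: {(net1(x_if) - net2(x_if)).item():.3e}")
```

In this toy setting the two subdomain losses are summed and minimized jointly for brevity; in a fully decomposed scheme each local network would be trained on its own loss, with the interface term coupling the local problems through the shared multiplier.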