Dynamic neural networks for mimetic scientific machine learning
For dynamic simulations, the neural networks used are in most cases static in nature: they employ conventional sigmoid or ReLU activation functions in the neurons and rely on recurrence to capture time dependence. This resembles ordinary numerical methods, which also advance the solution by time stepping. Such "dynamic" neural networks are therefore merely an alternative to scientific computing methods, and in no way superior or faster; in many cases, numerical methods are considerably more efficient. In this presentation we discuss an entirely different and new methodology for creating truly dynamic neural networks. The neuron activation functions are themselves dynamic in nature, so no use is made of recurrence. It can be proved that there is a one-to-one relation between state-space models and these dynamic neural networks. Moreover, the topology of the neural network can be predicted: the number of hidden layers is related to the multiplicity of the eigenvalues of the state-space matrix, and the number of neurons is related to the number of complex eigenvalues. We will also provide examples to demonstrate the procedure.
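To make the correspondence between state-space models and dynamic neurons concrete, the sketch below is a minimal illustration, not the authors' construction (which the abstract does not detail). It assumes a diagonalizable state-space matrix A with distinct eigenvalues, so that a single layer of first-order dynamic neurons dx_i/dt = λ_i x_i, each with impulse response exp(λ_i t), reproduces the impulse response y(t) = C exp(At) B of the state-space model. All variable names and the weighting scheme are hypothetical.

```python
# Minimal sketch (assumed construction, not the authors' method):
# "dynamic neurons" whose activations are impulse responses exp(lam_i * t)
# of first-order ODEs, weighted to match a linear state-space model.

import numpy as np
from scipy.linalg import expm

# State-space model: dx/dt = A x + B u,  y = C x
A = np.array([[-1.0, 0.0],
              [0.0, -3.0]])        # distinct real eigenvalues: -1 and -3
B = np.array([[1.0], [1.0]])
C = np.array([[2.0, -1.0]])

lam, V = np.linalg.eig(A)          # eigendecomposition A = V diag(lam) V^-1
Vinv = np.linalg.inv(V)

# One dynamic neuron per eigenvalue; input weights Vinv @ B feed the
# neurons, output weights C @ V combine their dynamic activations.
w_in = (Vinv @ B).ravel()
w_out = (C @ V).ravel()

t = np.linspace(0.0, 5.0, 200)
y_network = np.real(sum(w_out[i] * w_in[i] * np.exp(lam[i] * t)
                        for i in range(len(lam))))

# Reference: exact state-space impulse response y(t) = C exp(A t) B
y_exact = np.array([(C @ expm(A * ti) @ B).item() for ti in t])

print("max deviation:", np.max(np.abs(y_network - y_exact)))  # ~ machine eps
```

On this reading, a repeated eigenvalue (a Jordan block) would require cascading such neurons, and a complex-conjugate pair would call for an oscillatory second-order neuron, which is consistent with the abstract's claims that eigenvalue multiplicity governs the number of hidden layers and complex eigenvalues govern the number of neurons; this interpretation is an assumption, not a restatement of the authors' proof.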