ELM-type neural networks in scientific computing
Neural Networks (NNs) are a powerful tool in approximation theory because of the existence of Universal Approximation (UA) results. In recent decades, significant attention has been given to Extreme Learning Machines (ELMs), typically employed for the training of single-layer NNs, for which a UA result can also be proven. In a generic NN, the design of the optimal approximator can be recast as an optimization problem that turns out to be particularly demanding from the computational viewpoint. Under the ELM approach, however, the optimization task reduces to a (possibly rectangular) linear problem. This makes ELMs faster than typical deep neural networks, where iterative optimization methods may lead to prohibitively slow learning speeds. We point out that ELMs are variants of random projection networks.
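The reduction to a linear problem can be sketched as follows: the hidden-layer weights are drawn at random and frozen, so only the output weights are trained, by solving a (possibly rectangular) least-squares problem. This is a minimal illustrative sketch with NumPy; the network size, activation, and target function are chosen here for illustration and are not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit a single-hidden-layer ELM: random hidden weights, linear solve for output."""
    # Hidden-layer weights and biases are drawn once at random and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer feature matrix, shape (n_samples, n_hidden)
    # The only trained parameters solve the (possibly rectangular)
    # linear least-squares problem  H @ beta ~ y.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative use: approximate sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=50)
max_err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because the hidden layer is fixed, the whole training step is a single linear solve, which is the source of the speed advantage over iteratively trained deep networks mentioned above.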