Some tricks in parameter selection for extreme learning machine
IOP Conference Series: Materials Science and Engineering
August 30-September 2, 2017
Extreme learning machine (ELM) is a widely used neural network with random weights (NNRW) that has contributed to many fields. However, the relationship between its parameters and its performance has not been fully investigated: the number of hidden-layer nodes, the randomization range of the weights between the input layer and the hidden layer, the randomization range of the thresholds of the hidden nodes, and the type of activation function. In this paper, eight benchmark functions are used to study this relationship. We report several interesting findings: more hidden-layer nodes do not guarantee the best performance of ELM; the empirical randomization range of the hidden weights (i.e., [-1, 1]) and the empirical randomization range of the hidden-node thresholds (i.e., [0, 1]) may not lead to the optimal performance of ELM models; and ELM with the sigmoid activation function consistently achieves better performance than ELM with the tribas activation function on some regression problems. We hope these findings provide useful guidance for researchers selecting the right parameters for ELM.
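The parameters examined above can be made concrete with a minimal ELM sketch. This is an illustrative implementation assumed for exposition, not the authors' code: input weights are drawn uniformly from [-1, 1], hidden-node thresholds (biases) from [0, 1], the activation is sigmoid, and the output weights are solved in closed form by least squares via the pseudo-inverse.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_train(X, y, n_hidden=40, w_range=(-1.0, 1.0), b_range=(0.0, 1.0),
              activation=sigmoid, seed=0):
    """Train a single-hidden-layer ELM: the input weights W and hidden
    thresholds b are random and fixed; only the output weights beta are
    learned, by least squares on the hidden-layer output matrix H."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(*w_range, size=(X.shape[1], n_hidden))  # input-to-hidden weights
    b = rng.uniform(*b_range, size=n_hidden)                # hidden-node thresholds
    H = activation(X @ W + b)                               # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                            # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta, activation=sigmoid):
    return activation(X @ W + b) @ beta

# Toy regression problem: learn y = sin(x) on [-3, 3].
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0])
W, b, beta = elm_train(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Varying `n_hidden`, `w_range`, `b_range`, or `activation` in this sketch is exactly the kind of parameter study the paper performs: none of the empirical defaults is guaranteed to be optimal for a given problem.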
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Paper presented at IOP Conference Series: Materials Science and Engineering in Hawaii.