Some tricks in parameter selection for extreme learning machine
Document Type
Conference Presentation
Department
Computer Science
Conference Title
IOP Conference Series: Materials Science and Engineering
Location
Hawaii
Conference Dates
August 30-September 2, 2017
Date of Presentation
8-30-2017
Abstract
Extreme learning machine (ELM) is a widely used neural network with random weights (NNRW) that has made significant contributions in many fields. However, the relationship between ELM's parameters and its performance has not been fully investigated, namely the impact of the number of hidden-layer nodes, the randomization range of the weights between the input layer and the hidden layer, the randomization range of the hidden-node thresholds, and the type of activation function. In this paper, eight benchmark functions are used to study this relationship. We report several interesting findings: more hidden-layer nodes do not guarantee the best ELM performance; the empirical randomization range of the hidden weights (i.e., [-1, 1]) and the empirical randomization range of the hidden-node thresholds (i.e., [0, 1]) may not lead to optimal ELM models; and on some regression problems, ELM with the sigmoid activation function consistently outperforms ELM with the tribas activation function. We hope these findings provide useful guidance for researchers in selecting the right parameters for ELM.
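As a minimal sketch of the parameter choices the abstract discusses (not the paper's own code; the function names, seed, and the sinc benchmark below are illustrative assumptions), an ELM regressor fixes randomly drawn hidden weights and thresholds and solves only for the output weights via the Moore-Penrose pseudoinverse:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def tribas(z):
        # Triangular basis function: max(1 - |z|, 0)
        return np.maximum(1.0 - np.abs(z), 0.0)

    def elm_fit(X, y, n_hidden=100, w_range=(-1.0, 1.0), b_range=(0.0, 1.0),
                activation=sigmoid, seed=None):
        # Hidden weights W and thresholds b are drawn uniformly from the
        # given ranges and never updated; these ranges are the empirical
        # defaults ([-1, 1] and [0, 1]) that the paper questions.
        rng = np.random.default_rng(seed)
        W = rng.uniform(*w_range, size=(X.shape[1], n_hidden))
        b = rng.uniform(*b_range, size=n_hidden)
        H = activation(X @ W + b)           # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y        # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta, activation=sigmoid):
        return activation(X @ W + b) @ beta

    # Hypothetical usage on a noisy 1-D regression benchmark:
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sinc(X).ravel() + 0.05 * np.random.default_rng(0).normal(size=200)
    W, b, beta = elm_fit(X, y, n_hidden=50, seed=0)
    y_hat = elm_predict(X, W, b, beta)

Swapping activation=tribas for activation=sigmoid, or varying n_hidden, w_range, and b_range, reproduces the kind of parameter sweep the paper performs.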
Publisher
IOP Publishing
Volume
261
Issue
1
First Page
012002
DOI
10.1088/1757-899X/261/1/012002
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Recommended Citation
Cao, W., Gao, J., Ming, Z., & Cai, S. (2017). Some tricks in parameter selection for extreme learning machine. Paper presented at IOP Conference Series: Materials Science and Engineering, Hawaii.
https://scholarlycommons.pacific.edu/soecs-facpres/92