A review on neural networks with random weights
Department: Computer Science
Document Type: Article
Publication Title: Neurocomputing
ISSN: 0925-2312
Volume: 275
Issue: 31
DOI: 10.1016/j.neucom.2017.08.040
First Page: 278
Last Page: 287
Publication Date: 1-31-2018
Abstract
In big data fields, with increasing computing capability, artificial neural networks have shown great strength in solving data classification and regression problems. Traditional training of neural networks generally depends on the error back-propagation method to iteratively tune all of the parameters. As the number of hidden layers increases, this kind of training suffers from problems such as slow convergence, high time consumption, and convergence to local minima. To avoid these problems, neural networks with random weights (NNRW) have been proposed, in which the weights between the hidden layer and the input layer are randomly selected and the weights between the output layer and the hidden layer are obtained analytically. Researchers have shown that NNRW has much lower training complexity than traditional training of feed-forward neural networks. This paper objectively reviews the advantages and disadvantages of the NNRW model, tries to reveal the essence of NNRW, gives our comments and remarks on NNRW, and provides some useful guidelines for users when choosing a mechanism to train a feed-forward neural network.
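The core idea the abstract describes, random fixed input-to-hidden weights combined with an analytic least-squares solve for the output weights, can be sketched in a few lines of NumPy. The sketch below is illustrative only: the sigmoid activation, the uniform sampling range, and the function names train_nnrw and predict_nnrw are assumptions for demonstration, not the paper's specification.

import numpy as np

def train_nnrw(X, y, n_hidden=100, seed=0):
    """Minimal NNRW sketch: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input-to-hidden weights and biases are randomly selected and never tuned.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer activations (sigmoid is one common, assumed choice).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights are obtained analytically via the Moore-Penrose pseudoinverse,
    # i.e., the least-squares solution of H @ beta = y.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def predict_nnrw(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage: regression on a noisy sine curve.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(200)
W, b, beta = train_nnrw(X, y, n_hidden=50)
y_hat = predict_nnrw(X, W, b, beta)
print("train MSE:", np.mean((y - y_hat) ** 2))

Because the only learned parameters come from a single linear solve, training cost is dominated by the pseudoinverse of H rather than by iterative back-propagation, which is the source of the lower training complexity the abstract mentions.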
Recommended Citation
Cao, W., Wang, X., Ming, Z., & Gao, J. (2018). A review on neural networks with random weights. Neurocomputing, 275(31), 278–287. DOI: 10.1016/j.neucom.2017.08.040
https://scholarlycommons.pacific.edu/soecs-facarticles/58