A Study on the Relationship between the Rank of Input Data and the Performance of Random Weight Neural Network
Department
Computer Science
Document Type
Article
Publication Title
Neural Computing and Applications
Volume
32
Issue
16
DOI
10.1007/s00521-020-04719-8
First Page
12685
Last Page
12696
Publication Date
1-1-2020
Abstract
Random feature mapping (RFM) is the core operation in the random weight neural network (RWNN), and its quality has a significant impact on the performance of an RWNN model. However, there has been no good way to evaluate the quality of RFM. In this paper, we introduce a new concept called the dispersion degree of matrix information distribution (DDMID), which can be used to measure the quality of RFM. We used DDMID in our experiments to explain the relationship between the rank of input data and the performance of the RWNN model and obtained several interesting results. We found that: (1) when the rank of the input data reaches a certain threshold, the model’s performance increases as the rank increases; (2) the impact of the rank on model performance is insensitive to the type of activation function and the number of hidden nodes; (3) if the DDMID of an RFM matrix is very small, the first k singular values in the singular value matrix of the RFM matrix contain too much information, which usually has a negative impact on the final closed-form solution of the RWNN model. In addition, we used DDMID to verify the improvement that the intrinsic plasticity (IP) algorithm brings to RFM. The experimental results showed that DDMID allows researchers to evaluate the mapping quality of data features before model training, so the effect of data preprocessing or network initialization can be predicted without training the model. We believe that our findings can provide useful guidance when constructing and analyzing an RWNN model.
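The abstract does not reproduce the formal definition of DDMID, so the following Python sketch is only illustrative: it builds an RFM matrix with random weights and, under the assumption that DDMID is derived from the normalized singular-value spectrum of that matrix, reports a spectral-entropy dispersion measure together with the share of the spectrum carried by the first k singular values. The function names and the entropy-based formula are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def rfm_matrix(X, n_hidden=100, activation=np.tanh, seed=0):
    """Random feature mapping: H = g(X W + b) with randomly drawn W and b,
    as used in RWNN-style models before solving for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    return activation(X @ W + b)

def dispersion_degree(H, k=10):
    """Assumed proxy for DDMID: how evenly the singular-value spectrum of H
    spreads its information. Returns the spectral entropy of the normalized
    singular values and the fraction carried by the first k of them; a small
    entropy (large top-k share) would indicate a poorly dispersed mapping in
    the sense discussed in the abstract."""
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()                       # normalized singular-value spectrum
    entropy = -np.sum(p * np.log(p + 1e-12))
    top_k_share = p[:k].sum()
    return entropy, top_k_share

# Usage: assess the mapping quality before training the closed-form solution.
X = np.random.default_rng(1).standard_normal((500, 30))
H = rfm_matrix(X, n_hidden=200)
ent, share = dispersion_degree(H, k=10)
print(f"spectral entropy: {ent:.3f}, top-10 singular-value share: {share:.3f}")
```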
Recommended Citation
Gao, J., Cao, W., Hu, L., Wang, X., & Ming, Z. (2020). A Study on the Relationship between the Rank of Input Data and the Performance of Random Weight Neural Network. Neural Computing and Applications, 32(16), 12685–12696. DOI: 10.1007/s00521-020-04719-8
https://scholarlycommons.pacific.edu/soecs-facarticles/137