Title
On the Classification Consistency of High-Dimensional Sparse Neural Network
Abstract
Artificial neural networks (ANNs) offer an automatic way of capturing linear and nonlinear correlations, as well as spatial and other structural dependence, among features and output variables. This yields good performance in many application areas, such as classification and prediction from magnetic resonance imaging (MRI), spatial data, and computer vision tasks. Most commonly used ANNs assume that the training data is large relative to the dimension of the feature vector. However, in modern applications, such as MRI or computer vision, training sample sizes are often small and may even be smaller than the dimension of the feature vector. In this paper, we consider a single-layer ANN classification model that is suitable for small training samples. Besides developing the sparse architecture, we also study the theoretical properties of our machine. We show that, under mild conditions, the classification risk converges to the optimal Bayes classifier risk (universal consistency) under sparse group lasso regularization. Moreover, we propose a variation on the regularization terms. A few examples from popular research fields are also provided to illustrate the theory and methods.
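The sparse group lasso penalty referred to in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes the penalty is applied to the input-to-hidden weight matrix of a single-hidden-layer network, with each input feature's outgoing weights forming one group, and the tuning parameters lam1/lam2 are hypothetical.

```python
import numpy as np

def sparse_group_lasso_penalty(W, lam1, lam2):
    """Sparse group lasso penalty on an input-to-hidden weight matrix W
    (shape: p features x h hidden units).

    Each row of W (the outgoing weights of one input feature) is treated
    as a group: the group l2 term can zero out entire irrelevant features,
    while the elementwise l1 term induces sparsity within active groups.
    lam1 and lam2 are illustrative tuning parameters.
    """
    group_term = np.sum(np.linalg.norm(W, axis=1))  # sum of per-row l2 norms
    l1_term = np.sum(np.abs(W))                     # elementwise l1 norm
    return lam1 * group_term + lam2 * l1_term

# Example: 5 features, 3 hidden units; only feature 0 has nonzero weights,
# so the group term reduces to the l2 norm of that single row.
W = np.zeros((5, 3))
W[0] = [1.0, -2.0, 2.0]
penalty = sparse_group_lasso_penalty(W, lam1=0.1, lam2=0.01)
```

In a training loop this penalty would be added to the classification loss; feature-level sparsity is what makes the model usable when the sample size is below the feature dimension.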
Year: 2019
DOI: 10.1109/DSAA.2019.00032
Venue: 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA)
Keywords: neural network, high-dimensional, consistency, regularization
Field: Spatial analysis, Feature vector, Nonlinear system, Pattern recognition, Computer science, Regularization (mathematics), Artificial intelligence, Structural dependence, Artificial neural network, Bayes classifier, Sample size determination
DocType: Conference
ISSN: 2472-1573
ISBN: 978-1-7281-4494-8
Citations: 0
PageRank: 0.34
References: 4
Authors: 2

Name        Order  Citations  PageRank
Kaixu Yang  1      0          0.34
Taps Maiti  2      0          0.34