Title
Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives
Abstract
We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) ℓ0-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower than the relatively mature algorithms for ℓ1-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for ℓ0-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle p ≈ 50,000 features in a few minutes, and approximate algorithms that can address instances with p ≈ 10^6 in times comparable to the fast ℓ1-based algorithms. Our exact algorithm is based on the novel idea of integrality generation, which solves the original problem (with p binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of ℓ0-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially variable selection) compared to competing methods.
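To make the approximate algorithms concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the coordinate descent ingredient: cyclic coordinate descent with coordinate-wise hard thresholding for ℓ0-regularized logistic regression. The function name, the Lipschitz-based step size, and the iteration budget are illustrative assumptions; the local combinatorial search step mentioned in the abstract, which swaps small groups of coordinates in and out of the support, would be layered on top of such a routine.

    import numpy as np

    def cd_l0_logistic(X, y, lam, n_iters=100):
        """Minimize (1/n) * sum_i log(1 + exp(-y_i * x_i @ beta)) + lam * ||beta||_0.

        Labels y are assumed to take values in {-1, +1}.
        """
        n, p = X.shape
        beta = np.zeros(p)
        # Coordinate-wise Lipschitz bounds for the logistic loss (sigmoid' <= 1/4).
        L = (X ** 2).sum(axis=0) / (4.0 * n)
        r = X @ beta  # cached linear predictor, kept in sync with beta
        for _ in range(n_iters):
            for j in range(p):
                if L[j] == 0.0:  # all-zero feature: nothing to fit
                    continue
                # Partial derivative of the logistic loss at the current beta.
                grad_j = -(X[:, j] * y / (1.0 + np.exp(y * r))).mean()
                b = beta[j] - grad_j / L[j]  # unregularized majorizer step
                # Hard thresholding: keep the coordinate nonzero only if the
                # quadratic gain 0.5 * L_j * b^2 exceeds the l0 price lam.
                new = b if 0.5 * L[j] * b ** 2 > lam else 0.0
                if new != beta[j]:
                    r += (new - beta[j]) * X[:, j]  # update cached predictor
                    beta[j] = new
        return beta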
Year
2021
DOI
v22/19-1049.html
Venue
JOURNAL OF MACHINE LEARNING RESEARCH
Keywords
sparsity, sparse classification, ℓ0 regularization, mixed integer programming
DocType
Journal
Volume
22
Issue
1
ISSN
1532-4435
Citations
0
PageRank
0.34
References
0
Authors
3
Name             Order  Citations  PageRank
Dedieu Antoine   1      0          0.34
Hazimeh Hussein  2      0          0.34
Rahul Mazumder   3      308        18.33