TR2012-012

Greedy Sparsity-Constrained Optimization


Abstract:

Finding optimal sparse solutions to estimation problems, particularly in under-determined regimes, has recently gained much attention. Most of the existing literature studies linear models in which the squared error is the measure of discrepancy to be minimized. However, in many applications discrepancy is measured in more general forms, such as the log-likelihood. Regularization by the l1-norm has been shown to induce sparse solutions, but the sparsity level of these solutions can be merely suboptimal. In this paper we present a greedy algorithm, dubbed Gradient Support Pursuit (GraSP), for sparsity-constrained optimization. Quantifiable guarantees are provided for GraSP when the cost function satisfies the Stable Restricted Hessian (SRH) property.
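The GraSP iteration can be sketched roughly as follows. This is not the authors' reference implementation; the helper names (`grad`, `argmin_restricted`) are illustrative assumptions, and a squared-error cost is used only as a concrete example of the general cost functions the paper allows. Each iteration identifies the 2s largest-magnitude gradient coordinates, merges them with the current support, minimizes the cost restricted to the merged support, and prunes the result back to s entries.

```python
import numpy as np

def grasp(grad, argmin_restricted, n, s, iters=20):
    """Minimal sketch of a GraSP-style loop (hypothetical helper names).

    grad(x)             -- gradient of the cost at x (length-n vector)
    argmin_restricted(T)-- minimizer of the cost restricted to support T,
                           returned as coefficients on the indices in T
    """
    x = np.zeros(n)
    for _ in range(iters):
        z = grad(x)                              # gradient of the cost at x
        Z = np.argsort(np.abs(z))[-2 * s:]       # top-2s gradient coordinates
        T = np.union1d(Z, np.flatnonzero(x))     # merge with current support
        b = argmin_restricted(T)                 # restricted minimization
        keep = np.argsort(np.abs(b))[-s:]        # prune to s largest entries
        x = np.zeros(n)
        x[T[keep]] = b[keep]
    return x

# Illustrative use with a squared-error cost 0.5*||Ax - y||^2 (an assumption;
# the algorithm is stated for more general costs such as log-likelihoods).
rng = np.random.default_rng(0)
n, m, s = 64, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true

grad = lambda x: A.T @ (A @ x - y)               # gradient of the squared error
def argmin_restricted(T):
    # least squares over the columns indexed by T
    return np.linalg.lstsq(A[:, T], y, rcond=None)[0]

x_hat = grasp(grad, argmin_restricted, n, s)
```

With a well-conditioned random Gaussian matrix and m well above the sparsity level, the restricted least-squares step recovers the true coefficients once the correct support is identified.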



  • Related Publication

  •  Bahmani, S., Raj, B., Boufounos, P., "Greedy Sparsity-Constrained Optimization", Journal of Machine Learning Research (JMLR), Vol. 14, pp. 807-841, March 2013.
    @article{Bahmani2013mar,
      author  = {Bahmani, S. and Raj, B. and Boufounos, P.},
      title   = {Greedy Sparsity-Constrained Optimization},
      journal = {Journal of Machine Learning Research (JMLR)},
      year    = 2013,
      volume  = 14,
      pages   = {807--841},
      month   = mar,
      url     = {https://www.merl.com/publications/TR2013-015}
    }