A Forward and Backward Stagewise Algorithm for Nonconvex Loss Functions with Adaptive Lasso.
ABSTRACT: Penalization is a popular tool for multi- and high-dimensional data. Most existing computational algorithms have been developed for convex loss functions. Nonconvex loss functions can sometimes produce more robust results and have important applications. Motivated by the BLasso algorithm, this study develops the Forward and Backward Stagewise (Fabs) algorithm for nonconvex loss functions with the adaptive Lasso (aLasso) penalty. It is shown that each point along the Fabs paths is an ε-approximate solution to the aLasso problem and that the Fabs paths converge to the stationary points of the aLasso problem as ε goes to zero, provided the loss function has second-order derivatives bounded from above. This study exemplifies Fabs with an application to penalized smooth partial rank (SPR) estimation, for which effective algorithms are still lacking. Extensive numerical studies are conducted to demonstrate the benefit of penalized SPR estimation using Fabs, especially under high-dimensional settings. An application to the smoothed 0-1 loss in binary classification is included to demonstrate the algorithm's capability to work with other differentiable nonconvex loss functions.
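The abstract only outlines the idea; below is a minimal, hypothetical Python sketch of a BLasso-style forward/backward stagewise loop with an adaptive Lasso penalty, not the paper's exact Fabs procedure. The function names and parameters (`loss`, `grad`, `adaptive_weights`, `eps`, `xi`, `lam_min`) are assumptions introduced here for illustration only.

```python
import numpy as np

def fabs_sketch(loss, grad, p, adaptive_weights,
                eps=0.05, xi=1e-10, lam_min=1e-3, max_steps=2000):
    """Schematic forward/backward stagewise path for
    loss(beta) + lam * sum_j w_j * |beta_j|  (adaptive Lasso penalty).
    This is an illustrative sketch, not the published Fabs implementation."""
    beta = np.zeros(p)
    w = np.asarray(adaptive_weights, dtype=float)  # adaptive weights w_j > 0
    # Initial forward step fixes the starting lam on the path.
    g = grad(beta)
    j = np.argmax(np.abs(g) / w)
    s = -np.sign(g[j])
    e_j = np.eye(p)[j]
    lam = max((loss(beta) - loss(beta + s * eps * e_j)) / (eps * w[j]), lam_min)
    path = []
    for _ in range(max_steps):
        obj = lambda b: loss(b) + lam * np.sum(w * np.abs(b))
        # Backward step: shrink an active coordinate toward zero by eps
        # if that lowers the penalized objective by more than xi.
        best_back, back_gain = None, xi
        for k in np.nonzero(beta)[0]:
            cand = beta.copy()
            cand[k] -= np.sign(beta[k]) * min(eps, abs(beta[k]))
            gain = obj(beta) - obj(cand)
            if gain > back_gain:
                best_back, back_gain = cand, gain
        if best_back is not None:
            beta = best_back
        else:
            # Forward step: move the coordinate with the steepest weighted descent.
            g = grad(beta)
            j = np.argmax(np.abs(g) / w)
            s = -np.sign(g[j])
            cand = beta.copy()
            cand[j] += s * eps
            # Relax lam when the forward step no longer pays for its penalty cost.
            drop = loss(beta) - loss(cand)
            if drop < lam * eps * w[j]:
                lam = max(drop / (eps * w[j]), lam_min)
            beta = cand
        path.append((lam, beta.copy()))
        if lam <= lam_min:
            break
    return path
```

In this sketch the loss only needs to be differentiable, so a nonconvex choice such as a smoothed 0-1 loss or the SPR objective could, in principle, be plugged in through `loss` and `grad`; the step size `eps` plays the role of the ε governing the approximation quality of each point on the path.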
SUBMITTER: Shi X
PROVIDER: S-EPMC6181148 | biostudies-literature | 2018 Aug
REPOSITORIES: biostudies-literature