l1-norm penalized orthogonal forward regression

Hong, X., Chen, S., Guo, Y. and Gao, J. (2017) l1-norm penalized orthogonal forward regression. International Journal of Systems Science, 48 (10). pp. 2195-2201. ISSN 0020-7721

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item, use the DOI: 10.1080/00207721.2017.1311383

Abstract/Summary

An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of the leave-one-out mean square error (LOOMSE). A new l1-norm penalized cost function is defined in the constructed orthogonal space, and each orthogonal basis is associated with an individually tunable regularization parameter. Owing to this orthogonality, the LOOMSE can be computed analytically without actually splitting the data set, and a closed form for the optimal regularization parameter is derived by greedily minimizing the LOOMSE at each incremental step. We also propose a simple formula for adaptively detecting regressors that can be moved to an inactive set, so that the computational cost of the algorithm is significantly reduced. Examples are included to demonstrate the effectiveness of this new l1-POFR approach.
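
To make the procedure concrete, the following is a minimal Python sketch, not the authors' exact l1-POFR algorithm: it greedily orthogonalises each candidate regressor against the already selected bases, applies a soft threshold (the closed-form solution of the scalar l1-penalised problem) to the coefficient in the orthogonal space, and scores candidates by a leave-one-out mean square error computed analytically from the residuals and the hat-matrix diagonal, with no data splitting. The fixed penalty alpha, the function names and the stopping rule are illustrative assumptions; in particular the paper's closed-form, individually optimised regularisation parameters are not reproduced here, and the analytic LOO formula used is exact only for unpenalised least-squares coefficients.

    # Sketch only: l1-penalised orthogonal forward regression with an
    # analytically computed LOO criterion; alpha is a fixed assumed penalty.
    import numpy as np

    def soft_threshold(z, alpha):
        # Closed-form minimiser of 0.5*(z - g)^2 + alpha*|g| (scalar l1 problem).
        return np.sign(z) * max(abs(z) - alpha, 0.0)

    def l1_pofr_sketch(P, y, max_terms=5, alpha=0.1):
        # Greedy forward selection of columns of P (N x M) to model y (N,).
        N, M = P.shape
        residual = y.astype(float).copy()
        W = []                    # selected orthogonal bases
        selected = []             # indices of selected regressors
        coeffs = []               # coefficients g_k in the orthogonal space
        leverages = np.zeros(N)   # running diagonal of the hat matrix

        for _ in range(max_terms):
            best = None
            for j in range(M):
                if j in selected:
                    continue
                # Orthogonalise candidate j against the already selected bases.
                w = P[:, j].astype(float).copy()
                for wk in W:
                    w -= (wk @ w) / (wk @ wk) * wk
                denom = w @ w
                if denom < 1e-12:          # (near) linearly dependent candidate
                    continue
                # l1-penalised coefficient in the orthogonal space.
                g = soft_threshold(w @ y, alpha) / denom
                # Analytic LOO residuals e(n) / (1 - h_nn): no data splitting.
                new_resid = residual - g * w
                h = leverages + w**2 / denom
                loomse = np.mean((new_resid / (1.0 - h))**2)
                if best is None or loomse < best[0]:
                    best = (loomse, j, w, g, new_resid, h)
            if best is None:
                break
            _, j, w, g, new_resid, h = best
            selected.append(j); W.append(w); coeffs.append(g)
            residual, leverages = new_resid, h
        return selected, W, np.array(coeffs)

    # Usage: recover a sparse model from a redundant candidate set.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((200, 20))
    y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.1 * rng.standard_normal(200)
    idx, W, g = l1_pofr_sketch(P, y, max_terms=4, alpha=0.05)
    print("selected regressors:", idx)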

Item Type: Article
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 72078
Publisher: Taylor & Francis
