Sparse algorithms are not stable: A no-free-lunch theorem
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011 — ieeexplore.ieee.org
We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
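As a rough numerical companion to the ℓ1-versus-ℓ2 contrast in the abstract, the sketch below uses scikit-learn's Lasso and Ridge as stand-ins for ℓ1- and ℓ2-regularized regression; the synthetic data and regularization strengths are illustrative assumptions, not taken from the paper. It compares the support size of each fit (a sparsity proxy) with how much the coefficient vector moves when a single training point is removed (a crude proxy for the single-sample sensitivity underlying algorithmic stability).

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic sparse regression problem (illustrative parameters, not from the paper).
rng = np.random.default_rng(0)
n, d, k = 100, 50, 5                      # samples, features, true nonzeros
beta = np.zeros(d)
beta[:k] = rng.normal(size=k)
X = rng.normal(size=(n, d))
y = X @ beta + 0.1 * rng.normal(size=n)

def fit_coefs(model, X, y):
    """Fit the model and return a copy of its coefficient vector."""
    return model.fit(X, y).coef_.copy()

# Fit on the full sample and on the sample with one point removed, then record
# (a) how many coefficients are (near-)zero and (b) how far the coefficients move.
for name, model in [("lasso (l1)", Lasso(alpha=0.1)),
                    ("ridge (l2)", Ridge(alpha=0.1))]:
    w_full = fit_coefs(model, X, y)
    w_loo = fit_coefs(model, X[1:], y[1:])          # drop the first sample
    drift = np.linalg.norm(w_full - w_loo, ord=1)   # sensitivity to one sample
    nnz = int(np.sum(np.abs(w_full) > 1e-8))        # support size
    print(f"{name}: nonzero coefs = {nnz}/{d}, "
          f"|Δw|_1 after removing one sample = {drift:.4f}")
```

On a single random draw the contrast may be mild; the paper's point is the worst-case one, that no amount of tuning makes a genuinely sparse procedure uniformly stable, whereas the ℓ2 penalty's stability comes at the price of a dense coefficient vector.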