Estimator for learning linear classifiers by SAGA.

Solves the following objective:

    minimize_w  1 / n_samples * sum_i loss(w^T x_i, y_i)
                + alpha * 0.5 * ||w||^2_2 + beta * penalty(w)
Parameters:
    eta : float or string, defaults to 'auto'
    alpha : float
    beta : float
    loss : string
    penalty : string or Penalty object
    gamma : float
    max_iter : int
    tol : float
    verbose : int
    callback : callable or None
    random_state : int or RandomState
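The objective above can be made concrete with a minimal pure-Python SAGA sketch for logistic loss with L2 regularization (i.e. beta = 0, no extra penalty term). The function name, data layout, and default hyperparameters are illustrative, not part of this estimator's API:

```python
import math
import random

def saga_logistic(X, y, alpha=0.01, eta=0.1, n_epochs=50, seed=0):
    """SAGA sketch for L2-regularized logistic regression, labels in {-1, +1}.

    Minimizes 1/n * sum_i log(1 + exp(-y_i w^T x_i)) + alpha * 0.5 * ||w||^2_2,
    matching the documented objective with beta = 0.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d
    # SAGA keeps a table of the last gradient seen for each sample,
    # plus the running average of that table.
    grad_mem = [[0.0] * d for _ in range(n)]
    avg = [0.0] * d
    for _ in range(n_epochs * n):
        i = rng.randrange(n)
        # Gradient of the i-th logistic loss at the current w.
        z = sum(wk * xk for wk, xk in zip(w, X[i]))
        coef = -y[i] / (1.0 + math.exp(y[i] * z))
        g = [coef * xk for xk in X[i]]
        for k in range(d):
            # SAGA step: new gradient minus stored gradient plus table
            # average, with the L2 term (alpha * w) added directly.
            step = g[k] - grad_mem[i][k] + avg[k] + alpha * w[k]
            w[k] -= eta * step
            # Refresh the running average, then overwrite the memory slot.
            avg[k] += (g[k] - grad_mem[i][k]) / n
            grad_mem[i][k] = g[k]
    return w
```

Compared with plain SGD, the stored-gradient correction keeps the update an unbiased gradient estimate while shrinking its variance, which is what lets SAGA use a constant step size.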
Methods

    decision_function(X)
    fit(X, y)
    get_params([deep])              Get parameters for this estimator.
    n_nonzero([percentage])
    predict(X)
    predict_proba(X)
    score(X, y[, sample_weight])    Returns the mean accuracy on the given test data and labels.
    set_params(**params)            Set the parameters of this estimator.
Get parameters for this estimator.

Parameters:
    deep : boolean, optional

Returns:
    params : mapping of string to any
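To illustrate the shape of the returned mapping, here is a toy estimator sketch; the class and attribute names are hypothetical, not part of this API. In the scikit-learn convention, `deep=True` also includes the parameters of nested estimators under `<component>__<parameter>` keys:

```python
class Estimator:
    """Toy estimator showing what get_params returns (illustrative only)."""

    def __init__(self, alpha=1.0, clf=None):
        self.alpha = alpha
        self.clf = clf  # an optional nested estimator

    def get_params(self, deep=True):
        params = {"alpha": self.alpha, "clf": self.clf}
        if deep and self.clf is not None:
            # Nested parameters are exposed as '<component>__<parameter>'.
            for sub_key, value in self.clf.get_params().items():
                params["clf__" + sub_key] = value
        return params

inner = Estimator(alpha=0.5)
outer = Estimator(alpha=1.0, clf=inner)
params = outer.get_params()
# params includes both "alpha" and the nested "clf__alpha" key
```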
Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set be correctly predicted for each sample.

Parameters:
    X : array-like, shape = (n_samples, n_features)
    y : array-like, shape = (n_samples,) or (n_samples, n_outputs)
    sample_weight : array-like, shape = (n_samples,), optional

Returns:
    score : float
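A small sketch of the metric described above, to make "subset accuracy" concrete (the helper name is illustrative; the real method computes this from `self.predict(X)`):

```python
def subset_accuracy(y_true, y_pred):
    """Mean accuracy; a multi-label row counts as correct only if the
    entire label set matches exactly (subset accuracy)."""
    matches = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return matches / len(y_true)

# Single-label case: 3 of 4 predictions match.
single = subset_accuracy([0, 1, 1, 0], [0, 1, 0, 0])  # -> 0.75

# Multi-label case: the second row is wrong in one label, so the
# whole row counts as incorrect.
multi = subset_accuracy([(1, 0), (0, 1)], [(1, 0), (1, 1)])  # -> 0.5
```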
Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.

Returns:
    self
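The `<component>__<parameter>` routing can be sketched with a toy estimator; the class and attribute names are hypothetical, but the double-underscore convention is the one described above:

```python
class Estimator:
    """Toy estimator illustrating nested parameter setting (names are
    illustrative, not part of the documented API)."""

    def __init__(self, alpha=1.0, clf=None):
        self.alpha = alpha
        self.clf = clf  # an optional nested estimator

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # 'clf__alpha' routes 'alpha' to the nested 'clf' object.
                component, _, sub_key = key.partition("__")
                getattr(self, component).set_params(**{sub_key: value})
            else:
                setattr(self, key, value)
        return self  # returned so calls can be chained

inner = Estimator(alpha=0.5)
outer = Estimator(alpha=1.0, clf=inner)
outer.set_params(alpha=2.0, clf__alpha=0.25)
# outer.alpha is now 2.0 and the nested inner.alpha is 0.25
```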