Estimator for learning a linear support vector regressor by coordinate descent in the dual.
Parameters:
    loss : str, 'epsilon_insensitive' or 'squared_epsilon_insensitive'
        Loss function to optimize in the dual.
    C : float
        Penalty parameter; smaller values specify stronger regularization.
    epsilon : float
        Width of the epsilon-insensitive tube; errors smaller than epsilon are not penalized.
    max_iter : int
        Maximum number of outer iterations.
    tol : float
        Tolerance of the stopping criterion.
    fit_intercept : bool
        Whether to fit an intercept term.
    warm_start : bool
        Whether to reuse the solution of a previous call to fit as initialization.
    permute : bool
        Whether to permute the order of the coordinates at each outer iteration.
    callback : callable
        Callback function called during fitting.
    n_calls : int
        Frequency with which the callback is called.
    random_state : RandomState or int
        Seed or random number generator used to permute coordinates.
    verbose : int
        Verbosity level.
Methods

    fit(X, y)
        Fit model according to X and y.
    get_params([deep])
        Get parameters for this estimator.
    n_nonzero([percentage])
        Return the number of non-zero coefficients.
    predict(X)
        Predict target values for X.
    score(X, y[, sample_weight])
        Return the coefficient of determination R^2 of the prediction.
    set_params(**params)
        Set the parameters of this estimator.
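As a sketch of typical usage, the snippet below fits a linear SVR and chains construction with fit (which returns self). It uses scikit-learn's `sklearn.svm.LinearSVR` as a stand-in with a comparable interface (loss, C, epsilon, max_iter, tol, fit_intercept); substitute the estimator documented here if its package is installed.

```python
# Illustrative only: sklearn.svm.LinearSVR is a stand-in with a comparable
# constructor signature, not necessarily the solver documented above.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
# Near-linear target with a little noise.
y = X @ np.array([1.0, 2.0, 0.0, 0.0, -1.0]) + 0.1 * rng.randn(100)

# fit(X, y) returns self, so construction and fitting can be chained.
reg = LinearSVR(loss="epsilon_insensitive", C=1.0, epsilon=0.1,
                max_iter=1000, tol=1e-4, fit_intercept=True).fit(X, y)

print(reg.predict(X[:3]).shape)  # (3,)
print(reg.score(X, y) > 0.9)     # True on this easy synthetic problem
```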
Fit model according to X and y.
Parameters:
    X : array-like, shape = [n_samples, n_features]
        Training vectors.
    y : array-like, shape = [n_samples]
        Target values.

Returns:
    self : regressor
Get parameters for this estimator.
Parameters:
    deep : boolean, optional
        If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
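A brief illustration of get_params, which behaves the same way for any scikit-learn-compatible estimator (`sklearn.svm.LinearSVR` is used here as a stand-in):

```python
# get_params returns the constructor parameters as a plain dict.
from sklearn.svm import LinearSVR

est = LinearSVR(C=0.5, epsilon=0.2)
params = est.get_params(deep=True)
print(params["C"], params["epsilon"])  # 0.5 0.2
```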
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
Parameters:
    X : array-like, shape = (n_samples, n_features)
        Test samples.
    y : array-like, shape = (n_samples) or (n_samples, n_outputs)
        True values for X.
    sample_weight : array-like, shape = [n_samples], optional
        Sample weights.

Returns:
    score : float
        R^2 of self.predict(X) with respect to y.
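The definition above can be checked numerically; a minimal sketch computing u and v by hand and comparing against scikit-learn's r2_score:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
manual = 1 - u / v

print(round(manual, 4))                    # 0.9486
print(np.isclose(manual, r2_score(y_true, y_pred)))  # True
```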
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:
    self :
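A sketch of the <component>__<parameter> convention, using a scikit-learn Pipeline; the step name "svr" is chosen here purely for illustration:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

pipe = Pipeline([("scale", StandardScaler()), ("svr", LinearSVR())])

# Simple estimator: plain parameter name.
pipe.named_steps["svr"].set_params(C=2.0)

# Nested object: <component>__<parameter> reaches into the named step.
pipe.set_params(svr__epsilon=0.3)

print(pipe.named_steps["svr"].C, pipe.named_steps["svr"].epsilon)  # 2.0 0.3
```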