XGBClassifier(
    n_estimators: int = 1,
    *,
    booster: typing.Literal["gbtree", "dart"] = "gbtree",
    dart_normalized_type: typing.Literal["tree", "forest"] = "tree",
    tree_method: typing.Literal["auto", "exact", "approx", "hist"] = "auto",
    min_tree_child_weight: int = 1,
    colsample_bytree: float = 1.0,
    colsample_bylevel: float = 1.0,
    colsample_bynode: float = 1.0,
    gamma: float = 0.0,
    max_depth: int = 6,
    subsample: float = 1.0,
    reg_alpha: float = 0.0,
    reg_lambda: float = 1.0,
    learning_rate: float = 0.3,
    max_iterations: int = 20,
    tol: float = 0.01,
    enable_global_explain: bool = False,
    xgboost_version: typing.Literal["0.9", "1.1"] = "0.9"
)

XGBoost classifier model.
Parameters
| Name | Type | Description |
|---|---|---|
| n_estimators | Optional[int] | Number of parallel trees constructed during each iteration. Defaults to 1. |
| booster | Optional[str] | Which booster to use: "gbtree" or "dart". Defaults to "gbtree". |
| dart_normalized_type | Optional[str] | Type of normalization algorithm for the DART booster. Possible values: "tree", "forest". Defaults to "tree". |
| tree_method | Optional[str] | Which tree method to use. Possible values: "auto", "exact", "approx", "hist". Defaults to "auto", in which case XGBoost chooses the most conservative option available. |
| min_tree_child_weight | Optional[int] | Minimum sum of instance weight (hessian) needed in a child. Defaults to 1. |
| colsample_bytree | Optional[float] | Subsample ratio of columns when constructing each tree. Defaults to 1.0. |
| colsample_bylevel | Optional[float] | Subsample ratio of columns for each level. Defaults to 1.0. |
| colsample_bynode | Optional[float] | Subsample ratio of columns for each split. Defaults to 1.0. |
| gamma | Optional[float] | Minimum loss reduction required to make a further partition on a leaf node of the tree (XGBoost's min_split_loss). Defaults to 0.0. |
| max_depth | Optional[int] | Maximum tree depth for base learners. Defaults to 6. |
| subsample | Optional[float] | Subsample ratio of the training instances. Defaults to 1.0. |
| reg_alpha | Optional[float] | L1 regularization term on weights (XGBoost's alpha). Defaults to 0.0. |
| reg_lambda | Optional[float] | L2 regularization term on weights (XGBoost's lambda). Defaults to 1.0. |
| learning_rate | Optional[float] | Boosting learning rate (XGBoost's eta). Defaults to 0.3. |
| max_iterations | Optional[int] | Maximum number of rounds for boosting. Defaults to 20. |
| tol | Optional[float] | Minimum relative loss improvement necessary to continue training. Defaults to 0.01. |
| enable_global_explain | Optional[bool] | Whether to compute global explanations with explainable AI to evaluate global feature importance to the model. Defaults to False. |
| xgboost_version | Optional[str] | The XGBoost version for model training. Possible values: "0.9", "1.1". Defaults to "0.9". |
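A minimal construction sketch follows. It assumes a configured BigQuery session and uses the public `bigquery-public-data.ml_datasets.penguins` sample table; the feature columns and hyperparameter values are illustrative choices, not API requirements.

```python
# A minimal sketch: load sample data and construct a classifier.
# The table and column names below are illustrative assumptions.
import bigframes.pandas as bpd
from bigframes.ml.ensemble import XGBClassifier

df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins").dropna()
X = df[["culmen_length_mm", "culmen_depth_mm", "flipper_length_mm"]]
y = df[["species"]]

# Override a few defaults; all other parameters keep the defaults above.
model = XGBClassifier(booster="gbtree", max_depth=4, learning_rate=0.1)
```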
Methods
__repr__
__repr__()

Print the estimator's constructor with all non-default parameter values.
fit
fit(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
) -> bigframes.ml.base._T

Fit gradient boosting model.

Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
Parameters
| Name | Type | Description |
|---|---|---|
| X | bigframes.dataframe.DataFrame or bigframes.series.Series | Series or DataFrame of shape (n_samples, n_features). Training data. |
| y | bigframes.dataframe.DataFrame or bigframes.series.Series | DataFrame of shape (n_samples,) or (n_samples, n_targets). Target values. Will be cast to X's dtype if necessary. |
Returns
| Type | Description |
|---|---|
| XGBModel | Fitted estimator. |
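Continuing the constructor sketch above (which defines `model`, `X`, and `y`), a minimal training call might look like:

```python
# Train the model; this runs a BigQuery ML training job under the hood
# and returns the fitted estimator.
model.fit(X, y)
```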
get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]

Get parameters for this estimator.
Parameter
| Name | Type | Description |
|---|---|---|
| deep | bool, default True | If True, will return the parameters for this estimator and contained subobjects that are estimators. |
Returns
| Type | Description |
|---|---|
| Dictionary | A dictionary of parameter names mapped to their values. |
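For example, continuing the sketch above:

```python
# Retrieve the constructor parameters as a plain dictionary.
params = model.get_params()
print(params["learning_rate"])  # 0.1, as set in the constructor sketch
```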
predict
predict(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series]
) -> bigframes.dataframe.DataFrame

Predict using the XGB model.
Parameter
| Name | Type | Description |
|---|---|---|
| X | bigframes.dataframe.DataFrame or bigframes.series.Series | Series or DataFrame of shape (n_samples, n_features). Samples. |
Returns
| Type | Description |
|---|---|
| bigframes.dataframe.DataFrame | DataFrame of shape (n_samples, n_input_columns + n_prediction_columns). Returns predicted values. |
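A short prediction sketch, continuing from above (reusing the training features purely for illustration):

```python
# Predict labels; the output keeps the input columns and appends
# prediction columns.
predictions = model.predict(X)
predictions.head()
```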
register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T

Register the model to Vertex AI.

After registering, go to the Google Cloud console (https://console.cloud.google.com/vertex-ai/models) to manage the model registries. Refer to https://cloud.google.com/vertex-ai/docs/model-registry/introduction for more options.
Parameter
| Name | Type | Description |
|---|---|---|
| vertex_ai_model_id | Optional[str], default None | Optional string to use as the model ID in Vertex AI. If not set, defaults to 'bigframes_{bq_model_id}'. The Vertex AI model ID will be truncated to 63 characters due to its limitation. |
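A short sketch, continuing from above; the explicit model ID is a hypothetical name, and registration assumes appropriate Vertex AI permissions:

```python
# Register the fitted model; omitting vertex_ai_model_id would default
# to 'bigframes_{bq_model_id}'. The ID below is a made-up example.
model.register(vertex_ai_model_id="penguin_xgb_classifier")
```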
score
score(
    X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
    y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series],
)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
Parameters
| Name | Type | Description |
|---|---|---|
| X | bigframes.dataframe.DataFrame or bigframes.series.Series | DataFrame of shape (n_samples, n_features). Test samples. |
| y | bigframes.dataframe.DataFrame or bigframes.series.Series | DataFrame of shape (n_samples,) or (n_samples, n_outputs). True labels for X. |
Returns
| Type | Description |
|---|---|
| bigframes.dataframe.DataFrame | A DataFrame of the evaluation result. |
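Continuing the sketch above (reusing the training set only for brevity; in practice, evaluate on held-out data):

```python
# Evaluate the model; the result is a DataFrame of evaluation metrics
# rather than a bare scalar.
evaluation = model.score(X, y)
evaluation.head()
```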
to_gbq
to_gbq(
    model_name: str, replace: bool = False
) -> bigframes.ml.ensemble.XGBClassifier

Save the model to BigQuery.
Parameters
| Name | Type | Description |
|---|---|---|
| model_name | str | The name of the model. |
| replace | bool, default False | Whether to replace the model if it already exists. Defaults to False. |
Returns
| Type | Description |
|---|---|
| XGBClassifier | Saved model. |
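Continuing the sketch above; the dataset and model names are illustrative placeholders:

```python
# Persist the trained model to BigQuery so it can be reloaded later.
saved_model = model.to_gbq("my_dataset.penguin_xgb_classifier", replace=True)
```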