autogluon.tabular.models

Note

This documentation is for advanced users, and is not comprehensive.

For a stable public API, refer to TabularPredictor.

Model Name Suffixes

Models trained by TabularPredictor can have suffixes in their names that have special meanings.

The suffixes are as follows:

“_Lx”: Indicates the stack level (x) the model is trained in, such as “_L1”, “_L2”, etc. A model with “_L1” suffix is a base model, meaning it does not depend on any other models. If a model lacks this suffix, then it is a base model and is at level 1 (“_L1”).

“/Tx”: Indicates that the model was trained during hyperparameter optimization (HPO). Tx is shorthand for HPO trial #x. An example would be “LightGBM/T8”.

“_BAG”: Indicates that the model is a bagged ensemble. A bagged ensemble contains multiple instances of the model (children) trained on different subsets of the data. During inference, each child model predicts on the data, and their predictions are averaged to produce the final result. This typically achieves a stronger result than any of the individual models alone, but significantly slows inference. Refer to “_FULL” for instructions on how to improve inference speed.

“_FULL”: Indicates the model has been refit via TabularPredictor’s refit_full method. This model will have no validation score because all of the data (train and validation) was used as training data. Usually, there will be another model with the same name minus the “_FULL” suffix. The refit model can often outperform the original because it is trained on more data; however, if the original was a bagged ensemble (“_BAG”), the refit model is usually slightly weaker, but with much faster inference speed.

“_DSTL”: Indicates the model was created through model distillation via a call to TabularPredictor’s distill method. Validation scores of distilled models should only be compared against other distilled models.

“_x”: Indicates that a model with the same name (minus this suffix) already existed, so this suffix was appended to avoid overwriting the pre-existing model. An example would be “LightGBM_2”.
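
To see these suffixes in practice, here is a minimal sketch that inspects trained model names via TabularPredictor.leaderboard (train_data is an assumed pandas DataFrame with a ‘class’ label column):

from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label='class').fit(train_data)
predictor.refit_full()  # adds "_FULL" variants of each trained model

# Model names such as "WeightedEnsemble_L2", "LightGBM_FULL", or
# "LightGBM_2" (on a name collision) appear in the 'model' column.
print(predictor.leaderboard(silent=True)['model'].tolist())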

Models

AbstractModel

Abstract model implementation from which all AutoGluon models inherit.

LGBModel

LightGBM model: https://lightgbm.readthedocs.io/en/latest/

CatBoostModel

CatBoost model: https://catboost.ai/

XGBoostModel

XGBoost model: https://xgboost.readthedocs.io/en/latest/

RFModel

Random Forest model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html

XTModel

Extra Trees model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html#sklearn.ensemble.ExtraTreesClassifier

KNNModel

KNearestNeighbors model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html

LinearModel

Linear model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

TabularNeuralNetModel

Class for neural network models that operate on tabular data.

NNFastAiTabularModel

Class for fastai v1 neural network models that operate on tabular data.

AbstractModel

class autogluon.tabular.models.AbstractModel(path: str, name: str, problem_type: str, eval_metric: Union[str, autogluon.core.metrics.Scorer] = None, hyperparameters=None, feature_metadata: autogluon.core.features.feature_metadata.FeatureMetadata = None, num_classes=None, stopping_metric=None, features=None, **kwargs)[source]

Abstract model implementation from which all AutoGluon models inherit.

Parameters
path : str

Directory location to store all outputs.

name : str

Name of the subdirectory inside path where the model will be saved. The final model directory will be path + name + os.path.sep

problem_type : str

Type of prediction problem, i.e. is this a binary/multiclass classification or regression problem (options: ‘binary’, ‘multiclass’, ‘regression’).

eval_metric : autogluon.core.metrics.Scorer or str, default = None

Metric by which predictions will be ultimately evaluated on test data. This only impacts model.score(), as eval_metric is not used during training.

If eval_metric = None, it is automatically chosen based on problem_type: it defaults to ‘accuracy’ for binary and multiclass classification and to ‘root_mean_squared_error’ for regression. Options for classification:

[‘accuracy’, ‘balanced_accuracy’, ‘f1’, ‘f1_macro’, ‘f1_micro’, ‘f1_weighted’, ‘roc_auc’, ‘roc_auc_ovo_macro’, ‘average_precision’, ‘precision’, ‘precision_macro’, ‘precision_micro’, ‘precision_weighted’, ‘recall’, ‘recall_macro’, ‘recall_micro’, ‘recall_weighted’, ‘log_loss’, ‘pac_score’]

Options for regression:

[‘root_mean_squared_error’, ‘mean_squared_error’, ‘mean_absolute_error’, ‘median_absolute_error’, ‘r2’]

For more information on these options, see sklearn.metrics: https://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics

You can also pass your own evaluation function here, as long as it follows the formatting of the functions defined in the autogluon.core.metrics folder (see the sketch after this parameter list).

hyperparameters : dict, default = None

Hyperparameters that will be used by the model (can be search spaces instead of fixed values). If None, model defaults are used. This is identical to passing an empty dictionary.

feature_metadata : autogluon.core.features.feature_metadata.FeatureMetadata, default = None

Contains feature type information that can be used to identify special features such as text ngrams and datetime as well as which features are numerical vs categorical. If None, feature_metadata is inferred during fit.
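
As a sketch of a custom eval_metric, a standard scikit-learn function can be wrapped with autogluon.core.metrics.make_scorer (the metric choice here is illustrative):

import sklearn.metrics
from autogluon.core.metrics import make_scorer

# Wrap scikit-learn's MAE as an AutoGluon Scorer.
# greater_is_better=False because lower error is better; AutoGluon
# internally flips the sign so higher reported scores are always better.
ag_mean_absolute_error = make_scorer(
    name='mean_absolute_error',
    score_func=sklearn.metrics.mean_absolute_error,
    optimum=0,
    greater_is_better=False,
)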

Attributes
path_suffix

Methods

can_infer()

Returns True if the model is capable of inference on new data.

compute_feature_importance(X, y[, features, …])

convert_to_refit_full_template()

After calling this function, the returned model should be able to be fit without X_val, y_val, using the iterations trained by the original model.

convert_to_template()

After calling this function, the returned model should be able to be fit as if it were new, as well as deep-copied.

delete_from_disk()

Deletes the model from disk.

fit(**kwargs)

Fit model to predict values in y based on X.

get_disk_size()

get_info()

Returns a dictionary of numerous fields describing the model.

get_memory_size()

get_model_feature_importance()

Custom feature importance values for a model (such as those calculated from training).

get_trained_params()

Returns the hyperparameters of the trained model.

is_fit()

Returns True if the model has been fit.

is_valid()

Returns True if the model is capable of inference on new data (if it is a normal model) or has produced out-of-fold predictions (if it is a bagged model). This indicates whether the model can be used as a base model to fit a stack ensemble model.

load(path[, reset_paths, verbose])

Loads the model from disk to memory.

predict(X, **kwargs)

Returns class predictions of X.

predict_proba(X[, normalize])

Returns class prediction probabilities of X.

preprocess(X[, preprocess_nonadaptive, …])

Preprocesses the input data into internal form ready for fitting or inference.

reduce_memory_size([remove_fit, …])

Removes non-essential objects from the model to reduce memory and disk footprint.

rename(name)

Renames the model and updates self.path to reflect the updated name.

reset_metrics()

save([path, verbose])

Saves the model to disk.

set_contexts(path_context)

create_contexts

hyperparameter_tune

load_info

save_info

score

score_with_y_pred_proba

can_infer() → bool[source]

Returns True if the model is capable of inference on new data.

convert_to_refit_full_template()[source]

After calling this function, the returned model should be able to be fit without X_val, y_val, using the iterations trained by the original model.

convert_to_template()[source]

After calling this function, the returned model should be able to be fit as if it were new, as well as deep-copied.

delete_from_disk()[source]

Deletes the model from disk.

WARNING: This will DELETE ALL FILES in the self.path directory, regardless of whether they were created by AutoGluon. DO NOT STORE FILES UNRELATED TO AUTOGLUON INSIDE THE MODEL DIRECTORY.

fit(**kwargs)[source]

Fit model to predict values in y based on X.

Models should not override the fit method; instead, they should override the _fit method, which has the same arguments.

Parameters
X : DataFrame

The training data features.

y : Series

The training data ground truth labels.

X_val : DataFrame, default = None

The validation data features. If None, early stopping via validation score will be disabled.

y_val : Series, default = None

The validation data ground truth labels. If None, early stopping via validation score will be disabled.

X_unlabeled : DataFrame, default = None

Unlabeled data features. Models may optionally implement logic which leverages unlabeled data to improve model accuracy.

time_limit : float, default = None

Time limit in seconds to adhere to when fitting the model. Ideally, the model should early stop during fit to avoid exceeding the time limit, if specified.

sample_weight : Series, default = None

The training data sample weights. Models may optionally leverage sample weights during fit. If None, model decides. Typically, models assume uniform sample weight.

sample_weights_val : Series, default = None

The validation data sample weights. If None, model decides. Typically, models assume uniform sample weight.

num_cpus : int, default = ‘auto’

How many CPUs to use during fit. This is counted in virtual cores, not in physical cores. If ‘auto’, model decides.

num_gpus : int, default = ‘auto’

How many GPUs to use during fit. If ‘auto’, model decides.

verbosity : int, default = 2

Verbosity levels range from 0 to 4 and control how much information is printed. Higher levels correspond to more detailed print statements (you can set verbosity = 0 to suppress warnings).

verbosity 4: logs every training iteration, and logs the most detailed information.

verbosity 3: logs training iterations periodically, and logs more detailed information.

verbosity 2: logs only important information.

verbosity 1: logs only warnings and exceptions.

verbosity 0: logs only exceptions.

**kwargs :

Any additional fit arguments a model supports.
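
For illustration, here is a hedged sketch of fitting an individual model directly (advanced usage; most users should call TabularPredictor.fit instead). It assumes X_train, y_train, X_val, y_val already exist in model-ready form (numeric features, labels encoded as 0/1 for this binary example):

from autogluon.tabular.models import LGBModel

model = LGBModel(path='models/', name='LightGBM_custom',
                 problem_type='binary', eval_metric='roc_auc')
model.fit(X=X_train, y=y_train, X_val=X_val, y_val=y_val, time_limit=60)

y_pred = model.predict(X_val)          # class labels as a Series
y_proba = model.predict_proba(X_val)   # positive-class probabilities (binary)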

get_info() → dict[source]

Returns a dictionary of numerous fields describing the model.

get_model_feature_importance() → dict[source]

Custom feature importance values for a model (such as those calculated from training).

This is purely optional to implement, as it is only used to slightly speed up permutation importance by identifying features that were never used.

get_trained_params() → dict[source]

Returns the hyperparameters of the trained model. If the model early stopped, this will contain the epoch/iteration the model uses during inference, instead of the epoch/iteration specified during fit. This is used for generating a model template to refit on all of the data (no validation set).

is_fit() → bool[source]

Returns True if the model has been fit.

is_valid() → bool[source]

Returns True if the model is capable of inference on new data (if it is a normal model) or has produced out-of-fold predictions (if it is a bagged model). This indicates whether the model can be used as a base model to fit a stack ensemble model.

classmethod load(path: str, reset_paths=True, verbose=True)[source]

Loads the model from disk to memory.

Parameters
path : str

Path to the saved model, minus the file name. This should generally be a directory path ending with a ‘/’ character (or appropriate path separator value depending on OS). The model file is typically located in path + cls.model_file_name.

reset_paths : bool, default True

Whether to reset the self.path value of the loaded model to be equal to path. It is highly recommended to keep this value as True unless accessing the original self.path value is important. If False, the actual valid path and self.path may differ, leading to strange behaviour and potential exceptions if the model needs to load any other files at a later time.

verbose : bool, default True

Whether to log the location of the loaded file.

Returns
model : cls

Loaded model object.

predict(X, **kwargs)[source]

Returns class predictions of X. For binary and multiclass problems, this returns the predicted class labels as a Series. For regression problems, this returns the predicted values as a Series.

predict_proba(X, normalize=None, **kwargs)[source]

Returns class prediction probabilities of X. For binary problems, this returns the positive class label probability as a Series. For multiclass problems, this returns the class label probabilities of each class as a DataFrame. For regression problems, this returns the predicted values as a Series.

preprocess(X, preprocess_nonadaptive=True, preprocess_stateful=True, **kwargs)[source]

Preprocesses the input data into internal form ready for fitting or inference. It is not recommended to override this method, as it is closely tied to multi-layer stacking logic. Instead, override _preprocess.
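
To make the override pattern concrete (both here and for fit above), here is a minimal sketch of a custom model that overrides _preprocess and _fit; the internals are simplified assumptions (all features numeric) rather than the exact AutoGluon contract:

import numpy as np
from sklearn.linear_model import LogisticRegression
from autogluon.tabular.models import AbstractModel

class MyCustomModel(AbstractModel):
    def _preprocess(self, X, **kwargs):
        # Called by preprocess(); returns model-ready features.
        X = super()._preprocess(X, **kwargs)
        return X.fillna(0).to_numpy(dtype=np.float32)

    def _fit(self, X, y, **kwargs):
        # Called by fit() with the same arguments; stores the fitted estimator.
        X = self.preprocess(X)
        self.model = LogisticRegression(max_iter=1000)
        self.model.fit(X, y)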

reduce_memory_size(remove_fit=True, remove_info=False, requires_save=True, **kwargs)[source]

Removes non-essential objects from the model to reduce memory and disk footprint. If remove_fit=True, enables the removal of variables which are required for fitting the model; if the model is already fully trained, it is safe to remove these. If remove_info=True, enables the removal of variables which are used during model.get_info(); those values will instead be None when model.get_info() is called. If requires_save=True, enables the removal of variables which are part of the model.pkl object, requiring an overwrite of the model to disk if it was previously persisted.

It is not necessary for models to implement this.

rename(name: str)[source]

Renames the model and updates self.path to reflect the updated name.

save(path: str = None, verbose=True) → str[source]

Saves the model to disk.

Parameters
path : str, default None

Path to the saved model, minus the file name. This should generally be a directory path ending with a ‘/’ character (or appropriate path separator value depending on OS). If None, self.path is used. The final model file is typically saved to path + self.model_file_name.

verbose : bool, default True

Whether to log the location of the saved file.

Returns
path : str

Path to the saved model, minus the file name. Use this value to load the model from disk via cls.load(path), cls being the class of the model object, such as model = RFModel.load(path)
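
A minimal sketch of the save/load round trip described above, assuming model is a fitted RFModel:

from autogluon.tabular.models import RFModel

# Persist the fitted model, then restore it in a fresh session.
path = model.save(verbose=True)    # directory path, minus the file name
loaded_model = RFModel.load(path)  # cls.load(path), cls being the model's class
assert loaded_model.is_fit()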

LGBModel

class autogluon.tabular.models.LGBModel(**kwargs)[source]

LightGBM model: https://lightgbm.readthedocs.io/en/latest/

Hyperparameter options: https://lightgbm.readthedocs.io/en/latest/Parameters.html
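
LightGBM hyperparameters are typically supplied through TabularPredictor rather than by constructing LGBModel directly. A hedged sketch (‘GBM’ is the hyperparameters-dict key mapping to LGBModel; the values are illustrative and train_data is assumed):

from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label='class').fit(
    train_data,
    hyperparameters={'GBM': {'num_boost_round': 200, 'learning_rate': 0.05}},
)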

CatBoostModel

class autogluon.tabular.models.CatBoostModel(**kwargs)[source]

CatBoost model: https://catboost.ai/

Hyperparameter options: https://catboost.ai/docs/concepts/python-reference_parameters-list.html

XGBoostModel

class autogluon.tabular.models.XGBoostModel(**kwargs)[source]

XGBoost model: https://xgboost.readthedocs.io/en/latest/

Hyperparameter options: https://xgboost.readthedocs.io/en/latest/parameter.html

RFModel

class autogluon.tabular.models.RFModel(**kwargs)[source]

Random Forest model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html

KNNModel

class autogluon.tabular.models.KNNModel(**kwargs)[source]

KNearestNeighbors model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html

TabularNeuralNetModel

class autogluon.tabular.models.TabularNeuralNetModel(**kwargs)[source]

Class for neural network models that operate on tabular data. These networks use different types of input layers to process different types of data in various columns.

Attributes:

_types_of_features (dict): keys = ‘continuous’, ‘skewed’, ‘onehot’, ‘embed’, ‘language’; values = column names of the DataFrame corresponding to the features of each type.

feature_arraycol_map (OrderedDict): maps feature-name -> list of column-indices in the DataFrame corresponding to this feature.

feature_type_map (OrderedDict): maps feature-name -> feature_type string (options: ‘vector’, ‘embed’, ‘language’).

processor (sklearn.compose.ColumnTransformer): scikit-learn preprocessor object.

Note: This model always assumes higher values of self.eval_metric indicate better performance.

NNFastAiTabularModel

class autogluon.tabular.models.NNFastAiTabularModel(**kwargs)[source]

Class for fastai v1 neural network models that operate on tabular data.

Hyperparameters:

y_scaler: on regression problems, the model can produce unreasonable predictions on unseen data. This hyperparameter allows passing a scaler for the y values to address this problem. Note that intermediate iteration metrics will be affected by this transform, and as a result intermediate iteration scores will differ from the final ones (the final scores are correct). https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing

‘layers’: list of hidden layer sizes; None - use the model’s heuristics; default is None

‘emb_drop’: embedding layers dropout; default is 0.1

‘ps’: linear layers dropout - a list of values applied to each layer in layers; default is [0.1]

‘bs’: batch size; default is 256

‘lr’: maximum learning rate for one cycle policy; default is 1e-2; see also https://fastai1.fast.ai/train.html#fit_one_cycle, One-cycle policy paper: https://arxiv.org/abs/1803.09820

‘epochs’: number of epochs; default is 30

‘early.stopping.min_delta’: minimum delta for early stopping; default is 0.0001. See more details here: https://fastai1.fast.ai/callbacks.tracker.html#EarlyStoppingCallback

‘early.stopping.patience’: patience for early stopping; default is 10.

‘smoothing’: If > 0, then use LabelSmoothingCrossEntropy loss function for binary/multi-class classification; otherwise use default loss function for this type of problem; default is 0.0. See: https://docs.fast.ai/layers.html#LabelSmoothingCrossEntropy
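
A hedged sketch of passing these hyperparameters through TabularPredictor (‘FASTAI’ is the hyperparameters-dict key mapping to this model; train_data is assumed):

from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label='class').fit(
    train_data,
    hyperparameters={'FASTAI': {
        'layers': [200, 100],  # two hidden layers
        'bs': 512,             # batch size
        'lr': 1e-2,            # max learning rate for the one-cycle policy
        'epochs': 30,
    }},
)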

Ensemble Models

BaggedEnsembleModel

Bagged ensemble meta-model which fits a given model multiple times across different splits of the training data.

StackerEnsembleModel

Stack ensemble meta-model which functions identically to BaggedEnsembleModel with the additional capability to leverage base models.

WeightedEnsembleModel

Weighted ensemble meta-model that implements Ensemble Selection: https://www.cs.cornell.edu/~alexn/papers/shotgun.icml04.revised.rev2.pdf

BaggedEnsembleModel

class autogluon.core.models.BaggedEnsembleModel(model_base: autogluon.core.models.abstract.abstract_model.AbstractModel, random_state=0, **kwargs)[source]

Bagged ensemble meta-model which fits a given model multiple times across different splits of the training data.
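
Bagged ensembles are normally created by enabling bagging in TabularPredictor rather than by instantiating this class directly; a minimal sketch (train_data assumed):

from autogluon.tabular import TabularPredictor

# 5-fold bagging: each model is fit 5 times on different 80% splits of the
# training data, producing "_BAG"-suffixed ensembles with out-of-fold predictions.
predictor = TabularPredictor(label='class').fit(train_data, num_bag_folds=5)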

StackerEnsembleModel

class autogluon.core.models.StackerEnsembleModel(base_model_names=None, base_models_dict=None, base_model_paths_dict=None, base_model_types_dict=None, base_model_types_inner_dict=None, base_model_performances_dict=None, **kwargs)[source]

Stack ensemble meta-model which functions identically to BaggedEnsembleModel with the additional capability to leverage base models.

By specifying base models during init, stacker models can use the base model predictions as features during training and inference.

This property allows for significantly improved model quality in many situations compared to non-stacking alternatives.

Stacker models can act as base models to other stacker models, enabling multi-layer stack ensembling.
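
Likewise, stacker models are normally produced by enabling stacking in TabularPredictor; a hedged sketch that adds one stack layer on top of the bagged base layer (train_data assumed):

from autogluon.tabular import TabularPredictor

# "_L2" stacker models receive the out-of-fold predictions of the "_L1"
# base models as additional input features.
predictor = TabularPredictor(label='class').fit(
    train_data, num_bag_folds=5, num_stack_levels=1,
)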

WeightedEnsembleModel

class autogluon.core.models.WeightedEnsembleModel(**kwargs)[source]

Weighted ensemble meta-model that implements Ensemble Selection: https://www.cs.cornell.edu/~alexn/papers/shotgun.icml04.revised.rev2.pdf

An autogluon.core.models.GreedyWeightedEnsembleModel must be specified as the model_base for this model to function properly.

Experimental Models

FastTextModel

TextPredictorModel

FastTextModel

class autogluon.tabular.models.FastTextModel(**kwargs)[source]

TextPredictorModel

class autogluon.tabular.models.TextPredictorModel(**kwargs)[source]