autogluon.tabular.models¶
Note
This documentation is for advanced users, and is not comprehensive.
For a stable public API, refer to TabularPredictor.
Model Name Suffixes¶
Models trained by TabularPredictor can have suffixes in their names that have special meanings.
The suffixes are as follows:
“_Lx”: Indicates the stack level (x) the model is trained in, such as “_L1”, “_L2”, etc. A model with “_L1” suffix is a base model, meaning it does not depend on any other models. If a model lacks this suffix, then it is a base model and is at level 1 (“_L1”).
“/Tx”: Indicates that the model was trained via hyperparameter optimization (HPO). Tx is shorthand for HPO trial #x. An example would be “LightGBM/T8”.
“_BAG”: Indicates that the model is a bagged ensemble. A bagged ensemble contains multiple instances of the model (children) trained with different subsets of the data. During inference, these child models each predict on the data and their predictions are averaged to produce the final result. This typically achieves a stronger result than any of the individual models alone, but significantly slows inference. Refer to “_FULL” for instructions on how to improve inference speed.
“_FULL”: Indicates the model has been refit via TabularPredictor’s refit_full method. This model will have no validation score because all of the data (train and validation) was used as training data. Usually, there will be another model with the same name as this model minus the “_FULL” suffix. This model can often outperform the original because it is trained on more data, though it is usually weaker if the original was a bagged ensemble (“_BAG”); in exchange, its inference speed is much faster.
“_DSTL”: Indicates the model was created through model distillation via a call to TabularPredictor’s distill method. Validation scores of distilled models should only be compared against other distilled models.
“_x”: Indicates that the name without this added suffix already existed in a different model, so this suffix was added to avoid overwriting the pre-existing model. An example would be “LightGBM_2”.
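To make these names concrete, here is a minimal sketch of how the suffixes surface in a leaderboard. It assumes a hypothetical train_data table with a “class” label column; the exact model names produced will vary by run.

```python
from autogluon.tabular import TabularPredictor

# Enable bagging and one extra stack level so suffixed names appear.
predictor = TabularPredictor(label="class").fit(
    train_data, num_bag_folds=5, num_stack_levels=1
)

# Typical names in the leaderboard:
#   "LightGBM_BAG_L1"      - bagged LightGBM base model
#   "CatBoost_BAG_L2"      - bagged CatBoost stacker (uses L1 predictions as features)
#   "WeightedEnsemble_L2"  - weighted ensemble over level-1 models
print(predictor.leaderboard(silent=True)["model"])
```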
Models¶
AbstractModel¶
class autogluon.tabular.models.AbstractModel(path: str = None, name: str = None, problem_type: str = None, eval_metric: Union[str, autogluon.core.metrics.Scorer] = None, hyperparameters=None)[source]¶
Abstract model implementation from which all AutoGluon models inherit.
- Parameters
- path : str, default = None
Directory location to store all outputs. If None, a new unique time-stamped directory is chosen.
- name : str, default = None
Name of the subdirectory inside path where the model will be saved. The final model directory will be path + name + os.path.sep. If None, defaults to the model’s class name: self.__class__.__name__
- problem_type : str, default = None
Type of prediction problem, i.e. is this a binary/multiclass classification or regression problem (options: ‘binary’, ‘multiclass’, ‘regression’). If None, will attempt to infer the problem type based on training data labels during training.
- eval_metric : autogluon.core.metrics.Scorer or str, default = None
Metric by which predictions will be ultimately evaluated on test data. This only impacts model.score(), as eval_metric is not used during training.
If eval_metric = None, it is automatically chosen based on problem_type, defaulting to ‘accuracy’ for binary and multiclass classification and ‘root_mean_squared_error’ for regression. Otherwise, options for classification:
[‘accuracy’, ‘balanced_accuracy’, ‘f1’, ‘f1_macro’, ‘f1_micro’, ‘f1_weighted’, ‘roc_auc’, ‘roc_auc_ovo_macro’, ‘average_precision’, ‘precision’, ‘precision_macro’, ‘precision_micro’, ‘precision_weighted’, ‘recall’, ‘recall_macro’, ‘recall_micro’, ‘recall_weighted’, ‘log_loss’, ‘pac_score’]
Options for regression:
[‘root_mean_squared_error’, ‘mean_squared_error’, ‘mean_absolute_error’, ‘median_absolute_error’, ‘r2’]
Options for quantile regression:
[‘pinball_loss’]
For more information on these options, see sklearn.metrics: https://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics
You can also pass your own evaluation function here as long as it follows the formatting of the functions defined in autogluon.core.metrics (see the sketch following this parameter list).
- hyperparameters : dict, default = None
Hyperparameters that will be used by the model (can be search spaces instead of fixed values). If None, model defaults are used. This is identical to passing an empty dictionary.
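Where a custom evaluation function is needed, the following is a hedged sketch using autogluon.core.metrics.make_scorer; the metric name and the wrapped sklearn function are illustrative choices, not requirements.

```python
from sklearn.metrics import f1_score
from autogluon.core.metrics import make_scorer

# Wrap an sklearn metric callable as an AutoGluon Scorer so it can be
# passed as eval_metric. optimum is the best achievable metric value.
custom_f1 = make_scorer(
    name="custom_f1",
    score_func=f1_score,
    optimum=1,
    greater_is_better=True,
)
```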
- Attributes
- path_suffix
Methods
- can_fit() : Returns True if the model can be fit.
- can_infer() : Returns True if the model is capable of inference on new data.
- convert_to_refit_full_template() : After calling this function, the returned model should be able to be fit without X_val, y_val using the iterations trained by the original model.
- convert_to_refit_full_via_copy() : Creates a new refit_full variant of the model, but instead of training it simply copies self.
- convert_to_template() : After calling this function, the returned model should be able to be fit as if it were new, as well as deep-copied.
- delete_from_disk([silent]) : Deletes the model from disk.
- estimate_memory_usage(**kwargs) : Estimates the memory usage of the model while training.
- fit(**kwargs) : Fit model to predict values in y based on X.
- get_disk_size()
- get_fit_metadata() : Returns a dictionary of metadata related to the model fit that isn’t related to hyperparameters.
- get_info() : Returns a dictionary of numerous fields describing the model.
- get_memory_size()
- get_minimum_resources() : Returns a dictionary of minimum resource requirements to fit the model.
- get_params() : Get the params of the model at the time of initialization.
- get_trained_params() : Returns the hyperparameters of the trained model.
- is_fit() : Returns True if the model has been fit.
- is_initialized() : Returns True if the model is initialized.
- is_valid() : Returns True if the model is capable of inference on new data (if a normal model) or has produced out-of-fold predictions (if a bagged model); this indicates whether the model can be used as a base model to fit a stack ensemble model.
- load(path[, reset_paths, verbose]) : Loads the model from disk to memory.
- predict(X, **kwargs) : Returns class predictions of X.
- predict_proba(X[, normalize]) : Returns class prediction probabilities of X.
- preprocess(X[, preprocess_nonadaptive, …]) : Preprocesses the input data into internal form ready for fitting or inference.
- reduce_memory_size([remove_fit, …]) : Removes non-essential objects from the model to reduce memory and disk footprint.
- rename(name) : Renames the model and updates self.path to reflect the updated name.
- reset_metrics()
- save([path, verbose]) : Saves the model to disk.
- set_contexts(path_context)
- validate_fit_resources([num_cpus, num_gpus]) : Verifies that the provided num_cpus and num_gpus (or defaults if not provided) are sufficient to train the model.
- compute_feature_importance
- create_contexts
- get_features
- hyperparameter_tune
- initialize
- load_info
- save_info
- score
- score_with_y_pred_proba
convert_to_refit_full_template()[source]¶
After calling this function, the returned model should be able to be fit without X_val, y_val using the iterations trained by the original model.
convert_to_refit_full_via_copy()[source]¶
Creates a new refit_full variant of the model, but instead of training it simply copies self. This method is a fallback for compatibility with models that have not implemented refit_full support.
convert_to_template()[source]¶
After calling this function, the returned model should be able to be fit as if it were new, as well as deep-copied. The model name and path will be identical to the original, and the model must be renamed prior to training to avoid overwriting the original model files if they exist.
delete_from_disk(silent=False)[source]¶
Deletes the model from disk.
WARNING: This will DELETE ALL FILES in the self.path directory, regardless of whether they were created by AutoGluon. DO NOT STORE FILES INSIDE THE MODEL DIRECTORY THAT ARE UNRELATED TO AUTOGLUON.
estimate_memory_usage(**kwargs) → int[source]¶
Estimates the memory usage of the model while training.
- Returns
- int
The number of bytes that will be used during training.
fit(**kwargs)[source]¶
Fit model to predict values in y based on X.
Models should not override the fit method, but instead override the _fit method which has the same arguments.
- Parameters
- X : DataFrame
The training data features.
- y : Series
The training data ground truth labels.
- X_val : DataFrame, default = None
The validation data features. If None, early stopping via validation score will be disabled.
- y_val : Series, default = None
The validation data ground truth labels. If None, early stopping via validation score will be disabled.
- X_unlabeled : DataFrame, default = None
Unlabeled data features. Models may optionally implement logic which leverages unlabeled data to improve model accuracy.
- time_limit : float, default = None
Time limit in seconds to adhere to when fitting the model. Ideally, the model should early stop during fit to avoid going over the time limit if specified.
- sample_weight : Series, default = None
The training data sample weights. Models may optionally leverage sample weights during fit. If None, model decides. Typically, models assume uniform sample weights.
- sample_weights_val : Series, default = None
The validation data sample weights. If None, model decides. Typically, models assume uniform sample weights.
- num_cpus : int, default = ‘auto’
How many CPUs to use during fit. This is counted in virtual cores, not physical cores. If ‘auto’, model decides.
- num_gpus : int, default = ‘auto’
How many GPUs to use during fit. If ‘auto’, model decides.
- feature_metadata : autogluon.common.features.feature_metadata.FeatureMetadata, default = None
Contains feature type information that can be used to identify special features such as text ngrams and datetime, as well as which features are numerical vs categorical. If None, feature_metadata is inferred during fit.
- verbosity : int, default = 2
Verbosity levels range from 0 to 4 and control how much information is printed. Higher levels correspond to more detailed print statements (you can set verbosity = 0 to suppress warnings).
verbosity 4: logs every training iteration and the most detailed information.
verbosity 3: logs training iterations periodically and more detailed information.
verbosity 2: logs only important information.
verbosity 1: logs only warnings and exceptions.
verbosity 0: logs only exceptions.
- **kwargs
Any additional fit arguments a model supports.
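As a rough illustration of this advanced API (most users should call TabularPredictor.fit instead), the sketch below assumes X, y, X_val, and y_val are pandas objects you have already prepared:

```python
from autogluon.tabular.models import LGBModel

# Directly fit a single LightGBM model with a validation set and time limit.
model = LGBModel(eval_metric="log_loss", hyperparameters={"learning_rate": 0.05})
model.fit(X=X, y=y, X_val=X_val, y_val=y_val, time_limit=60)

print(model.score(X_val, y_val))  # scored with the model's eval_metric
```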
get_fit_metadata() → dict[source]¶
Returns a dictionary of metadata related to the model fit that isn’t related to hyperparameters. Must be called after the model has been fit.
get_minimum_resources() → Dict[str, int][source]¶
Returns a dictionary of minimum resource requirements to fit the model. Subclasses should consider overriding this method if the model requires more resources to train. If a resource is not part of the output dictionary, it is considered unnecessary. Valid keys: ‘num_cpus’, ‘num_gpus’.
get_trained_params() → dict[source]¶
Returns the hyperparameters of the trained model. If the model early stopped, this will contain the epoch/iteration the model uses during inference, instead of the epoch/iteration specified during fit. This is used for generating a model template to refit on all of the data (no validation set).
is_initialized() → bool[source]¶
Returns True if the model is initialized. This indicates whether the model has inferred various information such as problem_type and num_classes. A model is automatically initialized when .fit or .hyperparameter_tune is called.
is_valid() → bool[source]¶
Returns True if the model is capable of inference on new data (if a normal model) or has produced out-of-fold predictions (if a bagged model). This indicates whether the model can be used as a base model to fit a stack ensemble model.
-
classmethod
load
(path: str, reset_paths=True, verbose=True)[source]¶ Loads the model from disk to memory.
- Parameters
- path : str
Path to the saved model, minus the file name. This should generally be a directory path ending with a ‘/’ character (or appropriate path separator value depending on OS). The model file is typically located in path + cls.model_file_name.
- reset_paths : bool, default True
Whether to reset the self.path value of the loaded model to be equal to path. It is highly recommended to keep this value as True unless accessing the original self.path value is important. If False, the actual valid path and self.path may differ, leading to strange behaviour and potential exceptions if the model needs to load any other files at a later time.
- verbose : bool, default True
Whether to log the location of the loaded file.
- Returns
- model : cls
Loaded model object.
predict(X, **kwargs)[source]¶
Returns class predictions of X. For binary and multiclass problems, this returns the predicted class labels as a Series. For regression problems, this returns the predicted values as a Series.
predict_proba(X, normalize=None, **kwargs)[source]¶
Returns class prediction probabilities of X. For binary problems, this returns the positive class label probability as a Series. For multiclass problems, this returns the class label probabilities of each class as a DataFrame. For regression problems, this returns the predicted values as a Series.
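A brief sketch of the output-shape difference, reusing the hypothetical fitted model from the earlier fit example (X_test is an assumed held-out DataFrame):

```python
# Binary classification:
y_pred = model.predict(X_test)         # Series of predicted class labels
y_proba = model.predict_proba(X_test)  # Series of positive-class probabilities

# For multiclass problems, predict_proba instead returns a DataFrame with
# one probability column per class; for regression, both return the
# predicted values as a Series.
```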
preprocess(X, preprocess_nonadaptive=True, preprocess_stateful=True, **kwargs)[source]¶
Preprocesses the input data into internal form ready for fitting or inference. It is not recommended to override this method, as it is closely tied to multi-layer stacking logic. Instead, override _preprocess.
reduce_memory_size(remove_fit=True, remove_info=False, requires_save=True, **kwargs)[source]¶
Removes non-essential objects from the model to reduce memory and disk footprint.
If remove_fit=True, enables the removal of variables which are required for fitting the model. If the model is already fully trained, then it is safe to remove these.
If remove_info=True, enables the removal of variables which are used during model.get_info(). The values will be None when calling model.get_info().
If requires_save=True, enables the removal of variables which are part of the model.pkl object, requiring an overwrite of the model to disk if it was previously persisted.
It is not necessary for models to implement this.
save(path: str = None, verbose=True) → str[source]¶
Saves the model to disk.
- Parameters
- path : str, default None
Path to the saved model, minus the file name. This should generally be a directory path ending with a ‘/’ character (or appropriate path separator value depending on OS). If None, self.path is used. The final model file is typically saved to path + self.model_file_name.
- verbose : bool, default True
Whether to log the location of the saved file.
- Returns
- path : str
Path to the saved model, minus the file name. Use this value to load the model from disk via cls.load(path), cls being the class of the model object, such as model = RFModel.load(path).
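A short save/load round trip under the same assumptions as the earlier sketches:

```python
from autogluon.tabular.models import LGBModel

path = model.save()           # returns the directory the model was written to
loaded = LGBModel.load(path)  # classmethod restores the model from disk

assert loaded.is_fit()
```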
LGBModel¶
class autogluon.tabular.models.LGBModel(**kwargs)[source]¶
LightGBM model: https://lightgbm.readthedocs.io/en/latest/
Hyperparameter options: https://lightgbm.readthedocs.io/en/latest/Parameters.html
- Extra hyperparameter options:
ag.early_stop : int, specifies the early stopping rounds. Defaults to an adaptive strategy. Recommended to keep default.
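In practice, these options are usually supplied through TabularPredictor’s hyperparameters argument rather than by constructing LGBModel directly. A hedged sketch (train_data and the “class” label are illustrative assumptions; “GBM” is the hyperparameters key AutoGluon maps to LGBModel):

```python
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label="class").fit(
    train_data,
    # Native LightGBM options plus the AutoGluon-specific ag.early_stop.
    hyperparameters={"GBM": {"learning_rate": 0.05, "ag.early_stop": 100}},
)
```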
CatBoostModel¶
class autogluon.tabular.models.CatBoostModel(**kwargs)[source]¶
CatBoost model: https://catboost.ai/
Hyperparameter options: https://catboost.ai/docs/concepts/python-reference_parameters-list.html
XGBoostModel¶
class autogluon.tabular.models.XGBoostModel(**kwargs)[source]¶
XGBoost model: https://xgboost.readthedocs.io/en/latest/
Hyperparameter options: https://xgboost.readthedocs.io/en/latest/parameter.html
RFModel¶
class autogluon.tabular.models.RFModel(**kwargs)[source]¶
Random Forest model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
XTModel¶
class autogluon.tabular.models.XTModel(**kwargs)[source]¶
Extra Trees model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html#sklearn.ensemble.ExtraTreesClassifier
KNNModel¶
class autogluon.tabular.models.KNNModel(**kwargs)[source]¶
KNearestNeighbors model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
LinearModel¶
class autogluon.tabular.models.LinearModel(**kwargs)[source]¶
Linear model (scikit-learn): https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
The scikit-learn model backend differs depending on problem_type.
TabularNeuralNetTorchModel¶
class autogluon.tabular.models.TabularNeuralNetTorchModel(**kwargs)[source]¶
PyTorch neural network model for tabular data.
TabularNeuralNetMxnetModel¶
class autogluon.tabular.models.TabularNeuralNetMxnetModel(**kwargs)[source]¶
Class for neural network models that operate on tabular data. These networks use different types of input layers to process different types of data in various columns.
- Attributes:
_types_of_features (dict): keys = ‘continuous’, ‘skewed’, ‘onehot’, ‘embed’; values = column names of the DataFrame corresponding to the features of this type.
feature_arraycol_map (OrderedDict): maps feature-name -> list of column indices in the DataFrame corresponding to this feature.
feature_type_map (OrderedDict): maps feature-name -> feature_type string (options: ‘vector’, ‘embed’).
processor (sklearn.ColumnTransformer): scikit-learn preprocessor object.
Note: This model always assumes higher values of self.eval_metric indicate better performance.
NNFastAiTabularModel¶
class autogluon.tabular.models.NNFastAiTabularModel(**kwargs)[source]¶
Class for fastai v1 neural network models that operate on tabular data.
- Hyperparameters:
y_scaler: for regression problems, the model can give unreasonable predictions on unseen data. This attribute allows you to pass a scaler for y values to address this problem. Please note that intermediate iteration metrics will be affected by this transform, and as a result intermediate iteration scores will differ from the final ones (the final scores will be correct). https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
‘layers’: list of hidden layer sizes; None - use model’s heuristics; default is None
‘emb_drop’: embedding layers dropout; default is 0.1
‘ps’: linear layers dropout - list of values applied to every layer in layers; default is [0.1]
‘bs’: batch size; default is 256
‘lr’: maximum learning rate for one cycle policy; default is 1e-2; see also https://docs.fast.ai/callback.schedule.html#Learner.fit_one_cycle, One-cycle policy paper: https://arxiv.org/abs/1803.09820
‘epochs’: number of epochs; default is 30
Early stopping settings (see https://docs.fast.ai/callback.tracker.html#EarlyStoppingCallback): ‘early.stopping.min_delta’: default is 0.0001; ‘early.stopping.patience’: default is 10.
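A hedged sketch of configuring these options through TabularPredictor (train_data and the “target” label are assumptions; “FASTAI” is the hyperparameters key AutoGluon maps to this model):

```python
from sklearn.preprocessing import StandardScaler
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label="target", problem_type="regression").fit(
    train_data,
    hyperparameters={
        "FASTAI": {
            "layers": [200, 100],          # hidden layer sizes
            "bs": 512,                     # batch size
            "lr": 1e-2,                    # max one-cycle learning rate
            "epochs": 30,
            "y_scaler": StandardScaler(),  # stabilize regression targets
        }
    },
)
```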
VowpalWabbitModel¶
class autogluon.tabular.models.VowpalWabbitModel(**kwargs)[source]¶
VowpalWabbit model: https://vowpalwabbit.org/
VowpalWabbit Command Line args: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-line-arguments
Ensemble Models¶
BaggedEnsembleModel : Bagged ensemble meta-model which fits a given model multiple times across different splits of the training data.
StackerEnsembleModel : Stack ensemble meta-model which functions identically to BaggedEnsembleModel, with the additional capability to leverage base models.
WeightedEnsembleModel : Weighted ensemble meta-model that implements Ensemble Selection: https://www.cs.cornell.edu/~alexn/papers/shotgun.icml04.revised.rev2.pdf
BaggedEnsembleModel¶
class autogluon.core.models.BaggedEnsembleModel(model_base: Union[autogluon.core.models.abstract.abstract_model.AbstractModel, Type[autogluon.core.models.abstract.abstract_model.AbstractModel]], model_base_kwargs: Dict[str, any] = None, random_state: int = 0, **kwargs)[source]¶
Bagged ensemble meta-model which fits a given model multiple times across different splits of the training data.
For certain child models such as KNN, this may only train a single model and instead rely on the child model to generate out-of-fold predictions.
- Parameters
- model_base : Union[AbstractModel, Type[AbstractModel]]
The base model to repeatedly fit during bagging. If an AbstractModel class, then also provide model_base_kwargs, which will be used to initialize the model via model_base(**model_base_kwargs).
- model_base_kwargs : Dict[str, any], default = None
kwargs used to initialize model_base if model_base is a class.
- random_state : int, default = 0
Random state used to split the data into cross-validation folds during fit.
- **kwargs
Refer to AbstractModel documentation
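A minimal sketch of constructing a bagged ensemble directly; X and y are assumed training data, and most users instead get bagged models implicitly via TabularPredictor’s num_bag_folds option. The k_fold fit argument shown here follows the pattern in AutoGluon’s advanced examples.

```python
from autogluon.core.models import BaggedEnsembleModel
from autogluon.tabular.models import LGBModel

# Bag a LightGBM model: one child model is fit per cross-validation fold.
bagged_model = BaggedEnsembleModel(
    model_base=LGBModel,
    model_base_kwargs={"eval_metric": "log_loss"},
    random_state=0,
)
bagged_model.fit(X=X, y=y, k_fold=5)
```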
StackerEnsembleModel¶
class autogluon.core.models.StackerEnsembleModel(base_model_names=None, base_models_dict=None, base_model_paths_dict=None, base_model_types_dict=None, base_model_types_inner_dict=None, base_model_performances_dict=None, **kwargs)[source]¶
Stack ensemble meta-model which functions identically to BaggedEnsembleModel, with the additional capability to leverage base models. By specifying base models during init, stacker models can use the base model predictions as features during training and inference.
This property allows for significantly improved model quality in many situations compared to non-stacking alternatives.
Stacker models can act as base models to other stacker models, enabling multi-layer stack ensembling.
WeightedEnsembleModel¶
class autogluon.core.models.WeightedEnsembleModel(**kwargs)[source]¶
Weighted ensemble meta-model that implements Ensemble Selection: https://www.cs.cornell.edu/~alexn/papers/shotgun.icml04.revised.rev2.pdf
An autogluon.core.models.GreedyWeightedEnsembleModel must be specified as the model_base for this model to function properly.
Experimental Models¶
Wrapper of autogluon.vision.ImagePredictor.