What’s New
Here you can find the release notes for current and past releases of AutoGluon.
Version 1.3.1
We are happy to announce the AutoGluon 1.3.1 release!
AutoGluon 1.3.1 contains several bug fixes and logging improvements for Tabular, TimeSeries, and Multimodal modules.
This release contains 9 commits from 5 contributors! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/1.3.0...1.3.1
Join the community:
Get the latest updates:
This release supports Python versions 3.9, 3.10, 3.11, and 3.12. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.3.1.
General
Tabular
Fixes and Improvements
Fix incorrect reference to positive_class in TabularPredictor constructor. @celestinoxp #5129
TimeSeries
Fixes and Improvements
Avoid masking the `scaler` param with the default `target_scaler` value for `DirectTabular` and `RecursiveTabular` models. @shchur #5131
Fix `FutureWarning` in leaderboard and evaluate methods. @shchur #5126
Multimodal
Fixes and Improvements
Documentation and CI
Contributors
Full Contributor List (ordered by # of commits):
New Contributors
Version 1.3.0
We are happy to announce the AutoGluon 1.3.0 release!
AutoGluon 1.3 focuses on stability & usability improvements, bug fixes, and dependency upgrades.
This release contains 144 commits from 20 contributors! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v1.2.0...v1.3.0
Join the community:
Get the latest updates:
Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.3.
Highlights
AutoGluon-Tabular is the state of the art in the AutoML Benchmark 2025!
The AutoML Benchmark 2025, an independent large-scale evaluation of tabular AutoML frameworks, showcases AutoGluon 1.2 as the state-of-the-art AutoML framework! Highlights include:
AutoGluon statistically significantly outperforms all other AutoML systems in rank, as measured by the Nemenyi post-hoc test, across all time constraints.
AutoGluon with a 5 minute training budget outperforms all other AutoML systems with a 1 hour training budget.
AutoGluon is Pareto efficient in quality and speed across all evaluated presets and time constraints.
AutoGluon with `presets="high", infer_limit=0.0001` (HQIL in the figures) achieves >10,000 samples/second inference throughput while outperforming all methods.
AutoGluon is the most stable AutoML system. For “best” and “high” presets, AutoGluon has 0 failures on all time budgets >5 minutes.

AutoGluon Multimodal’s “Bag of Tricks” Update
We are pleased to announce the integration of a comprehensive “Bag of Tricks” update for AutoGluon’s MultiModal (AutoMM). This significant enhancement substantially improves multimodal AutoML performance when working with combinations of image, text, and tabular data. The update implements various strategies including multimodal model fusion techniques, multimodal data augmentation, cross-modal alignment, tabular data serialization, better handling of missing modalities, and an ensemble learner that integrates these techniques for optimal performance.
Users can now access these capabilities through a simple parameter when initializing the MultiModalPredictor, after following the instructions here to download the checkpoints:
```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label="label", use_ensemble=True)
predictor.fit(train_data=train_data)
```
We express our gratitude to @zhiqiangdon for this substantial contribution that enhances AutoGluon’s capabilities for handling complex multimodal datasets. Here is the corresponding research paper describing the technical details: Bag of Tricks for Multimodal AutoML with Image, Text, and Tabular Data.
Deprecations and Breaking Changes
The following deprecated TabularPredictor methods have been removed in the 1.3.0 release (deprecated in 1.0.0, raising an error in 1.2.0, removed in 1.3.0). Please use the new names (a migration sketch follows the list):
`persist_models` -> `persist`
`unpersist_models` -> `unpersist`
`get_model_names` -> `model_names`
`get_model_best` -> `model_best`
`get_pred_from_proba` -> `predict_from_proba`
`get_model_full_dict` -> `model_refit_map`
`get_oof_pred_proba` -> `predict_proba_oof`
`get_oof_pred` -> `predict_oof`
`get_size_disk_per_file` -> `disk_usage_per_file`
`get_size_disk` -> `disk_usage`
`get_model_names_persisted` -> `model_names(persisted=True)`
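For example, migrating existing code is a direct rename. A minimal sketch, assuming `predictor` is an already-fit TabularPredictor:

```python
# New names (AutoGluon >= 1.3)              # old names (now removed)
predictor.persist()                         # was: predictor.persist_models()
names = predictor.model_names()             # was: predictor.get_model_names()
persisted = predictor.model_names(persisted=True)  # was: get_model_names_persisted()
refit_map = predictor.model_refit_map()     # was: predictor.get_model_full_dict()
```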
The following logic has been deprecated starting in 1.3.0 and will log a `FutureWarning`. Functionality will be changed in a future release:
(FutureWarning) `TabularPredictor.delete_models()` will default to `dry_run=False` in a future release (currently `dry_run=True`). Please ensure you explicitly specify `dry_run=True` for the existing logic to remain in future releases. @Innixma (#4905)
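To keep the current behavior once the default flips, pass the flag explicitly. A minimal sketch, assuming a fit `predictor`:

```python
# Dry run: report which models would be deleted without touching disk.
# Relying on the implicit default will change behavior in a future release.
predictor.delete_models(models_to_keep="best", dry_run=True)

# Opt in explicitly when you actually want deletion:
# predictor.delete_models(models_to_keep="best", dry_run=False)
```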
General
Improvements
(Major) Internal refactor of the `AbstractTrainer` class to improve extensibility and reduce code duplication. @canerturkmen (#4804, #4820, #4851)
Dependencies
Update numpy to `>=1.25.0,<2.3.0`. @tonyhoo, @Innixma, @suzhoum (#5020, #5056, #5072)
Update scikit-learn to `>=1.4.0,<1.7.0`. @tonyhoo, @Innixma (#5029, #5045)
Update ray to `>=2.10.0,<2.45`. @suzhoum, @celestinoxp, @tonyhoo (#4714, #4887, #5020)
Update torch to `>=2.2,<2.7`. @FireballDWF (#5000)
Update lightning to `>=2.2,<2.7`. @FireballDWF (#5000)
Update torchmetrics to `>=1.2.0,<1.8`. @zkalson, @tonyhoo (#4720, #5020)
Update torchvision to `>=0.16.0,<0.22.0`. @FireballDWF (#5000)
Update accelerate to `>=0.34.0,<2.0`. @FireballDWF (#5000)
Update pytorch-metric-learning to `>=1.3.0,<2.9`. @tonyhoo (#5020)
Documentation
Update documented Python versions in CONTRIBUTING.md. @celestinoxp (#4796)
Refactored CONTRIBUTING.md to have up-to-date information. @Innixma (#4798)
Fix various typos. @celestinoxp (#4819)
Fixes and Improvements
Fix colab AutoGluon source install with `uv`. @tonyhoo (#4943, #4964)
Make `full_install.sh` use the script directory instead of the working directory. @Innixma (#4933)
Add `test_version.py` to ensure proper version format for releases. @Innixma (#4799)
Ensure `setup_outputdir` always makes a new directory if `path_suffix != None` and `path=None`. @Innixma (#4903)
Check `cuda.is_available()` before calling `cuda.device_count()` to avoid warnings. @Innixma (#4902)
Log a warning if mlflow autologging is enabled. @shchur (#4925)
Fix rare ZeroDivisionError edge-case in `get_approximate_df_mem_usage`. @shchur (#5083)
Minor fixes & improvements. @suzhoum @Innixma @canerturkmen @PGijsbers @tonyhoo (#4744, #4785, #4822, #4860, #4891, #5012, #5047)
Tabular
Removed Models
Removed vowpalwabbit model (key: `VW`) and optional dependency (`autogluon.tabular[vowpalwabbit]`), as the model implemented in AutoGluon was not widely used and was largely unmaintained. @Innixma (#4975)
Removed TabTransformer model (key: `TRANSF`), as the model implemented in AutoGluon was heavily outdated, unmaintained since 2020, and generally outperformed by FT-Transformer (key: `FT_TRANSFORMER`). @Innixma (#4976)
Removed tabpfn from `autogluon.tabular[tests]` install in preparation for future `tabpfn>=2.x` support. @Innixma (#4974)
New Features
Add support for regression stratified splits via binning. @Innixma (#4586)
Add `TabularPredictor.model_hyperparameters(model)` that returns the hyperparameters of a model. @Innixma (#4901)
Add `TabularPredictor.model_info(model)` that returns the metadata of a model. @Innixma (#4901)
(Experimental) Add `plot_leaderboard.py` to visualize performance over training time of the predictor. @Innixma (#4907)
(Major) Add internal `ag_model_registry` to improve the tracking of supported model families and their capabilities. @Innixma (#4913, #5057, #5107)
Add `raise_on_model_failure` `TabularPredictor.fit` argument, defaulting to False. If True, AutoGluon will immediately raise the original exception if a model raises an exception during fit, instead of continuing to the next model. Setting this to True is very helpful when using a debugger to figure out why a model is failing, as otherwise exceptions are handled by AutoGluon, which isn’t desired while debugging (see the sketch after this list). @Innixma (#4937, #5055)
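A minimal sketch of the debugging workflow this enables (the label column name and `train_data` are illustrative):

```python
from autogluon.tabular import TabularPredictor

# With raise_on_model_failure=True, a failing model re-raises its original
# exception immediately instead of being caught and skipped, so a debugger
# breaks at the true source of the error.
predictor = TabularPredictor(label="class").fit(
    train_data,
    raise_on_model_failure=True,
)
```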
Documentation
Fixes and Improvements
(Major) Ensure bagged refits in `refit_full` work properly (crashed in v1.2.0 due to a bug). @Innixma (#4870)
Improve XGBoost and CatBoost memory estimates. @Innixma (#5090)
Fixed balanced_accuracy metric edge-case exception + added unit tests to ensure future bugs don’t occur. @Innixma (#4775)
Fix crash when NN_TORCH trains with fewer than 8 samples. @Innixma (#4790)
Improve logging and documentation in CatBoost memory_check callback. @celestinoxp (#4802)
Improve code formatting to satisfy PEP585. @celestinoxp (#4823)
Remove deprecated TabularPredictor methods. @Innixma (#4906)
(FutureWarning) `TabularPredictor.delete_models()` will default to `dry_run=False` in a future release (currently `dry_run=True`). Please ensure you explicitly specify `dry_run=True` for the existing logic to remain in future releases. @Innixma (#4905)
Sped up tabular unit tests by 4x through various optimizations (3060s -> 743s). @Innixma (#4944)
Major tabular unit test refactor to avoid using fixtures. @Innixma (#4949)
Fix `TabularPredictor.refit_full(train_data_extra)` failing when categorical features exist. @Innixma (#4948)
Reduced memory usage of artifact created by `convert_simulation_artifacts_to_tabular_predictions_dict` by 4x. @Innixma (#5024)
Ensure that max model resources are respected during holdout model fit. @Innixma (#5067)
Remove unintended setting of global random seed during LightGBM model fit. @Innixma (#5095)
TimeSeries
The new v1.3 release brings numerous usability improvements and bug fixes to the TimeSeries module. Internally, we completed a major refactor of the core classes and introduced static type checking to simplify future contributions, accelerate development, and catch potential bugs earlier.
API Changes and Deprecations
As part of the refactor, we made several changes to the internal `AbstractTimeSeriesModel` class. If you maintain a custom model implementation, you will likely need to update it. Please refer to the custom forecasting model tutorial for details.
No action is needed from users who rely solely on the public API of the `timeseries` module (`TimeSeriesPredictor` and `TimeSeriesDataFrame`).
New Features
New tutorial on adding custom forecasting models by @shchur in #4749
Add `cutoff` support in `evaluate` and `leaderboard` by @abdulfatir in #5078
Add `horizon_weight` support for `TimeSeriesPredictor` by @shchur in #5084
Add `make_future_data_frame` method to TimeSeriesPredictor (see the sketch after this list) by @shchur in #5051
Refactor ensemble base class and add new ensembles by @canerturkmen in #5062
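A minimal sketch of how these additions might be used together (dataset, column values, and the exact placement of `horizon_weight` are illustrative assumptions; see the API reference for the authoritative interface):

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(
    prediction_length=24,
    # Assumed usage: weight errors on the early part of the horizon more heavily.
    horizon_weight=[2.0] * 12 + [1.0] * 12,
).fit(train_data)

# Evaluate on an earlier window by cutting each series 48 steps before its end.
scores = predictor.evaluate(train_data, cutoff=-48)
leaderboard = predictor.leaderboard(train_data, cutoff=-48)

# Timestamps that the next predict() call will forecast.
future = predictor.make_future_data_frame(train_data)
```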
Code Quality
Add static type checking for the `timeseries` module by @canerturkmen in #4712 #4788 #4801 #4821 #4969 #5086 #5085
Refactor the `AbstractTimeSeriesModel` class by @canerturkmen in #4868 #4909 #4946 #4958 #5008 #5038
Improvements to the unit tests by @canerturkmen in #4773 #4828 #4877 #4872 #4884 #4888
Fixes and Improvements
Allow using custom `distr_output` with the TFT model by @shchur in #4899
Update version ranges for `statsforecast` & `coreforecast` by @shchur in #4745
Fix feature importance calculation for models that use a `covariate_regressor` by @canerturkmen in #4845
Fix hyperparameter tuning for Chronos and other models by @abdulfatir @shchur in #4838 #5075 #5079
Fix frequency inference for `TimeSeriesDataFrame` by @abdulfatir @shchur in #4834 #5066
Update docs for custom `distr_output` by @Killer3048 in #5068
Raise informative error message if invalid model name is provided by @shchur in #5004
Gracefully handle corrupted cached predictions by @shchur in #5005
Chronos-Bolt: Fix scaling that affects constant series by @abdulfatir in #5013
Fix deprecated `evaluation_strategy` kwarg in `transformers` by @abdulfatir in #5019
Fix `time_limit` when `val_data` is provided (#5046) by @shchur in #5059
Rename covariate metadata by @canerturkmen in #5064
Fix NaT timestamp values during resampling by @shchur in #5080
Fix typing compatibility for py39 by @suzhoum @shchur in #5094 #5097
Warn if an S3 path is provided to the `TimeSeriesPredictor` by @shchur in #5091
Multimodal
New Features
AutoGluon’s MultiModal module has been enhanced with a comprehensive “Bag of Tricks” update that significantly improves performance when working with combined image, text, and tabular data through advanced fusion techniques, data augmentation, and an integrated ensemble learner, now accessible via a simple `use_ensemble=True` parameter after following the instructions here to download the checkpoints.
[AutoMM] Bag of Tricks by @zhiqiangdon in #4737
Documentation
[Tutorial] categorical convert_to_text default value by @cheungdaven in #4699
[AutoMM] Fix and Update Object Detection Tutorials by @FANGAreNotGnu in #4889
Fixes and Improvements
Update s3 path to public URL for AutoMM unit tests by @suzhoum in #4809
Fix object detection tutorial and default behavior of predict by @FANGAreNotGnu in #4865
Fix NLTK tagger path in download function by @k-ken-t4g in #4982
Fix AutoMM model saving logic by capping transformer range by @tonyhoo in #5007
fix: account for distributed training in learning rate schedule by @tonyhoo in #5003
Special Thanks
Zhiqiang Tang for implementing “Bag of Tricks” for AutoGluon’s MultiModal, which significantly enhances the multimodal performance.
Caner Turkmen for leading the efforts on refactoring and improving the internal logic in the `timeseries` module.
Celestino for providing numerous bug reports, suggestions, and code cleanup as a new contributor.
Contributors
Full Contributor List (ordered by # of commits):
@Innixma @shchur @canerturkmen @tonyhoo @abdulfatir @celestinoxp @suzhoum @FANGAreNotGnu @prateekdesai04 @zhiqiangdon @cheungdaven @LennartPurucker @abhishek-iitmadras @zkalson @nathanaelbosch @Killer3048 @FireballDWF @timostrunk @everdark @kbulygin @PGijsbers @k-ken-t4g
New Contributors
@celestinoxp made their first contribution in #4796
@PGijsbers made their first contribution in #4891
@k-ken-t4g made their first contribution in #4982
@FireballDWF made their first contribution in #5000
@Killer3048 made their first contribution in #5068
Version 1.2.0
We’re happy to announce the AutoGluon 1.2.0 release.
AutoGluon 1.2 contains massive improvements to both Tabular and TimeSeries modules, each achieving a 70% win-rate vs AutoGluon 1.1. This release additionally adds support for Python 3.12 and drops support for Python 3.8.
This release contains 186 commits from 19 contributors! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v1.1.1...v1.2.0
Join the community:
Get the latest updates:
Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.2.
For Tabular, we bundle the primary enhancements, the new TabPFNMix tabular foundation model and the parallel fit strategy, into the new `"experimental_quality"` preset to ensure a smooth transition period for those who wish to try the new cutting-edge features. We will be using this release to gather feedback prior to incorporating these features into the other presets. We also introduce a new stack layer model pruning technique that results in a 3x inference speedup on small datasets with zero performance loss, and greatly improved post-hoc calibration across the board, particularly on small datasets.
For TimeSeries, we introduce Chronos-Bolt, our latest foundation model integrated into AutoGluon, with massive improvements to both accuracy and inference speed compared to Chronos, along with fine-tuning capabilities. We also added covariate regressor support!
We are also excited to announce AutoGluon-Assistant (AG-A), our first venture into the realm of Automated Data Science.
See more details in the Spotlights below!
Spotlight
AutoGluon Becomes the Golden Standard for Competition ML in 2024
Before diving into the new features of 1.2, we would like to start by highlighting the wide-spread adoption AutoGluon has received on competition ML sites like Kaggle in 2024. Across all of 2024, AutoGluon was used to achieve a top 3 finish in 15 out of 18 tabular Kaggle competitions, including 7 first place finishes, and was never outside the top 1% of private leaderboard placements, with an average of over 1000 competing human teams in each competition. In the $75,000 prize money 2024 Kaggle AutoML Grand Prix, AutoGluon was used by the 1st, 2nd, and 3rd place teams, with the 2nd place team led by two AutoGluon developers: Lennart Purucker and Nick Erickson! For comparison, in 2023 AutoGluon achieved only 1 first place and 1 second place solution. We attribute the bulk of this increase to the improvements seen in AutoGluon 1.0 and beyond.

We’d like to emphasize that these results are achieved via human expert interaction with AutoGluon and other tools, and often includes manual feature engineering and hyperparameter tuning to get the most out of AutoGluon. To see a live tracking of all AutoGluon solution placements on Kaggle, refer to our AWESOME.md ML competition section where we provide links to all solution write-ups.
AutoGluon-Assistant: Automating Data Science with AutoGluon and LLMs
We are excited to share the release of a new AutoGluon-Assistant module (AG-A), powered by LLMs from AWS Bedrock or OpenAI. AutoGluon-Assistant empowers users to solve tabular machine learning problems using only natural language descriptions, in zero lines of code with our simple user interface. Fully autonomous AG-A outperforms 74% of human ML practitioners in Kaggle competitions and secured a live top 10 finish in the $75,000 prize money 2024 Kaggle AutoML Grand Prix competition as Team AGA 🤖!
TabularPredictor presets="experimental_quality"
TabularPredictor has a new `"experimental_quality"` preset that offers even better predictive quality than `"best_quality"`. On the AutoMLBenchmark, we observe a 70% winrate vs `best_quality` when running for 4 hours on a 64 CPU machine. This preset is a testing ground for cutting-edge features and models which we hope to incorporate into `best_quality` in future releases. We recommend using a machine with at least 16 CPU cores, 64 GB of memory, and a 4+ hour `time_limit` to get the most benefit out of `experimental_quality`. Please let us know via a GitHub issue if you run into any problems running the `experimental_quality` preset.
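A minimal sketch of opting in (label column name and time limit are illustrative):

```python
from autogluon.tabular import TabularPredictor

# experimental_quality works best with >=16 CPU cores, >=64 GB of memory,
# and a generous time budget.
predictor = TabularPredictor(label="class").fit(
    train_data,
    presets="experimental_quality",
    time_limit=4 * 3600,  # 4 hours, in seconds
)
```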
TabPFNMix: A Foundation Model for Tabular Data
TabPFNMix is the first tabular foundation model created by the AutoGluon team, and was pre-trained exclusively on synthetic data. The model builds upon the prior work of TabPFN and TabForestPFN. To the best of our knowledge, TabPFNMix achieves a new state of the art for individual open-source model performance on datasets between 1000 and 10000 samples, and it also supports regression tasks! Across the 109 classification datasets with less than or equal to 10000 training samples in TabRepo, fine-tuned TabPFNMix outperforms all prior models, with a 64% win-rate vs the strongest tree model, CatBoost, and a 61% win-rate vs fine-tuned TabForestPFN.
The model is available via the `TABPFNMIX` hyperparameters key, and is used in the new `experimental_quality` preset. We recommend using this model for datasets smaller than 50,000 training samples, ideally with a large time limit and 64+ GB of memory. This work is still in the early stages, and we appreciate any feedback from the community to help us iterate and improve for future releases. You can learn more by going to our HuggingFace model page for the model (tabpfn-mix-1.0-classifier, tabpfn-mix-1.0-regressor). Give us a like on HuggingFace if you want to see more! A paper is planned to provide more details about the model in the future.
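A minimal sketch of requesting TabPFNMix directly via its hyperparameters key (an empty config dict is assumed to request the default hyperparameters):

```python
from autogluon.tabular import TabularPredictor

# Train only TabPFNMix; best suited to datasets well under 50,000 rows.
predictor = TabularPredictor(label="class").fit(
    train_data,
    hyperparameters={"TABPFNMIX": [{}]},
)
```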
fit_strategy="parallel"
AutoGluon’s TabularPredictor now supports the new fit argument `fit_strategy` and the new `"parallel"` option, enabled by default in the new `experimental_quality` preset. For machines with 16 or more CPU cores, the parallel fit strategy offers a major speedup over the previous `"sequential"` strategy. We estimate that with 64 CPU cores most datasets will experience a 2-4x speedup, with the speedup growing as CPU cores increase.
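A minimal sketch of opting in outside the preset:

```python
from autogluon.tabular import TabularPredictor

# On machines with 16+ CPU cores, fit models in parallel rather than
# one after another (the previous "sequential" behavior).
predictor = TabularPredictor(label="class").fit(
    train_data,
    fit_strategy="parallel",
)
```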
Chronos-Bolt⚡: a 250x faster, more accurate Chronos model
Chronos-Bolt is our latest foundation model for forecasting that has been integrated into AutoGluon. It is based on the T5 encoder-decoder architecture and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size.
The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.
Chronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the Weighted Quantile Loss (WQL) and the Mean Absolute Scaled Error (MASE), respectively, aggregated over 27 datasets (see the Chronos paper for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical models and deep learning models that have been trained on these datasets (highlighted by *). Furthermore, they also perform better than other FMs, denoted by a +, which indicates that these models were pretrained on certain datasets in our benchmark and are not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in terms of the forecasting accuracy while being over 600 times faster.
Chronos-Bolt models are now available through AutoGluon in four sizes—Tiny (9M), Mini (21M), Small (48M), and Base (205M)—and can also be used on the CPU. With the addition of Chronos-Bolt models and other enhancements, AutoGluon v1.2 achieves a 70%+ win rate against the previous release!
In addition to the new Chronos-Bolt models, we have also added support for effortless fine-tuning of Chronos and Chronos-Bolt models. Check out the updated Chronos tutorial to learn how to use and fine-tune Chronos-Bolt models.
Time Series Covariate Regressors
We have added support for covariate regressors for all forecasting models. Covariate regressors are tabular regression models that can be combined with univariate forecasting models to incorporate exogenous information. These are particularly useful for foundation models like Chronos-Bolt, which rely solely on the target time series’ historical data and cannot directly use exogenous information (such as holidays or promotions). To improve the predictions of univariate models when covariates are available, a covariate regressor is first fit on the known covariates and static features to predict the target column at each time step. The predictions of the covariate regressor are then subtracted from the target column, and the univariate model then forecasts the residuals. The Chronos tutorial showcases how covariate regressors can be used with Chronos-Bolt.
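A sketch of pairing a covariate regressor with Chronos-Bolt, assuming `train_data` contains the listed known covariates (column names and model options are illustrative; the Chronos tutorial documents the exact interface):

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(
    prediction_length=48,
    known_covariates_names=["promotion", "holiday"],  # illustrative columns
).fit(
    train_data,
    hyperparameters={
        "Chronos": {
            "model_path": "bolt_small",
            # A tabular (CatBoost) regressor is fit on the covariates;
            # Chronos-Bolt then forecasts the residuals.
            "covariate_regressor": "CAT",
            "target_scaler": "standard",
        }
    },
)
```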
General
Improvements
Update `full_install.sh` to install AutoGluon in parallel and to use `uv`, resulting in much faster source installation times. @Innixma (#4582, #4587, #4592)
Dependencies
Python 3.8 support dropped. @prateekdesai04 (#4512)
Update ray to `>=2.10.0,<2.40`. @suzhoum, @Innixma (#4302, #4688)
Update scikit-learn to `>=1.4.0,<1.5.3`. @prateekdesai04 (#4420, #4570)
Update pyarrow to `>=15.0.0`. @prateekdesai04 (#4520)
Update psutil to `>=5.7.3,<7.0.0`. @prateekdesai04 (#4570)
Update Pillow to `>=10.0.1,<12`. @prateekdesai04 (#4570)
Update xgboost to `>=1.6,<2.2`. @prateekdesai04 (#4570)
Update timm to `>=0.9.5,<1.0.7`. @prateekdesai04 (#4580)
Update accelerate to `>=0.34.0,<1.0`. @cheungdaven @tonyhoo @shchur (#4596, #4612, #4676)
Update scikit-learn-intelex to `>=2024.0,<2025.1`. @Innixma (#4688)
Documentation
Update install instructions to use proper torch and ray versions. @Innixma (#4581)
Add SECURITY.md for vulnerability reporting. @tonyhoo (#4298)
Fixes and Improvements
Speed up DropDuplicatesFeatureGenerator fit time by 2x+. @shchur (#4543)
Add `compute_metric` as a replacement for `compute_weighted_metric` with improved compatibility across the project. @Innixma (#4631)
Tabular
New Features
Add TabPFNMix model. Try it out with `presets="experimental"`. @xiyuanzh @Innixma (#4671, #4694)
Parallel model fit support. Try it out with `fit_strategy="parallel"`. @LennartPurucker @Innixma (#4606)
Learning curve generation feature. @adibiasio @Innixma (#4411, #4635)
Set `calibrate_decision_threshold="auto"` by default, and improve decision threshold calibration. This dramatically improves results when the eval_metric is `f1` or `balanced_accuracy` for binary classification. @Innixma (#4632)
Add support for custom memory (soft) limits. @LennartPurucker (#4333)
Add `ag.compile` hyperparameter to models to enable compiling at fit time rather than with `predictor.compile`. @Innixma (#4354)
Add AdaptiveES support to NN_TORCH and increase max_epochs from 500 to 1000, enabled by default. @Innixma (#4436)
Add support for controlling repeated cross-validation behavior via the `delay_bag_sets` fit argument. Set default to False (previously True). @LennartPurucker (#4552)
Make `positive_class` an init argument of TabularPredictor (see the sketch after this list). @Innixma (#4445)
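A minimal sketch combining the new `positive_class` init argument with automatic threshold calibration (column and label values are illustrative):

```python
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(
    label="churned",       # illustrative label column
    eval_metric="f1",
    positive_class="yes",  # declare which label value is the positive class
).fit(
    train_data,
    calibrate_decision_threshold="auto",  # now the default
)
```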
Documentation
Added a tutorial with a deep dive on how AutoGluon works. @rey-allan (#4284)
Fixes and Improvements
(Major) Fix stacker max_models logic for a 3x inference speedup. @Innixma (#4290)
(Major) Speed up EnsembleSelection fitting speed by 2x+. @nathanaelbosch (#4367)
(Major) Dramatically improve temperature scaling performance by using the best iteration instead of the last iteration’s temperature. @LennartPurucker (#4396)
(Major) Automatically skip temperature scaling if negative temperature is found. @Innixma (#4397)
(Major) Fix `roc_auc` metric to use `macro` for multiclass instead of `weighted`. @LennartPurucker (#4407)
(Major) Ensure `refit_full` respects user-specified `num_cpus` and `num_gpus`. @Innixma (#4495)
(Major) Refactor TabularDataset. Now TabularDataset will always return a pandas DataFrame object when initialized, to simplify various documentation and improve IDE debugging visualization compatibility. @Innixma (#4613)
Fix bug where validation data was not used in HPO mode when no search space was provided for the model. @echowve (#4667)
Set `num_bag_sets=1` by default, to avoid `num_bag_sets>1` being used if the user doesn’t use a preset and sets `num_bag_folds>=2`. @Innixma (#4446)
Fix FASTAI crash when a column contains only a single unique value + NaNs. @Innixma (#4584)
Fix torch seed accidentally being updated on model.score calls in NN_TORCH. @adibiasio (#4391)
Fix LightGBM predict_proba quantile output dtype. @Innixma (#4272)
Fix incorrect return type for `predict_multi` for regression. @Innixma (#4450)
Improved error messages when given invalid hyperparameters. @Innixma (#4258)
Improved user-specified `num_cpus` and `num_gpus` sanity checking. @Innixma (#4277)
Add readable error message for invalid models in `predictor.persist` calls. @Innixma (#4285)
Add toggle `raise_on_no_models_fitted` to control if AutoGluon errors when no models are fit. @LennartPurucker (#4389)
Make `raise_on_no_models_fitted=True` by default. Was False in previous release. @Innixma (#4400)
Add `valid_stacker` and `use_orig_features` model options. @Innixma (#4444)
Improve reliability of `predictor.predict_proba_multi` in edge-case scenarios. @Innixma (#4527)
Fix edge-case crash during label column handling if it is a pandas category dtype with 0 instances of a category. @Innixma (#4583)
Enable aarch64 platform build. @abhishek-iitmadras (#4663)
Minor fixes. @Innixma @LennartPurucker @shchur @rsj123 (#4224, #4317, #4335, #4352, #4353, #4379, #4384, #4474, #4485, #4675, #4682, #4700)
Minor unit tests, documentation, and cleanup. @Innixma @abhishek-iitmadras (#4398, #4399, #4402, #4498, #4546, #4547, #4549, #4687, #4690, #4692)
TimeSeries
New Features
Add fine-tuning support for Chronos and Chronos-Bolt models @abdulfatir (#4608, #4645, #4653, #4655, #4659, #4661, #4673, #4677)
Add Chronos-Bolt @canerturkmen (#4625)
`TimeSeriesPredictor.leaderboard` now can compute extra metrics and return hyperparameters for each model @shchur (#4481)
Add `target_scaler` support for all forecasting models @shchur (#4460, #4644)
Add `covariate_regressor` support for all forecasting models @shchur (#4566, #4641)
Add method to convert a TimeSeriesDataFrame to a regular pd.DataFrame @shchur (#4415)
[experimental] Add the weighted cumulative error forecasting metric @shchur (#4594)
[experimental] Allow custom ensemble model types for time series @shchur (#4662)
Fixes and Improvements
Update presets @canerturkmen @shchur (#4656, #4658, #4666, #4672)
Unify all Croston models into a single class @shchur (#4564)
Bump `statsforecast` version to 1.7 @canerturkmen @shchur (#4194, #4357)
Fix deep learning models failing if item_ids have StringDtype @rsj123 (#4539)
Update logic for inferring the time series frequency @shchur (#4540)
Speed up and reduce memory usage of the `TimeSeriesFeatureGenerator` preprocessing logic @shchur (#4557)
Refactor GluonTS default parameter handling, update TiDE parameters @canerturkmen (#4640)
Move covariate scaling logic into a separate class @shchur (#4634)
Prune timeseries unit and smoke tests @canerturkmen (#4650)
Minor fixes @abdulfatir @canerturkmen @shchur (#4259, #4299, #4395, #4386, #4409, #4533, #4565, #4633, #4647)
Multimodal
Fixes and Improvements
Fix Missing Validation Metric While Resuming A Model Failed At Checkpoint Fusing Stage by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4449
Add coco_root for better support for custom dataset in COCO format. by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/3809
Add COCO Format Saving Support and Update Object Detection I/O Handling by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/3811
Skip MMDet Config Files While Checking with bandit by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4630
Fix Logloss Bug and Refine Compute Score Logics by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4629
Fix Index Typo in Tutorial by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4642
Fix Proba Metrics for Multiclass by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4643
Support torch 2.4 by @tonyhoo in https://github.com/autogluon/autogluon/pull/4360
Add Installation Guide for Object Detection in Tutorial by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4430
Add Bandit Warning Mitigation for Internal `torch.save` and `torch.load` Usage by @tonyhoo in https://github.com/autogluon/autogluon/pull/4502
Update accelerate version range by @cheungdaven in https://github.com/autogluon/autogluon/pull/4596
Bound nltk version to avoid verbose logging issue by @tonyhoo in https://github.com/autogluon/autogluon/pull/4604
Upgrade TIMM by @prateekdesai04 in https://github.com/autogluon/autogluon/pull/4580
Key dependency updates in _setup_utils.py for v1.2 release by @tonyhoo in https://github.com/autogluon/autogluon/pull/4612
Configurable Number of Checkpoints to Keep per HPO Trial by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4615
Refactor Metrics for Each Problem Type by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4616
Fix Torch Version and Colab Installation for Object Detection by @FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4447
Special Thanks
Xiyuan Zhang for leading the development of TabPFNMix!
The TabPFN authors Noah Hollmann, Samuel Muller, Katharina Eggensperger, and Frank Hutter for unlocking the power of foundation models for tabular data, and the TabForestPFN authors Felix den Breejen, Sangmin Bae, Stephen Cha, and Se-Young Yun for extending the idea to a more generic representation. Our TabPFNMix work builds upon the shoulders of giants.
Lennart Purucker for leading development of the parallel model fit functionality and pushing AutoGluon to its limits in the 2024 Kaggle AutoML Grand Prix.
Robert Hatch, Tilii, Optimistix, Mart Preusse, Ravi Ramakrishnan, Samvel Kocharyan, Kirderf, Carl McBride Ellis, Konstantin Dmitriev, and others for their insightful discussions and for championing AutoGluon on Kaggle!
Eddie Bergman for his insightful surprise code review of the tabular callback support feature.
Contributors
Full Contributor List (ordered by # of commits):
@Innixma @shchur @prateekdesai04 @tonyhoo @FANGAreNotGnu @suzhoum @abdulfatir @canerturkmen @LennartPurucker @abhishek-iitmadras @adibiasio @rsj123 @nathanaelbosch @cheungdaven @lostella @zkalson @rey-allan @echowve @xiyuanzh
New Contributors
@nathanaelbosch made their first contribution in https://github.com/autogluon/autogluon/pull/4366
@adibiasio made their first contribution in https://github.com/autogluon/autogluon/pull/4391
@abdulfatir made their first contribution in https://github.com/autogluon/autogluon/pull/4608
@echowve made their first contribution in https://github.com/autogluon/autogluon/pull/4667
@abhishek-iitmadras made their first contribution in https://github.com/autogluon/autogluon/pull/4685
@xiyuanzh made their first contribution in https://github.com/autogluon/autogluon/pull/4694
Version 1.1.1
We’re happy to announce the AutoGluon 1.1.1 release.
AutoGluon 1.1.1 contains bug fixes and logging improvements for Tabular, TimeSeries, and Multimodal modules, as well as support for PyTorch 2.2 and 2.3.
Join the community:
Get the latest updates:
This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.1.
This release contains 52 commits from 10 contributors!
General
Add support for PyTorch 2.2. @prateekdesai04 (#4123)
Tabular
Note: Trying to load a TabularPredictor with a FastAI model trained on a previous AutoGluon release will raise an exception when calling `predict` due to a fix in the `model-internals.pkl` path. Please ensure matching versions.
Fix deadlock when `num_gpus>0` and dynamic_stacking is enabled. @Innixma (#4208)
Improve decision threshold calibration. @Innixma (#4136, #4137)
Fix regression metrics (other than RMSE and MSE) being calculated incorrectly for LightGBM early stopping. @Innixma (#4174)
Fix custom multiclass metrics being calculated incorrectly for LightGBM early stopping. @Innixma (#4250)
Fix HPO crashing with NN_TORCH and FASTAI models. @Innixma (#4232)
Disable sklearnex for linear models due to observed performance degradation. @Innixma (#4223)
Improve sklearnex logging verbosity in Kaggle. @Innixma (#4216)
Add AsTypeFeatureGenerator detailed exception logging. @Innixma (#4251, #4252)
TimeSeries
Ensure prediction_length is stored as an integer. @shchur (#4160)
Fix tabular model preprocessing failure edge-case. @shchur (#4175)
Fix failure when loading Tabular models if the predictor was moved to a different directory. @shchur (#4171)
Fix cached predictions error when a predictor is saved on top of an existing predictor. @shchur (#4202)
Fix off-by-one bug in Chronos inference. @canerturkmen (#4205)
Use correct target and quantile_levels in fallback model for MLForecast. @shchur (#4230)
Multimodal
Fix bug in CLIP’s image feature normalization. @Harry-zzh (#4114)
Fix bug in text augmentation. @Harry-zzh (#4115)
Modify default fine-tuning tricks. @Harry-zzh (#4166)
Add PyTorch version warning for object detection. @FANGAreNotGnu (#4217)
Docs and CI
Add competition solutions to `AWESOME.md`. @Innixma @shchur (#4122, #4163, #4245)
Fix PDF classification tutorial. @zhiqiangdon (#4127)
Add AutoMM paper citation. @zhiqiangdon (#4154)
Add pickle load warning in all modules and tutorials. @shchur (#4243)
Various minor doc and test fixes and improvements. @tonyhoo @shchur @lovvge @Innixma @suzhoum (#4113, #4176, #4225, #4233, #4235, #4249, #4266)
Contributors
Full Contributor List (ordered by # of commits):
@Innixma @shchur @Harry-zzh @suzhoum @zhiqiangdon @lovvge @rey-allan @prateekdesai04 @canerturkmen @FANGAreNotGnu
New Contributors
@lovvge made their first contribution in https://github.com/autogluon/autogluon/commit/57a15fcfbbbc94514ff20ed2774cd447d9f4115f
@rey-allan made their first contribution in #4145
Version 1.1.0
We’re happy to announce the AutoGluon 1.1 release.
AutoGluon 1.1 contains major improvements to the TimeSeries module, achieving a 60% win-rate vs AutoGluon 1.0 through the addition of Chronos, a pretrained model for time series forecasting, along with numerous other enhancements. The other modules have also been enhanced through new features such as Conv-LoRA support and improved performance for large tabular datasets between 5 and 30 GB in size. For a full breakdown of AutoGluon 1.1 features, please refer to the feature spotlights and the itemized enhancements below.
Join the community:
Get the latest updates:
This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.
This release contains 121 commits from 20 contributors!
Full Contributor List (ordered by # of commits):
@shchur @prateekdesai04 @Innixma @canerturkmen @zhiqiangdon @tonyhoo @AnirudhDagar @Harry-zzh @suzhoum @FANGAreNotGnu @nimasteryang @lostella @dassaswat @afmkt @npepin-hub @mglowacki100 @ddelange @LennartPurucker @taoyang1122 @gradientsky
Special thanks to @ddelange for their continued assistance with Python 3.11 support and Ray version upgrades!
Spotlight
AutoGluon Achieves Top Placements in ML Competitions!
AutoGluon has experienced wide-spread adoption on Kaggle since the AutoGluon 1.0 release. AutoGluon has been used in over 130 Kaggle notebooks and mentioned in over 100 discussion threads in the past 90 days! Most excitingly, AutoGluon has already been used to achieve top ranking placements in multiple competitions with thousands of competitors since the start of 2024:
| Placement | Competition | Author | Date | AutoGluon Details | Notes |
|---|---|---|---|---|---|
| :3rd_place_medal: Rank 3/2303 (Top 0.1%) | Kaggle Playground Series S4E3 |  | 2024/03/31 | v1.0, Tabular |  |
| :2nd_place_medal: Rank 2/93 (Top 2%) |  |  | 2024/03/21 | v1.0, Tabular |  |
| :2nd_place_medal: Rank 2/1542 (Top 0.1%) |  |  | 2024/03/01 | v1.0, Tabular |  |
| :2nd_place_medal: Rank 2/3746 (Top 0.1%) | Kaggle Playground Series S4E2 |  | 2024/02/29 | v1.0, Tabular |  |
| :2nd_place_medal: Rank 2/3777 (Top 0.1%) | Kaggle Playground Series S4E1 |  | 2024/01/31 | v1.0, Tabular |  |
| Rank 4/1718 (Top 0.2%) | Kaggle Playground Series S3E26 |  | 2024/01/01 | v1.0, Tabular |  |
We are thrilled that the data science community is leveraging AutoGluon as their go-to method to quickly and effectively achieve top-ranking ML solutions! For an up-to-date list of competition solutions using AutoGluon refer to our AWESOME.md, and don’t hesitate to let us know if you used AutoGluon in a competition!
Chronos, a pretrained model for time series forecasting
AutoGluon-TimeSeries now features Chronos, a family of forecasting models pretrained on large collections of open-source time series datasets that can generate accurate zero-shot predictions for new unseen data. Check out the new tutorial to learn how to use Chronos through the familiar `TimeSeriesPredictor` API.
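A minimal zero-shot sketch, assuming the model path conventions from the Chronos tutorial:

```python
from autogluon.timeseries import TimeSeriesPredictor

# Zero-shot forecasting: the pretrained Chronos model is not trained on
# your data; it directly forecasts the unseen series.
predictor = TimeSeriesPredictor(prediction_length=48).fit(
    train_data,
    hyperparameters={"Chronos": {"model_path": "amazon/chronos-t5-small"}},
)
predictions = predictor.predict(train_data)
```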
General
Refactor project README & project Tagline @Innixma (#3861, #4066)
Add AWESOME.md competition results and other doc improvements. @Innixma (#4023)
PyTorch, CUDA, Lightning version upgrades. @prateekdesai04 @canerturkmen @zhiqiangdon (#3982, #3984, #3991, #4006)
Scikit-learn version upgrade. @prateekdesai04 (#3872, #3881, #3947)
Various dependency upgrades. @Innixma @tonyhoo (#4024, #4083)
TimeSeries
Highlights
AutoGluon 1.1 comes with numerous new features and improvements to the time series module. These include highly requested functionality such as feature importance, support for categorical covariates, ability to visualize forecasts, and enhancements to logging. The new release also comes with considerable improvements to forecast accuracy, achieving 60% win rate and 3% average error reduction compared to the previous AutoGluon version. These improvements are mostly attributed to the addition of Chronos, improved preprocessing logic, and native handling of missing values.
New Features
Add Chronos pretrained forecasting model (tutorial). @canerturkmen @shchur @lostella (#3978, #4013, #4052, #4055, #4056, #4061, #4092, #4098)
Measure the importance of features & covariates on the forecast accuracy with `TimeSeriesPredictor.feature_importance()`. @canerturkmen (#4033, #4087)
Native missing values support (no imputation required). @shchur (#3995, #4068, #4091)
Add support for categorical covariates. @shchur (#3874, #4037)
Improve inference speed by persisting models in memory with `TimeSeriesPredictor.persist()`. @canerturkmen (#4005)
Visualize forecasts with `TimeSeriesPredictor.plot()` (see the sketch after this list). @shchur (#3889)
Add `RMSLE` evaluation metric. @canerturkmen (#3938)
Enable logging to file. @canerturkmen (#3877)
Add option to keep lightning logs after training with the `keep_lightning_logs` hyperparameter. @shchur (#3937)
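A minimal sketch exercising several of the new conveniences, assuming a fit `predictor` and the same `train_data`:

```python
predictor.persist()                          # keep models in memory for faster inference
importance = predictor.feature_importance()  # importance of features & covariates
predictions = predictor.predict(train_data)
predictor.plot(train_data, predictions)      # visualize the forecasts
```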
Fixes and Improvements
Automatically preprocess real-valued covariates @shchur (#4042, #4069)
Add option to skip model selection when only one model is trained. @shchur (#4002)
Ensure all metrics handle missing values in target @shchur (#3966)
Fix bug when loading a GPU trained model on a CPU machine @shchur (#3979)
Fix inconsistent random seed. @canerturkmen @shchur (#3934, #4099)
Fix leaderboard crash when no models trained. @shchur (#3849)
Add prototype TabRepo simulation artifact generation. @shchur (#3829)
Documentation improvements, hide deprecated methods. @shchur (#3764, #4054, #4098)
Minor fixes. @canerturkmen, @shchur, @AnirudhDagar (#4009, #4040, #4041, #4051, #4070, #4094)
AutoMM
Highlights
AutoMM 1.1 introduces the innovative Conv-LoRA, a parameter-efficient fine-tuning (PEFT) method stemming from our latest paper presented at ICLR 2024, titled “Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model”. Conv-LoRA is designed for fine-tuning the Segment Anything Model, exhibiting superior performance compared to previous PEFT approaches, such as LoRA and visual prompt tuning, across various semantic segmentation tasks in diverse domains including natural images, agriculture, remote sensing, and healthcare. Check out our Conv-LoRA example.
New Features
Added Conv-LoRA, a new parameter efficient fine-tuning method. @Harry-zzh @zhiqiangdon (#3933, #3999, #4007, #4022, #4025)
Added support for new column type: ‘image_base64_str’. @Harry-zzh @zhiqiangdon (#3867)
Added support for loading pre-trained weights in FT-Transformer. @taoyang1122 @zhiqiangdon (#3859)
Fixes and Improvements
Fixed bugs in semantic segmentation. @Harry-zzh (#3801, #3812)
Fixed bugs in PEFT methods. @Harry-zzh (#3840)
Accelerated object detection training by ~30% for the high_quality and best_quality presets. @FANGAreNotGnu (#3970)
Deprecated Grounding-DINO @FANGAreNotGnu (#3974)
Fixed lightning upgrade issues @zhiqiangdon (#3991)
Fixed using f1, f1_macro, f1_micro for binary classification in knowledge distillation. @nimasteryang (#3837)
Removed PyMuPDF from installation due to its license. Users need to install it themselves to do document classification. @zhiqiangdon (#4093)
Tabular
Highlights
AutoGluon-Tabular 1.1 primarily focuses on bug fixes and stability improvements. In particular, we have greatly improved the runtime performance for large datasets between 5 and 30 GB in size through the usage of subsampling for decision threshold calibration and for weighted ensemble fitting (to at most 1 million rows), maintaining the same quality while being far faster to execute. We also adjusted the default weighted ensemble iterations from 100 to 25, which will speed up all weighted ensemble fit times by 4x. We heavily refactored the `fit_pseudolabel` logic, and it should now achieve noticeably stronger results.
Fixes and Improvements
Fix return value in `predictor.fit_weighted_ensemble(refit_full=True)`. @Innixma (#1956)
Enhance performance on large datasets through subsampling. @Innixma (#3977)
Refactor and enhance `.fit_pseudolabel` logic. @Innixma (#3930)
Fix crash in memory check during HPO for LightGBM, CatBoost, and XGBoost. @Innixma (#3931)
LightGBM version upgrade. @mglowacki100, @Innixma (#3427)
Fix memory-safe sub-fits being skipped if Ray is not initialized. @LennartPurucker (#3868)
Logging improvements. @AnirudhDagar (#3873)
Documentation improvements. @Innixma @AnirudhDagar (#2024, #3975, #3976, #3996)
Docs and CI
Add auto benchmarking report generation. @prateekdesai04 (#4038, #4039)
Fix hanging tabular unit tests. @prateekdesai04 (#4031)
Add package version comparison between CI runs @prateekdesai04 (#3962, #3968, #3972)
Update conf.py to reflect current year. @dassaswat (#3932)
Avoid redundant unit test runs. @prateekdesai04 (#3942)
Fix colab notebook links @prateekdesai04 (#3926)
Version 1.0.0
Today is finally the day… AutoGluon 1.0 has arrived!! After over four years of development and 2061 commits from 111 contributors, we are excited to share with you the culmination of our efforts to create and democratize the most powerful, easy to use, and feature rich automated machine learning system in the world. AutoGluon 1.0 comes with transformative enhancements to predictive quality resulting from the combination of multiple novel ensembling innovations, spotlighted below. Besides performance enhancements, many other improvements have been made that are detailed in the individual module sections.
Note: Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.0.
This release supports Python versions 3.8, 3.9, 3.10, and 3.11.
This release contains 223 commits from 17 contributors!
Full Contributor List (ordered by # of commits):
@shchur, @zhiqiangdon, @Innixma, @prateekdesai04, @FANGAreNotGnu, @yinweisu, @taoyang1122, @LennartPurucker, @Harry-zzh, @AnirudhDagar, @jaheba, @gradientsky, @melopeo, @ddelange, @tonyhoo, @canerturkmen, @suzhoum
Join the community:
Get the latest updates:
Spotlight
Tabular Performance Enhancements
AutoGluon 1.0 features major enhancements to predictive quality, establishing a new state-of-the-art in Tabular modeling. To the best of our knowledge, AutoGluon 1.0 marks the largest leap forward in the state-of-the-art for tabular data since the original AutoGluon paper from March 2020. The enhancements come primarily from two features: Dynamic stacking to mitigate stacked overfitting, and a new learned model hyperparameters portfolio via Zeroshot-HPO, obtained from the newly released TabRepo ensemble simulation library. Together, they lead to a 75% win-rate compared to AutoGluon 0.8 with faster inference speed, lower disk usage, and higher stability.
AutoML Benchmark Results
OpenML released the official 2023 AutoML Benchmark results on November 16th, 2023. Their results show AutoGluon 0.8 as the state-of-the-art in AutoML systems across a wide variety of tasks: “Overall, in terms of model performance, AutoGluon consistently has the highest average rank in our benchmark.” We now showcase that AutoGluon 1.0 achieves far superior results even to AutoGluon 0.8!
Below is a comparison on the OpenML AutoML Benchmark across 1040 tasks. LightGBM, XGBoost, and CatBoost results were obtained via AutoGluon, and other methods are from the official AutoML Benchmark 2023 results. AutoGluon 1.0 has a 95%+ win-rate against traditional tabular models, including a 99% win-rate vs LightGBM and a 100% win-rate vs XGBoost. AutoGluon 1.0 has between an 82% and 94% win-rate against other AutoML systems. For all methods, AutoGluon is able to achieve >10% average loss improvement (Ex: Going from 90% accuracy to 91% accuracy is a 10% loss improvement). AutoGluon 1.0 achieves first place in 63% of tasks, with lightautoml having the second most at 12% (AutoGluon 0.8 previously took first place 48% of the time). AutoGluon 1.0 even achieves a 7.4% average loss improvement over AutoGluon 0.8!
| Method | AG Winrate | AG Loss Improvement | Rescaled Loss | Rank | Champion |
|---|---|---|---|---|---|
| AutoGluon 1.0 (Best, 4h8c) | - | - | 0.04 | 1.95 | 63% |
| lightautoml (2023, 4h8c) | 84% | 12.0% | 0.2 | 4.78 | 12% |
| H2OAutoML (2023, 4h8c) | 94% | 10.8% | 0.17 | 4.98 | 1% |
| FLAML (2023, 4h8c) | 86% | 16.7% | 0.23 | 5.29 | 5% |
| MLJAR (2023, 4h8c) | 82% | 23.0% | 0.33 | 5.53 | 6% |
| autosklearn (2023, 4h8c) | 91% | 12.5% | 0.22 | 6.07 | 4% |
| GAMA (2023, 4h8c) | 86% | 15.4% | 0.28 | 6.13 | 5% |
| CatBoost (2023, 4h8c) | 95% | 18.2% | 0.28 | 6.89 | 3% |
| TPOT (2023, 4h8c) | 91% | 23.1% | 0.4 | 8.15 | 1% |
| LightGBM (2023, 4h8c) | 99% | 23.6% | 0.4 | 8.95 | 0% |
| XGBoost (2023, 4h8c) | 100% | 24.1% | 0.43 | 9.5 | 0% |
| RandomForest (2023, 4h8c) | 97% | 25.1% | 0.53 | 9.78 | 1% |
Not only is AutoGluon more accurate in 1.0, it is also more stable thanks to our new usage of Ray subprocesses during low-memory training, resulting in 0 task failures on the AutoML Benchmark.
AutoGluon 1.0 is capable of achieving the fastest inference throughput of any AutoML system while still obtaining state-of-the-art results.
By specifying the `infer_limit` fit argument, users can trade off between accuracy and inference speed to meet their needs.
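A minimal sketch (preset name and limits are illustrative):

```python
from autogluon.tabular import TabularPredictor

# Keep only models whose end-to-end inference stays under 0.1 ms per row
# (>=10,000 rows/second), trading a little accuracy for throughput.
predictor = TabularPredictor(label="class").fit(
    train_data,
    presets="high_quality",
    infer_limit=0.0001,            # seconds per row
    infer_limit_batch_size=10000,  # batch size used when measuring speed
)
```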
As seen in the below plot, AutoGluon 1.0 sets the Pareto Frontier for quality and inference throughput, achieving Pareto Dominance compared to all other AutoML systems. AutoGluon 1.0 High achieves superior performance to AutoGluon 0.8 Best with 8x faster inference and 8x less disk usage!
You can get more details on the results here.
We are excited to see what our users can accomplish with AutoGluon 1.0’s enhanced performance. As always, we will continue to improve AutoGluon in future releases to push the boundaries of AutoML forward for all.
AutoGluon Multimodal (AutoMM) Highlights in One Figure
AutoMM Uniqueness
AutoGluon Multimodal (AutoMM) distinguishes itself from other open-source AutoML toolboxes like AutoSklearn, LightAutoML, H2OAutoML, FLAML, MLJAR, TPOT, and GAMA, which mainly focus on tabular data for classification or regression. AutoMM is designed for fine-tuning foundation models across multiple modalities—image, text, tabular, and document, either individually or combined. It offers extensive capabilities for tasks like classification, regression, object detection, named entity recognition, semantic matching, and image segmentation. In contrast, other AutoML systems generally have limited support for image or text, typically using a few pretrained models like EfficientNet or hand-crafted rules like bag-of-words as feature extractors. They often rely on traditional models or simple neural networks. AutoMM provides a uniquely comprehensive and versatile approach to AutoML, being the only AutoML system to support flexible multimodality and support for a wide range of tasks. A comparative table detailing support for various data modalities, tasks, and model types is provided below.
(Comparative table: for each system (LightAutoML, H2OAutoML, FLAML, MLJAR, AutoSklearn, GAMA, TPOT, AutoMM), support across data modalities (image, text, tabular, document, any combination), tasks (classification, regression, object detection, semantic matching, named entity recognition, image segmentation), and model types (traditional models, deep learning models, foundation models). AutoMM is the only system that supports every listed data modality and task.)
Special Thanks
We would like to conclude this spotlight by thanking Pieter Gijsbers, Sébastien Poirier, Erin LeDell, Joaquin Vanschoren, and the rest of the AutoML Benchmark authors for their key role in providing a shared and extensive benchmark to monitor the progress of the AutoML field. Their support has been invaluable to the AutoGluon project’s continued growth.
We would also like to thank Frank Hutter, who continues to be a leader within the AutoML field, for organizing the AutoML Conference in 2022 and 2023 to bring the community together to share ideas and align on a compelling vision.
Finally, we would like to thank Alex Smola and Mu Li for championing open source software at Amazon to make this project possible.
Additional Special Thanks
Special thanks to @LennartPurucker for leading development of dynamic stacking
Special thanks to @geoalgo for co-authoring TabRepo to enable Zeroshot-HPO
Special thanks to @ddelange for helping to add Python 3.11 support
Special thanks to @mglowacki100 for providing numerous feedback and suggestions
Special thanks to @Harry-zzh for contributing the new semantic segmentation problem type
General
Highlights
Other Enhancements
Dependency Updates
Upgraded torch to `>=2.0,<2.2` @zhiqiangdon @yinweisu @shchur (#3404, #3587, #3588)
Upgraded numpy to `>=1.21,<1.29` @prateekdesai04 (#3709)
Upgraded Pandas to `>=2.0,<2.2` @yinweisu @tonyhoo @shchur (#3498)
Upgraded scikit-learn to `>=1.3,<1.5` @yinweisu @tonyhoo @shchur (#3498)
Upgraded scipy to `>=1.5.4,<1.13` @prateekdesai04 (#3709)
Upgraded LightGBM to `>=3.3,<4.2` @mglowacki100 @prateekdesai04 @Innixma (#3427, #3709, #3733)
Tabular
Highlights
AutoGluon 1.0 features major enhancements to predictive quality, establishing a new state-of-the-art in Tabular modeling. Refer to the spotlight section above for more details!
New Features
Added `dynamic_stacking` predictor fit argument to mitigate stacked overfitting @LennartPurucker @Innixma (#3616)
Added zeroshot-HPO learned portfolio as new hyperparameters for `best_quality` and `high_quality` presets. @Innixma @geoalgo (#3750)
Added experimental scikit-learn API compatible wrappers to TabularPredictor. You can access them via `from autogluon.tabular.experimental import TabularClassifier, TabularRegressor` (see the sketch after this list). @Innixma (#3769)
Added enhanced FT-Transformer @taoyang1122 @Innixma (#3621, #3644, #3692)
Added `predictor.simulation_artifact()` to support integration with TabRepo @Innixma (#3555)
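A minimal sketch of the experimental scikit-learn wrappers (`X_train`, `y_train`, and `X_test` are assumed to be ordinary numpy arrays or DataFrames):

```python
from autogluon.tabular.experimental import TabularClassifier

# scikit-learn style fit(X, y) / predict(X), instead of a label column.
clf = TabularClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```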
Performance Improvements
Enhanced FastAI model quality on regression via output clipping @LennartPurucker @Innixma (#3597)
Added Skip-connection Weighted Ensemble @LennartPurucker (#3598)
Fix memory leaks by using ray processes for sequential fitting @LennartPurucker (#3614)
Added dynamic parallel folds support to better utilize compute in low memory scenarios @yinweisu @Innixma (#3511)
Fixed linear model crashes during HPO and added search space for linear models @Innixma (#3571, #3720)
Other Enhancements
Multi-layer stacking now produces deterministic results @LennartPurucker (#3573)
Various model dependency updates @mglowacki100 (#3373)
Various code cleanup and logging improvements @Innixma (#3408, #3570, #3652, #3734)
Bug Fixes / Code and Doc Improvements
Fixed incorrect model memory usage calculation @Innixma (#3591)
Fixed `infer_limit` being used incorrectly when bagging @Innixma (#3467)
AutoMM
AutoGluon Multimodal (AutoMM) is designed to simplify the fine-tuning of foundation models for downstream applications with just three lines of code. It seamlessly integrates with popular model zoos such as HuggingFace Transformers, TIMM, and MMDetection, providing support for a diverse range of data modalities, including image, text, tabular, and document data, whether used individually or in combination.
New Features
Semantic Segmentation
Introducing the new problem type `semantic_segmentation`, for fine-tuning Segment Anything Model (SAM) with three lines of code (see the sketch after this list). @Harry-zzh @zhiqiangdon (#3645, #3677, #3697, #3711, #3722, #3728)
Added comprehensive benchmarks from diverse domains, including natural images, agriculture, remote sensing, and healthcare.
Utilizing parameter-efficient fine-tuning (PEFT) LoRA, showcasing consistent superior performance over alternatives (VPT, adaptor, BitFit, SAM-adaptor, and LST) in the extensive benchmarks.
Added one semantic segmentation tutorial. @zhiqiangdon (#3716)
Using SAM-ViT Huge by default (GPU memory > 25GB required).
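A minimal sketch of the three-line workflow (column names and file paths are illustrative assumptions):

```python
from autogluon.multimodal import MultiModalPredictor

# train_data holds image paths and ground-truth mask paths in the "label" column.
predictor = MultiModalPredictor(problem_type="semantic_segmentation", label="label")
predictor.fit(train_data)
masks = predictor.predict({"image": ["test_image.png"]})
```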
Few Shot Classification
Added the new `few_shot_classification` problem type for training few-shot classifiers on images or texts. @zhiqiangdon (#3662, #3681, #3695)
Leveraging image/text foundation models to extract features and train SVM classifiers.
Added one few-shot classification tutorial. @zhiqiangdon (#3662)
Supported `torch.compile` for faster training (experimental and torch >=2.2 required). @zhiqiangdon (#3520)
Performance Improvements
Improved default image backbones, achieving a 100% win-rate on the image benchmark. @taoyang1122 (#3738)
Replaced MLPs with FT-Transformer as the default tabular backbones, resulting in a 67% win-rate on the text+tabular benchmark. @taoyang1122 (#3732)
Using both the improved default image backbones and FT-Transformer achieves a 62% win-rate on the text+tabular+image benchmark. @taoyang1122 (#3732, #3738)
Stability Enhancements
Enabled rigorous multi-GPU CI testing. @prateekdesai04 (#3566)
Fixed multi-GPU issues. @FANGAreNotGnu (#3617, #3665, #3684, #3691, #3639, #3618)
Enhanced Usability
Supported custom evaluation metrics: users can define a custom metric object and pass it to the `eval_metric` argument (see the sketch after this list). @taoyang1122 (#3548)
Supported multi-GPU training in notebooks (experimental). @zhiqiangdon (#3484)
Improved logging with system info. @zhiqiangdon (#3735)
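A minimal sketch of passing a custom metric, assuming the `Scorer` objects produced by `autogluon.core.metrics.make_scorer` are what `eval_metric` accepts (the metric name and label column are placeholders):

```python
from sklearn.metrics import f1_score
from autogluon.core.metrics import make_scorer
from autogluon.multimodal import MultiModalPredictor

# Wrap an sklearn metric into an AutoGluon Scorer object.
macro_f1 = make_scorer(
    name="macro_f1",
    score_func=lambda y_true, y_pred: f1_score(y_true, y_pred, average="macro"),
    optimum=1.0,
    greater_is_better=True,
)

# Pass the custom metric object directly as eval_metric.
predictor = MultiModalPredictor(label="label", eval_metric=macro_f1)
```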
Improved Scalability
The introduction of the new learner class design facilitates easier support for new tasks and data modalities within AutoMM, enhancing overall scalability. @zhiqiangdon (#3650, #3685, #3735)
Other Enhancements
Added the option `hf_text.use_fast` for customizing fast tokenizer usage in `hf_text` models. @zhiqiangdon (#3379)
Added fallback evaluation/validation metrics, supporting `f1_macro`, `f1_micro`, and `f1_weighted`. @FANGAreNotGnu (#3696)
Supported multi-GPU inference with the DDP strategy. @zhiqiangdon (#3445, #3451)
Upgraded torch to 2.0. @zhiqiangdon (#3404)
Upgraded lightning to 2.0. @zhiqiangdon (#3419)
Upgraded torchmetrics to 1.0. @zhiqiangdon (#3422)
Code Improvements
Refactored AutoMM with the learner class for improved design. @zhiqiangdon (#3650, #3685, #3735)
Refactored FT-Transformer. @taoyang1122 (#3621, #3700)
Refactored the visualizers of object detection, semantic segmentation, and NER. @zhiqiangdon (#3716)
Other code refactor/clean-up: @zhiqiangdon @FANGAreNotGnu (#3383, #3399, #3434, #3667, #3684, #3695)
Bug Fixes/Doc Improvements
Fixed one ONNX export issue. @AnirudhDagar (#3725)
Improved AutoMM introduction for clarity. @zhiqiangdon (#3388, #3726)
Improved AutoMM API doc. @zhiqiangdon @AnirudhDagar (#3772, #3777)
Other bug fixes @zhiqiangdon @FANGAreNotGnu @taoyang1122 @tonyhoo @rsj123 @AnirudhDagar (#3384, #3424, #3526, #3593, #3615, #3638, #3674, #3693, #3702, #3690, #3729, #3736, #3474, #3456, #3590, #3660)
Other doc improvements @zhiqiangdon @FANGAreNotGnu @taoyang1122 (#3397, #3461, #3579, #3670, #3699, #3710, #3716, #3737, #3744, #3745, #3680)
TimeSeries
Highlights
AutoGluon 1.0 features numerous usability and performance improvements to the TimeSeries module. These include automatic handling of missing data and irregular time series, new forecasting metrics (including custom metric support), advanced time series cross-validation options, and new forecasting models. AutoGluon produces state-of-the-art results in forecast accuracy, achieving 70%+ win rate compared to other popular forecasting frameworks.
New features
Support for custom forecasting metrics @shchur (#3760, #3602)
New forecasting metrics `WAPE`, `RMSSE`, `SQL` + improved documentation for metrics @melopeo @shchur (#3747, #3632, #3510, #3490)
Improved robustness: `TimeSeriesPredictor` can now handle data with all pandas frequencies, irregular timestamps, or missing values represented by `NaN` @shchur (#3563, #3454)
New models: intermittent demand forecasting models based on conformal prediction (`ADIDA`, `CrostonClassic`, `CrostonOptimized`, `CrostonSBA`, `IMAPA`); `WaveNet` and `NPTS` from GluonTS; new baseline models (`Average`, `SeasonalAverage`, `Zero`) @canerturkmen @shchur (#3706, #3742, #3606, #3459)
Advanced cross-validation options: avoid retraining the models for each validation window with `refit_every_n_windows` or adjust the step size between validation windows with the `val_step_size` argument to `TimeSeriesPredictor.fit` (see the sketch below) @shchur (#3704, #3537)
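A minimal sketch of the new cross-validation options, with a placeholder dataset path and illustrative values:

```python
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

train_data = TimeSeriesDataFrame.from_path("train.csv")  # placeholder path

predictor = TimeSeriesPredictor(prediction_length=24)
predictor.fit(
    train_data,
    num_val_windows=3,        # validate on 3 rolling windows
    val_step_size=12,         # shift each validation window by 12 time steps
    refit_every_n_windows=2,  # retrain models only on every 2nd window
)
```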
Enhancements
Enable Ray Tune for deep-learning forecasting models @canerturkmen (#3705)
Support passing multiple evaluation metrics to `TimeSeriesPredictor.evaluate` @shchur (#3646)
Static features can now be passed directly to the `TimeSeriesDataFrame.from_path` and `TimeSeriesDataFrame.from_data_frame` constructors (see the sketch below) @shchur (#3635)
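A minimal sketch of attaching static features at construction time; the file paths, column names, and the `static_features_df` argument name are assumptions based on the release note:

```python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame

df = pd.read_csv("train.csv")        # long-format time series data (placeholder)
static = pd.read_csv("static.csv")   # one row of static features per item_id (placeholder)

data = TimeSeriesDataFrame.from_data_frame(
    df,
    id_column="item_id",
    timestamp_column="timestamp",
    static_features_df=static,  # assumed argument name
)
```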
Performance improvements
Much more accurate forecasts at low time limits thanks to new presets and updated logic for splitting the training time across models @shchur (#3749, #3657, #3741)
Faster training and prediction + lower memory usage for `DirectTabular` and `RecursiveTabular` models (#3740, #3620, #3559)
Enable early stopping and improve inference speed for GluonTS models @shchur (#3575)
Reduce import time for `autogluon.timeseries` by moving import statements inside model classes (#3514)
Bug Fixes / Code and Doc Improvements
Add reference to the publication on AutoGluon-TimeSeries to README @shchur (#3482)
Align API of `TimeSeriesPredictor` with `TabularPredictor`, remove deprecated methods @shchur (#3714, #3655, #3396)
General bug fixes and improvements @shchur (#3758, #3756, #3755, #3754, #3746, #3743, #3727, #3698, #3654, #3653, #3648, #3628, #3588, #3560, #3558, #3536, #3533, #3523, #3522, #3476, #3463)
EDA
The EDA module will be released at a later time, as it requires additional development effort before it is ready for 1.0.
We will make an announcement when EDA is ready for release. For now, please continue to use `autogluon.eda==0.8.2`.
Deprecations
General
`autogluon.core.spaces` has been deprecated. Please use `autogluon.common.spaces` instead @Innixma (#3701)
Tabular
Tabular will log warnings if using the deprecated methods. Deprecated methods are planned to be removed in AutoGluon 1.2 @Innixma (#3701)
autogluon.tabular.TabularPredictor
`predictor.get_model_names()` -> `predictor.model_names()`
`predictor.get_model_names_persisted()` -> `predictor.model_names(persisted=True)`
`predictor.compile_models()` -> `predictor.compile()`
`predictor.persist_models()` -> `predictor.persist()`
`predictor.unpersist_models()` -> `predictor.unpersist()`
`predictor.get_model_best()` -> `predictor.model_best`
`predictor.get_pred_from_proba()` -> `predictor.predict_from_proba()`
`predictor.get_oof_pred_proba()` -> `predictor.predict_proba_oof()`
`predictor.get_oof_pred()` -> `predictor.predict_oof()`
`predictor.get_model_full_dict()` -> `predictor.model_refit_map()`
`predictor.get_size_disk()` -> `predictor.disk_usage()`
`predictor.get_size_disk_per_file()` -> `predictor.disk_usage_per_file()`
`predictor.leaderboard()`: `silent` argument deprecated, replaced by `display`, which defaults to False
Same for `predictor.evaluate()` and `predictor.evaluate_predictions()`
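A minimal migration sketch for the renamed methods above; `predictor` is assumed to be a fitted `TabularPredictor` and `test_data` a held-out DataFrame:

```python
# Before (deprecated; logs a warning in 1.0, removal planned for 1.2):
names = predictor.get_model_names()
best = predictor.get_model_best()
lb = predictor.leaderboard(test_data, silent=True)

# After:
names = predictor.model_names()
best = predictor.model_best                           # now a property
lb = predictor.leaderboard(test_data, display=False)  # display=False is the default
```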
AutoMM
Deprecated the `FewShotSVMPredictor` in favor of the new `few_shot_classification` problem type @zhiqiangdon (#3699)
Deprecated the `AutoMMPredictor` in favor of `MultiModalPredictor` @zhiqiangdon (#3650)
autogluon.multimodal.MultiModalPredictor
Deprecated the `config` argument in the fit API. @zhiqiangdon (#3679)
Deprecated the `init_scratch` and `pipeline` arguments in the init API @zhiqiangdon (#3668)
TimeSeries
autogluon.timeseries.TimeSeriesPredictor
Deprecated argument `TimeSeriesPredictor(ignore_time_index: bool)`. Now, if the data contains irregular timestamps, either convert it to a regular frequency with `data = data.convert_frequency(freq)` or provide the frequency when creating the predictor as `TimeSeriesPredictor(freq=freq)`.
`predictor.evaluate()` now returns a dictionary (previously returned a float)
`predictor.score()` -> `predictor.evaluate()`
`predictor.get_model_names()` -> `predictor.model_names()`
`predictor.get_model_best()` -> `predictor.model_best`
Metric `"mean_wQuantileLoss"` has been renamed to `"WQL"`
`predictor.leaderboard()`: `silent` argument deprecated, replaced by `display`, which defaults to False
When setting `hyperparameters` to a string in `predictor.fit()`, supported values are now `"default"`, `"light"` and `"very_light"`
autogluon.timeseries.TimeSeriesDataFrame
`df.to_regular_index()` -> `df.convert_frequency()`
Deprecated method `df.get_reindexed_view()`. Please see the deprecation notes for `ignore_time_index` under `TimeSeriesPredictor` above for information on how to deal with irregular timestamps (a short sketch follows).
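A minimal migration sketch for irregular timestamps, assuming `data` is a `TimeSeriesDataFrame` that previously relied on `ignore_time_index=True`:

```python
from autogluon.timeseries import TimeSeriesPredictor

# Option 1: resample the data to a regular frequency up front.
data = data.convert_frequency(freq="D")

# Option 2: declare the frequency and let the predictor handle conversion.
predictor = TimeSeriesPredictor(freq="D", prediction_length=7)
```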
Models
All models based on MXNet (`DeepARMXNet`, `MQCNNMXNet`, `MQRNNMXNet`, `SimpleFeedForwardMXNet`, `TemporalFusionTransformerMXNet`, `TransformerMXNet`) have been removed
Statistical models from statsmodels (`ARIMA`, `Theta`, `ETS`) have been replaced by their counterparts from StatsForecast (#3513). Note that these models now have different hyperparameter names.
`DirectTabular` is now implemented using the `mlforecast` backend (same as `RecursiveTabular`); most hyperparameter names for the model have changed.
`autogluon.timeseries.TimeSeriesEvaluator` has been deprecated. Please use metrics available in `autogluon.timeseries.metrics` instead.
`autogluon.timeseries.splitter.MultiWindowSplitter` and `autogluon.timeseries.splitter.LastWindowSplitter` have been deprecated. Please use the `num_val_windows` and `val_step_size` arguments to `TimeSeriesPredictor.fit` instead (alternatively, use `autogluon.timeseries.splitter.ExpandingWindowSplitter`).
Papers
AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting
We have published a paper on AutoGluon-TimeSeries at AutoML Conference 2023 (Paper Link, YouTube Video). In the paper, we benchmarked AutoGluon and popular open-source forecasting frameworks (including DeepAR, TFT, AutoARIMA, AutoETS, AutoPyTorch). AutoGluon produces SOTA results in point and probabilistic forecasting, and even achieves 65% win rate against the best-in-hindsight combination of models.
TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications
We have published a paper on Tabular Zeroshot-HPO ensembling simulation to arXiv (Paper Link, GitHub). This paper is key to achieving the performance improvements seen in AutoGluon 1.0, and we plan to continue to develop the code-base to support future enhancements.
XTab: Cross-table Pretraining for Tabular Transformers
We have published a paper on tabular Transformer pre-training at ICML 2023 (Paper Link, GitHub). In the paper we demonstrate state-of-the-art performance for tabular deep learning models, including being able to match the performance of XGBoost and LightGBM models. While the pre-trained transformer is not yet incorporated into AutoGluon, we plan to integrate it in a future release.
Learning Multimodal Data Augmentation in Feature Space
Our paper on learning multimodal data augmentation was accepted at ICLR 2023 (Paper Link, GitHub). This paper introduces a plug-and-play module to learn multimodal data augmentation in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that it can (1) improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprised of image, text, and tabular data. This work is not yet incorporated into AutoGluon, but we plan to integrate it in a future release.
Data Augmentation for Object Detection via Controllable Diffusion Models
Our paper on generative object detection data augmentation has been accepted at WACV 2024 (Paper and GitHub link will be available soon). This paper proposes a data augmentation pipeline based on controllable diffusion models and CLIP, with visual prior generation to guide the generation and post-filtering by category-calibrated CLIP scores to control its quality. We demonstrate that the performance improves across various tasks and settings when using our augmentation pipeline with different detectors. Although diffusion models are currently not integrated into AutoGluon, we plan to incorporate the data augmentation techniques in a future release.
Adapting Image Foundation Models for Video Understanding
We have published a paper on how to efficiently adapt image foundation models for video understanding at ICLR 2023 (Paper Link, GitHub). This paper introduces spatial adaptation, temporal adaptation and joint adaptation to gradually equip a frozen image model with spatiotemporal reasoning capability. The proposed method achieves competitive or even better performance than traditional full finetuning while largely saving the training cost of large foundation models.
v0.8.3
Version 0.8.3
v0.8.3 is a patch release to address security vulnerabilities.
See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v0.8.2…v0.8.3
This version supports Python versions 3.8, 3.9, and 3.10.
Changes
v0.8.2
Version 0.8.2
v0.8.2 is a hot-fix release to pin the `pydantic` version to avoid crashing during HPO
As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.
See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v0.8.1…v0.8.2
This version supports Python versions 3.8, 3.9, and 3.10.
Changes
codespell: action, config + some typos fixed @yarikoptic @yinweisu (#3323)
Unpin sentencepiece @zhiqiangdon (#3368)
Pin pydantic @yinweisu (#3370)
v0.8.1
Version 0.8.1
v0.8.1 is a bug fix release.
As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.
See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v0.8.0…v0.8.1
This version supports Python versions 3.8, 3.9, and 3.10.
Changes
Documentation improvements
Bug Fixes / General Improvements
Move PyMuPDF to optional @Innixma @zhiqiangdon (#3331)
Update persist_models max_memory 0.1 -> 0.4 @Innixma (#3338)
Remove fairscale @zhiqiangdon (#3342)
Fix the `DirectTabular` model failing for some metrics; hide warnings produced by `AutoARIMA` @shchur (#3350)
Reduce per-GPU batch size for AutoMM `high_quality_hpo` to avoid out-of-memory errors in some corner cases @zhiqiangdon (#3360)
Fix HPO crash by setting reuse_actor to False @yinweisu (#3361)
v0.8.0
Version 0.8.0
We’re happy to announce the AutoGluon 0.8 release.
Note: Loading models trained in different versions of AutoGluon is not supported.
This release contains 196 commits from 20 contributors!
See the full commit change-log here: https://github.com/autogluon/autogluon/compare/0.7.0…0.8.0
Special thanks to @geoalgo for the joint work in generating the experimental tabular Zeroshot-HPO portfolio this release!
Full Contributor List (ordered by # of commits):
@shchur, @Innixma, @yinweisu, @gradientsky, @FANGAreNotGnu, @zhiqiangdon, @gidler, @liangfu, @tonyhoo, @cheungdaven, @cnpgs, @giswqs, @suzhoum, @yongxinw, @isunli, @jjaeyeon, @xiaochenbin9527, @yzhliu, @jsharpna, @sxjscience
AutoGluon 0.8 supports Python versions 3.8, 3.9, and 3.10.
Changes
Highlights
AutoGluon TimeSeries introduced several major improvements, including new models, upgraded presets that lead to better forecast accuracy, and optimizations that speed up training & inference.
AutoGluon Tabular now supports calibrating the decision threshold in binary classification (API), leading to massive improvements in metrics such as `f1` and `balanced_accuracy`. It is not uncommon to see `f1` scores improve from `0.70` to `0.73`, as an example. We strongly encourage all users who are using these metrics to try out the new decision threshold calibration logic.
AutoGluon MultiModal introduces two new features: 1) PDF document classification, and 2) Open Vocabulary Object Detection.
AutoGluon MultiModal upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.
AutoGluon Tabular has added an experimental Zeroshot HPO config which performs well on small datasets (<10,000 rows) when at least an hour of training time is provided (~60% win-rate vs `best_quality`). To try it out, specify `presets="experimental_zeroshot_hpo_hybrid"` when calling `fit()`.
AutoGluon EDA added support for Anomaly Detection and Partial Dependence Plots.
AutoGluon Tabular has added experimental support for TabPFN, a pre-trained tabular transformer model. Try it out via `pip install autogluon.tabular[all,tabpfn]` (hyperparameter key is "TABPFN")!
General
General doc improvements @tonyhoo @Innixma @yinweisu @gidler @cnpgs @isunli @giswqs (#2940, #2953, #2963, #3007, #3027, #3059, #3068, #3083, #3128, #3129, #3130, #3147, #3174, #3187, #3256, #3258, #3280, #3306, #3307, #3311, #3313)
General code fixes and improvements @yinweisu @Innixma (#2921, #3078, #3113, #3140, #3206)
CI improvements @yinweisu @gidler @yzhliu @liangfu @gradientsky (#2965, #3008, #3013, #3020, #3046, #3053, #3108, #3135, #3159, #3283, #3185)
Update namespace packages for PEP420 compatibility @gradientsky (#3228)
Multimodal
AutoGluon MultiModal (also known as AutoMM) introduces two new features: 1) PDF document classification, and 2) Open Vocabulary Object Detection. Additionally, we have upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.
New Features
PDF Document Classification. See tutorial @cheungdaven (#2864, #3043)
Open Vocabulary Object Detection. See tutorial @FANGAreNotGnu (#3164)
Performance Improvements
Upgrade the detection engine from mmdet 2.x to mmdet 3.x, and upgrade our presets @FANGAreNotGnu (#3262)
`medium_quality`: yolo-s -> yolox-l
`high_quality`: yolox-l -> DINO-Res50
`best_quality`: yolox-x -> DINO-Swin_l
Speedup fusion model training with deepspeed strategy. @liangfu (#2932)
Enable detection backbone freezing to boost finetuning speed and save GPU usage @FANGAreNotGnu (#3220)
Other Enhancements
Support passing data path to the fit() API @zhiqiangdon (#3006)
Upgrade TIMM to the latest v0.9.* @zhiqiangdon (#3282)
Support xywh output for object detection @FANGAreNotGnu (#2948)
Fusion model inference acceleration with TensorRT @liangfu (#2836, #2987)
Support customizing advanced image data augmentation. Users can pass a list of torchvision transform objects as image augmentation. @zhiqiangdon (#3022)
Add yoloxm and yoloxtiny @FANGAreNotGnu (#3038)
Add MultiImageMix Dataset for Object Detection @FANGAreNotGnu (#3094)
Support loading specific checkpoints. Users can load the intermediate checkpoints other than model.ckpt and last.ckpt. @zhiqiangdon (#3244)
Add some predictor properties for model statistics @zhiqiangdon (#3289)
`trainable_parameters` returns the number of trainable parameters.
`total_parameters` returns the total number of parameters.
`model_size` returns the model size in megabytes.
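A minimal usage sketch of the new properties; the label column and training DataFrame are placeholders:

```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label="label")  # placeholder label column
predictor.fit(train_df)                         # `train_df` assumed to exist

print(predictor.trainable_parameters)  # number of trainable parameters
print(predictor.total_parameters)      # total number of parameters
print(predictor.model_size)            # model size in megabytes
```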
Bug Fixes / Code and Doc Improvements
General bug fixes and improvements @zhiqiangdon @liangfu @cheungdaven @xiaochenbin9527 @Innixma @FANGAreNotGnu @gradientsky @yinweisu @yongxinw (#2939, #2989, #2983, #2998, #3001, #3004, #3006, #3025, #3026, #3048, #3055, #3064, #3070, #3081, #3090, #3103, #3106, #3119, #3155, #3158, #3167, #3180, #3188, #3222, #3261, #3266, #3277, #3279, #3261, #3267)
Refactor inferring problem type and output shape @zhiqiangdon (#3227)
Log GPU info including GPU total memory, free memory, GPU card name, and CUDA version during training @zhiqiangdon (#3291)
Tabular
New Features
Added `calibrate_decision_threshold` (tutorial), which optimizes a given metric's decision threshold for predictions to strongly enhance the metric score (see the sketch after this list). @Innixma (#3298)
We've added an experimental Zeroshot HPO config, which performs well on small datasets (<10,000 rows) when at least an hour of training time is provided. To try it out, specify `presets="experimental_zeroshot_hpo_hybrid"` when calling `fit()` @Innixma @geoalgo (#3312)
The TabPFN model is now supported as an experimental model. TabPFN is a viable model option when inference speed is not a concern and the training data has fewer than 10,000 rows. Try it out via `pip install autogluon.tabular[all,tabpfn]`! @Innixma (#3270)
Backend support for distributed training, which will be available with the next Cloud module release. @yinweisu (#3054, #3110, #3115, #3131, #3142, #3179, #3216)
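A minimal sketch of decision-threshold calibration; the file paths and the `class` label column are placeholders, and the argument names are assumed to match the tutorial:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")  # placeholder binary-classification data
test_data = TabularDataset("test.csv")    # placeholder

predictor = TabularPredictor(label="class", eval_metric="f1").fit(train_data)

# Find the decision threshold that maximizes f1 (instead of the default 0.5) ...
threshold = predictor.calibrate_decision_threshold(metric="f1")
# ... and apply it at prediction time.
predictions = predictor.predict(test_data, decision_threshold=threshold)
```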
Performance Improvements
Other Enhancements
Add quantile regression support for CatBoost @shchur (#3165)
Add enable_categorical=True support to XGBoost @Innixma (#3286)
Bug Fixes / Code and Doc Improvements
Cross-OS loading of a fit TabularPredictor should now work properly @yinweisu @Innixma
General bug fixes and improvements @Innixma @cnpgs @shchur @yinweisu @gradientsky (#2865, #2936, #2990, #3045, #3060, #3069, #3148, #3182, #3199, #3226, #3257, #3259, #3268, #3269, #3287, #3288, #3285, #3293, #3294, #3302)
Move interpretable logic to InterpretableTabularPredictor @Innixma (#2981)
TimeSeries
In v0.8 we introduce several major improvements to the Time Series module, including new models, upgraded presets that lead to better forecast accuracy, and optimizations that speed up training & inference.
Highlights
New models: `PatchTST` and `DLinear` from GluonTS, and `RecursiveTabular` based on integration with the `mlforecast` library @shchur (#3177, #3184, #3230)
Improved accuracy and reduced overall training time thanks to updated presets @shchur (#3281, #3120)
3-6x faster training and inference for `AutoARIMA`, `AutoETS`, `Theta`, `DirectTabular`, `WeightedEnsemble` models @shchur (#3062, #3214, #3252)
New Features
Dramatically faster repeated calls to `predict()`, `leaderboard()` and `evaluate()` thanks to prediction caching @shchur (#3237)
Reduce overfitting by using multiple validation windows with the `num_val_windows` argument to `fit()` @shchur (#3080)
Exclude certain models from presets with the `excluded_model_types` argument to `fit()` @shchur (#3231)
New method `refit_full()` that refits models on combined train and validation data @shchur (#3157)
Train multiple configurations of the same model by providing lists in the `hyperparameters` argument (see the sketch after this list) @shchur (#3183)
Time limit set by `time_limit` is now respected by all models @shchur (#3214)
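A minimal sketch combining several of the new `fit()` options; `train_data` is assumed to be a `TimeSeriesDataFrame`, and the `DeepAR` configurations are illustrative:

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=48)
predictor.fit(
    train_data,                          # assumed TimeSeriesDataFrame
    num_val_windows=3,                   # multiple validation windows
    excluded_model_types=["AutoARIMA"],  # skip selected models from the preset
    hyperparameters={
        # A list trains multiple configurations of the same model.
        "DeepAR": [{"context_length": 48}, {"context_length": 96}],
    },
)
predictor.refit_full()  # refit the models on combined train + validation data
```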
Enhancements
Improvements to the `DirectTabular` model (previously called `AutoGluonTabular`): faster featurization, trained as a quantile regression model if `eval_metric` is set to `"mean_wQuantileLoss"` @shchur (#2973, #3211)
Use correct seasonal period when computing the MASE metric @shchur (#2970)
Check the AutoGluon version when loading `TimeSeriesPredictor` from disk @shchur (#3233)
Minor Improvements / Documentation / Bug Fixes
Update documentation and tutorials @shchur (#2960, #2964, #3296, #3297)
General bug fixes and improvements @shchur (#2977, #3058, #3066, #3160, #3193, #3202, #3236, #3255, #3275, #3290)
Exploratory Data Analysis (EDA) tools
In 0.8 we introduce a few new tools to help with data exploration and feature engineering:
Anomaly Detection @gradientsky (#3124, #3137) - helps to identify unusual patterns or behaviors in data that deviate significantly from the norm. It’s best used when finding outliers, rare events, or suspicious activities that could indicate fraud, defects, or system failures. Check the Anomaly Detection Tutorial to explore the functionality.
Partial Dependence Plots @gradientsky (#3071, #3079) - visualize the relationship between a feature and the model’s output for each individual instance in the dataset. Two-way variant can visualize potential interactions between any two features. Please see this tutorial for more detail: Using Interaction Charts To Learn Information About the Data
Bug Fixes / Code and Doc Improvements
Switch regression analysis in `quick_fit` to use a residuals plot @gradientsky (#3039)
Added `explain_rows` method to `autogluon.eda.auto` for Kernel SHAP visualization (see the sketch below) @gradientsky (#3014)
General improvements and fixes @gradientsky (#2991, #3056, #3102, #3107, #3138)
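A minimal, hypothetical sketch of `explain_rows`; the argument names are assumptions based on the EDA docs, and `predictor`/`train_data` are placeholders for a fitted `TabularPredictor` and its training DataFrame:

```python
import autogluon.eda.auto as auto

# Visualize Kernel SHAP explanations for a few selected rows.
rows_to_explain = train_data.sample(3, random_state=0)
auto.explain_rows(
    train_data=train_data,  # background data for the SHAP explainer (assumed name)
    model=predictor,        # assumed argument name
    rows=rows_to_explain,   # assumed argument name
    plot="waterfall",       # assumed option; "force" plots may also be supported
)
```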