Version 1.5.0¶
We are happy to announce the AutoGluon 1.5.0 release!
AutoGluon 1.5.0 introduces new features and major improvements to both tabular and time series modules.
This release contains 131 commits from 17 contributors! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/1.4.0...1.5.0
This release supports Python versions 3.10, 3.11, 3.12 and 3.13. Support for Python 3.13 is currently experimental, and some features might not be available when running Python 3.13 on Windows. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.5.0.
Spotlight¶
Chronos-2¶
AutoGluon v1.5 adds support for Chronos-2, our latest generation of foundation models for time series forecasting. Chronos-2 natively handles all types of dynamic covariates, and performs cross-learning from items in the batch. It produces multi-step quantile forecasts and is designed for strong out-of-the-box performance on new datasets.
Chronos-2 achieves state-of-the-art zero-shot accuracy among public models on major benchmarks such as fev-bench and GIFT-Eval, making it a strong default choice when little or no task-specific training data is available.
In AutoGluon, Chronos-2 can be used in zero-shot mode or fine-tuned on custom data. Both LoRA fine-tuning and full fine-tuning are supported. Chronos-2 integrates into the standard TimeSeriesPredictor workflow, making it easy to backtest, compare against classical and deep learning models, and combine with other models in ensembles.
from autogluon.timeseries import TimeSeriesPredictor
predictor = TimeSeriesPredictor(...)
predictor.fit(train_data, presets="chronos2") # zero-shot mode
More details on zero-shot usage, fine-tuning and ensembling are available in the updated tutorial.
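Beyond the zero-shot preset above, the sketch below illustrates the other two usage modes described in this section: ensembling Chronos-2 with AutoGluon's other forecasting models via the chronos2_ensemble preset introduced in this release, and fine-tuning on custom data. The hyperparameter key "Chronos2" and the fine_tune flag are assumptions modeled on the earlier Chronos integration; the tutorial documents the exact options.

from autogluon.timeseries import TimeSeriesPredictor

# Combine Chronos-2 with AutoGluon's other forecasting models in an ensemble.
predictor = TimeSeriesPredictor(prediction_length=48)  # horizon chosen for illustration
predictor.fit(train_data, presets="chronos2_ensemble")

# Fine-tune Chronos-2 on custom data (assumed keys; see the tutorial for the exact options).
predictor_ft = TimeSeriesPredictor(prediction_length=48, path="chronos2_finetuned")
predictor_ft.fit(train_data, hyperparameters={"Chronos2": {"fine_tune": True}})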
AutoGluon Tabular¶
TabArena¶
AutoGluon Assistant (MLZero)¶
MITRA¶
General¶
Dependencies¶
Update torch to >=2.6,<2.10. @FANGAreNotGnu @shchur (#5270) (#5425)
Update ray to >=2.43.0,<2.53. @shchur @prateekdesai04 (#5442) (#5312)
Update lightning to >=2.5.1,<2.6. @canerturkmen (#5432)
Update scikit-learn-intelex to 2025.0,<2025.10. @Innixma (#5434)
Add experimental support for Python 3.13. @shchur @shou10152208 (#5073) (#5423)
Fixes and Improvements¶
Minor typing fixes. @canerturkmen (#5292)
Fix conda install instructions for ray version. @Innixma (#5323)
Remove LICENSE and NOTICE files from common. @prateekdesai04 (#5396)
Fix Python package upload. @prateekdesai04 (#5397)
Change build order. @prateekdesai04 (#5398)
Decouple and enable module-wise installation. @prateekdesai04 (#5399)
Fix get_smallest_valid_dtype_int for negative values. @Innixma (#5421)
Tabular¶
AutoGluon-Tabular v1.5 introduces several improvements focused on accuracy, robustness, and usability. The release adds new foundation models, updates the feature preprocessing pipeline, and improves GPU stability and memory estimation. New model portfolios are provided for both CPU and GPU workloads.
Highlights¶
New foundation models: RealTabPFN-2, RealTabPFN-2.5, and TabDPT are now available in AutoGluon-Tabular.
Updated preprocessing pipeline with more consistent feature handling across models.
Improved GPU stability and more reliable memory estimation during training.
New CPU and GPU portfolios tuned for better performance across a wide range of datasets.
Stronger benchmark results: with the new presets, AutoGluon-Tabular v1.5 achieves an 85% win rate over AutoGluon v1.4 Extreme on the 51 TabArena datasets, with a 3% reduction in mean relative error.
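A minimal sketch of trying the release on a tabular dataset follows; the long-standing "best_quality" preset is used as the entry point, and whether the new CPU/GPU portfolios are selected through it or through a dedicated preset name is an assumption, so consult the presets documentation for the exact names.

from autogluon.tabular import TabularPredictor

# Label column name and time limit are illustrative.
predictor = TabularPredictor(label="class").fit(
    train_data,
    presets="best_quality",  # assumed entry point to the updated model portfolios
    time_limit=3600,
)
leaderboard = predictor.leaderboard(test_data)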
New Features¶
Fixes and Improvements¶
Fix bug if pred is inf and weight is 0 in weighted ensemble. @Innixma (#5317)
Default TabularPredictor.delete_models to dry_run=False. @Innixma (#5260)
Support different random seeds per fold. @LennartPurucker (#5267)
Change the default output dir's base path. @LennartPurucker (#5285)
Ensure compatibility of flash attention unpad_input. @xiyuanzh (#5298)
Refactor of validation technique selection. @LennartPurucker (#4585)
Make OneFeatureGenerator pass the check_is_fitted test. @betatim (#5386)
Enable CPU loading of models trained on GPU. @Innixma (#5403) (#5434)
Remove unused variable val_improve_epoch in TabularNeuralNetTorchModel. @celestinoxp (#5466)
Fix memory estimation for RF/XT in parallel mode. @celestinoxp (#5467)
Pass label cleaner to model for semantic encodings. @LennartPurucker (#5482)
Fix time_epoch_average calculation in TabularNeuralNetTorch. @celestinoxp (#5484)
GPU optimization, scheduling for parallel_local fitting strategy. @prateekdesai04 (#5388)
Fix XGBoost crashing on eval metric name in HPs. @LennartPurucker (#5493)
TimeSeries¶
AutoGluon v1.5 introduces substantial improvements to the time series module, with clear gains in both accuracy and usability. Across our benchmarks, v1.5 achieves up to an 80% win rate compared to v1.4. The release adds new models, more flexible ensembling options, and numerous bug fixes and quality-of-life improvements.
Highlights¶
Chronos-2 is now available in AutoGluon, with support for zero-shot inference as well as full and LoRA fine-tuning (tutorial).
Customizable ensembling logic: Adds item-level ensembling, multi-layer stack ensembles, and other advanced forecast combination methods (documentation).
New presets leading to major gains in accuracy & efficiency. AG-TS v1.5 achieves up to 80% win rate over v1.4 on point and probabilistic forecasting tasks. With just a 10 minute time limit, v1.5 outperforms v1.4 running for 2 hours.
Usability improvements: Automatically determine an appropriate backtesting configuration by setting num_val_windows="auto" and refit_every_n_windows="auto". Easily access the validation predictions and perform rolling evaluation on custom data with the new predictor methods backtest_predictions and backtest_targets (see the sketch after this list).
New Features¶
Add multi-layer stack ensembling support @canerturkmen (#5459) (#5472) (#5463) (#5456) (#5436) (#5422) (#5391)
Add new advanced ensembling methods @canerturkmen @shchur (#5465) (#5420) (#5401) (#5389) (#5376)
Add Chronos-2 model. @abdulfatir @canerturkmen (#5427) (#5447) (#5448) (#5449) (#5454) (#5455) (#5450) (#5458) (#5492) (#5495) (#5487) (#5486)
Update Chronos-2 tutorial. @abdulfatir (#5481)
Add Toto model. @canerturkmen (#5321) (#5390) (#5475)
Fine-tune Chronos-Bolt on user-provided quantile_levels. @shchur (#5315)
Add backtesting methods for the TimeSeriesPredictor. @shchur (#5356)
API Changes and Deprecations¶
Remove outdated presets related to the original Chronos model: chronos, chronos_large, chronos_base, chronos_small, chronos_mini, chronos_tiny, chronos_ensemble. We recommend using the new presets chronos2, chronos2_small and chronos2_ensemble instead.
Fixes and Improvements¶
Replace inf values with NaN inside _check_and_prepare_data_frame. @shchur (#5240)
Add model registry and fix presets typing. @canerturkmen (#5100)
Move ITEMID and TIMESTAMP to dataset namespace. @canerturkmen (#5363)
Replace Chronos code with a dependency on chronos-forecasting. @canerturkmen (#5380) (#5383)
Avoid errors if date_feature clashes with known_covariates. @shchur (#5414)
Make ray an optional dependency for autogluon.timeseries. @shchur (#5430)
Minor fixes and improvements. @shchur @abdulfatir @canerturkmen (#5489) (#5452) (#5444) (#5416) (#5413) (#5410) (#5406)
Code Quality¶
Refactor trainable model set build logic. @canerturkmen (#5297)
Typing improvements to multiwindow model. @canerturkmen (#5308)
Move prediction cache out of trainer. @canerturkmen (#5313)
Refactor trainer methods with ensemble logic. @canerturkmen (#5375)
Use builtin generics for typing, remove types in internal docstrings. @canerturkmen (#5300)
Reorganize ensembles, add base class for array-based ensemble learning. @canerturkmen (#5332)
Separate ensemble training logic from trainer. @canerturkmen (#5384)
Clean up typing and documentation for Chronos. @canerturkmen (#5392)
Add timer utility, fix time limit in ensemble regressors, clean up tests. @canerturkmen (#5393)
Upgrade type annotations to Python 3.10. @canerturkmen (#5431)
Multimodal¶
Fixes and Improvements¶
Fix bugs and update AutoMM tutorials. @FANGAreNotGnu (#5167)
Fix Focal Loss. @FANGAreNotGnu (#5496)
Fix false positive document detection for images with incidental text. @FANGAreNotGnu (#5469)
Documentation and CI¶
[Test] Fix CI + Upgrade Ray. @prateekdesai04 (#5306)
Fix notebook build failures. @prateekdesai04 (#5348)
ci: scope down GitHub Token permissions. @AdnaneKhan (#5351)
[CI] Fix docker build. @prateekdesai04 (#5402)
Remove ROADMAP.md. @canerturkmen (#5405)
[docs] Add citations for Chronos-2 and multi-layer stacking for TS. @shchur (#5412)
Revert “Fix permissions for platform_tests action”. @shchur (#5419)
Contributors¶
Full Contributor List (ordered by # of commits):
@shchur @canerturkmen @Innixma @prateekdesai04 @abdulfatir @LennartPurucker @celestinoxp @FANGAreNotGnu @xiyuanzh @nathanaelbosch @betatim @AdnaneKhan @paulbkoch @shou10152208 @ryuichi-ichinose @atschalz @colesussmeier
New Contributors¶
@AdnaneKhan made their first contribution in (#5351)
@paulbkoch made their first contribution in (#4480)
@shou10152208 made their first contribution in (#5073)
@ryuichi-ichinose made their first contribution in (#5458)
@colesussmeier made their first contribution in (#5452)