# Version 1.5.0
We are happy to announce the AutoGluon 1.5.0 release!
AutoGluon 1.5.0 introduces new features and major improvements to both tabular and time series modules.
This release contains [131 commits from 17 contributors](https://github.com/autogluon/autogluon/graphs/contributors?from=7%2F28%2F2025&to=12%2F19%2F2025&type=c)! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/1.4.0...1.5.0
Join the community: [Discord](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [Twitter](https://twitter.com/autogluon)
This release supports Python versions 3.10, 3.11, 3.12 and 3.13. Support for Python 3.13 is currently experimental, and some features might not be available when running Python 3.13 on Windows. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.5.0.
--------
## Spotlight
### Chronos-2
AutoGluon v1.5 adds support for [Chronos-2](https://huggingface.co/amazon/chronos-2), our latest generation of foundation models for time series forecasting. Chronos-2 natively handles all types of dynamic covariates and performs cross-learning across the items in a batch. It produces multi-step quantile forecasts and is designed for strong out-of-the-box performance on new datasets.
Chronos-2 achieves state-of-the-art zero-shot accuracy among public models on major benchmarks such as [fev-bench](https://huggingface.co/spaces/autogluon/fev-bench) and [GIFT-Eval](https://huggingface.co/spaces/Salesforce/GIFT-Eval), making it a strong default choice when little or no task-specific training data is available.
In AutoGluon, Chronos-2 can be used in **zero-shot mode** or **fine-tuned** on custom data. Both **LoRA fine-tuning** and **full fine-tuning** are supported. Chronos-2 integrates into the standard `TimeSeriesPredictor` workflow, making it easy to backtest, compare against classical and deep learning models, and combine with other models in ensembles.
```python
from autogluon.timeseries import TimeSeriesPredictor
predictor = TimeSeriesPredictor(...)
predictor.fit(train_data, presets="chronos2") # zero-shot mode
```
More details on zero-shot usage, fine-tuning and ensembling are available in the [updated tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).
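As a minimal sketch, fine-tuning can be requested through model hyperparameters (the `"Chronos2"` model key and the `fine_tune` flag shown here follow the pattern of earlier Chronos models and are assumptions; see the tutorial for the exact options):
```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(...)
predictor.fit(
    train_data,
    # Hypothetical keys: "Chronos2" / "fine_tune" mirror earlier Chronos models;
    # consult the tutorial above for the supported options.
    hyperparameters={"Chronos2": [{"fine_tune": True}]},
)
```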
### AutoGluon Tabular
**AutoGluon 1.5 Extreme sets a new state-of-the-art on TabArena**, with a 60 Elo improvement over AutoGluon 1.4 Extreme.
On average, AutoGluon 1.5 Extreme trains in half the time, has 50% faster inference, a 70% win-rate, and 2.8% lower relative error compared to AutoGluon 1.4 Extreme. Whereas 1.4 used a mixed portfolio that changed depending on dataset size, 1.5 uses a single fixed portfolio for all datasets.
Notable Improvements:
1. Added TabDPT model, a tabular foundation model pre-trained exclusively on real data.
2. Added TabPrep-LightGBM, a LightGBM model with custom preprocessing logic including target mean encoding and feature crossing.
3. Added early stopping logic for the portfolio which stops training early for small datasets to mitigate overfitting and reduce training time.
AutoGluon 1.5 Extreme uses exclusively open and permissively licensed models, making it suitable for production and commercial use-cases.
To use AutoGluon 1.5 Extreme, you will need a GPU, ideally with at least 20 GB of VRAM to ensure stability. Performance gains are primarily on datasets with up to 100k training samples.
```python
# pip install autogluon.tabular[tabarena] # <-- Required for TabDPT, TabICL, TabPFN, and Mitra
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(...).fit(train_data, presets="extreme") # GPU required
```
#### TabArena All (51 datasets)

| Model                       |   Elo [⬆️] |   Improvability (%) [⬇️] |   Train Time (s/1K) [⬇️] |   Predict Time (s/1K) [⬇️] |
|:----------------------------|-----------:|-------------------------:|-------------------------:|---------------------------:|
| AutoGluon 1.5 (extreme, 4h) |       1736 |                    3.498 |                   289.07 |                      4.031 |
| AutoGluon 1.4 (extreme, 4h) |       1675 |                    6.381 |                   582.21 |                      6.116 |
| AutoGluon 1.4 (best, 4h)    |       1536 |                    9.308 |                  1735.72 |                      2.559 |

*Figures: Pareto Frontier (Elo) and Pareto Frontier (Improvability).*
#### New Model: RealTabPFN-2.5
Tech Report: [TabPFN-2.5: Advancing the State of the Art in Tabular Foundation Models](https://arxiv.org/pdf/2511.08667v1)
AutoGluon 1.5 adds support for fitting the RealTabPFN-2.5 model, currently the strongest individual model on TabArena. Unlike TabPFN-2, which has a permissive license, RealTabPFN-2.5 comes with a non-commercial license and requires the user to authenticate with HuggingFace and accept a terms-of-use agreement before the weights can be downloaded. If RealTabPFN-2.5 is specified, the user is automatically prompted to perform these steps during AutoGluon's fit call, and the model is skipped until the user has downloaded the weights. RealTabPFN-2.5 is not currently used in any AutoGluon preset and must be specified manually.
All TabPFN user telemetry is disabled when TabPFN is used through AutoGluon.
To use RealTabPFN-2.5 (non-commercial use only):
```python
# pip install autogluon.tabular[all,tabpfn]
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(...).fit(
train_data,
hyperparameters={"REALTABPFN-V2.5": [{}]},
) # GPU required, non-commercial
```
To use RealTabPFN-2 (permissive license):
```python
# pip install autogluon.tabular[all,tabpfn]
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(...).fit(train_data, hyperparameters={"REALTABPFN-V2": [{}]}) # GPU required
```
For users who were previously using `"TABPFNV2"`, we strongly recommend switching to `"REALTABPFN-V2"` to avoid breaking changes introduced in the latest TabPFN releases.
#### New Model: TabDPT
Paper: [TabDPT: Scaling Tabular Foundation Models on Real Data](https://arxiv.org/pdf/2410.18164)
TabDPT is a tabular foundation model pre-trained exclusively on real data.
To use TabDPT (permissive license):
```python
# pip install autogluon.tabular[all,tabdpt]
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(...).fit(train_data, hyperparameters={"TABDPT": [{}]}) # GPU recommended
```
#### New Model: TabPrep-LightGBM
TabPrep-LightGBM is an experimental model that uses a custom data preprocessing pipeline to enhance the performance of LightGBM. It represents a working snapshot of an in-progress research effort. Further details will be shared as part of an upcoming paper.
TabPrep-LightGBM achieves a new state-of-the-art for model performance on TabArena's 15 largest datasets (10k - 100k training samples), exceeding RealTabPFN-2.5 by 100 Elo while fitting 3x faster using just 8 CPU cores. TabPrep-LightGBM is also incorporated into the AutoGluon 1.5 extreme preset.
##### TabArena Medium (10k - 100k samples, 15 datasets)
| Model | Elo [⬆️] | Imp (%) [⬇️] | Train Time (s/1K) [⬇️] | Predict Time (s/1K) [⬇️] |
|:----------------------------------------|-----------:|-------------------------:|--------------------------------:|----------------------------------:|
| AutoGluon 1.5 (extreme, 4h) | 1965 | 1.876 | 191.18 | 2.207 |
| AutoGluon 1.4 (extreme, 4h) | 1813 | 3.016 | 289.53 | 3.187 |
| AutoGluon 1.4 (best, 4h) | 1794 | 3.122 | 432.35 | 4.085 |
| TabPrep-LightGBM (tuned + ensembled) | 1787 | 3.573 | 256.12 | 2.281 |
| RealTabPFN-v2.5 (tuned + ensembled) | 1680 | 5.818 | 735.58 | 11.736 |
| RealMLP (tuned + ensembled) | 1649 | 6.102 | 1719.82 | 1.675 |
| ModernNCA (tuned + ensembled) | 1636 | 6.189 | 2526.28 | 6.013 |
| CatBoost (tuned + ensembled) | 1616 | 6.011 | 777.59 | 0.25 |
| LightGBM (tuned + ensembled) | 1598 | 7.77 | 131.56 | 2.639 |
To use TabPrep-LightGBM, we recommend the presets that include it: `"extreme", "best_v150", "high_v150"`. Fitting TabPrep-LightGBM outside these presets is currently complicated.
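A minimal sketch of trying one of these presets, mirroring the `extreme` example above:
```python
from autogluon.tabular import TabularPredictor

# "best_v150" and "high_v150" are the new v1.5 portfolios that include TabPrep-LightGBM.
predictor = TabularPredictor(...).fit(train_data, presets="best_v150")
```
In AutoGluon's preset hierarchy, `high` presets generally trade some accuracy for faster training relative to `best`, so `"high_v150"` is the likely substitution for tighter budgets.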
--------
## General
### Dependencies
- Update torch to `>=2.6,<2.10` [@FANGAreNotGnu](https://github.com/FANGAreNotGnu) [@shchur](https://github.com/shchur) ([#5270](https://github.com/autogluon/autogluon/pull/5270)) ([#5425](https://github.com/autogluon/autogluon/pull/5425))
- Update seaborn to `>=0.12.0,<0.14`. [@Innixma](https://github.com/Innixma) ([#5378](https://github.com/autogluon/autogluon/pull/5378))
- Update onnx to `>=1.13.0,<1.21.0` [@shchur](https://github.com/shchur) ([#5439](https://github.com/autogluon/autogluon/pull/5439))
- Update ray to `>=2.43.0,<2.53` [@shchur](https://github.com/shchur) [@prateekdesai04](https://github.com/prateekdesai04) ([#5442](https://github.com/autogluon/autogluon/pull/5442)) ([#5312](https://github.com/autogluon/autogluon/pull/5312))
- Update transformers to `>=4.51.0,<4.58` [@shchur](https://github.com/shchur) ([#5439](https://github.com/autogluon/autogluon/pull/5439))
- Update lightning to `>=2.5.1,<2.6` [@canerturkmen](https://github.com/canerturkmen) ([#5432](https://github.com/autogluon/autogluon/pull/5432))
- Update psutil to `>=5.7.3,<7.2.0` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Update xgboost to `>=2.0,<3.2` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Update pytabkit to `>=1.7.2,<1.8` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Update tabpfn to `>=6.1.0,<6.1.1` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Update tabicl to `>=0.1.4,<0.2` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Update scikit-learn-intelex to `>=2025.0,<2025.10` [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Add experimental support for Python 3.13 [@shchur](https://github.com/shchur) [@shou10152208](https://github.com/shou10152208) ([#5073](https://github.com/autogluon/autogluon/pull/5073)) ([#5423](https://github.com/autogluon/autogluon/pull/5423))
### Fixes and Improvements
- Minor typing fixes. [@canerturkmen](https://github.com/canerturkmen) ([#5292](https://github.com/autogluon/autogluon/pull/5292))
- Fix conda install instructions for ray version. [@Innixma](https://github.com/Innixma) ([#5323](https://github.com/autogluon/autogluon/pull/5323))
- Use standalone uv in full_install.sh. [@Innixma](https://github.com/Innixma) ([#5328](https://github.com/autogluon/autogluon/pull/5328))
- Cleanup load_pd and save_pd. [@Innixma](https://github.com/Innixma) ([#5359](https://github.com/autogluon/autogluon/pull/5359))
- Remove LICENSE and NOTICE files from common. [@prateekdesai04](https://github.com/prateekdesai04) ([#5396](https://github.com/autogluon/autogluon/pull/5396))
- Fix upload python package. [@prateekdesai04](https://github.com/prateekdesai04) ([#5397](https://github.com/autogluon/autogluon/pull/5397))
- Change build order. [@prateekdesai04](https://github.com/prateekdesai04) ([#5398](https://github.com/autogluon/autogluon/pull/5398))
- Decouple and enable module-wise installation. [@prateekdesai04](https://github.com/prateekdesai04) ([#5399](https://github.com/autogluon/autogluon/pull/5399))
- Fix get_smallest_valid_dtype_int for negative values. [@Innixma](https://github.com/Innixma) ([#5421](https://github.com/autogluon/autogluon/pull/5421))
--------
## Tabular
AutoGluon-Tabular v1.5 introduces several improvements focused on accuracy, robustness, and usability. The release adds new foundation models, updates the feature preprocessing pipeline, and improves GPU stability and memory estimation. New model portfolios are provided for both CPU and GPU workloads.
### Highlights
- **New models**: RealTabPFN-2, RealTabPFN-2.5, TabDPT, TabPrep-LightGBM, and EBM are now available in AutoGluon-Tabular.
- **Updated preprocessing pipeline** with more consistent feature handling across models.
- **Improved GPU stability** and more reliable memory estimation during training.
- **New CPU and GPU portfolios** tuned for better performance across a wide range of datasets: `"extreme", "best_v150", "high_v150"`.
- **Stronger benchmark results**: with the new presets, AutoGluon-Tabular v1.5 Extreme achieves a **70% win rate** over AutoGluon v1.4 Extreme on the 51 TabArena datasets, with a **2.8% reduction in mean relative error**.
### New Features
- New preprocessors for tabular data. [@atschalz](https://github.com/atschalz) [@Innixma](https://github.com/Innixma) ([#5441](https://github.com/autogluon/autogluon/pull/5441))
- New model: LightGBMPrep. [@atschalz](https://github.com/atschalz) [@Innixma](https://github.com/Innixma) ([#5490](https://github.com/autogluon/autogluon/pull/5490))
- New models: TabPFN-2.5, TabDPT. [@Innixma](https://github.com/Innixma) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- New model: Explainable Boosting Machine. [@paulbkoch](https://github.com/paulbkoch) ([#4480](https://github.com/autogluon/autogluon/pull/4480))
- Add v1.5.0 presets [@Innixma](https://github.com/Innixma) ([#5505](https://github.com/autogluon/autogluon/pull/5505))
### Fixes and Improvements
- Fix bug if pred is inf and weight is 0 in weighted ensemble. [@Innixma](https://github.com/Innixma) ([#5317](https://github.com/autogluon/autogluon/pull/5317))
- Default TabularPredictor.delete_models dry_run=False. [@Innixma](https://github.com/Innixma) ([#5260](https://github.com/autogluon/autogluon/pull/5260))
- Remove redundant TabPFNv2 CPU log. [@Innixma](https://github.com/Innixma) ([#5259](https://github.com/autogluon/autogluon/pull/5259))
- Add einops in mitra install. [@xiyuanzh](https://github.com/xiyuanzh) ([#5266](https://github.com/autogluon/autogluon/pull/5266))
- Support different random seeds per fold. [@LennartPurucker](https://github.com/LennartPurucker) ([#5267](https://github.com/autogluon/autogluon/pull/5267))
- Changing the default output dir's base path. [@LennartPurucker](https://github.com/LennartPurucker) ([#5285](https://github.com/autogluon/autogluon/pull/5285))
- Add Mitra download_default_weights. [@Innixma](https://github.com/Innixma) ([#5271](https://github.com/autogluon/autogluon/pull/5271))
- Ensure compatibility of flash attention unpad_input. [@xiyuanzh](https://github.com/xiyuanzh) ([#5298](https://github.com/autogluon/autogluon/pull/5298))
- Refactor of validation technique selection. [@LennartPurucker](https://github.com/LennartPurucker) ([#4585](https://github.com/autogluon/autogluon/pull/4585))
- Mitra HF Args. [@xiyuanzh](https://github.com/xiyuanzh) ([#5272](https://github.com/autogluon/autogluon/pull/5272))
- Gracefully handle ray exceptions. [@Innixma](https://github.com/Innixma) ([#5327](https://github.com/autogluon/autogluon/pull/5327))
- Add logs for LightGBM CUDA device. [@Innixma](https://github.com/Innixma) ([#5325](https://github.com/autogluon/autogluon/pull/5325))
- Add Load/Save to TabularDataset. [@Innixma](https://github.com/Innixma) ([#5357](https://github.com/autogluon/autogluon/pull/5357))
- Fix model random state. [@Innixma](https://github.com/Innixma) ([#5369](https://github.com/autogluon/autogluon/pull/5369))
- Add AbstractModel type hints. [@Innixma](https://github.com/Innixma) ([#5358](https://github.com/autogluon/autogluon/pull/5358))
- MakeOneFeatureGenerator pass check_is_fitted test. [@betatim](https://github.com/betatim) ([#5386](https://github.com/autogluon/autogluon/pull/5386))
- Enable CPU loading of models trained on GPU [@Innixma](https://github.com/Innixma) ([#5403](https://github.com/autogluon/autogluon/pull/5403)) ([#5434](https://github.com/autogluon/autogluon/pull/5434))
- Remove unused variable val_improve_epoch in TabularNeuralNetTorchModel. [@celestinoxp](https://github.com/celestinoxp) ([#5466](https://github.com/autogluon/autogluon/pull/5466))
- Fix memory estimation for RF/XT in parallel mode. [@celestinoxp](https://github.com/celestinoxp) ([#5467](https://github.com/autogluon/autogluon/pull/5467))
- Pass label cleaner to model for semantic encodings. [@LennartPurucker](https://github.com/LennartPurucker) ([#5482](https://github.com/autogluon/autogluon/pull/5482))
- Fix time_epoch_average calculation in TabularNeuralNetTorch. [@celestinoxp](https://github.com/celestinoxp) ([#5484](https://github.com/autogluon/autogluon/pull/5484))
- GPU optimization, scheduling for parallel_local fitting strategy. [@prateekdesai04](https://github.com/prateekdesai04) ([#5388](https://github.com/autogluon/autogluon/pull/5388))
- Fix XGBoost crashing on eval metric name in HPs. [@LennartPurucker](https://github.com/LennartPurucker) ([#5493](https://github.com/autogluon/autogluon/pull/5493))
--------
## TimeSeries
AutoGluon v1.5 introduces substantial improvements to the time series module, with clear gains in both accuracy and usability. Across our benchmarks, v1.5 achieves up to an 80% win rate compared to v1.4. The release adds new models, more flexible ensembling options, and numerous bug fixes and quality-of-life improvements.
### Highlights
- **Chronos-2** is now available in AutoGluon, with support for zero-shot inference as well as full and LoRA fine-tuning ([tutorial](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-chronos.html)).
- **Customizable ensembling logic**: Adds item-level ensembling, multi-layer stack ensembles, and other advanced forecast combination methods ([documentation](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-ensembles.html)).
- **New presets** leading to major gains in accuracy & efficiency. AG-TS v1.5 achieves up to an **80% win rate** over v1.4 on point and probabilistic forecasting tasks. With just a 10-minute time limit, v1.5 outperforms v1.4 running for 2 hours.
- **Usability improvements**: Automatically determine an appropriate backtesting configuration by setting `num_val_windows="auto"` and `refit_every_n_windows="auto"`. Easily access the validation predictions and perform rolling evaluation on custom data with new predictor methods [`backtest_predictions`](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.backtest_predictions.html) and [`backtest_targets`](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.backtest_targets.html).
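The usability improvements above can be sketched as follows (the method and argument names are those mentioned in the linked docs; full signatures are omitted, so treat this as an outline):
```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(...)
# Let AutoGluon choose the number of validation windows and the refit cadence.
predictor.fit(train_data, num_val_windows="auto", refit_every_n_windows="auto")

# Retrieve the validation predictions and the matching target values.
val_predictions = predictor.backtest_predictions()
val_targets = predictor.backtest_targets()
```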
### New Features
- Add multi-layer stack ensembling support [@canerturkmen](https://github.com/canerturkmen) ([#5459](https://github.com/autogluon/autogluon/pull/5459)) ([#5472](https://github.com/autogluon/autogluon/pull/5472)) ([#5463](https://github.com/autogluon/autogluon/pull/5463)) ([#5456](https://github.com/autogluon/autogluon/pull/5456)) ([#5436](https://github.com/autogluon/autogluon/pull/5436)) ([#5422](https://github.com/autogluon/autogluon/pull/5422)) ([#5391](https://github.com/autogluon/autogluon/pull/5391))
- Add new advanced ensembling methods [@canerturkmen](https://github.com/canerturkmen) [@shchur](https://github.com/shchur) ([#5465](https://github.com/autogluon/autogluon/pull/5465)) ([#5420](https://github.com/autogluon/autogluon/pull/5420)) ([#5401](https://github.com/autogluon/autogluon/pull/5401)) ([#5389](https://github.com/autogluon/autogluon/pull/5389)) ([#5376](https://github.com/autogluon/autogluon/pull/5376))
- Add Chronos-2 model. [@abdulfatir](https://github.com/abdulfatir) [@canerturkmen](https://github.com/canerturkmen) ([#5427](https://github.com/autogluon/autogluon/pull/5427)) ([#5447](https://github.com/autogluon/autogluon/pull/5447)) ([#5448](https://github.com/autogluon/autogluon/pull/5448)) ([#5449](https://github.com/autogluon/autogluon/pull/5449)) ([#5454](https://github.com/autogluon/autogluon/pull/5454)) ([#5455](https://github.com/autogluon/autogluon/pull/5455)) ([#5450](https://github.com/autogluon/autogluon/pull/5450)) ([#5458](https://github.com/autogluon/autogluon/pull/5458)) ([#5492](https://github.com/autogluon/autogluon/pull/5492)) ([#5495](https://github.com/autogluon/autogluon/pull/5495)) ([#5487](https://github.com/autogluon/autogluon/pull/5487)) ([#5486](https://github.com/autogluon/autogluon/pull/5486))
- Update Chronos-2 tutorial. [@abdulfatir](https://github.com/abdulfatir) ([#5481](https://github.com/autogluon/autogluon/pull/5481))
- Add Toto model. [@canerturkmen](https://github.com/canerturkmen) ([#5321](https://github.com/autogluon/autogluon/pull/5321)) ([#5390](https://github.com/autogluon/autogluon/pull/5390)) ([#5475](https://github.com/autogluon/autogluon/pull/5475))
- Fine-tune Chronos-Bolt on user-provided `quantile_levels`. [@shchur](https://github.com/shchur) ([#5315](https://github.com/autogluon/autogluon/pull/5315))
- Add backtesting methods for the TimeSeriesPredictor. [@shchur](https://github.com/shchur) ([#5356](https://github.com/autogluon/autogluon/pull/5356))
- Update predictor presets. [@shchur](https://github.com/shchur) ([#5480](https://github.com/autogluon/autogluon/pull/5480)) ([#5494](https://github.com/autogluon/autogluon/pull/5494))
### API Changes and Deprecations
- Remove outdated presets related to the original Chronos model: `chronos`, `chronos_large`, `chronos_base`, `chronos_small`, `chronos_mini`, `chronos_tiny`, `chronos_ensemble`. We recommend using the new presets `chronos2`, `chronos2_small` and `chronos2_ensemble` instead.
### Fixes and Improvements
- Replace `inf` values with `NaN` inside `_check_and_prepare_data_frame`. [@shchur](https://github.com/shchur) ([#5240](https://github.com/autogluon/autogluon/pull/5240))
- Add model registry and fix presets typing. [@canerturkmen](https://github.com/canerturkmen) ([#5100](https://github.com/autogluon/autogluon/pull/5100))
- Fix broken unittests for time series. [@shchur](https://github.com/shchur) ([#5361](https://github.com/autogluon/autogluon/pull/5361))
- Move ITEMID and TIMESTAMP to dataset namespace. [@canerturkmen](https://github.com/canerturkmen) ([#5363](https://github.com/autogluon/autogluon/pull/5363))
- Remove deprecated arguments and classes. [@shchur](https://github.com/shchur) ([#5354](https://github.com/autogluon/autogluon/pull/5354))
- Replace Chronos code with a dependency on `chronos-forecasting` [@canerturkmen](https://github.com/canerturkmen) ([#5380](https://github.com/autogluon/autogluon/pull/5380)) ([#5383](https://github.com/autogluon/autogluon/pull/5383))
- Avoid errors if date_feature clashes with known_covariates. [@shchur](https://github.com/shchur) ([#5414](https://github.com/autogluon/autogluon/pull/5414))
- Make `ray` an optional dependency for `autogluon.timeseries`. [@shchur](https://github.com/shchur) ([#5430](https://github.com/autogluon/autogluon/pull/5430))
- Sort feature importance df. [@shchur](https://github.com/shchur) ([#5468](https://github.com/autogluon/autogluon/pull/5468))
- Make NPTS model deterministic. [@shchur](https://github.com/shchur) ([#5471](https://github.com/autogluon/autogluon/pull/5471))
- Store cardinality inside CovariateMetadata. [@shchur](https://github.com/shchur) ([#5476](https://github.com/autogluon/autogluon/pull/5476))
- Minor fixes and improvements [@shchur](https://github.com/shchur) [@abdulfatir](https://github.com/abdulfatir) [@canerturkmen](https://github.com/canerturkmen) ([#5489](https://github.com/autogluon/autogluon/pull/5489)) ([#5452](https://github.com/autogluon/autogluon/pull/5452)) ([#5444](https://github.com/autogluon/autogluon/pull/5444)) ([#5416](https://github.com/autogluon/autogluon/pull/5416)) ([#5413](https://github.com/autogluon/autogluon/pull/5413)) ([#5410](https://github.com/autogluon/autogluon/pull/5410)) ([#5406](https://github.com/autogluon/autogluon/pull/5406))
### Code Quality
- Refactor trainable model set build logic. [@canerturkmen](https://github.com/canerturkmen) ([#5297](https://github.com/autogluon/autogluon/pull/5297))
- Typing improvements to multiwindow model. [@canerturkmen](https://github.com/canerturkmen) ([#5308](https://github.com/autogluon/autogluon/pull/5308))
- Move prediction cache out of trainer. [@canerturkmen](https://github.com/canerturkmen) ([#5313](https://github.com/autogluon/autogluon/pull/5313))
- Refactor trainer methods with ensemble logic. [@canerturkmen](https://github.com/canerturkmen) ([#5375](https://github.com/autogluon/autogluon/pull/5375))
- Use builtin generics for typing, remove types in internal docstrings. [@canerturkmen](https://github.com/canerturkmen) ([#5300](https://github.com/autogluon/autogluon/pull/5300))
- Reorganize ensembles, add base class for array-based ensemble learning. [@canerturkmen](https://github.com/canerturkmen) ([#5332](https://github.com/autogluon/autogluon/pull/5332))
- Separate ensemble training logic from trainer. [@canerturkmen](https://github.com/canerturkmen) ([#5384](https://github.com/autogluon/autogluon/pull/5384))
- Clean up typing and documentation for Chronos. [@canerturkmen](https://github.com/canerturkmen) ([#5392](https://github.com/autogluon/autogluon/pull/5392))
- Add timer utility, fix time limit in ensemble regressors, clean up tests. [@canerturkmen](https://github.com/canerturkmen) ([#5393](https://github.com/autogluon/autogluon/pull/5393))
- Upgrade type annotations to Python 3.10. [@canerturkmen](https://github.com/canerturkmen) ([#5431](https://github.com/autogluon/autogluon/pull/5431))
--------
## Multimodal
### Fixes and Improvements
- Bug Fix and Update AutoMM Tutorials. [@FANGAreNotGnu](https://github.com/FANGAreNotGnu) ([#5167](https://github.com/autogluon/autogluon/pull/5167))
- Fix Focal Loss. [@FANGAreNotGnu](https://github.com/FANGAreNotGnu) ([#5496](https://github.com/autogluon/autogluon/pull/5496))
- Fix false positive document detection for images with incidental text. [@FANGAreNotGnu](https://github.com/FANGAreNotGnu) ([#5469](https://github.com/autogluon/autogluon/pull/5469))
--------
## Documentation and CI
- [doc] Clarify tuning_data documentation. [@Innixma](https://github.com/Innixma) ([#5296](https://github.com/autogluon/autogluon/pull/5296))
- [Test] Fix CI + Upgrade Ray. [@prateekdesai04](https://github.com/prateekdesai04) ([#5306](https://github.com/autogluon/autogluon/pull/5306))
- Fix notebook build failures. [@prateekdesai04](https://github.com/prateekdesai04) ([#5348](https://github.com/autogluon/autogluon/pull/5348))
- ci: scope down GitHub Token permissions. [@AdnaneKhan](https://github.com/AdnaneKhan) ([#5351](https://github.com/autogluon/autogluon/pull/5351))
- Fix CodeQL GitHub action. [@shchur](https://github.com/shchur) ([#5367](https://github.com/autogluon/autogluon/pull/5367))
- [CI] Fix docker build. [@prateekdesai04](https://github.com/prateekdesai04) ([#5402](https://github.com/autogluon/autogluon/pull/5402))
- [docs] Reorder modules in docs. [@shchur](https://github.com/shchur) ([#5404](https://github.com/autogluon/autogluon/pull/5404))
- Remove ROADMAP.md. [@canerturkmen](https://github.com/canerturkmen) ([#5405](https://github.com/autogluon/autogluon/pull/5405))
- [docs] Add citations for Chronos-2 and multi-layer stacking for TS. [@shchur](https://github.com/shchur) ([#5412](https://github.com/autogluon/autogluon/pull/5412))
- Fix permissions for platform_tests action. [@shchur](https://github.com/shchur) ([#5418](https://github.com/autogluon/autogluon/pull/5418))
- Revert "Fix permissions for platform_tests action". [@shchur](https://github.com/shchur) ([#5419](https://github.com/autogluon/autogluon/pull/5419))
- Fix torch<2.10 issues in the CI. [@shchur](https://github.com/shchur) ([#5435](https://github.com/autogluon/autogluon/pull/5435))
--------
## Contributors
Full Contributor List (ordered by # of commits):
[@shchur](https://github.com/shchur) [@canerturkmen](https://github.com/canerturkmen) [@Innixma](https://github.com/Innixma) [@prateekdesai04](https://github.com/prateekdesai04) [@abdulfatir](https://github.com/abdulfatir) [@LennartPurucker](https://github.com/LennartPurucker) [@celestinoxp](https://github.com/celestinoxp) [@FANGAreNotGnu](https://github.com/FANGAreNotGnu) [@xiyuanzh](https://github.com/xiyuanzh) [@nathanaelbosch](https://github.com/nathanaelbosch) [@betatim](https://github.com/betatim) [@AdnaneKhan](https://github.com/AdnaneKhan) [@paulbkoch](https://github.com/paulbkoch) [@shou10152208](https://github.com/shou10152208) [@ryuichi-ichinose](https://github.com/ryuichi-ichinose) [@atschalz](https://github.com/atschalz) [@colesussmeier](https://github.com/colesussmeier)
### New Contributors
- [@betatim](https://github.com/betatim) made their first contribution in ([#5386](https://github.com/autogluon/autogluon/pull/5386))
- [@AdnaneKhan](https://github.com/AdnaneKhan) made their first contribution in ([#5351](https://github.com/autogluon/autogluon/pull/5351))
- [@paulbkoch](https://github.com/paulbkoch) made their first contribution in ([#4480](https://github.com/autogluon/autogluon/pull/4480))
- [@shou10152208](https://github.com/shou10152208) made their first contribution in ([#5073](https://github.com/autogluon/autogluon/pull/5073))
- [@ryuichi-ichinose](https://github.com/ryuichi-ichinose) made their first contribution in ([#5458](https://github.com/autogluon/autogluon/pull/5458))
- [@atschalz](https://github.com/atschalz) made their first contribution in ([#5441](https://github.com/autogluon/autogluon/pull/5441))
- [@colesussmeier](https://github.com/colesussmeier) made their first contribution in ([#5452](https://github.com/autogluon/autogluon/pull/5452))