Multimodal Prediction
For problems on multimodal data tables that contain image, text, and tabular data, AutoGluon provides MultiModalPredictor (abbreviated as AutoMM), which automatically selects, fuses, and tunes foundation models from popular packages such as timm, huggingface/transformers, CLIP, and MMDetection.
You can use AutoMM not only to solve standard NLP/vision tasks such as sentiment classification, intent detection, paraphrase detection, and image classification, but also for multimodal problems that involve image, text, tabular features, object bounding boxes, named entities, etc. Moreover, AutoMM can serve as a base model in the multi-layer stack ensemble of AutoGluon Tabular, and it powers the FT-Transformer in TabularPredictor.
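Before the detailed tutorials below, here is a minimal sketch of the basic workflow; the file name and column name are placeholders, not taken from a specific tutorial.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Any pandas DataFrame whose columns mix text, numerical, and categorical data works.
# "train.csv", "test.csv", and the "label" column name are placeholders for your own data.
train_data = pd.read_csv("train.csv")
test_data = pd.read_csv("test.csv")

# AutoMM infers the problem type (classification vs. regression)
# and the modality of each column automatically.
predictor = MultiModalPredictor(label="label")
predictor.fit(train_data, time_limit=120)  # seconds; a larger budget usually helps

scores = predictor.evaluate(test_data)
predictions = predictor.predict(test_data.drop(columns=["label"]))
```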
Here are some example use-cases of AutoMM:
Multilingual text classification. [Tutorial].
Predicting pets’ popularity based on their description, photo, and other metadata. [Tutorial] [Example].
Predicting the price of a book. [Tutorial].
Scoring students' essays. [Example].
Image classification. [Tutorial].
Object detection. [Tutorial] [Example].
Extracting named entities. [Tutorial].
Searching for relevant text/images via text queries. [Tutorial].
Document Classification (Experimental). [Tutorial].
In the following sections, we break down the functionalities of AutoMM and provide a step-by-step guide for each.
Text Data
How to train high-quality text prediction models with MultiModalPredictor.
How to use MultiModalPredictor to build models on datasets with languages other than English.
How to use MultiModalPredictor for entity extraction.
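The entity extraction workflow covered by the last tutorial above differs from ordinary text prediction mainly in the problem type. A minimal sketch, assuming a DataFrame whose label column holds entity-span annotations in the format described in that tutorial; the file and column names here are placeholders.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# "ner_train.csv" is a placeholder; its label column is assumed to contain
# entity-span annotations in the format described in the entity extraction tutorial.
train_data = pd.read_csv("ner_train.csv")

predictor = MultiModalPredictor(
    problem_type="ner",          # switch from ordinary text prediction to entity extraction
    label="entity_annotations",  # placeholder label column name
)
predictor.fit(train_data, time_limit=300)

# Prediction takes a DataFrame with the same text column used during training
# ("text_snippet" is a placeholder) and returns the extracted entities.
entities = predictor.predict(
    pd.DataFrame({"text_snippet": ["AutoGluon was developed at Amazon."]})
)
```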
Image Data – Classification / Regression
How to train image classification models with MultiModalPredictor.
How to enable zero-shot image classification in AutoMM via a pretrained CLIP model.
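For the image classification tutorial above, the data table simply stores image file paths in one column. A minimal sketch; the file paths and column names are placeholders.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# The table needs one column of image file paths and one label column.
# "image_train.csv" and the column names are placeholders for your own data, e.g.:
#   image,label
#   data/train/cat_001.jpg,cat
#   data/train/dog_001.jpg,dog
train_data = pd.read_csv("image_train.csv")

predictor = MultiModalPredictor(label="label")
predictor.fit(train_data, time_limit=600)

# Predict on new images by passing a DataFrame with the same image column.
test_data = pd.DataFrame({"image": ["data/test/unknown_001.jpg"]})
print(predictor.predict(test_data))
```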
Image Data – Object Detection
How to train a high-quality object detection model with MultiModalPredictor in under 5 minutes on a COCO-format dataset.
How to prepare COCO2017 dataset for object detection.
How to prepare Pascal VOC dataset for object detection.
How to prepare Watercolor dataset for object detection.
How to convert a dataset from VOC format to COCO format for object detection.
How to use pd.DataFrame format for object detection.
How to fast finetune a pretrained model on a dataset in COCO format.
How to finetune a pretrained model on a dataset in COCO format with high performance.
How to evaluate the very fast pretrained YOLOv3 model on a dataset in COCO format.
How to evaluate the pretrained Faster R-CNN model with high performance on a dataset in COCO format.
How to evaluate the pretrained Deformable DETR model with higher performance on a dataset in COCO format.
How to evaluate the pretrained Faster R-CNN model on a dataset in VOC format.
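The detection tutorials above share one call pattern. A hedged sketch, assuming a dataset already in COCO format; the paths are placeholders and the exact arguments are documented in the detection tutorials.

```python
from autogluon.multimodal import MultiModalPredictor

# Paths to COCO-format annotation files; placeholders for your own dataset.
train_path = "coco_dataset/annotations/trainval_cocoformat.json"
test_path = "coco_dataset/annotations/test_cocoformat.json"

# problem_type="object_detection" switches AutoMM to its detection pipeline
# (backed by MMDetection models).
predictor = MultiModalPredictor(
    problem_type="object_detection",
    sample_data_path=train_path,  # lets AutoMM infer the object classes
)

predictor.fit(train_path)
metrics = predictor.evaluate(test_path)  # reports COCO-style mAP
print(metrics)
```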
Document Data
How to use MultiModalPredictor to build a scanned document classifier.
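The scanned-document workflow follows the same table-based pattern as image classification: each row stores the path to a document image plus its class label. A minimal sketch under that assumption; the file and column names are placeholders, and the tutorial covers document-specific model choices.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# "doc_train.csv" / "doc_test.csv" are placeholders with columns like
#   doc_path (path to a scanned-document image), label
train_data = pd.read_csv("doc_train.csv")

predictor = MultiModalPredictor(label="label")
predictor.fit(train_data, time_limit=600)

predictions = predictor.predict(pd.read_csv("doc_test.csv"))
```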
Matching
How to use AutoMM for text to text matching.
How to use semantic embeddings to improve search ranking performance.
How to use CLIP to extract embeddings for a retrieval problem.
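A common building block in the matching and retrieval tutorials above is embedding extraction. A minimal sketch that reuses a text predictor trained as in the text tutorials and ranks a small corpus by cosine similarity; it assumes extract_embedding returns one NumPy vector per row, and the file and column names are placeholders.

```python
import numpy as np
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# "text_train.csv" is a placeholder with a "text" column and a "label" target.
predictor = MultiModalPredictor(label="label")
predictor.fit(pd.read_csv("text_train.csv"), time_limit=300)

corpus = pd.DataFrame({"text": [
    "AutoGluon automates machine learning.",
    "CLIP aligns images and text.",
    "Pandas is a dataframe library.",
]})
query = pd.DataFrame({"text": ["automatic ML toolkits"]})

# extract_embedding returns one embedding per row; rank corpus entries by
# cosine similarity to the query embedding.
corpus_emb = predictor.extract_embedding(corpus)
query_emb = predictor.extract_embedding(query)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(query_emb[0], emb) for emb in corpus_emb]
print(sorted(zip(scores, corpus["text"]), reverse=True))
```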
Multimodal Data
How MultiModalPredictor can be applied to multimodal data tables with a mix of text, numerical, and categorical columns. Here, we train a model to predict the price of books.
How to use MultiModalPredictor to train a model that predicts the adoption speed of pets.
How to use MultiModalPredictor to train a model for multimodal named entity recognition.
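The multimodal tutorials above all reduce to one pattern: put every modality into a single DataFrame and let AutoMM detect the column types, including image path columns. A hedged sketch with placeholder file and column names, mixing a photo column with text and tabular features.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# One row per pet: a photo path, a free-form description, tabular metadata,
# and the target. "pets_train.csv" / "pets_test.csv" and the column names
# (photo, description, age, breed, adoption_speed) are placeholders.
train_data = pd.read_csv("pets_train.csv")
test_data = pd.read_csv("pets_test.csv")

predictor = MultiModalPredictor(label="adoption_speed")
predictor.fit(train_data, time_limit=1800)

print(predictor.evaluate(test_data))
print(predictor.predict(test_data.drop(columns=["adoption_speed"])))
```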
Advanced Topics
How to take advantage of larger foundation models with the help of parameter-efficient finetuning. In the tutorial, we combine IA^3, BitFit, and gradient checkpointing to finetune FLAN-T5-XL.
How to do hyperparameter optimization in AutoMM.
How to do knowledge distillation in AutoMM.
How to customize AutoMM configurations.
How to use AutoMM presets.
How to use SVM combined with feature extraction for few shot learning.
How to use focal loss in AutoMM.
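Most of the knobs covered by these advanced tutorials are passed through two arguments: presets and hyperparameters. A hedged sketch; the preset name follows the presets tutorial, and the config key below follows the customization tutorial but may differ across AutoMM versions.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

train_data = pd.read_csv("train.csv")  # placeholder

# Presets trade accuracy for speed, e.g. "medium_quality", "high_quality", "best_quality".
predictor = MultiModalPredictor(label="label", presets="high_quality")

# Fine-grained control goes through `hyperparameters`; the key below swaps the
# text backbone (key names may change between AutoMM releases).
predictor.fit(
    train_data,
    time_limit=600,
    hyperparameters={"model.hf_text.checkpoint_name": "google/electra-small-discriminator"},
)
```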