AutoGluon Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models
Foundation models have transformed the landscape of fields such as computer vision and natural language processing. Pre-trained on extensive common-domain data, they serve as powerful tools for a wide range of applications. However, seamlessly integrating foundation models into real-world applications remains challenging: the diversity of data modalities, the multitude of available foundation models, and their considerable sizes make this integration a nontrivial task.
AutoMM is dedicated to breaking these barriers by substantially reducing the engineering effort and manual intervention required in data preprocessing, model selection, and fine-tuning. With AutoMM, users can effortlessly adapt foundation models (from popular model zoos like HuggingFace, TIMM, MMDetection) to their domain-specific data using just three lines of code. Our toolkit accommodates various data types, including image, text, tabular, and document data, either individually or in combination. It offers support for an array of tasks, encompassing classification, regression, object detection, named entity recognition, semantic matching, and image segmentation. AutoMM represents a state-of-the-art and user-friendly solution, empowering multimodal AutoML with foundation models.
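To make the "three lines of code" concrete, here is a minimal sketch using the `MultiModalPredictor` API. The toy text-classification DataFrame, its column names, and the `time_limit` value are placeholders chosen for illustration; in practice the DataFrame can mix image paths, text, tabular, and document columns.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Toy training data (placeholder). Real data can combine image paths, text,
# and tabular features in a single DataFrame.
train_data = pd.DataFrame({
    "text": [
        "great product, works as advertised",
        "arrived broken and late",
        "excellent build quality",
        "would not buy again",
    ],
    "label": [1, 0, 1, 0],
})

# The three-line workflow: create a predictor for the target column and fit it.
predictor = MultiModalPredictor(label="label")
predictor.fit(train_data, time_limit=60)  # time_limit (seconds) is optional

# Inference on new data.
test_data = pd.DataFrame({"text": ["fantastic value for money"]})
print(predictor.predict(test_data))
```

Under the hood, AutoMM infers the problem type and data modalities from the DataFrame, selects and fine-tunes a suitable foundation model, so the same three lines apply whether the input is text, images, tabular features, or a mix of them.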
In the following sections, we break down the functionalities of AutoMM and provide a step-by-step guide for each.