{ "cells": [ { "cell_type": "markdown", "id": "eca05d74", "metadata": {}, "source": [ "# Image-Text Semantic Matching with AutoMM\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/autogluon/autogluon/blob/stable/docs/tutorials/multimodal/semantic_matching/image_text_matching.ipynb)\n", "[![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/autogluon/autogluon/blob/stable/docs/tutorials/multimodal/semantic_matching/image_text_matching.ipynb)\n", "\n", "\n", "\n", "Vision and language are two important aspects of human intelligence to understand the real world. Image-text semantic matching, measuring the visual-semantic\n", "similarity between image and text, plays a critical role in bridging the vision and language. \n", "Learning a joint space where text\n", "and image feature vectors are aligned is a typical solution for image-text matching. It is becoming increasingly significant for various vision-and-language tasks,\n", "such as cross-modal retrieval, image\n", "captioning, text-to-image synthesis, and multimodal neural machine translation. This tutorial will introduce how to apply AutoMM to the image-text matching task." ] }, { "cell_type": "code", "execution_count": null, "id": "aa00faab-252f-44c9-b8f7-57131aa8251c", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "!pip install autogluon.multimodal\n" ] }, { "cell_type": "code", "execution_count": null, "id": "c75d0317", "metadata": {}, "outputs": [], "source": [ "import os\n", "import warnings\n", "from IPython.display import Image, display\n", "import numpy as np\n", "warnings.filterwarnings('ignore')\n", "np.random.seed(123)" ] }, { "cell_type": "markdown", "id": "71779b6e", "metadata": {}, "source": [ "## Dataset\n", "\n", "In this tutorial, we will use the Flickr30K dataset to demonstrate the image-text matching.\n", "The Flickr30k dataset is a popular benchmark for sentence-based picture portrayal. The dataset is comprised of 31,783 images that capture people engaged in everyday activities and events. Each image has a descriptive caption. We organized the dataset using pandas dataframe. To get started, Let's download the dataset." ] }, { "cell_type": "code", "execution_count": null, "id": "21aa9620", "metadata": {}, "outputs": [], "source": [ "from autogluon.core.utils.loaders import load_pd\n", "import pandas as pd\n", "download_dir = './ag_automm_tutorial_imgtxt'\n", "zip_file = 'https://automl-mm-bench.s3.amazonaws.com/flickr30k.zip'\n", "from autogluon.core.utils.loaders import load_zip\n", "load_zip.unzip(zip_file, unzip_dir=download_dir)" ] }, { "cell_type": "markdown", "id": "c4755989", "metadata": {}, "source": [ "Then we will load the csv files." ] }, { "cell_type": "code", "execution_count": null, "id": "7249ca84", "metadata": {}, "outputs": [], "source": [ "dataset_path = os.path.join(download_dir, 'flickr30k_processed')\n", "train_data = pd.read_csv(f'{dataset_path}/train.csv', index_col=0)\n", "val_data = pd.read_csv(f'{dataset_path}/val.csv', index_col=0)\n", "test_data = pd.read_csv(f'{dataset_path}/test.csv', index_col=0)\n", "image_col = \"image\"\n", "text_col = \"caption\"" ] }, { "cell_type": "markdown", "id": "ff40907f", "metadata": {}, "source": [ "We also need to expand the relative image paths to use their absolute local paths." 
] }, { "cell_type": "code", "execution_count": null, "id": "b8291260", "metadata": {}, "outputs": [], "source": [ "def path_expander(path, base_folder):\n", " path_l = path.split(';')\n", " return ';'.join([os.path.abspath(os.path.join(base_folder, path)) for path in path_l])\n", "\n", "train_data[image_col] = train_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))\n", "val_data[image_col] = val_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))\n", "test_data[image_col] = test_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))" ] }, { "cell_type": "markdown", "id": "c63101fc", "metadata": {}, "source": [ "Take `train_data` for example, let's see how the data look like in the dataframe." ] }, { "cell_type": "code", "execution_count": null, "id": "bd5c068b", "metadata": {}, "outputs": [], "source": [ "train_data.head()" ] }, { "cell_type": "markdown", "id": "c0a4a47c", "metadata": {}, "source": [ "Each row is one image and text pair, implying that they match each other. Since one image corresponds to five captions in the dataset, we copy each image path five times to build the correspondences. We can visualize one image-text pair." ] }, { "cell_type": "code", "execution_count": null, "id": "606b2939", "metadata": {}, "outputs": [], "source": [ "train_data[text_col][0]" ] }, { "cell_type": "code", "execution_count": null, "id": "826b343c", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=train_data[image_col][0])\n", "display(pil_img)" ] }, { "cell_type": "markdown", "id": "ba4f0ca2", "metadata": {}, "source": [ "To perform evaluation or semantic search, we need to extract the unique image and text items from `text_data` and add one label column in the `test_data`." ] }, { "cell_type": "code", "execution_count": null, "id": "6abd469d", "metadata": {}, "outputs": [], "source": [ "test_image_data = pd.DataFrame({image_col: test_data[image_col].unique().tolist()})\n", "test_text_data = pd.DataFrame({text_col: test_data[text_col].unique().tolist()})\n", "test_data_with_label = test_data.copy()\n", "test_label_col = \"relevance\"\n", "test_data_with_label[test_label_col] = [1] * len(test_data)" ] }, { "cell_type": "markdown", "id": "b130e72a", "metadata": {}, "source": [ "## Initialize Predictor\n", "To initialize a predictor for image-text matching, we need to set `problem_type` as `image_text_similarity`. `query` and `response` refer to the two dataframe columns in which two items in one row should match each other. You can set `query=text_col` and `response=image_col`, or `query=image_col` and `response=text_col`. In image-text matching, `query` and `response` are equivalent." ] }, { "cell_type": "code", "execution_count": null, "id": "e7b287d2", "metadata": {}, "outputs": [], "source": [ "from autogluon.multimodal import MultiModalPredictor\n", "predictor = MultiModalPredictor(\n", " query=text_col,\n", " response=image_col,\n", " problem_type=\"image_text_similarity\",\n", " eval_metric=\"recall\",\n", " )" ] }, { "cell_type": "markdown", "id": "28984367", "metadata": {}, "source": [ "By initializing the predictor for `image_text_similarity`, you have loaded the pretrained CLIP backbone `openai/clip-vit-base-patch32`.\n", "\n", "## Directly Evaluate on Test Dataset (Zero-shot)\n", "You may be interested in getting the pretrained model's performance on your data. Let's compute the text-to-image and image-to-text retrieval scores." 
] }, { "cell_type": "code", "execution_count": null, "id": "bcaa4415", "metadata": {}, "outputs": [], "source": [ "txt_to_img_scores = predictor.evaluate(\n", " data=test_data_with_label,\n", " query_data=test_text_data,\n", " response_data=test_image_data,\n", " label=test_label_col,\n", " cutoffs=[1, 5, 10],\n", " )\n", "img_to_txt_scores = predictor.evaluate(\n", " data=test_data_with_label,\n", " query_data=test_image_data,\n", " response_data=test_text_data,\n", " label=test_label_col,\n", " cutoffs=[1, 5, 10],\n", " )\n", "print(f\"txt_to_img_scores: {txt_to_img_scores}\")\n", "print(f\"img_to_txt_scores: {img_to_txt_scores}\")" ] }, { "cell_type": "markdown", "id": "de420577", "metadata": {}, "source": [ "Here we report the `recall`, which is the `eval_metric` in initializing the predictor above. One `cutoff` value means using the top k retrieved items to calculate the score. You may find that the text-to-image recalls are much higher than the image-to-text recalls. This is because each image is paired with five texts. In image-to-text retrieval, the upper bound of `recall@1` is 20%, which means that the top-1 text is correct, but there are totally five texts to retrieve.\n", "\n", "## Finetune Predictor\n", "After measuring the pretrained performance, we can finetune the model on our dataset to see whether we can get improvements. For a quick demo, here we set the time limit to 180 seconds." ] }, { "cell_type": "code", "execution_count": null, "id": "ad4a2cb5", "metadata": {}, "outputs": [], "source": [ "predictor.fit(\n", " train_data=train_data,\n", " tuning_data=val_data,\n", " time_limit=180,\n", " )" ] }, { "cell_type": "markdown", "id": "22d1ff52", "metadata": {}, "source": [ "## Evaluate the Finetuned Model on the Test Dataset\n", "Now Let's evaluate the finetuned model. Similarly, we also compute the recalls of text-to-image and image-to-text retrievals." ] }, { "cell_type": "code", "execution_count": null, "id": "96806960", "metadata": {}, "outputs": [], "source": [ "txt_to_img_scores = predictor.evaluate(\n", " data=test_data_with_label,\n", " query_data=test_text_data,\n", " response_data=test_image_data,\n", " label=test_label_col,\n", " cutoffs=[1, 5, 10],\n", " )\n", "img_to_txt_scores = predictor.evaluate(\n", " data=test_data_with_label,\n", " query_data=test_image_data,\n", " response_data=test_text_data,\n", " label=test_label_col,\n", " cutoffs=[1, 5, 10],\n", " )\n", "print(f\"txt_to_img_scores: {txt_to_img_scores}\")\n", "print(f\"img_to_txt_scores: {img_to_txt_scores}\")" ] }, { "cell_type": "markdown", "id": "fdacb792", "metadata": {}, "source": [ "We can observe large improvements over the zero-shot predictor. This means that finetuning CLIP on our customized data may help achieve better performance.\n", "\n", "## Predict Whether Image and Text Match\n", "Whether finetuned or not, the predictor can predict whether image and text pairs match." ] }, { "cell_type": "code", "execution_count": null, "id": "75378474", "metadata": {}, "outputs": [], "source": [ "pred = predictor.predict(test_data.head(5))\n", "print(pred)" ] }, { "cell_type": "markdown", "id": "15f39684", "metadata": {}, "source": [ "## Predict Matching Probabilities\n", "The predictor can also return to you the matching probabilities." 
] }, { "cell_type": "code", "execution_count": null, "id": "b46c79ac", "metadata": {}, "outputs": [], "source": [ "proba = predictor.predict_proba(test_data.head(5))\n", "print(proba)" ] }, { "cell_type": "markdown", "id": "59775b35", "metadata": {}, "source": [ "The second column is the probability of being a match.\n", "\n", "## Extract Embeddings\n", "Another common user case is to extract image and text embeddings." ] }, { "cell_type": "code", "execution_count": null, "id": "3019ce60", "metadata": {}, "outputs": [], "source": [ "image_embeddings = predictor.extract_embedding({image_col: test_image_data[image_col][:5].tolist()})\n", "print(image_embeddings.shape) " ] }, { "cell_type": "code", "execution_count": null, "id": "877683e2", "metadata": {}, "outputs": [], "source": [ "text_embeddings = predictor.extract_embedding({text_col: test_text_data[text_col][:5].tolist()})\n", "print(text_embeddings.shape)" ] }, { "cell_type": "markdown", "id": "fd492784", "metadata": {}, "source": [ "## Semantic Search\n", "We also provide an advanced util function to conduct semantic search. First, given one or more texts, we can search semantically similar images from an image database." ] }, { "cell_type": "code", "execution_count": null, "id": "272468e2", "metadata": {}, "outputs": [], "source": [ "from autogluon.multimodal.utils import semantic_search\n", "text_to_image_hits = semantic_search(\n", " matcher=predictor,\n", " query_data=test_text_data.iloc[[3]],\n", " response_data=test_image_data,\n", " top_k=5,\n", " )" ] }, { "cell_type": "markdown", "id": "ce5a714c", "metadata": {}, "source": [ "Let's visualize the text query and top-1 image response." ] }, { "cell_type": "code", "execution_count": null, "id": "997916fe", "metadata": {}, "outputs": [], "source": [ "test_text_data.iloc[[3]]" ] }, { "cell_type": "code", "execution_count": null, "id": "fcb4b0a8", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=test_image_data[image_col][text_to_image_hits[0][0]['response_id']])\n", "display(pil_img)" ] }, { "cell_type": "markdown", "id": "e9bcd6e3", "metadata": {}, "source": [ "Similarly, given one or more images, we can retrieve texts with similar semantic meanings." ] }, { "cell_type": "code", "execution_count": null, "id": "a5c63cc6", "metadata": {}, "outputs": [], "source": [ "image_to_text_hits = semantic_search(\n", " matcher=predictor,\n", " query_data=test_image_data.iloc[[6]],\n", " response_data=test_text_data,\n", " top_k=5,\n", " )" ] }, { "cell_type": "markdown", "id": "a86bf709", "metadata": {}, "source": [ "Let's visualize the image query and top-1 text response." ] }, { "cell_type": "code", "execution_count": null, "id": "e1d791cf", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=test_image_data[image_col][6])\n", "display(pil_img)" ] }, { "cell_type": "code", "execution_count": null, "id": "2aa4bc20", "metadata": {}, "outputs": [], "source": [ "test_text_data[text_col][image_to_text_hits[0][1]['response_id']]" ] }, { "cell_type": "markdown", "id": "3959c1c4", "metadata": {}, "source": [ "## Other Examples\n", "\n", "You may go to [AutoMM Examples](https://github.com/autogluon/autogluon/tree/master/examples/automm) to explore other examples about AutoMM.\n", "\n", "\n", "## Customization\n", "\n", "To learn how to customize AutoMM, please refer to [Customize AutoMM](../advanced_topics/customization.ipynb)." ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 5 }