{ "cells": [ { "cell_type": "markdown", "id": "9d3fb084", "metadata": {}, "source": [ "# Image-to-Image Semantic Matching with AutoMM \n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/autogluon/autogluon/blob/master/docs/tutorials/multimodal/matching/image2image_matching.ipynb)\n", "[![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/autogluon/autogluon/blob/master/docs/tutorials/multimodal/matching/image2image_matching.ipynb)\n", "\n", "\n", "\n", "Computing the similarity between two images is a common task in computer vision, with several practical applications such as detecting same or different product, etc. In general, image similarity models will take two images as input and transform them into vectors, and then similarity scores calculated using cosine similarity, dot product, or Euclidean distances are used to measure how alike or different of the two images." ] }, { "cell_type": "code", "execution_count": null, "id": "aa00faab-252f-44c9-b8f7-57131aa8251c", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "!pip install autogluon.multimodal\n" ] }, { "cell_type": "code", "execution_count": null, "id": "a99807ec", "metadata": {}, "outputs": [], "source": [ "import os\n", "import pandas as pd\n", "import warnings\n", "from IPython.display import Image, display\n", "warnings.filterwarnings('ignore')" ] }, { "cell_type": "markdown", "id": "09312bee", "metadata": {}, "source": [ "## Prepare your Data\n", "In this tutorial, we will demonstrate how to use AutoMM for image-to-image semantic matching with the simplified Stanford Online Products dataset ([SOP](https://cvgl.stanford.edu/projects/lifted_struct/)). \n", "\n", "Stanford Online Products dataset is introduced for metric learning. There are 12 categories of products in this dataset: bicycle, cabinet, chair, coffee maker, fan, kettle, lamp, mug, sofa, stapler, table and toaster. Each category has some products, and each product has several images captured from different views. Here, we consider different views of the same product as positive pairs (labeled as 1) and images from different products as negative pairs (labeled as 0). \n", "\n", "The following code downloads the dataset and unzip the images and annotation files." ] }, { "cell_type": "code", "execution_count": null, "id": "23522f4c", "metadata": {}, "outputs": [], "source": [ "download_dir = './ag_automm_tutorial_img2img'\n", "zip_file = 'https://automl-mm-bench.s3.amazonaws.com/Stanford_Online_Products.zip'\n", "from autogluon.core.utils.loaders import load_zip\n", "load_zip.unzip(zip_file, unzip_dir=download_dir)" ] }, { "cell_type": "markdown", "id": "ec7f0420", "metadata": {}, "source": [ "Then we can load the annotations into dataframes." ] }, { "cell_type": "code", "execution_count": null, "id": "d34b7b16", "metadata": {}, "outputs": [], "source": [ "dataset_path = os.path.join(download_dir, 'Stanford_Online_Products')\n", "train_data = pd.read_csv(f'{dataset_path}/train.csv', index_col=0)\n", "test_data = pd.read_csv(f'{dataset_path}/test.csv', index_col=0)\n", "image_col_1 = \"Image1\"\n", "image_col_2 = \"Image2\"\n", "label_col = \"Label\"\n", "match_label = 1" ] }, { "cell_type": "markdown", "id": "104c2d3e", "metadata": {}, "source": [ "Here you need to specify the `match_label`, the label class representing that a pair semantically match. 
{ "cell_type": "markdown", "id": "104c2d3e", "metadata": {}, "source": [ "Here you need to specify `match_label`, the label class indicating that a pair semantically matches. In this demo dataset we use 1, since we assigned 1 to image pairs from the same product. Choose `match_label` according to your own task context.\n", "\n", "Next, we expand the image paths, since the original paths are relative." ] }, { "cell_type": "code", "execution_count": null, "id": "a7952021", "metadata": {}, "outputs": [], "source": [ "def path_expander(path, base_folder):\n", "    path_l = path.split(';')\n", "    return ';'.join([os.path.abspath(os.path.join(base_folder, p)) for p in path_l])\n", "\n", "for image_col in [image_col_1, image_col_2]:\n", "    train_data[image_col] = train_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))\n", "    test_data[image_col] = test_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))" ] }, { "cell_type": "markdown", "id": "99b5568d", "metadata": {}, "source": [ "The annotations are simply image path pairs and their binary labels (1 means the pair matches; 0 means it does not)." ] }, { "cell_type": "code", "execution_count": null, "id": "c34e9b37", "metadata": {}, "outputs": [], "source": [ "train_data.head()" ] }, { "cell_type": "markdown", "id": "6c0f403a", "metadata": {}, "source": [ "Let's visualize a matching image pair." ] }, { "cell_type": "code", "execution_count": null, "id": "cf227027", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=train_data[image_col_1][5])\n", "display(pil_img)" ] }, { "cell_type": "code", "execution_count": null, "id": "abd09a8d", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=train_data[image_col_2][5])\n", "display(pil_img)" ] }, { "cell_type": "markdown", "id": "540da2ba", "metadata": {}, "source": [ "Here are two images that do not match." ] }, { "cell_type": "code", "execution_count": null, "id": "6de853a1", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=train_data[image_col_1][0])\n", "display(pil_img)" ] }, { "cell_type": "code", "execution_count": null, "id": "a7e8f748", "metadata": {}, "outputs": [], "source": [ "pil_img = Image(filename=train_data[image_col_2][0])\n", "display(pil_img)" ] }, { "cell_type": "markdown", "id": "8422fe59", "metadata": {}, "source": [ "## Train your Model\n", "\n", "Ideally, we want a model that returns high scores for positive image pairs and low scores for negative ones. With AutoMM, we can easily train a model that captures the semantic relationship between images. Under the hood, it uses a [Swin Transformer](https://arxiv.org/abs/2103.14030) to project each image into a high-dimensional vector and computes the cosine similarity of the feature vectors.\n", "\n", "With AutoMM, you just need to specify the `query`, `response`, and `label` column names and fit the model on the training dataset without worrying about the implementation details." ] },
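{ "cell_type": "markdown", "id": "3fa1c2de", "metadata": {}, "source": [ "Before fitting, here is a toy NumPy sketch of the scoring step, purely for intuition: it computes the cosine similarity of two random stand-in vectors rather than real image embeddings (AutoMM performs the real computation internally)." ] }, { "cell_type": "code", "execution_count": null, "id": "5b2d9e01", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Toy illustration of the scoring step: cosine similarity of two\n", "# embedding vectors (random stand-ins for real image embeddings).\n", "rng = np.random.default_rng(0)\n", "emb_a, emb_b = rng.normal(size=128), rng.normal(size=128)\n", "cos_sim = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))\n", "print(f\"cosine similarity: {cos_sim:.3f}\")  # in [-1, 1]; higher means more alike" ] },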
] }, { "cell_type": "code", "execution_count": null, "id": "cd3524c7", "metadata": {}, "outputs": [], "source": [ "from autogluon.multimodal import MultiModalPredictor\n", "predictor = MultiModalPredictor(\n", " problem_type=\"image_similarity\",\n", " query=image_col_1, # the column name of the first image\n", " response=image_col_2, # the column name of the second image\n", " label=label_col, # the label column name\n", " match_label=match_label, # the label indicating that query and response have the same semantic meanings.\n", " eval_metric='auc', # the evaluation metric\n", " )\n", " \n", "# Fit the model\n", "predictor.fit(\n", " train_data=train_data,\n", " time_limit=180,\n", ")" ] }, { "cell_type": "markdown", "id": "257074de", "metadata": {}, "source": [ "## Evaluate on Test Dataset\n", "You can evaluate the predictor on the test dataset to see how it performs with the roc_auc score:" ] }, { "cell_type": "code", "execution_count": null, "id": "4c471904", "metadata": {}, "outputs": [], "source": [ "score = predictor.evaluate(test_data)\n", "print(\"evaluation score: \", score)" ] }, { "cell_type": "markdown", "id": "1192e3f6", "metadata": {}, "source": [ "## Predict on Image Pairs\n", "Given new image pairs, we can predict whether they match or not." ] }, { "cell_type": "code", "execution_count": null, "id": "5c9f3554", "metadata": {}, "outputs": [], "source": [ "pred = predictor.predict(test_data.head(3))\n", "print(pred)" ] }, { "cell_type": "markdown", "id": "8c3ad3e2", "metadata": {}, "source": [ "The predictions use a naive probability threshold 0.5. That is, we choose the label with the probability larger than 0.5.\n", "\n", "## Predict Matching Probabilities\n", "However, you can do more customized thresholding by getting probabilities." ] }, { "cell_type": "code", "execution_count": null, "id": "76435bdb", "metadata": {}, "outputs": [], "source": [ "proba = predictor.predict_proba(test_data.head(3))\n", "print(proba)" ] }, { "cell_type": "markdown", "id": "50bfbb74", "metadata": {}, "source": [ "## Extract Embeddings\n", "You can also extract embeddings for each image of a pair." ] }, { "cell_type": "code", "execution_count": null, "id": "c8173399", "metadata": {}, "outputs": [], "source": [ "embeddings_1 = predictor.extract_embedding({image_col_1: test_data[image_col_1][:5].tolist()})\n", "print(embeddings_1.shape)\n", "embeddings_2 = predictor.extract_embedding({image_col_2: test_data[image_col_2][:5].tolist()})\n", "print(embeddings_2.shape)" ] }, { "cell_type": "markdown", "id": "42a3e8b0", "metadata": {}, "source": [ "## Other Examples\n", "\n", "You may go to [AutoMM Examples](https://github.com/autogluon/autogluon/tree/master/examples/automm) to explore other examples about AutoMM.\n", "\n", "\n", "## Customization\n", "\n", "To learn how to customize AutoMM, please refer to [Customize AutoMM](../advanced_topics/customization.ipynb)." ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 5 }