{ "cells": [ { "cell_type": "markdown", "id": "388db11c", "metadata": {}, "source": [ "# AutoMM for Text - Quick Start\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/autogluon/autogluon/blob/stable/docs/tutorials/multimodal/text_prediction/beginner_text.ipynb)\n", "[![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/autogluon/autogluon/blob/stable/docs/tutorials/multimodal/text_prediction/beginner_text.ipynb)\n", "\n", "\n", "\n", "`MultiModalPredictor` can solve problems where the data are either image, text, numerical values, or categorical features. \n", "To get started, we first demonstrate how to use it to solve problems that only contain text. We pick two classical NLP problems for the purpose of demonstration:\n", "\n", "- [Sentiment Analysis](https://en.wikipedia.org/wiki/Sentiment_analysis)\n", "- [Sentence Similarity](https://arxiv.org/abs/1910.03940)\n", "\n", "Here, we format the NLP datasets as data tables where \n", "the feature columns contain text fields and the label column contain numerical (regression) / categorical (classification) values. \n", "Each row in the table corresponds to one training sample." ] }, { "cell_type": "code", "execution_count": null, "id": "aa00faab-252f-44c9-b8f7-57131aa8251c", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "!pip install autogluon.multimodal\n" ] }, { "cell_type": "code", "execution_count": null, "id": "2a13e1da", "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import numpy as np\n", "import warnings\n", "import matplotlib.pyplot as plt\n", "\n", "warnings.filterwarnings('ignore')\n", "np.random.seed(123)" ] }, { "cell_type": "markdown", "id": "5d16b44a", "metadata": {}, "source": [ "## Sentiment Analysis Task\n", "\n", "First, we consider the Stanford Sentiment Treebank ([SST](https://nlp.stanford.edu/sentiment/)) dataset, which consists of movie reviews and their associated sentiment. \n", "Given a new movie review, the goal is to predict the sentiment reflected in the text (in this case a **binary classification**, where reviews are \n", "labeled as 1 if they convey a positive opinion and labeled as 0 otherwise). Let's first load and look at the data, \n", "noting the labels are stored in a column called **label**." ] }, { "cell_type": "code", "execution_count": null, "id": "a7574175", "metadata": {}, "outputs": [], "source": [ "from autogluon.core.utils.loaders import load_pd\n", "train_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')\n", "test_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')\n", "subsample_size = 1000 # subsample data for faster demo, try setting this to larger values\n", "train_data = train_data.sample(n=subsample_size, random_state=0)\n", "train_data.head(10)" ] }, { "cell_type": "markdown", "id": "bf835d5d", "metadata": {}, "source": [ "Above the data happen to be stored in the [Parquet](https://databricks.com/glossary/what-is-parquet) format, but you can also directly `load()` data from a [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) file or other equivalent formats. \n", "While here we load files from [AWS S3 cloud storage](https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html), these could instead be local files on your machine. 
\n", "After loading, `train_data` is simply a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html), \n", "where each row represents a different training example.\n", "\n", "### Training\n", "\n", "To ensure this tutorial runs quickly, we simply call `fit()` with a subset of 1000 training examples and limit its runtime to approximately 1 minute.\n", "To achieve reasonable performance in your applications, you are recommended to set much longer `time_limit` (eg. 1 hour), or do not specify `time_limit` at all (`time_limit=None`)." ] }, { "cell_type": "code", "execution_count": null, "id": "d2535bb3", "metadata": {}, "outputs": [], "source": [ "from autogluon.multimodal import MultiModalPredictor\n", "import uuid\n", "model_path = f\"./tmp/{uuid.uuid4().hex}-automm_sst\"\n", "predictor = MultiModalPredictor(label='label', eval_metric='acc', path=model_path)\n", "predictor.fit(train_data, time_limit=180)" ] }, { "cell_type": "markdown", "id": "17f0013f", "metadata": {}, "source": [ "Above we specify that: the column named **label** contains the label values to predict, AutoGluon should optimize its predictions for the accuracy evaluation metric, \n", "trained models should be saved in the **automm_sst** folder, and training should run for around 60 seconds.\n", "\n", "### Evaluation\n", "\n", "After training, we can easily evaluate our predictor on separate test data formatted similarly to our training data." ] }, { "cell_type": "code", "execution_count": null, "id": "3caa5f80", "metadata": {}, "outputs": [], "source": [ "test_score = predictor.evaluate(test_data)\n", "print(test_score)" ] }, { "cell_type": "markdown", "id": "2aa302c2", "metadata": {}, "source": [ "By default, `evaluate()` will report the evaluation metric previously specified, which is `accuracy` in our example. You may also specify additional metrics, e.g. F1 score, when calling evaluate." ] }, { "cell_type": "code", "execution_count": null, "id": "75cb24d5", "metadata": {}, "outputs": [], "source": [ "test_score = predictor.evaluate(test_data, metrics=['acc', 'f1'])\n", "print(test_score)" ] }, { "cell_type": "markdown", "id": "f124ec9d", "metadata": {}, "source": [ "### Prediction\n", "\n", "And you can easily obtain predictions from these models by calling `predictor.predict()`." ] }, { "cell_type": "code", "execution_count": null, "id": "8f530bec", "metadata": {}, "outputs": [], "source": [ "sentence1 = \"it's a charming and often affecting journey.\"\n", "sentence2 = \"It's slow, very, very, very slow.\"\n", "predictions = predictor.predict({'sentence': [sentence1, sentence2]})\n", "print('\"Sentence\":', sentence1, '\"Predicted Sentiment\":', predictions[0])\n", "print('\"Sentence\":', sentence2, '\"Predicted Sentiment\":', predictions[1])" ] }, { "cell_type": "markdown", "id": "71b3f028", "metadata": {}, "source": [ "For classification tasks, you can ask for predicted class-probabilities instead of predicted classes." ] }, { "cell_type": "code", "execution_count": null, "id": "cfbfaea3", "metadata": {}, "outputs": [], "source": [ "probs = predictor.predict_proba({'sentence': [sentence1, sentence2]})\n", "print('\"Sentence\":', sentence1, '\"Predicted Class-Probabilities\":', probs[0])\n", "print('\"Sentence\":', sentence2, '\"Predicted Class-Probabilities\":', probs[1])" ] }, { "cell_type": "markdown", "id": "d5242813", "metadata": {}, "source": [ "We can just as easily produce predictions over an entire dataset." 
] }, { "cell_type": "code", "execution_count": null, "id": "0537e35b", "metadata": {}, "outputs": [], "source": [ "test_predictions = predictor.predict(test_data)\n", "test_predictions.head()" ] }, { "cell_type": "markdown", "id": "86da66ed", "metadata": {}, "source": [ "### Save and Load\n", "\n", "The trained predictor is automatically saved at the end of `fit()`, and you can easily reload it.\n", "\n", "```{warning}\n", "\n", "`MultiModalPredictor.load()` uses `pickle` module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source, or that could have been tampered with. **Only load data you trust.**\n", "\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "fdb04ad8", "metadata": {}, "outputs": [], "source": [ "loaded_predictor = MultiModalPredictor.load(model_path)\n", "loaded_predictor.predict_proba({'sentence': [sentence1, sentence2]})" ] }, { "cell_type": "markdown", "id": "dfb3fa3d", "metadata": {}, "source": [ "You can also save the predictor to any location by calling `.save()`." ] }, { "cell_type": "code", "execution_count": null, "id": "be5603bd", "metadata": {}, "outputs": [], "source": [ "new_model_path = f\"./tmp/{uuid.uuid4().hex}-automm_sst\"\n", "loaded_predictor.save(new_model_path)\n", "loaded_predictor2 = MultiModalPredictor.load(new_model_path)\n", "loaded_predictor2.predict_proba({'sentence': [sentence1, sentence2]})" ] }, { "cell_type": "markdown", "id": "d8c0d5e7", "metadata": {}, "source": [ "### Extract Embeddings\n", "\n", "\n", "\n", "You can also use a trained predictor to extract embeddings that maps each row of the data table to an embedding vector extracted from intermediate neural network representations of the row." ] }, { "cell_type": "code", "execution_count": null, "id": "2686d317", "metadata": {}, "outputs": [], "source": [ "embeddings = predictor.extract_embedding(test_data)\n", "print(embeddings.shape)" ] }, { "cell_type": "markdown", "id": "ce712da0", "metadata": {}, "source": [ "Here, we use TSNE to visualize these extracted embeddings. We can see that there are two clusters corresponding to our two labels, since this network has been trained to discriminate between these labels." ] }, { "cell_type": "code", "execution_count": null, "id": "5ec51998", "metadata": {}, "outputs": [], "source": [ "from sklearn.manifold import TSNE\n", "X_embedded = TSNE(n_components=2, random_state=123).fit_transform(embeddings)\n", "for val, color in [(0, 'red'), (1, 'blue')]:\n", " idx = (test_data['label'].to_numpy() == val).nonzero()\n", " plt.scatter(X_embedded[idx, 0], X_embedded[idx, 1], c=color, label=f'label={val}')\n", "plt.legend(loc='best')" ] }, { "cell_type": "markdown", "id": "7156cef6", "metadata": {}, "source": [ "## Sentence Similarity Task\n", "\n", "Next, let's use MultiModalPredictor to train a model for evaluating how semantically similar two sentences are.\n", "We use the [Semantic Textual Similarity Benchmark](https://paperswithcode.com/dataset/sts-benchmark) dataset for illustration." 
] }, { "cell_type": "code", "execution_count": null, "id": "dca10480", "metadata": {}, "outputs": [], "source": [ "sts_train_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/train.parquet')[['sentence1', 'sentence2', 'score']]\n", "sts_test_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/dev.parquet')[['sentence1', 'sentence2', 'score']]\n", "sts_train_data.head(10)" ] }, { "cell_type": "markdown", "id": "0b0e6413", "metadata": {}, "source": [ "In this data, the column named **score** contains numerical values (which we'd like to predict) that are human-annotated similarity scores for each given pair of sentences." ] }, { "cell_type": "code", "execution_count": null, "id": "c8474486", "metadata": {}, "outputs": [], "source": [ "print('Min score=', min(sts_train_data['score']), ', Max score=', max(sts_train_data['score']))" ] }, { "cell_type": "markdown", "id": "088b9e8f", "metadata": {}, "source": [ "Let's train a regression model to predict these scores. Note that we only need to specify the label column and AutoGluon automatically determines the type of prediction problem and an appropriate loss function. Once again, you should increase the short `time_limit` below to obtain reasonable performance in your own applications." ] }, { "cell_type": "code", "execution_count": null, "id": "8b84c841", "metadata": {}, "outputs": [], "source": [ "sts_model_path = f\"./tmp/{uuid.uuid4().hex}-automm_sts\"\n", "predictor_sts = MultiModalPredictor(label='score', path=sts_model_path)\n", "predictor_sts.fit(sts_train_data, time_limit=60)" ] }, { "cell_type": "markdown", "id": "26fc1df5", "metadata": {}, "source": [ "We again evaluate our trained model's performance on separate test data. Below we choose to compute the following metrics: RMSE, Pearson Correlation, and Spearman Correlation." ] }, { "cell_type": "code", "execution_count": null, "id": "afc64da6", "metadata": {}, "outputs": [], "source": [ "test_score = predictor_sts.evaluate(sts_test_data, metrics=['rmse', 'pearsonr', 'spearmanr'])\n", "print('RMSE = {:.2f}'.format(test_score['rmse']))\n", "print('PEARSONR = {:.4f}'.format(test_score['pearsonr']))\n", "print('SPEARMANR = {:.4f}'.format(test_score['spearmanr']))" ] }, { "cell_type": "markdown", "id": "df27bdeb", "metadata": {}, "source": [ "Let's use our model to predict the similarity score between a few sentences." ] }, { "cell_type": "code", "execution_count": null, "id": "96c05eec", "metadata": {}, "outputs": [], "source": [ "sentences = ['The child is riding a horse.',\n", " 'The young boy is riding a horse.',\n", " 'The young man is riding a horse.',\n", " 'The young man is riding a bicycle.']\n", "\n", "score1 = predictor_sts.predict({'sentence1': [sentences[0]],\n", " 'sentence2': [sentences[1]]}, as_pandas=False)\n", "\n", "score2 = predictor_sts.predict({'sentence1': [sentences[0]],\n", " 'sentence2': [sentences[2]]}, as_pandas=False)\n", "\n", "score3 = predictor_sts.predict({'sentence1': [sentences[0]],\n", " 'sentence2': [sentences[3]]}, as_pandas=False)\n", "print(score1, score2, score3)" ] }, { "cell_type": "markdown", "id": "15e958c6", "metadata": {}, "source": [ "Although the `MultiModalPredictor` currently supports classification and regression tasks, it can directly be used for \n", "many NLP tasks if you properly format them into a data table. Note that there can be many text columns in this data table. 
\n", "Refer to the [MultiModalPredictor documentation](../../../../api/autogluon.multimodal.MultiModalPredictor.fit.rst) to see all available methods/options.\n", "\n", "Unlike `TabularPredictor` which trains/ensembles different types of models,\n", "`MultiModalPredictor` focuses on selecting and finetuning deep learning based models. \n", "Internally, it integrates with [timm](https://github.com/rwightman/pytorch-image-models) , [huggingface/transformers](https://github.com/huggingface/transformers), \n", "[openai/clip](https://github.com/openai/CLIP) as the model zoo.\n", "\n", "## Other Examples\n", "\n", "You may go to [AutoMM Examples](https://github.com/autogluon/autogluon/tree/master/examples/automm) to explore other examples about AutoMM.\n", "\n", "## Customization\n", "To learn how to customize AutoMM, please refer to [Customize AutoMM](../advanced_topics/customization.ipynb)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.15" } }, "nbformat": 4, "nbformat_minor": 5 }