Text-to-Text Semantic Matching with AutoMM#


Computing the similarity between two sentences/passages is a common task in NLP, with many practical applications such as web search, question answering, document deduplication, plagiarism detection, natural language inference, and recommendation engines. In general, text similarity models take two sentences/passages as input and transform them into vectors; similarity scores computed with cosine similarity, dot product, or Euclidean distance then measure how alike or different the two text pieces are.
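For intuition, here is a minimal sketch of that scoring step using NumPy. The two vectors below are made-up toy embeddings, not the output of a real encoder:

import numpy as np

# Toy sentence embeddings (in practice, these come from a trained encoder).
vec_a = np.array([0.2, 0.8, 0.1])
vec_b = np.array([0.25, 0.7, 0.05])

# Cosine similarity: dot product of the L2-normalized vectors.
cos_sim = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(cos_sim)  # close to 1.0 when the vectors point in similar directions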

Prepare your Data#

In this tutorial, we will demonstrate how to use AutoMM for text-to-text semantic matching with the Stanford Natural Language Inference (SNLI) corpus. SNLI is a corpus containing around 570k human-written sentence pairs labeled with entailment, contradiction, and neutral. It is a widely used benchmark for evaluating the representation and inference capability of machine learning methods. The following table contains three examples taken from this corpus.

| Premise | Hypothesis | Label |
|---|---|---|
| A black race car starts up in front of a crowd of people. | A man is driving down a lonely road. | contradiction |
| An older and younger man smiling. | Two men are smiling and laughing at the cats playing on the floor. | neutral |
| A soccer game with multiple males playing. | Some men are playing a sport. | entailment |

Here, we consider sentence pairs with label entailment as positive pairs (labeled as 1) and those with label contradiction as negative pairs (labeled as 0). Sentence pairs with a neutral relationship are discarded. The following code downloads and loads the corpus into dataframes; a sketch of this label mapping appears after the data preview below.

from autogluon.core.utils.loaders import load_pd
import pandas as pd

snli_train = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/snli/snli_train.csv', delimiter="|")
snli_test = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/snli/snli_test.csv', delimiter="|")
snli_train.head()
premise hypothesis label
0 A person on a horse jumps over a broken down a... A person is at a diner , ordering an omelette . 0
1 A person on a horse jumps over a broken down a... A person is outdoors , on a horse . 1
2 Children smiling and waving at camera There are children present 1
3 Children smiling and waving at camera The kids are frowning 0
4 A boy is jumping on skateboard in the middle o... The boy skates down the sidewalk . 0
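The hosted CSVs above already use this 0/1 encoding, so no conversion is needed here. For reference, if you were starting from the raw SNLI string labels, a mapping along the following lines would produce the same format (a hypothetical sketch; raw_snli and its contents are made up for illustration):

# Hypothetical raw dataframe with string labels: entailment / contradiction / neutral.
raw_snli = pd.DataFrame({
    "premise": ["A soccer game with multiple males playing."] * 3,
    "hypothesis": ["Some men are playing a sport."] * 3,
    "label": ["entailment", "contradiction", "neutral"],
})

# Drop neutral pairs, then map the remaining labels to 1/0.
label_map = {"entailment": 1, "contradiction": 0}
binary_snli = raw_snli[raw_snli["label"] != "neutral"].copy()
binary_snli["label"] = binary_snli["label"].map(label_map)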

Train your Model#

Ideally, we want to obtain a model that returns high scores for positive text pairs and low scores for negative ones. Traditional text similarity methods, such as term frequency or TF-IDF vectors, only work on a lexical level without taking the semantic aspect into account. With AutoMM, we can easily train a model that captures the semantic relationship between sentences. Basically, it uses BERT to project each sentence into a high-dimensional vector and treats the matching problem as a classification problem, following the design in sentence transformers. With AutoMM, you just need to specify the query, response, and label column names and fit the model on the training dataset without worrying about the implementation details. Note that the labels should be binary, and we need to specify the match_label, i.e., the label meaning that two sentences have the same semantic meaning. In practice, your tasks may use different labels, e.g., duplicate or not duplicate. You may need to define the match_label by considering your specific task context.

from autogluon.multimodal import MultiModalPredictor

# Initialize the model
predictor = MultiModalPredictor(
    problem_type="text_similarity",
    query="premise",  # the column name of the first sentence
    response="hypothesis",  # the column name of the second sentence
    label="label",  # the label column name
    match_label=1,  # the label indicating that query and response have the same semantic meaning
    eval_metric='auc',  # the evaluation metric
)

# Fit the model
predictor.fit(
    train_data=snli_train,
    time_limit=180,
)
Global seed set to 0
No path specified. Models will be saved in: "AutogluonModels/ag-20230629_225816/"
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  [0, 1]
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
/home/ci/autogluon/multimodal/src/autogluon/multimodal/utils/metric.py:93: UserWarning: Currently, we cannot convert the metric: auc to a metric supported in torchmetrics. Thus, we will fall-back to use accuracy for multi-class classification problems , ROC-AUC for binary classification problem, and RMSE for regression problems.
  warnings.warn(
Using 16bit None Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type                         | Params
-------------------------------------------------------------------
0 | query_model       | HFAutoModelForTextPrediction | 33.4 M
1 | response_model    | HFAutoModelForTextPrediction | 33.4 M
2 | validation_metric | BinaryAUROC                  | 0     
3 | loss_func         | ContrastiveLoss              | 0     
4 | miner_func        | PairMarginMiner              | 0     
-------------------------------------------------------------------
33.4 M    Trainable params
0         Non-trainable params
33.4 M    Total params
66.720    Total estimated model params size (MB)
Time limit reached. Elapsed time is 0:03:00. Signaling Trainer to stop.
Epoch 0, global step 144: 'val_roc_auc' reached 0.89560 (best 0.89560), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/matching/AutogluonModels/ag-20230629_225816/epoch=0-step=144.ckpt' as top 3
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f8b82c320a0>
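As the log shows, checkpoints are saved under the printed path. A fitted predictor can be reloaded later instead of retraining (a minimal sketch; substitute the save path printed in your own run):

# The path below is the one printed in this run's log; yours will differ.
loaded_predictor = MultiModalPredictor.load("AutogluonModels/ag-20230629_225816/")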

Evaluate on Test Dataset#

You can evaluate the matcher on the test dataset to see how it performs on the roc_auc score:

score = predictor.evaluate(snli_test)
print("evaluation score: ", score)
evaluation score:  {'roc_auc': 0.9140563716587528}

Predict on a New Sentence Pair#

We create a new sentence pair with similar meaning (expected to be predicted as 1) and make predictions using the trained model.

pred_data = pd.DataFrame.from_dict({"premise":["The teacher gave his speech to an empty room."], 
                                    "hypothesis":["There was almost nobody when the professor was talking."]})

predictions = predictor.predict(pred_data)
print('Predicted label:', predictions[0])
Predicted label: 1

Predict Matching Probabilities#

We can also compute the matching probabilities of sentence pairs.

probabilities = predictor.predict_proba(pred_data)
print(probabilities)
          0         1
0  0.206403  0.793597
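The predict call above applies a default decision rule to these probabilities. If your application calls for a different operating point, you can threshold the match probability yourself (a sketch; the 0.7 cutoff is an arbitrary example, not a recommended value):

# Column 1 holds the probability of the match_label (here, label 1).
custom_threshold = 0.7  # arbitrary example cutoff
custom_labels = (probabilities[1] >= custom_threshold).astype(int)
print(custom_labels.tolist())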

Extract Embeddings#

Moreover, we support extracting embeddings separately for two sentence groups.

embeddings_1 = predictor.extract_embedding({"premise":["The teacher gave his speech to an empty room."]})
print(embeddings_1.shape)
embeddings_2 = predictor.extract_embedding({"hypothesis":["There was almost nobody when the professor was talking."]})
print(embeddings_2.shape)
(1, 384)
(1, 384)
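Since both sentences are embedded into the same 384-dimensional space, you can also score a pair directly from the extracted embeddings, e.g., with cosine similarity (a minimal sketch using scikit-learn; plain NumPy would work equally well):

from sklearn.metrics.pairwise import cosine_similarity

# Both embeddings have shape (1, 384), so the result is a 1x1 matrix.
similarity = cosine_similarity(embeddings_1, embeddings_2)
print(similarity[0][0])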

Other Examples#

You may go to AutoMM Examples to explore other examples of AutoMM.

Customization#

To learn how to customize AutoMM, please refer to Customize AutoMM.