Text-to-Text Semantic Matching with AutoMM¶
Computing the similarity between two sentences/passages is a common task in NLP, with practical applications such as web search, question answering, document deduplication, plagiarism detection, natural language inference, and recommendation engines. In general, text similarity models take two sentences/passages as input, transform them into vectors, and then compute a similarity score using cosine similarity, dot product, or Euclidean distance to measure how alike or different the two pieces of text are.
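For example, given two sentence embeddings, the similarity measures mentioned above can be computed as follows (a minimal sketch using NumPy; the vectors here are made up, while in practice they would come from a text encoder):

import numpy as np

# Two hypothetical sentence embeddings (illustrative values only)
u = np.array([0.1, 0.8, 0.3])
v = np.array([0.2, 0.7, 0.1])

# Cosine similarity: dot product normalized by the vector lengths
cosine_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
# Dot-product and Euclidean-distance alternatives
dot_sim = np.dot(u, v)
euclidean_dist = np.linalg.norm(u - v)

print(cosine_sim, dot_sim, euclidean_dist)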
Prepare your Data¶
In this tutorial, we will demonstrate how to use AutoMM for text-to-text semantic matching with the Stanford Natural Language Inference (SNLI) corpus. SNLI is a corpus containing around 570k human-written sentence pairs labeled with entailment, contradiction, and neutral. It is a widely used benchmark for evaluating the representation and inference capability of machine learning methods. The following table contains three examples taken from this corpus.
Premise | Hypothesis | Label
---|---|---
A black race car starts up in front of a crowd of people. | A man is driving down a lonely road. | contradiction
An older and younger man smiling. | Two men are smiling and laughing at the cats playing on the floor. | neutral
A soccer game with multiple males playing. | Some men are playing a sport. | entailment
Here, we consider sentence pairs with label entailment as positive pairs (labeled as 1) and those with label contradiction as negative pairs (labeled as 0). Sentence pairs with the neutral relationship are discarded. The following code downloads the corpus and loads it into dataframes.
from autogluon.core.utils.loaders import load_pd
import pandas as pd
snli_train = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/snli/snli_train.csv', delimiter="|")
snli_test = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/snli/snli_test.csv', delimiter="|")
snli_train.head()
 | premise | hypothesis | label
---|---|---|---
0 | A person on a horse jumps over a broken down a... | A person is at a diner , ordering an omelette . | 0 |
1 | A person on a horse jumps over a broken down a... | A person is outdoors , on a horse . | 1 |
2 | Children smiling and waving at camera | There are children present | 1 |
3 | Children smiling and waving at camera | The kids are frowning | 0 |
4 | A boy is jumping on skateboard in the middle o... | The boy skates down the sidewalk . | 0 |
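The hosted CSVs above are already binarized. If you instead start from the raw three-way SNLI labels, the mapping described above could be done with pandas along these lines (a minimal sketch; the gold_label column name mirrors the raw SNLI release and the rows reuse the examples from the table above):

import pandas as pd

# Toy dataframe mimicking the raw SNLI format
raw_df = pd.DataFrame({
    "premise": ["A black race car starts up in front of a crowd of people.",
                "An older and younger man smiling.",
                "A soccer game with multiple males playing."],
    "hypothesis": ["A man is driving down a lonely road.",
                   "Two men are smiling and laughing at the cats playing on the floor.",
                   "Some men are playing a sport."],
    "gold_label": ["contradiction", "neutral", "entailment"],
})

# Keep entailment (1) and contradiction (0); discard neutral pairs
label_map = {"entailment": 1, "contradiction": 0}
binary_df = raw_df[raw_df["gold_label"] != "neutral"].copy()
binary_df["label"] = binary_df["gold_label"].map(label_map)
print(binary_df)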
Train your Model¶
Ideally, we want a model that returns high scores for positive text pairs and low scores for negative ones. Traditional text similarity methods, such as term frequency or TF-IDF vectors, only work at the lexical level without taking the semantic aspect into account. With AutoMM, we can easily train a model that captures the semantic relationship between sentences. Basically, it uses BERT to project each sentence into a high-dimensional vector and treats the matching problem as a classification problem, following the design in sentence transformers.
With AutoMM, you just need to specify the query, response, and label column names and fit the model on the training dataset without worrying about the implementation details. Note that the labels should be binary, and we need to specify the match_label, i.e., the label indicating that two sentences have the same semantic meaning. In practice, your task may have different labels, e.g., duplicate or not duplicate. You may need to define the match_label by considering your specific task context.
from autogluon.multimodal import MultiModalPredictor
# Initialize the model
predictor = MultiModalPredictor(
problem_type="text_similarity",
query="premise", # the column name of the first sentence
response="hypothesis", # the column name of the second sentence
label="label", # the label column name
match_label=1, # the label indicating that query and response have the same semantic meanings.
eval_metric='auc', # the evaluation metric
)
# Fit the model
predictor.fit(
train_data=snli_train,
time_limit=180,
)
Global seed set to 123
/home/ci/autogluon/multimodal/src/autogluon/multimodal/utils/metric.py:92: UserWarning: Currently, we cannot convert the metric: auc to a metric supported in torchmetrics. Thus, we will fall-back to use accuracy for multi-class classification problems, ROC-AUC for binary classification problem, and RMSE for regression problems.
  warnings.warn(
/home/ci/opt/venv/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric AUROC will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
Auto select gpus: [0]
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type                         | Params
-------------------------------------------------------------------
0 | query_model       | HFAutoModelForTextPrediction | 33.4 M
1 | response_model    | HFAutoModelForTextPrediction | 33.4 M
2 | validation_metric | AUROC                        | 0
3 | loss_func         | ContrastiveLoss              | 0
4 | miner_func        | PairMarginMiner              | 0
-------------------------------------------------------------------
33.4 M    Trainable params
0         Non-trainable params
33.4 M    Total params
66.720    Total estimated model params size (MB)
Time limit reached. Elapsed time is 0:03:00. Signaling Trainer to stop.
Epoch 0, global step 177: 'val_roc_auc' reached 0.90859 (best 0.90859), saving model to '/home/ci/autogluon/docs/_build/eval/tutorials/multimodal/matching/AutogluonModels/ag-20221213_015621/epoch=0-step=177.ckpt' as top 3
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f341a6cf370>
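As noted above, tasks with other label values only need a different match_label. A minimal hypothetical sketch for a duplicate-question task (the column names and label strings below are made up for illustration):

# Hypothetical initialization for a dataset with string labels
# ("duplicate" / "not_duplicate"); reuses the MultiModalPredictor import above.
predictor_dup = MultiModalPredictor(
    problem_type="text_similarity",
    query="question1",          # assumed column name of the first question
    response="question2",       # assumed column name of the second question
    label="is_duplicate",       # assumed label column name
    match_label="duplicate",    # the label value meaning the two texts match
    eval_metric="auc",
)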
Evaluate on Test Dataset¶
You can evaluate the matcher on the test dataset to see how it performs in terms of the roc_auc score:
score = predictor.evaluate(snli_test)
print("evaluation score: ", score)
evaluation score: {'roc_auc': 0.9171061186092809}
Predict on a New Sentence Pair¶
We create a new sentence pair with similar meaning (expected to be predicted as 1) and make predictions using the trained model.
pred_data = pd.DataFrame.from_dict({"premise":["The teacher gave his speech to an empty room."],
"hypothesis":["There was almost nobody when the professor was talking."]})
predictions = predictor.predict(pred_data)
print('Predicted label:', predictions[0])
Predicted label: 1
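predict also accepts multiple pairs at once; each row of the dataframe is scored independently (a minimal sketch; the sentences below are made up):

# Predict on several pairs in one call (illustrative sentences)
batch_data = pd.DataFrame({
    "premise": ["A dog is running through a field.",
                "A woman is reading a newspaper."],
    "hypothesis": ["An animal is outdoors.",
                   "The woman is asleep."],
})
batch_predictions = predictor.predict(batch_data)
print(batch_predictions)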
Predict Matching Probabilities¶
We can also compute the matching probabilities of sentence pairs.
probabilities = predictor.predict_proba(pred_data)
print(probabilities)
0 1
0 0.20701 0.79299
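If your application needs a stricter (or looser) decision rule than the default, you can threshold the matching probability yourself (a sketch; the 0.7 cutoff is arbitrary and only for illustration):

# Apply a custom decision threshold to the match probability (second column)
custom_threshold = 0.7  # arbitrary cutoff for illustration
custom_predictions = (probabilities.iloc[:, 1] > custom_threshold).astype(int)
print(custom_predictions)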
Extract Embeddings¶
Moreover, we support extracting embeddings separately for two sentence groups.
embeddings_1 = predictor.extract_embedding({"premise":["The teacher gave his speech to an empty room."]})
print(embeddings_1.shape)
embeddings_2 = predictor.extract_embedding({"hypothesis":["There was almost nobody when the professor was talking."]})
print(embeddings_2.shape)
(1, 384)
(1, 384)
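With the extracted embeddings, you can also compute a similarity score directly, e.g., with cosine similarity (a minimal sketch using scikit-learn):

from sklearn.metrics.pairwise import cosine_similarity

# Cosine similarity between the two embedding groups extracted above
similarity = cosine_similarity(embeddings_1, embeddings_2)
print(similarity)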
Other Examples¶
You may go to AutoMM Examples to explore other examples of using AutoMM.
Customization¶
To learn how to customize AutoMM, please refer to Customize AutoMM.