Multimodal Data Tables: Combining BERT/Transformers and Classical Tabular Models¶
Tip: If your data contains images, consider also checking out Multimodal Data Tables: Tabular, Text, and Image, which handles images in addition to text and tabular features.
Here we introduce how to use AutoGluon Tabular to deal with multimodal tabular data that contains text, numeric, and categorical columns. In AutoGluon, raw text data is considered a first-class citizen of data tables. AutoGluon Tabular can help you train and combine a diverse set of models, including classical tabular models like LightGBM/RF/CatBoost as well as our pretrained-NLP-model-based multimodal network introduced in the multimodal text prediction tutorial (used by AutoGluon's TextPredictor).
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pprint
import random
from autogluon.tabular import TabularPredictor
np.random.seed(123)
random.seed(123)
Product Sentiment Analysis Dataset¶
We consider the product sentiment analysis dataset from a MachineHack hackathon. The goal is to predict a user's sentiment towards a product given their review (raw text) and a categorical feature indicating the product's type (e.g., Tablet, Mobile, etc.). We have already split the original dataset into 90% for training and 10% for development/testing (if submitting your models to the hackathon, we recommend training them on 100% of the dataset).
!mkdir -p product_sentiment_machine_hack
!wget https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/train.csv -O product_sentiment_machine_hack/train.csv
!wget https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/dev.csv -O product_sentiment_machine_hack/dev.csv
!wget https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/test.csv -O product_sentiment_machine_hack/test.csv
--2023-02-22 23:29:24-- https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/train.csv
Resolving autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)... 52.217.130.97, 52.217.110.220, 52.216.137.228, ...
Connecting to autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)|52.217.130.97|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 689486 (673K) [text/csv]
Saving to: ‘product_sentiment_machine_hack/train.csv’
product_sentiment_m 100%[===================>] 673.33K --.-KB/s in 0.009s
2023-02-22 23:29:24 (73.9 MB/s) - ‘product_sentiment_machine_hack/train.csv’ saved [689486/689486]
--2023-02-22 23:29:24-- https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/dev.csv
Resolving autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)... 52.216.10.27, 52.217.225.57, 3.5.11.212, ...
Connecting to autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)|52.216.10.27|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75517 (74K) [text/csv]
Saving to: ‘product_sentiment_machine_hack/dev.csv’
product_sentiment_m 100%[===================>] 73.75K --.-KB/s in 0.001s
2023-02-22 23:29:24 (57.6 MB/s) - ‘product_sentiment_machine_hack/dev.csv’ saved [75517/75517]
--2023-02-22 23:29:24-- https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment/test.csv
Resolving autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)... 52.217.130.97, 52.216.10.27, 52.217.225.57, ...
Connecting to autogluon-text-data.s3.amazonaws.com (autogluon-text-data.s3.amazonaws.com)|52.217.130.97|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 312194 (305K) [text/csv]
Saving to: ‘product_sentiment_machine_hack/test.csv’
product_sentiment_m 100%[===================>] 304.88K --.-KB/s in 0.002s
2023-02-22 23:29:24 (137 MB/s) - ‘product_sentiment_machine_hack/test.csv’ saved [312194/312194]
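If wget is not available on your system, the same files can be downloaded with a short Python snippet instead (a minimal sketch using only the standard library; the URLs are the same S3 paths as above):
import os
import urllib.request

base_url = 'https://autogluon-text-data.s3.amazonaws.com/multimodal_text/machine_hack_product_sentiment'
os.makedirs('product_sentiment_machine_hack', exist_ok=True)
for split in ['train', 'dev', 'test']:
    # Fetch each CSV split into the local data directory
    urllib.request.urlretrieve(f'{base_url}/{split}.csv',
                               f'product_sentiment_machine_hack/{split}.csv')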
subsample_size = 2000 # for quick demo, try setting to larger values
feature_columns = ['Product_Description', 'Product_Type']
label = 'Sentiment'
train_df = pd.read_csv('product_sentiment_machine_hack/train.csv', index_col=0).sample(subsample_size, random_state=123)
dev_df = pd.read_csv('product_sentiment_machine_hack/dev.csv', index_col=0)
test_df = pd.read_csv('product_sentiment_machine_hack/test.csv', index_col=0)
train_df = train_df[feature_columns + [label]]
dev_df = dev_df[feature_columns + [label]]
test_df = test_df[feature_columns]
print('Number of training samples:', len(train_df))
print('Number of dev samples:', len(dev_df))
print('Number of test samples:', len(test_df))
Number of training samples: 2000
Number of dev samples: 637
Number of test samples: 2728
There are two features in the dataset: the user's review of the product (raw text) and the product's type (categorical), and there are four possible sentiment classes to predict.
train_df.head()
| | Product_Description | Product_Type | Sentiment |
| --- | --- | --- | --- |
| 4532 | they took away the lego pit but replaced it wi... | 0 | 1 |
| 1831 | #Apple to Open Pop-Up Shop at #SXSW [REPORT]: ... | 9 | 2 |
| 3536 | RT @mention False Alarm: Google Circles Not Co... | 5 | 1 |
| 5157 | Will Google reveal a new social network called... | 9 | 2 |
| 4643 | Niceness RT @mention Less than 2 hours until w... | 6 | 3 |
dev_df.head()
| | Product_Description | Product_Type | Sentiment |
| --- | --- | --- | --- |
| 3170 | Do it. RT @mention Come party w/ Google tonigh... | 3 | 3 |
| 6301 | Line for iPads at #SXSW. Doesn't look too bad!... | 6 | 3 |
| 5643 | First up: iPad Design Headaches (2 Tablets, Ca... | 6 | 2 |
| 1953 | #SXSW: Mint Talks Mobile App Development Chall... | 9 | 2 |
| 2658 | ÛÏ@mention Apple store downtown Austin open t... | 9 | 2 |
test_df.head()
| Text_ID | Product_Description | Product_Type |
| --- | --- | --- |
| 5786 | RT @mention Going to #SXSW? The new iPhone gui... | 7 |
| 5363 | RT @mention 95% of iPhone and Droid apps have ... | 9 |
| 6716 | RT @mention Thank you to @mention for letting ... | 9 |
| 4339 | #Thanks @mention we're lovin' the @mention app... | 7 |
| 66 | At #sxsw? @mention / @mention wanna buy you a ... | 9 |
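As a quick sanity check on the four sentiment classes, you can inspect the label distribution of the training subsample (a minimal sketch; the exact counts depend on the random subsample):
print(train_df['Sentiment'].value_counts())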
AutoGluon Tabular with Multimodal Support¶
To utilize the TextPredictor model inside of TabularPredictor, we must specify hyperparameters='multimodal' in AutoGluon Tabular's fit() call. Internally, this will train multiple tabular models as well as the TextPredictor model, and then combine them via either a weighted ensemble or a stack ensemble, as explained in the AutoGluon Tabular paper. If you do not specify hyperparameters='multimodal', then AutoGluon Tabular will simply featurize text fields using N-grams and train only tabular models (which may work better if your text is mostly uncommon strings/vocabulary).
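For comparison, the default N-gram-only behavior (no pretrained NLP model) would be invoked as follows (a minimal sketch; the path name is just an example):
from autogluon.tabular import TabularPredictor

# Without hyperparameters='multimodal', text columns are featurized via
# N-grams and only tabular models are trained.
ngram_predictor = TabularPredictor(label='Sentiment', path='ag_tabular_ngram_only')
ngram_predictor.fit(train_df)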
from autogluon.tabular import TabularPredictor
predictor = TabularPredictor(label='Sentiment', path='ag_tabular_product_sentiment_multimodal')
predictor.fit(train_df, hyperparameters='multimodal')
Beginning AutoGluon training ...
AutoGluon will save models to "ag_tabular_product_sentiment_multimodal/"
AutoGluon Version: 0.7.0b20230222
Python Version: 3.8.13
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Nov 30 00:17:50 UTC 2021
Train Data Rows: 2000
Train Data Columns: 2
Label Column: Sentiment
Preprocessing data ...
AutoGluon infers your prediction problem is: 'multiclass' (because dtype of label-column == int, but few unique label-values observed).
4 unique label values: [1, 2, 3, 0]
If 'multiclass' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Train Data Class Count: 4
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 31470.97 MB
Train Data (Original) Memory Usage: 0.34 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting IdentityFeatureGenerator...
Fitting RenameFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Fitting TextSpecialFeatureGenerator...
Fitting BinnedFeatureGenerator...
Fitting DropDuplicatesFeatureGenerator...
Fitting TextNgramFeatureGenerator...
Fitting CountVectorizer for text features: ['Product_Description']
CountVectorizer fit with vocabulary size = 230
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('int', []) : 1 | ['Product_Type']
('object', ['text']) : 1 | ['Product_Description']
Types of features in processed data (raw dtype, special dtypes):
('category', ['text_as_category']) : 1 | ['Product_Description']
('int', []) : 1 | ['Product_Type']
('int', ['binned', 'text_special']) : 30 | ['Product_Description.char_count', 'Product_Description.word_count', 'Product_Description.capital_ratio', 'Product_Description.lower_ratio', 'Product_Description.digit_ratio', ...]
('int', ['text_ngram']) : 231 | ['__nlp__.about', '__nlp__.all', '__nlp__.amp', '__nlp__.an', '__nlp__.an ipad', ...]
('object', ['text']) : 1 | ['Product_Description_raw_text']
0.6s = Fit runtime
2 features in original data used to generate 264 features in processed data.
Train Data (Processed) Memory Usage: 1.34 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.6s ...
AutoGluon will gauge predictive performance using evaluation metric: 'accuracy'
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.2, Train Rows: 1600, Val Rows: 400
Fitting 8 L1 models ...
Fitting model: LightGBM ...
0.8925 = Validation score (accuracy)
1.86s = Training runtime
0.01s = Validation runtime
Fitting model: LightGBMXT ...
0.8575 = Validation score (accuracy)
1.6s = Training runtime
0.01s = Validation runtime
Fitting model: CatBoost ...
0.8875 = Validation score (accuracy)
3.03s = Training runtime
0.01s = Validation runtime
Fitting model: XGBoost ...
0.8875 = Validation score (accuracy)
2.64s = Training runtime
0.01s = Validation runtime
Fitting model: NeuralNetTorch ...
0.8825 = Validation score (accuracy)
2.46s = Training runtime
0.02s = Validation runtime
Fitting model: VowpalWabbit ...
0.675 = Validation score (accuracy)
0.77s = Training runtime
0.04s = Validation runtime
Fitting model: LightGBMLarge ...
0.885 = Validation score (accuracy)
3.12s = Training runtime
0.03s = Validation runtime
Fitting model: MultiModalPredictor ...
Configuration saved in ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/hf_text/config.json
tokenizer config file saved in ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/hf_text/tokenizer_config.json
Special tokens file saved in ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/hf_text/special_tokens_map.json
0.705 = Validation score (accuracy)
284.36s = Training runtime
1.42s = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
0.8975 = Validation score (accuracy)
0.21s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 303.07s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("ag_tabular_product_sentiment_multimodal/")
<autogluon.tabular.predictor.predictor.TabularPredictor at 0x7f038e08a190>
predictor.leaderboard(dev_df)
loading file vocab.txt
loading file tokenizer.json
loading file added_tokens.json
loading file special_tokens_map.json
loading file tokenizer_config.json
loading configuration file /home/ci/autogluon/docs/_build/eval/tutorials/tabular_prediction/ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/hf_text/config.json
Model config ElectraConfig {
"_name_or_path": "/home/ci/autogluon/docs/_build/eval/tutorials/tabular_prediction/ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/hf_text",
"architectures": [
"ElectraForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"embedding_size": 768,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "electra",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"summary_activation": "gelu",
"summary_last_dropout": 0.1,
"summary_type": "first",
"summary_use_proj": true,
"transformers_version": "4.26.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading file vocab.txt
loading file tokenizer.json
loading file added_tokens.json
loading file special_tokens_map.json
loading file tokenizer_config.json
Load pretrained checkpoint: /home/ci/autogluon/docs/_build/eval/tutorials/tabular_prediction/ag_tabular_product_sentiment_multimodal/models/MultiModalPredictor/automm_model/model.ckpt
| | model | score_test | score_val | pred_time_test | pred_time_val | fit_time | pred_time_test_marginal | pred_time_val_marginal | fit_time_marginal | stack_level | can_infer | fit_order |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | NeuralNetTorch | 0.886970 | 0.8825 | 0.025658 | 0.018475 | 2.457242 | 0.025658 | 0.018475 | 2.457242 | 1 | True | 5 |
| 1 | WeightedEnsemble_L2 | 0.885400 | 0.8975 | 3.113442 | 1.476970 | 288.809416 | 0.004947 | 0.000503 | 0.211976 | 2 | True | 9 |
| 2 | LightGBM | 0.883830 | 0.8925 | 0.026906 | 0.008776 | 1.857775 | 0.026906 | 0.008776 | 1.857775 | 1 | True | 1 |
| 3 | XGBoost | 0.883830 | 0.8875 | 0.038126 | 0.007634 | 2.644335 | 0.038126 | 0.007634 | 2.644335 | 1 | True | 4 |
| 4 | LightGBMLarge | 0.883830 | 0.8850 | 0.139274 | 0.029354 | 3.124176 | 0.139274 | 0.029354 | 3.124176 | 1 | True | 7 |
| 5 | CatBoost | 0.883830 | 0.8875 | 0.274167 | 0.012111 | 3.031290 | 0.274167 | 0.012111 | 3.031290 | 1 | True | 3 |
| 6 | LightGBMXT | 0.863422 | 0.8575 | 0.014624 | 0.006234 | 1.602339 | 0.014624 | 0.006234 | 1.602339 | 1 | True | 2 |
| 7 | MultiModalPredictor | 0.717425 | 0.7050 | 2.959985 | 1.421993 | 284.364830 | 2.959985 | 1.421993 | 284.364830 | 1 | True | 8 |
| 8 | VowpalWabbit | 0.714286 | 0.6750 | 0.106979 | 0.039466 | 0.772496 | 0.106979 | 0.039466 | 0.772496 | 1 | True | 6 |
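With training complete, generating predictions for the unlabeled test set only requires calling predict() (a minimal sketch; predict() returns a pandas Series of predicted class labels):
# Predict sentiment classes for the unlabeled test split
test_predictions = predictor.predict(test_df)
print(test_predictions.head())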
Improve the Performance with Stack Ensemble¶
You can improve predictive performance by using stack ensembling. One way to turn it on is as follows:
predictor.fit(train_df, hyperparameters='multimodal', num_bag_folds=5, num_stack_levels=1)
or using:
predictor.fit(train_df, hyperparameters='multimodal', presets='best_quality')
which will automatically select values for num_stack_levels (how many stacking layers) and num_bag_folds (how many folds to split the data into during bagging). Stack ensembling can take much longer, so we won't run with this configuration here. You may explore more examples in https://github.com/autogluon/autogluon/tree/master/examples/text_prediction, which demonstrate how you can achieve top performance in competitions with a stack-ensembling-based solution (see the sketch below).
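If you do want to try a stack-ensembled run yourself, a minimal sketch with an explicit time budget might look like this (the path and time_limit values are illustrative; time_limit is in seconds):
stacked_predictor = TabularPredictor(label='Sentiment', path='ag_tabular_product_sentiment_stacked')
stacked_predictor.fit(train_df, hyperparameters='multimodal', presets='best_quality', time_limit=3600)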