{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "6c4c6389", "metadata": {}, "source": [ "# Adding a custom metric to AutoGluon\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/autogluon/autogluon/blob/master/docs/tutorials/tabular/advanced/tabular-custom-metric.ipynb)\n", "[![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/autogluon/autogluon/blob/master/docs/tutorials/tabular/advanced/tabular-custom-metric.ipynb)\n", "\n", "\n", "\n", "**Tip**: If you are new to AutoGluon, review [Predicting Columns in a Table - Quick Start](../tabular-quick-start.ipynb) to learn the basics of the AutoGluon API.\n", "\n", "This tutorial describes how to add a custom evaluation metric to AutoGluon that is used to inform validation scores, model ensembling, hyperparameter tuning, and more.\n", "\n", "In this example, we show a variety of evaluation metrics and how to convert them to an AutoGluon Scorer ([Scorer source code](https://github.com/autogluon/autogluon/blob/master/core/src/autogluon/core/metrics/__init__.py)), which can then be passed to AutoGluon models and predictors.\n", "\n", "First, we will randomly generate 10 ground truth labels and predictions and show how to calculate metric scores from them." ] }, { "cell_type": "code", "execution_count": null, "id": "aa00faab-252f-44c9-b8f7-57131aa8251c", "metadata": { "tags": [ "remove-cell" ] }, "source": [ "!pip install autogluon.tabular[all]\n" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "1b5cf360", "metadata": {}, "source": [ "import numpy as np\n", "\n", "rng = np.random.default_rng(seed=42)\n", "y_true = rng.integers(low=0, high=2, size=10)\n", "y_pred = rng.integers(low=0, high=2, size=10)\n", "\n", "print(f'y_true: {y_true}')\n", "print(f'y_pred: {y_pred}')" ], "outputs": [] }, { "cell_type": "markdown", "id": "214eee0e", "metadata": {}, "source": [ "## Ensuring Metric is Serializable\n", "Custom metrics must be defined in a separate Python file and imported so that they can be [pickled](https://docs.python.org/3/library/pickle.html) (Python's serialization protocol).\n", "If a custom metric is not pickleable, AutoGluon will crash during fit when trying to parallelize model training with Ray.\n", "In the below example, you would want to create a new python file such as `my_metrics.py` with `ag_accuracy_scorer` defined in it,\n", "and then use it via `from my_metrics import ag_accuracy_scorer`.\n", "\n", "If your metric is not serializable, you will get many errors similar to: `_pickle.PicklingError: Can't pickle`. Refer to https://github.com/autogluon/autogluon/issues/1637 for an example.\n", "For an example of how to specify a custom metric on Kaggle, refer to [this Kaggle Notebook](https://www.kaggle.com/code/rzatemizel/prepare-for-automl-grand-prix-explore-autogluon#Custom-Metric-for-Autogluon).\n", "\n", "The custom metrics in this tutorial are **not** serializable for ease of demonstration. If the `best_quality` preset was used, calls to `fit()` would crash.\n", "\n", "## Custom Accuracy Metric\n", "We will start by creating a custom accuracy metric. 
A prediction is correct if the predicted value is the same as the true value; otherwise, it is wrong.\n", "\n", "First, let's use the default sklearn accuracy scorer:" ] }, { "cell_type": "code", "execution_count": null, "id": "7fff1bb4", "metadata": {}, "source": [ "import sklearn.metrics\n", "\n", "sklearn.metrics.accuracy_score(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "84868cc1", "metadata": {}, "source": [ "There are a variety of limitations with the above logic.\n", "For example, without outside knowledge of the metric, it is unknown:\n", "1. What the optimal value is (1)\n", "2. If higher values are better (True)\n", "3. If the metric requires predictions, class predictions, or class probabilities (class predictions)\n", "\n", "Now, let's convert this evaluation metric to an AutoGluon Scorer to address these limitations.\n", "\n", "We do this by calling `autogluon.core.metrics.make_scorer` (Source code: [autogluon/core/metrics/\\_\\_init\\_\\_.py](https://github.com/autogluon/autogluon/blob/master/core/src/autogluon/core/metrics/__init__.py))." ] }, { "cell_type": "code", "execution_count": null, "id": "d82fe4929dd18fab", "metadata": {}, "source": [ "from autogluon.core.metrics import make_scorer\n", "\n", "ag_accuracy_scorer = make_scorer(name='accuracy',\n", " score_func=sklearn.metrics.accuracy_score,\n", " optimum=1,\n", " greater_is_better=True,\n", " needs_class=True)" ], "outputs": [] }, { "cell_type": "markdown", "id": "d727f290", "metadata": {}, "source": [ "When creating the Scorer, we need to specify a name for the Scorer. This does not need to be any particular value but is used when printing information about the Scorer during training.\n", "\n", "Next, we specify the `score_func`. This is the function we want to wrap, in this case, sklearn's `accuracy_score` function.\n", "\n", "We then need to specify the `optimum` value.\n", "This is necessary when calculating `error` (also known as `regret`) as opposed to `score`.\n", "`error` is defined as `sign * optimum - score`, where `sign=1` if `greater_is_better=True`, else `sign=-1`.\n", "It is also useful to identify when a score is optimal and cannot be improved.\n", "Because the best possible value from `sklearn.metrics.accuracy_score` is `1`, we specify `optimum=1`.\n", "\n", "Next, we need to specify `greater_is_better`. In this case, `greater_is_better=True`\n", "because the best value returned is 1, and the worst value returned is less than 1 (0).\n", "It is very important to set this value correctly;\n", "otherwise, AutoGluon will try to optimize for the **worst** model instead of the best.\n", "\n", "Finally, we specify one of the boolean `needs_*` arguments based on the type of metric we are using. The following options are available: [`needs_pred`, `needs_proba`, `needs_class`, `needs_threshold`, `needs_quantile`].\n", "All of them default to False except `needs_pred`, which is inferred based on the other 4. Only one of them can be set to True.
If none are specified, the metric is treated as a regression metric (`needs_pred=True`).\n", "\n", "Below is a detailed description of each:\n", "\n", " needs_pred : bool | str, default=\"auto\"\n", " Whether score_func requires the predict model method output as input to scoring.\n", " If \"auto\", will be inferred based on the values of the other `needs_*` arguments.\n", " Defaults to True if all other `needs_*` are False.\n", " Examples: [\"root_mean_squared_error\", \"mean_squared_error\", \"r2\", \"mean_absolute_error\", \"median_absolute_error\", \"spearmanr\", \"pearsonr\"]\n", "\n", " needs_proba : bool, default=False\n", " Whether score_func requires predict_proba to get probability estimates out of a classifier.\n", " These scorers can benefit from calibration methods such as temperature scaling.\n", " Examples: [\"log_loss\", \"roc_auc_ovo\", \"roc_auc_ovr\", \"pac\"]\n", "\n", " needs_class : bool, default=False\n", " Whether score_func requires class predictions (classification only).\n", " This is required to determine if the scorer is impacted by a decision threshold.\n", " These scorers can benefit from decision threshold calibration methods such as via `predictor.calibrate_decision_threshold()`.\n", " Examples: [\"accuracy\", \"balanced_accuracy\", \"f1\", \"precision\", \"recall\", \"mcc\", \"quadratic_kappa\", \"f1_micro\", \"f1_macro\", \"f1_weighted\"]\n", "\n", " needs_threshold : bool, default=False\n", " Whether score_func takes a continuous decision certainty.\n", " This only works for binary classification.\n", " These scorers care about the rank order of the prediction probabilities to calculate their scores, and are undefined if given a single sample to score.\n", " Examples: [\"roc_auc\", \"average_precision\"]\n", "\n", " needs_quantile : bool, default=False\n", " Whether score_func is based on quantile predictions.\n", " This only works for quantile regression.\n", " Examples: [\"pinball_loss\"]\n", "\n", "Because we are creating an accuracy scorer, we need the class prediction, and therefore we specify `needs_class=True`.\n", "\n", "**Advanced Note**: `optimum` must correspond to the optimal value\n", "from the original metric callable (in this case `sklearn.metrics.accuracy_score`).\n", "Hypothetically, if a metric callable was `greater_is_better=False` with an optimal value of `-2`,\n", "you should specify `optimum=-2, greater_is_better=False`.\n", "In this case, if `raw_metric_value=-0.5`\n", "then Scorer would return `score=0.5` to enforce higher_is_better (`score = sign * raw_metric_value`).\n", "Scorer's error would be `error=1.5` because `sign (-1) * optimum (-2) - score (0.5) = 1.5`\n", "\n", "Once created, the AutoGluon Scorer can be called in the same fashion as the original metric to compute `score`." 
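, "\n", "\n", "As noted in the \"Ensuring Metric is Serializable\" section, in a real project this `make_scorer` call should live in its own Python file and be imported, so that the Scorer can be pickled. A minimal sketch, using the `my_metrics.py` file name suggested above:\n", "\n", "```python\n", "# my_metrics.py: define the Scorer at module level so it can be pickled\n", "import sklearn.metrics\n", "from autogluon.core.metrics import make_scorer\n", "\n", "ag_accuracy_scorer = make_scorer(name='accuracy',\n", "                                 score_func=sklearn.metrics.accuracy_score,\n", "                                 optimum=1,\n", "                                 greater_is_better=True,\n", "                                 needs_class=True)\n", "```\n", "\n", "```python\n", "# your training script: import the Scorer instead of defining it inline\n", "from my_metrics import ag_accuracy_scorer\n", "```\n", "\n", "In this tutorial, the Scorer stays defined inline for ease of demonstration."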
] }, { "cell_type": "code", "execution_count": null, "id": "0114a6fe", "metadata": {}, "source": [ "# score\n", "ag_accuracy_scorer(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "5aa20d0d", "metadata": {}, "source": [ "Alternatively, `.score` is an alias to the above callable for convenience:" ] }, { "cell_type": "code", "execution_count": null, "id": "03894dd7", "metadata": {}, "source": [ "ag_accuracy_scorer.score(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "74a270a0", "metadata": {}, "source": [ "To get the error instead of score:" ] }, { "cell_type": "code", "execution_count": null, "id": "913e2a75", "metadata": {}, "source": [ "# error, error=sign*optimum-score -> error=1*1-score -> error=1-score\n", "ag_accuracy_scorer.error(y_true, y_pred)\n", "\n", "# Can also convert score to error and vice-versa:\n", "# score = ag_accuracy_scorer(y_true, y_pred)\n", "# error = ag_accuracy_scorer.convert_score_to_error(score)\n", "# score = ag_accuracy_scorer.convert_error_to_score(error)\n", "\n", "# Can also convert score to the original score that would be returned in `score_func`:\n", "# score_orig = ag_accuracy_scorer.convert_score_to_original(score) # score_orig = sign * score" ], "outputs": [] }, { "cell_type": "markdown", "id": "38c46df1", "metadata": {}, "source": [ "Note that `score` is in `higher_is_better` format, while error is in `lower_is_better` format.\n", "An error of 0 corresponds to a perfect prediction.\n", "\n", "## Custom Mean Squared Error Metric\n", "\n", "Next, let's show examples of how to convert regression metrics into Scorers.\n", "\n", "First we generate random ground truth labels and their predictions, however this time they are floats instead of integers." ] }, { "cell_type": "code", "execution_count": null, "id": "736399fb", "metadata": {}, "source": [ "y_true = rng.random(10)\n", "y_pred = rng.random(10)\n", "\n", "print(f'y_true: {y_true}')\n", "print(f'y_pred: {y_pred}')" ], "outputs": [] }, { "cell_type": "markdown", "id": "6c3f42ac", "metadata": {}, "source": [ "A common regression metric is Mean Squared Error:" ] }, { "cell_type": "code", "execution_count": null, "id": "8d2776cc", "metadata": {}, "source": [ "sklearn.metrics.mean_squared_error(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "451c4b49", "metadata": {}, "source": [ "ag_mean_squared_error_scorer = make_scorer(name='mean_squared_error',\n", " score_func=sklearn.metrics.mean_squared_error,\n", " optimum=0,\n", " greater_is_better=False)" ], "outputs": [] }, { "cell_type": "markdown", "id": "16dddc00", "metadata": {}, "source": [ "In this case, `optimum=0` because this is an error metric.\n", "\n", "Additionally, `greater_is_better=False` because sklearn reports error as positive values, and the lower the value is, the better.\n", "\n", "A very important point about AutoGluon Scorers is that internally,\n", "they will always report scores in `greater_is_better=True` form.\n", "This means if the original metric was `greater_is_better=False`, AutoGluon's Scorer will flip the value.\n", "Therefore, `score` will be represented as a negative value.\n", "\n", "This is done to ensure consistency between different metrics." 
] }, { "cell_type": "code", "execution_count": null, "id": "0bd5d590", "metadata": {}, "source": [ "# score\n", "ag_mean_squared_error_scorer(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "ea16f0e0", "metadata": {}, "source": [ "# error, error=sign*optimum-score -> error=-1*0-score -> error=-score\n", "ag_mean_squared_error_scorer.error(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "5719a5b4", "metadata": {}, "source": [ "We can also specify metrics outside of sklearn. For example, below is a minimal implementation of mean squared error:" ] }, { "cell_type": "code", "execution_count": null, "id": "5b65aab0", "metadata": {}, "source": [ "def mse_func(y_true: np.ndarray, y_pred: np.ndarray) -> float:\n", " return ((y_true - y_pred) ** 2).mean()\n", "\n", "mse_func(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "69db1af3", "metadata": {}, "source": [ "All that is required is that the function take two arguments: `y_true`, and `y_pred` (or `y_pred_proba`), as numpy arrays, and return a float value.\n", "\n", "With the same code as before, we can create an AutoGluon Scorer." ] }, { "cell_type": "code", "execution_count": null, "id": "80f9f81e", "metadata": {}, "source": [ "ag_mean_squared_error_custom_scorer = make_scorer(name='mean_squared_error',\n", " score_func=mse_func,\n", " optimum=0,\n", " greater_is_better=False)\n", "ag_mean_squared_error_custom_scorer(y_true, y_pred)" ], "outputs": [] }, { "cell_type": "markdown", "id": "62d17a80", "metadata": {}, "source": [ "## Custom ROC AUC Metric\n", "\n", "Here we show an example of a thresholding metric, `roc_auc`. A thresholding metric cares about the relative ordering of predictions, but not their absolute values." ] }, { "cell_type": "code", "execution_count": null, "id": "83967a37", "metadata": {}, "source": [ "y_true = rng.integers(low=0, high=2, size=10)\n", "y_pred_proba = rng.random(10)\n", "\n", "print(f'y_true: {y_true}')\n", "print(f'y_pred_proba: {y_pred_proba}')" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "cb4c1ef4", "metadata": {}, "source": [ "sklearn.metrics.roc_auc_score(y_true, y_pred_proba)" ], "outputs": [] }, { "cell_type": "markdown", "id": "ff8e7aa2", "metadata": {}, "source": [ "We will need to specify `needs_threshold=True` in order for downstream models to properly use the metric." ] }, { "cell_type": "code", "execution_count": null, "id": "9230b917", "metadata": {}, "source": [ "# Score functions that need decision values\n", "ag_roc_auc_scorer = make_scorer(name='roc_auc',\n", " score_func=sklearn.metrics.roc_auc_score,\n", " optimum=1,\n", " greater_is_better=True,\n", " needs_threshold=True)\n", "ag_roc_auc_scorer(y_true, y_pred_proba)" ], "outputs": [] }, { "cell_type": "markdown", "id": "30be5d84", "metadata": {}, "source": [ "## Using Custom Metrics in TabularPredictor\n", "\n", "Now that we have created several custom Scorers, let's use them for training and evaluating models.\n", "\n", "For this tutorial, we will be using the Adult Income dataset." 
] }, { "cell_type": "code", "execution_count": null, "id": "252ac3a2", "metadata": {}, "source": [ "from autogluon.tabular import TabularDataset\n", "\n", "train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv') # can be local CSV file as well, returns Pandas DataFrame\n", "test_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv') # another Pandas DataFrame\n", "label = 'class' # specifies which column we want to predict\n", "train_data = train_data.sample(n=1000, random_state=0) # subsample dataset for faster demo\n", "\n", "train_data.head(5)" ], "outputs": [] }, { "cell_type": "code", "execution_count": null, "id": "6cb390fc", "metadata": {}, "source": [ "from autogluon.tabular import TabularPredictor\n", "\n", "predictor = TabularPredictor(label=label).fit(train_data, hyperparameters='toy')\n", "\n", "predictor.leaderboard(test_data)" ], "outputs": [] }, { "cell_type": "markdown", "id": "87ded9fd", "metadata": {}, "source": [ "We can pass our custom metrics into `predictor.leaderboard` via the `extra_metrics` argument:" ] }, { "cell_type": "code", "execution_count": null, "id": "499faaab", "metadata": {}, "source": [ "predictor.leaderboard(test_data, extra_metrics=[ag_roc_auc_scorer, ag_accuracy_scorer])" ], "outputs": [] }, { "cell_type": "markdown", "id": "c3bb5bae", "metadata": {}, "source": [ "We can also pass our custom metric into the Predictor itself by specifying it during initialization via the `eval_metric` parameter:" ] }, { "cell_type": "code", "execution_count": null, "id": "807acda7", "metadata": {}, "source": [ "predictor_custom = TabularPredictor(label=label, eval_metric=ag_roc_auc_scorer).fit(train_data, hyperparameters='toy')\n", "\n", "predictor_custom.leaderboard(test_data)" ], "outputs": [] }, { "cell_type": "markdown", "id": "fa9316af", "metadata": {}, "source": [ "That's all it takes to create and use custom metrics in AutoGluon!\n", "\n", "If you create a custom metric, consider [submitting a PR](https://github.com/autogluon/autogluon/pulls) so that we can officially add it to AutoGluon!\n", "\n", "For a tutorial on implementing custom models in AutoGluon, refer to [Adding a custom model to AutoGluon](tabular-custom-model.ipynb).\n", "\n", "For more tutorials, refer to [Predicting Columns in a Table - Quick Start](../tabular-quick-start.ipynb) and [Predicting Columns in a Table - In Depth](../tabular-indepth.ipynb)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }