TabularPredictor.evaluate¶
- TabularPredictor.evaluate(data, model=None, decision_threshold=None, display: bool = False, auxiliary_metrics=True, detailed_report=False, **kwargs) → dict [source]¶
Report the predictive performance evaluated over a given dataset. This is basically a shortcut for: pred_proba = predict_proba(data); evaluate_predictions(data[label], pred_proba).
- Parameters:
data (str or TabularDataset or pd.DataFrame) – The dataset to evaluate on. It must contain the label column with the same column name as previously specified. If str is passed, data will be loaded using the str value as the file path. If self.sample_weight is set and self.weight_evaluation==True, then a column with the sample weight name is checked and used for weighted metric evaluation if it exists.
model (str (optional)) – The name of the model to get prediction probabilities from. Defaults to None, which uses the model with the highest score on the validation set. Valid model names are listed by calling predictor.model_names().
decision_threshold (float, default = None) – The decision threshold to use when converting prediction probabilities to predictions. This will impact the scores of metrics such as f1 and accuracy. If None, defaults to predictor.decision_threshold. Ignored unless problem_type='binary'. Refer to the predictor.decision_threshold docstring for more information.
display (bool, default = False) – If True, performance results are printed.
auxiliary_metrics (bool, default = True) – Should we compute other (problem_type specific) metrics in addition to the default metric?
detailed_report (bool, default = False) – Should we compute more detailed versions of the auxiliary_metrics? (requires auxiliary_metrics=True)
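The effect of decision_threshold on threshold-sensitive metrics can be sketched in plain Python. This is an illustrative toy example, not AutoGluon code: the probabilities and labels below are made-up values standing in for predict_proba output on a binary problem.

```python
# Illustrative only: how a decision threshold turns prediction
# probabilities into class predictions, changing metrics like accuracy.
probs = [0.2, 0.45, 0.6, 0.9]   # hypothetical P(class=1) per row
labels = [0, 1, 1, 1]           # hypothetical true labels

def predict_at(threshold):
    # Probabilities at or above the threshold map to the positive class.
    return [1 if p >= threshold else 0 for p in probs]

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# The same probabilities score differently at different thresholds:
# at 0.5 the second row (0.45) is predicted 0 and is wrong,
# at 0.4 it is predicted 1 and all four rows are correct.
acc_default = accuracy(predict_at(0.5))   # 0.75
acc_lowered = accuracy(predict_at(0.4))   # 1.0
```

This is why a tuned decision_threshold (or predictor.decision_threshold) can change the reported f1 and accuracy without changing the underlying prediction probabilities at all.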
- Returns:
Returns a dict where keys are metric names and values are the performance on each metric. To get the eval_metric score, use output[predictor.eval_metric.name].
NOTE: Metric scores are always reported in higher-is-better form.
This means that metrics such as log_loss and root_mean_squared_error will have their signs FLIPPED, and their values will be negative.
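The sign-flip convention above can be sketched with a hand-computed log loss on made-up probabilities. This is a standalone illustration, not AutoGluon code; the dict mimics the shape of the evaluate() output.

```python
import math

# Hypothetical predicted probabilities assigned to the TRUE class of
# each row (perfect predictions would be 1.0 everywhere).
true_class_probs = [0.9, 0.8, 0.6]

# Standard log loss: mean negative log-likelihood. Lower is better,
# and it is always >= 0.
log_loss = -sum(math.log(p) for p in true_class_probs) / len(true_class_probs)

# Because evaluate() reports every metric in higher-is-better form,
# a loss-style metric appears with its sign flipped, i.e. negative.
reported = {"log_loss": -log_loss}
```

So a reported value like {"log_loss": -0.28} means the conventional (positive, lower-is-better) log loss was 0.28; a model closer to 0 in the reported form is the better one.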