TabularPredictor.evaluate
- TabularPredictor.evaluate(data, model=None, silent=False, auxiliary_metrics=True, detailed_report=False) → dict
Report the predictive performance evaluated on a given dataset. This is essentially a shortcut for: pred_proba = predict_proba(data); evaluate_predictions(data[label], pred_proba).
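For concreteness, here is a minimal sketch of that equivalence. The file paths, the label name "class", and the variable names are placeholder assumptions, not part of this API reference:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Placeholder training and held-out data (assumed file paths).
train_data = TabularDataset("train.csv")
test_data = TabularDataset("test.csv")
predictor = TabularPredictor(label="class").fit(train_data)

# One-step evaluation:
perf = predictor.evaluate(test_data)

# Roughly equivalent two-step form described above:
pred_proba = predictor.predict_proba(test_data)
perf_manual = predictor.evaluate_predictions(test_data[predictor.label], pred_proba)
```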
- Parameters
data (str or TabularDataset or pd.DataFrame) – This dataset must also contain the label with the same column-name as previously specified. If str is passed, data will be loaded using the str value as the file path. If self.sample_weight is set and self.weight_evaluation==True, then a column with the sample weight name is checked and used for weighted metric evaluation if it exists.
model (str (optional)) – The name of the model to get prediction probabilities from. Defaults to None, which uses the highest scoring model on the validation set. Valid models are listed in this predictor by calling predictor.get_model_names().
silent (bool, default = False) – If False, performance results are printed.
auxiliary_metrics (bool, default = True) – Should we compute other (problem_type specific) metrics in addition to the default metric?
detailed_report (bool, default = False) – Should we compute more detailed versions of the auxiliary_metrics? (requires auxiliary_metrics = True)
- Returns
Returns a dict where keys are metric names and values are the corresponding performance scores. To get the eval_metric score, use output[predictor.eval_metric.name].
NOTE: Metric scores are always shown in higher-is-better form. This means that metrics such as log_loss and root_mean_squared_error will have their signs FLIPPED, and their values will be negative.
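A brief usage sketch of reading the returned dict, continuing with the placeholder predictor and test_data assumed above:

```python
# Evaluate without printing results to the console.
perf = predictor.evaluate(test_data, silent=True)

# The dict is keyed by metric name; the configured eval_metric can be looked up with:
score = perf[predictor.eval_metric.name]

# Scores are in higher-is-better form, so for a regression problem a
# root_mean_squared_error entry would appear as a negative number.
print(score)
```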