Components: explain#
autogluon.eda.visualization.explain#
ExplainForcePlot – Visualize the given SHAP values with an additive force layout
ExplainWaterfallPlot – Visualize the given SHAP values with a waterfall layout
ExplainForcePlot#
- class autogluon.eda.visualization.explain.ExplainForcePlot(display_rows: bool = False, namespace: Optional[str] = None, **kwargs)[source]#
Visualize the given SHAP values with an additive force layout
- Parameters
display_rows (bool, default = False) – if True, then display the row before the explanation chart
headers (bool, default = False) – if True, then render headers
namespace (str, default = None) – namespace to use; can be nested like ns_a.ns_b.ns_c
Examples
>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> rows_to_explain = ...  # DataFrame
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         eda.explain.ShapAnalysis(rows_to_explain),
>>>     ],
>>>     viz_facets=[
>>>         viz.explain.ExplainForcePlot(text_rotation=45, matplotlib=True),  # defaults used if not specified
>>>     ]
>>> )
See also
KernelExplainer, ShapAnalysis
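In addition to the chart options forwarded to shap in the example above (text_rotation, matplotlib), the component's own flags can be enabled. A minimal sketch using the display_rows and headers parameters documented above; as in the example, the ellipses are placeholders for a training DataFrame and a fitted model:

>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> rows_to_explain = ...  # DataFrame with the rows to explain
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         eda.explain.ShapAnalysis(rows_to_explain),
>>>     ],
>>>     viz_facets=[
>>>         # display_rows=True shows each explained row before its force chart;
>>>         # headers=True renders a header above the output
>>>         viz.explain.ExplainForcePlot(display_rows=True, headers=True),
>>>     ]
>>> )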
ExplainWaterfallPlot#
- class autogluon.eda.visualization.explain.ExplainWaterfallPlot(display_rows: bool = False, namespace: Optional[str] = None, **kwargs)[source]#
Visualize the given SHAP values with a waterfall layout
- Parameters
display_rows (bool, default = False) – if True, then display the row before the explanation chart
headers (bool, default = False) – if True, then render headers
namespace (str, default = None) – namespace to use; can be nested like ns_a.ns_b.ns_c
Examples
>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> rows_to_explain = ...  # DataFrame
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         eda.explain.ShapAnalysis(rows_to_explain),
>>>     ],
>>>     viz_facets=[
>>>         viz.explain.ExplainWaterfallPlot(),
>>>     ]
>>> )
See also
KernelExplainer, ShapAnalysis
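When several rows are passed to ShapAnalysis, each row is explained separately, so enabling display_rows makes it easier to match each waterfall chart to its row. A minimal sketch, assuming a hypothetical test_data DataFrame as the source of the rows to explain:

>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> test_data = ...  # hypothetical held-out DataFrame (an assumption, not part of this API)
>>> rows_to_explain = test_data.sample(3, random_state=0)
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         eda.explain.ShapAnalysis(rows_to_explain),
>>>     ],
>>>     viz_facets=[
>>>         # display_rows=True shows each sampled row before its waterfall chart
>>>         viz.explain.ExplainWaterfallPlot(display_rows=True),
>>>     ]
>>> )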
autogluon.eda.analysis.explain#
ShapAnalysis – Perform Shapley values calculation using the shap package for the given rows.
ShapAnalysis#
- class autogluon.eda.analysis.explain.ShapAnalysis(rows: DataFrame, baseline_sample: int = 100, parent: Optional[AbstractAnalysis] = None, children: Optional[List[AbstractAnalysis]] = None, state: Optional[AnalysisState] = None, random_state: int = 0, **kwargs)[source]#
Perform Shapley values calculation using the shap package for the given rows.
- Parameters
rows (pd.DataFrame) – rows to explain
baseline_sample (int, default = 100) – The background dataset size to use for integrating out features. To determine the impact of a feature, that feature is set to “missing” and the change in the model output is observed.
parent (Optional[AbstractAnalysis], default = None) – parent Analysis
children (List[AbstractAnalysis], default = []) – wrapped analyses; these will receive sampled args during the fit call
state (AnalysisState) – state to be updated by this fit function
random_state (int, default = 0) – random state for sampling
kwargs –
Examples
>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> rows_to_explain = ...  # DataFrame
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         eda.explain.ShapAnalysis(rows_to_explain, baseline_sample=200),
>>>     ],
>>>     viz_facets=[
>>>         # Visualize the given SHAP values with an additive force layout
>>>         viz.explain.ExplainForcePlot(),
>>>         # Visualize the given SHAP values with a waterfall layout
>>>         viz.explain.ExplainWaterfallPlot(),
>>>     ]
>>> )
See also
KernelExplainer, ExplainForcePlot, ExplainWaterfallPlot
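The baseline_sample and random_state parameters control the background sample (presumably drawn from train_data) that the explainer uses for integrating out features: a larger background generally yields more stable SHAP estimates at the cost of longer runtime, and a fixed random_state makes the sampling reproducible. A minimal sketch, with placeholders as in the examples above:

>>> import autogluon.eda.analysis as eda
>>> import autogluon.eda.visualization as viz
>>> import autogluon.eda.auto as auto
>>>
>>> rows_to_explain = ...  # DataFrame with the rows to explain
>>>
>>> auto.analyze(
>>>     train_data=..., model=...,
>>>     anlz_facets=[
>>>         # larger baseline_sample -> more stable attributions, slower to compute;
>>>         # random_state fixes which background rows are sampled
>>>         eda.explain.ShapAnalysis(rows_to_explain, baseline_sample=500, random_state=42),
>>>     ],
>>>     viz_facets=[
>>>         viz.explain.ExplainForcePlot(),
>>>     ]
>>> )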