Compare metrics between groups

Evaluators for computing a metric separately for each class of a sensitive attribute.

Exceptions:

MetricNotApplicable

Metric is not applicable per sensitive attribute; apply it to the whole dataset instead.

Functions:

diff_per_sensitive_attribute

Compute the difference in the metrics per sensitive attribute.

metric_per_sensitive_attribute

Compute a metric repeatedly on subsets of the data that share a sensitive attribute.

ratio_per_sensitive_attribute

Compute the ratios in the metrics per sensitive attribute.

exception MetricNotApplicable

Bases: Exception

Metric is not applicable per sensitive attribute; apply it to the whole dataset instead.

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
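
The docstring suggests falling back to a whole-dataset evaluation when this exception is raised. The sketch below shows that pattern; it redefines the exception locally and uses toy stand-in functions (per_group_scores, whole_dataset_score) purely so the snippet runs on its own, so none of those names come from the library itself.

class MetricNotApplicable(Exception):
    """Raised when a metric cannot be evaluated per sensitive-attribute group."""

def per_group_scores(preds, labels, sens):
    # Stand-in for a per-group evaluation that refuses to run per group.
    raise MetricNotApplicable("this metric only makes sense on the full dataset")

def whole_dataset_score(preds, labels):
    # Simple accuracy over the whole dataset as the fallback metric.
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

preds, labels, sens = [1, 0, 1], [1, 0, 0], ["a", "a", "b"]
try:
    scores = per_group_scores(preds, labels, sens)
except MetricNotApplicable:
    scores = {"all": whole_dataset_score(preds, labels)}
print(scores)  # {'all': 0.6666666666666666}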

diff_per_sensitive_attribute(per_sens_res)

Compute the difference in the metrics per sensitive attribute.

Parameters

per_sens_res (Dict[str, float]) – dictionary of metric results per sensitive-attribute value

Returns

dictionary of differences between the per-sensitive-attribute results

Return type

Dict[str, float]
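
As a rough, self-contained illustration of this kind of computation (not the library's exact implementation), the sketch below takes a per-group results dictionary and returns the absolute difference for each pair of groups; the key format is an assumption.

from itertools import combinations
from typing import Dict

def pairwise_diffs(per_sens_res: Dict[str, float]) -> Dict[str, float]:
    # Absolute difference between every pair of per-group metric values.
    return {
        f"{a}-{b}": abs(per_sens_res[a] - per_sens_res[b])
        for a, b in combinations(sorted(per_sens_res), 2)
    }

print(pairwise_diffs({"sex_0": 0.75, "sex_1": 0.5}))
# {'sex_0-sex_1': 0.25}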

metric_per_sensitive_attribute(prediction, actual, metric, use_sens_name=True)

Compute a metric repeatedly on subsets of the data that share a sensitive attribute.

Parameters

prediction – predictions to evaluate

actual – ground-truth data, including the sensitive attribute

metric – the metric to compute on each subset

use_sens_name (bool) – if True (the default), include the sensitive attribute's name in the keys of the returned dictionary

Return type

Dict[str, float]
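
To make the behaviour concrete, here is a self-contained sketch of the idea: score the predictions separately on each subset defined by the sensitive attribute. It uses plain lists and a toy accuracy function rather than the library's prediction, dataset, and metric objects, and the key format shown is an assumption.

from typing import Callable, Dict, Sequence

def per_group_metric(
    prediction: Sequence[int],
    actual: Sequence[int],
    sens: Sequence[str],
    metric: Callable[[Sequence[int], Sequence[int]], float],
    sens_name: str = "sex",
    use_sens_name: bool = True,
) -> Dict[str, float]:
    # Evaluate `metric` on each subset of rows sharing a sensitive-attribute value.
    results: Dict[str, float] = {}
    for group in sorted(set(sens)):
        idx = [i for i, s in enumerate(sens) if s == group]
        key = f"{sens_name}_{group}" if use_sens_name else str(group)
        results[key] = metric([prediction[i] for i in idx], [actual[i] for i in idx])
    return results

def accuracy(pred: Sequence[int], true: Sequence[int]) -> float:
    return sum(p == t for p, t in zip(pred, true)) / len(true)

print(per_group_metric([1, 0, 1, 1], [1, 0, 0, 1], ["0", "0", "1", "1"], accuracy))
# {'sex_0': 1.0, 'sex_1': 0.5}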

ratio_per_sensitive_attribute(per_sens_res)

Compute the ratios in the metrics per sensitive attribute.

Parameters

per_sens_res (Dict[str, float]) – dictionary of metric results per sensitive-attribute value

Returns

dictionary of ratios between the per-sensitive-attribute results

Return type

Dict[str, float]
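
Analogously to the difference sketch above, the snippet below shows one plausible way to form per-pair ratios from a per-group results dictionary (dividing the smaller value by the larger one); the ordering convention and key format used by the library are not shown in this section and are assumptions here.

from itertools import combinations
from typing import Dict

def pairwise_ratios(per_sens_res: Dict[str, float]) -> Dict[str, float]:
    # Ratio of the smaller to the larger metric value for every pair of groups.
    return {
        f"{a}/{b}": min(per_sens_res[a], per_sens_res[b])
        / max(per_sens_res[a], per_sens_res[b])
        for a, b in combinations(sorted(per_sens_res), 2)
    }

print(pairwise_ratios({"sex_0": 0.75, "sex_1": 0.5}))
# {'sex_0/sex_1': 0.6666666666666666}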