fair_forge.metrics

Classes

DependencyTarget(…): The variable against which the predictions are compared when measuring dependence.
GroupMetric(…): Protocol for group-aware metrics.
Metric(…): Protocol for plain metrics.
PerSens(…): Aggregation methods for metrics that are computed per sensitive attribute.
RenyiCorrelation(…): Rényi correlation.

Functions

cv(y_true, y_pred, *, groups): Calders-Verwer score.
per_sens_metrics(base_metrics[, per_sens, ...]): Create per-sensitive-attribute metrics from base metrics.
prob_neg(y_true, y_pred): Probability of negative prediction.
prob_pos(y_true, y_pred): Probability of positive prediction.
tnr(y_true, y_pred): True Negative Rate (TNR) or Specificity.
tpr(y_true, y_pred): True Positive Rate (TPR) or Sensitivity.

class fair_forge.metrics.DependencyTarget(*values)

Bases: Enum

The variable against which the predictions are compared when measuring dependence.

S = 's'

Compare the predictions to the sensitive attribute s.

Y = 'y'

Compare the predictions to the ground-truth label y.

class fair_forge.metrics.GroupMetric(*args, **kwargs)

Bases: Protocol

Protocol for group-aware metrics: callables of the form metric(y_true, y_pred, *, groups) that receive the sensitive-group labels alongside ground truth and predictions.

class fair_forge.metrics.Metric(*args, **kwargs)

Bases: Protocol

Protocol for plain metrics: callables of the form metric(y_true, y_pred).

class fair_forge.metrics.PerSens(*values)

Bases: Flag

Aggregation methods for metrics that are computed per sensitive attribute.

ALL = 31

All aggregations.

DIFF = 2

Difference of the per-group results.

DIFF_RATIO = 19

Equivalent to INDIVIDUAL | DIFF | RATIO.

INDIVIDUAL = 1

Individual per-group results.

MAX = 4

Maximum of the per-group results.

MIN = 8

Minimum of the per-group results.

MIN_MAX = 12

Equivalent to MIN | MAX.

RATIO = 16

Ratio of the per-group results.
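
Because PerSens is a standard Flag enum, the composite members are plain bitwise combinations of the basic flags, which is where the numeric values above come from. A quick check (assuming fair_forge is importable):

>>> from fair_forge.metrics import PerSens
>>> (PerSens.MIN | PerSens.MAX) == PerSens.MIN_MAX
True
>>> (PerSens.INDIVIDUAL | PerSens.DIFF | PerSens.RATIO) == PerSens.DIFF_RATIO
True
>>> PerSens.DIFF in PerSens.DIFF_RATIO  # Flag membership tests work as usual
True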

class fair_forge.metrics.RenyiCorrelation(base: DependencyTarget = DependencyTarget.S)

Bases: GroupMetric

Rényi correlation. Measures how dependent two random variables are.

As defined in "On Measures of Dependence" by Alfréd Rényi: https://link.springer.com/content/pdf/10.1007/BF02024507.pdf

base: DependencyTarget = DependencyTarget.S

The variable against which the predictions are compared; defaults to the sensitive attribute S.
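
A short usage sketch. The constructor signature is as documented; how the instance is invoked is an assumption here, mirroring the keyword-only groups argument of cv below:

>>> import numpy as np
>>> from fair_forge.metrics import DependencyTarget, RenyiCorrelation
>>> metric = RenyiCorrelation(base=DependencyTarget.Y)  # dependence on the label y instead of s
>>> y_true = np.array([1, 0, 1, 0], dtype=np.int32)
>>> y_pred = np.array([1, 0, 0, 1], dtype=np.int32)
>>> groups = np.array([0, 0, 1, 1], dtype=np.int32)
>>> score = metric(y_true, y_pred, groups=groups)  # assumed GroupMetric call shape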

fair_forge.metrics.cv(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) → Float

Calders-Verwer (CV) score, based on the difference in positive prediction rates between the sensitive groups.
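
A minimal invocation sketch, assuming fair_forge is installed; the arrays follow the int32 annotations in the signature, and groups is keyword-only:

>>> import numpy as np
>>> from fair_forge.metrics import cv
>>> y_true = np.array([1, 0, 1, 0, 1, 0], dtype=np.int32)
>>> y_pred = np.array([1, 1, 0, 0, 1, 0], dtype=np.int32)
>>> groups = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)  # binary sensitive attribute
>>> score = cv(y_true, y_pred, groups=groups)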

fair_forge.metrics.per_sens_metrics(base_metrics: Sequence[Metric], per_sens: PerSens = PerSens.DIFF_RATIO, remove_score_suffix: bool = True) → list[GroupMetric]

Create per-sensitive-attribute metrics from base metrics. Each base metric is evaluated per sensitive group, and the results are aggregated according to the per_sens flags.
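
For example, to turn plain metrics into group-aware ones that report only the per-group minimum and maximum (a sketch; how the returned GroupMetric objects are invoked is assumed to mirror cv above):

>>> from fair_forge.metrics import PerSens, per_sens_metrics, prob_pos, tpr
>>> group_metrics = per_sens_metrics([tpr, prob_pos], per_sens=PerSens.MIN_MAX)
>>> # one entry per base metric and aggregation; each is assumed to be called like
>>> # group_metrics[0](y_true, y_pred, groups=groups)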

fair_forge.metrics.prob_neg(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64

Probability of negative prediction.

fair_forge.metrics.prob_pos(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64

Probability of positive prediction.

fair_forge.metrics.tnr(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64

True Negative Rate (TNR) or Specificity.

fair_forge.metrics.tpr(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64

True Positive Rate (TPR) or Sensitivity.
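
Worked values for the four elementary rates, assuming the standard definitions named in the docstrings (TPR = TP / (TP + FN), TNR = TN / (TN + FP)) and that prob_pos and prob_neg are the empirical rates of positive and negative predictions:

>>> import numpy as np
>>> from fair_forge.metrics import prob_neg, prob_pos, tnr, tpr
>>> y_true = np.array([1, 1, 0, 0], dtype=np.int32)
>>> y_pred = np.array([1, 0, 0, 1], dtype=np.int32)
>>> float(tpr(y_true, y_pred))  # TP=1, FN=1
0.5
>>> float(tnr(y_true, y_pred))  # TN=1, FP=1
0.5
>>> float(prob_pos(y_true, y_pred)) + float(prob_neg(y_true, y_pred))  # complements for binary predictions
1.0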