fair_forge.metrics

Module Attributes

Float

Union of common float types.

LabelType

A type for specifying which labels to use (class or group labels).

Functions

as_group_metric(base_metrics[, agg, ...])

Turn a sequence of metrics into a list of group metrics.

cv(y_true, y_pred, *, groups)

Calders-Verwer score.

prob_neg(y_true, y_pred, *[, sample_weight])

Probability of negative prediction.

prob_pos(y_true, y_pred, *[, sample_weight])

Probability of positive prediction.

tnr(y_true, y_pred, *[, sample_weight])

True Negative Rate (TNR) or Specificity.

tpr(y_true, y_pred, *[, sample_weight])

True Positive Rate (TPR) or Sensitivity.

Classes

GroupMetric(*args, **kwargs)

Metric(*args, **kwargs)

MetricAgg(*values)

Aggregation methods for metrics that are computed per group.

RenyiCorrelation([base])

Rényi correlation.

type fair_forge.metrics.Float = float | numpy.float16 | numpy.float32 | numpy.float64

Union of common float types.

class fair_forge.metrics.GroupMetric(*args, **kwargs)[source]

Bases: Protocol

__call__(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) Float[source]

Call self as a function.

type fair_forge.metrics.LabelType = Literal['group', 'y']

A type for specifying which labels to use (class or group labels).

class fair_forge.metrics.Metric(*args, **kwargs)[source]

Bases: Protocol

__call__(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = Ellipsis) Float[source]

Call self as a function.

class fair_forge.metrics.MetricAgg(*values)[source]

Bases: Flag

Aggregation methods for metrics that are computed per group.

ALL = 31

All aggregations.

DIFF = 2

Difference of the per-group results.

DIFF_RATIO = 19

Equivalent to INDIVIDUAL | DIFF | RATIO.

INDIVIDUAL = 1

Individual per-group results.

MAX = 4

Maximum of the per-group results.

MIN = 8

Minimum of the per-group results.

MIN_MAX = 12

Equivalent to MIN | MAX.

RATIO = 16

Ratio of the per-group results.
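The numeric values above follow standard `enum.Flag` arithmetic: combined members such as `DIFF_RATIO` and `MIN_MAX` are bitwise-or aliases of the base flags. A minimal illustrative mirror of the documented values (not the `fair_forge` class itself):

```python
from enum import Flag

class MetricAgg(Flag):
    # Illustrative mirror of the documented flag values.
    INDIVIDUAL = 1   # individual per-group results
    DIFF = 2         # difference of the per-group results
    MAX = 4          # maximum of the per-group results
    MIN = 8          # minimum of the per-group results
    RATIO = 16       # ratio of the per-group results
    MIN_MAX = MIN | MAX                           # 12
    DIFF_RATIO = INDIVIDUAL | DIFF | RATIO        # 19
    ALL = INDIVIDUAL | DIFF | MAX | MIN | RATIO   # 31

# Flags combine with | and support membership tests with `in`:
agg = MetricAgg.DIFF | MetricAgg.RATIO
print(MetricAgg.DIFF in agg)       # True
print(MetricAgg.DIFF_RATIO.value)  # 19
```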

class fair_forge.metrics.RenyiCorrelation(base: LabelType = 'group')[source]

Bases: GroupMetric

Rényi correlation. Measures the degree of dependence between two random variables.

As defined in “On Measures of Dependence” by Alfréd Rényi: https://link.springer.com/content/pdf/10.1007/BF02024507.pdf

__call__(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) float[source]

Call self as a function.

base: LabelType = 'group'

Which label to use as base to compute the correlation against.
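For discrete variables, the Rényi (maximal) correlation can be computed as the second-largest singular value of the normalized joint-distribution matrix Q[i, j] = p(i, j) / sqrt(p(i) p(j)). The sketch below uses that construction; it is an illustration of the quantity, not `fair_forge`'s implementation, which may differ in details:

```python
import numpy as np

def renyi_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Rényi (maximal) correlation of two discrete samples, via SVD."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    # Empirical joint distribution over the observed categories.
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    q = joint / np.sqrt(np.outer(px, py))
    # The largest singular value of q is always 1; the second one
    # is the maximal correlation.
    return float(np.linalg.svd(q, compute_uv=False)[1])

a = np.array([0, 1, 0, 1, 0, 1, 0, 1])
b = np.array([0, 0, 1, 1, 0, 0, 1, 1])
print(renyi_correlation(a, a))  # identical variables -> 1.0
print(renyi_correlation(a, b))  # independent variables -> 0.0
```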

fair_forge.metrics.as_group_metric(base_metrics: Sequence[Metric], agg: MetricAgg = MetricAgg.DIFF_RATIO, remove_score_suffix: bool = True) list[GroupMetric][source]

Turn a sequence of metrics into a list of group metrics.
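Conceptually, a group metric evaluates a per-sample metric on each group separately and then aggregates the per-group values. A hand-rolled sketch of the `DIFF` aggregation for two groups (the real function also produces ratio, min/max, and individual results, and handles more than two groups):

```python
import numpy as np

def tpr(y_true, y_pred):
    # True positive rate: fraction of actual positives predicted positive.
    pos = y_true == 1
    return float(np.mean(y_pred[pos] == 1))

def group_diff(metric, y_true, y_pred, groups):
    """Absolute difference of a metric evaluated per group (DIFF sketch)."""
    values = [
        metric(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    ]
    return abs(values[0] - values[1])  # assumes exactly two groups

y_true = np.array([1, 1, 0, 1, 1, 0], dtype=np.int32)
y_pred = np.array([1, 0, 0, 1, 1, 0], dtype=np.int32)
groups = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)
print(group_diff(tpr, y_true, y_pred, groups))  # TPR 0.5 vs 1.0 -> 0.5
```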

fair_forge.metrics.cv(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) Float[source]

Calders-Verwer score.
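The Calders-Verwer score is conventionally derived from the gap in positive-prediction rates between the two groups; a hedged numpy sketch of that gap (whether `fair_forge` reports the gap itself or one minus the gap is an assumption here, not taken from this documentation):

```python
import numpy as np

def cv_gap(y_pred, groups):
    """Gap in positive-prediction rates between two groups (sketch).

    The Calders-Verwer score is conventionally 1 minus this gap.
    """
    g0, g1 = np.unique(groups)
    rate0 = np.mean(y_pred[groups == g0] == 1)
    rate1 = np.mean(y_pred[groups == g1] == 1)
    return float(abs(rate0 - rate1))

y_pred = np.array([1, 1, 0, 1, 0, 0], dtype=np.int32)
groups = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)
print(cv_gap(y_pred, groups))        # |2/3 - 1/3| = 1/3
print(1.0 - cv_gap(y_pred, groups))  # CV score under that convention
```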

fair_forge.metrics.prob_neg(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) float64[source]

Probability of negative prediction.

fair_forge.metrics.prob_pos(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) float64[source]

Probability of positive prediction.

Example

>>> import numpy as np
>>> import fair_forge as ff
>>> y_true = np.array([0, 0, 0, 1], dtype=np.int32)
>>> y_pred = np.array([0, 1, 0, 1], dtype=np.int32)
>>> ff.metrics.prob_pos(y_true, y_pred)
np.float64(0.5)
fair_forge.metrics.tnr(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) float64[source]

True Negative Rate (TNR) or Specificity.

fair_forge.metrics.tpr(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) float64[source]

True Positive Rate (TPR) or Sensitivity.
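TPR and TNR can be reproduced with a few lines of numpy, including the optional `sample_weight`; the sketch below is illustrative, not the library implementation (it uses float weights for generality):

```python
import numpy as np

def tpr(y_true, y_pred, *, sample_weight=None):
    """True positive rate: weighted share of actual positives predicted 1."""
    if sample_weight is None:
        sample_weight = np.ones_like(y_true, dtype=np.float64)
    pos = y_true == 1
    return float(np.sum(sample_weight[pos] * (y_pred[pos] == 1))
                 / np.sum(sample_weight[pos]))

def tnr(y_true, y_pred, *, sample_weight=None):
    """True negative rate: weighted share of actual negatives predicted 0."""
    if sample_weight is None:
        sample_weight = np.ones_like(y_true, dtype=np.float64)
    neg = y_true == 0
    return float(np.sum(sample_weight[neg] * (y_pred[neg] == 0))
                 / np.sum(sample_weight[neg]))

y_true = np.array([1, 1, 1, 0, 0], dtype=np.int32)
y_pred = np.array([1, 1, 0, 0, 1], dtype=np.int32)
print(tpr(y_true, y_pred))  # 2 of 3 positives recovered -> 0.666...
print(tnr(y_true, y_pred))  # 1 of 2 negatives recovered -> 0.5
```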