fair_forge.metrics

Module Attributes

- Float: Union of common float types.
- LabelType: A type for specifying which labels to use (class or group labels).

Functions

- as_group_metric: Turn a sequence of metrics into a list of group metrics.
- cv: Calders-Verwer score.
- prob_neg: Probability of negative prediction.
- prob_pos: Probability of positive prediction.
- True Negative Rate (TNR) or Specificity.
- True Positive Rate (TPR) or Sensitivity.

Classes

- MetricAgg: Aggregation methods for metrics that are computed per group.
- RenyiCorrelation: Renyi correlation.
- type fair_forge.metrics.Float = float | numpy.float16 | numpy.float32 | numpy.float64
  Union of common float types.
- type fair_forge.metrics.LabelType = Literal['group', 'y']
  A type for specifying which labels to use (class or group labels).
- class fair_forge.metrics.MetricAgg(*values)
  Bases: Flag
  Aggregation methods for metrics that are computed per group.
  - ALL = 31
    All aggregations.
  - DIFF = 2
    Difference of the per-group results.
  - DIFF_RATIO = 19
    Equivalent to INDIVIDUAL | DIFF | RATIO.
  - INDIVIDUAL = 1
    Individual per-group results.
  - MAX = 4
    Maximum of the per-group results.
  - MIN = 8
    Minimum of the per-group results.
  - MIN_MAX = 12
    Equivalent to MIN | MAX.
  - RATIO = 16
    Ratio of the per-group results.
- class fair_forge.metrics.RenyiCorrelation(base: LabelType = 'group')
  Bases: GroupMetric
  Renyi correlation. Measures how dependent two random variables are.
  As defined in the paper "On Measures of Dependence" by Alfréd Rényi: https://link.springer.com/content/pdf/10.1007/BF02024507.pdf
  - __call__(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) -> float
    Compute the correlation between the predictions and the chosen base labels.
  - base: LabelType = 'group'
    Which label to use as the base to compute the correlation against.
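To illustrate what this metric captures: for *binary* variables, the Rényi (maximal) correlation reduces to the absolute Pearson correlation, because any function of a binary variable is an affine transformation of it. A numpy sketch under that binary assumption — `renyi_binary` is a hypothetical helper for illustration, not part of fair_forge, which may compute the general quantity differently:

```python
import numpy as np

def renyi_binary(a: np.ndarray, b: np.ndarray) -> float:
    """Renyi correlation of two *binary* variables, which coincides
    with the absolute Pearson correlation in this special case."""
    return float(abs(np.corrcoef(a, b)[0, 1]))

y_pred = np.array([0, 1, 0, 1, 1, 0], dtype=np.int32)
groups = np.array([0, 1, 0, 1, 0, 1], dtype=np.int32)

# Partial dependence between predictions and group membership.
print(renyi_binary(y_pred, groups))

# Identical variables are maximally dependent: correlation 1.
print(renyi_binary(groups, groups))
```

A value of 0 indicates independence of predictions and groups (a fairness-friendly outcome when `base='group'`), while 1 indicates that one variable determines the other.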
- fair_forge.metrics.as_group_metric(base_metrics: Sequence[Metric], agg: MetricAgg = MetricAgg.DIFF_RATIO, remove_score_suffix: bool = True) -> list[GroupMetric]
  Turn a sequence of metrics into a list of group metrics.
- fair_forge.metrics.cv(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) -> Float
  Calders-Verwer score.
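The Calders-Verwer score is conventionally based on the gap in positive-prediction rates between two groups; the exact convention used by fair_forge.metrics.cv (the sign, or whether it reports 1 minus the gap) is not shown here, so the following numpy sketch is only illustrative, with `cv_gap` as a hypothetical helper:

```python
import numpy as np

def cv_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Gap in positive-prediction rates between two groups
    (a Calders-Verwer-style discrimination measure)."""
    rate_0 = np.mean(y_pred[groups == 0] == 1)  # P(pred=1 | group 0)
    rate_1 = np.mean(y_pred[groups == 1] == 1)  # P(pred=1 | group 1)
    return float(abs(rate_0 - rate_1))

y_pred = np.array([1, 1, 0, 1, 0, 0], dtype=np.int32)
groups = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)

# Group 0 gets a positive prediction 2/3 of the time, group 1 only 1/3.
print(cv_gap(y_pred, groups))
```

A gap of 0 means both groups receive positive predictions at the same rate.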
- fair_forge.metrics.prob_neg(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) -> float64
  Probability of negative prediction.
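Assuming this computes the (optionally weighted) fraction of predictions equal to 0, a minimal numpy sketch — `prob_neg_sketch` is a hypothetical helper, and unlike the real function it omits the unused `y_true` argument:

```python
import numpy as np

def prob_neg_sketch(y_pred: np.ndarray, sample_weight=None) -> float:
    """Fraction of predictions that are negative (== 0),
    optionally weighted per sample."""
    neg = (y_pred == 0)
    if sample_weight is None:
        return float(np.mean(neg))
    return float(np.average(neg, weights=sample_weight))

y_pred = np.array([0, 1, 0, 1], dtype=np.int32)
print(prob_neg_sketch(y_pred))  # 0.5: half the predictions are negative
```

For binary predictions, this quantity and the probability of a positive prediction (prob_pos below) sum to 1.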
- fair_forge.metrics.prob_pos(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, sample_weight: ndarray[tuple[Any, ...], dtype[bool]] | None = None) -> float64
  Probability of positive prediction.
  Example

  >>> import numpy as np
  >>> import fair_forge as ff
  >>> y_true = np.array([0, 0, 0, 1], dtype=np.int32)
  >>> y_pred = np.array([0, 1, 0, 1], dtype=np.int32)
  >>> ff.metrics.prob_pos(y_true, y_pred)
  np.float64(0.5)