fair_forge.metrics¶
Classes

| Class | Description |
|---|---|
| DependencyTarget | The variable that is compared to the predictions in order to check how similar they are. |
| PerSens | Aggregation methods for metrics that are computed per sensitive attribute. |
| RenyiCorrelation | Renyi correlation. |

Functions

| Function | Description |
|---|---|
| cv | Calder-Verwer. |
| per_sens_metrics | Create per-sensitive attribute metrics from base metrics. |
| prob_neg | Probability of negative prediction. |
| prob_pos | Probability of positive prediction. |
| tnr | True Negative Rate (TNR) or Specificity. |
| tpr | True Positive Rate (TPR) or Sensitivity. |
- class fair_forge.metrics.DependencyTarget(*values)[source]¶
Bases: Enum
The variable that is compared to the predictions in order to check how similar they are.
- S = 's'¶
Compare predictions against the sensitive attribute.
- Y = 'y'¶
Compare predictions against the ground-truth label.
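A minimal usage sketch; the members are plain string-valued Enum entries, carrying exactly the values shown above:

```python
from fair_forge.metrics import DependencyTarget

# The enum selects which variable predictions are compared against.
assert DependencyTarget.S.value == "s"
assert DependencyTarget.Y.value == "y"
```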
- class fair_forge.metrics.PerSens(*values)[source]¶
Bases: Flag
Aggregation methods for metrics that are computed per sensitive attribute.
- ALL = 31¶
All aggregations.
- DIFF = 2¶
Difference of the per-group results.
- DIFF_RATIO = 19¶
Equivalent to INDIVIDUAL | DIFF | RATIO.
- INDIVIDUAL = 1¶
Individual per-group results.
- MAX = 4¶
Maximum of the per-group results.
- MIN = 8¶
Minimum of the per-group results.
- MIN_MAX = 12¶
Equivalent to MIN | MAX.
- RATIO = 16¶
Ratio of the per-group results.
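Since PerSens is a Flag, aggregations compose with bitwise OR. A minimal sketch using only the values documented above:

```python
from fair_forge.metrics import PerSens

# Flag members combine with bitwise OR, matching the numeric values above.
assert PerSens.MIN | PerSens.MAX == PerSens.MIN_MAX                              # 8 | 4 == 12
assert PerSens.INDIVIDUAL | PerSens.DIFF | PerSens.RATIO == PerSens.DIFF_RATIO   # 1 | 2 | 16 == 19
assert PerSens.ALL.value == 31  # union of all five base aggregations

# Membership tests work as with any Flag:
assert PerSens.DIFF in PerSens.DIFF_RATIO
```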
- class fair_forge.metrics.RenyiCorrelation(base: DependencyTarget = DependencyTarget.S)[source]¶
Bases: GroupMetric
Rényi correlation. Measures how dependent two random variables are.
As defined in “On Measures of Dependence” by Alfréd Rényi: https://link.springer.com/content/pdf/10.1007/BF02024507.pdf
- base: DependencyTarget = 's'¶
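A minimal construction sketch. How a GroupMetric is ultimately evaluated is not shown on this page, so only the documented constructor is exercised:

```python
from fair_forge.metrics import DependencyTarget, RenyiCorrelation

# Default: dependence between the predictions and the sensitive attribute s.
renyi_s = RenyiCorrelation()

# Alternative: dependence between the predictions and the ground-truth label y.
renyi_y = RenyiCorrelation(base=DependencyTarget.Y)

print(renyi_s.base, renyi_y.base)
```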
- fair_forge.metrics.cv(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]], *, groups: ndarray[tuple[Any, ...], dtype[int32]]) → Float [source]¶
Calder-Verwer. Named after Calders & Verwer (2010), whose discrimination score measures the gap in positive prediction rates between the two sensitive groups.
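A minimal call sketch, assuming binary labels and a binary group indicator; the exact value returned depends on fair_forge's definition, so it is only printed, not asserted:

```python
import numpy as np
from fair_forge.metrics import cv

y_true = np.array([1, 0, 1, 1, 0, 0], dtype=np.int32)
y_pred = np.array([1, 0, 1, 0, 1, 0], dtype=np.int32)
groups = np.array([0, 0, 0, 1, 1, 1], dtype=np.int32)  # sensitive attribute per sample

# `groups` is keyword-only, per the signature above.
score = cv(y_true, y_pred, groups=groups)
print(score)
```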
- fair_forge.metrics.per_sens_metrics(base_metrics: Sequence[Metric], per_sens: PerSens = PerSens.DIFF_RATIO, remove_score_suffix: bool = True) → list[GroupMetric] [source]¶
Create per-sensitive attribute metrics from base metrics.
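A usage sketch. It assumes that plain metric functions such as prob_pos satisfy the Metric protocol expected for base_metrics, which this page does not spell out:

```python
from fair_forge.metrics import PerSens, per_sens_metrics, prob_neg, prob_pos

# ASSUMPTION: prob_pos / prob_neg qualify as `Metric` base metrics.
metrics = per_sens_metrics(
    base_metrics=[prob_pos, prob_neg],
    per_sens=PerSens.INDIVIDUAL | PerSens.DIFF,  # per-group values plus their difference
)

# The result is a list of GroupMetric objects covering the requested aggregations.
print(len(metrics))
```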
- fair_forge.metrics.prob_neg(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64 [source]¶
Probability of negative prediction.
- fair_forge.metrics.prob_pos(y_true: ndarray[tuple[Any, ...], dtype[int32]], y_pred: ndarray[tuple[Any, ...], dtype[int32]]) → float64 [source]¶
Probability of positive prediction.
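A final sketch of the two probability helpers. Assuming they return the empirical rates of predicted 1s and 0s, the two values should sum to 1 for binary predictions; this is an assumption about the implementation, not a documented guarantee:

```python
import numpy as np
from fair_forge.metrics import prob_neg, prob_pos

y_true = np.array([1, 0, 1, 0], dtype=np.int32)
y_pred = np.array([1, 1, 1, 0], dtype=np.int32)

p_pos = prob_pos(y_true, y_pred)  # expected: 0.75 if this is the rate of predicted 1s
p_neg = prob_neg(y_true, y_pred)  # expected: 0.25 if this is the rate of predicted 0s
print(p_pos, p_neg)
```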