fedbiomed.common.metrics

Module: fedbiomed.common.metrics

Provides test metrics: both the MetricTypes to use in TrainingArgs and the corresponding calculation routines.

Classes

MetricTypes

CLASS
MetricTypes(idx, metric_category)

Bases: _BaseEnum

List of performance metrics used to evaluate the model.

Source code in fedbiomed/common/metrics.py
def __init__(self, idx: int, metric_category: _MetricCategory) -> None:
    self._idx = idx
    self._metric_category = metric_category

Attributes

ACCURACY class-attribute
ACCURACY = (0, _MetricCategory.CLASSIFICATION_LABELS)
EXPLAINED_VARIANCE class-attribute
EXPLAINED_VARIANCE = (6, _MetricCategory.REGRESSION)
F1_SCORE class-attribute
F1_SCORE = (1, _MetricCategory.CLASSIFICATION_LABELS)
MEAN_ABSOLUTE_ERROR class-attribute
MEAN_ABSOLUTE_ERROR = (5, _MetricCategory.REGRESSION)
MEAN_SQUARE_ERROR class-attribute
MEAN_SQUARE_ERROR = (4, _MetricCategory.REGRESSION)
PRECISION class-attribute
PRECISION = (2, _MetricCategory.CLASSIFICATION_LABELS)
RECALL class-attribute
RECALL = (3, _MetricCategory.CLASSIFICATION_LABELS)

Functions

get_all_metrics()
staticmethod
Source code in fedbiomed/common/metrics.py
@staticmethod
def get_all_metrics() -> List[str]:
    return [metric.name for metric in MetricTypes]
get_metric_type_by_name(metric_name)
staticmethod
Source code in fedbiomed/common/metrics.py
@staticmethod
def get_metric_type_by_name(metric_name: str):
    for metric in MetricTypes:
        if metric.name == metric_name:
            return metric
metric_category()
Source code in fedbiomed/common/metrics.py
def metric_category(self) -> _MetricCategory:
    return self._metric_category
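
A minimal usage sketch for these two helpers (assuming a standard fedbiomed installation; the exact listing order follows the enum definition):

from fedbiomed.common.metrics import MetricTypes

# List the names of all available metrics, e.g. to validate a
# user-supplied metric name coming from TrainingArgs
print(MetricTypes.get_all_metrics())
# e.g. ['ACCURACY', 'F1_SCORE', 'PRECISION', 'RECALL',
#       'MEAN_SQUARE_ERROR', 'MEAN_ABSOLUTE_ERROR', 'EXPLAINED_VARIANCE']

# Resolve a name back to its enum member; returns None for unknown names
metric = MetricTypes.get_metric_type_by_name('ACCURACY')
assert metric is MetricTypes.ACCURACY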

Metrics

CLASS
Metrics()

Bases: object

Class of performance metrics used in validation evaluation.

Attrs

metrics: Provided metrics in the form { MetricTypes.name : sklearn-based metric function }

Source code in fedbiomed/common/metrics.py
def __init__(self):
    """Constructs metric class with provided metric types: metric function

    Attrs:
        metrics: Provided metrics in the form `{ MetricTypes.name : sklearn-based metric function }`
    """

    self.metrics = {
        MetricTypes.ACCURACY.name: self.accuracy,
        MetricTypes.PRECISION.name: self.precision,
        MetricTypes.RECALL.name: self.recall,
        MetricTypes.F1_SCORE.name: self.f1_score,
        MetricTypes.MEAN_SQUARE_ERROR.name: self.mse,
        MetricTypes.MEAN_ABSOLUTE_ERROR.name: self.mae,
        MetricTypes.EXPLAINED_VARIANCE.name: self.explained_variance,
    }

Attributes

metrics instance-attribute
metrics = {
    MetricTypes.ACCURACY.name: self.accuracy,
    MetricTypes.PRECISION.name: self.precision,
    MetricTypes.RECALL.name: self.recall,
    MetricTypes.F1_SCORE.name: self.f1_score,
    MetricTypes.MEAN_SQUARE_ERROR.name: self.mse,
    MetricTypes.MEAN_ABSOLUTE_ERROR.name: self.mae,
    MetricTypes.EXPLAINED_VARIANCE.name: self.explained_variance,
}

Functions

accuracy(y_true, y_pred, **kwargs)
staticmethod

Evaluate the accuracy score

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.accuracy_score` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | Accuracy score |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def accuracy(y_true: Union[np.ndarray, list],
             y_pred: Union[np.ndarray, list],
             **kwargs: dict) -> float:
    """ Evaluate the accuracy score

    Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.accuracy_score`][sklearn.metrics.accuracy_score]

    Returns:
        Accuracy score

    Raises:
        FedbiomedMetricError: raised if the above sklearn method raises
    """

    try:
        y_true, y_pred, _, _ = Metrics._configure_multiclass_parameters(y_true, y_pred, kwargs, 'ACCURACY')
        return metrics.accuracy_score(y_true, y_pred, **kwargs)
    except Exception as e:
        msg = ErrorNumbers.FB611.value + " Exception raised from SKLEARN metrics: " + str(e)
        raise FedbiomedMetricError(msg)
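
As a sketch, the static method can also be called directly on 1D label arrays (this assumes `_configure_multiclass_parameters`, whose body is not shown here, passes 1D integer labels through unchanged):

import numpy as np
from fedbiomed.common.metrics import Metrics

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])
# 3 of the 4 predicted labels match the true labels
print(Metrics.accuracy(y_true, y_pred))  # 0.75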
evaluate(y_true, y_pred, metric, **kwargs)

Perform evaluation based on given metric.

This method configures the given y_true and y_pred to make them compatible with the default evaluation methods.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `metric` | `MetricTypes` | An instance of `MetricTypes` to choose the metric used for evaluation | *required* |
| `**kwargs` | `dict` | Arguments specific to each type of metric | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Union[int, float]` | Result of the evaluation function |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised in case of invalid `metric`, `y_true`, or `y_pred` types |

Source code in fedbiomed/common/metrics.py
def evaluate(self,
             y_true: Union[np.ndarray, list],
             y_pred: Union[np.ndarray, list],
             metric: MetricTypes,
             **kwargs: dict) -> Union[int, float]:
    """Perform evaluation based on given metric.

    This method configures the given y_true and y_pred to make them compatible with the default evaluation methods.

    Args:
        y_true: True values
        y_pred: Predicted values
        metric: An instance of MetricTypes to choose the metric that will be used for evaluation
        **kwargs: Arguments specific to each type of metric

    Returns:
        Result of the evaluation function

    Raises:
        FedbiomedMetricError: in case of invalid metric, y_true and y_pred types
    """
    if not isinstance(metric, MetricTypes):
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Metric should be an instance of `MetricTypes`")

    if y_true is not None and not isinstance(y_true, (np.ndarray, list)):
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: The argument `y_true` should be an instance "
                                   f"of `np.ndarray` or `list`, but got {type(y_true)}")

    if y_pred is not None and not isinstance(y_pred, (np.ndarray, list)):
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: The argument `y_pred` should be an instance "
                                   f"of `np.ndarray` or `list`, but got {type(y_pred)}")

    y_true, y_pred = self._configure_y_true_pred_(y_true=y_true, y_pred=y_pred, metric=metric)
    result = self.metrics[metric.name](y_true, y_pred, **kwargs)

    return result
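
A typical call goes through evaluate, which validates the inputs and dispatches to the right static method via the metrics lookup table. A minimal sketch:

import numpy as np
from fedbiomed.common.metrics import Metrics, MetricTypes

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# evaluate() dispatches to Metrics.accuracy via self.metrics[metric.name]
acc = Metrics().evaluate(y_true, y_pred, metric=MetricTypes.ACCURACY)
print(acc)  # 0.8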
explained_variance(y_true, y_pred, **kwargs)
staticmethod

Evaluate the Explained variance regression score.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.explained_variance_score` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | EV score (float or ndarray of floats) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing explained variance raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def explained_variance(y_true: Union[np.ndarray, list],
                       y_pred: Union[np.ndarray, list],
                       **kwargs: dict) -> float:
    """Evaluate the Explained variance regression score.

    Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.explained_variance_score`]
            [sklearn.metrics.explained_variance_score]

    Returns:
        EV score (float or ndarray of floats)

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing explained variance raises
    """

    # Set multioutput to 'raw_values' if it is not defined by the researcher
    if len(y_true.shape) > 1:
        multi_output = kwargs.get('multioutput', 'raw_values')
    else:
        multi_output = None

    kwargs.pop('multioutput', None)

    try:
        return metrics.explained_variance_score(y_true, y_pred, multioutput=multi_output, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `EXPLAINED_VARIANCE`"
                                   f" {str(e)}")
f1_score(y_true, y_pred, **kwargs)
staticmethod

Evaluate the F1 score.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.f1_score` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | f1_score (float or array of float, shape = [n_unique_labels]) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing the F1 score raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def f1_score(y_true: Union[np.ndarray, list],
             y_pred: Union[np.ndarray, list],
             **kwargs: dict) -> float:
    """Evaluate the F1 score.

    Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.f1_score`][sklearn.metrics.f1_score]

    Returns:
        f1_score (float or array of float, shape = [n_unique_labels])

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing the F1 score raises
    """

    # Get average and pos_label arguments based on multiclass status
    y_true, y_pred, average, pos_label = Metrics._configure_multiclass_parameters(y_true,
                                                                                  y_pred,
                                                                                  kwargs,
                                                                                  'F1_SCORE')

    kwargs.pop("average", None)
    kwargs.pop("pos_label", None)

    try:
        return metrics.f1_score(y_true, y_pred, average=average, pos_label=pos_label, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `F1_SCORE` {str(e)}")
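
A multiclass sketch; it assumes `_configure_multiclass_parameters` (not shown here) honors an `average` value passed through kwargs, as its use above suggests:

import numpy as np
from fedbiomed.common.metrics import Metrics

y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 2, 2, 2, 1])
# Macro averaging: per-class F1 scores are averaged without class weighting
print(Metrics.f1_score(y_true, y_pred, average='macro'))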
mae(y_true, y_pred, **kwargs)
staticmethod

Evaluate the mean absolute error.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.mean_absolute_error` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | MAE score (float or ndarray of floats) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing the mean absolute error raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def mae(y_true: Union[np.ndarray, list],
        y_pred: Union[np.ndarray, list],
        **kwargs: dict) -> float:
    """Evaluate the mean absolute error.

    Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.mean_absolute_error`][sklearn.metrics.mean_absolute_error]

    Returns:
        MAE score (float or ndarray of floats)

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing the mean absolute error raises
    """
    # Set multioutput to 'raw_values' if it is not defined by the researcher
    if len(y_true.shape) > 1:
        multi_output = kwargs.get('multioutput', 'raw_values')
    else:
        multi_output = None

    kwargs.pop('multioutput', None)

    try:
        return metrics.mean_absolute_error(y_true, y_pred, multioutput=multi_output, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `MEAN_ABSOLUTE_ERROR`"
                                   f" {str(e)}")
mse(y_true, y_pred, **kwargs)
staticmethod

Evaluate the mean squared error.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.mean_squared_error` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | MSE score (float or ndarray of floats) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing the mean squared error raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def mse(y_true: Union[np.ndarray, list],
        y_pred: Union[np.ndarray, list],
        **kwargs: dict) -> float:
    """Evaluate the mean squared error.

    Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.mean_squared_error`][sklearn.metrics.mean_squared_error]

    Returns:
        MSE score (float or ndarray of floats)

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing the mean squared error raises
    """

    # Set multioutput to 'raw_values' if it is not defined by the researcher
    if len(y_true.shape) > 1:
        multi_output = kwargs.get('multioutput', 'raw_values')
    else:
        multi_output = None

    kwargs.pop('multioutput', None)

    try:
        return metrics.mean_squared_error(y_true, y_pred, multioutput=multi_output, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `MEAN_SQUARE_ERROR`"
                                   f" {str(e)}")
precision(y_true, y_pred, **kwargs)
staticmethod

Evaluate the precision score.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.precision_score` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | precision (float, or array of float of shape (n_unique_labels,)) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing precision raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def precision(y_true: Union[np.ndarray, list],
              y_pred: Union[np.ndarray, list],
              **kwargs: dict) -> float:
    """Evaluate the precision score
    [source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html]

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.precision_score`][sklearn.metrics.precision_score]

    Returns:
        precision (float, or array of float of shape (n_unique_labels,))

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing precision raises
    """
    # Get average and pos_label arguments based on multiclass status
    y_true, y_pred, average, pos_label = Metrics._configure_multiclass_parameters(y_true,
                                                                                  y_pred,
                                                                                  kwargs,
                                                                                  'PRECISION')

    kwargs.pop("average", None)
    kwargs.pop("pos_label", None)

    try:
        return metrics.precision_score(y_true, y_pred, average=average, pos_label=pos_label, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `PRECISION`: "
                                   f"{str(e)}")
recall(y_true, y_pred, **kwargs)
staticmethod

Evaluate the recall.

Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Union[np.ndarray, list]` | True values | *required* |
| `y_pred` | `Union[np.ndarray, list]` | Predicted values | *required* |
| `**kwargs` | `dict` | Extra arguments from `sklearn.metrics.recall_score` | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `float` | recall (float (if average is not None) or array of float of shape (n_unique_labels,)) |

Raises:

| Type | Description |
| --- | --- |
| `FedbiomedMetricError` | Raised if the above sklearn method for computing recall raises |

Source code in fedbiomed/common/metrics.py
@staticmethod
def recall(y_true: Union[np.ndarray, list],
           y_pred: Union[np.ndarray, list],
           **kwargs: dict) -> float:
    """Evaluate the recall.
    [source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html]

    Args:
        y_true: True values
        y_pred: Predicted values
        **kwargs: Extra arguments from [`sklearn.metrics.recall_score`][sklearn.metrics.recall_score]

    Returns:
        recall (float (if average is not None) or array of float of shape (n_unique_labels,))

    Raises:
        FedbiomedMetricError: raised if the above sklearn method for computing recall raises
    """

    # Get average and pos_label arguments based on multiclass status
    y_true, y_pred, average, pos_label = Metrics._configure_multiclass_parameters(y_true, y_pred, kwargs, 'RECALL')

    kwargs.pop("average", None)
    kwargs.pop("pos_label", None)

    try:
        return metrics.recall_score(y_true, y_pred, average=average, pos_label=pos_label, **kwargs)
    except Exception as e:
        raise FedbiomedMetricError(f"{ErrorNumbers.FB611.value}: Error during calculation of `RECALL`: "
                                   f"{str(e)}")