fedbiomed.common.models

Module: fedbiomed.common.models

The fedbiomed.common.models module includes model abstraction classes that wrap plain framework-specific models.

Please visit the Declearn repository for the "TorchVector" and "NumpyVector" classes used in this module.
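
As a minimal usage sketch (assuming scikit-learn and PyTorch are installed; both wrappers are documented in detail below):

    from sklearn.linear_model import SGDClassifier
    import torch

    from fedbiomed.common.models import SkLearnModel, TorchModel

    # scikit-learn estimators are wrapped through the SkLearnModel builder,
    # which receives the estimator class (not an instance).
    sk_model = SkLearnModel(SGDClassifier)

    # PyTorch modules are wrapped through TorchModel, which receives an
    # instantiated torch.nn.Module.
    pt_model = TorchModel(torch.nn.Linear(4, 2))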

Classes

BaseSkLearnModel

CLASS
BaseSkLearnModel(model)

Bases: Model

Wrapper for scikit-learn models.

This class implements all abstract methods from the Model API, but adds some scikit-learn-specific abstract methods that must be implemented by its children.

Attributes:

Name Type Description
model BaseEstimator

Wrapped model

_is_declearn_optim bool

Switch that allows the use of Declearn's optimizers

param_list List[str]

List of the model's parameter attribute names. Should be set when calling the set_init_params method

Class attributes:

Name Type Description
default_lr_init ClassVar[float]

Default value for setting the learning rate of the scikit-learn model. Needed for computing gradients. Set with the set_learning_rate setter

default_lr ClassVar[str]

Default value for setting the learning rate schedule of the scikit-learn model. Needed for computing gradients. Set with the set_learning_rate setter

is_classification ClassVar[bool]

Boolean flag indicating whether the wrapped model is designed for classification or for regression supervised-learning tasks.

Parameters:

Name Type Description Default
model BaseEstimator

Model object as an instance of BaseEstimator

required

Raises:

Type Description
FedbiomedModelError

if model is not a scikit-learn BaseEstimator object

Source code in fedbiomed/common/models/_sklearn.py
def __init__(
    self,
    model: BaseEstimator,
) -> None:
    """Instantiate the wrapper over a scikit-learn BaseEstimator.

    Args:
        model: Model object as an instance of [BaseEstimator][sklearn.base.BaseEstimator]

    Raises:
        FedbiomedModelError: if model is not a scikit-learn [BaseEstimator][sklearn.base.BaseEstimator] object
    """
    super().__init__(model)
    self._is_declearn_optim: bool = False  # TODO: to be changed when implementing declearn optimizers
    self._gradients: Dict[str, np.ndarray] = {}
    self.param_list: List[str] = []

Attributes

default_lr class-attribute
default_lr: ClassVar[str] = 'constant'
default_lr_init class-attribute
default_lr_init: ClassVar[float] = 0.1
is_classification class-attribute
is_classification: ClassVar[bool]
model class-attribute
model: BaseEstimator
param_list instance-attribute
param_list: List[str] = []

Functions

apply_updates(updates)

Apply incoming updates to the wrapped model's parameters.

Parameters:

Name Type Description Default
updates Union[Dict[str, np.ndarray], NumpyVector]

Model parameters' updates to add/apply to the existing model parameters.

required
Source code in fedbiomed/common/models/_sklearn.py
def apply_updates(self, updates: Union[Dict[str, np.ndarray], NumpyVector]) -> None:
    """Apply incoming updates to the wrapped model's parameters.

    Args:
        updates: Model parameters' updates to add/apply to the existing model parameters.
    """
    for key, val in self._get_iterator_model_params(updates):
        wgt = getattr(self.model, key)
        setattr(self.model, key, wgt + val)
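Below is a hedged sketch of the additive behaviour (assuming the SkLearnModel builder delegates calls to the wrapped BaseSkLearnModel instance, as in the usage example further below):

    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.set_init_params({"n_classes": 2, "n_features": 3})

    # Updates are added to the current parameter values (zeros after
    # set_init_params), so coef_ becomes 0.5 and intercept_ becomes 0.1.
    model.apply_updates({
        "coef_": np.full((1, 3), 0.5),
        "intercept_": np.array([0.1]),
    })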
disable_internal_optimizer()
abstractmethod

Abstract method to be implemented by child classes.

Disables the scikit-learn internal optimizer by setting arbitrary learning rate parameters on the scikit-learn model, in order to then compute its gradients.

!!! warning "Call it only if using declearn optimizers": Method implementation will depend on the attribute used to set up these arbitrary arguments.

Source code in fedbiomed/common/models/_sklearn.py
@abstractmethod
def disable_internal_optimizer(self) -> None:
    """Abstract method to apply;

    Disables scikit learn internal optimizer by setting arbitrary learning rate parameters to the
    scikit learn model, in order to then compute its gradients.

    !!! warning "Call it only if using `declearn` optimizers"
            Method implementation will depend on the attribute used to set up
            these arbitrary arguments.
    """
export(filename)

Export the wrapped model to a dump file.

Parameters:

Name Type Description Default
filename str

path to the file where the model will be saved.

required

!!! info "Notes": This method is designed to save the model to a local dump file for easy re-use by the same user, possibly outside of Fed-BioMed. It is not designed to produce trustworthy data dumps and is not used to exchange models and their weights as part of the federated learning process.

!!! warning "Warning": This method uses joblib.dump, which relies on pickle and is therefore hard to trust by third-party loading methods.

Source code in fedbiomed/common/models/_sklearn.py
def export(self, filename: str) -> None:
    """Export the wrapped model to a dump file.

    Args:
        filename: path to the file where the model will be saved.

    !!! info "Notes":
        This method is designed to save the model to a local dump
        file for easy re-use by the same user, possibly outside of
        Fed-BioMed. It is not designed to produce trustworthy data
        dumps and is not used to exchange models and their weights
        as part of the federated learning process.

    !!! warning "Warning":
        This method uses `joblib.dump`, which relies on pickle and
        is therefore hard to trust by third-party loading methods.
    """
    with open(filename, "wb") as file:
        joblib.dump(self.model, file)
flatten()

Gets weights as a flattened vector.

Returns:

Name Type Description
to_list List[float]

Flattened model weights, as a list of float.

Source code in fedbiomed/common/models/_sklearn.py
def flatten(self) -> List[float]:
    """Gets weights as flatten vector

    Returns:
        to_list: Convert np.ndarray to a list if it is True.
    """

    weights = self.get_weights()
    flatten = []
    for _, w in weights.items():
        w_: List[float] = list(w.flatten().astype(float))
        flatten.extend(w_)

    return flatten
get_gradients(as_vector=False)

Gets computed gradients

Parameters:

Name Type Description Default
as_vector bool

Whether to wrap returned gradients into a declearn Vector.

False

Raises:

Type Description
FedbiomedModelError

raised if gradients have not been computed yet (i.e. the model has not been trained)

Returns:

Type Description
Union[Dict[str, np.ndarray], NumpyVector]

Gradients, as a dictionary mapping parameters' names to their gradient's numpy array, or as a declearn NumpyVector wrapping such a dict.

Source code in fedbiomed/common/models/_sklearn.py
def get_gradients(
    self,
    as_vector: bool = False,
) -> Union[Dict[str, np.ndarray], NumpyVector]:
    """Gets computed gradients

    Args:
        as_vector: Whether to wrap returned gradients into a declearn Vector.

    Raises:
        FedbiomedModelError: raised if gradients have not been computed yet (i.e. the model has not been trained)

    Returns:
        Gradients, as a dictionary mapping parameters' names to their gradient's
            numpy array, or as a declearn NumpyVector wrapping such a dict.
    """
    if self._gradients is None:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}. Cannot get gradients if model has not been trained beforehand!"
        )
    gradients = self._gradients
    if as_vector:
        return NumpyVector(gradients)
    return gradients
get_learning_rate()
abstractmethod

Retrieves the learning rate of the model. Method implementation will depend on the attribute used to set up these arbitrary arguments.

Returns:

Type Description
List[float]

Initial learning rate value(s); a single value if only one learning rate has been used, and a list of several learning rates, one for each layer of the model.

Source code in fedbiomed/common/models/_sklearn.py
@abstractmethod
def get_learning_rate(self) -> List[float]:
    """Retrieves learning rate of the model. Method implementation will
    depend on the attribute used to set up these arbitrary arguments

    Returns:
        Initial learning rate value(s); a single value if only one learning rate has been used, and
            a list of several learning rates, one for each layer of the model.
    """
get_params(value=None)

Gets scikit learn model hyperparameters.

Please refer to the BaseEstimator documentation (https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html) on the get_params method for further details.

Parameters:

Name Type Description Default
value Any

if specified, returns a specific hyperparameter, otherwise, returns a dictionary with all the hyperparameters. Defaults to None.

None

Returns:

Type Description
Dict[str, Any]

Dictionary mapping model hyperparameter names to their values

Source code in fedbiomed/common/models/_sklearn.py
def get_params(self, value: Any = None) -> Dict[str, Any]:
    """Gets scikit learn model hyperparameters.

    Please refer to the [BaseEstimator documentation](https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html)
    on the `get_params` method for further details.

    Args:
        value: if specified, returns a specific hyperparameter, otherwise, returns a dictionary
            with all the hyperparameters. Defaults to None.

    Returns:
        Dictionary mapping model hyperparameter names to their values
    """
    if value is not None:
        return self.model.get_params().get(value)
    return self.model.get_params()
get_weights(as_vector=False)

Returns model's parameters, optionally as a declearn NumpyVector.

Parameters:

Name Type Description Default
as_vector bool

Whether to wrap returned weights into a declearn Vector.

False

Raises:

Type Description
FedbiomedModelError

If the list of parameters is not defined.

Returns:

Type Description
Union[Dict[str, np.ndarray], NumpyVector]

Model weights, as a dictionary mapping parameters' names to their numpy array, or as a declearn NumpyVector wrapping such a dict.

Source code in fedbiomed/common/models/_sklearn.py
def get_weights(
    self,
    as_vector: bool = False,
) -> Union[Dict[str, np.ndarray], NumpyVector]:
    """Returns model's parameters, optionally as a declearn NumpyVector.

    Args:
        as_vector: Whether to wrap returned weights into a declearn Vector.

    Raises:
        FedbiomedModelError: If the list of parameters is not defined.

    Returns:
        Model weights, as a dictionary mapping parameters' names to their
            numpy array, or as a declearn NumpyVector wrapping such a dict.
    """
    if not self.param_list:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}. Attribute `param_list` is empty. You should "
            f"have initialized the model beforehand (try calling `set_init_params`)"
        )
    # Gather copies of the model weights.
    weights = {}  # type: Dict[str, np.ndarray]
    try:
        for key in self.param_list:
            val = getattr(self.model, key)
            if not isinstance(val, np.ndarray):
                raise FedbiomedModelError(
                    f"{ErrorNumbers.FB622.value}: SklearnModel parameter is not a numpy array."
                )
            weights[key] = val.copy()
    except AttributeError as err:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}. Unable to access weights of BaseEstimator "
            f"model {self.model} (details {err}"
        ) from err
    # Optionally encapsulate into a NumpyVector, else return as a dict.
    if as_vector:
        return NumpyVector(weights)
    return weights
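A short hedged sketch of the two return forms (assuming the model has been initialized with set_init_params so that param_list is populated):

    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.set_init_params({"n_classes": 2, "n_features": 3})

    weights = model.get_weights()               # dict: {"intercept_": ..., "coef_": ...}
    vector = model.get_weights(as_vector=True)  # declearn NumpyVector wrapping that dict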
init_training()

Initialises the training by setting up attributes.

Raises:

Type Description
FedbiomedModelError

raised if param_list has not been defined

Source code in fedbiomed/common/models/_sklearn.py
def init_training(self):
    """Initialises the training by setting up attributes.

    Raises:
        FedbiomedModelError: raised if `param_list` has not been defined
    """
    if not self.param_list:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}. Attribute `param_list` is empty. You should "
            f"have initialized the model beforehand (try calling `set_init_params`)"
        )
    if self._is_declearn_optim:
        self.disable_internal_optimizer()
predict(inputs)

Computes prediction given input data.

Parameters:

Name Type Description Default
inputs np.ndarray

input data

required

Returns:

Type Description
np.ndarray

Model predictions

Source code in fedbiomed/common/models/_sklearn.py
def predict(self, inputs: np.ndarray) -> np.ndarray:
    """Computes prediction given input data.

    Args:
        inputs: input data

    Returns:
        Model predictions
    """
    return self.model.predict(inputs)
set_init_params(model_args)
abstractmethod

Zeroes the scikit-learn model parameters.

Should be used before any training, as it sets the scikit-learn model parameters and makes them accessible through the use of attributes. Model parameter attribute names will depend on the scikit-learn model wrapped.

Parameters:

Name Type Description Default
model_args Dict

dictionary that contains specifications for setting the initial model parameters

required
Source code in fedbiomed/common/models/_sklearn.py
@abstractmethod
def set_init_params(self, model_args: Dict) -> None:
    """Zeroes scikit learn model parameters.

    Should be used before any training, as it sets the scikit learn model parameters
    and makes them accessible through the use of attributes. Model parameter attribute names
    will depend on the scikit learn model wrapped.

    Args:
        model_args: dictionary that contains specifications for setting the initial model parameters
    """
set_params(params)

Sets scikit learn model hyperparameters.

Please refer to the BaseEstimator documentation (https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html) on the set_params method for further details.

Parameters:

Name Type Description Default
params Any

new hyperparameters to set up the model.

{}

Returns:

Type Description
Dict[str, Any]

Dictionary containing the new hyperparameter values

Source code in fedbiomed/common/models/_sklearn.py
def set_params(self, **params: Any) -> Dict[str, Any]:
    """Sets scikit learn model hyperparameters.

    Please refer to the [BaseEstimator documentation](https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html)
    on the `set_params` method for further details.

    Args:
        params: new hyperparameters to set up the model.

    Returns:
        Dict[str, Any]: dictionary containing the new hyperparameter values.
    """
    self.model.set_params(**params)
    return params
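Both get_params and set_params simply proxy the estimator's own hyperparameter API; a hedged sketch:

    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.set_params(eta0=0.05, learning_rate="constant")  # forwarded to SGDClassifier.set_params
    model.get_params("eta0")   # 0.05
    model.get_params()         # full hyperparameter dict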
set_weights(weights)

Assign new values to the model's trainable weights.

Parameters:

Name Type Description Default
weights Union[Dict[str, np.ndarray], NumpyVector]

Model weights, as a dict mapping parameters' names to their numpy array, or as a declearn NumpyVector wrapping such a dict.

required
Source code in fedbiomed/common/models/_sklearn.py
def set_weights(
    self,
    weights: Union[Dict[str, np.ndarray], NumpyVector],
) -> None:
    """Assign new values to the model's trainable weights.

    Args:
        weights: Model weights, as a dict mapping parameters' names to their
            numpy array, or as a declearn NumpyVector wrapping such a dict.
    """
    for key, val in self._get_iterator_model_params(weights):
        setattr(self.model, key, val.copy())
train(inputs, targets, stdout=None, **kwargs)

Trains the scikit-learn model and internally computes gradients

Parameters:

Name Type Description Default
inputs np.ndarray

input data.

required
targets np.ndarray

targets, to be fit with input data

required
stdout Optional[List[List[str]]]

list of console outputs collected during training, which contain loss values. Used to plot model losses. Defaults to None.

None

Raises:

Type Description
FedbiomedModelError

raised if training has not been initialized

Source code in fedbiomed/common/models/_sklearn.py
def train(
    self,
    inputs: np.ndarray,
    targets: np.ndarray,
    stdout: Optional[List[List[str]]] = None,
    **kwargs,
) -> None:
    """Trains scikit learn model and internally computes gradients

    Args:
        inputs: input data.
        targets: targets, to be fit with input data.
        stdout: list of console outputs collected during training,
            which contain loss values. Used to plot model losses. Defaults to None.

    Raises:
        FedbiomedModelError: raised if training has not been initialized
    """
    batch_size = inputs.shape[0]
    w_init = self.get_weights(as_vector=False)  # type: Dict[str, np.ndarray]
    w_updt = {key: np.zeros_like(val) for key, val in w_init.items()}
    # Iterate over the batch; accumulate sample-wise gradients (and loss).
    for idx in range(batch_size):
        # Compute updated weights based on the sample. Capture loss prints.
        with capture_stdout() as console:
            self.model.partial_fit(inputs[idx : idx + 1], targets[idx])
        if stdout is not None:
            stdout.append(console)
        # Accumulate updated weights (weights + sum of gradients).
        # Reset the model's weights and iteration counter.
        for key in self.param_list:
            w_updt[key] += getattr(self.model, key)
            setattr(self.model, key, w_init[key])
        self.model.n_iter_ -= 1
    # Compute the batch-averaged gradients (scaled by eta_t).
    # Note: w_init: {w_t}, w_updt: {w_t - eta_t sum_{s=1}^B(grad_s)}
    #       hence eta_t * avg(grad_s) = w_init - (w_updt / B)
    self._gradients = {
        key: w_init[key] - (w_updt[key] / batch_size) for key in self.param_list
    }
    # When using a declearn Optimizer, negate the learning rate.
    if self._is_declearn_optim:
        lrate = self.get_learning_rate()[0]
        for key, val in self._gradients.items():
            val /= lrate
    # Finally, increment the model's iteration counter.
    self.model.n_iter_ += 1
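Putting it together, a hedged end-to-end sketch of one training step on toy data (binary SGDClassifier; as the source above notes, the recovered gradients are already scaled by the learning rate):

    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.set_init_params({"n_classes": 2, "n_features": 3})
    model.init_training()

    inputs = np.random.randn(8, 3)
    targets = np.random.randint(0, 2, size=(8, 1))
    model.train(inputs, targets)

    # Plain SGD step: updates are added, so negate the (lr-scaled) gradients.
    grads = model.get_gradients()
    model.apply_updates({key: -val for key, val in grads.items()})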
unflatten(weights_vector)

Unflattens vectorized model weights.

Parameters:

Name Type Description Default
weights_vector List[float]

Vectorized model weights to convert into a dict

required

Returns:

Type Description
Dict[str, np.ndarray]

Model dictionary

Source code in fedbiomed/common/models/_sklearn.py
def unflatten(
        self,
        weights_vector: List[float]
) -> Dict[str, np.ndarray]:
    """Unflatten vectorized model weights

    Args:
        weights_vector: Vectorized model weights to convert into a dict

    Returns:
        Model dictionary
    """

    super().unflatten(weights_vector)

    weights_vector = np.array(weights_vector)
    weights = self.get_weights()
    pointer = 0

    params = {}
    for key, w in weights.items():
        num_param = w.size
        params[key] = weights_vector[pointer: pointer + num_param].reshape(w.shape)

        pointer += num_param

    return params
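Together with flatten, this enables a round trip between the dict and vector forms; a hedged sketch:

    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.set_init_params({"n_classes": 2, "n_features": 3})

    flat = model.flatten()           # list of floats concatenating all parameter arrays
    weights = model.unflatten(flat)  # back to {"intercept_": (1,), "coef_": (1, 3)} arrays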

MLPSklearnModel

Bases: BaseSkLearnModel

BaseSkLearnModel abstract subclass for multi-layer perceptron models.

Attributes

model class-attribute
model: Union[MLPClassifier, MLPRegressor]

Functions

disable_internal_optimizer()
Source code in fedbiomed/common/models/_sklearn.py
def disable_internal_optimizer(self):
    self.model.learning_rate_init = self.default_lr_init
    self.model.learning_rate = self.default_lr
    self._is_declearn_optim = True
get_learning_rate()
Source code in fedbiomed/common/models/_sklearn.py
def get_learning_rate(self) -> List[float]:
    return [self.model.learning_rate_init]

Model

CLASS
Model(model)

Bases: Generic[_MT, _DT, _VT]

Model abstraction that wraps and handles native models.

Attributes:

Name Type Description
model Any

native model, written with frameworks supported by Fed-BioMed.

model_args Optional[Dict[str, Any]]

model arguments stored as a dictionary, providing additional arguments for building/using models. Defaults to None.

Parameters:

Name Type Description Default
model _MT

native model wrapped, of child-class-specific type.

required
Source code in fedbiomed/common/models/_model.py
def __init__(self, model: _MT):
    """Constructor of Model abstract class

    Args:
        model: native model wrapped, of child-class-specific type.
    """
    if not isinstance(model, self._model_type):
        err_msg = (
            f"{ErrorNumbers.FB622.value}: unproper 'model' input type: "
            f"expected '{self._model_type}', but 'got {type(model)}'."
        )
        logger.critical(err_msg)
        raise FedbiomedModelError(err_msg)
    self.model: Any = model
    self.model_args: Optional[Dict[str, Any]] = None

Attributes

model instance-attribute
model: Any = model
model_args instance-attribute
model_args: Optional[Dict[str, Any]] = None

Functions

apply_updates(updates)
abstractmethod

Applies updates to the model.

Parameters:

Name Type Description Default
updates Any

model updates.

required
Source code in fedbiomed/common/models/_model.py
@abstractmethod
def apply_updates(self, updates: Any):
    """Applies updates to the model.

    Args:
        updates (Any): model updates.
    """
export(filename)
abstractmethod

Export the wrapped model to a dump file.

Parameters:

Name Type Description Default
filename str

path to the file where the model will be saved.

required

!!! info "Notes": This method is designed to save the model to a local dump file for easy re-use by the same user, possibly outside of Fed-BioMed. It is not designed to produce trustworthy data dumps and is not used to exchange models and their weights as part of the federated learning process.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def export(self, filename: str) -> None:
    """Export the wrapped model to a dump file.

    Args:
        filename: path to the file where the model will be saved.

    !!! info "Notes":
        This method is designed to save the model to a local dump
        file for easy re-use by the same user, possibly outside of
        Fed-BioMed. It is not designed to produce trustworthy data
        dumps and is not used to exchange models and their weights
        as part of the federated learning process.
    """
flatten()
abstractmethod

Flattens model weights

Returns:

Type Description
List[float]

List of model weights as float.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def flatten(self) -> List[float]:
    """Flattens model weights

    Returns:
        List of model weights as float.
    """
get_gradients(as_vector=False)
abstractmethod

Return computed gradients attached to the model.

Parameters:

Name Type Description Default
as_vector bool

Whether to wrap returned gradients into a declearn Vector.

False

Returns:

Type Description
Union[Dict[str, Any], _VT]

Gradients, as a dictionary mapping parameters' names to their gradient's value, or as a declearn Vector structure wrapping such a dict.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def get_gradients(self, as_vector: bool = False) -> Union[Dict[str, Any], _VT]:
    """Return computed gradients attached to the model.

    Args:
        as_vector: Whether to wrap returned gradients into a declearn Vector.

    Returns:
        Gradients, as a dictionary mapping parameters' names to their gradient's
            value, or as a declearn Vector structure wrapping such a dict.
    """
get_weights(as_vector=False)
abstractmethod

Return a copy of the model's trainable weights.

Parameters:

Name Type Description Default
as_vector bool

Whether to wrap returned weights into a declearn Vector.

False

Returns:

Type Description
Union[Dict[str, _DT], _VT]

Model weights, as a dictionary mapping parameters' names to their value, or as a declearn Vector structure wrapping such a dict.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def get_weights(self, as_vector: bool = False) -> Union[Dict[str, _DT], _VT]:
    """Return a copy of the model's trainable weights.

    Args:
        as_vector: Whether to wrap returned weights into a declearn Vector.

    Returns:
        Model weights, as a dictionary mapping parameters' names to their
            value, or as a declearn Vector structure wrapping such a dict.
    """
init_training()
abstractmethod

Initializes parameters before model training

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def init_training(self):
    """Initializes parameters before model training"""
predict(inputs)
abstractmethod

Returns model predictions given input values

Parameters:

Name Type Description Default
inputs Any

input values.

required

Returns:

Name Type Description
Any Any

predictions.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def predict(self, inputs: Any) -> Any:
    """Returns model predictions given input values

    Args:
        inputs (Any): input values.

    Returns:
        Any: predictions.
    """
reload(filename)

Import and replace the wrapped model from a dump file.

Parameters:

Name Type Description Default
filename str

path to the file where the model has been exported.

required

!!! info "Notes": This method is designed to load the model from a local dump file, that might not be in a trustworthy format. It should therefore only be used to re-load data exported locally and not received from someone else, including other FL peers.

Raises:

Type Description
FedbiomedModelError

if the reloaded instance is of improper type.

Source code in fedbiomed/common/models/_model.py
def reload(self, filename: str) -> None:
    """Import and replace the wrapped model from a dump file.

    Args:
        filename: path to the file where the model has been exported.

    !!! info "Notes":
        This method is designed to load the model from a local dump
        file, that might not be in a trustworthy format. It should
        therefore only be used to re-load data exported locally and
        not received from someone else, including other FL peers.

    Raises:
        FedbiomedModelError: if the reloaded instance is of improper type.
    """
    model = self._reload(filename)
    if not isinstance(model, self._model_type):
        err_msg = (
            f"{ErrorNumbers.FB622.value}: unproper type for imported model"
            f": expected '{self._model_type}', but 'got {type(model)}'."
        )
        logger.critical(err_msg)
        raise FedbiomedModelError(err_msg)
    self.model = model
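A hedged round-trip sketch with the scikit-learn wrapper (the file name is arbitrary; _reload is the subclass-specific import hook backing this method):

    from sklearn.linear_model import SGDClassifier
    from fedbiomed.common.models import SkLearnModel

    model = SkLearnModel(SGDClassifier)
    model.export("model.joblib")  # local, pickle-based dump (see warnings above)
    model.reload("model.joblib")  # re-imports and replaces the wrapped estimator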
set_weights(weights)
abstractmethod

Assign new values to the model's trainable weights.

Parameters:

Name Type Description Default
weights Union[Dict[str, _DT], _VT]

Model weights, as a dict mapping parameters' names to their value, or as a declearn Vector structure wrapping such a dict.

required
Source code in fedbiomed/common/models/_model.py
@abstractmethod
def set_weights(self, weights: Union[Dict[str, _DT], _VT]) -> None:
    """Assign new values to the model's trainable weights.

    Args:
        weights: Model weights, as a dict mapping parameters' names to their
            value, or as a declearn Vector structure wrapping such a dict.
    """
train(inputs, targets, **kwargs)
abstractmethod

Trains model given inputs and targets data

Warning

Please run the init_training method before running the train method, so as to initialize parameters needed for model training.

Warning

This function may not update weights. You may need to call apply_updates to apply updates to the model.

Parameters:

Name Type Description Default
inputs Any

input (training) data.

required
targets Any

target values.

required
Source code in fedbiomed/common/models/_model.py
@abstractmethod
def train(self, inputs: Any, targets: Any, **kwargs) -> None:
    """Trains model given inputs and targets data

    !!! warning "Warning"
        Please run `init_training` method before running `train` method,
        so as to initialize parameters needed for model training.

    !!! warning "Warning"
        This function may not update weights. You may need to call `apply_updates`
        to apply updates to the model

    Args:
        inputs (Any): input (training) data.
        targets (Any): target values.
    """
unflatten(weights_vector)
abstractmethod

Reverts flattened model weights back to model-dict form.

Parameters:

Name Type Description Default
weights_vector List[float]

Vectorized model weights to convert into a dict

required

Returns:

Type Description
None

None in this base implementation, which only validates the input; subclasses return the model dictionary.

Source code in fedbiomed/common/models/_model.py
@abstractmethod
def unflatten(
        self,
        weights_vector: List[float]
) -> None:
    """Revert flatten model weights back model-dict form.

    Args:
        weights_vector: Vectorized model weights to convert into a dict

    Returns:
        Model dictionary
    """

    if not isinstance(weights_vector, list) or not all([isinstance(w, float) for w in weights_vector]):
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622} `weights_vector should be 1D list of float containing flatten model parameters`"
        )

SGDClassifierSKLearnModel

Bases: SGDSkLearnModel

BaseSkLearnModel subclass for SGDClassifier models.

Attributes

is_classification class-attribute
is_classification = True
model class-attribute
model: SGDClassifier

Functions

set_init_params(model_args)

Initialize the model's trainable parameters.

Source code in fedbiomed/common/models/_sklearn.py
def set_init_params(self, model_args: Dict[str, Any]) -> None:
    """Initialize the model's trainable parameters."""
    # Set up zero-valued start weights, for binary or multiclass classification.
    n_classes = model_args["n_classes"]
    if n_classes == 2:
        init_params = {
            "intercept_": np.zeros((1,)),
            "coef_": np.zeros((1, model_args["n_features"])),
        }
    else:
        init_params = {
            "intercept_": np.zeros((n_classes,)),
            "coef_": np.zeros((n_classes, model_args["n_features"])),
        }
    # Assign these initialization parameters and retain their names.
    self.param_list = list(init_params)
    for key, val in init_params.items():
        setattr(self.model, key, val)
    # Also initialize the "classes_" slot with unique predictable labels.
    # FIXME: this assumes target values are integers in range(n_classes).
    setattr(self.model, "classes_", np.arange(n_classes))

SGDRegressorSKLearnModel

Bases: SGDSkLearnModel

BaseSkLearnModel subclass for SGDRegressor models.

Attributes

is_classification class-attribute
is_classification = False
model class-attribute
model: SGDRegressor

Functions

set_init_params(model_args)

Initialize the model's trainable parameters.

Source code in fedbiomed/common/models/_sklearn.py
def set_init_params(self, model_args: Dict[str, Any]):
    """Initialize the model's trainable parameters."""
    init_params = {
        "intercept_": np.array([0.0]),
        "coef_": np.array([0.0] * model_args["n_features"]),
    }
    self.param_list = list(init_params)
    for key, val in init_params.items():
        setattr(self.model, key, val)

SkLearnModel

CLASS
SkLearnModel(model)

Sklearn model builder.

It wraps one of the Fed-BioMed BaseSkLearnModel children, built by passing a BaseEstimator (https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html) class to the constructor, as shown below.

Usage

    from sklearn.linear_model import SGDClassifier
    model = SkLearnModel(SGDClassifier)
    model.set_weights(some_weights)
    type(model.model)
    # Output: <class 'sklearn.linear_model._stochastic_gradient.SGDClassifier'>

Attributes:

Name Type Description
_instance BaseSkLearnModel

instance of BaseSkLearnModel

Parameters:

Name Type Description Default
model Type[BaseEstimator]

non-initialized BaseEstimator object

required

Raises:

Type Description
FedbiomedModelError

raised if model does not belong to the implemented models.

FedbiomedModelError

raised if the __name__ attribute is missing from the object. This may happen when passing an instantiated object instead of the class itself (e.g. an SGDClassifier() instance instead of the SGDClassifier class)

Source code in fedbiomed/common/models/_sklearn.py
def __init__(self, model: Type[BaseEstimator]):
    """Constructor of the model builder.

    Args:
        model: non-initialized [BaseEstimator][sklearn.base.BaseEstimator] object

    Raises:
        FedbiomedModelError: raised if model does not belong to the implemented models.
        FedbiomedModelError: raised if the `__name__` attribute is missing from the object. This may happen
            when passing an instantiated object instead of the class itself (e.g. an
            `SGDClassifier()` instance instead of the `SGDClassifier` class)
    """
    if not isinstance(model, type):
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}: 'SkLearnModel' received a '{type(model)}' instance as 'model' "
            "input while it was expecting a scikit-learn BaseEstimator subclass constructor."
        )
    if not issubclass(model, BaseEstimator):
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}: 'SkLearnModel' received a 'model' class that is not "
            f"a scikit-learn BaseEstimator subclass: '{model}'."
        )
    if model.__name__ not in SKLEARN_MODELS:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}: 'SkLearnModel' received '{model}' as 'model' class, "
            f"support for which has not yet been implemented in Fed-BioMed."
        )
    self._instance: BaseSkLearnModel = SKLEARN_MODELS[model.__name__](model())

TorchModel

CLASS
TorchModel(model)

Bases: Model

PyTorch model wrapper that eases the handling of a PyTorch model

Attributes:

Name Type Description
model torch.nn.Module

Wrapped PyTorch model.

init_params Dict[str, torch.Tensor]

Model initial parameters, set when calling init_training

Source code in fedbiomed/common/models/_torch.py
def __init__(self, model: torch.nn.Module) -> None:
    """Instantiates the wrapper over a torch Module instance."""
    super().__init__(model)
    self.init_params: Dict[str, torch.Tensor] = {}

Attributes

init_params instance-attribute
init_params: Dict[str, torch.Tensor] = {}
model class-attribute
model: torch.nn.Module

Functions

add_corrections_to_gradients(corrections)

Adds values to the gradients attached to the model

Parameters:

Name Type Description Default
corrections Union[TorchVector, Dict[str, torch.Tensor]]

corrections to be added to model's gradients

required
Source code in fedbiomed/common/models/_torch.py
def add_corrections_to_gradients(
    self,
    corrections: Union[TorchVector, Dict[str, torch.Tensor]],
) -> None:
    """Adds values to attached gradients in the model

    Args:
        corrections: corrections to be added to model's gradients
    """
    iterator = self._get_iterator_model_params(corrections)
    for name, update in iterator:
        param = self.model.get_parameter(name)
        if param.grad is not None:
            param.grad.add_(update.to(param.grad.device))
apply_updates(updates)

Apply incoming updates to the wrapped model's parameters.

Parameters:

Name Type Description Default
updates Union[TorchVector, Dict[str, torch.Tensor]]

model updates to be added to the model.

required
Source code in fedbiomed/common/models/_torch.py
def apply_updates(
    self,
    updates: Union[TorchVector, Dict[str, torch.Tensor]],
) -> None:
    """Apply incoming updates to the wrapped model's parameters.

    Args:
        updates: model updates to be added to the model.
    """
    iterator = self._get_iterator_model_params(updates)
    with torch.no_grad():
        for name, update in iterator:
            param = self.model.get_parameter(name)
            param.add_(update.to(param.device))
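A hedged sketch of a manual gradient-descent step combining this method with get_gradients (the loss is illustrative only):

    import torch
    from fedbiomed.common.models import TorchModel

    model = TorchModel(torch.nn.Linear(4, 2))
    model.init_training()

    # Forward/backward pass to populate the parameters' gradients.
    loss = model.model(torch.randn(8, 4)).sum()
    loss.backward()

    # Updates are added to the parameters: param <- param + (-lr * grad).
    lr = 0.1
    model.apply_updates({name: -lr * grad for name, grad in model.get_gradients().items()})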
export(filename)

Export the wrapped model to a dump file.

Parameters:

Name Type Description Default
filename str

path to the file where the model will be saved.

required

!!! info "Notes": This method is designed to save the model to a local dump file for easy re-use by the same user, possibly outside of Fed-BioMed. It is not designed to produce trustworthy data dumps and is not used to exchange models and their weights as part of the federated learning process.

!!! warning "Warning": This method uses torch.save, which relies on pickle and is therefore hard to trust by third-party loading methods.

Source code in fedbiomed/common/models/_torch.py
def export(self, filename: str) -> None:
    """Export the wrapped model to a dump file.

    Args:
        filename: path to the file where the model will be saved.

    !!! info "Notes":
        This method is designed to save the model to a local dump
        file for easy re-use by the same user, possibly outside of
        Fed-BioMed. It is not designed to produce trustworthy data
        dumps and is not used to exchange models and their weights
        as part of the federated learning process.

    !!! warning "Warning":
        This method uses `torch.save`, which relies on pickle and
        is therefore hard to trust by third-party loading methods.
    """
    torch.save(self.model, filename)
flatten()

Gets weights as a flattened vector.

Returns:

Name Type Description
to_list List[float]

Flattened model weights, as a list of float.

Source code in fedbiomed/common/models/_torch.py
def flatten(self) -> List[float]:
    """Gets weights as flatten vector

    Returns:
        to_list: Convert np.ndarray to a list if it is True.
    """

    params: List[float] = torch.nn.utils.parameters_to_vector(
        self.model.parameters()
    ).tolist()

    return params
get_gradients(as_vector=False)

Return the gradients attached to the model, opt. as a declearn TorchVector.

Parameters:

Name Type Description Default
as_vector bool

Whether to wrap returned gradients into a declearn Vector.

False

Returns:

Type Description
Union[Dict[str, torch.Tensor], TorchVector]

Gradients, as a dictionary mapping parameters' names to their gradient's torch tensor, or as a declearn TorchVector wrapping such a dict.

Source code in fedbiomed/common/models/_torch.py
def get_gradients(
    self,
    as_vector: bool = False,
) -> Union[Dict[str, torch.Tensor], TorchVector]:
    """Return the gradients attached to the model, opt. as a declearn TorchVector.

    Args:
        as_vector: Whether to wrap returned gradients into a declearn Vector.

    Returns:
        Gradients, as a dictionary mapping parameters' names to their gradient's
            torch tensor, or as a declearn TorchVector wrapping such a dict.
    """
    gradients = {
        name: param.grad.detach().clone()
        for name, param in self.model.named_parameters()
        if (param.requires_grad and param.grad is not None)
    }
    if len(gradients) < len(list(self.model.named_parameters())):
        # FIXME: this will be triggered when having some frozen weights even if training was properly conducted
        logger.warning(
            "Warning: can not retrieve all gradients from the model. Are you sure you have "
            "trained the model beforehand?"
        )
    if as_vector:
        return TorchVector(gradients)
    return gradients
get_weights(as_vector=False, only_trainable=False)

Return the model's parameters, optionally as a declearn TorchVector.

Parameters:

Name Type Description Default
only_trainable bool

whether to gather weights only from trainable layers (i.e. non-frozen layers) or from all layers (trainable and frozen). Defaults to False (all layers).

False
as_vector bool

Whether to wrap returned weights into a declearn Vector.

False

Returns:

Type Description
Union[Dict[str, torch.Tensor], TorchVector]

Model weights, as a dictionary mapping parameters' names to their torch tensor, or as a declearn TorchVector wrapping such a dict.

Source code in fedbiomed/common/models/_torch.py
def get_weights(
    self,
    as_vector: bool = False,
    only_trainable: bool = False,
) -> Union[Dict[str, torch.Tensor], TorchVector]:
    """Return the model's parameters, optionally as a declearn TorchVector.

    Args:
        only_trainable (bool, optional): whether to gather weights only from trainable layers (ie
            non-frozen layers) or from all layers (trainable and frozen). Defaults to False
            (all layers).
        as_vector: Whether to wrap returned weights into a declearn Vector.

    Returns:
        Model weights, as a dictionary mapping parameters' names to their
            torch tensor, or as a declearn TorchVector wrapping such a dict.
    """
    parameters = {
        name: param.detach().clone()
        for name, param in self.model.named_parameters()
        if param.requires_grad or not only_trainable
    }
    if as_vector:
        return TorchVector(parameters)
    return parameters
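A hedged sketch of the only_trainable switch, with one frozen parameter:

    import torch
    from fedbiomed.common.models import TorchModel

    model = TorchModel(torch.nn.Linear(4, 2))
    model.model.bias.requires_grad = False  # freeze the bias

    model.get_weights()                     # {"weight": ..., "bias": ...}
    model.get_weights(only_trainable=True)  # {"weight": ...} only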
init_training()

Initializes and sets attributes before the training.

Initializes init_params as a copy of the initial parameters of the model

Source code in fedbiomed/common/models/_torch.py
def init_training(self) -> None:
    """Initializes and sets attributes before the training.

    Initializes `init_params` as a copy of the initial parameters of the model
    """
    # initial aggregated model parameters
    self.init_params = {
        key: param.data.detach().clone()
        for key, param in self.model.named_parameters()
    }
    self.model.train()  # pytorch switch for training
    self.model.zero_grad()
predict(inputs)

Computes prediction given input data.

Parameters:

Name Type Description Default
inputs torch.Tensor

input data

required

Returns:

Type Description
np.ndarray

Model predictions returned as a numpy array

Source code in fedbiomed/common/models/_torch.py
def predict(
    self,
    inputs: torch.Tensor,
) -> np.ndarray:
    """Computes prediction given input data.

    Args:
        inputs: input data

    Returns:
        Model predictions returned as a numpy array
    """
    self.model.eval()  # pytorch switch for model inference-mode
    with torch.no_grad():
        pred = self.model(inputs)
    return pred.cpu().numpy()
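A hedged example (the wrapped module is switched to eval mode and the output detached to numpy):

    import torch
    from fedbiomed.common.models import TorchModel

    model = TorchModel(torch.nn.Linear(4, 2))
    preds = model.predict(torch.randn(8, 4))  # np.ndarray of shape (8, 2)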
send_to_device(device)

Sends model to device

Parameters:

Name Type Description Default
device torch.device

device set for using GPU or CPU.

required
Source code in fedbiomed/common/models/_torch.py
def send_to_device(
    self,
    device: torch.device,
) -> None:
    """Sends model to device

    Args:
        device: device set for using GPU or CPU.
    """
    self.model.to(device)
set_weights(weights)

Sets model weights.

Parameters:

Name Type Description Default
weights Union[Dict[str, torch.Tensor], TorchVector]

Model weights, as a dict mapping parameters' names to their torch tensor, or as a declearn TorchVector wrapping such a dict.

required
Source code in fedbiomed/common/models/_torch.py
def set_weights(
    self,
    weights: Union[Dict[str, torch.Tensor], TorchVector],
) -> None:
    """Sets model weights.

    Args:
        weights: Model weights, as a dict mapping parameters' names to their
            torch tensor, or as a declearn TorchVector wrapping such a dict.
    """
    state_dict = dict(self._get_iterator_model_params(weights))
    incompatible = self.model.load_state_dict(state_dict, strict=False)
    if incompatible.missing_keys:
        logger.warning(
            "'TorchModel.set_weights' received inputs that did not cover all"
            "model parameters; missing weights: %s",
            incompatible.missing_keys
        )
    if incompatible.unexpected_keys:
        logger.warning(
            "'TorchModel.set_weights' received inputs with unexpected names: %s",
            incompatible.unexpected_keys
        )
train(inputs, targets, **kwargs)
Source code in fedbiomed/common/models/_torch.py
def train(
    self,
    inputs: torch.Tensor,
    targets: torch.Tensor,
    **kwargs,
) -> None:
    # TODO: should we pass loss function here? and do the backward prop?
    if not self.init_params:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value}. Training has not been initialized, please initialize it beforehand"
        )
unflatten(weights_vector)

Unflattens vectorized model weights using vector_to_parameters.

This method does not modify the current model's parameters: the conversion is performed on a deep copy of the model.

Parameters:

Name Type Description Default
weights_vector List[float]

Vectorized model weights to convert into a dict

required

Returns:

Type Description
Dict[str, torch.Tensor]

Model dictionary

Source code in fedbiomed/common/models/_torch.py
def unflatten(
        self,
        weights_vector: List[float]
) -> Dict[str, torch.Tensor]:
    """Unflatten vectorized model weights using [`vector_to_parameters`][torch.nn.utils.vector_to_parameters]

    This method does not modify the current model's parameters: the conversion is performed on a deep copy of the model.

    Args:
        weights_vector: Vectorized model weights to convert into a dict

    Returns:
        Model dictionary
    """

    super().unflatten(weights_vector)

    # Copy model to make sure global model parameters won't be overwritten
    model = copy.deepcopy(self.model)
    vector = torch.as_tensor(weights_vector).type(torch.DoubleTensor)

    # Following operation updates model parameters of copied model object
    try:
        torch.nn.utils.vector_to_parameters(vector, model.parameters())
    except TypeError as e:
        raise FedbiomedModelError(
            f"{ErrorNumbers.FB622.value} Can not unflatten model parameters. {e}"
        ) from e

    return TorchModel(model).get_weights()
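
A hedged round-trip sketch between the flat and dict forms:

    import torch
    from fedbiomed.common.models import TorchModel

    model = TorchModel(torch.nn.Linear(4, 2))

    flat = model.flatten()           # list of 10 floats (8 weights + 2 biases)
    weights = model.unflatten(flat)  # {"weight": (2, 4) tensor, "bias": (2,) tensor}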