DifferentialPrivacyServerSideFixedClipping

class DifferentialPrivacyServerSideFixedClipping(strategy: Strategy, noise_multiplier: float, clipping_norm: float, num_sampled_clients: int)

Bases: Strategy

Strategy wrapper for central DP with server-side fixed clipping.

Parameters:
  • strategy (Strategy) – The strategy to which DP functionalities will be added by this wrapper.

  • noise_multiplier (float) – The noise multiplier for the Gaussian mechanism for model updates. A value of 1.0 or higher is recommended for strong privacy.

  • clipping_norm (float) – The value of the clipping norm.

  • num_sampled_clients (int) – The number of clients that are sampled on each round.

Examples

Create a strategy:

>>> strategy = fl.server.strategy.FedAvg( ... )

Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping wrapper:

>>> dp_strategy = DifferentialPrivacyServerSideFixedClipping(
>>>     strategy, cfg.noise_multiplier, cfg.clipping_norm, cfg.num_sampled_clients
>>> )
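
The wrapped strategy is then used in place of the original one. As a minimal sketch, it can be passed to the Flower server entry point like any other strategy (the server address and number of rounds below are placeholder values):

>>> fl.server.start_server(
>>>     server_address="0.0.0.0:8080",
>>>     config=fl.server.ServerConfig(num_rounds=3),
>>>     strategy=dp_strategy,
>>> )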

Methods

aggregate_evaluate(server_round, results, ...)
    Aggregate evaluation losses using the given strategy.

aggregate_fit(server_round, results, failures)
    Compute the updates, clip, and pass them for aggregation.

configure_evaluate(server_round, parameters, ...)
    Configure the next round of evaluation.

configure_fit(server_round, parameters, ...)
    Configure the next round of training.

evaluate(server_round, parameters)
    Evaluate model parameters using an evaluation function from the strategy.

initialize_parameters(client_manager)
    Initialize global model parameters using the given strategy.

aggregate_evaluate(server_round: int, results: List[Tuple[ClientProxy, EvaluateRes]], failures: List[Tuple[ClientProxy, EvaluateRes] | BaseException]) → Tuple[float | None, Dict[str, bool | bytes | float | int | str]]

Aggregate evaluation losses using the given strategy.

aggregate_fit(server_round: int, results: List[Tuple[ClientProxy, FitRes]], failures: List[Tuple[ClientProxy, FitRes] | BaseException]) → Tuple[Parameters | None, Dict[str, bool | bytes | float | int | str]]

Compute the updates, clip, and pass them for aggregation.

Afterward, add noise to the aggregated parameters.
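
Conceptually, this amounts to the following NumPy sketch: each client update is clipped to the fixed L2 bound, the clipped updates are averaged, and Gaussian noise is added to the result. The helper name and the noise calibration (noise_multiplier * clipping_norm / num_sampled_clients) are illustrative assumptions for central DP with fixed clipping, not the wrapper's internal API.

>>> import numpy as np
>>>
>>> def clip_average_and_noise(updates, clipping_norm, noise_multiplier, num_sampled_clients, rng):
>>>     # Illustrative sketch only, not the wrapper's implementation.
>>>     # Clip each flattened client update to the fixed L2 norm bound.
>>>     clipped = [u * min(1.0, clipping_norm / max(np.linalg.norm(u), 1e-12)) for u in updates]
>>>     # Average the clipped updates across the sampled clients.
>>>     aggregated = np.mean(clipped, axis=0)
>>>     # Add Gaussian noise whose scale is calibrated to the clipping norm.
>>>     stddev = noise_multiplier * clipping_norm / num_sampled_clients
>>>     return aggregated + rng.normal(0.0, stddev, size=aggregated.shape)
>>>
>>> rng = np.random.default_rng(0)
>>> updates = [rng.normal(size=10) for _ in range(4)]
>>> noised = clip_average_and_noise(updates, clipping_norm=1.0, noise_multiplier=1.0, num_sampled_clients=4, rng=rng)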

configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, EvaluateIns]]

Configure the next round of evaluation.

configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) → List[Tuple[ClientProxy, FitIns]]

Configure the next round of training.

evaluate(server_round: int, parameters: Parameters) → Tuple[float, Dict[str, bool | bytes | float | int | str]] | None

Evaluate model parameters using an evaluation function from the strategy.

initialize_parameters(client_manager: ClientManager) → Parameters | None

Initialize global model parameters using the given strategy.