DifferentialPrivacyServerSideFixedClipping¶
- class DifferentialPrivacyServerSideFixedClipping(strategy: Strategy, noise_multiplier: float, clipping_norm: float, num_sampled_clients: int)[source]¶
Bases: DifferentialPrivacyFixedClippingBase
Strategy wrapper for central DP with server-side fixed clipping.
- Parameters:
strategy (Strategy) – The strategy to which DP functionalities will be added by this wrapper.
noise_multiplier (float) – The noise multiplier for the Gaussian mechanism for model updates. A value of 1.0 or higher is recommended for strong privacy.
clipping_norm (float) – The maximum L2 norm to which each client model update is clipped.
num_sampled_clients (int) – The number of clients sampled in each round (the sketch after this list shows how these three values combine).
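Together, these three values determine the scale of the Gaussian noise added to the aggregated model update. The snippet below is a rough illustration of the standard fixed-clipping formulation; the exact constants used by the library may differ.

    # Hypothetical values, for illustration only
    noise_multiplier, clipping_norm, num_sampled_clients = 1.0, 10.0, 20
    # Standard deviation of the Gaussian noise added to the aggregated update
    # (assumption: standard central-DP Gaussian mechanism with fixed clipping)
    noise_stddev = noise_multiplier * clipping_norm / num_sampled_clients  # 0.5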
Examples
Create a strategy:

    strategy = fl.serverapp.FedAvg( ... )

Wrap the strategy with the DifferentialPrivacyServerSideFixedClipping wrapper:

    dp_strategy = DifferentialPrivacyServerSideFixedClipping(
        strategy,
        cfg.noise_multiplier,
        cfg.clipping_norm,
        cfg.num_sampled_clients,
    )
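The wrapped strategy is then used exactly like the inner strategy. For example, it can be started from within a ServerApp; this is a sketch that assumes grid and initial_arrays are available from the surrounding ServerApp context and that cfg.num_rounds is a hypothetical run config value:

    result = dp_strategy.start(
        grid=grid,
        initial_arrays=initial_arrays,
        num_rounds=cfg.num_rounds,
    )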
Methods
aggregate_evaluate(server_round, replies) – Aggregate MetricRecords in the received Messages.
aggregate_train(server_round, replies) – Aggregate ArrayRecords and MetricRecords in the received Messages.
configure_evaluate(server_round, arrays, ...) – Configure the next round of federated evaluation.
configure_train(server_round, arrays, ...) – Configure the next round of training.
start(grid, initial_arrays[, num_rounds, ...]) – Execute the federated learning strategy.
summary() – Log summary configuration of the strategy.
- aggregate_evaluate(server_round: int, replies: Iterable[Message]) MetricRecord | None ¶
Aggregate MetricRecords in the received Messages.
- aggregate_train(server_round: int, replies: Iterable[Message]) tuple[ArrayRecord | None, MetricRecord | None] [source]¶
Aggregate ArrayRecords and MetricRecords in the received Messages.
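For intuition, the sketch below mimics what server-side fixed clipping involves at this stage: each received update is clipped to the clipping norm, the clipped updates are averaged, and Gaussian noise is added to the result. It is a simplified NumPy illustration under those assumptions, not the method's actual implementation, which operates on the ArrayRecords carried by the received Messages.

    import numpy as np

    def clipped_noisy_average(updates, clipping_norm, noise_multiplier, num_sampled_clients):
        """Illustrative only: clip, average, and noise a list of flat update vectors."""
        clipped = []
        for update in updates:
            norm = np.linalg.norm(update)
            # Scale the update down so its L2 norm does not exceed the clipping norm
            clipped.append(update * min(1.0, clipping_norm / max(norm, 1e-12)))
        aggregate = np.mean(clipped, axis=0)
        # Noise scale of the standard fixed-clipping Gaussian mechanism (assumption)
        stddev = noise_multiplier * clipping_norm / num_sampled_clients
        return aggregate + np.random.normal(0.0, stddev, size=aggregate.shape)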
- configure_evaluate(server_round: int, arrays: ArrayRecord, config: ConfigRecord, grid: Grid) Iterable[Message] ¶
Configure the next round of federated evaluation.
- configure_train(server_round: int, arrays: ArrayRecord, config: ConfigRecord, grid: Grid) Iterable[Message] [source]¶
Configure the next round of training.
- start(grid: Grid, initial_arrays: ArrayRecord, num_rounds: int = 3, timeout: float = 3600, train_config: ConfigRecord | None = None, evaluate_config: ConfigRecord | None = None, evaluate_fn: Callable[[int, ArrayRecord], MetricRecord] | None = None) Result ¶
Execute the federated learning strategy.
Runs the complete federated learning workflow for the specified number of rounds, including training, evaluation, and optional centralized evaluation.
- Parameters:
grid (Grid) – The Grid instance used to send/receive Messages from nodes executing a ClientApp.
initial_arrays (ArrayRecord) – Initial model parameters (arrays) to be used for federated learning.
num_rounds (int (default: 3)) – Number of federated learning rounds to execute.
timeout (float (default: 3600)) – Timeout in seconds for waiting for node responses.
train_config (ConfigRecord, optional) – Configuration to be sent to nodes during training rounds. If unset, an empty ConfigRecord will be used.
evaluate_config (ConfigRecord, optional) – Configuration to be sent to nodes during evaluation rounds. If unset, an empty ConfigRecord will be used.
evaluate_fn (Callable[[int, ArrayRecord], MetricRecord], optional) – Optional function for centralized evaluation of the global model. It takes the server round number and an ArrayRecord and returns a MetricRecord. If provided, it is called before the first round and after each round. Defaults to None.
- Returns:
A Result containing the final model arrays together with training metrics, federated evaluation metrics, and centralized evaluation metrics (if evaluate_fn was provided) from all rounds.
- Return type:
Result
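For reference, a centralized evaluation function passed as evaluate_fn could look like the sketch below. Only the (server_round, arrays) -> MetricRecord signature comes from the documentation above; evaluate_global_model is a hypothetical helper, and the record types are assumed to be importable from flwr.common.

    from flwr.common import ArrayRecord, MetricRecord

    def global_evaluate(server_round: int, arrays: ArrayRecord) -> MetricRecord:
        # Hypothetical helper: rebuild the model from `arrays` and evaluate it
        # on a centrally held test set.
        loss, accuracy = evaluate_global_model(arrays)
        return MetricRecord({"loss": loss, "accuracy": accuracy})

Such a function would then be passed to start as evaluate_fn=global_evaluate.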
- summary() None ¶
Log summary configuration of the strategy.