DifferentialPrivacyClientSideFixedClipping

class DifferentialPrivacyClientSideFixedClipping(strategy: Strategy, noise_multiplier: float, clipping_norm: float, num_sampled_clients: int)[source]

๊ธฐ๋ฐ˜ ํด๋ž˜์Šค: Strategy

Strategy wrapper for central DP with client-side fixed clipping.

Use the fixedclipping_mod modifier at the client side.

In comparison to DifferentialPrivacyServerSideFixedClipping, which performs clipping on the server side, DifferentialPrivacyClientSideFixedClipping expects clipping to happen on the client side, usually via the built-in fixedclipping_mod.
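Client-side fixed clipping bounds the L2 norm of each client's model update before it leaves the device, so the server can calibrate noise to a known sensitivity. A minimal NumPy sketch of the clipping step (the function name and shapes are illustrative, not Flower's API):

```python
import numpy as np


def clip_update(update, clipping_norm):
    """Scale a list of update arrays so their joint L2 norm is at most clipping_norm."""
    flat = np.concatenate([layer.ravel() for layer in update])
    norm = float(np.linalg.norm(flat))
    # Standard fixed-clipping factor: leave small updates untouched,
    # shrink large ones onto the ball of radius clipping_norm.
    scale = min(1.0, clipping_norm / max(norm, 1e-12))
    return [layer * scale for layer in update]
```

The built-in fixedclipping_mod conceptually applies this kind of transformation to the parameters a client returns from training.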

Parameters:
  • strategy (Strategy) – The strategy to which DP functionalities will be added by this wrapper.

  • noise_multiplier (float) – The noise multiplier for the Gaussian mechanism for model updates. A value of 1.0 or higher is recommended for strong privacy.

  • clipping_norm (float) – The value of the clipping norm.

  • num_sampled_clients (int) – The number of clients that are sampled in each round.

Example

Create a strategy:

>>> strategy = fl.server.strategy.FedAvg(...)

Wrap the strategy with the DifferentialPrivacyClientSideFixedClipping wrapper:

>>> dp_strategy = DifferentialPrivacyClientSideFixedClipping(
>>>     strategy, cfg.noise_multiplier, cfg.clipping_norm, cfg.num_sampled_clients
>>> )

On the client, add the fixedclipping_mod to the client-side mods:

>>> app = fl.client.ClientApp(
>>>     client_fn=client_fn, mods=[fixedclipping_mod]
>>> )

Methods

aggregate_evaluate(server_round, results, ...)

Aggregate evaluation losses using the given strategy.

aggregate_fit(server_round, results, failures)

Add noise to the aggregated parameters.

configure_evaluate(server_round, parameters, ...)

Configure the next round of evaluation.

configure_fit(server_round, parameters, ...)

Configure the next round of training.

evaluate(server_round, parameters)

Evaluate model parameters using an evaluation function from the strategy.

initialize_parameters(client_manager)

Initialize global model parameters using given strategy.

aggregate_evaluate(server_round: int, results: list[tuple[ClientProxy, EvaluateRes]], failures: list[tuple[ClientProxy, EvaluateRes] | BaseException]) → tuple[float | None, dict[str, bool | bytes | float | int | str]][source]

Aggregate evaluation losses using the given strategy.

aggregate_fit(server_round: int, results: list[tuple[ClientProxy, FitRes]], failures: list[tuple[ClientProxy, FitRes] | BaseException]) → tuple[Parameters | None, dict[str, bool | bytes | float | int | str]][source]

Add noise to the aggregated parameters.
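During aggregation, the wrapper perturbs the averaged parameters with Gaussian noise calibrated to the clipping norm. A hedged NumPy sketch of that step, assuming the common central-DP calibration stdv = noise_multiplier * clipping_norm / num_sampled_clients (the function name is illustrative, not Flower's API):

```python
import numpy as np


def add_gaussian_noise(aggregated, noise_multiplier, clipping_norm,
                       num_sampled_clients, rng=None):
    """Add per-coordinate Gaussian noise to a list of aggregated arrays."""
    rng = np.random.default_rng() if rng is None else rng
    # With client-side fixed clipping, each update's L2 norm is bounded by
    # clipping_norm, so after averaging over num_sampled_clients clients the
    # per-coordinate noise scale is assumed here to be:
    stdv = noise_multiplier * clipping_norm / num_sampled_clients
    return [layer + rng.normal(0.0, stdv, size=layer.shape) for layer in aggregated]
```

With noise_multiplier set to 0.0 the parameters pass through unchanged, which is a quick way to sanity-check a pipeline before enabling privacy noise.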

configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) → list[tuple[ClientProxy, EvaluateIns]][source]

Configure the next round of evaluation.

configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) → list[tuple[ClientProxy, FitIns]][source]

Configure the next round of training.

evaluate(server_round: int, parameters: Parameters) → tuple[float, dict[str, bool | bytes | float | int | str]] | None[source]

Evaluate model parameters using an evaluation function from the strategy.

initialize_parameters(client_manager: ClientManager) → Parameters | None[source]

Initialize global model parameters using given strategy.