FedXgbBagging
- class FedXgbBagging(fraction_train: float = 1.0, fraction_evaluate: float = 1.0, min_train_nodes: int = 2, min_evaluate_nodes: int = 2, min_available_nodes: int = 2, weighted_by_key: str = 'num-examples', arrayrecord_key: str = 'arrays', configrecord_key: str = 'config', train_metrics_aggr_fn: Callable[[list[RecordDict], str], MetricRecord] | None = None, evaluate_metrics_aggr_fn: Callable[[list[RecordDict], str], MetricRecord] | None = None)
Bases:
FedAvg
Configurable FedXgbBagging strategy implementation.
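As a quick illustration, the strategy is constructed with the keyword arguments shown in the signature above. The sketch below is not part of this reference; the import path is an assumption and may differ between Flower versions.

    # Sketch: constructing the strategy with explicit sampling fractions and node minimums.
    # The import path below is an assumption; adjust it to your installed Flower version.
    from flwr.serverapp.strategy import FedXgbBagging

    strategy = FedXgbBagging(
        fraction_train=1.0,     # sample every connected node for training
        fraction_evaluate=1.0,  # sample every connected node for evaluation
        min_train_nodes=2,      # require at least 2 nodes before a training round starts
        min_evaluate_nodes=2,
        min_available_nodes=2,
    )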
Methods

- aggregate_evaluate(server_round, replies): Aggregate MetricRecords in the received Messages.
- aggregate_train(server_round, replies): Aggregate ArrayRecords and MetricRecords in the received Messages.
- configure_evaluate(server_round, arrays, ...): Configure the next round of federated evaluation.
- configure_train(server_round, arrays, ...): Configure the next round of federated training.
- start(grid, initial_arrays[, num_rounds, ...]): Execute the federated learning strategy.
- summary(): Log summary configuration of the strategy.

Attributes

- current_bst
- aggregate_evaluate(server_round: int, replies: Iterable[Message]) → MetricRecord | None
Aggregate MetricRecords in the received Messages.
- aggregate_train(server_round: int, replies: Iterable[Message]) → tuple[ArrayRecord | None, MetricRecord | None]
Aggregate ArrayRecords and MetricRecords in the received Messages.
- configure_evaluate(server_round: int, arrays: ArrayRecord, config: ConfigRecord, grid: Grid) → Iterable[Message]
Configure the next round of federated evaluation.
- configure_train(server_round: int, arrays: ArrayRecord, config: ConfigRecord, grid: Grid) → Iterable[Message]
Configure the next round of federated training.
- start(grid: Grid, initial_arrays: ArrayRecord, num_rounds: int = 3, timeout: float = 3600, train_config: ConfigRecord | None = None, evaluate_config: ConfigRecord | None = None, evaluate_fn: Callable[[int, ArrayRecord], MetricRecord | None] | None = None) → Result
Execute the federated learning strategy.
Runs the complete federated learning workflow for the specified number of rounds, including training, evaluation, and optional centralized evaluation.
- Parameters:
grid (Grid) – The Grid instance used to send/receive Messages to/from nodes executing a ClientApp.
initial_arrays (ArrayRecord) – Initial model parameters (arrays) to be used for federated learning.
num_rounds (int (default: 3)) – Number of federated learning rounds to execute.
timeout (float (default: 3600)) – Timeout in seconds when waiting for node responses.
train_config (ConfigRecord, optional) – Configuration sent to nodes during training rounds. If unset, an empty ConfigRecord is used.
evaluate_config (ConfigRecord, optional) – Configuration sent to nodes during evaluation rounds. If unset, an empty ConfigRecord is used.
evaluate_fn (Callable[[int, ArrayRecord], Optional[MetricRecord]], optional) – Optional function for centralized evaluation of the global model. It takes the server round number and an ArrayRecord, and returns a MetricRecord or None. If provided, it is called before the first round and after each round. Defaults to None.
- Returns:
A Result containing the final model arrays, along with the training metrics, federated evaluation metrics, and centralized evaluation metrics (if an evaluate_fn was provided) from all rounds.
- Return type:
Result
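A minimal end-to-end sketch of calling start follows. It is an assumption-laden illustration, not part of this reference: the import paths, the ServerApp/@app.main() entry point, and the empty initial ArrayRecord are assumptions that may differ across Flower versions and XGBoost setups.

    # Sketch: driving FedXgbBagging from a Flower ServerApp.
    # Import paths and the entry-point pattern are assumptions; consult the
    # Flower XGBoost examples for the exact layout in your installed version.
    from flwr.common import ArrayRecord, ConfigRecord, Context, MetricRecord
    from flwr.serverapp import Grid, ServerApp
    from flwr.serverapp.strategy import FedXgbBagging

    app = ServerApp()

    @app.main()
    def main(grid: Grid, context: Context) -> None:
        # Optional centralized evaluation hook matching evaluate_fn's signature.
        def evaluate_fn(server_round: int, arrays: ArrayRecord) -> MetricRecord | None:
            if server_round == 0:
                return None  # nothing to score before the first round
            # Deserialize the aggregated booster from `arrays` and score it here.
            return MetricRecord({"round": server_round})

        strategy = FedXgbBagging(fraction_train=1.0, fraction_evaluate=1.0)

        result = strategy.start(
            grid=grid,
            initial_arrays=ArrayRecord(),  # start from an empty global model (assumption)
            num_rounds=3,
            timeout=3600,
            train_config=ConfigRecord({"local-epochs": 1}),
            evaluate_fn=evaluate_fn,
        )
        # `result` holds the final arrays plus train/evaluate metrics from all rounds.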
- summary() → None
Log summary configuration of the strategy.