run_simulation
- run_simulation(server_app: ServerApp, client_app: ClientApp, num_supernodes: int, backend_name: str = 'ray', backend_config: dict[str, dict[str, int | float | str | bytes | bool | list[int] | list[float] | list[str] | list[bytes] | list[bool]]] | None = None, enable_tf_gpu_growth: bool = False, verbose_logging: bool = False) -> None
Run a Flower App using the Simulation Engine.
- Parameters:
server_app (ServerApp) – The ServerApp to be executed. It will send messages to different ClientApp instances running on different (virtual) SuperNodes.
client_app (ClientApp) – The ClientApp to be executed by each of the SuperNodes. It will receive messages sent by the ServerApp.
num_supernodes (int) – Number of nodes that run a ClientApp. They can be sampled by a Driver in the ServerApp and receive a Message describing the task the ClientApp should perform.
backend_name (str (default: ray)) – A simulation backend that runs `ClientApp`s.
backend_config (Optional[BackendConfig]) – A dictionary to configure the backend, with separate sub-dictionaries configuring its different elements. Supported top-level keys are init_args for values passed to the backend's initialisation, client_resources to define the resources for clients, and actor to define the actor parameters. Supported values are those allowed by flwr.common.typing.ConfigsRecordValues.
enable_tf_gpu_growth (bool (default: False)) – A boolean to indicate whether to enable GPU growth on the main thread. This is desirable if you make use of a TensorFlow model on your ServerApp while having your ClientApp running on the same GPU. Without enabling this, you might encounter an out-of-memory error because TensorFlow, by default, allocates all GPU memory. Read more about how tf.config.experimental.set_memory_growth() works in the TensorFlow documentation: https://www.tensorflow.org/api/stable.
verbose_logging (bool (default: False)) – When disabled, only INFO, WARNING and ERROR log messages will be shown. If enabled, DEBUG-level logs will be displayed.
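The structure described for backend_config can be sketched as a plain dictionary. The specific keys inside each sub-dictionary (num_cpus, num_gpus, tensorflow) and their values are illustrative assumptions, not a definitive configuration:

```python
# Hypothetical backend_config for run_simulation, illustrating the three
# documented top-level keys. All concrete values here are assumptions
# chosen for illustration only.
backend_config = {
    # Arguments passed to the backend's initialisation (e.g. Ray).
    "init_args": {"num_cpus": 8},
    # Resources reserved for each virtual client running a ClientApp.
    "client_resources": {"num_cpus": 2.0, "num_gpus": 0.0},
    # Parameters for the actors that execute ClientApps.
    "actor": {"tensorflow": 0},
}

# All leaf values must be types allowed by
# flwr.common.typing.ConfigsRecordValues (int, float, str, bytes, bool,
# or lists thereof), matching the signature above.
```

Such a dictionary would then be passed as the backend_config argument, e.g. run_simulation(server_app=server_app, client_app=client_app, num_supernodes=10, backend_config=backend_config).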