Federated Learning with PyTorch and Flower (Advanced Example)


[!TIP] This example shows intermediate and advanced functionality of Flower. If you are new to Flower, it is recommended to start from the quickstart-pytorch example or the quickstart PyTorch tutorial.

This example shows how to extend your ClientApp and ServerApp capabilities compared to what’s shown in the quickstart-pytorch example. In particular, it shows how the ClientApp’s state (an object of type RecordSet) can be used to enable stateful clients, facilitating the design of personalized federated learning strategies, among other use cases. The ServerApp in this example makes use of a custom strategy derived from the built-in FedAvg. In addition, it also showcases how to:

  1. Save model checkpoints

  2. Save the metrics available at the strategy (e.g. accuracies, losses)

  3. Log training artefacts to Weights & Biases

  4. Implement a simple decaying learning rate schedule across rounds
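For instance, items 1 and 2 are naturally handled by a strategy derived from FedAvg. The following is a minimal sketch, not this example's actual strategy.py (class and file names are illustrative), of how aggregated evaluation metrics can be persisted to a JSON file after every round:

# Minimal sketch of a FedAvg-derived strategy that stores aggregated
# evaluation metrics after each round. Illustrative only; see strategy.py
# in this example for the real implementation.
import json
from pathlib import Path

from flwr.server.strategy import FedAvg


class ResultsSavingFedAvg(FedAvg):
    def __init__(self, *args, results_path: Path = Path("results.json"), **kwargs):
        super().__init__(*args, **kwargs)
        self.results_path = results_path
        self.results = {}  # maps round number -> aggregated metrics

    def aggregate_evaluate(self, server_round, results, failures):
        # Let FedAvg perform the aggregation, then persist the outcome
        loss, metrics = super().aggregate_evaluate(server_round, results, failures)
        self.results[server_round] = {"federated_evaluate_loss": loss, **metrics}
        self.results_path.write_text(json.dumps(self.results, indent=2))
        return loss, metrics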

The structure of this directory is as follows:

advanced-pytorch
├── pytorch_example
│   ├── __init__.py
│   ├── client_app.py   # Defines your ClientApp
│   ├── server_app.py   # Defines your ServerApp
│   ├── strategy.py     # Defines a custom strategy
│   └── task.py         # Defines your model, training and data loading
├── pyproject.toml      # Project metadata like dependencies and configs
└── README.md

[!NOTE] By default this example will log metrics to Weights & Biases. For this, you need to ensure that you are logged in to W&B on your system. Often it’s as simple as running wandb login in the terminal after installing wandb. Please refer to this quickstart guide for more information.

This example uses Flower Datasets with the Dirichlet Partitioner to partition the Fashion-MNIST dataset in a non-IID fashion into 50 partitions.

[!TIP] You can use Flower Datasets built-in visualization tools to easily generate plots like the one above.
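The snippet below is a minimal sketch, assuming the flwr-datasets API, of how such a Dirichlet partitioning and a label-distribution plot can be produced; the dataset identifier and the alpha value are illustrative and may differ from what this example actually uses:

# Illustrative sketch of Dirichlet partitioning with Flower Datasets.
# Values such as alpha are assumptions, not this example's actual settings.
import matplotlib.pyplot as plt
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import DirichletPartitioner
from flwr_datasets.visualization import plot_label_distributions

partitioner = DirichletPartitioner(num_partitions=50, partition_by="label", alpha=1.0)
fds = FederatedDataset(
    dataset="zalando-datasets/fashion_mnist",  # Fashion-MNIST on the Hugging Face Hub
    partitioners={"train": partitioner},
)
partition = fds.load_partition(0, "train")  # the local training data of one client

# Visualize how labels are distributed across the 50 partitions
plot_label_distributions(partitioner, label_name="label")
plt.savefig("label_distribution.png")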

Install dependencies and project

Install the dependencies defined in pyproject.toml as well as the pytorch_example package.

pip install -e .

Run the project

You can run your Flower project in both simulation and deployment mode without making changes to the code. If you are starting with Flower, we recommend using simulation mode, as it requires fewer components to be launched manually. By default, flwr run will make use of the Simulation Engine.

When you run the project, the strategy will create a directory structure in the form of outputs/date/time and store two JSON files: config.json containing the run-config that the ServerApp receives; and results.json containing the results (accuracies, losses) that are generated at the strategy.

By default, the metrics {centralized_accuracy, centralized_loss, federated_evaluate_accuracy, federated_evaluate_loss} will be logged to Weights & Biases (they are also stored in the results.json mentioned previously). Upon executing flwr run you’ll see a URL linking to your Weights & Biases dashboard where you can see the metrics.
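If you prefer inspecting these metrics locally, the stored results.json can be loaded with a few lines of Python. This is a hypothetical helper, not part of the example; replace the placeholder path components with those of an actual run:

# Hypothetical helper: print the metrics the strategy saved for a given run.
import json
from pathlib import Path

results_file = Path("outputs") / "<date>" / "<time>" / "results.json"
print(json.dumps(json.loads(results_file.read_text()), indent=2))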

Run with the Simulation Engine

With default parameters, 25% of the 50 total nodes (see num-supernodes in pyproject.toml) will be sampled for each fit round and 50% for each evaluate round. By default, ClientApp objects will run on CPU.

[!TIP] To run your ClientApps on GPU or to adjust the degree of parallelism of your simulation, edit the [tool.flwr.federations.local-simulation] section in the pyproject.toml.
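For reference, such federation sections typically look as follows; the values are illustrative and the pyproject.toml shipped with this example is the source of truth:

# Illustrative values only; check the example's pyproject.toml for the real ones.
[tool.flwr.federations.local-simulation]
options.num-supernodes = 50

[tool.flwr.federations.local-sim-gpu]
options.num-supernodes = 50
options.backend.client-resources.num-cpus = 2     # CPUs reserved per ClientApp
options.backend.client-resources.num-gpus = 0.25  # fraction of a GPU per ClientApp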

flwr run .

# To disable W&B
flwr run . --run-config use-wandb=false

You can run the app using another federation (see pyproject.toml). For example, if you have a GPU available, select the local-sim-gpu federation:

flwr run . local-sim-gpu

You can also override some of the settings for your ClientApp and ServerApp defined in pyproject.toml. For example:

flwr run . --run-config "num-server-rounds=5 fraction-fit=0.5"
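These overrides end up in the run-config that the ServerApp (and ClientApp) receive through their Context. Below is a minimal sketch, with an assumed structure rather than this example's actual server_app.py, of how a ServerApp might read them:

# Minimal sketch: reading run-config values in a ServerApp. Illustrative only.
from flwr.common import Context
from flwr.server import ServerApp, ServerAppComponents, ServerConfig
from flwr.server.strategy import FedAvg


def server_fn(context: Context) -> ServerAppComponents:
    # Values defined in pyproject.toml, possibly overridden via --run-config
    num_rounds = int(context.run_config["num-server-rounds"])
    fraction_fit = float(context.run_config["fraction-fit"])
    strategy = FedAvg(fraction_fit=fraction_fit)
    return ServerAppComponents(strategy=strategy, config=ServerConfig(num_rounds=num_rounds))


app = ServerApp(server_fn=server_fn)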

Run with the Deployment Engine

[!NOTE] An update to this example will show how to run this Flower application with the Deployment Engine and TLS certificates, or with Docker.