Federated HuggingFace Transformers using Flower and PyTorch#

View on GitHub

This is an introductory example of using HuggingFace Transformers with Flower and PyTorch. It extends the quickstart-pytorch example. The training script closely follows the HuggingFace course, so you are encouraged to check that out for a detailed explanation of the transformer pipeline.

Like quickstart-pytorch, this example is meant to be easy to run on its own.

Project Setup#

Start by cloning the example project. We have prepared a single-line command that you can copy into your shell; it will check out the example for you:

git clone --depth=1 https://github.com/adap/flower.git && mv flower/examples/quickstart-huggingface . && rm -rf flower && cd quickstart-huggingface

This will create a new directory called quickstart-huggingface containing the following files:

-- pyproject.toml
-- requirements.txt
-- client.py
-- server.py
-- README.md

Installing Dependencies#

Project dependencies (such as torch and flwr) are defined in pyproject.toml and requirements.txt. We recommend Poetry (see Poetry installation) or pip to install those dependencies and manage your virtual environment, but feel free to use a different way of installing dependencies and managing virtual environments if you have other preferences.


poetry install
poetry shell

Poetry will install all your dependencies in a newly created virtual environment. To verify that everything works correctly, you can run the following command:

poetry run python3 -c "import flwr"

If you don’t see any errors you’re good to go!


Alternatively, run the command below in your terminal to install the dependencies listed in requirements.txt:

pip install -r requirements.txt

Run Federated Learning with Flower#

Afterwards, you are ready to start the Flower server as well as the clients. You can simply start the server in a terminal as follows:

python3 server.py
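During training, the server coordinates rounds of federated learning and aggregates the model updates it receives from clients. As an illustrative sketch (not Flower's actual implementation), the FedAvg strategy can be thought of as a weighted average of client parameters, weighted by the number of training examples each client holds:

```python
# Illustrative sketch of FedAvg-style aggregation, NOT Flower's internal code:
# each client's contribution is weighted by its number of training examples.

def fed_avg(client_updates):
    """client_updates: list of (num_examples, parameters) pairs,
    where parameters is a flat list of model weights."""
    total_examples = sum(n for n, _ in client_updates)
    num_params = len(client_updates[0][1])
    aggregated = [0.0] * num_params
    for n, params in client_updates:
        for i, p in enumerate(params):
            aggregated[i] += (n / total_examples) * p
    return aggregated

if __name__ == "__main__":
    # A client with 300 examples pulls the average 3x harder
    # than one with 100 examples.
    updates = [(100, [1.0, 2.0]), (300, [3.0, 4.0])]
    print(fed_avg(updates))  # [2.5, 3.5]
```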

Now you are ready to start the Flower clients, which will participate in the learning. To do so, simply open two more terminal windows and run the following commands.

Start client 1 in the first terminal:

python3 client.py --partition-id 0

Start client 2 in the second terminal:

python3 client.py --partition-id 1
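The --partition-id flag tells each client which disjoint shard of the training data to use, so the two clients train on different data. A hypothetical sketch of such partitioning (the real client.py may split the data differently) looks like this:

```python
# Hypothetical sketch of --partition-id handling; the actual client.py
# may partition the dataset differently.
import argparse

def partition(items, num_partitions, partition_id):
    """Return the partition_id-th of num_partitions contiguous shards."""
    shard_size = len(items) // num_partitions
    start = partition_id * shard_size
    # The last shard absorbs any leftover items.
    end = start + shard_size if partition_id < num_partitions - 1 else len(items)
    return items[start:end]

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--partition-id", type=int, default=0)
    args = parser.parse_args([])  # empty argv for this demo run
    data = list(range(10))
    print(partition(data, 2, args.partition_id))  # [0, 1, 2, 3, 4]
```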

You will see that PyTorch starts the federated training.