---
tags: [mods, monitoring]
dataset: [CIFAR-10]
framework: [wandb, tensorboard]
---

# Using custom mods 🧪

[View on GitHub](https://github.com/adap/flower/blob/main/examples/custom-mods)

> 🧪 = This example covers experimental features that might change in future versions of Flower
>
> Please consult the regular PyTorch code examples ([quickstart](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch), [advanced](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)) to learn how to use Flower with PyTorch.

The following steps describe how to write custom Flower Mods and use them in a simple example.

## Writing custom Flower Mods

As described [in the documentation](https://flower.ai/docs/framework/how-to-use-built-in-mods.html#what-are-mods), a Flower Mod in its simplest form can be described as:

```python
def basic_mod(msg: Message, context: Context, app: ClientApp) -> Message:
    # Do something with incoming Message (or Context)
    # before passing to the inner ``ClientApp``
    reply = app(msg, context)
    # Do something with outgoing Message (or Context)
    # before returning
    return reply
```

and used when defining the `ClientApp` as:

```python
app = ClientApp(
    client_fn=client_fn,
    mods=[basic_mod],
)
```

The mods in this example do not modify the `Message` object that the `ClientApp` sends to the `ServerApp`. Instead, the mods only log the _metrics_ returned by the client's `fit()` method to Weights & Biases or to TensorBoard.

## Set up the project

### Clone the project

Start by cloning the example project:

```shell
git clone --depth=1 https://github.com/adap/flower.git _tmp \
        && mv _tmp/examples/custom-mods . \
        && rm -rf _tmp \
        && cd custom-mods
```

This will create a new directory called `custom-mods` with the following structure:

```shell
custom-mods
├── README.md
├── custom_mods
│   ├── __init__.py
│   ├── client_app.py   # Defines your ClientApp
│   ├── mods.py         # Defines a Weights & Biases and TensorBoard mod
│   ├── server_app.py   # Defines your ServerApp
│   └── task.py         # Defines your model, training and data loading
└── pyproject.toml      # Project metadata like dependencies and configs
```

### Install dependencies and project

Install the dependencies defined in `pyproject.toml` as well as the `custom_mods` package.

```bash
pip install -e .
```

## Run the project

> [!TIP]
> By default, the `ClientApp` uses the TensorBoard mod. If you would like to enable the Weights & Biases mod instead, edit the line at the bottom of `custom_mods/client_app.py` and pass the `get_wandb_mod` mod to the constructor of your `ClientApp`.

You can run your Flower project in both _simulation_ and _deployment_ mode without making changes to the code. If you are starting with Flower, we recommend using the _simulation_ mode, as it requires fewer components to be launched manually. By default, `flwr run` will make use of the Simulation Engine.

### Run with the Simulation Engine

> [!NOTE]
> Check the [Simulation Engine documentation](https://flower.ai/docs/framework/how-to-run-simulations.html) to learn more about Flower simulations and how to optimize them.

```bash
flwr run .
```

You can also override some of the settings for your `ClientApp` and `ServerApp` defined in `pyproject.toml`. For example:

```bash
flwr run . --run-config "num-server-rounds=5 learning-rate=0.05"
```

### Check the results

For W&B, you will need to log in to the [website](https://wandb.ai).
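If you prefer to authenticate from the terminal instead, you can use the standard `wandb` CLI login command and paste the API key shown on the website when prompted:

```bash
wandb login
```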
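What shows up in your W&B dashboard is whatever the mod passes to `wandb.log`. Conceptually, a metrics-logging mod follows the same pattern as `basic_mod` above. The following is a minimal sketch, not the actual implementation in `custom_mods/mods.py`; in particular, `extract_fit_metrics` is a hypothetical placeholder, since how metrics are stored inside a reply `Message` depends on your Flower version:

```python
import wandb

from flwr.client import ClientApp
from flwr.common import Context, Message


def extract_fit_metrics(reply: Message) -> dict:
    # Hypothetical placeholder: adapt this to read the metrics returned by
    # your client's fit() method out of the reply Message in your Flower version.
    return {}


def wandb_mod(msg: Message, context: Context, app: ClientApp) -> Message:
    # Start a W&B run once; the project name is arbitrary
    # (assumes you have already authenticated via `wandb login`)
    if wandb.run is None:
        wandb.init(project="custom-mods-example")

    # Let the inner ClientApp handle the incoming message first
    reply = app(msg, context)

    # Log the fit() metrics; the reply itself is returned unmodified
    metrics = extract_fit_metrics(reply)
    if metrics:
        wandb.log(metrics)

    return reply
```

You would enable such a mod exactly as shown earlier, by passing it to `mods=[...]` in the `ClientApp` constructor.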
For TensorBoard, you will need to run the following command in your terminal:

```bash
tensorboard --logdir <LOG_DIR>
```

Where `<LOG_DIR>` needs to be replaced by the directory passed as an argument to the wrapper function (`.runs_history/` by default).

### Run with the Deployment Engine

Follow this [how-to guide](https://flower.ai/docs/framework/how-to-run-flower-with-deployment-engine.html) to run the same app in this example but with Flower's Deployment Engine. After that, you might be interested in setting up [secure TLS-enabled communications](https://flower.ai/docs/framework/how-to-enable-tls-connections.html) and [SuperNode authentication](https://flower.ai/docs/framework/how-to-authenticate-supernodes.html) in your federation.

If you are already familiar with how the Deployment Engine works, you may want to learn how to run it using Docker. Check out the [Flower with Docker](https://flower.ai/docs/framework/docker/index.html) documentation.