
@camlsys/fed-audio-tagging


Federated Learning for Urban Audio Tagging

This is a federated learning application built with PyTorch, TorchAudio, and Flower, simulating a fleet of distributed edge microphones collaboratively learning an urban sound classification model.

Each client represents an independent audio device (e.g., smart-city microphone, industrial sensor, retail environment recorder) that trains locally on its own audio data. The global model is aggregated using FedAvg, without sharing raw audio recordings.
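FedAvg combines each round's client updates into a data-size-weighted average of model parameters. A minimal sketch of that aggregation rule (not the app's actual server code; Flower's built-in FedAvg strategy implements this for you):

```python
import numpy as np

def fedavg(client_weights, num_examples):
    """Data-size-weighted average of client parameters (FedAvg).

    client_weights: one list of np.ndarray layers per client
    num_examples: number of local training examples per client
    """
    total = sum(num_examples)
    return [
        sum(n / total * layer for n, layer in zip(num_examples, layers))
        for layers in zip(*client_weights)
    ]

# Two clients with a one-layer "model"; the second client has 3x the data
aggregated = fedavg(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    [1, 3],
)
```

Only the parameter arrays are exchanged; the raw audio never leaves the client.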

This application uses:

  • A lightweight CNN for audio tagging
  • Log-Mel spectrogram features (TorchAudio)
  • Flower Datasets (FDS) for partitioning
  • The Hugging Face version of UrbanSound8K
  • Flower’s Simulation or Deployment Engine

Fetch the App

Install Flower:

pip install flwr

Fetch the app:

flwr new @camlsys/fed-audio-tagging

This will create a new directory called fed-audio-tagging with the following structure:

fed-audio-tagging
├── fedaudio
│   ├── __init__.py
│   ├── client_app.py   # Defines your ClientApp
│   ├── server_app.py   # Defines your ServerApp
│   └── task.py         # Defines your model, training and data loading
├── pyproject.toml      # Project metadata like dependencies and configs
└── README.md

Run the App

You can run your Flower App in both simulation and deployment mode without changing the code. If you are new to Flower, we recommend starting with simulation mode, as it requires fewer components to be launched manually. By default, flwr run uses the Simulation Engine.

Run with the Simulation Engine

TIP

Check the Simulation Engine documentation to learn more about Flower simulations, how to use more virtual SuperNodes, and how to configure CPU/GPU usage in your ClientApp.
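For reference, simulation options such as the number of virtual SuperNodes and per-ClientApp resources are set in pyproject.toml. A hypothetical excerpt (the federation name and values are assumptions; check the generated pyproject.toml and the Simulation Engine docs for the real ones):

```toml
[tool.flwr.federations.local-simulation]
options.num-supernodes = 10
options.backend.client-resources.num-cpus = 2
options.backend.client-resources.num-gpus = 0.0
```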

Install the dependencies defined in pyproject.toml as well as the fedaudio package.

cd fed-audio-tagging && pip install -e .

Install ffmpeg:

# If using Conda
conda install -c conda-forge ffmpeg

# Or on Ubuntu
sudo apt install ffmpeg

Run with default settings:

flwr run .  # CPU only

You can also override some of the settings for your ClientApp and ServerApp defined in pyproject.toml. For example:

flwr run . --run-config "num-server-rounds=5 learning-rate=0.05"
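The keys accepted by --run-config correspond to entries under [tool.flwr.app.config] in pyproject.toml. A hypothetical excerpt (the default values shown are assumptions; see the fetched app for the actual ones):

```toml
[tool.flwr.app.config]
num-server-rounds = 3
learning-rate = 0.01
```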

Run with the Deployment Engine

To run this App using Flower's Deployment Engine, we recommend first creating some demo data using Flower Datasets. For example:

# Install Flower datasets
pip install "flwr-datasets['audio']"

# Create dataset partitions and save them to disk
flwr-datasets create danavery/urbansound8K --num-partitions 2 --out-dir demo_data

The above command will create two IID partitions of the UrbanSound8K dataset and save them in a demo_data directory. Next, you can pass one partition to each of your SuperNodes like this:

flower-supernode \
    --insecure \
    --superlink <SUPERLINK-FLEET-API> \
    --node-config="data-path=/path/to/demo_data/partition_0"
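Inside the ClientApp, the value passed via --node-config is available through the run context as context.node_config. A minimal sketch of how client code might pick it up (resolve_data_path is a hypothetical helper, not part of the app):

```python
from pathlib import Path

def resolve_data_path(node_config: dict) -> Path:
    """Return the partition directory this SuperNode was given.

    In a Flower ClientApp, node_config comes from context.node_config,
    which is populated by the --node-config flag shown above.
    """
    path = Path(node_config["data-path"])
    if not path.exists():
        raise FileNotFoundError(f"partition not found: {path}")
    return path
```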

Finally, ensure that the environment of each SuperNode has all dependencies installed. Then launch the run via flwr run, pointing it at the federation that connects to the SuperLink your SuperNodes are registered with:

flwr run . <SUPERLINK-CONNECTION> --stream

TIP

Follow this how-to guide to run the same app in this example but with Flower's Deployment Engine. After that, you might be interested in setting up secure TLS-enabled communications and SuperNode authentication in your federation.