@gfedops/fedops-multimodal
FedOps Multimodal
This Flower app packages FedOps-Multimodal for multimodal hateful-memes classification. The app is built for Flower Hub first, with Flower as the primary runtime and FedOps kept optional.
FedOps-Multimodal addresses two things:
- unstable local multimodal representations when clients have different modality availability
- unstable global classifier updates under heterogeneous multimodal federated learning
Prerequisites
- Python 3.10 or higher
- pip
Fetch the App
Install Flower:
pip install flwr
Fetch the app:
flwr new @gfedops/fedops-multimodal
This creates a local directory called fedops-multimodal:
fedops-multimodal/
├── data/
│   └── hateful_memes_output/
├── fedops_multimodal/
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   ├── task.py
│   ├── prepare_data.py
│   ├── core/
│   └── fedops_optional/
├── pyproject.toml
└── README.md
Run the App
You can run this Flower app in both simulation and deployment mode without changing the code. By default, flwr run . uses the Simulation Engine.
Run with the Simulation Engine
Install the dependencies defined in pyproject.toml as well as the fedops_multimodal package:
cd fedops-multimodal && pip install -e .
Run with default settings:
flwr run .
You can also override the run configuration. For example:
flwr run . --run-config "num-server-rounds=30 local-epochs=10 beta=0.3 learning-rate=0.05"
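The `--run-config` value is a space-separated list of `key=value` pairs. A minimal sketch of how such a string can be parsed into typed values (illustrative only; Flower's own run-config parser may apply different typing rules):

```python
def parse_run_config(s: str) -> dict:
    """Parse a space-separated "key=value" string into typed values.

    Illustrative sketch only: Flower's real parser may differ.
    Integers are tried first, then floats, then plain strings.
    """
    out = {}
    for pair in s.split():
        key, _, raw = pair.partition("=")
        for cast in (int, float):
            try:
                out[key] = cast(raw)
                break
            except ValueError:
                continue
        else:
            out[key] = raw.strip('"')  # fall back to string
    return out

cfg = parse_run_config("num-server-rounds=30 local-epochs=10 beta=0.3 learning-rate=0.05")
# cfg["num-server-rounds"] == 30 (int), cfg["beta"] == 0.3 (float)
```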
The default simulation path works immediately after install. If artifacts are not present (for example in a fresh Flower Hub FAB fetch), the app auto-creates the default tiny artifact set under data/hateful_memes_output on first run.
Default simulation settings in this app:
- num-server-rounds=30
- local-epochs=3
- batch-size=16
- learning-rate=0.05
If you want a longer local training run without changing the packaged defaults, use:
flwr run . --run-config "local-epochs=10"
Run with the Deployment Engine
To run the app with Flower's Deployment Engine, start one SuperLink and at least two SuperNodes. Using the bundled tiny bootstrap data:
APP_DIR="$(pwd)"
Start a local SuperLink:
flower-superlink --insecure
Start SuperNode 0:
flower-supernode \
  --insecure \
  --superlink 127.0.0.1:9092 \
  --clientappio-api-address 127.0.0.1:9094 \
  --node-config "data-path=\"$APP_DIR/data/hateful_memes_output\" client-id=0"
Start SuperNode 1:
flower-supernode \
  --insecure \
  --superlink 127.0.0.1:9092 \
  --clientappio-api-address 127.0.0.1:9095 \
  --node-config "data-path=\"$APP_DIR/data/hateful_memes_output\" client-id=1"
Create a local Flower connection:
mkdir -p ~/.flwr
cat > ~/.flwr/config.toml <<'EOF_CFG'
[superlink.local-deployment]
address = "127.0.0.1:9093"
insecure = true
EOF_CFG
Launch the run:
flwr run . local-deployment --stream \
  --run-config 'alpha=0.1 sample-missing-rate=0.2 modality-missing-rate=0.2 beta=0.3 num-server-rounds=30 local-epochs=3 batch-size=8'
For real experiments, replace "$APP_DIR/data/hateful_memes_output" with your full prepared artifact root and assign each node its own client-id.
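Each SuperNode must receive its own `client-id` in `--node-config`, with the data path wrapped in escaped quotes as in the commands above. A hypothetical helper (not part of the app) that assembles the node-config string for any number of clients:

```python
def make_node_config(data_path: str, client_id: int) -> str:
    """Build the --node-config value for one SuperNode.

    Hypothetical helper for illustration; the quoting mirrors the
    documented commands, where the path is wrapped in double quotes.
    """
    return f'data-path="{data_path}" client-id={client_id}'

# Print the per-node config strings for a two-client deployment.
for cid in range(2):
    cfg = make_node_config("/abs/path/data/hateful_memes_output", cid)
    print(f"--node-config '{cfg}'")
```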
For the bundled deployment smoke path, the documented command uses:
- num-server-rounds=30
- local-epochs=3
- batch-size=8
Data
The default local bootstrap data is a tiny two-client subset of the Hateful Memes artifacts prepared for:
- alpha=0.1
- sample-missing-rate=0.2
- modality-missing-rate=0.2
The demo bundle contains:
- two train clients: 0 and 1
- shared evaluation splits: dev and test
- image features from MobileNetV2 (1 x 1280 per sample in the shipped bundle)
- text features from MobileBERT (sequence_length x 512)
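Given the shipped feature shapes (image: 1 x 1280, text: sequence_length x 512), a common way to combine them is to pool the text sequence and concatenate with the image vector. This is only a shape illustration of the bundled features, not the app's actual fusion model:

```python
def fuse_features(image_feat, text_feats):
    """Mean-pool the text sequence, then concatenate with the image vector.

    image_feat: list of 1280 floats (MobileNetV2 vector, 1 x 1280)
    text_feats: list of per-token 512-float lists (MobileBERT, L x 512)
    Shape illustration only; the app's real fusion may differ.
    """
    seq_len = len(text_feats)
    pooled = [sum(tok[d] for tok in text_feats) / seq_len for d in range(512)]
    return image_feat + pooled  # 1280 + 512 = 1792 fused dimensions

fused = fuse_features([0.0] * 1280, [[1.0] * 512, [3.0] * 512])
# len(fused) == 1792; the pooled text dimensions are all 2.0
```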
The shipped train clients are intentionally heterogeneous:
- client 0: 2 train samples, all negative (0.0 positive ratio)
- client 1: 31 train samples, all positive (1.0 positive ratio)
- dev: 398 samples, 0.4975 positive ratio
- test: 815 samples, 0.4822 positive ratio
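Pooling the two shipped train clients gives 33 samples with 31 positives, so the combined train positive ratio (~0.94) sits far from the dev/test ratios (~0.50); that gap is exactly the label skew the demo exercises. A quick check using the numbers above:

```python
# Per-client train stats from the bundled demo subset (see list above).
clients = {0: {"n": 2, "pos": 0}, 1: {"n": 31, "pos": 31}}

total = sum(c["n"] for c in clients.values())        # 33 train samples
positives = sum(c["pos"] for c in clients.values())  # 31 positives
pooled_ratio = positives / total
print(round(pooled_ratio, 4))  # 0.9394, versus 0.4975 (dev) / 0.4822 (test)
```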
The generated simulation file is:
- data/hateful_memes_output/simulation_feature/hateful_memes/mm_ps02_pm02_alpha01.json
That means the demo uses the same alpha=0.1, p_s=0.2, p_m=0.2 configuration naming as the full artifact pipeline. For the two shipped demo clients, the retained smoke-test subset keeps both image and text available after filtering, so the most visible heterogeneity in the bundled review path is client-size skew and label skew. The full prepared artifacts expose the broader missing-modality setting.
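The artifact name encodes alpha=0.1 (label split), p_s=0.2 (sample-missing rate), and p_m=0.2 (modality-missing rate). A minimal sketch of how such missingness can be simulated, assuming independent per-sample and per-modality drops (the real fed-multimodal pipeline may draw its masks differently):

```python
import random

def apply_missingness(n_samples, p_s=0.2, p_m=0.2, seed=0):
    """Simulate sample-level and modality-level dropout.

    With probability p_s a sample is dropped entirely; otherwise each
    modality is independently kept with probability 1 - p_m, always
    retaining at least one modality so the sample stays usable.
    Sketch under stated assumptions only, not the pipeline's exact logic.
    """
    rng = random.Random(seed)
    kept = []
    for i in range(n_samples):
        if rng.random() < p_s:
            continue  # sample-level drop
        has_image = rng.random() >= p_m
        has_text = rng.random() >= p_m
        if not (has_image or has_text):
            has_image = True  # guarantee at least one modality survives
        kept.append((i, has_image, has_text))
    return kept

kept = apply_missingness(1000)  # roughly 800 samples survive at p_s=0.2
```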
Data Source
For full-data preparation, this app delegates raw Hateful Memes setup to fedops-dataset. The default public source used there is the Hugging Face dataset mirror:
- https://huggingface.co/datasets/neuralcatcher/hateful_memes
In code, that source is configured as the default Hateful Memes repo id neuralcatcher/hateful_memes inside fedops-dataset. The bootstrap data in this app is only a small local subset for Flower Hub review; it is not the full raw dataset.
Prepare Full Data
The bootstrap data is enough for Flower Hub review. For full experiments, use the app-owned preparation command:
fedops-multimodal-prepare-data \
  --repo-root /path/to/fed-multimodal \
  --data-root /path/to/fed-multimodal/fed_multimodal/data \
  --output-dir /path/to/fed-multimodal/fed_multimodal/output \
  --alpha 0.1 \
  --sample-missing-rate 0.2 \
  --modality-missing-rate 0.2
Optional FedOps Runtime (FedMAP-Style Procedure)
If you want to run this app in a standalone FedOps runtime (the same operational pattern as the FedMAP guide), follow the steps below.
Reference guide pattern:
- https://gachon-cclab.github.io/docs/FedOps-Aggregation-method/How-use-FedMAP/
1. Install optional FedOps dependencies
pip install -e .[fedops]
2. Prepare client data artifacts
Use this app's prepared artifacts under:
- data/hateful_memes_output
Or prepare full artifacts with:
fedops-multimodal-prepare-data \
  --repo-root /path/to/fed-multimodal \
  --data-root /path/to/fed-multimodal/fed_multimodal/data \
  --output-dir /path/to/fed-multimodal/fed_multimodal/output \
  --alpha 0.1 \
  --sample-missing-rate 0.2 \
  --modality-missing-rate 0.2
3. Create FedOps task
In the FedOps platform, create a task with:
- task_id: fedmstest
- model_type: Pytorch
- strategy target: fedops.server.fedops_multimodal_strategy.FedOpsMultimodalFedAvgStrategy
The local optional config is already aligned:
- fedops_multimodal/fedops_optional/conf/config.yaml
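The strategy target extends FedAvg-style aggregation. The baseline FedAvg rule it builds on, a sample-count-weighted average of client parameters, can be sketched in plain Python (this is the generic update only; FedOpsMultimodalFedAvgStrategy layers additional logic on top):

```python
def fedavg(client_params, client_sizes):
    """Sample-count-weighted average of client parameter vectors (FedAvg).

    client_params: list of equal-length float lists, one per client
    client_sizes:  number of local training samples per client (weights)
    Sketch of the generic baseline only, not the app's full strategy.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[d] * n for p, n in zip(client_params, client_sizes)) / total
        for d in range(dim)
    ]

# With the demo's 2-vs-31 client sizes, the average leans toward client 1.
global_update = fedavg([[1.0, 0.0], [0.0, 1.0]], [2, 31])
```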
4. Server pod code patch (K8s FedOps server)
Place the strategy code at the server package path:
/usr/local/lib/python3.10/site-packages/fedops/server/fedops_multimodal_strategy.py
Copy the content from:
fedops_multimodal/fedops_optional/strategy.py
5. Start server process
Run inside the server environment/pod:
python -m fedops_multimodal.fedops_optional.server_main
6. Start client processes (FedMAP-style client + manager)
On each client node/device, run both processes in separate terminals.
Client node A:
python -m fedops_multimodal.fedops_optional.client_main client_id=0
python -m fedops_multimodal.fedops_optional.client_manager_main
Client node B:
python -m fedops_multimodal.fedops_optional.client_main client_id=1
python -m fedops_multimodal.fedops_optional.client_manager_main
Repeat for more clients by assigning the correct client_id.
7. Monitor and verify
- Monitor server logs from FedOps server management.
- Monitor client and manager logs on each client node.
- Confirm rounds progress and global model updates.
Flower Hub review should focus on the Flower-native flwr run . path; FedOps runtime above is optional.