# FlowerTune GCCL Medical LLM
Federated instruction tuning of ContactDoctor/Bio-Medical-Llama-3-8B on the medalpaca/medical_meadow_medical_flashcards dataset, using the Flower Simulation Engine.
## Quickstart

```shell
pip install -e .
huggingface-cli login
flwr run .
```
## Project Structure

```
flowertune-gccl-medical-llm/
├── flowertune_gccl_medical_llm/
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
```
## Configuration

All app settings are defined in `pyproject.toml` under `[tool.flwr.app.config]`.

Key defaults:
- `model.name = "ContactDoctor/Bio-Medical-Llama-3-8B"`
- `model.quantization = 4` (bitsandbytes 4-bit)
- `model.lora.peft-lora-r = 32`
- `model.lora.peft-lora-alpha = 64`
- `strategy.fraction-fit = 0.1`
- `num-server-rounds = 100`
- `dataset.name = "medalpaca/medical_meadow_medical_flashcards"`
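A sketch of how these defaults might be laid out in `pyproject.toml` (the values are taken from the list above; the exact table structure in the app's file may differ):

```toml
[tool.flwr.app.config]
model.name = "ContactDoctor/Bio-Medical-Llama-3-8B"
model.quantization = 4
model.lora.peft-lora-r = 32
model.lora.peft-lora-alpha = 64
strategy.fraction-fit = 0.1
num-server-rounds = 100
dataset.name = "medalpaca/medical_meadow_medical_flashcards"
```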
You can override runtime values with `--run-config`:

```shell
flwr run . --run-config "num-server-rounds=10 strategy.fraction-fit=0.2"
```
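Flower parses these overrides itself; the simplified stdlib sketch below (`parse_run_config` is an illustrative helper, not a Flower API) just shows the space-separated `key=value` idea. The real parser additionally handles TOML typing and quoted values.

```python
def parse_run_config(overrides: str) -> dict:
    """Split a space-separated string of key=value pairs into a dict.

    Simplified sketch: Flower's actual parser also applies TOML
    typing (ints, floats, bools) and supports quoted strings.
    """
    return dict(pair.split("=", 1) for pair in overrides.split())

config = parse_run_config("num-server-rounds=10 strategy.fraction-fit=0.2")
```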
## Method
- PEFT LoRA fine-tuning via peft + trl (SFTTrainer)
- FedAvg-based aggregation strategy
- Cosine annealing LR schedule between `train.learning-rate-max` and `train.learning-rate-min`
- IID partitioning with flwr-datasets (`num-supernodes` clients)
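The cosine annealing schedule can be sketched with stdlib math (`cosine_lr` is an illustrative helper, not the app's actual function name): the rate starts at `train.learning-rate-max` in round 0 and decays to `train.learning-rate-min` by the final round.

```python
import math

def cosine_lr(rnd: int, total_rounds: int, lr_max: float, lr_min: float) -> float:
    """Cosine annealing from lr_max (round 0) down to lr_min (final round)."""
    cos = math.cos(math.pi * rnd / total_rounds)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + cos)
```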
## Outputs
- Global PEFT checkpoints are saved on the server every `train.save-every-round` rounds and at the final round
- Saved path format: `results/<timestamp>/peft_<round>`
- Communication-budget tracking is appended to `comm_cost.txt`
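The save logic above can be sketched as follows (`should_save` and `peft_path` are hypothetical helper names; the timestamp format is illustrative):

```python
def should_save(rnd: int, num_rounds: int, save_every: int) -> bool:
    """Save a checkpoint every `save_every` rounds and always at the final round."""
    return rnd % save_every == 0 or rnd == num_rounds

def peft_path(timestamp: str, rnd: int) -> str:
    """Build the results/<timestamp>/peft_<round> checkpoint path."""
    return f"results/{timestamp}/peft_{rnd}"
```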
## Compatibility Notes
This app was validated with:
- flwr==1.26.1
- torch==2.3.1
- transformers==4.43.1
- trl==0.8.1
- bitsandbytes==0.43.3
- accelerate==0.31.0
If `DataCollatorForCompletionOnlyLM` is missing in your trl version, the app falls back to `SFTTrainer`'s default collator behavior.
## Flower Hub

After publishing, users can fetch this app via:

```shell
flwr new @gfedops/flowertune-gccl-medical-llm
```