@gfedops/flowertune-gccl-medical-llm2
FlowerTune GCCL Medical LLM2
Federated instruction tuning with ContactDoctor/Bio-Medical-Llama-3-8B on medalpaca/medical_meadow_medical_flashcards using Flower Simulation Engine.
Quickstart
pip install -e .
huggingface-cli login
flwr run .
Project Structure
flowertune-gccl-medical-llm2/
├── flowertune_gccl_medical_llm2/
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
Configuration
All run settings live in pyproject.toml under [tool.flwr.app.config].
Key defaults:
- model.name = "ContactDoctor/Bio-Medical-Llama-3-8B"
- model.quantization = 4
- model.lora.peft-lora-r = 32
- model.lora.peft-lora-alpha = 64
- dataset.name = "medalpaca/medical_meadow_medical_flashcards"
- strategy.fraction-fit = 0.1
- num-server-rounds = 10
Example override:
flwr run . --run-config "num-server-rounds=20 strategy.fraction-fit=0.2"
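As a sketch, the defaults listed above would appear in pyproject.toml roughly as follows; the exact key layout is an assumption based on the list, so check the file itself for the authoritative values.

```toml
[tool.flwr.app.config]
num-server-rounds = 10
model.name = "ContactDoctor/Bio-Medical-Llama-3-8B"
model.quantization = 4
model.lora.peft-lora-r = 32
model.lora.peft-lora-alpha = 64
dataset.name = "medalpaca/medical_meadow_medical_flashcards"
strategy.fraction-fit = 0.1
```

Values passed via --run-config override these defaults for a single run.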
Method
- LoRA-based PEFT fine-tuning with trl.SFTTrainer
- FedAvg aggregation
- Cosine annealing learning rate schedule
- IID partitioning via flwr-datasets
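The cosine annealing schedule above can be sketched as a per-round learning rate. This is a minimal illustration, not the project's implementation; the lr_max and lr_min values are placeholder assumptions.

```python
import math

def cosine_annealing_lr(current_round: int, total_rounds: int,
                        lr_max: float = 5e-5, lr_min: float = 1e-6) -> float:
    """Cosine-annealed learning rate: starts at lr_max in round 0 and
    decays smoothly to lr_min by the final round."""
    cos_inner = math.pi * current_round / total_rounds
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(cos_inner))

# The server would pass this rate to sampled clients each round, e.g.:
# lr = cosine_annealing_lr(server_round, num_server_rounds)
```
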
Outputs
- Global PEFT checkpoints: results/<timestamp>/peft_<round>
- Communication budget log: comm_cost.txt
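A back-of-envelope way to reason about the communication budget: with LoRA, only the adapter weights travel between server and clients, and each sampled client both downloads and uploads them once per round. The helper below is a hypothetical sketch of that arithmetic, not the accounting actually written to comm_cost.txt.

```python
def comm_cost_mb(trainable_params: int, clients_per_round: int,
                 bytes_per_param: int = 4) -> float:
    """Rough per-round traffic in MB: each sampled client downloads
    and uploads the adapter once (hence the factor of 2)."""
    return 2 * clients_per_round * trainable_params * bytes_per_param / 1e6
```

For example, a 1M-parameter adapter shared with 2 clients per round costs roughly 16 MB per round under these assumptions.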
Compatibility
Validated dependency set:
- flwr[simulation]>=1.16.0
- torch==2.3.1
- transformers==4.47.0
- trl==0.8.1
- bitsandbytes==0.45.0
- peft==0.6.2
Flower Hub
Once published, anyone can scaffold a new project from this one with:
flwr new @gfedops/flowertune-gccl-medical-llm2