FlowerTune GCCL General NLP
Federated instruction tuning of GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct on the vicgalle/alpaca-gpt4 dataset, using the Flower Simulation Engine.
Quickstart
pip install -e .
huggingface-cli login
flwr run .
Project Structure
flowertune-gccl-general-nlp/
├── flowertune_gccl_general_nlp/
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
Configuration
All run settings live in pyproject.toml under [tool.flwr.app.config].
Key defaults:
- model.name = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct"
- model.quantization = 4
- model.lora.peft-lora-r = 8
- model.lora.peft-lora-alpha = 16
- dataset.name = "vicgalle/alpaca-gpt4"
- strategy.fraction-fit = 0.1
- num-server-rounds = 10
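Assembled into TOML, the defaults above would look roughly like this (a sketch; the exact key layout in the generated pyproject.toml may differ):

```toml
[tool.flwr.app.config]
model.name = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct"
model.quantization = 4
model.lora.peft-lora-r = 8
model.lora.peft-lora-alpha = 16
dataset.name = "vicgalle/alpaca-gpt4"
strategy.fraction-fit = 0.1
num-server-rounds = 10
```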
Example override:
flwr run . --run-config "num-server-rounds=20 strategy.fraction-fit=0.2"
Method
- LoRA-based PEFT fine-tuning with trl.SFTTrainer
- FedAvg aggregation
- Cosine annealing learning rate schedule
- IID partitioning via flwr-datasets
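Two of the ingredients above, FedAvg aggregation and cosine annealing, can be sketched in plain Python. This is illustrative only; the real implementations live in strategy.py and the trainer configuration, and all names here are hypothetical:

```python
import math


def fedavg(client_updates):
    """FedAvg: weighted average of flat parameter lists by example count.

    client_updates: list of (params, num_examples) pairs, where params is
    a list of floats (e.g. flattened LoRA adapter weights).
    """
    total = sum(n for _, n in client_updates)
    agg = [0.0] * len(client_updates[0][0])
    for params, n in client_updates:
        weight = n / total
        for i, p in enumerate(params):
            agg[i] += weight * p
    return agg


def cosine_lr(round_idx, num_rounds, lr_max, lr_min=0.0):
    """Cosine-annealed learning rate over federation rounds."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * round_idx / num_rounds)
    )
```

For example, with two clients holding 1 and 3 examples, `fedavg` weights the second client three times as heavily, and `cosine_lr` decays smoothly from `lr_max` at round 0 to `lr_min` at the final round.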
Outputs
- Global PEFT checkpoints: results/<timestamp>/peft_<round>
- Communication budget log: comm_cost.txt
Compatibility
Validated dependency set:
- flwr[simulation]>=1.13.0
- torch==2.3.1
- transformers==4.47.0
- trl==0.8.1
- bitsandbytes==0.45.0
- peft==0.14.0
Flower Hub
Once published, the project can be pulled from Flower Hub with:
flwr new @gfedops/flowertune-gccl-general-nlp