
@gfedops/flowertune-gccl-general-nlp2

flwr new @gfedops/flowertune-gccl-general-nlp2

FlowerTune GCCL General NLP2

Federated instruction tuning of internlm/internlm3-8b-instruct on vicgalle/alpaca-gpt4 using the Flower Simulation Engine.

Quickstart

pip install -e .
huggingface-cli login
flwr run .

Project Structure

flowertune-gccl-general-nlp2/
├── flowertune_gccl_general_nlp2/
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
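dataset.py handles data loading and partitioning. As a rough illustration of the IID split it performs, here is a hand-rolled sketch (not the actual flwr-datasets partitioner API; function name and seed are illustrative):

```python
import random

def iid_partition(num_examples: int, num_clients: int, seed: int = 42) -> list[list[int]]:
    """Shuffle example indices and deal them evenly across clients (IID split)."""
    indices = list(range(num_examples))
    random.Random(seed).shuffle(indices)
    # Deal round-robin so partition sizes differ by at most one example.
    return [indices[cid::num_clients] for cid in range(num_clients)]

parts = iid_partition(num_examples=100, num_clients=10)
assert all(len(p) == 10 for p in parts)                         # even split
assert sorted(i for p in parts for i in p) == list(range(100))  # disjoint, full coverage
```

In the real project this is delegated to flwr-datasets, which shards the Hugging Face dataset per client ID.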

Configuration

All run settings live in pyproject.toml under [tool.flwr.app.config].

Key defaults:

  • model.name = "internlm/internlm3-8b-instruct"
  • model.quantization = 4
  • model.lora.peft-lora-r = 32
  • model.lora.peft-lora-alpha = 64
  • dataset.name = "vicgalle/alpaca-gpt4"
  • strategy.fraction-fit = 0.1
  • num-server-rounds = 10
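In pyproject.toml these defaults would look roughly like the fragment below (a sketch assembled from the list above; the generated file may group or spell keys slightly differently):

```toml
[tool.flwr.app.config]
model.name = "internlm/internlm3-8b-instruct"
model.quantization = 4
model.lora.peft-lora-r = 32
model.lora.peft-lora-alpha = 64
dataset.name = "vicgalle/alpaca-gpt4"
strategy.fraction-fit = 0.1
num-server-rounds = 10
```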

Example override:

flwr run . --run-config "num-server-rounds=20 strategy.fraction-fit=0.2"

Method

  • LoRA-based PEFT fine-tuning with trl.SFTTrainer
  • FedAvg aggregation
  • Cosine annealing learning rate schedule
  • IID partitioning via flwr-datasets
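Two of the points above are easy to make concrete. FedAvg reduces to an example-count-weighted average of client parameters, and cosine annealing decays the learning rate smoothly over rounds. A minimal pure-Python sketch (the actual run uses Flower's built-in FedAvg strategy and the trainer's scheduler; these helpers are illustrative):

```python
import math

def fedavg(client_params: list[list[float]], num_examples: list[int]) -> list[float]:
    """Weighted average of client parameter vectors; weights are local example counts."""
    total = sum(num_examples)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, num_examples)) / total
        for i in range(dim)
    ]

def cosine_lr(round_idx: int, total_rounds: int,
              lr_max: float = 5e-5, lr_min: float = 1e-6) -> float:
    """Cosine-annealed learning rate across server rounds (lr_max and lr_min are example values)."""
    cos = math.cos(math.pi * round_idx / total_rounds)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + cos)

# Two clients; the one holding twice the data counts twice as much.
avg = fedavg([[0.0, 2.0], [3.0, 5.0]], num_examples=[2, 1])
assert avg == [1.0, 3.0]
```

The schedule starts at lr_max at round 0 and reaches lr_min at the final round.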

Outputs

  • Global PEFT checkpoints: results/<timestamp>/peft_<round>
  • Communication budget log: comm_cost.txt
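The communication budget in comm_cost.txt can be approximated from the LoRA adapter size: only adapter weights travel, once down and once up per sampled client per round. A back-of-the-envelope sketch (the parameter count and client numbers below are illustrative, not measured from this project):

```python
def comm_cost_gb(num_lora_params: int, clients_per_round: int, num_rounds: int,
                 bytes_per_param: int = 4) -> float:
    """Total traffic in GB, assuming each sampled client downloads and uploads
    the full adapter once per round at fp32 precision."""
    per_round = 2 * clients_per_round * num_lora_params * bytes_per_param
    return per_round * num_rounds / 1e9

# e.g. an ~80M-parameter adapter, 2 clients per round, 10 rounds
cost = comm_cost_gb(80_000_000, clients_per_round=2, num_rounds=10)
```

The base model never leaves the clients, which is what keeps federated LoRA tuning communication-feasible.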

Compatibility

Validated dependency set:

  • flwr[simulation]>=1.16.0
  • torch==2.3.1
  • transformers==4.47.0
  • trl==0.8.1
  • bitsandbytes==0.45.0
  • peft==0.6.2

Flower Hub

After publishing, fetch the project with:

flwr new @gfedops/flowertune-gccl-general-nlp2