FlowerTune LLM (General NLP): Federated Instruction Tuning
This Flower app performs federated instruction tuning for a pretrained LLM on the General NLP task of the FlowerTune LLM leaderboard.
- Dataset (default): vicgalle/alpaca-gpt4 via Flower Datasets
- Fine-tuning: LoRA (Hugging Face PEFT)
- Orchestration: Flower Simulation Engine
- Aggregation: FedAvg
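For context, FedAvg aggregates client updates by averaging them weighted by each client's number of local training examples. A minimal pure-Python sketch of that idea (illustrative only; in this app the aggregation is performed by Flower's built-in FedAvg strategy, not by this function):

```python
# Sketch of FedAvg weighted averaging. Each result is a pair of
# (list of weight tensors as nested lists, number of local examples).
from typing import List, Tuple

def fedavg(results: List[Tuple[List[List[float]], int]]) -> List[List[float]]:
    """Average client weight updates, weighted by local example counts."""
    total = sum(n for _, n in results)
    num_layers = len(results[0][0])
    aggregated = []
    for layer in range(num_layers):
        layer_sum = [0.0] * len(results[0][0][layer])
        for weights, n in results:
            for i, w in enumerate(weights[layer]):
                layer_sum[i] += w * n / total
        aggregated.append(layer_sum)
    return aggregated

# Two clients: one trained on 10 examples, one on 30
clients = [([[1.0, 2.0]], 10), ([[3.0, 4.0]], 30)]
print(fedavg(clients))  # [[2.5, 3.5]]
```

With LoRA fine-tuning, only the small adapter matrices travel between clients and server, which keeps each round's communication cost low relative to the full model.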
All app settings are configured in pyproject.toml under [tool.flwr.app.config].
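As a rough illustration of what such a section can look like, here is a hypothetical fragment; the key names and values below are assumptions for illustration, not this project's actual configuration (check the repository's pyproject.toml for the real keys):

```toml
# Hypothetical example — key names and values are illustrative only.
[tool.flwr.app.config]
model.name = "some-org/some-base-model"  # base LLM to fine-tune (placeholder)
model.lora.rank = 8                      # LoRA rank (assumed key)
train.learning-rate = 5e-5               # client learning rate (assumed key)
num-server-rounds = 10                   # federated rounds (assumed key)
```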
Quickstart
1) Create environment & install
conda create -n flwr-tune python=3.9
conda activate flwr-tune
pip install -e .
2) Run
flwr run
Tip: If the base model requires gated access on Hugging Face, make sure you have access and have authenticated with hf auth login.
Project structure
flowertune-llm-general-nlp/
├── flwr_nlp/
│   ├── client_app.py
│   ├── server_app.py
│   ├── dataset.py
│   ├── models.py
│   └── strategy.py
├── pyproject.toml
└── README.md
Notes
- This repository is intentionally minimal for leaderboard submission: it contains only the federated fine-tuning app code and required configuration.
- Logs, checkpoints, and downstream evaluation code are excluded.