
@zexili/flwr-nlp


Publisher: @zexili
Downloads: 0
Runs: 0

Quickstart

flwr new @zexili/flwr-nlp

Readme

FlowerTune LLM (General NLP): Federated Instruction Tuning

This Flower app performs federated instruction tuning for a pretrained LLM on the General NLP task of the FlowerTune LLM leaderboard.

  • Dataset (default): vicgalle/alpaca-gpt4 via Flower Datasets
  • Fine-tuning: LoRA (🤗 PEFT)
  • Orchestration: Flower Simulation Engine
  • Aggregation: FedAvg
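
FedAvg aggregates the clients' model updates as a weighted average, where each client's weight is its number of training examples. A minimal, framework-free sketch of that rule (illustrative only; this app's actual aggregation lives in strategy.py and operates on LoRA adapter weights):

```python
def fedavg(client_updates):
    """FedAvg: average client parameters, weighted by example counts.

    client_updates is a list of (parameters, num_examples) pairs, where
    parameters is a flat list of floats standing in for model weights.
    """
    total_examples = sum(n for _, n in client_updates)
    num_params = len(client_updates[0][0])
    return [
        sum(params[i] * n for params, n in client_updates) / total_examples
        for i in range(num_params)
    ]

# Two clients with toy 2-parameter "models" and unequal dataset sizes:
# the client with 300 examples pulls the average toward its weights.
print(fedavg([([1.0, 0.0], 100), ([0.0, 1.0], 300)]))  # [0.25, 0.75]
```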

All app settings are configured in pyproject.toml under [tool.flwr.app.config].
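
For orientation, such a section typically looks like the sketch below. The keys and values here are hypothetical placeholders, not this app's actual settings; consult the pyproject.toml in this repository for the real ones:

```toml
# Illustrative example only -- key names and values are placeholders.
[tool.flwr.app.config]
model.name = "vicgalle/alpaca-gpt4"   # hypothetical; set to your base model
num-server-rounds = 10                # hypothetical round count
train.learning-rate = 5e-5            # hypothetical client learning rate
```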

Quickstart

1) Create environment & install

conda create -n flwr-tune python=3.9
conda activate flwr-tune

pip install -e .

2) Run

flwr run

Tip: If the base model requires gated access on Hugging Face, make sure you have access and have authenticated with hf auth login.

Project structure

flowertune-llm-general-nlp/
├─ flwr_nlp/
│  ├─ client_app.py
│  ├─ server_app.py
│  ├─ dataset.py
│  ├─ models.py
│  └─ strategy.py
├─ pyproject.toml
└─ README.md

Notes

  • This repository is intentionally minimal for leaderboard submission: it contains only the federated fine-tuning app code and required configuration.
  • Logs, checkpoints, and downstream evaluation code are excluded.