@mrs83/flowertune_phi-4-nlp
flwr new @mrs83/flowertune_phi-4-nlp

FlowerTune LLM on General NLP Dataset
This app performs federated instruction tuning with a pretrained microsoft/phi-4 model on a general NLP dataset.
We use Flower Datasets to download, partition, and preprocess the dataset. Flower's Simulation Engine is used to simulate the LLM fine-tuning process in a federated way, which allows users to run the training on a single GPU.
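As an illustration, loading and partitioning the dataset with Flower Datasets could look roughly like the sketch below. It is a minimal sketch only: the dataset name and number of partitions are placeholders, and the actual values are configured in pyproject.toml.

```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import IidPartitioner

# Split the instruction dataset IID across the simulated clients.
# "your/nlp-dataset" and num_partitions=10 are placeholder values.
partitioner = IidPartitioner(num_partitions=10)
fds = FederatedDataset(
    dataset="your/nlp-dataset",
    partitioners={"train": partitioner},
)

# Each simulated client loads and preprocesses only its own partition.
client_train = fds.load_partition(partition_id=0, split="train")
```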
Methodology
This app performs federated LLM fine-tuning with DoRA using the 🤗PEFT library.
The clients' models are aggregated with the FedAvg strategy.
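As a rough, illustrative sketch (not this app's exact server code), a Flower ServerApp wiring up FedAvg with the values listed below could look like this:

```python
from flwr.common import Context
from flwr.server import ServerApp, ServerAppComponents, ServerConfig
from flwr.server.strategy import FedAvg


def server_fn(context: Context) -> ServerAppComponents:
    # Aggregate client adapter updates with plain FedAvg; the values mirror
    # the configuration below (1 round, 10% of clients sampled for fit).
    strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.0)
    return ServerAppComponents(strategy=strategy, config=ServerConfig(num_rounds=1))


app = ServerApp(server_fn=server_fn)
```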
microsoft/phi-4
- Precision: bf16 for model weights.
- Quantization: 4-bit quantization for reduced memory usage.
- DoRA Configuration (see the sketch after this list):
  - Rank (r): 4
  - Alpha: 16
  - Target Modules:
    - qkv_proj
    - o_proj
    - gate_up_proj
    - down_proj
- Training Configuration:
  - Batch size: 4
  - Maximum number of steps: 10
  - Total number of rounds: 1
  - Fraction fit per round: 0.1
- Learning Rate Scheduler:
  - Cosine Annealing over rounds, where:
    - Maximum LR: 5e-5
    - Minimum LR: 5e-6
  - Constant learning rate scheduler over steps
- Strategy: FedAvg
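A minimal sketch of how these settings map onto 🤗 PEFT and bitsandbytes is shown below. It is illustrative only (the authoritative values live in pyproject.toml), and the cosine_annealing helper is a hypothetical name for the per-round learning rate schedule:

```python
import math

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization for reduced memory usage, with bf16 compute.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# DoRA adapter matching the configuration listed above.
peft_config = LoraConfig(
    r=4,
    lora_alpha=16,
    target_modules=["qkv_proj", "o_proj", "gate_up_proj", "down_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)


def cosine_annealing(current_round: int, total_rounds: int,
                     lr_max: float = 5e-5, lr_min: float = 5e-6) -> float:
    """Cosine-anneal the client learning rate across federated rounds."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * current_round / total_rounds)
    )
```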
Environment setup
Project dependencies are defined in pyproject.toml. Install them in an activated Python environment with:
pip install -e .
To run this on AMD ROCm, install with:
pip install -e ".[rocm]" --extra-index-url https://download.pytorch.org/whl/rocm7.1/
Running the challenge
You can run the experiment with default config values by running the following command:
flwr run
The default config values are defined in the [tool.flwr.app.config] entry of pyproject.toml and are loaded automatically.
Model saving
The global PEFT model checkpoints are saved every round after aggregation on the server side by default. The interval can be configured with train.save-every-round under the [tool.flwr.app.config] entry in pyproject.toml.
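For reference, server-side checkpointing of the aggregated adapter could be implemented roughly as follows. This is a sketch under assumptions, not necessarily this app's exact code: save_global_model is a hypothetical helper, and save_every corresponds to train.save-every-round.

```python
from collections import OrderedDict

import torch
from flwr.common import Parameters, parameters_to_ndarrays
from peft import get_peft_model_state_dict, set_peft_model_state_dict


def save_global_model(model, parameters: Parameters,
                      server_round: int, save_every: int = 1) -> None:
    """Write the aggregated PEFT adapter to disk every `save_every` rounds."""
    if server_round % save_every != 0:
        return
    # Load the aggregated weights back into the (DoRA) adapter...
    ndarrays = parameters_to_ndarrays(parameters)
    keys = get_peft_model_state_dict(model).keys()
    state_dict = OrderedDict(
        (key, torch.tensor(value)) for key, value in zip(keys, ndarrays)
    )
    set_peft_model_state_dict(model, state_dict)
    # ...and save a per-round checkpoint on the server.
    model.save_pretrained(f"results/peft_round_{server_round}")
```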
Flower App by ethicalabs.ai - AI/ML research and development - HuggingFace