@aashmohd/diffusion_example_privacy

Federated Diffusion Model Training with Flower (Quickstart Example)
This example demonstrates how to train a Diffusion Model (based on Segmind Tiny-SD) in a Federated Learning (FL) environment using Flower.
The training uses Low-Rank Adaptation (LoRA) to enable lightweight fine-tuning of a diffusion model in a distributed setup, even with limited compute resources.
The example uses the Oxford Flowers dataset, a collection of RGB flower images commonly used for training image-generation models.
In this example, the model is first fine-tuned under normal training conditions and then extended with different Differential Privacy (DP) techniques to demonstrate privacy-preserving learning in a federated setup.
Two levels of privacy protection are implemented:
Sample-level privacy:
Applied during training using Opacus, which enables Differentially Private Stochastic Gradient Descent (DP-SGD) through gradient clipping and noise injection. This protects the contribution of individual training samples.
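Conceptually, each DP-SGD batch update looks like the sketch below (plain Python for illustration only; Opacus performs the equivalent per-sample clipping and noising on PyTorch tensors, and the function name and toy gradients here are hypothetical):

```python
import math
import random

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Illustrative DP-SGD aggregation for one batch:
    1. clip each sample's gradient to L2 norm <= clip_norm,
    2. sum the clipped gradients and add Gaussian noise,
    3. average over the batch.
    `per_sample_grads` is a list of flat gradient vectors (lists of floats)."""
    rng = random.Random(seed)
    dim = len(per_sample_grads[0])
    clipped_sum = [0.0] * dim
    for grad in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in grad))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # never upscale small grads
        for i, x in enumerate(grad):
            clipped_sum[i] += x * scale
    sigma = noise_multiplier * clip_norm  # noise calibrated to the clip bound
    noisy = [s + rng.gauss(0.0, sigma) for s in clipped_sum]
    return [x / len(per_sample_grads) for x in noisy]
```

With `noise_multiplier=0` this reduces to ordinary clipped averaging, which makes the clipping step easy to check in isolation: a gradient of `[3.0, 4.0]` (norm 5) is scaled down to `[0.6, 0.8]`.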
Output-level privacy:
Applied to the model updates or outputs using Laplace and Gaussian noise mechanisms, adding controlled noise to reduce the risk of information leakage from shared results.
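As a rough illustration of the output-level mechanisms (plain Python; the function name is ours, not the example's actual API — in the app the noise is applied to the shared update values):

```python
import math
import random

def add_output_noise(update, mechanism="laplace", scale=0.01, seed=0):
    """Illustrative output perturbation: add Laplace or Gaussian noise to a
    flat list of model-update values before they leave the client.
    `scale` plays the role of b for Laplace and sigma for Gaussian."""
    rng = random.Random(seed)

    def laplace():
        # Inverse-CDF sampling: X = -b * sgn(u) * ln(1 - 2|u|), u ~ U(-0.5, 0.5)
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def gaussian():
        return rng.gauss(0.0, scale)

    draw = laplace if mechanism == "laplace" else gaussian
    return [v + draw() for v in update]
```

The choice of `scale` controls the privacy/utility trade-off: larger values hide more about the true update but degrade the aggregated model more.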
This combination shows how diffusion model fine-tuning can be performed in both standard and privacy-enhanced settings, highlighting the trade-off between model performance and privacy guarantees.
                ┌──────────────────────┐
                │    Central Server    │
                │ (Global Model + FL)  │
                └──────────┬───────────┘
                           │
                  Sends global model
                           │
         ┌─────────────────┴─────────────────┐
         │                                   │
 ┌───────────────┐                   ┌───────────────┐
 │   Client 1    │        ...        │   Client N    │
 │ (Local Data)  │                   │ (Local Data)  │
 └───────┬───────┘                   └───────┬───────┘
         │                                   │
         │    LoRA Fine-Tuning of Tiny-SD    │
         │                                   │
         ▼                                   ▼
  Sample-Level Differential Privacy (Opacus DP-SGD)
    • Gradient clipping
    • Noise added to gradients
    • Protects individual training samples
         │                                   │
         ▼                                   ▼
  Output-Level Differential Privacy
    • Laplace or Gaussian noise added
    • Protects shared model updates
         │                                   │
         └──────────── Sends updates ────────┘
                           │
                           ▼
               Server Aggregates Updates
                (Federated Averaging)
                           │
                           ▼
                Updated Global Model
Note:
Using both Sample-level differential privacy (e.g., Opacus DP-SGD) and Output-level noise mechanisms (Laplace or Gaussian) together is generally not recommended. Applying noise at multiple stages can lead to excessive total noise, which may significantly degrade model performance, slow convergence, and reduce image quality. Additionally, combining mechanisms complicates privacy accounting, making it harder to accurately estimate the overall privacy guarantee. In practice, one well-configured privacy method is often more effective and easier to manage than stacking multiple noise-based protections.
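The accounting point can be made concrete with the basic sequential composition theorem: when the same data passes through several (ε, δ)-DP mechanisms, the guarantees add. A minimal sketch (the function name is ours; real accountants, such as the RDP accountant Opacus uses, give tighter bounds):

```python
def basic_composition(budgets):
    """Basic sequential composition for (epsilon, delta)-DP:
    running k mechanisms on the same data costs at most
    (sum of epsilons, sum of deltas). This additive bound is why
    stacking sample-level and output-level noise inflates the
    total privacy budget."""
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

# DP-SGD at (2.0, 1e-5) plus an output mechanism at (1.0, 0)
# together consume a combined (3.0, 1e-5) budget.
```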
Overview
In this example:
- Each client fine-tunes only the LoRA parameters of the Stable Diffusion UNet.
- The Oxford-Flowers dataset is partitioned among multiple clients using Flower Datasets.
- The server performs FedAvg aggregation on the LoRA weights after each round.
- After all rounds, the aggregated LoRA adapter is saved into final_lora_model/ and can be merged with the base model for image generation.
This provides a clean example of how federated diffusion fine-tuning can be performed using Diffusers + PEFT + Flower.
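The server-side step above is plain federated averaging over the LoRA tensors. A minimal sketch of the weighted average (pure Python over flat lists; the real strategy operates on Flower ArrayRecords rather than dicts of lists):

```python
def fedavg(client_updates):
    """Weighted average (FedAvg) of LoRA parameter dicts.

    client_updates: list of (params, num_examples) pairs, where
    params maps a LoRA tensor name to a flat list of floats.
    Each client's contribution is weighted by its local example count.
    """
    total = sum(n for _, n in client_updates)
    first = client_updates[0][0]
    return {
        name: [
            sum(params[name][i] * n for params, n in client_updates) / total
            for i in range(len(first[name]))
        ]
        for name in first
    }
```

For example, averaging `[0.0, 2.0]` (1 example) with `[2.0, 2.0]` (3 examples) yields `[1.5, 2.0]`, since the second client carries three times the weight.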
Set up the project
Fetch the app
Install Flower:
pip install flwr
Fetch the app:
flwr new @aashmohd/diffusion_example_privacy
This will create a new directory called diffusion_example_privacy containing the following files:
diffusion_example_privacy
├── diffusion_example_privacy
│   ├── __init__.py
│   ├── client_app.py    # Defines your ClientApp logic
│   ├── server_app.py    # Defines the ServerApp and strategy
│   └── task.py          # Model setup, data loading, and training functions
├── pyproject.toml       # Project metadata and dependencies
└── README.md            # This file
Install dependencies and project
Install the dependencies defined in pyproject.toml as well as the diffusion_example_privacy package.
pip install -e .
Run the Example
You can run your Flower project in both simulation and deployment mode without changing the code. If you are new to Flower, we recommend starting with simulation mode, as it requires fewer components to be launched manually. By default, flwr run uses the Simulation Engine.
Run with the Simulation Engine
[TIP] This example runs faster when the ClientApps have access to a GPU. If your system has one, you can make use of it by configuring the backend.client-resources component in your Flower Configuration. Check the Simulation Engine documentation to learn more about Flower simulations and how to optimize them.
# Run with the default federation (CPU only)
flwr run .
You can add a new connection in your Flower Configuration (find it via flwr config list):
[superlink.local-gpu]
options.num-supernodes = 2
options.backend.client-resources.num-cpus = 2    # each ClientApp is assumed to use 2 CPUs
options.backend.client-resources.num-gpus = 0.5  # at most 2 ClientApps will run on a given GPU (lower it to increase parallelism)
Then run the app:
# Run with the `local-gpu` settings
flwr run . local-gpu
You can also override some of the ClientApp and ServerApp settings defined in pyproject.toml. For example:
flwr run . --run-config "num-server-rounds=5 fraction-train=0.1"
To enable sample-level privacy with Opacus, either set the corresponding flag to true in the TOML configuration file or run the following command.
flwr run . --run-config "num-server-rounds=5 use-sample-dp=true"
To enable output-level differential privacy, set the corresponding flag to true in the TOML configuration file or run the command below. You can choose either the Laplace or Gaussian noise mechanism.
flwr run . --run-config "num-server-rounds=5 use-output-dp=true"
Result output
Example of training step results for each client and corresponding server logs:
02/28/2026 07:28:27:DEBUG:Initialising: RayBackend
02/28/2026 07:28:27:DEBUG:Backend config: {'name': 'ray', 'client_resources': {'num_cpus': 2, 'num_gpus': 1}, 'init_args': {'logging_level': 30, 'log_to_driver': True}, 'actor': {'tensorflow': 0}}
Fetching 12 files: 100%|████████████████████████| 12/12 [00:12<00:00,  1.07s/it]
Loading pipeline components...: 100%|█████████████| 5/5 [00:03<00:00,  1.60it/s]
INFO :      Starting federated diffusion training for 3 rounds...
INFO :      Using base model: segmind/tiny-sd
INFO :      Training LoRA parameters only (108 layers)
INFO :      Starting FedAvg strategy:
INFO :      ├── Number of rounds: 3
INFO :      ├── ArrayRecord (2.49 MB)
INFO :      ├── ConfigRecord (train): (empty!)
INFO :      ├── ConfigRecord (evaluate): (empty!)
INFO :      ├──> Sampling:
INFO :      │    ├── Fraction: train (1.00) | evaluate (1.00)
INFO :      │    ├── Minimum nodes: train (2) | evaluate (2)
INFO :      │    └── Minimum available nodes: 2
INFO :      └──> Keys in records:
INFO :           ├── Weighted by: 'num-examples'
INFO :           ├── ArrayRecord key: 'arrays'
INFO :           └── ConfigRecord key: 'config'
INFO :
INFO :      [ROUND 1/3]
INFO :      configure_train: Sampled 2 nodes (out of 2)
INFO :      aggregate_train: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.28225478910112073}
INFO :      configure_evaluate: Sampled 2 nodes (out of 2)
INFO :      aggregate_evaluate: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.5157486884130371}
INFO :
INFO :      [ROUND 2/3]
INFO :      configure_train: Sampled 2 nodes (out of 2)
INFO :      aggregate_train: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.28093226803181315}
INFO :      configure_evaluate: Sampled 2 nodes (out of 2)
INFO :      aggregate_evaluate: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.516112885872523}
INFO :
INFO :      [ROUND 3/3]
INFO :      configure_train: Sampled 2 nodes (out of 2)
INFO :      aggregate_train: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.2749157878648085}
INFO :      configure_evaluate: Sampled 2 nodes (out of 2)
INFO :      aggregate_evaluate: Received 2 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'loss': 0.5138557020160887}
INFO :
INFO :      Strategy execution finished in 908.96s
INFO :
INFO :      Final results:
INFO :
INFO :      Global Arrays:
INFO :          ArrayRecord (2.486 MB)
INFO :
INFO :      Aggregated ClientApp-side Train Metrics:
INFO :      { 1: {'loss': '2.8225e-01'},
INFO :        2: {'loss': '2.8093e-01'},
INFO :        3: {'loss': '2.7492e-01'}}
INFO :
INFO :      Aggregated ClientApp-side Evaluate Metrics:
INFO :      { 1: {'loss': '5.1575e-01'},
INFO :        2: {'loss': '5.1611e-01'},
INFO :        3: {'loss': '5.1386e-01'}}
INFO :
INFO :      ServerApp-side Evaluate Metrics:
INFO :      {}
INFO :
Saved final LoRA model at: final_lora_model
02/28/2026 07:44:23:DEBUG:ServerApp finished running.
02/28/2026 07:44:23:DEBUG:Triggered stop event for Simulation Engine.
02/28/2026 07:44:23:DEBUG:Terminated 2 actors
02/28/2026 07:44:24:DEBUG:Terminated RayBackend
02/28/2026 07:44:24:DEBUG:Stopping Simulation Engine now.