Changelog

Unreleased

v1.14.0 (2024-12-20)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel Nata Nugraha, Dimitris Stripelis, Heng Pan, Javier, Meng Yan, Mohammad Naseri, Robert Steiner, Taner Topal, Vidit Khandelwal, Yan Gao

What’s new?

  • Introduce flwr stop command (#4647, #4629, #4694, #4646, #4634, #4700, #4684, #4642, #4682, #4683, #4639, #4668, #4658, #4693, #4704, #4729)

    The flwr stop command is now available to stop a submitted run. You can use it as follows:

    • flwr stop <run-id>

    • flwr stop <run-id> [<app>] [<federation>]

    This command instructs the SuperLink to terminate the specified run. While the execution of ServerApp and ClientApp processes will not be interrupted instantly, they will be informed of the stopped run and will gracefully terminate when they next communicate with the SuperLink.

  • Add JSON format output for CLI commands (#4610, #4613, #4710, #4621, #4612, #4619, #4611, #4620, #4712, #4633, #4632, #4711, #4714, #4734, #4738)

    The flwr run, flwr ls, and flwr stop commands now support JSON-formatted output using the --format json flag. This makes it easier to parse and integrate CLI output with other tools. Feel free to check the “How to Use CLI JSON output” guide for details!
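
    For example, the JSON output can be consumed from a script. The sketch below is a minimal illustration, assuming flwr is available on the PATH; the exact JSON schema is defined by the CLI, so the snippet only parses and pretty-prints whatever the command returns:

    import json
    import subprocess

    # Run flwr ls with JSON output and parse the result
    result = subprocess.run(
        ["flwr", "ls", "--format", "json"],
        capture_output=True,
        text=True,
        check=True,
    )
    runs = json.loads(result.stdout)
    print(json.dumps(runs, indent=2))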

  • Document Microsoft Azure deployment (#4625)

    A new how-to guide shows a simple Flower deployment for federated learning on Microsoft Azure VM instances.

  • Introduce OIDC user authentication infrastructure (#4630, #4244, #4602, #4618, #4717, #4719, #4745)

    Flower has supported SuperNode authentication since Flower 1.9. This release adds initial extension points for user authentication via OpenID Connect (OIDC).

  • Update FedRep baseline (#4681)

    We have started the process of migrating some baselines from using start_simulation to be launched via flwr run. We chose FedRep as the first baseline to migrate due to its very impressive results. New baselines can be created following a flwr run-compatible format by starting from the flwr new template for baselines. We welcome contributions! Read more in the how to contribute a baseline documentation.

  • Revamp simulation series tutorial (#4663, #4696)

    We have updated the Step-by-step Tutorial Series for Simulations. It now shows how to create and run Flower Apps via flwr run. The videos walk you through creating custom strategies, making effective use of metrics between ClientApp and ServerApp, creating global model checkpoints, logging metrics to Weights & Biases, and more.

  • Improve connection reliability (#4649, #4636, #4637)

    Connections between ServerApp<>SuperLink, ClientApp<>SuperNode, and SuperLink<>Simulation are now more robust against network issues.

  • Fix flwr new issue on Windows (#4653)

    The flwr new command now works correctly on Windows by setting UTF-8 encoding, ensuring compatibility across all platforms when creating and transferring files.

  • Update examples and flwr new templates (#4725, #4724, #4589, #4690, #4708, #4689, #4740, #4741, #4744)

    Code examples and flwr new templates have been updated to improve compatibility and usability. Notable changes include removing unnecessary numpy dependencies, upgrading the mlx version, and enhancing the authentication example. A link to previous tutorial versions has also been added for reference.

  • Improve documentation (#4713, #4624, #4606, #4596, #4695, #4654, #4656, #4603, #4727, #4723, #4598, #4661, #4655, #4659)

    Documentation has been improved with updated docstrings, typo fixes, and new contributions guidance. Automated updates ensure source texts for translations stay current.

  • Update infrastructure and CI/CD (#4614, #4686, #4587, #4715, #4728, #4679, #4675, #4680, #4676)

  • Bugfixes (#4677, #4671, #4670, #4674, #4687, #4605, #4736)

  • General improvements (#4631, #4660, #4599, #4672, #4705, #4688, #4691, #4706, #4709, #4623, #4697, #4597, #4721, #4730, #4720, #4747, #4716, #4752)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Incompatible changes

  • Remove context property from Client and NumPyClient (#4652)

    Now that Context is available as an argument in client_fn and server_fn, the context property is removed from Client and NumPyClient. This feature has been deprecated for several releases and is now removed.

v1.13.1 (2024-11-26)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Heng Pan, Javier, Robert Steiner

What’s new?

  • Fix SimulationEngine Executor for SuperLink (#4563, #4568, #4570)

    Resolved an issue that prevented SuperLink from functioning correctly when using the SimulationEngine executor.

  • Improve FAB build and install (#4571)

    An updated FAB build and install process produces smaller FAB files and doesn’t rely on pip install any more. It also resolves an issue where all files were unnecessarily included in the FAB file. The flwr CLI commands now correctly pack only the necessary files, such as .md, .toml and .py, ensuring more efficient and accurate packaging.

  • Update embedded-devices example (#4381)

    The example now uses the flwr run command and the Deployment Engine.

  • Update Documentation (#4566, #4569, #4560, #4556, #4581, #4537, #4562, #4582)

    Enhanced documentation across various aspects, including updates to translation workflows, Docker-related READMEs, and recommended datasets. Improvements also include formatting fixes for dataset partitioning docs and better references to resources in the datasets documentation index.

  • Update Infrastructure and CI/CD (#4577, #4578, #4558, #4551, #3356, #4559, #4575)

  • General improvements (#4557, #4564, #4573, #4561, #4579, #4572)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

v1.13.0 (2024-11-20)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Dimitris Stripelis, Heng Pan, Javier, Mohammad Naseri, Robert Steiner, Waris Gill, William Lindskog, Yan Gao, Yao Xu, wwjang

What’s new?

  • Introduce flwr ls command (#4460, #4459, #4477)

    The flwr ls command is now available to display details about all runs (or one specific run). It supports the following usage options:

    • flwr ls --runs [<app>] [<federation>]: Lists all runs.

    • flwr ls --run-id <run-id> [<app>] [<federation>]: Displays details for a specific run.

    This command provides information including the run ID, FAB ID and version, run status, elapsed time, and timestamps for when the run was created, started running, and finished.

  • Fuse SuperLink and SuperExec (#4358, #4403, #4406, #4357, #4359, #4354, #4229, #4283, #4352)

    SuperExec has been integrated into SuperLink, enabling SuperLink to directly manage ServerApp processes (flwr-serverapp). The flwr CLI now targets SuperLink’s Exec API. Additionally, SuperLink introduces two isolation modes for running ServerApps: subprocess (default) and process, which can be specified using the --isolation {subprocess,process} flag.

  • Introduce flwr-serverapp command (#4394, #4370, #4367, #4350, #4364, #4400, #4363, #4401, #4388, #4402)

    The flwr-serverapp command has been introduced as a CLI entry point that runs a ServerApp process. This process communicates with SuperLink to load and execute the ServerApp object, enabling isolated execution and more flexible deployment.

  • Improve simulation engine and introduce flwr-simulation command (#4433, #4486, #4448, #4427, #4438, #4421, #4430, #4462)

    The simulation engine has been significantly improved, resulting in dramatically faster simulations. Additionally, the flwr-simulation command has been introduced to enhance maintainability and provide a dedicated entry point for running simulations.

  • Improve SuperLink message management (#4378, #4369)

    SuperLink now validates the destination node ID of instruction messages and checks the TTL (time-to-live) for reply messages. When pulling reply messages, an error reply will be generated and returned if the corresponding instruction message does not exist, has expired, or if the reply message exists but has expired.

  • Introduce FedDebug baseline (#3783)

    FedDebug is a framework that enhances debugging in Federated Learning by enabling interactive inspection of the training process and automatically identifying clients responsible for degrading the global model’s performance—all without requiring testing data or labels. Learn more in the FedDebug baseline documentation.

  • Update documentation (#4511, #4010, #4396, #4499, #4269, #3340, #4482, #4387, #4342, #4492, #4474, #4500, #4514, #4236, #4112, #3367, #4501, #4373, #4409, #4356, #4520, #4524, #4525, #4526, #4527, #4528, #4545, #4522, #4534, #4513, #4529, #4441, #4530, #4470, #4553, #4531, #4554, #4555, #4552, #4533)

    Many documentation pages and tutorials have been updated to improve clarity, fix typos, incorporate user feedback, and stay aligned with the latest features in the framework. Key updates include adding a guide for designing stateful ClientApp objects, updating the comprehensive guide for setting up and running Flower’s Simulation Engine, updating the XGBoost, scikit-learn, and JAX quickstart tutorials to use flwr run, updating the DP guide, removing outdated pages, updating Docker docs, and marking legacy functions as deprecated. The Secure Aggregation Protocols page has also been updated.

  • Update examples and templates (#4510, #4368, #4121, #4329, #4382, #4248, #4395, #4386, #4408)

    Multiple examples and templates have been updated to enhance usability and correctness. The updates include the 30-minute-tutorial, quickstart-jax, quickstart-pytorch, advanced-tensorflow examples, and the FlowerTune template.

  • Improve Docker support (#4506, #4424, #4224, #4413, #4414, #4336, #4420, #4407, #4422, #4532, #4540)

    Docker images and configurations have been updated, including updating Docker Compose files to version 1.13.0, refactoring the Docker build matrix for better maintainability, updating docker/build-push-action to 6.9.0, and improving Docker documentation.

  • Allow app installation without internet access (#4479, #4475)

    The flwr build command now includes a wheel file in the FAB, enabling Flower app installation in environments without internet access via flwr install.

  • Improve flwr log command (#4391, #4411, #4390, #4397)

  • Refactor SuperNode for better maintainability and efficiency (#4439, #4348, #4512, #4485)

  • Support NumPy 2.0 (#4440)

  • Update infrastructure and CI/CD (#4466, #4419, #4338, #4334, #4456, #4446, #4415)

  • Bugfixes (#4404, #4518, #4452, #4376, #4493, #4436, #4410, #4442, #4375, #4515)

  • General improvements (#4454, #4365, #4423, #4516, #4509, #4498, #4371, #4449, #4488, #4478, #4392, #4483, #4517, #4330, #4458, #4347, #4429, #4463, #4496, #4508, #4444, #4417, #4504, #4418, #4480, #4455, #4468, #4385, #4487, #4393, #4489, #4389, #4507, #4469, #4340, #4353, #4494, #4461, #4362, #4473, #4405, #4416, #4453, #4491, #4539, #4542, #4538, #4543, #4541, #4550, #4481)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Deprecations

  • Deprecate Python 3.9

    Flower is deprecating support for Python 3.9 as several of its dependencies are phasing out compatibility with this version. While no immediate changes have been made, users are encouraged to plan for upgrading to a supported Python version.

Incompatible changes

  • Remove flower-superexec command (#4351)

    The flower-superexec command, previously used to launch SuperExec, is no longer functional, as SuperExec has been merged into SuperLink. It is no longer necessary to start a separate SuperExec when starting SuperLink.

  • Remove flower-server-app command (#4490)

    The flower-server-app command has been removed. To start a Flower app, please use the flwr run command instead.

  • Remove app argument from flower-supernode command (#4497)

    The usage of flower-supernode <app-dir> has been removed. SuperNode will now load the FAB delivered by SuperLink, and it is no longer possible to directly specify an app directory.

  • Remove support for non-app simulations (#4431)

    The simulation engine (via flower-simulation) now exclusively supports passing an app.

  • Rename CLI arguments for flower-superlink command (#4412)

    The --driver-api-address argument has been renamed to --serverappio-api-address in the flower-superlink command to reflect the renaming of the Driver service to the ServerAppIo service.

  • Rename CLI arguments for flwr-serverapp and flwr-clientapp commands (#4495)

    The CLI arguments have been renamed for clarity and consistency. Specifically, --superlink for flwr-serverapp is now --serverappio-api-address, and --supernode for flwr-clientapp is now --clientappio-api-address.

v1.12.0 (2024-10-14)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Audris, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Heng Pan, Javier, Jiahao Tan, Julian Rußmeyer, Mohammad Naseri, Ray Sun, Robert Steiner, Yan Gao, xiliguguagua

What’s new?

  • Introduce SuperExec log streaming (#3577, #3584, #4242, #3611, #3613)

    Flower now supports log streaming from a remote SuperExec using the flwr log command. This new feature allows you to monitor logs from SuperExec in real time via flwr log <run-id> (or flwr log <run-id> <app-dir> <federation>).

  • Improve flwr new templates (#4291, #4292, #4293, #4294, #4295)

    The flwr new command templates for MLX, NumPy, sklearn, JAX, and PyTorch have been updated to improve usability and consistency across frameworks.

  • Migrate ID handling to use unsigned 64-bit integers (#4170, #4237, #4243)

    Node IDs, run IDs, and related fields have been migrated from signed 64-bit integers (sint64) to unsigned 64-bit integers (uint64). To support this change, the uint64 type is fully supported in all communications. You may now use uint64 values in config and metric dictionaries. For Python users, this means that int values larger than the maximum sint64 value and up to the maximum uint64 value can now be used, as illustrated in the sketch below.
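
    A minimal illustration of the widened value range (plain Python, no Flower-specific API involved; the metric name is made up):

    SINT64_MAX = 2**63 - 1  # 9_223_372_036_854_775_807
    UINT64_MAX = 2**64 - 1  # 18_446_744_073_709_551_615

    # Values above SINT64_MAX were previously rejected; they can now be used
    # in config and metric dictionaries as long as they fit into uint64.
    metrics = {"bytes-processed": SINT64_MAX + 1}
    assert 0 <= metrics["bytes-processed"] <= UINT64_MAX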

  • Add Flower architecture explanation (#3270)

    A new Flower architecture explainer page introduces Flower components step-by-step. Check out the EXPLANATIONS section of the Flower documentation if you’re interested.

  • Introduce FedRep baseline (#3790)

    FedRep is a federated learning algorithm that learns shared data representations across clients while allowing each to maintain personalized local models, balancing collaboration and individual adaptation. Read all the details in the paper: “Exploiting Shared Representations for Personalized Federated Learning” (arxiv)

  • Improve FlowerTune template and LLM evaluation pipelines (#4286, #3769, #4272, #4257, #4220, #4282, #4171, #4228, #4258, #4296, #4287, #4217, #4249, #4324, #4219, #4327)

    Refined evaluation pipelines, metrics, and documentation for the upcoming FlowerTune LLM Leaderboard across multiple domains including Finance, Medical, and general NLP. Stay tuned for the official launch—we welcome all federated learning and LLM enthusiasts to participate in this exciting challenge!

  • Enhance Docker Support and Documentation (#4191, #4251, #4190, #3928, #4298, #4192, #4136, #4187, #4261, #4177, #4176, #4189, #4297, #4226)

    Upgraded Ubuntu base image to 24.04, added SBOM and gcc to Docker images, and comprehensively updated Docker documentation including quickstart guides and distributed Docker Compose instructions.

  • Introduce Flower glossary (#4165, #4235)

    Added the Federated Learning glossary to the Flower repository, located under the flower/glossary/ directory. This resource aims to provide clear definitions and explanations of key FL concepts. Community contributions are highly welcomed to help expand and refine this knowledge base — this is probably the easiest way to become a Flower contributor!

  • Implement Message Time-to-Live (TTL) (#3620, #3596, #3615, #3609, #3635)

    Added comprehensive TTL support for messages in Flower’s SuperLink. Messages are now automatically expired and cleaned up based on configurable TTL values, available through the low-level API (and used by default in the high-level API).

  • Improve FAB handling (#4303, #4264, #4305, #4304)

    An 8-character hash is now appended to the FAB file name. The flwr install command installs FABs with a more flattened folder structure, reducing it from 3 levels to 1.

  • Update documentation (#3341, #3338, #3927, #4152, #4151, #3993)

    Updated quickstart tutorials (PyTorch Lightning, TensorFlow, Hugging Face, Fastai) to use the new flwr run command and removed the default title from the documentation base template. A new blockchain example has been added to the FAQ.

  • Update example projects (#3716, #4007, #4130, #4234, #4206, #4188, #4247, #4331)

    Refreshed multiple example projects including vertical FL, PyTorch (advanced), Pandas, Secure Aggregation, and XGBoost examples. Optimized Hugging Face quickstart with a smaller language model and removed legacy simulation examples.

  • Update translations (#4070, #4316, #4252, #4256, #4210, #4263, #4259)

  • General improvements (#4239, #4276, #4204, #4184, #4227, #4183, #4202, #4250, #4267, #4246, #4240, #4265, #4238, #4275, #4318, #4178, #4315, #4241, #4289, #4290, #4181, #4208, #4225, #4314, #4174, #4203, #4274, #3154, #4201, #4268, #4254, #3990, #4212, #2938, #4205, #4222, #4313, #3936, #4278, #4319, #4332, #4333)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Incompatible changes

  • Drop Python 3.8 support and update minimum version to 3.9 (#4180, #4213, #4193, #4199, #4196, #4195, #4198, #4194)

    Python 3.8 support was deprecated in Flower 1.9, and this release removes support. Flower now requires Python 3.9 or later (Python 3.11 is recommended). CI and documentation were updated to use Python 3.9 as the minimum supported version. Flower now supports Python 3.9 to 3.12.

v1.11.1 (2024-09-11)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Heng Pan, Javier, Robert Steiner, Yan Gao

Improvements

  • Implement keys/values/items methods for TypedDict (#4146)

  • Fix parsing of --executor-config if present (#4125)

  • Adjust framework name in templates docstrings (#4127)

  • Update flwr new Hugging Face template (#4169)

  • Fix flwr new FlowerTune template (#4123)

  • Add buffer time after ServerApp thread initialization (#4119)

  • Handle unsuitable resources for simulation (#4143)

  • Update example READMEs (#4117)

  • Update SuperNode authentication docs (#4160)

Incompatible changes

None

v1.11.0 (2024-08-30)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Danny, Edoardo Gabrielli, Heng Pan, Javier, Meng Yan, Michal Danilowski, Mohammad Naseri, Robert Steiner, Steve Laskaridis, Taner Topal, Yan Gao

What’s new?

  • Deliver Flower App Bundle (FAB) to SuperLink and SuperNodes (#4006, #3945, #3999, #4027, #3851, #3946, #4003, #4029, #3942, #3957, #4020, #4044, #3852, #4019, #4031, #4036, #4049, #4017, #3943, #3944, #4011, #3619)

    Dynamic code updates are here! flwr run can now ship and install the latest version of your ServerApp and ClientApp to an already-running federation (SuperLink and SuperNodes).

    How does it work? flwr run bundles your Flower app into a single FAB (Flower App Bundle) file. It then ships this FAB file, via the SuperExec, to both the SuperLink and those SuperNodes that need it. This allows you to keep SuperExec, SuperLink and SuperNodes running as permanent infrastructure, and then ship code updates (including completely new projects!) dynamically.

    flwr run is all you need.

  • Introduce isolated ClientApp execution (#3970, #3976, #4002, #4001, #4034, #4037, #3977, #4042, #3978, #4039, #4033, #3971, #4035, #3973, #4032)

    The SuperNode can now run your ClientApp in a fully isolated way. In an enterprise deployment, this allows you to set strict limits on what the ClientApp can and cannot do.

    flower-supernode supports three --isolation modes:

    • Unset: The SuperNode runs the ClientApp in the same process (as in previous versions of Flower). This is the default mode.

    • --isolation=subprocess: The SuperNode starts a subprocess to run the ClientApp.

    • --isolation=process: The SuperNode expects an externally-managed process to run the ClientApp. This external process is not managed by the SuperNode, so it has to be started beforehand and terminated manually. The common way to use this isolation mode is via the new flwr/clientapp Docker image.

  • Improve Docker support for enterprise deployments (#4050, #4090, #3784, #3998, #4094, #3722)

    Flower 1.11 ships many Docker improvements that are especially useful for enterprise deployments:

    • flwr/supernode comes with a new Alpine Docker image.

    • flwr/clientapp is a new image to be used with the --isolation=process option. In this mode, SuperNode and ClientApp run in two different Docker containers. flwr/supernode (preferably the Alpine version) runs the long-running SuperNode with --isolation=process. flwr/clientapp runs the ClientApp. This is the recommended way to deploy Flower in enterprise settings.

    • New all-in-one Docker Compose enables you to easily start a full Flower Deployment Engine on a single machine.

    • Completely new Docker documentation: https://flower.ai/docs/framework/docker/index.html

  • Improve SuperNode authentication (#4043, #4047, #4074)

    SuperNode auth has been improved in several ways, including improved logging, improved testing, and improved error handling.

  • Update flwr new templates (#3933, #3894, #3930, #3931, #3997, #3979, #3965, #4013, #4064)

    All flwr new templates have been updated to show the latest recommended use of Flower APIs.

  • Improve Simulation Engine (#4095, #3913, #4059, #3954, #4071, #3985, #3988)

    The Flower Simulation Engine comes with several updates, including improved run config support, verbose logging, simulation backend configuration via flwr run, and more.

  • Improve RecordSet (#4052, #3218, #4016)

    RecordSet is the core object to exchange model parameters, configuration values and metrics between ClientApp and ServerApp. This release ships several smaller improvements to RecordSet and related *Record types.

  • Update documentation (#3972, #3925, #4061, #3984, #3917, #3900, #4066, #3765, #4021, #3906, #4063, #4076, #3920, #3916)

    Many parts of the documentation, including the main tutorial, have been migrated to show new Flower APIs and other new Flower features like the improved Docker support.

  • Migrate code example to use new Flower APIs (#3758, #3701, #3919, #3918, #3934, #3893, #3833, #3922, #3846, #3777, #3874, #3873, #3935, #3754, #3980, #4089, #4046, #3314, #3316, #3295, #3313)

    Many code examples have been migrated to use new Flower APIs.

  • Update Flower framework, framework internals and quality infrastructure (#4018, #4053, #4098, #4067, #4105, #4048, #4107, #4069, #3915, #4101, #4108, #3914, #4068, #4041, #4040, #3986, #4026, #3961, #3975, #3983, #4091, #3982, #4079, #4073, #4060, #4106, #4080, #3974, #3996, #3991, #3981, #4093, #4100, #3939, #3955, #3940, #4038)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Deprecations

  • Deprecate accessing Context via Client.context (#3797)

    Now that both client_fn and server_fn receive a Context object, accessing Context via Client.context is deprecated. Client.context will be removed in a future release. If you need to access Context in your Client implementation, pass it manually when creating the Client instance in client_fn:

    def client_fn(context: Context) -> Client:
        return FlowerClient(context).to_client()
    

Incompatible changes

  • Update CLIs to accept an app directory instead of ClientApp and ServerApp (#3952, #4077, #3850)

    The CLI commands flower-supernode and flower-server-app now accept an app directory as argument (instead of references to a ClientApp or ServerApp). An app directory is any directory containing a pyproject.toml file (with the appropriate Flower config fields set). The easiest way to generate a compatible project structure is to use flwr new.

  • Disable flower-client-app CLI command (#4022)

    flower-client-app has been disabled. Use flower-supernode instead.

  • Use spaces instead of commas for separating config args (#4000)

    When passing configs (run config, node config) to Flower, you now need to separate key-value pairs using spaces instead of commas. For example:

    flwr run . --run-config "learning-rate=0.01 num_rounds=10"  # Works
    

    Previously, you could pass configs using commas, like this:

    flwr run . --run-config "learning-rate=0.01,num_rounds=10"  # Doesn't work
    
  • Remove flwr example CLI command (#4084)

    The experimental flwr example CLI command has been removed. Use flwr new to generate a project and then run it using flwr run.

v1.10.0 (2024-07-24)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Danny, Gustavo Bertoli, Heng Pan, Ikko Eltociear Ashimine, Javier, Jiahao Tan, Mohammad Naseri, Robert Steiner, Sebastian van der Voort, Taner Topal, Yan Gao

What’s new?

  • Introduce flwr run (beta) (#3810, #3826, #3880, #3807, #3800, #3814, #3811, #3809, #3819)

    Flower 1.10 ships the first beta release of the new flwr run command. flwr run can run different projects using flwr run path/to/project, lets you easily switch between different federations using flwr run . federation, and runs your Flower project using either local simulation or the new (experimental) SuperExec service. This allows Flower to scale federated learning from fast local simulation to large-scale production deployment, seamlessly. All projects generated with flwr new are immediately runnable using flwr run. Give it a try: use flwr new to generate a project and then run it using flwr run.

  • Introduce run config (#3751, #3750, #3845, #3824, #3746, #3728, #3730, #3725, #3729, #3580, #3578, #3576, #3798, #3732, #3815)

    The new run config feature allows you to run your Flower project in different configurations without having to change a single line of code. You can now build a configurable ServerApp and ClientApp that read configuration values at runtime. This enables you to specify config values like learning-rate=0.01 in pyproject.toml (under the [tool.flwr.app.config] key). These config values can then be easily overridden via flwr run --run-config learning-rate=0.02, and read from Context using lr = context.run_config["learning-rate"]. Create a new project using flwr new to see run config in action.
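
    A minimal sketch of reading a run config value inside a ClientApp (FlowerClient is a placeholder for your own NumPyClient subclass):

    from flwr.client import ClientApp
    from flwr.common import Context

    def client_fn(context: Context):
        # Reads the value defined under [tool.flwr.app.config] in pyproject.toml,
        # or the value overridden via flwr run --run-config learning-rate=0.02
        lr = context.run_config["learning-rate"]
        return FlowerClient(lr=lr).to_client()  # FlowerClient defined elsewhere

    app = ClientApp(client_fn=client_fn)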

  • Generalize client_fn signature to client_fn(context: Context) -> Client (#3779, #3697, #3694, #3696)

    The client_fn signature has been generalized to client_fn(context: Context) -> Client. It now receives a Context object instead of the (now deprecated) cid: str. Context allows accessing node_id, node_config and run_config, among other things. This enables you to build a configurable ClientApp that leverages the new run config system.

    The previous signature client_fn(cid: str) is now deprecated and support for it will be removed in a future release. Use client_fn(context: Context) -> Client everywhere.

  • Introduce new server_fn(context) (#3773, #3796, #3771)

    In addition to the new client_fn(context: Context), a new server_fn(context: Context) -> ServerAppComponents can now be passed to ServerApp (instead of passing, for example, a Strategy directly). This enables you to leverage the full Context on the server side to build a configurable ServerApp.

  • Relaunch all flwr new templates (#3877, #3821, #3587, #3795, #3875, #3859, #3760)

    All flwr new templates have been significantly updated to showcase new Flower features and best practices. This includes using flwr run and the new run config feature. You can now easily create a new project using flwr new and, after following the instructions to install it, flwr run it.

  • Introduce flower-supernode (preview) (#3353)

    The new flower-supernode CLI is here to replace flower-client-app. flower-supernode brings full multi-app support to the Flower client-side. It also allows passing --node-config to the SuperNode, which is accessible in your ClientApp via Context (using the new client_fn(context: Context) signature).

  • Introduce node config (#3782, #3780, #3695, #3886)

    A new node config feature allows you to pass a static configuration to the SuperNode. This configuration is read-only and available to every ClientApp running on that SuperNode. A ClientApp can access the node config via Context (context.node_config).

  • Introduce SuperExec (experimental) (#3605, #3723, #3731, #3589, #3604, #3622, #3838, #3720, #3606, #3602, #3603, #3555, #3808, #3724, #3658, #3629)

    This is the first experimental release of Flower SuperExec, a new service that executes your runs. It’s not ready for production deployment just yet, but don’t hesitate to give it a try if you’re interested.

  • Add new federated learning with tabular data example (#3568)

    A new code example demonstrates a federated learning setup using the Flower framework on the Adult Census Income tabular dataset.

  • Create generic adapter layer (preview) (#3538, #3536, #3540)

    A new generic gRPC adapter layer allows 3rd-party frameworks to integrate with Flower in a transparent way. This makes Flower more modular and allows for integration into other federated learning solutions and platforms.

  • Refactor Flower Simulation Engine (#3581, #3471, #3804, #3468, #3839, #3806, #3861, #3543, #3472, #3829, #3469)

    The Simulation Engine was significantly refactored. This results in faster and more stable simulations. It is also the foundation for upcoming changes that aim to provide the next level of performance and configurability in federated learning simulations.

  • Optimize Docker containers (#3591)

    Flower Docker containers were optimized and updated to use the latest Flower framework features.

  • Improve logging (#3776, #3789)

    Logging has been improved to be more concise and to surface the details you actually care about.

  • Refactor framework internals (#3621, #3792, #3772, #3805, #3583, #3825, #3597, #3802, #3569)

    As always, many parts of the Flower framework and quality infrastructure were improved and updated.

Documentation improvements

Deprecations

  • Deprecate client_fn(cid: str)

    client_fn used to have a signature client_fn(cid: str) -> Client. This signature is now deprecated. Use the new signature client_fn(context: Context) -> Client instead. The new argument context allows accessing node_id, node_config, run_config and other Context features. When running using the simulation engine (or using flower-supernode with a custom --node-config partition-id=...), context.node_config["partition-id"] will return an int partition ID that can be used with Flower Datasets to load a different partition of the dataset on each simulated or deployed SuperNode.
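
    A minimal sketch of the new signature (FlowerClient and load_partition are placeholders for your own client class and data-loading helper):

    from flwr.client import Client
    from flwr.common import Context

    def client_fn(context: Context) -> Client:
        # Available when running via the simulation engine or when the SuperNode
        # was started with e.g. --node-config "partition-id=0"
        partition_id = int(context.node_config["partition-id"])
        train_data = load_partition(partition_id)  # placeholder helper
        return FlowerClient(train_data).to_client()  # FlowerClient defined elsewhere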

  • Deprecate passing Server/ServerConfig/Strategy/ClientManager to ServerApp directly

    Creating ServerApp using ServerApp(config=config, strategy=strategy) is now deprecated. Instead of passing Server/ServerConfig/Strategy/ClientManager to ServerApp directly, pass them wrapped in a server_fn(context: Context) -> ServerAppComponents function, like this: ServerApp(server_fn=server_fn). ServerAppComponents can hold references to Server/ServerConfig/Strategy/ClientManager. In addition to that, server_fn allows you to access Context (for example, to read the run_config).
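
    A minimal sketch of the new pattern (the num-server-rounds key is assumed to be defined in the project’s pyproject.toml):

    from flwr.common import Context
    from flwr.server import ServerApp, ServerAppComponents, ServerConfig
    from flwr.server.strategy import FedAvg

    def server_fn(context: Context) -> ServerAppComponents:
        # Read the number of rounds from the run config and build the components
        num_rounds = int(context.run_config["num-server-rounds"])
        strategy = FedAvg()
        config = ServerConfig(num_rounds=num_rounds)
        return ServerAppComponents(strategy=strategy, config=config)

    app = ServerApp(server_fn=server_fn)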

Incompatible changes

  • Remove support for client_ids in start_simulation (#3699)

    The (rarely used) feature that allowed passing custom client_ids to the start_simulation function was removed. This removal is part of a bigger effort to refactor the simulation engine and unify how the Flower internals work in simulation and deployment.

  • Remove flower-driver-api and flower-fleet-api (#3418)

    The two deprecated CLI commands flower-driver-api and flower-fleet-api were removed in an effort to streamline the SuperLink developer experience. Use flower-superlink instead.

v1.9.0 (2024-06-10)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Chong Shen Ng, Daniel J. Beutel, Daniel Nata Nugraha, Heng Pan, Javier, Mahdi Beitollahi, Robert Steiner, Taner Topal, Yan Gao, bapic, mohammadnaseri

What’s new?

Deprecations

  • Deprecate Python 3.8 support

    Python 3.8 will stop receiving security fixes in October 2024. Support for Python 3.8 is now deprecated and will be removed in an upcoming release.

  • Deprecate (experimental) flower-driver-api and flower-fleet-api (#3416, #3420)

    Flower 1.9 deprecates the two (experimental) commands flower-driver-api and flower-fleet-api. Both commands will be removed in an upcoming release. Use flower-superlink instead.

  • Deprecate --server in favor of --superlink (#3518)

    The commands flower-server-app and flower-client-app should use --superlink instead of the now deprecated --server. Support for --server will be removed in a future release.

Incompatible changes

  • Replace flower-superlink CLI option --certificates with --ssl-ca-certfile, --ssl-certfile and --ssl-keyfile (#3512, #3408)

    SSL-related flower-superlink CLI arguments were restructured in an incompatible way. Instead of passing a single --certificates flag with three values, you now need to pass three flags (--ssl-ca-certfile, --ssl-certfile and --ssl-keyfile) with one value each. Check out the SSL connections documentation page for details.

  • Remove SuperLink --vce option (#3513)

    Instead of separately starting a SuperLink and a ServerApp for simulation, simulations must now be started using the single flower-simulation command.

  • Merge --grpc-rere and --rest SuperLink options (#3527)

    To simplify the usage of flower-superlink, previously separate sets of CLI options for gRPC and REST were merged into one unified set of options. Consult the Flower CLI reference documentation for details.

v1.8.0 (2024-04-03)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Daniel J. Beutel, Daniel Nata Nugraha, Danny, Gustavo Bertoli, Heng Pan, Ikko Eltociear Ashimine, Jack Cook, Javier, Raj Parekh, Robert Steiner, Sebastian van der Voort, Taner Topal, Yan Gao, mohammadnaseri, tabdar-khan

What’s new?

  • Introduce Flower Next high-level API (stable) (#3002, #2934, #2958, #3173, #3174, #2923, #2691, #3079, #2961, #2924, #3166, #3031, #3057, #3000, #3113, #2957, #3183, #3180, #3035, #3189, #3185, #3190, #3191, #3195, #3197)

    The Flower Next high-level API is stable! Flower Next is the future of Flower - all new features (like Flower Mods) will be built on top of it. You can start to migrate your existing projects to Flower Next by using ServerApp and ClientApp (check out quickstart-pytorch or quickstart-tensorflow, a detailed migration guide will follow shortly). Flower Next allows you to run multiple projects concurrently (we call this multi-run) and execute the same project in either simulation environments or deployment environments without having to change a single line of code. The best part? It’s fully compatible with existing Flower projects that use Strategy, NumPyClient & co.

  • Introduce Flower Next low-level API (preview) (#3062, #3034, #3069)

    In addition to the Flower Next high-level API that uses Strategy, NumPyClient & co, Flower 1.8 also comes with a preview version of the new Flower Next low-level API. The low-level API allows for granular control of every aspect of the learning process by sending/receiving individual messages to/from client nodes. The new ServerApp supports registering a custom main function that allows writing custom training loops for methods like async FL, cyclic training, or federated analytics. The new ClientApp supports registering train, evaluate and query functions that can access the raw message received from the ServerApp. New abstractions like RecordSet, Message and Context further enable sending multiple models, multiple sets of config values and metrics, stateful computations on the client node and implementations of custom SMPC protocols, to name just a few.

  • Introduce Flower Mods (preview) (#3054, #2911, #3083)

    Flower Modifiers (we call them Mods) can intercept messages and analyze, edit or handle them directly. Mods can be used to develop pluggable modules that work across different projects. Flower 1.8 already includes mods to log the size of a message, the number of parameters sent over the network, differential privacy with fixed clipping and adaptive clipping, local differential privacy and secure aggregation protocols SecAgg and SecAgg+. The Flower Mods API is released as a preview, but researchers can already use it to experiment with arbitrary SMPC protocols.
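
    A minimal sketch of a custom mod, assuming the preview mod callable signature (message, context, call_next) and registration via the mods argument of ClientApp (client_fn is assumed to be defined elsewhere):

    from flwr.client import ClientApp
    from flwr.common import Context, Message

    def logging_mod(msg: Message, ctx: Context, call_next) -> Message:
        # Runs before and after the ClientApp handles each message
        print("ClientApp is about to handle a message")
        reply = call_next(msg, ctx)
        print("ClientApp produced a reply")
        return reply

    app = ClientApp(client_fn=client_fn, mods=[logging_mod])  # client_fn defined elsewhere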

  • Fine-tune LLMs with LLM FlowerTune (#3029, #3089, #3092, #3100, #3114, #3162, #3172)

    We are introducing LLM FlowerTune, an introductory example that demonstrates federated LLM fine-tuning of pre-trained Llama2 models on the Alpaca-GPT4 dataset. The example is built to be easily adapted to use different models and/or datasets. Read our blog post LLM FlowerTune: Federated LLM Fine-tuning with Flower for more details.

  • Introduce built-in Differential Privacy (preview) (#2798, #2959, #3038, #3147, #2909, #2893, #2892, #3039, #3074)

    Built-in Differential Privacy is here! Flower supports both central and local differential privacy (DP). Central DP can be configured with either fixed or adaptive clipping. The clipping can happen either on the server-side or the client-side. Local DP does both clipping and noising on the client-side. A new documentation page explains Differential Privacy approaches and a new how-to guide describes how to use the new Differential Privacy components in Flower.

  • Introduce built-in Secure Aggregation (preview) (#3120, #3110, #3108)

    Built-in Secure Aggregation is here! Flower now supports different secure aggregation protocols out-of-the-box. The best part? You can add secure aggregation to your Flower projects with only a few lines of code. In this initial release, we include support for SecAgg and SecAgg+, but more protocols will be implemented shortly. We’ll also add detailed docs that explain secure aggregation and how to use it in Flower. You can already check out the new code example that shows how to use Flower to easily combine Federated Learning, Differential Privacy and Secure Aggregation in the same project.

  • Introduce flwr CLI (preview) (#2942, #3055, #3111, #3130, #3136, #3094, #3059, #3049, #3142)

    A new flwr CLI command allows creating new Flower projects (flwr new) and then running them using the Simulation Engine (flwr run).

  • Introduce Flower Next Simulation Engine (#3024, #3061, #2997, #2783, #3184, #3075, #3047, #2998, #3009, #3008)

    The Flower Simulation Engine can now run Flower Next projects. For notebook environments, there’s also a new run_simulation function that can run ServerApp and ClientApp.
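
    A minimal sketch for a notebook environment, assuming server_app and client_app are ServerApp and ClientApp instances defined in earlier cells:

    from flwr.simulation import run_simulation

    # Run a Flower Next simulation with 10 virtual SuperNodes
    # (server_app and client_app are defined in earlier notebook cells)
    run_simulation(
        server_app=server_app,
        client_app=client_app,
        num_supernodes=10,
    )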

  • Handle SuperNode connection errors (#2969)

    A SuperNode will now try to reconnect indefinitely to the SuperLink in case of connection errors. The arguments --max-retries and --max-wait-time can now be passed to the flower-client-app command. --max-retries defines the number of reconnection attempts the client makes before giving up on reaching the SuperLink, and --max-wait-time defines the total time after which the SuperNode gives up trying to reconnect to the SuperLink.

  • General updates to Flower Baselines (#2904, #2482, #2985, #2968)

    There’s a new FedStar baseline. Several other baselines have been updated as well.

  • Improve documentation and translations (#3050, #3044, #3043, #2986, #3041, #3046, #3042, #2978, #2952, #3167, #2953, #3045, #2654, #3082, #2990, #2989)

    As usual, we merged many smaller and larger improvements to the documentation. A special thank you goes to Sebastian van der Voort for landing a big documentation PR!

  • General updates to Flower Examples (#3134, #2996, #2930, #2967, #2467, #2910, #2918, #2773, #3063, #3116, #3117)

    Two new examples show federated training of a Vision Transformer (ViT) and federated learning in a medical context using the popular MONAI library. quickstart-pytorch and quickstart-tensorflow demonstrate the new Flower Next ServerApp and ClientApp. Many other examples received considerable updates as well.

  • General improvements (#3171, #3099, #3003, #3145, #3017, #3085, #3012, #3119, #2991, #2970, #2980, #3086, #2932, #2928, #2941, #2933, #3181, #2973, #2992, #2915, #3040, #3022, #3032, #2902, #2931, #3005, #3132, #3115, #2944, #3064, #3106, #2974, #3178, #2993, #3186, #3091, #3125, #3093, #3013, #3033, #3133, #3068, #2916, #2975, #2984, #2846, #3077, #3143, #2921, #3101, #2927, #2995, #2972, #2912, #3065, #3028, #2922, #2982, #2914, #3179, #3080, #2994, #3187, #2926, #3018, #3144, #3011, #3152, #2836, #2929, #2943, #2955, #2954)

Incompatible changes

None

v1.7.0 (2024-02-05)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Aasheesh Singh, Adam Narozniak, Aml Hassan Esmil, Charles Beauville, Daniel J. Beutel, Daniel Nata Nugraha, Edoardo Gabrielli, Gustavo Bertoli, HelinLin, Heng Pan, Javier, M S Chaitanya Kumar, Mohammad Naseri, Nikos Vlachakis, Pritam Neog, Robert Kuska, Robert Steiner, Taner Topal, Yahia Salaheldin Shaaban, Yan Gao, Yasar Abbas

What’s new?

Incompatible changes

  • Deprecate start_numpy_client (#2563, #2718)

    Until now, clients of type NumPyClient needed to be started via start_numpy_client. In our efforts to consolidate framework APIs, we have introduced changes, and now all client types should be started via start_client. To continue using NumPyClient clients, you simply need to first call the .to_client() method and then pass the returned Client object to start_client, as shown in the sketch below. The examples and the documentation have been updated accordingly.
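
    A minimal migration sketch (FlowerClient is a placeholder for your own NumPyClient subclass, and the server address is illustrative):

    import flwr as fl

    # Before (deprecated):
    # fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=FlowerClient())

    # After: convert the NumPyClient to a Client and use start_client
    fl.client.start_client(
        server_address="127.0.0.1:8080",
        client=FlowerClient().to_client(),  # FlowerClient defined elsewhere
    )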

  • Deprecate legacy DP wrappers (#2749)

    Legacy DP wrapper classes are deprecated, but still functional. This is in preparation for an all-new pluggable version of differential privacy support in Flower.

  • Make optional arg --callable in flower-client a required positional arg (#2673)

  • Rename certificates to root_certificates in Driver (#2890)

  • Drop experimental Task fields (#2866, #2865)

    Experimental fields sa, legacy_server_message and legacy_client_message were removed from Task message. The removed fields are superseded by the new RecordSet abstraction.

  • Retire MXNet examples (#2724)

    The development of the MXNet framework has ended and the project is now archived on GitHub. Existing MXNet examples won’t receive updates.

v1.6.0 (2023-11-28)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Aashish Kolluri, Adam Narozniak, Alessio Mora, Barathwaja S, Charles Beauville, Daniel J. Beutel, Daniel Nata Nugraha, Gabriel Mota, Heng Pan, Ivan Agarský, JS.KIM, Javier, Marius Schlegel, Navin Chandra, Nic Lane, Peterpan828, Qinbin Li, Shaz-hash, Steve Laskaridis, Taner Topal, William Lindskog, Yan Gao, cnxdeveloper, k3nfalt

What’s new?

  • Add experimental support for Python 3.12 (#2565)

  • Add new XGBoost examples (#2612, #2554, #2617, #2618, #2619, #2567)

    We have added a new xgboost-quickstart example alongside a new xgboost-comprehensive example that goes more in-depth.

  • Add Vertical FL example (#2598)

    We had many questions about Vertical Federated Learning using Flower, so we decided to add a simple example for it on the Titanic dataset alongside a tutorial (in the README).

  • Support custom ClientManager in start_driver() (#2292)

  • Update REST API to support create and delete nodes (#2283)

  • Update the Android SDK (#2187)

    Add gRPC request-response capability to the Android SDK.

  • Update the C++ SDK (#2537, #2528, #2523, #2522)

    Add gRPC request-response capability to the C++ SDK.

  • Make HTTPS the new default (#2591, #2636)

    Flower is moving to HTTPS by default. The new flower-server requires passing --certificates, but users can enable --insecure to use HTTP for prototyping. The same applies to flower-client: it can use either user-provided credentials or gRPC-bundled certificates to connect to an HTTPS-enabled server, or it can pass --insecure to opt into insecure HTTP connections.

    For backward compatibility, start_client() and start_numpy_client() will still start in insecure mode by default. In a future release, insecure connections will require user opt-in by passing insecure=True.

  • Unify client API (#2303, #2390, #2493)

    Using the client_fn, Flower clients can interchangeably run as standalone processes (i.e. via start_client) or in simulation (i.e. via start_simulation) without requiring changes to how the client class is defined and instantiated. The to_client() method is introduced to convert a NumPyClient into a Client.

  • Add new Bulyan strategy (#1817, #1891)

    The new Bulyan strategy implements the Bulyan algorithm by El Mhamdi et al. (2018).

  • Add new XGB Bagging strategy (#2611)

  • Introduce WorkloadState (#2564, #2632)

  • Update Flower Baselines

  • General updates to Flower Examples (#2384, #2425, #2526, #2302, #2545)

  • General updates to Flower Baselines (#2301, #2305, #2307, #2327, #2435, #2462, #2463, #2461, #2469, #2466, #2471, #2472, #2470)

  • General updates to the simulation engine (#2331, #2447, #2448, #2294)

  • General updates to Flower SDKs (#2288, #2429, #2555, #2543, #2544, #2597, #2623)

  • General improvements (#2309, #2310, #2313, #2316, #2317, #2349, #2360, #2402, #2446, #2561, #2273, #2267, #2274, #2275, #2432, #2251, #2321, #1936, #2408, #2413, #2401, #2531, #2534, #2535, #2521, #2553, #2596)

    Flower received many improvements under the hood, too many to list here.

Incompatible changes

  • Remove support for Python 3.7 (#2280, #2299, #2304, #2306, #2355, #2356)

    Python 3.7 support was deprecated in Flower 1.5, and this release removes support. Flower now requires Python 3.8.

  • Remove experimental argument rest from start_client (#2324)

    The (still experimental) argument rest was removed from start_client and start_numpy_client. Use transport="rest" to opt into the experimental REST API instead.

v1.5.0 (2023-08-31)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Anass Anhari, Charles Beauville, Dana-Farber, Daniel J. Beutel, Daniel Nata Nugraha, Edoardo Gabrielli, Gustavo Bertoli, Heng Pan, Javier, Mahdi, Steven (Sīchàng), Taner Topal, achiverram28, danielnugraha, eunchung, ruthgal

What’s new?

  • Introduce new simulation engine (#1969, #2221, #2248)

    The new simulation engine has been rewritten from the ground up, yet it remains fully backwards compatible. It offers much improved stability and memory handling, especially when working with GPUs. Simulations transparently adapt to different settings to scale simulation in CPU-only, CPU+GPU, multi-GPU, or multi-node multi-GPU environments.

    Comprehensive documentation includes a new how-to run simulations guide, new simulation-pytorch and simulation-tensorflow notebooks, and a new YouTube tutorial series.

  • Restructure Flower Docs (#1824, #1865, #1884, #1887, #1919, #1922, #1920, #1923, #1924, #1962, #2006, #2133, #2203, #2215, #2122, #2223, #2219, #2232, #2233, #2234, #2235, #2237, #2238, #2242, #2231, #2243, #2227)

    Much effort went into a completely restructured Flower docs experience. The documentation on flower.ai/docs is now divided into Flower Framework, Flower Baselines, Flower Android SDK, Flower iOS SDK, and code example projects.

  • Introduce Flower Swift SDK (#1858, #1897)

    This is the first preview release of the Flower Swift SDK. Flower support on iOS is improving, and alongside the Swift SDK and code example, there is now also an iOS quickstart tutorial.

  • Introduce Flower Android SDK (#2131)

    This is the first preview release of the Flower Kotlin SDK. Flower support on Android is improving, and alongside the Kotlin SDK and code example, there is now also an Android quickstart tutorial.

  • Introduce new end-to-end testing infrastructure (#1842, #2071, #2072, #2068, #2067, #2069, #2073, #2070, #2074, #2082, #2084, #2093, #2109, #2095, #2140, #2137, #2165)

    A new testing infrastructure ensures that new changes stay compatible with existing framework integrations or strategies.

  • Deprecate Python 3.7

    Since Python 3.7 reached its end of life (EOL) on 2023-06-27, support for Python 3.7 is now deprecated and will be removed in an upcoming release.

  • Add new FedTrimmedAvg strategy (#1769, #1853)

    The new FedTrimmedAvg strategy implements Trimmed Mean by Dong Yin et al. (2018).

  • Introduce start_driver (#1697)

    In addition to start_server and using the raw Driver API, there is a new start_driver function that allows for running start_server scripts as a Flower driver with only a single-line code change. Check out the mt-pytorch code example to see a working example using start_driver.

  • Add parameter aggregation to mt-pytorch code example (#1785)

    The mt-pytorch example shows how to aggregate parameters when writing a driver script. The included driver.py and server.py have been aligned to demonstrate both the low-level way and the high-level way of building server-side logic.

  • Migrate experimental REST API to Starlette (#2171)

    The (experimental) REST API used to be implemented in FastAPI, but it has now been migrated to use Starlette directly.

    Please note: The REST request-response API is still experimental and will likely change significantly over time.

  • Introduce experimental gRPC request-response API (#1867, #1901)

    In addition to the existing gRPC API (based on bidirectional streaming) and the experimental REST API, there is now a new gRPC API that uses a request-response model to communicate with client nodes.

    Please note: The gRPC request-response API is still experimental and will likely change significantly over time.

  • Replace the experimental start_client(rest=True) with the new start_client(transport="rest") (#1880)

    The (experimental) start_client argument rest was deprecated in favour of a new argument transport. start_client(transport="rest") will yield the same behaviour as start_client(rest=True) did before. All code should migrate to the new argument transport. The deprecated argument rest will be removed in a future release.
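
    A minimal migration sketch (the address is illustrative, and client refers to a Client instance defined elsewhere):

    import flwr as fl

    # Before (deprecated):
    # fl.client.start_client(server_address="http://localhost:9093", client=client, rest=True)

    # After:
    fl.client.start_client(
        server_address="http://localhost:9093",
        client=client,  # a Client instance, defined elsewhere
        transport="rest",
    )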

  • Add a new gRPC option (#2197)

    We now start a gRPC server with the grpc.keepalive_permit_without_calls option set to 0 by default. This prevents the clients from sending keepalive pings when there is no outstanding stream.

  • Improve example notebooks (#2005)

    There’s a new 30min Federated Learning PyTorch tutorial!

  • Example updates (#1772, #1873, #1981, #1988, #1984, #1982, #2112, #2144, #2174, #2225, #2183)

    Many examples have received significant updates, including simplified advanced-tensorflow and advanced-pytorch examples, improved macOS compatibility of TensorFlow examples, and code examples for simulation. A major upgrade is that all code examples now have a requirements.txt (in addition to pyproject.toml).

  • General improvements (#1872, #1866, #1884, #1837, #1477, #2171)

    Flower received many improvements under the hood, too many to list here.

Incompatible changes

None

v1.4.0 (2023-04-21)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Alexander Viala Bellander, Charles Beauville, Chenyang Ma (Danny), Daniel J. Beutel, Edoardo, Gautam Jajoo, Iacob-Alexandru-Andrei, JDRanpariya, Jean Charle Yaacoub, Kunal Sarkhel, L. Jiang, Lennart Behme, Max Kapsecker, Michał, Nic Lane, Nikolaos Episkopos, Ragy, Saurav Maheshkar, Semo Yang, Steve Laskaridis, Steven (Sīchàng), Taner Topal

What’s new?

  • Introduce support for XGBoost (FedXgbNnAvg strategy and example) (#1694, #1709, #1715, #1717, #1763, #1795)

    XGBoost is a tree-based ensemble machine learning algorithm that uses gradient boosting to improve model accuracy. We added a new FedXgbNnAvg strategy, and a code example that demonstrates the usage of this new strategy in an XGBoost project.

  • Introduce iOS SDK (preview) (#1621, #1764)

    This is a major update for anyone wanting to implement Federated Learning on iOS mobile devices. We now have a Swift iOS SDK under src/swift/flwr that greatly facilitates the app creation process. To showcase its use, the iOS example has also been updated!

  • Introduce new “What is Federated Learning?” tutorial (#1657, #1721)

    A new entry-level tutorial in our documentation explains the basics of Federated Learning. It enables anyone who’s unfamiliar with Federated Learning to start their journey with Flower. Forward it to anyone who’s interested in Federated Learning!

  • Introduce new Flower Baseline: FedProx MNIST (#1513, #1680, #1681, #1679)

    This new baseline replicates the MNIST+CNN task from the paper Federated Optimization in Heterogeneous Networks (Li et al., 2018). It uses the FedProx strategy, which aims at making convergence more robust in heterogeneous settings.

  • Introduce new Flower Baseline: FedAvg FEMNIST (#1655)

    This new baseline replicates an experiment evaluating the performance of the FedAvg algorithm on the FEMNIST dataset from the paper LEAF: A Benchmark for Federated Settings (Caldas et al., 2018).

  • Introduce (experimental) REST API (#1594, #1690, #1695, #1712, #1802, #1770, #1733)

    A new REST API has been introduced as an alternative to the gRPC-based communication stack. In this initial version, the REST API only supports anonymous clients.

    Please note: The REST API is still experimental and will likely change significantly over time.

  • Improve the (experimental) Driver API (#1663, #1666, #1667, #1664, #1675, #1676, #1693, #1662, #1794)

    The Driver API is still an experimental feature, but this release introduces some major upgrades. One of the main improvements is the introduction of an SQLite database to store server state on disk (instead of in-memory). Another improvement is that tasks (instructions or results) that have been delivered will now be deleted. This greatly improves the memory efficiency of a long-running Flower server.

  • Fix spilling issues related to Ray during simulations (#1698)

    While running long simulations, Ray would sometimes spill huge amounts of data, making it impossible for training to continue. This is now fixed! 🎉

  • Add new example using TabNet and Flower (#1725)

    TabNet is a powerful and flexible framework for training machine learning models on tabular data. We now have a federated example using Flower: quickstart-tabnet.

  • Add new how-to guide for monitoring simulations (#1649)

    We now have a documentation guide to help users monitor their performance during simulations.

  • Add training metrics to History object during simulations (#1696)

    The fit_metrics_aggregation_fn can be used to aggregate training metrics, but previous releases did not save the results in the History object. This is now the case!
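
    A minimal sketch of how aggregated training metrics can now be read back from the History object returned by start_simulation; the client_fn, the metric key train_loss, and the attribute name metrics_distributed_fit are assumptions for illustration:

    import flwr as fl

    # Weighted average of the training metrics reported by clients in each round
    def aggregate_fit_metrics(metrics):
        total_examples = sum(num_examples for num_examples, _ in metrics)
        train_loss = sum(num_examples * m["train_loss"] for num_examples, m in metrics) / total_examples
        return {"train_loss": train_loss}

    strategy = fl.server.strategy.FedAvg(fit_metrics_aggregation_fn=aggregate_fit_metrics)
    history = fl.simulation.start_simulation(
        client_fn=client_fn,  # assumed to be defined elsewhere
        num_clients=10,
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=strategy,
    )
    print(history.metrics_distributed_fit)  # aggregated training metrics, per round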

  • General improvements (#1659, #1646, #1647, #1471, #1648, #1651, #1652, #1653, #1659, #1665, #1670, #1672, #1677, #1684, #1683, #1686, #1682, #1685, #1692, #1705, #1708, #1711, #1713, #1714, #1718, #1716, #1723, #1735, #1678, #1750, #1753, #1736, #1766, #1760, #1775, #1776, #1777, #1779, #1784, #1773, #1755, #1789, #1788, #1798, #1799, #1739, #1800, #1804, #1805)

    Flower received many improvements under the hood, too many to list here.

Incompatible changes

None

v1.3.0 (2023-02-06)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Alexander Viala Bellander, Charles Beauville, Daniel J. Beutel, JDRanpariya, Lennart Behme, Taner Topal

What’s new?

  • Add support for workload_id and group_id in Driver API (#1595)

    The (experimental) Driver API now supports a workload_id that can be used to identify which workload a task belongs to. It also supports a new group_id that can be used, for example, to indicate the current training round. Both the workload_id and group_id enable client nodes to decide whether they want to handle a task or not.

  • Make Driver API and Fleet API address configurable (#1637)

    The (experimental) long-running Flower server (Driver API and Fleet API) can now configure the server address of both Driver API (via --driver-api-address) and Fleet API (via --fleet-api-address) when starting:

    flower-server --driver-api-address "0.0.0.0:8081" --fleet-api-address "0.0.0.0:8086"

    Both IPv4 and IPv6 addresses are supported.

  • Add new example of Federated Learning using fastai and Flower (#1598)

    A new code example (quickstart-fastai) demonstrates federated learning with fastai and Flower. You can find it here: quickstart-fastai.

  • Make Android example compatible with flwr >= 1.0.0 and the latest versions of Android (#1603)

    The Android code example has received a substantial update: the project is compatible with Flower 1.0 (and later), the UI received a full refresh, and the project is updated to be compatible with newer Android tooling.

  • Add new FedProx strategy (#1619)

    This strategy is almost identical to FedAvg, but helps users replicate what is described in this paper. It essentially adds a parameter called proximal_mu to regularize the local models with respect to the global models.
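
    A minimal sketch of creating the new strategy; the constructor values shown here are placeholders:

    import flwr as fl

    # FedProx is almost identical to FedAvg; proximal_mu controls the strength of the
    # proximal term used to regularize local training against the global model
    strategy = fl.server.strategy.FedProx(
        fraction_fit=1.0,
        min_available_clients=2,
        proximal_mu=0.1,
    )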

  • Add new metrics to telemetry events (#1640)

    An updated event structure allows, for example, the clustering of events within the same workload.

  • Add new custom strategy tutorial section (#1623)

    The Flower tutorial now has a new section that covers implementing a custom strategy from scratch: Open in Colab

  • Add new custom serialization tutorial section (#1622)

    The Flower tutorial now has a new section that covers custom serialization: Open in Colab

  • General improvements (#1638, #1634, #1636, #1635, #1633, #1632, #1631, #1630, #1627, #1593, #1616, #1615, #1607, #1609, #1608, #1603, #1590, #1580, #1599, #1600, #1601, #1597, #1595, #1591, #1588, #1589, #1587, #1573, #1581, #1578, #1574, #1572, #1586)

    Flower received many improvements under the hood, too many to list here.

  • Updated documentation (#1629, #1628, #1620, #1618, #1617, #1613, #1614)

    As usual, the documentation has improved quite a bit. It is another step in our effort to make the Flower documentation the best documentation of any project. Stay tuned and as always, feel free to provide feedback!

Incompatible changes

None

v1.2.0 (2023-01-13)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Adam Narozniak, Charles Beauville, Daniel J. Beutel, Edoardo, L. Jiang, Ragy, Taner Topal, dannymcy

What’s new?

  • Introduce new Flower Baseline: FedAvg MNIST (#1497, #1552)

    Over the coming weeks, we will be releasing a number of new reference implementations that are especially useful to FL newcomers. They will typically revisit well-known papers from the literature and are suitable for integration in your own application or for experimentation, helping you deepen your knowledge of FL in general. Today’s release is the first in this series. Read more.

  • Improve GPU support in simulations (#1555)

    The Ray-based Virtual Client Engine (start_simulation) has been updated to improve GPU support. The update includes some of the hard-earned lessons from scaling simulations in GPU cluster environments. New defaults make running GPU-based simulations substantially more robust.
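
    As a rough sketch (the resource values below are placeholders and should be tuned to your hardware), GPU resources for simulated clients are assigned via client_resources:

    import flwr as fl

    fl.simulation.start_simulation(
        client_fn=client_fn,  # assumed to be defined elsewhere
        num_clients=100,
        client_resources={"num_cpus": 2, "num_gpus": 0.25},  # four clients share one GPU
        config=fl.server.ServerConfig(num_rounds=3),
        strategy=fl.server.strategy.FedAvg(),
    )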

  • Improve GPU support in Jupyter Notebook tutorials (#1527, #1558)

    Some users reported that Jupyter Notebooks have not always been easy to use on GPU instances. We listened and made improvements to all of our Jupyter notebooks! Check out the updated notebooks in the documentation.

  • Introduce optional telemetry (#1533, #1544, #1584)

    Following a request for feedback from the community, the Flower open-source project introduces optional collection of anonymous usage metrics. These metrics help the Flower team understand how Flower is used and what challenges users might face, enabling well-informed decisions to improve Flower.

    Flower is a friendly framework for collaborative AI and data science. Staying true to this statement, Flower makes it easy to disable telemetry for users who do not want to share anonymous usage metrics. Read more.
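
    As a rough sketch, telemetry can typically be disabled by setting an environment variable before starting Flower; the variable name and the server.py entry point below are assumptions, and the telemetry documentation has the authoritative instructions:

    FLWR_TELEMETRY_ENABLED=0 python server.py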

  • Introduce (experimental) Driver API (#1520, #1525, #1545, #1546, #1550, #1551, #1567)

    Flower now has a new (experimental) Driver API which will enable fully programmable, async, and multi-tenant Federated Learning and Federated Analytics applications. Phew, that’s a lot! Going forward, the Driver API will be the abstraction that many upcoming features will be built on - and you can start building those things now, too.

    The Driver API also enables a new execution mode in which the server runs indefinitely. Multiple individual workloads can run concurrently and start and stop their execution independent of the server. This is especially useful for users who want to deploy Flower in production.

    To learn more, check out the mt-pytorch code example. We look forward to your feedback!

    Please note: The Driver API is still experimental and will likely change significantly over time.

  • Add new Federated Analytics with Pandas example (#1469, #1535)

    A new code example (quickstart-pandas) demonstrates federated analytics with Pandas and Flower. You can find it here: quickstart-pandas.

  • Add new strategies: Krum and MultiKrum (#1481)

    Edoardo, a computer science student at the Sapienza University of Rome, contributed a new Krum strategy that enables users to easily use Krum and MultiKrum in their workloads.

  • Update C++ example to be compatible with Flower v1.2.0 (#1495)

    The C++ code example has received a substantial update to make it compatible with the latest version of Flower.

  • General improvements (#1491, #1504, #1506, #1514, #1522, #1523, #1526, #1528, #1547, #1549, #1560, #1564, #1566)

    Flower received many improvements under the hood, too many to list here.

  • Updated documentation (#1494, #1496, #1500, #1503, #1505, #1524, #1518, #1519, #1515)

    As usual, the documentation has improved quite a bit. It is another step in our effort to make the Flower documentation the best documentation of any project. Stay tuned and as always, feel free to provide feedback!

    One highlight is the new first time contributor guide: if you’ve never contributed on GitHub before, this is the perfect place to start!

Incompatible changes

None

v1.1.0 (2022-10-31)

Thanks to our contributors

We would like to give our special thanks to all the contributors who made the new version of Flower possible (in git shortlog order):

Akis Linardos, Christopher S, Daniel J. Beutel, George, Jan Schlicht, Mohammad Fares, Pedro Porto Buarque de Gusmão, Philipp Wiesner, Rob Luke, Taner Topal, VasundharaAgarwal, danielnugraha, edogab33

What’s new?

  • Introduce Differential Privacy wrappers (preview) (#1357, #1460)

    The first (experimental) preview of pluggable Differential Privacy wrappers enables easy configuration and usage of differential privacy (DP). The pluggable DP wrappers enable framework-agnostic and strategy-agnostic usage of both client-side DP and server-side DP. Head over to the Flower docs, a new explainer goes into more detail.

  • New iOS CoreML code example (#1289)

    Flower goes iOS! A massive new code example shows how Flower clients can be built for iOS. The code example contains both Flower iOS SDK components that can be used for many tasks, and one task example running on CoreML.

  • New FedMedian strategy (#1461)

    The new FedMedian strategy implements Federated Median (FedMedian) by Yin et al., 2018.

  • Log Client exceptions in Virtual Client Engine (#1493)

    All Client exceptions happening in the VCE are now logged by default and not just exposed to the configured Strategy (via the failures argument).

  • Improve Virtual Client Engine internals (#1401, #1453)

    Some internals of the Virtual Client Engine have been revamped. The VCE now uses Ray 2.0 under the hood, and the value type of the client_resources dictionary has changed to float to allow fractions of resources to be allocated.

  • Support optional Client/NumPyClient methods in Virtual Client Engine

    The Virtual Client Engine now has full support for optional Client (and NumPyClient) methods.

  • Provide type information to packages using flwr (#1377)

    The package flwr is now bundled with a py.typed file indicating that the package is typed. This enables typing support for projects or packages that use flwr, allowing them to improve their code using static type checkers like mypy.

  • Updated code example (#1344, #1347)

    The code examples covering scikit-learn and PyTorch Lightning have been updated to work with the latest version of Flower.

  • Updated documentation (#1355, #1558, #1379, #1380, #1381, #1332, #1391, #1403, #1364, #1409, #1419, #1444, #1448, #1417, #1449, #1465, #1467)

    There have been so many documentation updates that it doesn’t even make sense to list them individually.

  • Restructured documentation (#1387)

    The documentation has been restructured to make it easier to navigate. This is just the first step in a larger effort to make the Flower documentation the best documentation of any project ever. Stay tuned!

  • Open in Colab button (#1389)

    The four parts of the Flower Federated Learning Tutorial now come with a new Open in Colab button. No need to install anything on your local machine: you can now use and learn about Flower in your browser, only a single click away.

  • Improved tutorial (#1468, #1470, #1472, #1473, #1474, #1475)

    The Flower Federated Learning Tutorial has two brand-new parts covering custom strategies (still WIP) and the distinction between Client and NumPyClient. The existing parts one and two have also been improved (many small changes and fixes).

Incompatible changes

None

v1.0.0 (2022-07-28)

Highlights

  • Stable Virtual Client Engine (accessible via start_simulation)

  • All Client/NumPyClient methods are now optional

  • Configurable get_parameters

  • Tons of small API cleanups resulting in a more coherent developer experience

Thanks to our contributors

We would like to give our special thanks to all the contributors who made Flower 1.0 possible (in reverse GitHub Contributors order):

@rtaiello, @g-pichler, @rob-luke, @andreea-zaharia, @kinshukdua, @nfnt, @tatiana-s, @TParcollet, @vballoli, @negedng, @RISHIKESHAVAN, @hei411, @SebastianSpeitel, @AmitChaulwar, @Rubiel1, @FANTOME-PAN, @Rono-BC, @lbhm, @sishtiaq, @remde, @Jueun-Park, @architjen, @PratikGarai, @mrinaald, @zliel, @MeiruiJiang, @sancarlim, @gubertoli, @Vingt100, @MakGulati, @cozek, @jafermarq, @sisco0, @akhilmathurs, @CanTuerk, @mariaboerner1987, @pedropgusmao, @tanertopal, @danieljanes.

Incompatible changes

  • All arguments must be passed as keyword arguments (#1338)

    Pass all arguments as keyword arguments; positional arguments are no longer supported. Code that uses positional arguments (e.g., start_client("127.0.0.1:8080", FlowerClient())) must add the keyword for each positional argument (e.g., start_client(server_address="127.0.0.1:8080", client=FlowerClient())).

  • Introduce configuration object ServerConfig in start_server and start_simulation (#1317)

    Instead of a config dictionary {"num_rounds": 3, "round_timeout": 600.0}, start_server and start_simulation now expect a configuration object of type flwr.server.ServerConfig. ServerConfig takes the same arguments as the previous config dict, but it makes writing type-safe code easier and the default parameter values more transparent.
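
    A minimal sketch of the new configuration object; the server address and values are placeholders:

    import flwr as fl

    config = fl.server.ServerConfig(num_rounds=3, round_timeout=600.0)
    fl.server.start_server(
        server_address="0.0.0.0:8080",
        config=config,
    )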

  • Rename built-in strategy parameters for clarity (#1334)

    The following built-in strategy parameters were renamed to improve readability and consistency with other APIs:

    • fraction_eval –> fraction_evaluate

    • min_eval_clients –> min_evaluate_clients

    • eval_fn –> evaluate_fn

  • Update default arguments of built-in strategies (#1278)

    All built-in strategies now use fraction_fit=1.0 and fraction_evaluate=1.0, which means they select all currently available clients for training and evaluation. Projects that relied on the previous default values can get the previous behaviour by initializing the strategy in the following way:

    strategy = FedAvg(fraction_fit=0.1, fraction_evaluate=0.1)

  • Add server_round to Strategy.evaluate (#1334)

    The Strategy method evaluate now receives the current round of federated learning/evaluation as the first parameter.

  • Add server_round and config parameters to evaluate_fn (#1334)

    The evaluate_fn passed to built-in strategies like FedAvg now takes three parameters: (1) The current round of federated learning/evaluation (server_round), (2) the model parameters to evaluate (parameters), and (3) a config dictionary (config).
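
    A sketch of an evaluate_fn matching the new signature; model, set_weights, and test are assumed helpers:

    import flwr as fl

    def evaluate_fn(server_round, parameters, config):
        set_weights(model, parameters)  # load the NumPy ndarrays into the model
        loss, accuracy = test(model)    # centralized evaluation on server-side data
        return loss, {"accuracy": accuracy}

    strategy = fl.server.strategy.FedAvg(evaluate_fn=evaluate_fn)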

  • Rename rnd to server_round (#1321)

    Several Flower methods and functions (evaluate_fn, configure_fit, aggregate_fit, configure_evaluate, aggregate_evaluate) receive the current round of federated learning/evaluation as their first parameter. To improve readability and avoid confusion with random, this parameter has been renamed from rnd to server_round.

  • Move flwr.dataset to flwr_baselines (#1273)

    The experimental package flwr.dataset was migrated to Flower Baselines.

  • Remove experimental strategies (#1280)

    Remove unmaintained experimental strategies (FastAndSlow, FedFSv0, FedFSv1).

  • Rename Weights to NDArrays (#1258, #1259)

    flwr.common.Weights was renamed to flwr.common.NDArrays to better capture what this type is all about.

  • Remove antiquated force_final_distributed_eval from start_server (#1258, #1259)

    The start_server parameter force_final_distributed_eval has long been a historic artefact; in this release, it is finally gone for good.

  • Make get_parameters configurable (#1242)

    The get_parameters method now accepts a configuration dictionary, just like get_properties, fit, and evaluate.
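
    A minimal sketch of the updated method on a NumPyClient (net is assumed to be defined elsewhere):

    import flwr as fl

    class FlowerClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return net.get_weights()  # model parameters as a list of NumPy ndarrays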

  • Replace num_rounds in start_simulation with new config parameter (#1281)

    The start_simulation function now accepts a configuration dictionary config instead of the num_rounds integer. This improves the consistency between start_simulation and start_server and makes transitioning between the two easier.

What’s new?

  • Support Python 3.10 (#1320)

    The previous Flower release introduced experimental support for Python 3.10; this release declares Python 3.10 support stable.

  • Make all Client and NumPyClient methods optional (#1260, #1277)

    The Client/NumPyClient methods get_properties, get_parameters, fit, and evaluate are all optional. This enables writing clients that implement, for example, only fit, but no other method. No need to implement evaluate when using centralized evaluation!
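
    A minimal sketch of a client that only implements fit (net, train, and trainloader are assumed to be defined elsewhere):

    import flwr as fl

    class FitOnlyClient(fl.client.NumPyClient):
        def fit(self, parameters, config):
            net.set_parameters(parameters)
            train_loss = train(net, trainloader)
            return net.get_weights(), len(trainloader), {"train_loss": train_loss}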

  • Enable passing a Server instance to start_simulation (#1281)

    Similar to start_server, start_simulation now accepts a full Server instance. This enables users to heavily customize the execution of experiments and opens the door to running, for example, async FL using the Virtual Client Engine.

  • Update code examples (#1291, #1286, #1282)

    Many code examples received small or even large maintenance updates, among them are

    • scikit-learn

    • simulation_pytorch

    • quickstart_pytorch

    • quickstart_simulation

    • quickstart_tensorflow

    • advanced_tensorflow

  • Remove the obsolete simulation example (#1328)

    Removes the obsolete simulation example and renames quickstart_simulation to simulation_tensorflow so it fits with the naming of simulation_pytorch.

  • Update documentation (#1223, #1209, #1251, #1257, #1267, #1268, #1300, #1304, #1305, #1307)

    One substantial documentation update fixes multiple smaller rendering issues, makes titles more succinct to improve navigation, removes a deprecated library, updates documentation dependencies, includes the flwr.common module in the API reference, includes support for markdown-based documentation, migrates the changelog from .rst to .md, and fixes a number of smaller details!

  • Minor updates

    • Add round number to fit and evaluate log messages (#1266)

    • Add secure gRPC connection to the advanced_tensorflow code example (#847)

    • Update developer tooling (#1231, #1276, #1301, #1310)

    • Rename ProtoBuf messages to improve consistency (#1214, #1258, #1259)

v0.19.0 (2022-05-18)

What’s new?

  • Flower Baselines (preview): FedOpt, FedBN, FedAvgM (#919, #1127, #914)

    The first preview release of Flower Baselines has arrived! We’re kickstarting Flower Baselines with implementations of FedOpt (FedYogi, FedAdam, FedAdagrad), FedBN, and FedAvgM. Check the documentation on how to use Flower Baselines. With this first preview release we’re also inviting the community to contribute their own baselines.

  • C++ client SDK (preview) and code example (#1111)

    Preview support for Flower clients written in C++. The C++ preview includes a Flower client SDK and a quickstart code example that demonstrates a simple C++ client using the SDK.

  • Add experimental support for Python 3.10 and Python 3.11 (#1135)

    Python 3.10 is the latest stable release of Python and Python 3.11 is due to be released in October. This Flower release adds experimental support for both Python versions.

  • Aggregate custom metrics through user-provided functions (#1144)

    Custom metrics (e.g., accuracy) can now be aggregated without having to customize the strategy. Built-in strategies support two new arguments, fit_metrics_aggregation_fn and evaluate_metrics_aggregation_fn, that allow passing custom metric aggregation functions.
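
    A minimal sketch of a weighted-average aggregation function for evaluation metrics; the metric key accuracy is an assumption:

    import flwr as fl

    # Receives a list of (num_examples, metrics) tuples, one per client
    def weighted_average(metrics):
        total_examples = sum(num_examples for num_examples, _ in metrics)
        accuracy = sum(num_examples * m["accuracy"] for num_examples, m in metrics) / total_examples
        return {"accuracy": accuracy}

    strategy = fl.server.strategy.FedAvg(evaluate_metrics_aggregation_fn=weighted_average)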

  • User-configurable round timeout (#1162)

    A new configuration value allows the round timeout to be set for start_server and start_simulation. If the config dictionary contains a round_timeout key (with a float value in seconds), the server will wait at least round_timeout seconds before it closes the connection.
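
    For example (the values shown are placeholders):

    import flwr as fl

    fl.server.start_server(
        config={"num_rounds": 3, "round_timeout": 600.0},  # per-round timeout in seconds
    )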

  • Enable both federated evaluation and centralized evaluation to be used at the same time in all built-in strategies (#1091)

    Built-in strategies can now perform both federated evaluation (i.e., client-side) and centralized evaluation (i.e., server-side) in the same round. Federated evaluation can be disabled by setting fraction_eval to 0.0.

  • Two new Jupyter Notebook tutorials (#1141)

    Two Jupyter Notebook tutorials (compatible with Google Colab) explain basic and intermediate Flower features:

    An Introduction to Federated Learning: Open in Colab

    Using Strategies in Federated Learning: Open in Colab

  • New FedAvgM strategy (Federated Averaging with Server Momentum) (#1076)

    The new FedAvgM strategy implements Federated Averaging with Server Momentum [Hsu et al., 2019].

  • New advanced PyTorch code example (#1007)

    A new code example (advanced_pytorch) demonstrates advanced Flower concepts with PyTorch.

  • New JAX code example (#906, #1143)

    A new code example (jax_from_centralized_to_federated) shows federated learning with JAX and Flower.

  • Minor updates

    • New option to keep Ray running if Ray was already initialized in start_simulation (#1177)

    • Add support for custom ClientManager as a start_simulation parameter (#1171)

    • New documentation for implementing strategies (#1097, #1175)

    • New mobile-friendly documentation theme (#1174)

    • Limit version range for (optional) ray dependency to include only compatible releases (>=1.9.2,<1.12.0) (#1205)

Incompatible changes

  • Remove deprecated support for Python 3.6 (#871)

  • Remove deprecated KerasClient (#857)

  • Remove deprecated no-op extra installs (#973)

  • Remove deprecated proto fields from FitRes and EvaluateRes (#869)

  • Remove deprecated QffedAvg strategy (replaced by QFedAvg) (#1107)

  • Remove deprecated DefaultStrategy strategy (#1142)

  • Remove deprecated support for eval_fn accuracy return value (#1142)

  • Remove deprecated support for passing initial parameters as NumPy ndarrays (#1142)

v0.18.0 (2022-02-28)

What’s new?

  • Improved Virtual Client Engine compatibility with Jupyter Notebook / Google Colab (#866, #872, #833, #1036)

    Simulations (using the Virtual Client Engine through start_simulation) now work more smoothly on Jupyter Notebooks (incl. Google Colab) after installing Flower with the simulation extra (pip install 'flwr[simulation]').

  • New Jupyter Notebook code example (#833)

    A new code example (quickstart_simulation) demonstrates Flower simulations using the Virtual Client Engine through Jupyter Notebook (incl. Google Colab).

  • Client properties (feature preview) (#795)

    Clients can implement a new method get_properties to enable server-side strategies to query client properties.

  • Experimental Android support with TFLite (#865)

    Android support has finally arrived in main! Flower is both client-agnostic and framework-agnostic by design. One can integrate arbitrary client platforms and with this release, using Flower on Android has become a lot easier.

    The example uses TFLite on the client side, along with a new FedAvgAndroid strategy. The Android client and FedAvgAndroid are still experimental, but they are a first step towards a fully-fledged Android SDK and a unified FedAvg implementation that integrates the new functionality from FedAvgAndroid.

  • Make gRPC keepalive time user-configurable and decrease default keepalive time (#1069)

    The default gRPC keepalive time has been reduced to increase the compatibility of Flower with more cloud environments (for example, Microsoft Azure). Users can configure the keepalive time to customize the gRPC stack based on specific requirements.

  • New differential privacy example using Opacus and PyTorch (#805)

    A new code example (opacus) demonstrates differentially-private federated learning with Opacus, PyTorch, and Flower.

  • New Hugging Face Transformers code example (#863)

    A new code example (quickstart_huggingface) demonstrates usage of Hugging Face Transformers with Flower.

  • New MLCube code example (#779, #1034, #1065, #1090)

    A new code example (quickstart_mlcube) demonstrates usage of MLCube with Flower.

  • SSL-enabled server and client (#842, #844, #845, #847, #993, #994)

    SSL enables secure encrypted connections between clients and servers. This release open-sources the Flower secure gRPC implementation to make encrypted communication channels accessible to all Flower users.

  • Updated FedAdam and FedYogi strategies (#885, #895)

    FedAdam and FedYogi now match the latest version of the Adaptive Federated Optimization paper.

  • Initialize start_simulation with a list of client IDs (#860)

    start_simulation can now be called with a list of client IDs (clients_ids, type: List[str]). Those IDs will be passed to the client_fn whenever a client needs to be initialized, which can make it easier to load data partitions that are not accessible through int identifiers.
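
    A rough sketch; the ID values, load_partition, and FlwrClient are placeholders for your own code:

    import flwr as fl

    def client_fn(cid: str):
        # cid is one of the strings passed via clients_ids
        data = load_partition(cid)  # assumed data-loading helper
        return FlwrClient(data)     # assumed NumPyClient subclass

    fl.simulation.start_simulation(
        client_fn=client_fn,
        clients_ids=["hospital-a", "hospital-b", "hospital-c"],
        num_rounds=3,
    )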

  • Minor updates

    • Update num_examples calculation in PyTorch code examples (#909)

    • Expose Flower version through flwr.__version__ (#952)

    • start_server in app.py now returns a History object containing metrics from training (#974)

    • Make max_workers (used by ThreadPoolExecutor) configurable (#978)

    • Increase sleep time after server start to three seconds in all code examples (#1086)

    • Added a new FAQ section to the documentation (#948)

    • And many more under-the-hood changes, library updates, documentation changes, and tooling improvements!

Incompatible changes

  • Removed flwr_example and flwr_experimental from release build (#869)

    The packages flwr_example and flwr_experimental have been deprecated since Flower 0.12.0 and are no longer included in Flower release builds. The associated extras (baseline, examples-pytorch, examples-tensorflow, http-logger, ops) are now no-op and will be removed in an upcoming release.

v0.17.0 (2021-09-24)

What’s new?

  • Experimental virtual client engine (#781 #790 #791)

    One of Flower’s goals is to enable research at scale. This release enables a first (experimental) peek at a major new feature, codenamed the virtual client engine. Virtual clients enable simulations that scale to a (very) large number of clients on a single machine or compute cluster. The easiest way to test the new functionality is to look at the two new code examples called quickstart_simulation and simulation_pytorch.

    The feature is still experimental, so there’s no stability guarantee for the API. It’s also not quite ready for prime time and comes with a few known caveats. However, those who are curious are encouraged to try it out and share their thoughts.

  • New built-in strategies (#828 #822)

    • FedYogi - Federated learning strategy using Yogi on server-side. Implementation based on https://arxiv.org/abs/2003.00295

    • FedAdam - Federated learning strategy using Adam on server-side. Implementation based on https://arxiv.org/abs/2003.00295

  • New PyTorch Lightning code example (#617)

  • New Variational Auto-Encoder code example (#752)

  • New scikit-learn code example (#748)

  • New experimental TensorBoard strategy (#789)

  • Minor updates

    • Improved advanced TensorFlow code example (#769)

    • Warning when min_available_clients is misconfigured (#830)

    • Improved gRPC server docs (#841)

    • Improved error message in NumPyClient (#851)

    • Improved PyTorch quickstart code example (#852)

Incompatible changes

  • Disabled final distributed evaluation (#800)

    Prior behaviour was to perform a final round of distributed evaluation on all connected clients, which is often not required (e.g., when using server-side evaluation). The prior behaviour can be enabled by passing force_final_distributed_eval=True to start_server.

  • Renamed q-FedAvg strategy (#802)

    The strategy named QffedAvg was renamed to QFedAvg to better reflect the notation given in the original paper (q-FFL is the optimization objective, q-FedAvg is the proposed solver). Note the original (now deprecated) QffedAvg class is still available for compatibility reasons (it will be removed in a future release).

  • Deprecated and renamed code example simulation_pytorch to simulation_pytorch_legacy (#791)

    This example has been replaced by a new example. The new example is based on the experimental virtual client engine, which will become the new default way of doing most types of large-scale simulations in Flower. The existing example was kept for reference purposes, but it might be removed in the future.

v0.16.0 (2021-05-11)

What’s new?

  • New built-in strategies (#549)

    • (abstract) FedOpt

    • FedAdagrad

  • Custom metrics for server and strategies (#717)

    The Flower server is now fully task-agnostic; all remaining instances of task-specific metrics (such as accuracy) have been replaced by custom metrics dictionaries. Flower 0.15 introduced the capability to pass a dictionary containing custom metrics from client to server. As of this release, custom metrics replace task-specific metrics on the server.

    Custom metric dictionaries are now used in two user-facing APIs: they are returned from Strategy methods aggregate_fit/aggregate_evaluate and they enable evaluation functions passed to built-in strategies (via eval_fn) to return more than two evaluation metrics. Strategies can even return aggregated metrics dictionaries for the server to keep track of.

    Strategy implementations should migrate their aggregate_fit and aggregate_evaluate methods to the new return type (e.g., by simply returning an empty {}), server-side evaluation functions should migrate from return loss, accuracy to return loss, {"accuracy": accuracy}.

    Flower 0.15-style return types are deprecated (but still supported); compatibility will be removed in a future release.
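
    A minimal sketch of the migration described above; evaluate_model is an assumed helper and weights handling is elided:

    # Flower 0.15 style (deprecated)
    def eval_fn_old(weights):
        loss, accuracy = evaluate_model(weights)
        return loss, accuracy

    # Flower 0.16 style: return a custom metrics dictionary alongside the loss
    def eval_fn_new(weights):
        loss, accuracy = evaluate_model(weights)
        return loss, {"accuracy": accuracy}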

  • Migration warnings for deprecated functionality (#690)

    Earlier Flower releases often introduced new APIs while maintaining compatibility with legacy APIs. This release introduces detailed warning messages if usage of deprecated APIs is detected. The new warning messages often provide details on how to migrate to more recent APIs, thus easing the transition from one release to another.

  • Improved docs and docstrings (#691 #692 #713)

  • MXNet example and documentation

  • FedBN implementation in example PyTorch: From Centralized To Federated (#696 #702 #705)

Incompatible changes

  • Serialization-agnostic server (#721)

    The Flower server is now fully serialization-agnostic. Prior usage of class Weights (which represents parameters as deserialized NumPy ndarrays) was replaced by class Parameters (e.g., in Strategy). Parameters objects are fully serialization-agnostic and represent parameters as byte arrays; the tensor_type attribute indicates how these byte arrays should be interpreted (e.g., for serialization/deserialization).

    Built-in strategies implement this approach by handling serialization and deserialization to/from Weights internally. Custom/3rd-party Strategy implementations should update to the slightly changed Strategy method definitions. Strategy authors can consult PR #721 to see how strategies can easily migrate to the new format.

  • Deprecated flwr.server.Server.evaluate, use flwr.server.Server.evaluate_round instead (#717)

v0.15.0 (2021-03-12)

What’s new?

  • Server-side parameter initialization (#658)

    Model parameters can now be initialized on the server-side. Server-side parameter initialization works via a new Strategy method called initialize_parameters.

    Built-in strategies support a new constructor argument called initial_parameters to set the initial parameters. Built-in strategies will provide these initial parameters to the server on startup and then delete them to free the memory afterwards.

    import flwr as fl
    import tensorflow as tf

    # Create model
    model = tf.keras.applications.EfficientNetB0(
        input_shape=(32, 32, 3), weights=None, classes=10
    )
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

    # Create strategy and initialize parameters on the server-side
    strategy = fl.server.strategy.FedAvg(
        # ... (other constructor arguments)
        initial_parameters=model.get_weights(),
    )

    # Start Flower server with the strategy
    fl.server.start_server("[::]:8080", config={"num_rounds": 3}, strategy=strategy)
    

    If no initial parameters are provided to the strategy, the server will continue to use the current behaviour (namely, it will ask one of the connected clients for its parameters and use these as the initial global parameters).

Deprecations

  • Deprecate flwr.server.strategy.DefaultStrategy (migrate to flwr.server.strategy.FedAvg, which is equivalent)

v0.14.0 (2021-02-18)

What’s new?

  • Generalized Client.fit and Client.evaluate return values (#610 #572 #633)

    Clients can now return an additional dictionary mapping str keys to values of the following types: bool, bytes, float, int, str. This means one can return almost arbitrary values from fit/evaluate and make use of them on the server side!

    This improvement also allowed for more consistent return types between fit and evaluate: evaluate should now return a tuple (float, int, dict) representing the loss, number of examples, and a dictionary holding arbitrary problem-specific values like accuracy.

    In case you wondered: this feature is compatible with existing projects, the additional dictionary return value is optional. New code should however migrate to the new return types to be compatible with upcoming Flower releases (fit: List[np.ndarray], int, Dict[str, Scalar], evaluate: float, int, Dict[str, Scalar]). See the example below for details.

    Code example: note the additional dictionary return values in both FlwrClient.fit and FlwrClient.evaluate:

    import flwr as fl

    # net, train, test, trainloader, and testloader are assumed to be defined elsewhere
    class FlwrClient(fl.client.NumPyClient):
        def fit(self, parameters, config):
            net.set_parameters(parameters)
            train_loss = train(net, trainloader)
            return net.get_weights(), len(trainloader), {"train_loss": train_loss}

        def evaluate(self, parameters, config):
            net.set_parameters(parameters)
            loss, accuracy, custom_metric = test(net, testloader)
            return loss, len(testloader), {"accuracy": accuracy, "custom_metric": custom_metric}
    
  • Generalized config argument in Client.fit and Client.evaluate (#595)

    The config argument used to be of type Dict[str, str], which means that dictionary values were expected to be strings. The new release generalizes this to enable values of the following types: bool, bytes, float, int, str.

    This means one can now pass almost arbitrary values to fit/evaluate using the config dictionary. Yay, no more str(epochs) on the server-side and int(config["epochs"]) on the client side!

    Code example: note that the config dictionary now contains non-str values in both Client.fit and Client.evaluate:

    import flwr as fl

    # net, train, test, trainloader, and testloader are assumed to be defined elsewhere
    class FlwrClient(fl.client.NumPyClient):
        def fit(self, parameters, config):
            net.set_parameters(parameters)
            epochs: int = config["epochs"]
            train_loss = train(net, trainloader, epochs)
            return net.get_weights(), len(trainloader), {"train_loss": train_loss}

        def evaluate(self, parameters, config):
            net.set_parameters(parameters)
            batch_size: int = config["batch_size"]
            loss, accuracy = test(net, testloader, batch_size)
            return loss, len(testloader), {"accuracy": accuracy}
    

v0.13.0 (2021-01-08)

What’s new?

  • New example: PyTorch From Centralized To Federated (#549)

  • Improved documentation

    • New documentation theme (#551)

    • New API reference (#554)

    • Updated examples documentation (#549)

    • Removed obsolete documentation (#548)

Bugfix:

  • Server.fit no longer disconnects clients when finished; disconnecting the clients is now handled in flwr.server.start_server (#553 #540).

v0.12.0 (2020-12-07)

Important changes:

  • Added an example for embedded devices (#507)

  • Added a new NumPyClient (in addition to the existing KerasClient) (#504 #508)

  • Deprecated flwr_example package and started to migrate examples into the top-level examples directory (#494 #512)

v0.11.0 (2020-11-30)

Incompatible changes:

  • Renamed strategy methods (#486) to unify the naming of Flower’s public APIs. Other public methods/functions (e.g., every method in Client, but also Strategy.evaluate) do not use the on_ prefix, which is why we’re removing it from the four methods in Strategy. To migrate, rename the following Strategy methods accordingly:

    • on_configure_evaluate => configure_evaluate

    • on_aggregate_evaluate => aggregate_evaluate

    • on_configure_fit => configure_fit

    • on_aggregate_fit => aggregate_fit

Important changes:

  • Deprecated DefaultStrategy (#479). To migrate use FedAvg instead.

  • Simplified examples and baselines (#484).

  • Removed presently unused on_conclude_round from strategy interface (#483).

  • Set minimal Python version to 3.6.1 instead of 3.6.9 (#471).

  • Improved Strategy docstrings (#470).