Bridging the Gap: From Federated Learning to Intelligence on Your Device
If you didn't know, Flower has free course material on DeepLearning.ai, and you can enroll immediately. You go from beginner to intermediate in no time, and the coursework is filled with interactive material, ensuring that you learn how to use Flower through code examples and video lessons.
In the two courses on DeepLearning.ai, you explored the foundations of federated AI. This includes learning how to train AI models across multiple devices while preserving privacy, as well as a deep dive into how to fine-tune large language models (LLMs) with private data. You also learned about techniques like Parameter-Efficient Fine-Tuning (PEFT) and Differential Privacy (DP) that make federated learning more efficient and privacy-preserving.

Now, it’s time to apply these concepts in the real world.
What if you could take federated learning beyond theory and run LLMs directly on your own device? What if fine-tuning didn’t require expensive cloud infrastructure but could happen locally, or in a secure, remote environment that extends your device’s capabilities?
Enter Flower Intelligence: the first open-source AI platform for on-device AI apps that can also automatically hand off, if needed and if the user allows it, to a purpose-built private cloud. In practice, this means data stays local on your device by default and never leaves it; when more compute is needed, queries and updates are fully encrypted end-to-end and integrated seamlessly into the federated workflow. Flower Intelligence enables developers to build AI experiences with a mix of user privacy, inference speed, and model size that was previously impossible to achieve.
Key Features:
- Local Inference: Run powerful Generative AI models directly on your device, ensuring speed, privacy, and offline accessibility.
- Flower Confidential Remote Compute: Execute large AI models on remote GPU servers with end-to-end encryption, protecting sensitive user data.
Mozilla Thunderbird is an early adopter of Flower Intelligence, using it to launch its upcoming Thunderbird Assist feature. Ryan Sipes, Managing Director for Mozilla Thunderbird, explains why: "Our 20 million users expect data privacy from every feature we build. Flower Intelligence allows us to ship on-device AI that works locally with the most sensitive data."
Transitioning from Course Concepts to Flower Intelligence
In the second part of the Federated Learning course on DeepLearning.ai, you delved into federated fine-tuning of LLMs using private data. Flower Intelligence enables you to apply these concepts practically and lets you seamlessly interact with LLMs both locally and remotely in a secure and private way. It is built around the Singleton design pattern, meaning you only need to configure a single instance that can be reused throughout your project. This simple setup helps you integrate powerful AI capabilities with minimal overhead.
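To make the Singleton idea concrete, here is a minimal illustrative sketch of the pattern in TypeScript. The class below is a stand-in, not the real SDK: the attribute names `remoteHandoff` and `apiKey` come from this post, while the class body and defaults are assumptions for illustration.

```typescript
// Illustrative sketch of the Singleton pattern the post describes.
// This stand-in class is NOT the real Flower SDK.
class FlowerIntelligenceSketch {
  private static _instance: FlowerIntelligenceSketch | null = null;

  // Attributes mentioned in the post; the defaults here are assumptions.
  remoteHandoff = false;
  apiKey = "";

  // A private constructor prevents creating extra instances directly.
  private constructor() {}

  // Lazily create the single shared instance on first access.
  static get instance(): FlowerIntelligenceSketch {
    if (this._instance === null) {
      this._instance = new FlowerIntelligenceSketch();
    }
    return this._instance;
  }
}

// Every access returns the same configured object.
const a = FlowerIntelligenceSketch.instance;
const b = FlowerIntelligenceSketch.instance;
console.log(a === b); // true
```

Because every part of your project retrieves the same instance, configuration done once at startup applies everywhere.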
You can already get started with Flower Intelligence.
Local Inference using Flower Intelligence
Explore the TypeScript, JavaScript, and Swift SDKs provided by Flower Intelligence to integrate AI capabilities into your web or mobile applications. You can start with simple Hello World scripts to familiarize yourself with the setup, or build your first web chat application with a local-first mindset. Flower Intelligence lets you specify which model to run for your application, choosing among validated models such as Llama, DeepSeek, Mistral, and more. The documentation also shows you how to, for example, check for errors, stream responses, handle chat history, and much more.
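A local chat interaction could look roughly like the following sketch. The post does not document the SDK's call signatures, so the `chat` function, the message shape, and the model identifier below are assumptions; the stub stands in for a local model so the sketch runs anywhere. Consult the Flower Intelligence documentation for the actual API.

```typescript
// Hypothetical message shape, assumed for illustration.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Stand-in for a locally running model: echoes a canned reply.
// The real SDK would run inference on-device instead.
async function chat(options: { model: string; messages: Message[] }): Promise<Message> {
  const last = options.messages[options.messages.length - 1];
  return { role: "assistant", content: `[${options.model}] reply to: ${last.content}` };
}

async function main() {
  const reply = await chat({
    model: "meta/llama3.2-1b", // hypothetical model identifier
    messages: [{ role: "user", content: "Summarize federated learning in one line." }],
  });
  console.log(reply.content);
}

main();
```

The local-first design means this call involves no network round trip, which is what gives on-device inference its speed, privacy, and offline availability.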
Flower Confidential Remote Compute
Flower Intelligence prioritizes local inference, but it also lets you privately hand off computation to the Flower Confidential Remote Compute service when local resources are scarce. This feature is off by default and can be enabled via the remoteHandoff attribute of the FlowerIntelligence object; you will also need to provide a valid API key via the apiKey attribute. Apply for early access to Flower's Confidential Remote Compute service to leverage powerful remote GPUs while ensuring data confidentiality.
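Enabling the handoff might look like the sketch below. The attribute names `remoteHandoff` and `apiKey` come from this post; the class shape is a stand-in for the real SDK object, and the key value is a placeholder, not a real credential.

```typescript
// Stand-in for the SDK's singleton object, for illustration only.
class FlowerIntelligence {
  private static _instance = new FlowerIntelligence();
  static get instance(): FlowerIntelligence { return this._instance; }

  remoteHandoff = false; // off by default, per the post
  apiKey = "";
}

const fi = FlowerIntelligence.instance;
fi.remoteHandoff = true;          // opt in to Confidential Remote Compute
fi.apiKey = "FLWR_API_KEY_HERE";  // placeholder; use your early-access key
```

Because the object is a singleton, setting these attributes once at startup opts the whole application into the encrypted remote handoff.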

Future Steps
Stay tuned for upcoming features like federated pre-training, federated fine-tuning, and local fine-tuning for personalization, which will further expand the capabilities of collaborative AI development. All of this will be released during Flower AI Summit 2025, on March 26-27, 2025. Register now so you don't miss out: it is free of charge and takes place in London and online.
Embark on this next phase of your AI journey with Flower Intelligence, and contribute to the evolution of decentralized, secure, and efficient AI solutions.