Flower Intelligence

Launching today as a preview release: Flower Intelligence – the first open-source AI platform that enables on-device AI apps and can, if needed and if the user allows it, automatically hand off to a purpose-built private cloud. Flower Intelligence enables developers to build AI experiences with a combination of user privacy, inference speed, and model size that was previously impossible to achieve.
Mozilla Thunderbird is an early adopter of Flower Intelligence, using it to build its upcoming Thunderbird Assist feature. Ryan Sipes, Managing Director for Mozilla Thunderbird, explains why:
Our 20 million users expect data privacy from every feature we build. Flower Intelligence allows us to ship on-device AI that works locally with the most sensitive data.
Ryan Sipes, Managing Director for Mozilla Thunderbird
The World Needs Flower Intelligence
AI today largely runs in the cloud, which makes powerful AI models easy to deploy. But this approach rules out the use of private and sensitive data that should never be transmitted to a public cloud, and an AI app built this way simply stops working when the network becomes slow or unavailable. On-device AI offers an alternative by running models locally on a device, or even in the browser. But this doesn't work for the largest models and is only an option on the latest and most expensive phones and laptops. An approach that relies solely on on-device AI leaves many users behind.
How Flower Intelligence Changes the Game
Local, fast and private AI, with seamless cloud power when needed and allowed.
Flower Intelligence allows developers to build AI apps in a brand new way. When they use the open-source platform, the AI model runs locally whenever possible. Speed and privacy are prioritized – and the AI app keeps working even if the network fails. Whenever extra power is needed – for example, if the user's device is older or a larger AI model is required – Flower Confidential Remote Compute can step in as a seamless private extension of the device, without compromising privacy, security, or performance. This requires no additional work from the developer, and it happens only if the user allows it. The Flower Intelligence hybrid approach delivers the best of both worlds: local-first AI that remains powerful, private, and compatible with all devices.
Our preview launch of Flower Intelligence supports a variety of models, including those from Meta (Llama), Mistral, and DeepSeek. Initial SDK support covers Swift and TypeScript. We have started with AI inference only, via on-device execution and Flower Confidential Remote Compute, but the future roadmap for Flower Intelligence also includes the ability for AI models to safely incorporate sensitive data, through methods ranging from fine-tuning and RAG all the way to pre-training and more.
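To make the hybrid flow concrete, here is a minimal TypeScript sketch of the local-first pattern described above: try on-device inference first, and hand off to remote compute only when the device cannot serve the request and the user has explicitly allowed it. All names here are illustrative stand-ins, not the actual Flower Intelligence API.

```typescript
// Hypothetical stand-ins for on-device and remote inference backends.
type InferenceResult = { text: string; ranOn: "device" | "remote" };

// Simulated on-device model call; fails when the request exceeds
// what the local model can handle (an assumption for this sketch).
async function runOnDevice(prompt: string): Promise<string> {
  if (prompt.length > 32) throw new Error("model too large for device");
  return `local answer to: ${prompt}`;
}

// Simulated confidential remote compute call.
async function runRemote(prompt: string): Promise<string> {
  return `remote answer to: ${prompt}`;
}

// Local-first inference: prefer the device, and only hand off to the
// remote service when the device cannot serve the request AND the
// user has given explicit consent for remote handoff.
async function infer(
  prompt: string,
  allowRemote: boolean
): Promise<InferenceResult> {
  try {
    return { text: await runOnDevice(prompt), ranOn: "device" };
  } catch (err) {
    if (!allowRemote) throw err; // no consent: fail rather than leave the device
    return { text: await runRemote(prompt), ranOn: "remote" };
  }
}
```

The key design point is that consent gates the fallback: without the user's permission, a request that the device cannot serve fails outright instead of silently reaching the cloud.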
Join the Full Launch at FS25 🚀
This preview release is only the beginning. For the complete picture, join us live for the full launch of Flower Intelligence at the Flower AI Summit 2025 on March 26. Sign up 👉 here.
The open-source Flower Intelligence TypeScript and Swift libraries are available now. Further details, and applications for early access to the Flower Confidential Remote Compute service, are also available from this 🔗 link.