Flower Intelligence

An Open-Source AI Platform to Run LLMs Locally in Your App or Remotely on Flower Confidential Remote Compute.

Build your first on-device AI app

Flower Framework
  • Thunderbird
  • Meta
  • DeepSeek
  • Mistral
  • Hugging Face
Thunderbird (Mozilla)

Thunderbird Assist

Powered by Flower Intelligence

"Our 20 million users expect data privacy from every feature we build. Flower Intelligence allows us to ship on-device AI that works locally with the most sensitive data."

Ryan Sipes

Managing Director, Product

Mozilla Thunderbird

Cloud-only or local-only AI is too limited

AI today faces a trade-off:

  • ❌ Run LLMs in the cloud: powerful, but slow, unavailable offline, and off-limits for sensitive data.
  • ❌ Run LLMs on the device: fast and privacy-preserving, but limited to modern devices.

Neither solution is sufficient on its own.


Flower Intelligence:

Local-first AI with Confidential Remote Compute

Flower Intelligence prioritizes on-device AI for speed, privacy and offline use. When extra power is needed, Flower Confidential Remote Compute steps in as a seamless private extension of the device, without compromising privacy, security or performance.

This hybrid approach delivers the best of both worlds: local-first AI that remains powerful, private and compatible with all devices.
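This local-first hand-off can be sketched as a simple routing decision: try the on-device model first, and fall back to the remote service only when the device cannot serve the request. The sketch below is illustrative only; `localChat` and `remoteChat` are hypothetical stand-ins, not the actual SDK API, which handles this hand-off internally.

```typescript
// Illustrative local-first routing: prefer on-device inference,
// fall back to Confidential Remote Compute only when needed.
// `localChat` and `remoteChat` are hypothetical stand-ins, not SDK calls.

type Message = { role: 'system' | 'user' | 'assistant'; content: string };

type ChatBackend = (messages: Message[]) => Promise<string>;

async function localFirstChat(
  messages: Message[],
  localChat: ChatBackend,
  remoteChat: ChatBackend,
): Promise<{ reply: string; backend: 'local' | 'remote' }> {
  try {
    // Prefer the on-device model: fast, private, works offline.
    return { reply: await localChat(messages), backend: 'local' };
  } catch {
    // The device cannot serve this request (model too large,
    // unsupported hardware): hand off to the remote service.
    return { reply: await remoteChat(messages), backend: 'remote' };
  }
}
```

In Flower Intelligence itself this routing is internal to the library; the sketch only illustrates the decision flow that makes the remote service feel like an extension of the device.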


Flower AI Summit 2025

Join us live for the launch of Flower Intelligence (March 26, in-person in 🇬🇧 London or online).

Why Flower Intelligence?

With Flower Intelligence, you can run your favorite LLM locally on phones, tablets and laptops.

Larger models can run remotely in the Flower Confidential Remote Compute service. On our roadmap are local and federated fine-tuning to improve LLMs using local user data.

Local Inference

Run powerful GenAI models locally on the device (phone, tablet, laptop), in a browser tab (TypeScript SDK) or in a mobile app (iOS SDK).

Available now (TypeScript, Swift)

Confidential Remote Compute

Run large AI models on a remote GPU server via Flower Confidential Remote Compute. The Flower Confidential Remote Compute service acts as a seamless private extension of the device that uses end-to-end encryption and other techniques to protect sensitive user data.

Available in Early Access Preview (Apply)

Local Fine-Tuning

Personalize AI models using local user data.


Federated Fine-Tuning

Fine-tune AI models without collecting user data.

Coming Later This Year (Join the Flower AI Summit to learn more)

Federated Pre-Training

Train foundation models across the entire user base without collecting user data.
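Federated training of this kind typically rests on federated averaging: each device trains on its own data and shares only model updates, which a server combines into a weighted average. A minimal sketch of that server-side step, assuming model weights are plain number arrays (not the actual Flower API):

```typescript
// Federated averaging (FedAvg) sketch: the server combines model
// updates from many devices without ever seeing their raw data.
// Weights are represented here as plain number arrays for illustration.

type Weights = number[];

interface ClientUpdate {
  weights: Weights;     // locally trained model parameters
  numExamples: number;  // how much local data produced this update
}

function fedAvg(updates: ClientUpdate[]): Weights {
  // Each client's contribution is weighted by its local dataset size.
  const total = updates.reduce((sum, u) => sum + u.numExamples, 0);
  const dim = updates[0].weights.length;
  const avg = new Array<number>(dim).fill(0);
  for (const { weights, numExamples } of updates) {
    weights.forEach((w, i) => {
      avg[i] += (w * numExamples) / total;
    });
  }
  return avg;
}
```

Only the aggregated parameters ever leave the aggregation step; the per-user data that produced each update stays on the device.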


Get Started

import { FlowerIntelligence } from '@flwr/flwr';

const fi = FlowerIntelligence.instance;

const response = await fi.chat({
    messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Why is the sky blue?' },
    ],
});

console.log(response.message.content);

Go to the documentation to learn more

Supports your favorite models

Flower Intelligence runs your favorite LLMs locally on-device or remotely on Flower Confidential Remote Compute (early access preview).

Supported models, available on device (TypeScript SDK, Swift SDK) and/or via Confidential Remote Compute:

  • LLaMA 3.2 1B (Meta)
  • LLaMA 3.2 3B (Meta)
  • LLaMA 3.1 8B (Meta)
  • LLaMA 3.3 70B (Meta)
  • DeepSeek-R1 (Distill-Llama-8B)
  • Mistral Small 3

Flower Intelligence Pilot Program

Apply now to get personalized support from the Flower team and Early Access to Flower Confidential Remote Compute.