Flower Baselines 1.19.0

How-to Guides

  • Use Baselines
  • Contribute Baselines

References

  • DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity
  • DepthFL: Depthwise Federated Learning for Heterogeneous Clients
  • Measuring the effects of non-identical data distribution for federated visual classification
  • FedBN: Federated Learning on Non-IID Features via Local Batch Normalization
  • FedDebug: Systematic Debugging for Federated Learning Applications
  • FedMeta: Federated Meta-Learning with Fast Convergence and Efficient Communication
  • FedMLB: Multi-Level Branched Regularization for Federated Learning
  • Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
  • FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning
  • Federated Learning with Personalization Layers
  • FedPFT: One-shot Federated Learning with Foundation Models
  • FedProx: Federated Optimization in Heterogeneous Networks
  • Exploiting Shared Representations for Personalized Federated Learning
  • FedStar: Federated Self-training for Semi-supervised Audio Recognition
  • Federated Self-supervised Learning for Video Understanding
  • Federated Learning for ASR Based on wav2vec 2.0
  • FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
  • FLANDERS: Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection
  • HeteroFL: Computation And Communication Efficient Federated Learning For Heterogeneous Clients
  • Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates
  • Model-Contrastive Federated Learning
  • Federated Learning on Non-IID Data Silos: An Experimental Study
  • StatAvg: Mitigating Data Heterogeneity in Federated Learning for Intrusion Detection Systems
  • TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation
Copyright © 2025 Flower Labs GmbH