Code LLM Leaderboard
Embrace Federated LLM Fine-Tuning and Secure Your Spot on the Leaderboard!
| Rank | Team | Base Model Size | Comm. Costs | Average (↑) | MBPP | HumanEval | MultiPL-E (JS) | MultiPL-E (C++) | Code | Date |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Baseline | 7B | 40.7 GB | 27.36 | 31.60 | 23.78 | 28.57 | 25.47 | link | 01.10.24 |
Software development and programming are increasingly complex and diverse, requiring tools that understand code context, syntax, and semantics. Federated LLM fine-tuning on coding tasks enables the collaborative improvement of models that assist with code generation, bug fixing, and programming education across a range of languages and development environments. By training models across a federation of data sources drawn from different coding projects and repositories, we ensure that the resulting coding assistants are versatile and sensitive to the subtleties of varied programming paradigms and practices.
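To make the setup concrete, below is a minimal sketch of one federated fine-tuning loop using FedAvg-style weighted averaging. Everything here is illustrative: the toy model, the `local_finetune` stand-in, and the client datasets are assumptions for exposition, not the leaderboard's actual training stack (a real client would run gradient steps on an LLM over its own code corpus).

```python
import numpy as np

def local_finetune(weights, dataset, lr=1e-4):
    # Stand-in for a client's local fine-tuning pass over its own
    # code corpus; a mock gradient step replaces real LLM training.
    fake_grads = [np.random.randn(*w.shape) for w in weights]
    return [w - lr * g for w, g in zip(weights, fake_grads)]

def fedavg(client_weights, client_sizes):
    # Server-side aggregation: average the clients' updated weights,
    # weighted by how many training examples each client holds.
    total = sum(client_sizes)
    return [
        sum((n / total) * cw[layer] for cw, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy "model" and three clients with differently sized code corpora.
global_weights = [np.zeros((8, 8)), np.zeros(8)]
client_datasets = [list(range(100)), list(range(250)), list(range(50))]

for round_num in range(3):  # communication rounds
    updates = [local_finetune(global_weights, d) for d in client_datasets]
    global_weights = fedavg(updates, [len(d) for d in client_datasets])
```

Each round exchanges model parameters between the clients and the server rather than raw code data; that per-round traffic is presumably what the Comm. Costs column above accounts for.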