Finance LLM Leaderboard
Embrace Federated LLM Fine-Tuning and Secure Your Spot on the Leaderboard!
| Rank | Team | Base Model Size | Comm. Costs | Average (↑) | FPB | FIQA | TFNS | Code | Date |
|------|------|-----------------|-------------|-------------|-----|------|------|------|------|
| 1 | Baseline | 7B | 101.6 GB | 45.27 | 44.55 | 62.50 | 28.77 | link | 01.10.24 |
The finance sector depends heavily on accurate and trustworthy data interpretation, and the risks of misinterpretation are significant. Federated fine-tuning of large language models in finance aims to adapt these models to understand and predict market trends, support regulatory compliance, and analyze financial reports. Domain-specific tuning in a federated setting lets institutions collaborate on model development without exposing sensitive financial data, in line with stringent industry regulations on data security and privacy. This approach is crucial for deploying AI tools that enhance decision-making and operational efficiency in finance while safeguarding client confidentiality.
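To make the collaboration pattern concrete, here is a minimal, framework-agnostic sketch of one federated fine-tuning round in which clients train locally on their private data and a server averages only small adapter tensors (FedAvg). All names in the sketch (`NUM_CLIENTS`, `ADAPTER_SHAPE`, `local_update`) are illustrative assumptions, not part of the leaderboard's actual training pipeline; it is meant only to show why communication costs stay far below the size of the 7B base model.

```python
# Minimal, framework-agnostic sketch of one federated fine-tuning round.
# All names (NUM_CLIENTS, ADAPTER_SHAPE, local_update) are illustrative
# assumptions, not the leaderboard's actual pipeline.
import numpy as np

NUM_CLIENTS = 4            # e.g. four financial institutions
ADAPTER_SHAPE = (64, 16)   # tiny stand-in for LoRA adapter weights

def local_update(global_adapter: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for local fine-tuning on a client's private financial data.

    In a real setup each client would run a few steps of LoRA fine-tuning
    on its own documents; here we only perturb the adapter weights.
    """
    return global_adapter + 0.01 * rng.standard_normal(global_adapter.shape)

def federated_round(global_adapter: np.ndarray, seed: int = 0) -> np.ndarray:
    """One FedAvg round: clients train locally, the server averages the results.

    Only the small adapter tensors travel between clients and server; raw data
    and the full base model never leave the institutions, which is what keeps
    the "Comm. Costs" column small relative to the 7B base model.
    """
    rng = np.random.default_rng(seed)
    client_adapters = [local_update(global_adapter, rng) for _ in range(NUM_CLIENTS)]
    return np.mean(client_adapters, axis=0)

if __name__ == "__main__":
    adapter = np.zeros(ADAPTER_SHAPE, dtype=np.float32)
    for round_id in range(3):
        adapter = federated_round(adapter, seed=round_id)
    # Rough per-client upload per round for this toy adapter, in MB
    print(f"Toy adapter communicated per client per round: {adapter.nbytes / 1e6:.3f} MB")
```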