Best LLM for Function Calling
Compare models for reliable tool use, function selection, and multi-step API orchestration.
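"Function calling" here means the model emits a structured call (a tool name plus JSON arguments) that your code validates and executes. As a minimal sketch of what the rankings below measure, here is a hypothetical tool schema and dispatcher in the JSON-schema style most providers use; the tool name, fields, and stubbed lookup are illustrative, not tied to any one API.

```python
import json

# Hypothetical tool schema in the common JSON-schema style used by
# function-calling APIs (names and fields are illustrative).
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stubbed lookup; a real tool would call a weather service here.
    return {"city": city, "temp_c": 21}

# Local implementations that model-emitted tool calls are routed to.
TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Run the function a model-emitted tool call names, with its arguments."""
    fn = TOOL_REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # args typically arrive as a JSON string
    return json.dumps(fn(**args))

# Simulate what a model might emit when asked about the weather.
model_tool_call = {"name": "get_weather", "arguments": '{"city": "Lisbon"}'}
print(dispatch(model_tool_call))  # {"city": "Lisbon", "temp_c": 21}
```

A model scores well on this task when it reliably picks the right tool and emits arguments that parse and validate against the schema; the dispatcher above is where malformed calls would surface.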
#1 Recommendation
anthropic/claude-sonnet-4.6
Strong on OpenHands Issue Resolution (issue_resolution_score_pct: 72%) and Vals SWE-bench (overall_accuracy_pct: 95%).
Score: 16.5%
Confidence: 29.6%
Evidence: 26
Ranked Models: 25
Evidence Quality: 80%
Scoring: Benchmark-backed
Top Signal: OpenHands Issue Resolution (issue_resolution_score_pct)
All Ranked Models
| Rank | Model | Score |
|---|---|---|
| #5 | anthropic/claude-sonnet-4.6 | 16.5% |
| #8 | kimi/kimi-k2.5-thinking | 14.8% |
| #11 | GLM-5 | 13.8% |
| #12 | gpt-4o | 13.4% |
| #13 | gemini-3-pro-preview | 12.8% |
| #15 | Kimi K2 Thinking | 12.6% |
| #16 | gpt-4.1-20250414 | 12.0% |
| #18 | gemini-2.5-pro | 11.1% |
| #19 | Grok-4-0709 | 11.0% |
| #21 | claude-sonnet-4-20250514 | 11.0% |
| #22 | minimax/minimax-m2.1 | 10.9% |
| #23 | qwen-2.5-72b-instruct | 10.6% |
| #24 | claude-opus-4-5-20251101 | 10.2% |
| #25 | gpt-5.2-2025-12-11 | 9.8% |
| #26 | gpt-4.1-mini-20250414 | 8.8% |
| #29 | gpt-4o-2024-08-06 | 8.7% |
| #30 | z-ai/glm-4.7 | 8.2% |
| #31 | gpt-5-2025-08-07 | 8.2% |
| #33 | deepseek/deepseek-r1 | 7.8% |
| #34 | gpt-4o-20241120 | 7.6% |
| #35 | o3-20250416 | 7.5% |
| #36 | gpt-4o-2024-05-13 | 6.9% |
| #37 | GLM-4.7 | 6.8% |
| #41 | GPT-4.1-nano-2025-04-14 | 4.0% |
| #42 | openai/gpt-4o-mini-2024-07-18 | 3.8% |
Head-to-Head: Top Two Picks
#5 anthropic/claude-sonnet-4.6 (Top Pick)
Strong on OpenHands Issue Resolution (issue_resolution_score_pct: 72%) and Vals SWE-bench (overall_accuracy_pct: 95%).
Confidence: 29.6%

#8 kimi/kimi-k2.5-thinking
Strong on Vals LiveCodeBench (overall_accuracy_pct: 94%) and Vals SWE-bench (overall_accuracy_pct: 83%).
Confidence: 30.4%
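The "multi-step API orchestration" these rankings evaluate means feeding each tool result back to the model so it can chain further calls before answering. As a minimal sketch of that loop, here is a scripted stand-in model (everything here — tool names, the scripted behavior, message shapes — is hypothetical; a real orchestrator would call an LLM API instead of `fake_model`).

```python
import json

# Scripted stand-in for a model: given the transcript so far, emit either a
# tool call or a final answer. A real orchestrator would call an LLM here.
def fake_model(messages: list[dict]) -> dict:
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        # Step 1: the "model" first asks where the user is.
        return {"tool_call": {"name": "find_city", "arguments": "{}"}}
    if len(tool_results) == 1:
        # Step 2: chain the first result into a second call.
        city = json.loads(tool_results[0]["content"])["city"]
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": city})}}
    # Step 3: enough information gathered; answer.
    temp = json.loads(tool_results[1]["content"])["temp_c"]
    return {"final": f"It is {temp}°C."}

# Stubbed local tools the orchestrator can execute.
TOOLS = {
    "find_city": lambda: {"city": "Lisbon"},
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def run(messages: list[dict]) -> str:
    # Loop: let the model act until it stops calling tools.
    while True:
        step = fake_model(messages)
        if "final" in step:
            return step["final"]
        call = step["tool_call"]
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "tool", "content": json.dumps(result)})

print(run([{"role": "user", "content": "What's the weather where I am?"}]))
# It is 21°C.
```

Benchmarks like the ones above reward models that keep this loop short and correct: picking the right next tool from prior results rather than looping, hallucinating tool names, or emitting arguments that fail to parse.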
Related Lookups
- Best LLM for Code Generation: benchmark-backed ranking of models for generating correct, secure code from requirements.
- Best LLM for Debugging: find the top-ranked models for localizing bugs and proposing fixes with explanations.
- Best LLM for Unit Test Generation: ranked models for generating meaningful unit tests and edge cases from code.
- Best LLM for Code Review: compare models for automated PR review covering correctness, security, and maintainability.
- Best LLM for Autonomous Coding: benchmark-backed ranking of models for end-to-end autonomous software engineering and issue resolution.
- Best LLM for Refactoring: ranked models for safely refactoring code while preserving behavior and improving clarity.