BasedAGI

developer_tools › Unit test generation

Generate meaningful unit tests and edge cases.
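To illustrate the kind of output this use case targets, here is a minimal sketch of generated pytest-style unit tests for a hypothetical `clamp` helper (the function and its tests are illustrative assumptions, not output from any ranked model):

```python
# Hypothetical function under test; purely illustrative.
def clamp(value, lo, hi):
    """Restrict value to the inclusive range [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

# Generated-style tests: one happy path plus the edge cases a good
# test-generation model should cover (boundaries, out-of-range, invalid input).
def test_clamp_within_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_at_boundaries():
    assert clamp(0, 0, 10) == 0    # lower bound is inclusive
    assert clamp(10, 0, 10) == 10  # upper bound is inclusive

def test_clamp_outside_range():
    assert clamp(-3, 0, 10) == 0
    assert clamp(99, 0, 10) == 10

def test_clamp_invalid_range():
    try:
        clamp(1, 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for inverted range")
```

The edge-case tests (boundaries, inverted range) are what separates "meaningful" generated tests from trivial happy-path ones.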

#1 Recommendation

gpt-4o-2024-05-13 (external/openai/gpt-4o-2024-05-13)

Score: 19.2% · Confidence: 23.9%

Strong on RepoQA Official Results overall_average_pass_at_1_pct (99%) and RepoQA Official Results all_average_pass_at_1_pct (99%).

Limited benchmark evidence for this use case: 53 ranked models with an average of 12.6 evidence points each. Rankings may shift as more benchmark data is ingested.

Ranked Models: 30
Evidence Quality: 80%
Scoring: Benchmark-backed
Top Signal: RepoQA Official Results: overall_average_pass_at_1_pct

All Ranked Models (30 of 30 shown)

Rank  Model                                          Score
#2    gpt-4o-2024-05-13                              19.2%
#5    gpt-4.1-20250414                               17.4%
#6    gemini-3-pro-preview                           16.7%
#7    google/gemini-3.1-pro-preview                  16.3%
#8    Grok-4-0709                                    16.1%
#10   openai/gpt-5.4-2026-03-05                      15.3%
#12   claude-sonnet-4-20250514                       15.1%
#13   anthropic/claude-sonnet-4.6                    14.9%
#14   z-ai/glm-4.7                                   14.9%
#15   anthropic/claude-opus-4-6-thinking             14.7%
#17   claude-opus-4-5-20251101                       14.5%
#18   gemini-3-flash-preview                         14.3%
#19   gpt-5-2025-08-07                               14.3%
#20   minimax/minimax-m2.1                           14.2%
#21   gpt-5.2-2025-12-11                             14.2%
#22   gpt-5.1-2025-11-13                             14.2%
#23   anthropic/claude-opus-4-5-20251101-thinking    14.1%
#24   Kimi K2 Thinking                               14.0%
#27   Meta-Llama-3-70B-Instruct                      12.8%
#28   kimi/kimi-k2.5-thinking                        12.8%
#29   gpt-4o-20241120                                12.7%
#31   anthropic/claude-sonnet-4-5-20250929-thinking  12.4%
#34   gpt-4o-2024-08-06                              12.0%
#35   gemini-2.5-pro                                 12.0%
#36   deepseek/deepseek-r1                           11.6%
#38   zai/glm-5-thinking                             11.5%
#39   qwen-2.5-72b-instruct                          11.5%
#40   xai-org/grok-4-fast-reasoning                  11.2%
#41   google/gemini-3.1-flash-lite-preview           11.1%
#43   Meta-Llama-3-8B-Instruct                       11.1%

Compare Models

Model A leads by +1.9%


Model A

gpt-4o-2024-05-13

external/openai/gpt-4o-2024-05-13

19.2%

Rank #2

Confidence: 23.9% · 9 evidence pts

RepoQA Official Results: overall_average_pass_at_1_pct

Value 99.3% · Conf 100.0% · Weight 4.1%

repoqa_leaderboard.overall_average_pass_at_1_pct (Mar 12, 2026)

RepoQA Official Results: all_average_pass_at_1_pct

Value 99.3% · Conf 100.0% · Weight 1.8%

repoqa_leaderboard.all_average_pass_at_1_pct (Mar 12, 2026)

Aider Code Editing Leaderboard: percent_correct_pct

Value 82.3% · Conf 100.0% · Weight 1.4%

aider_code_editing.percent_correct_pct (Mar 12, 2026)

Aider Code Editing Leaderboard: correct_edit_format_pct

Value 85.7% · Conf 100.0% · Weight 0.7%

aider_code_editing.correct_edit_format_pct (Mar 12, 2026)

Model B

gpt-4.1-20250414

external/openai/gpt-4-1-20250414

17.4%

Rank #5

Confidence: 26.4% · 18 evidence pts

Galileo Agent Leaderboard v2: Avg AC

Value 100.0% · Conf 100.0% · Weight 2.4%

galileo_agent_v2.avg_ac (Mar 12, 2026)

MMLongBench-Doc Leaderboard: acc_score_pct

Value 74.6% · Conf 100.0% · Weight 1.0%

mmlongbench_doc_leaderboard.acc_score_pct (Mar 12, 2026)

Vals SWE-bench: overall_accuracy_pct

Value 47.7% · Conf 100.0% · Weight 0.8%

vals_swebench.overall_accuracy_pct (Mar 12, 2026)

Vals LiveCodeBench: overall_accuracy_pct

Value 53.5% · Conf 100.0% · Weight 0.8%

vals_lcb.overall_accuracy_pct (Mar 12, 2026)
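The per-benchmark lines above each carry a Value, a Conf, and a Weight, which suggests a weighted-evidence aggregate. The site's exact formula is not published on this page, so the following is only a sketch, assuming the score is the weight-normalized sum of value × confidence over a model's evidence entries, using Model A's four listed entries as data:

```python
# Sketch of a weighted-evidence aggregate. The formula (weight-normalized
# sum of value * confidence) is an assumption, not the site's documented
# method. Entries are Model A's four listed evidence lines, as fractions.
evidence = [
    # (value, confidence, weight)
    (0.993, 1.0, 0.041),  # RepoQA overall_average_pass_at_1_pct
    (0.993, 1.0, 0.018),  # RepoQA all_average_pass_at_1_pct
    (0.823, 1.0, 0.014),  # Aider percent_correct_pct
    (0.857, 1.0, 0.007),  # Aider correct_edit_format_pct
]

def weighted_score(entries):
    """Weight-normalized sum of value * confidence over evidence entries."""
    num = sum(v * c * w for v, c, w in entries)
    den = sum(w for _, _, w in entries)
    return num / den if den else 0.0

print(f"{weighted_score(evidence):.1%}")  # prints 95.1%
```

Note that this normalized figure cannot match the page's 19.2% overall score for Model A, which evidently folds in many more signals (and unlisted weights) than the four entries shown here.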

Ranking Diagnostics & Missing Models

Ranked: 53 · Sources: 8 · Quality: Insufficient

Source Lift

Source              Key               Rows  Avg Lift
Vals LiveCodeBench  vals_lcb          39    1.1%
Vals Legal Bench    vals_legal_bench  39    0.3%
Vals Tax Eval v2    vals_tax_eval_v2  39    0.3%
Vals GPQA           vals_gpqa         38    0.2%

Missing Strong Models

google/gemini-2.0-flash-001 (external/google/gemini-2-0-flash-001): Rank #56 · 10.3% · Thin evidence after weighting
deepseek-v3 (external/deepseek-ai/deepseek-v3): Rank #66 · 8.8% · Thin evidence after weighting
Llama-4-Scout-17B-16E-Instruct (external/meta/llama-4-scout-17b-16e-instruct): Rank #73 · 8.4% · Thin evidence after weighting

Taxonomy Details

Core Tasks: task.test_generation_unit
Required Modes: none
Domains: domain.software_engineering

Related Use Cases