BasedAGI

devops_sre

RCA writeup

Draft root cause analysis documents with concrete prevention actions.

#1 Recommendation

gpt-4.1-20250414

Strong on Galileo Agent Leaderboard v2 Avg AC (100%) and MMLongBench-Doc Leaderboard acc_score_pct (75%)

external/openai/gpt-4-1-20250414

Score: 27.4%

Confidence: 35.6%

Limited benchmark evidence for this use case.

57 ranked models with average evidence of 11.8 points. Rankings may shift as more benchmark data is ingested.

Ranked Models: 30

Evidence Quality: 79%

Scoring: Benchmark-backed

Top Signal: Galileo Agent Leaderboard v2: Avg AC

All Ranked Models

Showing 30 of 30 models
Rank · Model · Score

#1  gpt-4.1-20250414  27.4%
Strong on Galileo Agent Leaderboard v2 Avg AC (100%) and MMLongBench-Doc Leaderboard acc_score_pct (75%)

#2  claude-sonnet-4-20250514  19.0%
Strong on Galileo Agent Leaderboard v2 Avg AC (85%) and Galileo Agent Leaderboard v2 Avg TSQ (95%)

#3  gemini-2.5-pro  18.2%
Strong on Galileo Agent Leaderboard v2 Avg AC (59%) and Galileo Agent Leaderboard v2 Avg TSQ (79%)

#4  Grok-4-0709  18.0%
#5  gemini-3-pro-preview  15.9%
#6  kimi/kimi-k2.5-thinking  15.5%
#7  gpt-4.1-mini-20250414  14.7%
#8  google/gemini-3.1-pro-preview  14.4%
#9  gemini-2.5-flash  13.9%
#10  gpt-5-2025-08-07  13.3%
#11  openai/gpt-5.4-2026-03-05  13.0%
#12  gpt-5.1-2025-11-13  12.7%
#13  anthropic/claude-sonnet-4.6  12.6%
#14  claude-opus-4-5-20251101  12.5%
#15  gpt-4o-20241120  12.5%
#16  gpt-5-mini-2025-08-07  12.2%
#17  qwen-2.5-72b-instruct  12.0%
#18  anthropic/claude-opus-4-6-thinking  11.9%
#19  gemini-3-flash-preview  11.9%
#20  gpt-5.2-2025-12-11  11.7%
#21  anthropic/claude-opus-4-5-20251101-thinking  11.5%
#24  anthropic/claude-sonnet-4-5-20250929-thinking  10.5%
#25  xai-org/grok-4-fast-reasoning  10.4%
#26  gpt-4o  10.2%
#28  o3-20250416  9.9%
#30  google/gemini-3.1-flash-lite-preview  9.9%
#32  xai-org/grok-4-1-fast-reasoning  9.8%
#34  openai/gpt-4o-mini-2024-07-18  9.6%
#35  Kimi-K2-Instruct  9.2%
#38  Llama-2-7b-chat-hf  8.8%

Compare Models

Model A leads by +8.3%


Model A

gpt-4.1-20250414

external/openai/gpt-4-1-20250414

27.4%

Rank #1

Confidence 35.6% · 18 evidence pts

Galileo Agent Leaderboard v2: Avg AC

Value 100.0% · Conf 100.0% · Weight 5.4%

galileo_agent_v2.avg_ac (Mar 12, 2026)

MMLongBench-Doc Leaderboard: acc_score_pct

Value 74.6% · Conf 100.0% · Weight 4.4%

mmlongbench_doc_leaderboard.acc_score_pct (Mar 12, 2026)

Galileo Agent Leaderboard v2: Avg TSQ

Value 64.1% · Conf 100.0% · Weight 0.9%

galileo_agent_v2.avg_tsq (Mar 12, 2026)

Vectara HHEM Leaderboard: overall_hallucination_error_pct

Value 82.5% · Conf 100.0% · Weight 0.7%

vectara_hhem_leaderboard.overall_hallucination_error_pct (Mar 12, 2026)

Model B

claude-sonnet-4-20250514

external/anthropic/claude-sonnet-4-20250514

19.0%

Rank #2

Confidence 26.1% · 17 evidence pts

Galileo Agent Leaderboard v2: Avg AC

Value 84.8% · Conf 100.0% · Weight 4.6%

galileo_agent_v2.avg_ac (Mar 12, 2026)

Galileo Agent Leaderboard v2: Avg TSQ

Value 94.9% · Conf 100.0% · Weight 1.4%

galileo_agent_v2.avg_tsq (Mar 12, 2026)

Vals Legal Bench: overall_accuracy_pct

Value 90.6% · Conf 100.0% · Weight 0.6%

vals_legal_bench.overall_accuracy_pct (Mar 12, 2026)

Vals MedQA: overall_accuracy_pct

Value 88.4% · Conf 100.0% · Weight 0.6%

vals_medqa.overall_accuracy_pct (Mar 12, 2026)
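Each evidence row above pairs a normalized benchmark Value with a Conf and a Weight. The site does not publish its aggregation formula, but a plausible sketch is a confidence-weighted sum in which each signal contributes value × conf × weight. Note that only the top signals are displayed (Model A has 18 evidence points in total), so the partial sum below will not reproduce the full 27.4% score:

```python
# Hypothetical evidence aggregation: score = sum(value * conf * weight).
# This formula is an assumption, not confirmed by the site.

def weighted_score(evidence):
    """evidence: list of (value, conf, weight) tuples, all as fractions."""
    return sum(value * conf * weight for value, conf, weight in evidence)

# Top displayed signals for gpt-4.1-20250414 (Model A), as fractions:
model_a_signals = [
    (1.000, 1.00, 0.054),  # Galileo Agent Leaderboard v2: Avg AC
    (0.746, 1.00, 0.044),  # MMLongBench-Doc Leaderboard: acc_score_pct
    (0.641, 1.00, 0.009),  # Galileo Agent Leaderboard v2: Avg TSQ
    (0.825, 1.00, 0.007),  # Vectara HHEM: overall_hallucination_error_pct
]

partial = weighted_score(model_a_signals)
print(f"partial score from displayed signals: {partial:.1%}")
```

The remaining 14 (undisplayed) evidence points would account for the gap between this partial sum and the reported 27.4%.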

Ranking Diagnostics & Missing Models

Source Lift

Ranked

57

Sources

8

Quality

Insufficient

Vals Tax Eval v2

vals_tax_eval_v2

41 rows

0.5% avg lift

Vals CorpFin v2

vals_corp_fin_v2

41 rows

0.5% avg lift

Vals Legal Bench

vals_legal_bench

40 rows

0.6% avg lift

Vals LiveCodeBench

vals_lcb

40 rows

0.5% avg lift
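The "avg lift" figures above presumably measure how much each source moves model scores. The site does not define the metric; one hedged interpretation is the mean absolute change in a model's score when the source's rows are included versus excluded. A minimal sketch with made-up scores:

```python
# Hypothetical "avg lift" of a benchmark source: mean absolute score change
# across models when the source is included vs. excluded.
# This interpretation is an assumption; the site does not define the metric.

def avg_lift(scores_with, scores_without):
    """Both args: dicts mapping model id -> score (as fractions)."""
    deltas = [abs(scores_with[m] - scores_without[m]) for m in scores_with]
    return sum(deltas) / len(deltas)

# Made-up example scores for three models:
with_source = {"model-a": 0.274, "model-b": 0.190, "model-c": 0.182}
without_source = {"model-a": 0.269, "model-b": 0.186, "model-c": 0.176}

print(f"avg lift: {avg_lift(with_source, without_source):.1%}")
```

Under this reading, a 0.5% avg lift means the source barely reorders the ranking on its own.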

Missing Strong Models

Llama-4-Scout-17B-16E-Instruct

external/meta/llama-4-scout-17b-16e-instruct

Rank #73

8.4%

Thin evidence after weighting

Taxonomy Details

Core Tasks

task.root_cause_analysis_writeup · task.risk_assessment

Required Modes

mode.long_context

Domains

domain.devops_sre

Related Use Cases