BasedAGI Rankings

Category: customer_experience

Toxicity moderation routing

Classify abusive content for moderation and escalation.

#1 Recommendation

gemini-2.5-pro

Strong on Galileo Agent Leaderboard v2 Avg AC (59%) and FACTS Benchmark Suite facts_grounding_score_pct (100%)

external/google/gemini-2-5-pro

Score: 24.0%
Confidence: 37.3%

Limited benchmark evidence for this use case.

50 ranked models with average evidence of 13.9 points. Rankings may shift as more benchmark data is ingested.

Ranked Models: 30
Evidence Quality: 81%
Scoring: Benchmark-backed
Top Signal: Galileo Agent Leaderboard v2: Avg AC

All Ranked Models

Showing 30 of 30 models
Rank · Model · Score
#1 gemini-2.5-pro · 24.0%
   Strong on Galileo Agent Leaderboard v2 Avg AC (59%) and FACTS Benchmark Suite facts_grounding_score_pct (100%)
#2 gemini-3-pro-preview · 23.2%
   Strong on FACTS Benchmark Suite facts_grounding_score_pct (88%) and Vals Finance Agent overall_accuracy_pct (87%)
#3 gpt-4.1-20250414 · 21.0%
   Strong on Galileo Agent Leaderboard v2 Avg AC (100%) and Vectara HHEM Leaderboard overall_hallucination_error_pct (82%)
#4 Grok-4-0709 · 19.0%
#5 claude-sonnet-4-20250514 · 17.9%
#6 anthropic/claude-sonnet-4.6 · 17.8%
#7 gpt-5-mini-2025-08-07 · 17.5%
#8 gemini-2.5-flash · 16.9%
#9 gpt-5-2025-08-07 · 16.6%
#10 google/gemini-3.1-pro-preview · 16.6%
#11 openai/gpt-5.4-2026-03-05 · 15.8%
#12 gpt-5.1-2025-11-13 · 14.4%
#13 qwen-2.5-72b-instruct · 14.1%
#14 claude-opus-4-5-20251101 · 13.8%
#15 gpt-4o · 13.6%
#16 gemini-3-flash-preview · 12.8%
#17 google/gemini-3.1-flash-lite-preview · 12.7%
#18 xai-org/grok-4-fast-reasoning · 12.5%
#20 gpt-5.2-2025-12-11 · 12.3%
#25 anthropic/claude-opus-4-6-thinking · 11.8%
#26 xai-org/grok-4-1-fast-reasoning · 11.6%
#34 anthropic/claude-opus-4-5-20251101-thinking · 11.0%
#40 Qwen3-Embedding-4B · 10.9%
#47 gpt-4.1-mini-20250414 · 10.6%
#50 kimi/kimi-k2.5-thinking · 10.6%
#56 anthropic/claude-sonnet-4-5-20250929-thinking · 10.4%
#76 gpt-4o-2024-08-06 · 9.6%
#85 anthropic/claude-opus-4-1-20250805 · 9.6%
#94 deepseek-v3 · 9.5%
#124 mistralai/mistral-large-2512 · 9.0%

Compare Models

Model A leads by +0.8%


Model A

gemini-2.5-pro

external/google/gemini-2-5-pro

24.0%

Rank #1

Confidence 37.3% · 23 evidence pts

Galileo Agent Leaderboard v2: Avg AC

Value 58.7% · Conf 100.0% · Weight 2.5%

galileo_agent_v2.avg_ac (Mar 12, 2026)

FACTS Benchmark Suite: facts_grounding_score_pct

Value 100.0% · Conf 100.0% · Weight 2.1%

facts_benchmark_suite.facts_grounding_score_pct (Mar 12, 2026)

Vectara HHEM Leaderboard: overall_hallucination_error_pct

Value 76.0% · Conf 100.0% · Weight 2.0%

vectara_hhem_leaderboard.overall_hallucination_error_pct (Mar 12, 2026)

SciArena Leaderboard: rating_elo

Value 70.7% · Conf 100.0% · Weight 1.6%

sciarena_leaderboard.rating_elo (Mar 12, 2026)

Model B

gemini-3-pro-preview

external/google/gemini-3-pro-preview

23.2%

Rank #2

Confidence 32.2% · 22 evidence pts

FACTS Benchmark Suite: facts_grounding_score_pct

Value 88.3% · Conf 100.0% · Weight 1.9%

facts_benchmark_suite.facts_grounding_score_pct (Mar 12, 2026)

Vals Finance Agent: overall_accuracy_pct

Value 87.0% · Conf 100.0% · Weight 1.8%

vals_finance_agent.overall_accuracy_pct (Mar 12, 2026)

SciArena Leaderboard: rating_elo

Value 78.8% · Conf 100.0% · Weight 1.8%

sciarena_leaderboard.rating_elo (Mar 12, 2026)

FACTS Benchmark Suite: average_score_pct

Value 100.0% · Conf 100.0% · Weight 1.8%

facts_benchmark_suite.average_score_pct (Mar 12, 2026)
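Each evidence row above pairs a normalized benchmark value with a confidence and a weight. As a minimal sketch, assuming each row contributes value × confidence × weight to the total (the site's actual aggregation formula is not published here, so this helper is illustrative only), the four visible rows for gemini-2.5-pro could be combined like this:

```python
# Illustrative only: assumes score contributions of value * confidence * weight.
# The real BasedAGI scoring formula is not documented on this page.
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str        # e.g. "galileo_agent_v2.avg_ac"
    value: float       # normalized benchmark value, 0..1
    confidence: float  # 0..1
    weight: float      # 0..1


def aggregate_score(rows: list[Evidence]) -> float:
    """Sum of weighted, confidence-scaled evidence values."""
    return sum(r.value * r.confidence * r.weight for r in rows)


# The four evidence rows shown for gemini-2.5-pro above:
rows = [
    Evidence("galileo_agent_v2.avg_ac", 0.587, 1.0, 0.025),
    Evidence("facts_benchmark_suite.facts_grounding_score_pct", 1.0, 1.0, 0.021),
    Evidence("vectara_hhem_leaderboard.overall_hallucination_error_pct", 0.76, 1.0, 0.020),
    Evidence("sciarena_leaderboard.rating_elo", 0.707, 1.0, 0.016),
]
print(f"{aggregate_score(rows):.4f}")  # partial sum over the 4 visible rows only
```

Note that only 4 of the 23 evidence points are visible on this page, so the partial sum above does not reproduce the displayed 24.0% score.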

Ranking Diagnostics & Missing Models

Source Lift

Ranked: 50 · Sources: 8 · Quality: Insufficient

Vals MedQA (vals_medqa): 38 rows, 0.4% avg lift
Vals Legal Bench (vals_legal_bench): 37 rows, 0.4% avg lift
Vals Tax Eval v2 (vals_tax_eval_v2): 37 rows, 0.4% avg lift
Vals LiveCodeBench (vals_lcb): 35 rows, 0.4% avg lift

Missing Strong Models

google/gemini-2.0-flash-001 (external/google/gemini-2-0-flash-001): Rank #56, 10.3% · Thin evidence after weighting

Taxonomy Details

Core Tasks: task.toxicity_detection
Required Modes: mode.json_schema
Domains: domain.customer_support

Related Use Cases
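The taxonomy requires mode.json_schema, i.e. the model must return structured output for moderation routing. A hedged sketch of what such a schema and a lightweight validator might look like; the field names (is_toxic, severity, escalate, and the category enum) are hypothetical illustrations, not part of the BasedAGI taxonomy:

```python
# Hypothetical moderation-verdict schema; field names are illustrative only
# and are not taken from the BasedAGI taxonomy.
import json

TOXICITY_VERDICT_SCHEMA = {
    "type": "object",
    "properties": {
        "is_toxic": {"type": "boolean"},
        "categories": {
            "type": "array",
            "items": {
                "type": "string",
                "enum": ["harassment", "hate", "threat", "profanity"],
            },
        },
        "severity": {"type": "number", "minimum": 0, "maximum": 1},
        "escalate": {"type": "boolean"},
    },
    "required": ["is_toxic", "severity", "escalate"],
    "additionalProperties": False,
}


def is_valid_verdict(payload: str) -> bool:
    """Cheap structural check without external deps: required keys + types."""
    try:
        obj = json.loads(payload)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict):
        return False
    required = TOXICITY_VERDICT_SCHEMA["required"]
    return (
        all(k in obj for k in required)
        and isinstance(obj.get("is_toxic"), bool)
        and isinstance(obj.get("escalate"), bool)
        and isinstance(obj.get("severity"), (int, float))
    )


print(is_valid_verdict('{"is_toxic": true, "severity": 0.9, "escalate": true}'))
```

In production one would typically pass the schema directly to the provider's structured-output mode and validate with a full JSON Schema library rather than the minimal checker above.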