# Model Profile: qwen-2.5-coder7b-instruct
Use this page to decide where this model is a strong fit. The rankings below are benchmark-backed per use case, with explicit confidence and contributor metrics.
## Identity

- ID: `external/qwen/qwen-2-5-coder7b-instruct`
- Author: qwen
- Origin: `external_benchmark_shadow`
- Arch: unknown
## Benchmark Coverage

- Scored use cases: 12
- Avg confidence: 27.2%
- Evidence points: 107
- Raw rows: 114
- Weighted rows: 17
## Catalog Metadata

- Parameters: unknown
- Context window: 4096 tokens
- Downloads: 0
## Intelligence Profile

### Dimension Breakdown

\* Low confidence: limited benchmark evidence for this dimension.

5/5 dimensions scored · Last updated Apr 30, 2026
## Benchmark Signals

Each signal below cites the benchmark source behind this model profile.

| Source | Metric | Normalized Value | Confidence | Strongest Impact | Date |
|---|---|---|---|---|---|
| DuckDB NSQL Leaderboard | `duckdb_nsql_leaderboard.all_execution_accuracy` | 71.1% | 100.0% | Metric definition workshop | Apr 30, 2026 |
| JSONSchemaBench Leaderboard | `jsonschemabench_leaderboard.medium_schema_compliance_pct` | 82.6% | 100.0% | Metric definition workshop | Apr 30, 2026 |
| DuckDB NSQL Leaderboard | `duckdb_nsql_leaderboard.hard_execution_accuracy` | 50.0% | 100.0% | SQL debugging | Apr 30, 2026 |
| Open LLM Leaderboard MMLU-Pro | `openllm_mmlu_pro_official.mmlu_pro_accuracy_pct` | 37.4% | 100.0% | Claims summary | Apr 30, 2026 |
| JSONSchemaBench Leaderboard | `jsonschemabench_leaderboard.hard_schema_compliance_pct` | 66.9% | 100.0% | Metric definition workshop | Apr 30, 2026 |
| Open LLM Leaderboard IFEval | `openllm_ifeval_official.ifeval` | 68.3% | 100.0% | Tail spend categorization | Apr 30, 2026 |
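The signals above each pair a normalized value with a confidence weight. The profile's actual scoring formula is not published here, but a minimal sketch of one plausible aggregation, a confidence-weighted mean, looks like this (the function name and formula are assumptions for illustration, not the pipeline's real method):

```python
# Hypothetical sketch: combine benchmark signals into a single
# confidence-weighted score. Values are the normalized signals listed
# above; the aggregation formula itself is an assumption.
signals = [
    ("duckdb_nsql_leaderboard.all_execution_accuracy", 0.711, 1.0),
    ("jsonschemabench_leaderboard.medium_schema_compliance_pct", 0.826, 1.0),
    ("duckdb_nsql_leaderboard.hard_execution_accuracy", 0.500, 1.0),
    ("openllm_mmlu_pro_official.mmlu_pro_accuracy_pct", 0.374, 1.0),
    ("jsonschemabench_leaderboard.hard_schema_compliance_pct", 0.669, 1.0),
    ("openllm_ifeval_official.ifeval", 0.683, 1.0),
]

def weighted_score(rows):
    """Confidence-weighted mean of normalized signal values."""
    total_weight = sum(conf for _, _, conf in rows)
    return sum(val * conf for _, val, conf in rows) / total_weight

print(f"{weighted_score(signals):.1%}")  # → 62.7%
```

With all confidences at 100%, this reduces to a plain mean; signals with lower confidence would pull proportionally less weight.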
Some fit rows have limited benchmark evidence: 5 of the 12 scored use cases have low confidence or thin contributor coverage.
## Coverage Diagnostics

- Use-Case Scores (actively scored): 148
- Total Measurements: 114
- Weighted Measurements: 17
- Weighted Sources: 10

(Charts not reproduced here: Raw Source Coverage, Weighted Source Coverage.)
## Best Use Cases for This Model

| Use Case | ID | Score |
|---|---|---|
| Metric definition workshop | `use_case.data.metric_definition_workshop` | 19.9% |
| SQL debugging | `use_case.data.sql_debugging` | 16.1% |
| Data quality assistant | `use_case.data.data_quality_assistant` | 16.1% |
| Insight mining from text corpora | `use_case.data.insight_mining` | 15.7% |
| Executive brief from metrics | `use_case.data.exec_brief_from_metrics` | 14.8% |
| Text-to-SQL analyst assistant | `use_case.data.text_to_sql` | 14.1% |
| Candidate summary memo | `use_case.hr.candidate_summary` | 11.2% |
| Claims summary | `use_case.ins.claims_summary` | 10.7% |
| Tail spend categorization | `use_case.proc.tail_spend_categorization` | 10.7% |
| Simulation setup assistant | `use_case.eng.simulation_setup_assistant` | 10.7% |
| Interview question bank | `use_case.hr.interview_question_bank` | 10.4% |
| Resume structuring | `use_case.hr.resume_structuring` | 10.2% |
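When consuming these fit scores programmatically, a simple pattern is to rank them and keep only those above a cutoff. A hedged sketch, using the data-domain rows from the table above (the 0.15 cutoff is an illustrative assumption, not part of the published profile):

```python
# Hypothetical sketch: rank use-case fit scores and keep those at or
# above an assumed cutoff. IDs and scores come from the table above.
scores = {
    "use_case.data.metric_definition_workshop": 0.199,
    "use_case.data.sql_debugging": 0.161,
    "use_case.data.data_quality_assistant": 0.161,
    "use_case.data.insight_mining": 0.157,
    "use_case.data.exec_brief_from_metrics": 0.148,
    "use_case.data.text_to_sql": 0.141,
}

def top_fits(score_map, cutoff=0.15):
    """Return (use_case_id, score) pairs at or above cutoff, best first."""
    ranked = sorted(score_map.items(), key=lambda kv: kv[1], reverse=True)
    return [(uc, s) for uc, s in ranked if s >= cutoff]

for uc, s in top_fits(scores):
    print(f"{uc}: {s:.1%}")
```

Note that even the top score here is ~20%, consistent with the low average confidence reported under Benchmark Coverage, so any cutoff should be chosen conservatively.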