Model Profile
openai/gpt-4.1
Use this page to decide where this model is a strong fit. The rankings below are benchmark-backed per use case, with explicit confidence and contributor metrics.
Identity
ID: external/openai/gpt-4-1
Author: openai
Origin: external_benchmark_shadow
Arch: unknown
Benchmark Coverage
Scored use cases: 12
Avg confidence: 23.5%
Evidence points: 176
Raw rows: 104
Weighted rows: 24
Catalog Metadata
Parameters: unknown
Context window: 4096
Downloads: 0
Intelligence Profile
Dimension Breakdown: 3/5 dimensions scored · Last updated Mar 17, 2026
* Low confidence — limited benchmark evidence for this dimension
Some fit rows have limited benchmark evidence: 10 of 12 scored use cases have low confidence or thin contributor coverage.
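The caveat above can be checked mechanically if per-use-case confidence and contributor counts are available. A minimal sketch, assuming a hypothetical record schema and illustrative cutoffs — neither the 30% confidence threshold nor the 2-source threshold is documented on this page, and the sample values below are made up for illustration:

```python
# Hypothetical per-use-case records: (use_case_id, confidence, weighted_sources).
# Both thresholds and all values below are illustrative assumptions,
# not numbers taken from this profile.
RECORDS = [
    ("use_case.history.archaic_translation", 0.268, 3),
    ("use_case.legal.legal_translation", 0.251, 1),
    ("use_case.eda.verilog_generation", 0.219, 2),
]

CONF_THRESHOLD = 0.30    # assumed cutoff for "low confidence"
SOURCE_THRESHOLD = 2     # assumed cutoff for "thin contributor coverage"


def flag_low_evidence(records):
    """Return IDs of use cases with low confidence or thin contributor coverage."""
    return [
        uc for (uc, conf, sources) in records
        if conf < CONF_THRESHOLD or sources < SOURCE_THRESHOLD
    ]


flagged = flag_low_evidence(RECORDS)
print(f"{len(flagged)} of {len(RECORDS)} use cases flagged")
```

With real per-use-case data and the platform's actual thresholds, the same filter would reproduce the "10 of 12" figure quoted above.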
Coverage Diagnostics
Use-Case Scores (actively scored): 105
Total Measurements: 104
Weighted Measurements: 24
Weighted Sources: 14
Raw Source Coverage and Weighted Source Coverage: per-source breakdowns not reproduced here
Best Use Cases for This Model
| Use Case | ID | Score |
|---|---|---|
| Archaic and historical translation | use_case.history.archaic_translation | 26.8% |
| Legal translation | use_case.legal.legal_translation | 25.1% |
| Verilog/VHDL generation | use_case.eda.verilog_generation | 21.9% |
| Historical document summarization | use_case.history.historical_doc_summarization | 21.0% |
| Brand voice localization | use_case.mkt.brand_voice_localization | 20.9% |
| Integration test generation | use_case.dev.integration_tests | 18.5% |
| Metric definition workshop | use_case.data.metric_definition_workshop | 17.8% |
| Simulation setup assistant | use_case.eng.simulation_setup_assistant | 17.1% |
| Grammar and writing coach | use_case.lang.grammar_coach | 16.9% |
| Documentation from code | use_case.dev.docstrings_and_docs | 16.5% |
| Contract term extraction | use_case.legal.contract_term_extraction | 16.4% |
| Clause playbook check | use_case.legal.playbook_clause_check | 16.4% |
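For programmatic use, the table above is just ranked score data. A minimal sketch of filtering it by score — the dict copies the top rows of the table (as fractions rather than percentages), while the 20% cutoff is an arbitrary assumption, not a threshold defined by this profile:

```python
# Scores copied from the "Best Use Cases" table above, as fractions.
SCORES = {
    "use_case.history.archaic_translation": 0.268,
    "use_case.legal.legal_translation": 0.251,
    "use_case.eda.verilog_generation": 0.219,
    "use_case.history.historical_doc_summarization": 0.210,
    "use_case.mkt.brand_voice_localization": 0.209,
    "use_case.dev.integration_tests": 0.185,
}


def top_use_cases(scores, min_score=0.20):
    """Rank use cases by score, keeping those at or above min_score.

    min_score is an illustrative cutoff, not part of the profile.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(uc, s) for uc, s in ranked if s >= min_score]


for uc, score in top_use_cases(SCORES):
    print(f"{score:.1%}  {uc}")
```

Lowering `min_score` widens the shortlist; given the page-wide average confidence of 23.5%, any such cutoff should be read alongside the per-use-case confidence caveats above.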