

Evaluation Leaderboard
Explore Interactive Results & Traces
Powered by Inspect and Inspect Evals, the Vector Evaluation Leaderboard presents an evaluation of leading frontier models across a comprehensive suite of benchmarks. Go beyond the summary metrics: click through to interactive reporting for each model and benchmark to explore sample-level performance and detailed traces.
Vector Institute
The Vector Institute is dedicated to advancing the field of artificial intelligence through cutting-edge research and application. Our mission is to drive excellence and innovation in AI, fostering a community of researchers, developers, and industry partners.
🎯 Benchmarks
This leaderboard showcases performance across a comprehensive suite of benchmarks designed to rigorously evaluate different aspects of AI model capabilities. Let's explore the benchmarks we use:
Inspect Evals
This leaderboard leverages Inspect Evals to power its evaluations. Inspect Evals is an open-source repository built on the Inspect AI framework. Developed collaboratively by the Vector Institute, Arcadia Impact, and the UK AI Security Institute, it provides a comprehensive suite of high-quality benchmarks spanning domains such as coding, mathematics, cybersecurity, reasoning, and general knowledge.
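As a rough illustration of how these evaluations can be reproduced, the sketch below uses the Inspect AI Python API to run one of the registered Inspect Evals tasks. The package install line, task identifier, model string, and `limit` setting are illustrative assumptions; check the Inspect Evals documentation for the exact names.

```python
# Minimal sketch (assumes `pip install inspect-ai inspect-evals` and a
# provider API key in the environment; task/model names are illustrative).
from inspect_ai import eval

# Tasks from the Inspect Evals repository are addressed by registry name,
# e.g. "inspect_evals/gsm8k" for the GSM8K benchmark listed below.
logs = eval(
    "inspect_evals/gsm8k",
    model="openai/gpt-4o",  # any model provider supported by Inspect
    limit=10,               # evaluate a small subset while experimenting
)

# Each run produces an Inspect log that can be explored sample by sample.
print(logs[0].results)
```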
Transparent and Detailed Insights
All evaluations presented on this leaderboard are run using Inspect Evals. To facilitate in-depth analysis and promote transparency, we provide Inspect Logs for every benchmark run. These logs offer sample- and trace-level reporting, allowing the community to explore the granular details of model performance.
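For readers who prefer to work with the published logs programmatically rather than through the interactive viewer, a sketch along the following lines, assuming Inspect AI's log-reading helpers and a local directory of downloaded logs, pulls out sample-level detail from a run. The directory path and field access are illustrative and should be checked against the current Inspect AI log schema.

```python
# Rough sketch: load downloaded Inspect logs and inspect individual samples.
# Field names follow the Inspect AI log schema and may vary across versions.
from inspect_ai.log import list_eval_logs, read_eval_log

for log_info in list_eval_logs("./logs"):      # assumed local log directory
    log = read_eval_log(log_info)              # full log, including samples
    print(log.eval.task, log.eval.model)       # which benchmark and model ran
    if log.samples:
        sample = log.samples[0]
        print(sample.input)                    # the prompt for this sample
        print(sample.scores)                   # per-scorer results
```

The same logs can also be opened in Inspect's built-in viewer (`inspect view`) for interactive, trace-level exploration.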
⚙️ Base Benchmarks
These benchmarks assess fundamental reasoning and knowledge capabilities of models.
| Benchmark | Description |
|---|---|
| ARC-Easy / ARC-Challenge | Grade-school-level multiple-choice science questions (easy and challenge splits). |
| DROP | Reading-comprehension benchmark requiring discrete reasoning over paragraphs. |
| WinoGrande | Large-scale commonsense pronoun-resolution challenge. |
| GSM8K | Grade-school math word problems testing multi-step mathematical reasoning. |
| HellaSwag | Commonsense sentence-completion task about everyday situations. |
| HumanEval | Programming problems evaluating code generation, scored against unit tests. |
| IFEval | Tests the ability to follow verifiable natural-language instructions. |
| MATH | Challenging problems sourced from mathematics competitions. |
| MMLU / MMLU-Pro | Multi-subject multiple-choice tests of advanced knowledge. |
| GPQA-Diamond | Graduate-level, "Google-proof" question answering assessing deep reasoning. |
| MMMU (Multi-Choice / Open-Ended) | Multi-modal (image + text) tasks evaluated in multiple-choice and open-ended formats. |
🚀 Agentic Benchmarks
These benchmarks go beyond basic reasoning and evaluate more advanced, autonomous, or "agentic" capabilities of models, such as planning and interaction.
| Benchmark | Description |
|---|---|
| GAIA | Evaluates autonomous reasoning, planning, and problem-solving on assistant-style question answering. |
| InterCode-CTF | Interactive capture-the-flag challenges testing practical cyber-security skills. |
| In-House-CTF | A complementary set of capture-the-flag challenges testing cyber-security skills. |
| AgentHarm / AgentHarm-Benign | Measures harmful behavior of LLM agents, with a benign-task baseline. |
| SWE-Bench-Verified | Tests an agent's ability to resolve real-world software engineering tasks; a human-verified subset of SWE-Bench. |