OpenAI released its latest model, GPT-5.5, on April 23, just a week after Anthropic introduced Claude Opus 4.7.
These are the flagship models from the two leading AI labs, so we wanted to see how they compare.
Spoiler alert: We think Claude Opus 4.7 has an edge on advanced and agentic coding, but GPT-5.5 performs better on most benchmarks.
Want to learn more about getting the best out of your tech? Sign up for Mashable's Top Stories and Deals newsletters today.
GPT-5.5 and Opus 4.7: Leaderboards
GPT-5.5 isn't yet ranked on all AI leaderboards, but it should be very competitive with Claude Opus 4.7. On leaderboards built on verified benchmark tests, such as the Arc Prize, GPT-5.5 beats Opus 4.7 (more on this below).
On the popular Arena leaderboard, which is based on user testing, Claude Opus 4.7 Thinking has the top overall spot. Interestingly, Opus 4.7 is currently ranked below Opus 4.6, though that will likely change in time. Currently, new Anthropic models hold the top four overall spots. What's more, Anthropic's unreleased Claude Mythos isn't ranked, and Anthropic says it performs even better than Opus 4.7.
On the Epoch Capabilities Index (ECI) leaderboard, GPT-5.4 Pro has the top score for now. (ECI combines several benchmarks into a single score.) You'll find Gemini 3.1 Pro and GPT-5.4 in the second and third positions.
GPT-5.5 and Opus 4.7: Benchmarks
How do the new models perform on the most common benchmark tests? We have to rely primarily on OpenAI's and Anthropic's self-reported scores for these tests. Both models achieve high marks, as you'd expect, but GPT-5.5 has the edge on most of them.
Here's how they compare on some top AI benchmark tests:
- SWE-Bench Pro: GPT-5.5 scored 58.6 percent; Opus 4.7 scored 64.3 percent
- Terminal-Bench 2.0: GPT-5.5 scored 82.7 percent; Opus 4.7 scored 69.4 percent
- Humanity's Last Exam: GPT-5.5 scored 40.6 percent; Opus 4.7 scored 31.2 percent*
- Humanity's Last Exam (with tools): GPT-5.5 scored 52.2 percent; Opus 4.7 scored 54.7 percent
- BrowseComp: GPT-5.5 scored 84.4 percent; Opus 4.7 scored 79.3 percent
- GPQA Diamond: GPT-5.5 scored 93.6 percent; Opus 4.7 scored 94.2 percent
- ARC-AGI-1 (Verified): GPT-5.5 (High) scored 94.5 percent; Opus 4.7 (High) scored 92 percent**
- ARC-AGI-2 (Verified): GPT-5.5 (High) scored 83.3 percent; Opus 4.7 (High) scored 68.3 percent**
*For Humanity's Last Exam, we're citing Artificial Analysis's verified HLE results. Notably, Anthropic reports that Opus 4.7 scored 46.9 percent on this test.
**See the full results at the Arc Prize website.
GPT-5.5 and Opus 4.7: Availability and pricing
OpenAI says GPT-5.5 is "our smartest and most intuitive to use model yet." Claude Opus 4.7 is Anthropic's most advanced model available to Claude users, though Anthropic says the unreleased Claude Mythos Preview is the more capable model overall.
As such, only paid subscribers can access these frontier models.
GPT-5.5 is available only to ChatGPT Plus, Pro, Business, and Enterprise users in ChatGPT and Codex (sorry, ChatGPT Go users). Pro, Business, and Enterprise users can also access GPT-5.5 Pro, while Plus, Pro, Business, and Enterprise customers can access GPT-5.5 Thinking.
OpenAI is raising prices for GPT-5.5 in its API, though the company says it's more token-efficient. API pricing starts at "$5 per 1M input tokens and $30 per 1M output tokens, with a 1M context window."
Opus 4.7 is available to Pro and Max customers; via the API, it's available for "$5 per million input tokens and $25 per million output tokens."
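For a rough sense of how those per-million-token rates translate into real costs, here's a minimal sketch. The rates come from the pricing quoted above; the token counts in the example are hypothetical, chosen only for illustration.

```python
# Rough per-request cost comparison using the published API rates
# ($ per 1M tokens). Token counts below are hypothetical.

PRICES = {
    "GPT-5.5": {"input": 5.00, "output": 30.00},
    "Opus 4.7": {"input": 5.00, "output": 25.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one API request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 50,000 input tokens and 5,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 5_000):.4f}")
# GPT-5.5 comes to $0.40, Opus 4.7 to $0.375 for this request.
```

Because the input rates match, the gap comes entirely from output tokens, so output-heavy workloads (long code generations, for example) will see the biggest difference.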
GPT-5.5 and Opus 4.7: Feature set
OpenAI says that GPT-5.5 makes noticeable improvements in "agentic coding, computer use, knowledge work, and early scientific research." Anthropic says Claude Opus 4.7 improves in advanced coding, visual intelligence, and document analysis.
ChatGPT and Claude have similar overall feature sets, though there are some exceptions. Broadly speaking, you can use both of these AI chatbots for research, coding, creative projects, and everyday professional work. You can also use both of the new models in OpenAI and Anthropic's coding platforms, Codex and Claude Code.
It's easier to talk about the differences than the similarities. GPT-5.5 itself is not an image model, but within ChatGPT you can use the new ChatGPT Images 2.0 model. Anthropic recently rolled out Claude Design, but it offers only data visualizations, graphics, and slides, not full image generation. So, if you need to generate images or interactive graphics for a project, GPT-5.5 will have more tools available to call.

ChatGPT has more app and shopping integrations, while Anthropic, thanks to its recent acquisition of OpenClaw, has the edge on agentic capabilities.
TL;DR: If we had to pick one of these models for everyday professional work, GPT-5.5 would have the edge thanks to ChatGPT's broader overall feature set. However, for advanced and agentic coding, we'd go with Claude Opus 4.7.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
from Mashable https://ift.tt/R2zH8NC
