Together AI Opens Evaluations to OpenAI, Anthropic, Google Models

2026/02/03 04:01
2 min read

Lawrence Jengar Feb 02, 2026 20:01

Together Evaluations now benchmarks proprietary AI models from OpenAI, Anthropic, and Google against open-source alternatives, claiming 10x cost savings.

Together AI has expanded its Evaluations platform to support direct benchmarking against proprietary models from OpenAI, Anthropic, and Google—a move that could reshape how enterprises make AI infrastructure decisions.

The update, announced February 3, enables side-by-side comparisons between open-source models and closed-source alternatives including GPT-5, Claude Sonnet 4.5, and Gemini 2.5 Pro. For AI-focused crypto projects and decentralized compute networks, this creates a standardized framework for proving cost-efficiency claims.

What's Actually New

Together Evaluations now accepts models from three major providers as both evaluation targets and judges:

OpenAI: GPT-5, GPT-5.2
Anthropic: Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.5
Google: Gemini 2.5 Pro, Gemini 2.5 Flash

The platform also supports any OpenAI Chat Completions-compatible URL, meaning self-hosted and decentralized inference endpoints can plug directly into the benchmarking system.
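Because the platform accepts any OpenAI Chat Completions-compatible URL, a self-hosted or decentralized endpoint only needs to answer the standard `/chat/completions` request shape. The sketch below builds such a request body; the endpoint URL and model name are hypothetical placeholders, not real services.

```python
import json

def chat_completions_request(base_url: str, model: str, prompt: str) -> dict:
    """Build the URL and JSON body for an OpenAI Chat Completions-style call.

    Any endpoint that accepts this shape can, per the article, plug into
    Together Evaluations as a benchmarking target.
    """
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Hypothetical self-hosted inference node and model id:
req = chat_completions_request(
    "https://my-node.example.com/v1",
    "my-org/my-finetuned-model",
    "Summarize the tradeoffs of batch inference.",
)
print(json.dumps(req, indent=2))
```

The same body could then be POSTed with any HTTP client; only the base URL changes between a hosted provider and a self-run node.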

The Cost Argument Gets Data

Together AI published accompanying research showing fine-tuned open-source judges (GPT-OSS 120B, Qwen3 235B) outperforming GPT-5.2 as evaluators, scoring 62.63% accuracy versus 61.62%, while reportedly running at 10x lower cost and 15x higher speed.

That's a specific, testable claim. For decentralized AI networks competing on inference pricing, having a neutral benchmarking platform that accepts custom endpoints could prove valuable for customer acquisition.

The company, founded in 2020 and known for research innovations like FlashAttention-3, has positioned itself as infrastructure-agnostic. Its platform already offers access to over 200 open-source models with claimed 4x faster inference and 11x lower cost compared to GPT-4o, according to December 2024 benchmarks.

Why This Matters for Crypto AI

Several blockchain-based AI projects—from decentralized GPU marketplaces to inference networks—have struggled to prove their cost advantages aren't just marketing. A third-party evaluation framework that accepts any compatible endpoint changes that dynamic.

The Evaluations API runs on Together's Batch API at roughly 50% lower cost than real-time inference, making large-scale model comparisons economically viable for smaller teams.
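A back-of-envelope calculation shows why a roughly 50% batch discount matters at evaluation scale. The per-token price and run sizes below are made-up placeholders for illustration, not Together AI's actual rates.

```python
# Hypothetical figures: only the ~50% batch discount comes from the article.
realtime_cost_per_1k_tokens = 0.002   # placeholder $/1K tokens
batch_discount = 0.5                  # "roughly 50% lower cost"
tokens_per_comparison = 4_000         # prompt + two responses + judge verdict
num_comparisons = 100_000             # one large-scale evaluation run

realtime_total = (
    realtime_cost_per_1k_tokens * tokens_per_comparison / 1_000 * num_comparisons
)
batch_total = realtime_total * (1 - batch_discount)
print(f"real-time: ${realtime_total:,.0f}, batch: ${batch_total:,.0f}")
# → real-time: $800, batch: $400
```

Halving the marginal cost of each comparison is what makes repeated, large-scale benchmarking plausible for small teams rather than a one-off expense.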

Together AI remains a private company with no associated token. But its tooling increasingly touches the infrastructure layer where crypto AI projects compete—and now those projects have a standardized way to benchmark against the incumbents they're trying to displace.

Image source: Shutterstock