This changelog covers the major features, improvements, and milestones shipped to the Router One platform. We update this page as new releases go live. For real-time status and incident reports, visit our status page.
April 2026
Google and GitHub OAuth Login
You can now sign in to Router One using your Google or GitHub account. This complements the existing email-based authentication and makes onboarding faster for teams that already use these identity providers. OAuth sessions support the same RBAC permissions and organization-level access controls as standard accounts.
Google Search Console Integration
We have integrated Google Search Console verification and sitemap management into the platform. This is part of our ongoing effort to improve the discoverability of Router One's documentation and public-facing pages. For users building on our platform, this also means better SEO tooling for any documentation or landing pages served through Router One.
Improved Dashboard Performance
The main dashboard now loads 40 percent faster thanks to server-side rendering optimizations and lazy loading of chart components. Usage graphs and cost breakdowns render incrementally, so you see critical numbers immediately without waiting for the full page to hydrate.
March 2026
Model Marketplace with Detailed Pricing
The new model marketplace gives you a clear view of every LLM available through Router One, with real-time pricing per input token and output token, capability tags, and context window sizes. Compare models side by side and make informed routing decisions without leaving the dashboard.
The marketplace currently lists models from OpenAI, Anthropic, Google, Mistral, DeepSeek, and Meta, with more providers being added on a rolling basis.
Full i18n Support: English and Chinese
The entire Router One dashboard and documentation are now available in both English and Chinese. Language selection is automatic based on browser locale and can be overridden in account settings. All UI text, error messages, help content, and API documentation are fully translated — these are not machine translations, but human translations reviewed by native speakers for accuracy and clarity.
Real-Time Usage Dashboard
The usage dashboard now updates in real time with no manual refresh required. Watch token consumption, cost accrual, and request volume as they happen. The dashboard includes:
- Per-model breakdown — see exactly how much each model is costing you
- Per-project breakdown — allocate costs to specific projects or teams
- Per-key breakdown — track usage by individual API key
- Time-series charts — visualize trends over hours, days, or weeks
- Anomaly indicators — automatic highlighting when usage deviates significantly from historical patterns
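One common way to implement anomaly indicators like the ones above is a z-score check against historical samples. The sketch below is illustrative only and assumes nothing about Router One's actual detection logic; the threshold and data are made up.

```python
import statistics

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than z_threshold standard deviations
    from the mean of historical samples. Illustrative sketch only."""
    if len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # flat history: any change is a deviation
    return abs(current - mean) / stdev > z_threshold

hourly_tokens = [10_200, 9_800, 10_500, 10_100, 9_900]
print(is_anomalous(hourly_tokens, 10_300))  # within the normal range
print(is_anomalous(hourly_tokens, 55_000))  # a large spike
```

Production systems typically layer seasonality handling (time-of-day, day-of-week) on top of a baseline like this, since raw usage is rarely stationary.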
Landing Page Redesign
The public-facing website received a complete redesign with improved messaging, faster load times, and a clearer explanation of Router One's value proposition across the L3 (Invocation), L5 (Orchestration), and L7 (Observability) layers.
February 2026
Smart Routing with EWMA Scoring
The core routing engine is now live. Every request to the unified API is evaluated against real-time EWMA (Exponentially Weighted Moving Average) latency scores, per-token cost data, and quality baselines for all available models.
Routing weights are fully configurable per project and per API key. Set your priorities — latency, cost, quality — and the router weighs them when scoring candidate models for every request.
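To make the mechanism concrete, here is a minimal sketch of EWMA latency smoothing combined with a weighted score. The smoothing factor, weights, cost figures, and model names are all hypothetical, and the scoring formula is an assumption about how such a blend could work, not Router One's actual implementation.

```python
ALPHA = 0.2  # smoothing factor: higher reacts faster to recent latency samples

def update_ewma(previous: float, sample_ms: float, alpha: float = ALPHA) -> float:
    """Fold a new latency sample into the running EWMA."""
    return alpha * sample_ms + (1 - alpha) * previous

def score(model: dict, w_latency: float, w_cost: float) -> float:
    """Lower is better: a weighted blend of smoothed latency and token cost."""
    return w_latency * model["ewma_ms"] + w_cost * model["usd_per_1k_tokens"] * 1000

models = [
    {"name": "model-a", "ewma_ms": 420.0, "usd_per_1k_tokens": 0.010},
    {"name": "model-b", "ewma_ms": 650.0, "usd_per_1k_tokens": 0.002},
]

# With a latency-heavy weighting, the faster (more expensive) model wins.
best = min(models, key=lambda m: score(m, w_latency=1.0, w_cost=0.1))
print(best["name"])
```

Flipping the weights toward cost would select the cheaper model instead, which is exactly the trade-off the per-project weight configuration exposes.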
Automatic Failover
When a provider experiences degradation or an outage, Router One now automatically reroutes traffic to healthy alternatives within milliseconds. Recovery detection is also automatic: once a provider stabilizes, it is gradually reintroduced to the routing pool.
This feature requires zero configuration. It is always on, and the full failover trace is visible in the observability dashboard.
WeChat and Alipay Payment Integration
Chinese users can now add funds to their Router One account using WeChat Pay and Alipay, in addition to international credit cards. Payments are processed in real time with immediate balance updates. This removes a major friction point for teams operating primarily in China.
Budget Controls and QPS Limits
Set spending limits and requests-per-second caps at the organization, project, and API key level. Soft alerts notify you at configurable thresholds (e.g., 80 percent of budget consumed), and hard limits block or downgrade requests when the ceiling is hit. All enforcement happens in real time, not retroactively.
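A requests-per-second cap is commonly enforced with a token bucket, and a budget threshold with a simple percentage check. The sketch below shows both patterns under assumed parameters; Router One's actual enforcement is server-side and its internals are not documented here.

```python
import time

class TokenBucket:
    """QPS cap as a token bucket: tokens refill at `rate` per second,
    up to `burst`; each allowed request consumes one token."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the cap: block or downgrade the request

def budget_status(spent: float, limit: float, alert_at: float = 0.8) -> str:
    """Soft alert at a configurable threshold, hard stop at the ceiling."""
    if spent >= limit:
        return "blocked"
    if spent >= alert_at * limit:
        return "alert"
    return "ok"

bucket = TokenBucket(rate=5.0, burst=5.0)
allowed = [bucket.allow() for _ in range(10)]
print(sum(allowed))  # back-to-back calls are limited to roughly the burst size
```

The "block or downgrade" branch is where a router could substitute a cheaper model instead of rejecting the request outright.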
January 2026
Platform Launch
Router One is live. The initial release includes the foundational infrastructure that everything else is built on:
Unified LLM API Endpoint. A single POST /llm.invoke endpoint that accepts requests in a standardized format and routes them to the appropriate provider. Supports OpenAI, Anthropic, Google, Mistral, and DeepSeek at launch, with a provider-agnostic request and response schema.
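A call to the unified endpoint might look like the following. The changelog confirms a single POST /llm.invoke endpoint with a provider-agnostic schema, but the base URL, field names (`model`, `messages`, `max_tokens`), and model identifier below are all assumptions for illustration, not documented values.

```python
import json
import urllib.request

# Hypothetical payload shape: field names are assumed, not documented here.
payload = {
    "model": "anthropic/claude-sonnet",  # provider-qualified model id (assumed)
    "messages": [{"role": "user", "content": "Summarize our Q1 usage."}],
    "max_tokens": 256,
}

req = urllib.request.Request(
    "https://api.router.one/llm.invoke",  # base URL assumed for illustration
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_ROUTER_ONE_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment with a real API key
```

Because the schema is provider-agnostic, switching providers should only mean changing the `model` value, not the request structure.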
Claude Code and Codex Support. Router One works as a drop-in API base URL for both Anthropic's Claude Code CLI and OpenAI's Codex CLI. Configure the base URL, provide your Router One API key, and all requests flow through the platform with full tracking and budget enforcement.
Organization and Project Structure. Multi-tenant architecture with organization-level isolation, project-level grouping, and API key-level granularity. Invite team members, assign roles, and manage access from the dashboard.
Observability Foundation. Every request is logged with complete metadata: model used, tokens consumed (input and output), cost incurred, latency measured, and the originating project and API key. This data powers the usage dashboard, cost reports, and will serve as the foundation for the trace and metrics features shipping in subsequent releases.
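The per-request metadata listed above maps naturally onto a flat record. The dataclass below mirrors those fields as a sketch; the field names are illustrative and the platform's internal schema may differ.

```python
from dataclasses import dataclass, asdict

@dataclass
class RequestRecord:
    """One logged request, mirroring the metadata fields listed above.
    Field names are illustrative, not the platform's actual schema."""
    model: str
    input_tokens: int
    output_tokens: int
    cost_usd: float
    latency_ms: float
    project: str
    api_key_id: str

record = RequestRecord(
    model="anthropic/claude-sonnet",
    input_tokens=812,
    output_tokens=304,
    cost_usd=0.0071,
    latency_ms=640.0,
    project="checkout-bot",
    api_key_id="key_demo",
)
print(asdict(record))
```

Keeping each record flat like this makes the downstream aggregations (per-model, per-project, per-key breakdowns) simple group-bys.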
Prepaid Credit System. Add funds to your account and consume them as you use LLM APIs. No surprise invoices, no billing cycles — you see your balance decrease in real time with every request. Top up when you need to, and set alerts to notify you when your balance runs low.
What Is Coming Next
We are actively working on several major features for Q2 2026:
- L5 Orchestration Layer — Run/Step state machine for managing multi-step AI agent workflows with pause, resume, and cancel support
- L4 Tool Execution Layer — Register HTTP tools that your agents can call through a unified tool schema
- Advanced Observability — Per-run traces that show every LLM call and tool invocation in a waterfall view, with aggregated metrics for QPS, latency percentiles, cost per run, and success rates
- Team Collaboration — Shared dashboards, team-level budgets, and approval workflows for high-cost operations
Follow this changelog for updates as these features ship. Have a feature request? Reach out to us at support@router.one or open an issue on our GitHub.