Cisco Unveils Groundbreaking Routing System to Fuel AI Growth

Cisco has launched the 8223 routing system, a solution designed to connect data centers for distributed artificial intelligence (AI) workloads. Powered by Cisco’s new Silicon One P200 chip, the system delivers 51.2 terabits per second (Tbps) of Ethernet routing capacity, placing it among the highest-capacity fixed routers built for AI networking.

The 8223 addresses the growing demand for inter-data center connections as AI workloads increase, stretching power and space limits in hyperscale data centers. The system supports “scale-across” architectures, enabling multiple data centers to work together efficiently. This innovative approach is crucial in addressing the challenges posed by AI computing, which requires enormous computational power that often exceeds what any single facility can provide.

According to Martin Lund, EVP of Cisco’s Common Hardware Group: “AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart. With the Cisco 8223, powered by the new Cisco Silicon One P200, we’re delivering the massive bandwidth, scale, and security needed for distributed data center architectures.”

The 8223 system offers 64 ports of 800G, processing more than 20 billion packets per second and scaling to over 3 exabits per second. It includes deep buffering for traffic surges, 800G coherent optics for data center interconnects up to 1,000 km, and line-rate encryption with post-quantum resilient algorithms. These features make the system an attractive option for AI-driven organizations looking to expand their network capabilities.
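As a quick sanity check on how those figures fit together, the short calculation below uses only the numbers quoted above; it is illustrative arithmetic, not additional vendor data. It confirms that 64 ports of 800G account for the full 51.2 Tbps, and shows what average packet size the quoted packet rate corresponds to at line rate.

```python
# Back-of-the-envelope check of the 8223's quoted figures (illustrative only;
# every input is a number cited in this article, not extra vendor data).

ports = 64
port_speed_gbps = 800                    # 800G per port

total_tbps = ports * port_speed_gbps / 1_000
print(total_tbps)                        # 51.2 -> matches the 51.2 Tbps capacity

# At 51.2 Tb/s and "more than 20 billion packets per second", the implied
# average packet size at full line rate (ignoring framing overhead) is:
bits_per_packet = 51.2e12 / 20e9         # 2,560 bits per packet
print(bits_per_packet / 8)               # 320.0 -> about 320 bytes per packet
```

In other words, a packet rate above 20 billion per second is roughly what it takes to keep 51.2 Tbps of capacity full at average packet sizes near 320 bytes.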

The system is initially available for open-source SONiC deployments, with support for IOS XR planned. The P200 chip can also be used in modular and disaggregated chassis, allowing consistent architecture across network sizes. This flexibility is crucial in addressing the diverse needs of various data center environments.
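To make “SONiC deployment” concrete, here is a minimal, hypothetical sketch of what a single 800G port entry in SONiC’s config_db could look like. The PORT table and its speed/lanes/admin_status fields are standard SONiC configuration, but the interface name, lane mapping, and anything 8223-specific below are invented placeholders for illustration.

```python
import json

# Hypothetical SONiC config_db fragment for one 800G port. SONiC expresses
# port speed as an Mb/s string, so 800G is "800000". The interface name and
# serdes lane numbers are placeholders, not real 8223 platform data.
port_config = {
    "PORT": {
        "Ethernet0": {
            "alias": "etp1",
            "lanes": "1,2,3,4,5,6,7,8",  # assumed 8 x 100G serdes lanes
            "speed": "800000",           # 800 Gb/s
            "admin_status": "up",
        }
    }
}

print(json.dumps(port_config, indent=2))
```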

Industry partners have highlighted the potential impact of the system. Dave Maltz, technical fellow at Microsoft, said, “The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts. We’re pleased to see the P200 providing innovation and more options in this space.” Dennis Cai, vice president of Alibaba Cloud, added, “This new routing chip will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices.”

Patrick Moorhead, CEO of Moor Insights & Strategy, said, “Cisco’s 8223, powered by Silicon One P200, marks a significant step forward, delivering the industry’s first 51.2-terabit fixed Ethernet router purpose-built for secure, power-efficient scale-across networking.” Cisco stated that initial shipments of the 8223 are going to hyperscale customers.

When AI data centers run out of space, they face a costly dilemma: build bigger facilities or find ways to make multiple locations work together seamlessly. As Maltz noted, the growing scale of cloud and AI demands faster networks with more buffering to absorb traffic bursts. That demand is driving innovation in networking infrastructure, including the scale-across architectures that let multiple data centers operate as a single system.

As traditional AI data centers face constraints in power capacity, physical space, and cooling capabilities, companies are looking for scalable, cost-effective, and widely available networking solutions. The move towards an open Ethernet ecosystem addresses supply chain bottlenecks and availability challenges that have plagued AI infrastructure in recent years.

A fundamental shift is now underway, moving AI infrastructure away from proprietary, closed networking solutions toward an open Ethernet ecosystem. Unlike InfiniBand, which is controlled by a single vendor, Ethernet offers an open, multi-vendor ecosystem that eases supply chain constraints and fosters innovation.

The industry’s transition to Ethernet-based networking is not just about solving supply constraints; it is also about enabling a more competitive, innovative, and resilient AI infrastructure landscape. As the open Ethernet ecosystem expands, AI cloud builders and enterprises can expect greater availability, lower costs, and improved performance, making Ethernet a compelling choice for AI backend networking.

Ongoing industry efforts are defining AI-specific Ethernet standards to make Ethernet AI-ready. Key innovations include NIC-based scheduling and fabric-based scheduling, which aim to deliver predictable latency and lossless data transfer. These advancements are bridging the gap between traditional Ethernet infrastructure and the high-performance demands of AI workloads.
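For intuition about what “lossless” means here, the toy sketch below shows credit-based flow control, one classic mechanism for avoiding drops. It is a conceptual illustration only, not the actual NIC- or fabric-based scheduling defined by these emerging standards, and every name in it is invented for the example.

```python
from collections import deque

class CreditLink:
    """Toy lossless link: the sender may transmit only while it holds
    credits, each representing a free buffer slot at the receiver, so the
    receiver's buffer can never overflow and nothing is ever dropped."""

    def __init__(self, receiver_buffer_slots: int):
        self.credits = receiver_buffer_slots  # credits = free receiver slots
        self.rx_queue = deque()

    def try_send(self, packet) -> bool:
        if self.credits == 0:
            return False                      # back-pressure: hold at sender
        self.credits -= 1
        self.rx_queue.append(packet)
        return True

    def receive(self):
        if not self.rx_queue:
            return None
        self.credits += 1                     # consuming a packet frees a slot
        return self.rx_queue.popleft()

link = CreditLink(receiver_buffer_slots=2)
assert link.try_send("p1") and link.try_send("p2")
assert not link.try_send("p3")   # burst becomes back-pressure, not packet loss
link.receive()                   # receiver drains one packet, returning a credit
assert link.try_send("p3")       # transmission resumes without any drops
```

The key design point is that drops are prevented structurally: the sender cannot overrun the receiver’s buffer, so traffic bursts translate into back-pressure rather than packet loss, which is the behavior tightly synchronized AI traffic patterns depend on.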

The case for open Ethernet is clear: it offers a scalable, cost-effective, and widely available networking solution that can meet the growing needs of AI infrastructure. With an open ecosystem, multiple vendors can supply Ethernet-based networking equipment, reducing dependency on any single supplier and making AI infrastructure more flexible and resilient.

As the industry moves toward an open Ethernet ecosystem, companies will benefit from greater availability, lower costs, and improved performance. The future of AI networking looks bright, with the potential for faster, more efficient, and more secure data transfer.
