Feb 10 (GeokHub) Cisco has introduced a new artificial intelligence–focused networking chip designed to speed up large-scale AI computing and prevent network slowdowns, as competition intensifies in a fast-growing segment dominated by Nvidia and Broadcom.
The chip is manufactured using advanced 3-nanometer process technology and includes built-in mechanisms intended to absorb sudden surges in data traffic — a common problem in massive AI systems that rely on tens of thousands of connected processors.
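Cisco has not published how these surge-absorbing mechanisms work internally. As a rough illustration of the general idea only, the toy simulation below models a switch buffer that soaks up a momentary burst while draining at a fixed line rate; packets that overflow the buffer are dropped. All names and numbers here are hypothetical, not taken from Cisco's design.

```python
from collections import deque

def simulate_buffer(arrivals, capacity, drain_rate):
    """Toy model of a switch buffer absorbing a traffic burst.

    arrivals:   packets arriving at each time step
    capacity:   maximum packets the buffer can hold
    drain_rate: packets forwarded per time step
    Returns (forwarded, dropped).
    """
    buffer = deque()
    forwarded = dropped = 0
    for n in arrivals:
        # Enqueue the burst; anything beyond capacity overflows and is dropped.
        for _ in range(n):
            if len(buffer) < capacity:
                buffer.append(1)
            else:
                dropped += 1
        # Drain at line rate.
        for _ in range(min(drain_rate, len(buffer))):
            buffer.popleft()
            forwarded += 1
    # Flush whatever remains once the burst subsides.
    while buffer:
        buffer.popleft()
        forwarded += 1
    return forwarded, dropped

# A 10-packet burst into an 8-packet buffer draining 4 per step:
# 8 packets survive, 2 are dropped. A deeper 12-packet buffer loses none.
```

The point of deeper or smarter buffering is visible even in this toy: the same burst that causes drops at one buffer depth passes through cleanly at a larger one.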
Designed for High-Volume AI Workloads
Cisco executives said the new chip can improve the speed of certain AI computing tasks by as much as 28%, largely by automatically rerouting data around congestion or failures within microseconds.
The company said the design focuses on overall network efficiency rather than raw processing power, allowing AI workloads to continue running smoothly even during traffic spikes.
“These environments have hundreds of thousands of connections, and disruptions happen regularly,” a senior Cisco executive said. “The goal is to keep the entire system running efficiently from end to end.”
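The rerouting logic Cisco describes is proprietary, but the general principle of congestion- and failure-aware path selection can be sketched in a few lines: among the healthy paths to a destination, steer traffic onto the least-loaded one, and fall back to the next-best path when a link fails. Everything below (path names, load figures) is hypothetical and for illustration only.

```python
def pick_path(paths):
    """Toy congestion-aware path selection.

    paths: dict mapping path name -> {"up": bool, "load": float in [0, 1]}
    Returns the healthy path with the lowest current load,
    or None if every path is down.
    """
    healthy = {name: p for name, p in paths.items() if p["up"]}
    if not healthy:
        return None
    return min(healthy, key=lambda name: healthy[name]["load"])

# Hypothetical topology: one congested spine, one lightly loaded, one failed.
paths = {
    "spine-1": {"up": True,  "load": 0.90},  # heavily congested
    "spine-2": {"up": True,  "load": 0.30},  # preferred
    "spine-3": {"up": False, "load": 0.10},  # failed link, skipped
}
```

In this sketch, traffic lands on `spine-2`; if that link then fails, the next call immediately shifts traffic to `spine-1` despite its congestion, since it is the only healthy option left. The real hardware makes this kind of decision per-packet in silicon, which is what lets it react within microseconds rather than waiting for software-level routing to converge.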
Networking Becomes a Key AI Battleground
As AI systems grow larger and more complex, networking hardware has become a critical performance bottleneck — turning it into a major competitive front in the AI hardware race.
Nvidia recently highlighted networking as a core component of its latest AI systems, while Broadcom has been pushing aggressively into the same space with its high-speed networking chips.
Cisco’s new offering signals its intent to defend and expand its position as AI infrastructure spending accelerates, with data centers increasingly prioritizing fast, resilient connectivity alongside computing power.