Cisco Systems has unveiled its new networking chips designed for AI supercomputers, putting it in direct competition with Broadcom and Marvell Technology. The chips are currently being tested by several major cloud providers, although Cisco did not disclose their names.
Key players in the cloud computing market, such as Amazon Web Services, Microsoft Azure, and Google Cloud, are expected to be among the testers.
The demand for AI applications like ChatGPT, which rely on specialized chips called graphics processing units (GPUs), has made efficient communication between those chips crucial. Cisco, a leading supplier of networking equipment, including the Ethernet switches that connect devices to a local area network, has introduced its latest generation of Ethernet switch chips, the G200 and G202. The new chips offer double the performance of their predecessors and can connect up to 32,000 GPUs together.
According to Rakesh Chopra, a Cisco fellow and former principal engineer, the G200 and G202 will be the most powerful networking chips on the market. They are specifically optimized for AI and machine learning workloads, enabling a more power-efficient network.
Cisco claims that using the chips can reduce the number of switches required for AI and machine learning tasks by 40%, while also lowering latency and improving power efficiency.
Broadcom previously announced its Jericho3-AI chip, which also supports connecting up to 32,000 GPU chips together.