Ampere announces monstrous 256-core 3nm CPU, teams up with Qualcomm for AI

Ampere announces a 256-core 3nm CPU and partners with Qualcomm on AI, promising stronger performance for cloud-native and AI workloads.

Ampere has announced a 256-core 3nm CPU, the next-generation AmpereOne, aimed at cloud-native and AI workloads. The chip promises a 40% performance boost and will be paired with Qualcomm's Cloud AI 100 accelerators under a new partnership intended to improve AI inference and cloud computing capabilities.

Ampere Computing has revealed plans for a 256-core CPU, the next-generation AmpereOne, intended for cloud-native workloads, AI inferencing, databases, web servers, and media delivery. Scheduled for release next year, the high-performance, power-efficient processor is expected to be built on TSMC's 3nm manufacturing process. The 256-core design embodies Ampere's pitch for consolidating aging data center hardware onto fewer, denser servers.

Despite the jump in core count, the new chip will retain the cooling solution used by Ampere's current offerings, which points to an estimated TDP of around 350 watts. Ampere also says the processor will improve memory management, caching, and AI compute through newly engineered features.

The company is already shipping its 192-core AmpereOne processors and will shortly introduce an updated version with a 12-channel DDR5 memory subsystem, setting the stage for the transition to the 256-core part in 2025. Ampere claims its current CPUs beat AMD's Genoa and Bergamo processors in performance per watt by 50% and 15%, respectively, an efficiency edge it says can lift performance per rack by up to 34% for data centers looking to streamline operations.
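
For a rough sense of what a 12-channel DDR5 memory subsystem implies, the back-of-envelope calculation below estimates theoretical peak bandwidth. The DDR5-5600 speed grade is an assumption for illustration only, not a figure from Ampere's announcement.

```python
# Back-of-envelope peak bandwidth for a 12-channel DDR5 memory subsystem.
# The DDR5-5600 speed grade is assumed here purely for illustration;
# Ampere has not specified the exact memory speed in this announcement.
channels = 12
transfer_rate_mts = 5600      # assumed DDR5-5600, megatransfers per second
bytes_per_transfer = 8        # each 64-bit channel moves 8 bytes per transfer

peak_gb_per_s = channels * transfer_rate_mts * bytes_per_transfer / 1000
print(f"Theoretical peak bandwidth: {peak_gb_per_s:.1f} GB/s")  # ~537.6 GB/s
```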

Intriguingly, Ampere has also partnered with Qualcomm to use the company's Cloud AI 100 accelerators, a collaboration aimed at the computational demands of large language models and other generative AI applications. Meta's Llama 3 has already shown promising results running on Ampere CPUs at Oracle Cloud, delivering performance comparable to NVIDIA's A10 GPU while consuming less power. Pairing Ampere's CPUs with Qualcomm's accelerators underscores a concerted push to improve AI inferencing and cloud computing efficiency.
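
For readers curious what running Llama 3 on a CPU looks like in practice, here is a minimal, hypothetical sketch of CPU-only inference using Hugging Face Transformers. The model ID, dtype, and prompt are assumptions for illustration; the actual Oracle Cloud deployment almost certainly relies on a tuned serving stack rather than a simple script like this.

```python
# Minimal sketch of CPU-only LLM inference with Hugging Face Transformers.
# The model ID, dtype, and prompt are illustrative assumptions; this is not
# Ampere's or Oracle's actual serving stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the memory footprint manageable on server CPUs
)

prompt = "Summarize the benefits of cloud-native CPUs in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no GPU involved; inference runs entirely on the CPU
    output = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```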

To make its processors more flexible, Ampere has also initiated a UCIe (Universal Chiplet Interconnect Express) working group under the AI Platform Alliance. The goal is to let customers attach their own intellectual property to Ampere's CPUs through the open UCIe chiplet interface. Taken together, these moves position Ampere as a serious contender in the data center CPU race, with a strategy that spans both cloud-native computing and AI workloads.