Lisa Su showcases an AMD Instinct MI300 chip during her keynote speech at CES 2023 held at The Venetian Las Vegas on January 04, 2023 in Las Vegas, Nevada. (David Becker | Getty Images)
- AMD announced on Tuesday that its most advanced GPU for artificial intelligence, the MI300X, will begin shipping to select customers later this year.
- AMD’s move presents a significant challenge to Nvidia, the current market leader in AI chips. Analysts estimate that Nvidia currently holds over 80% market share in this space.
By offering competitive AI accelerators, AMD has the opportunity to capture a substantial new market and diversify beyond its traditional computer processors. AMD CEO Lisa Su emphasized the company’s focus on AI as its largest long-term growth opportunity, projecting the data center AI accelerator market to reach over $150 billion by 2027.
This development could also exert downward pressure on GPU prices, potentially making generative AI applications more affordable to run. The MI300X chip, built on AMD’s CDNA 3 architecture, is designed to support large language models and other advanced AI applications. With up to 192GB of memory, the MI300X can accommodate larger AI models than competing chips.
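As a rough illustration of what that memory figure means (a back-of-the-envelope sketch, not an AMD benchmark): a model’s weights alone require roughly parameter count × bytes per parameter, so a 70-billion-parameter model stored in 16-bit precision needs about 140GB, which fits within a single 192GB accelerator. The snippet below makes that arithmetic explicit; the model sizes chosen are illustrative assumptions.

```python
# Illustrative estimate only: counts weight storage at 16-bit precision and
# ignores activations, KV cache, and framework overhead.
def weights_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold a model's weights, in GB."""
    return num_params * bytes_per_param / 1e9

if __name__ == "__main__":
    hbm_capacity_gb = 192  # MI300X on-package memory cited in the article
    for params in (7e9, 70e9, 130e9):
        need = weights_memory_gb(params)
        verdict = "fits" if need <= hbm_capacity_gb else "does not fit"
        print(f"{params / 1e9:.0f}B params -> ~{need:.0f} GB of weights "
              f"({verdict} in {hbm_capacity_gb} GB)")
```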
AMD’s Infinity Architecture, which combines eight MI300X accelerators in one system, provides further scalability for AI applications. While Nvidia’s CUDA software has traditionally attracted AI developers, AMD has developed its own software ecosystem called ROCm, offering compatibility with open models, libraries, frameworks, and tools.
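To give a sense of what that compatibility looks like in practice, here is a minimal sketch assuming PyTorch’s ROCm build, which exposes AMD GPUs through the same device API that CUDA builds use, so existing model code typically runs without changes. The layer sizes here are placeholders for illustration.

```python
import torch

# Minimal portability check: on PyTorch's ROCm build, AMD GPUs are addressed
# through the familiar "cuda" device name, so the same script runs on either
# vendor's hardware (or falls back to CPU if no supported GPU is present).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4096, 4096).to(device)   # stand-in for a real model
x = torch.randn(8, 4096, device=device)
y = model(x)
print(y.shape, "computed on", device)
```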
Overall, AMD’s advancements in AI chips represent a significant milestone in the industry. The company’s commitment to innovation and its focus on addressing the needs of AI developers and server makers signal a strong competitive stance.
As the demand for AI continues to grow, AMD’s offerings have the potential to shape the future of AI technology and reshape the market dynamics.