Lisa Su doesn’t do small announcements. When AMD’s CEO took the stage for the CES 2026 opening keynote, she came with a simple message that carried enormous weight: AI should be everywhere, for everyone. What followed was a comprehensive look at how AMD plans to make that happen, from warehouse-sized data centers all the way down to the laptop on your desk.
But this wasn’t just AMD talking to itself. The company brought some serious partners along for the ride. OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci, and Illumina all made appearances, each explaining how AMD hardware is powering their AI work. When you see that kind of lineup, you know something significant is happening.
We’re Going to Need a Bigger Scale
Here’s a number that might make your head spin: AMD predicts that global compute capacity will grow from today’s 100 zettaflops to over 10 yottaflops in the next five years. If you’re not familiar with these terms, don’t worry. A yottaflop is a thousand zettaflops, so the jump from 100 zettaflops to 10 yottaflops works out to a 100-fold expansion of computing power in five years.
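The arithmetic behind that claim is simple enough to check in a few lines. Here’s a quick back-of-envelope sketch using AMD’s own figures, with the assumption (ours, not AMD’s) that the growth is spread evenly across the five years:

```python
# Back-of-envelope check on AMD's compute projection.
# 1 zettaflop = 1e21 FLOP/s; 1 yottaflop = 1e24 FLOP/s.
today = 100e21   # 100 zettaflops, per AMD's figure
future = 10e24   # 10 yottaflops, per AMD's figure
years = 5

growth = future / today
annual = growth ** (1 / years)

print(f"Total growth: {growth:.0f}x")           # 100x
print(f"Implied annual growth: {annual:.2f}x")  # ~2.51x
```

Put differently, global compute would have to grow roughly 2.5x every single year for five straight years. That’s the scale of the bet.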
Why does this matter? Because the AI models everyone’s excited about need an absurd amount of compute to train and run. Today’s infrastructure simply won’t cut it for tomorrow’s AI ambitions. AMD is betting big that they can be the company providing the foundation for this next era.
Meet Helios: AMD’s Answer to AI Infrastructure
The centerpiece of AMD’s data center announcements is something called the Helios rack-scale platform. Think of it as AMD’s blueprint for building AI infrastructure at a scale we haven’t really seen before. A single Helios rack can deliver up to 3 AI exaflops of performance, which is the kind of muscle you need when you’re training models with a trillion parameters.
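To see why a 3-exaflop rack is the right order of magnitude, it helps to run a rough training estimate. The sketch below uses the common ~6 × parameters × tokens FLOP approximation from the scaling-law literature and a Chinchilla-style 20 tokens per parameter; these are our illustrative assumptions, not AMD’s numbers, and real runs never sustain theoretical peak:

```python
# Rough training-cost estimate for a trillion-parameter model,
# using the common ~6 * params * tokens FLOP approximation.
# All assumptions here are illustrative, not AMD's.
params = 1e12               # 1 trillion parameters
tokens = 20 * params        # Chinchilla-style 20 tokens per parameter
train_flops = 6 * params * tokens  # ~1.2e26 FLOPs

rack_flops = 3e18           # one Helios rack at 3 exaflops, peak
seconds = train_flops / rack_flops

print(f"Training FLOPs: {train_flops:.1e}")                # 1.2e+26
print(f"Days on one rack at peak: {seconds / 86400:.0f}")  # ~463
```

Well over a year on a single rack, even at theoretical peak, which is why frontier training runs span many racks and why per-rack exaflops is the number everyone quotes.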
What’s inside? The Helios platform combines AMD’s Instinct MI455X accelerators with EPYC “Venice” CPUs and Pensando “Vulcano” network interface cards. Everything runs on AMD’s ROCm software ecosystem, which the company keeps emphasizing is open and not locked to proprietary standards.
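For developers, the practical upside of that openness is portability. PyTorch’s ROCm builds expose AMD GPUs through the familiar torch.cuda API (via HIP), so device-agnostic code runs on either vendor’s hardware unchanged. The minimal sketch below is a generic illustration, not something AMD showed on stage:

```python
# Minimal device-agnostic PyTorch sketch. On ROCm builds of PyTorch,
# the torch.cuda API maps onto AMD GPUs via HIP, so this same code
# runs on AMD or NVIDIA hardware without modification.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # "cuda" covers ROCm GPUs too
model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    y = model(x)

print(y.shape, "computed on", device)
```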
This last point matters more than it might seem. As AI infrastructure costs continue to climb, companies are getting nervous about being locked into a single vendor’s ecosystem. AMD is clearly positioning itself as the more flexible alternative.
The Instinct MI400 Series Gets a New Member
AMD also introduced the Instinct MI440X GPU, and this one’s specifically aimed at enterprises that want to run AI on-premises. While the big cloud providers have been running AI accelerators for years, many companies are still trying to figure out how to bring AI into their existing data centers without ripping everything out and starting over.
The MI440X tries to solve this problem. It supports training, fine-tuning, and inference workloads in a compact eight-GPU configuration that should slot into existing infrastructure without too much drama. For companies that aren’t ready to go all-in on cloud AI but still want serious capabilities, this could be exactly what they’re looking for.
Meanwhile, the MI430X that AMD announced recently is already lined up for some impressive projects. It’ll power Discovery at Oak Ridge National Laboratory and Alice Recoque, which happens to be France’s first exascale supercomputer. Not bad company to keep.
Looking Ahead to 2027: The MI500 Series
AMD also gave us a peek at what’s coming in 2027 with the Instinct MI500 Series. The claim here is eye-popping: AMD says these GPUs are on track to deliver up to a 1,000x increase in AI performance compared to the MI300X from 2023.
Now, that number comes with some caveats. It’s based on peak theoretical performance from engineering projections, not real-world benchmarks. But even if the actual improvement is a fraction of that, we’re still talking about a massive leap forward. The MI500 Series will be built on AMD’s next-generation CDNA 6 architecture, use 2nm process technology, and feature HBM4E memory.
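To put 1,000x in perspective, here’s what it implies on an annualized basis between the MI300X’s 2023 debut and a 2027 launch:

```python
# What a 1,000x gain implies year over year (MI300X in 2023 -> MI500 in 2027).
years = 2027 - 2023
annual = 1000 ** (1 / years)
print(f"Implied annual gain: {annual:.2f}x per year")  # ~5.62x
```

Better than 5x per year is far beyond what process technology alone delivers, which is why the claim rests on CDNA 6, the 2nm process, and HBM4E memory all compounding at once.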
Whether AMD can actually deliver on these projections remains to be seen, but they’re clearly not planning to slow down in the AI accelerator race.
Your Next Laptop Might Be Smarter Than You Think
Data centers are exciting and all, but what about the rest of us? AMD had plenty to say about AI on personal devices too.
The new Ryzen AI 400 Series processors come with a 60 TOPS NPU, which is a significant bump in on-device AI processing power. These chips also support AMD’s ROCm platform, meaning developers can write code that scales smoothly between cloud servers and personal devices. The first systems should be hitting shelves this month, with more options coming throughout Q1 2026.
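AMD didn’t detail the programming model for the new NPU on stage, but previous Ryzen AI stacks exposed the NPU through ONNX Runtime’s Vitis AI execution provider. Assuming that pattern carries over to the 400 Series (our assumption, not AMD’s statement), running a model on the NPU looks roughly like this:

```python
# Hedged sketch: running an ONNX model on an AMD NPU via ONNX Runtime.
# Earlier Ryzen AI releases used the Vitis AI execution provider; whether
# the 400 Series keeps this path is an assumption. "model.onnx" is a
# hypothetical placeholder, and this requires AMD's Ryzen AI build of
# onnxruntime to be installed.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

name = session.get_inputs()[0].name
inputs = {name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
outputs = session.run(None, inputs)

print("Active providers:", session.get_providers())
```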
But the real attention-grabber is the Ryzen AI Max+ lineup. The Ryzen AI Max+ 392 and 388 processors can support AI models with up to 128 billion parameters using 128GB of unified memory. Let that sink in for a moment. We’re talking about running models locally that would have required server hardware not too long ago.
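The memory math makes clear why quantization is doing the heavy lifting here. This back-of-envelope is ours, not AMD’s:

```python
# Weights-only memory footprint of a 128B-parameter model at different
# precisions, against a 128 GB unified-memory budget. The KV cache,
# activations, and the OS all need headroom on top of this.
params = 128e9
GB = 1e9
budget_gb = 128

for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("4-bit", 0.5)]:
    weights_gb = params * bytes_per_param / GB
    print(f"{name}: {weights_gb:.0f} GB of weights vs {budget_gb} GB budget")
```

At FP16 the weights alone blow past the budget, INT8 eats every byte with nothing left for the KV cache, and only something like 4-bit quantization leaves comfortable headroom. That’s almost certainly how a 128-billion-parameter model actually fits.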
For content creators, developers working on AI applications, or anyone who needs serious local AI capabilities, this is a big deal. You get the performance without constantly relying on cloud connectivity, and you keep your data on your own machine.
A Platform for AI Developers
AMD is also thinking about developers specifically with the Ryzen AI Halo Developer Platform. It’s a small-form-factor desktop PC built around the Ryzen AI Max+ Series processors, designed to give AI developers a powerful local development environment without breaking the bank.
AMD claims it delivers “leadership tokens-per-second-per-dollar,” which is developer-speak for getting good AI performance without spending a fortune. The Halo platform should be available sometime in Q2 2026.
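The metric itself is straightforward: tokens generated per second, divided by what the system costs. The figures below are invented purely to show the shape of the comparison, since AMD hasn’t published its underlying numbers:

```python
# "Tokens-per-second-per-dollar" as a simple function. Both data points
# below are hypothetical, chosen only to illustrate the comparison.
def tps_per_dollar(tokens_per_second: float, system_price_usd: float) -> float:
    return tokens_per_second / system_price_usd

print(tps_per_dollar(50, 2500))  # 0.02 tok/s per dollar
print(tps_per_dollar(40, 4000))  # 0.01 tok/s per dollar
```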
AI at the Edge: Beyond PCs and Data Centers
One area that often gets overlooked in AI discussions is embedded systems. AMD addressed this with the new Ryzen AI Embedded processor family, specifically the P100 and X100 Series.
These chips are designed for AI applications that need to run at the edge, in places where you can’t just connect to a data center. Think automotive systems, healthcare devices, industrial equipment, and yes, robots. As AI moves from being something that happens in the cloud to something that happens in the physical world around us, this category of hardware becomes increasingly important.
Government Partnerships and the Genesis Mission
In an interesting segment of the keynote, Lisa Su was joined by Michael Kratsios, Director of the White House Office of Science and Technology Policy. They discussed AMD’s role in something called the Genesis Mission, a public-private initiative aimed at keeping the United States at the forefront of AI technology.
As part of this initiative, AMD is powering two AI supercomputers at Oak Ridge National Laboratory: Lux and Discovery. These projects represent significant investments in what’s often called “sovereign AI” – ensuring that critical AI capabilities exist within national borders and aren’t dependent on foreign infrastructure.
Investing in the Next Generation
AMD also announced a $150 million commitment to expanding AI education. The goal is to bring AI into more classrooms and communities, giving students hands-on experience with the technology that will likely define much of their careers.
The keynote wrapped up with a nod to the more than 15,000 students who participated in the AMD AI Robotics Hackathon through a partnership with Hack Club. It’s a reminder that while the hardware announcements grab headlines, the people who will actually use these tools matter just as much.
What Does All This Mean?
Stepping back from all the product names and specifications, AMD’s CES 2026 presentation tells us a few important things about where the AI hardware market is heading.
First, the scale of AI compute is about to grow dramatically. The jump from zettaflops to yottaflops isn’t just marketing speak – it reflects the genuine demands of next-generation AI models.
Second, the fight over open versus closed platforms is heating up. AMD keeps emphasizing its open approach, positioning itself as an alternative for companies worried about vendor lock-in. Whether this resonates with customers will be one of the more interesting stories to watch in the coming years.
Third, local AI is becoming genuinely practical. The ability to run 128-billion-parameter models on a laptop changes what’s possible for developers, creators, and anyone who cares about keeping their data private.
AMD walked into CES 2026 with something to prove, and they made a strong case for their vision of AI everywhere. Whether they can execute on all these ambitious plans is another question entirely, but there’s no doubt they’re swinging for the fences.