[Hardware] AMD Claims First x86 Processor with Dedicated AI Hardware


Douma


Aiming to enable efficient AI computing in the mobile space, AMD at CES rolled out its Ryzen AI engine, along with processors that embed it.

[Photo: AMD Chair and CEO Dr. Lisa Su presenting Ryzen AI at CES]

We’re back again with more CES 2023 news. Amidst a slew of processor rollouts from AMD at the show last evening, the company unveiled what it claims is the first x86 processor with dedicated artificial intelligence hardware.

In this article, we’ll examine the rationale behind dedicated AI hardware on a processor, delve into the details of AMD’s Ryzen AI engine, and share insights from the press briefing with AMD’s Don Waligroski, senior product manager, and Nick Ni, senior director for data center AI and compute markets.

Last night’s CES 2023 show opened with a keynote by AMD Chair and CEO Dr. Lisa Su, who declared that “AI is the defining megatrend in technology [today].” Following through on that idea, a number of the company’s key announcements last night revolved around AI.

In a briefing to the press, AMD’s Nick Ni introduced a new architecture from the company called XDNA. XDNA is AMD’s new adaptive AI architecture, and Ni says the company is integrating it across its product roadmap. Explaining XDNA at a high level, Ni says it’s really optimized for AI inference.

“Neural networks function like neurons where data flows from layer to layer,” says Ni. In the simple example he presented, there are multiple layers of neurons where each “dot” performs some compute function, such as a matrix multiply or convolution, and then passes the data to the next neuron to be processed.

“There's quite a bit of sequential dependency from layer to layer,” says Ni. “To follow this, our AMD AI engine architecture works as an adaptive dataflow architecture where you can actually use a large array of compute, but then you can pass the data efficiently from array to array. This can be done without going out to external memory or even to some cache, which often consumes more power and results in a longer latency.”
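To make the layer-to-layer picture concrete, here is a minimal NumPy sketch of our own (illustrative only, not AMD’s code or the XDNA programming model): each “layer” does one compute step, a matrix multiply followed by an activation, and its output is the sole input of the next layer, which is exactly the sequential dependency Ni describes.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def run_layers(x, weights):
    # Each layer performs one compute step (a matrix multiply plus ReLU
    # here) and hands its output straight to the next layer -- the
    # sequential layer-to-layer dependency Ni describes. On a dataflow
    # engine, these intermediate activations would pass directly from
    # compute tile to compute tile instead of round-tripping through
    # external memory or cache.
    for w in weights:
        x = relu(x @ w)
    return x

# Hypothetical three-layer network, 8 -> 16 -> 16 -> 4 (shapes are illustrative)
rng = np.random.default_rng(0)
weights = [rng.standard_normal(shape) for shape in [(8, 16), (16, 16), (16, 4)]]
out = run_layers(rng.standard_normal((1, 8)), weights)
print(out.shape)  # (1, 4)
```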

Ni says that another key benefit of the XDNA architecture is that it’s extremely scalable. “We call this cloud-to-client symmetry,” says Ni. “We can shrink the array to something very small to fit into something like a laptop or even a tablet. And we can scale it for something much bigger with hundreds or even thousands of arrays.” Such huge arrays can be deployed in data center and cloud infrastructure systems.
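As a rough software analogy for that cloud-to-client symmetry (our own toy example, again not XDNA code), the sketch below runs the same matrix-multiply kernel across a configurable number of compute tiles; only the tile count changes between the “laptop” and “data center” configurations, and the result is identical.

```python
import numpy as np

def tiled_matmul(a, b, num_tiles):
    # Split the output rows of one matrix multiply across a configurable
    # number of compute tiles -- a toy stand-in for mapping the same
    # kernel onto a small client-sized array or a much larger
    # data-center-sized array.
    row_chunks = np.array_split(a, num_tiles, axis=0)
    return np.vstack([chunk @ b for chunk in row_chunks])

rng = np.random.default_rng(1)
a = rng.standard_normal((64, 32))
b = rng.standard_normal((32, 16))

small = tiled_matmul(a, b, num_tiles=4)   # "laptop"-sized array
large = tiled_matmul(a, b, num_tiles=32)  # "data-center"-sized array
assert np.allclose(small, large)          # same kernel, same result, different scale
```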

Along just those lines, among AMD’s AI-themed product previews last night was its Alveo V70 AI accelerator board. Designed for AI and cloud infrastructure systems, the board can handle multiple AI inference workloads and is available for pre-order now.

https://www.allaboutcircuits.com/news/amd-claims-first-x86-processor-with-dedicated-ai-hardware/


