XDNA is a neural processing unit (NPU) microarchitecture developed by AMD to accelerate on-device AI (artificial intelligence) and ML (machine learning) workloads. It is based on technology acquired from Xilinx through AMD's 2022 acquisition of the company and forms the hardware foundation of AMD's Ryzen AI branding. XDNA integrates tightly with AMD's Zen CPU and RDNA GPU architectures, targeting use cases ranging from ultrabooks to high-performance enterprise systems.[1][2]
XDNA employs a spatial dataflow architecture, in which an array of AI Engine (AIE) tiles processes data in parallel with minimal external memory access. This design exploits parallelism and data locality to improve performance and power efficiency. Each AIE tile contains the following components (a conceptual sketch follows the list):
A scalar RISC-style processor responsible for control flow and auxiliary operations.
Local data memory for storing weights, activations, and intermediate results, reducing dependence on external DRAM and lowering latency.
Dedicated program memory, so instruction fetches also remain on chip, further reducing latency and power.
Dedicated DMA engines and programmable interconnects for deterministic and high-bandwidth data transfers between tiles.
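To make the dataflow idea concrete, the toy Python sketch below models tiles that keep their operands in local memory and stream intermediate results directly to a neighbouring tile instead of round-tripping through external DRAM. It is purely illustrative; the class and method names are invented for the example and do not correspond to any AMD API.

```python
# Toy model of spatial dataflow: each "tile" holds its own weights locally
# and streams its output straight to the next tile, so intermediate data
# never touches external memory. Names are illustrative, not an AMD API.

class Tile:
    def __init__(self, weight: float):
        self.local_weight = weight   # resident in tile-local memory

    def compute(self, activation: float) -> float:
        # Scalar stand-in for the tile's multiply-accumulate work.
        return self.local_weight * activation

def run_pipeline(tiles: list[Tile], x: float) -> float:
    # Stand-in for the DMA engines/interconnect: each tile's result
    # feeds the next tile directly.
    for tile in tiles:
        x = tile.compute(x)
    return x

pipeline = [Tile(0.5), Tile(2.0), Tile(3.0)]
print(run_pipeline(pipeline, 4.0))  # 12.0
```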
The tile arrays are scalable and modular, allowing AMD to configure NPUs with varying tile counts to fit different power, area, and performance targets. Operating frequencies typically reach up to 1.3 GHz, adjustable according to thermal and power constraints.
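As an illustration of how tile count and clock frequency combine into a headline throughput figure, the sketch below estimates peak throughput by counting each multiply-accumulate as two operations. The tile count, MACs per tile, and clock value are hypothetical placeholders, not published XDNA specifications.

```python
# Hypothetical back-of-the-envelope estimate of NPU peak throughput.
# None of these numbers are official XDNA specifications; they only
# illustrate how tile count and clock frequency scale the TOPS figure.

def peak_tops(num_tiles: int, macs_per_tile_per_cycle: int, clock_ghz: float) -> float:
    """Peak tera-operations per second, counting each MAC as 2 ops."""
    ops_per_cycle = num_tiles * macs_per_tile_per_cycle * 2  # multiply + add
    return ops_per_cycle * clock_ghz * 1e9 / 1e12

# Example: a hypothetical 20-tile array with 256 INT8 MACs per tile at 1.3 GHz.
print(f"{peak_tops(20, 256, 1.3):.1f} TOPS")  # ~13.3 TOPS
```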
The initial XDNA NPU launched in early 2023 with the Ryzen 7040 "Phoenix" series, achieving up to 10 TOPS (tera operations per second) in mobile form factors.[3]
Released in 2024, the Ryzen 8040 "Hawk Point" series improved the NPU through higher clock speeds and firmware tuning, raising performance to around 16 TOPS.[4]
XDNA 2 debuted with the Ryzen AI 300 and PRO 300 mobile processors based on the Zen 5 microarchitecture. This generation substantially increased AI throughput, reaching up to 55 TOPS on flagship models.[1][5][6]
The architecture offers several characteristic advantages:
Deterministic latency: The spatial dataflow architecture yields predictable and consistent inference timing, which is crucial for real-time applications.
Power efficiency: On-chip local memory usage reduces external DRAM accesses, lowering power consumption compared to traditional GPU or CPU approaches.[7]
Compute density: High TOPS in a compact silicon area enables integration into thin and light devices such as ultrabooks and portable workstations.
Scalability: The modular tile design supports scaling from lightweight mobile devices with few tiles to enterprise-class servers with many tiles.
XDNA is supported via AMD’s ROCm (Radeon Open Compute) and Vitis AI software stacks, which allow developers to target the NPU for accelerating AI workloads. Popular frameworks and runtimes such as ONNX Runtime, TensorFlow, and PyTorch are supported through these tools.[8] Additionally, the Microsoft Windows ML runtime integrates AMD NPU acceleration in devices marketed as Copilot+ PCs, enabling local AI inference without cloud dependency.[9]
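As an illustration of the ONNX Runtime path mentioned above, the snippet below requests the Vitis AI execution provider and falls back to the CPU if it is unavailable. The model path and input shape are placeholders, and provider availability depends on the installed Ryzen AI / Vitis AI software; this is a minimal sketch rather than a complete deployment recipe.

```python
# Minimal sketch: running an ONNX model through ONNX Runtime, preferring the
# Vitis AI execution provider (NPU path) with CPU fallback. "model.onnx" and
# the input shape are placeholders for an actual exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```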