Advanced Matrix Extensions


Advanced Matrix Extensions (AMX), also known as Intel Advanced Matrix Extensions (Intel AMX), are extensions to the x86 instruction set architecture (ISA) for microprocessors from Intel designed to work on matrices to accelerate artificial intelligence (AI) and machine learning (ML) workloads.[1]

Extensions

AMX was introduced by Intel in June 2020 and first implemented in the Sapphire Rapids microarchitecture for Xeon servers, released in January 2023.[2][3] It introduced 2-dimensional registers called tiles upon which accelerators can perform operations. It is intended as an extensible architecture; the first accelerator implemented is called the tile matrix multiply unit (TMUL).[4][5]

In revision 46 of Intel Architecture Instruction Set Extensions and Future Features, published in September 2022, a new AMX-FP16 extension was documented; it adds support for half-precision floating-point numbers. Revision 48, from March 2023, documented AMX-COMPLEX, which adds support for half-precision floating-point complex numbers. Both extensions are planned for inclusion in the future Granite Rapids processors, with AMX-COMPLEX limited to Granite Rapids-D.[6]

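Software typically discovers which of these extensions a processor implements through CPUID. The following minimal C sketch (GCC/Clang with <cpuid.h>) illustrates such a check; the leaf 7, sub-leaf 0 bit positions for the baseline features are well established, while the sub-leaf 1 positions shown for AMX-FP16 and AMX-COMPLEX are assumptions here and should be verified against the current revision of the ISA extensions reference.

    /* Sketch: detecting AMX features with CPUID (GCC/Clang, x86-64).
     * The sub-leaf 1 bit positions for AMX-FP16 and AMX-COMPLEX are
     * assumptions and should be checked against the ISA extensions
     * reference. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;

        /* Structured extended feature flags: CPUID leaf 7, sub-leaf 0. */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("AMX-BF16:    %u\n", (edx >> 22) & 1);
        printf("AMX-TILE:    %u\n", (edx >> 24) & 1);
        printf("AMX-INT8:    %u\n", (edx >> 25) & 1);

        /* Newer extensions are enumerated in leaf 7, sub-leaf 1. */
        if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("AMX-FP16:    %u\n", (eax >> 21) & 1);  /* assumed: bit 21 of EAX */
        printf("AMX-COMPLEX: %u\n", (edx >> 8) & 1);   /* assumed: bit 8 of EDX  */
        return 0;
    }
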
Tile matrix multiply unit

The TMUL unit supports BF16 and INT8 input types.[7] AMX-FP16 and AMX-COMPLEX additionally add support for real and complex FP16 numbers, respectively. The register file consists of 8 tiles, each with 16 rows of 64 bytes (32 BF16/FP16 or 64 INT8 elements per row). The only supported operation is matrix multiplication.[4]

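As an illustration only, the following minimal C sketch multiplies one full INT8 tile pair using the AMX intrinsics from <immintrin.h>. It assumes a GCC or Clang toolchain (compiled with -mamx-tile -mamx-int8), a Sapphire Rapids or later CPU, and Linux 5.16+ for the arch_prctl permission request; the matrix contents, sizes and variable names are illustrative, not part of any specification.

    /* Minimal sketch: one 16x64 (int8) by 64x16 (int8) tile multiply into a
     * 16x16 int32 accumulator tile, using the Intel AMX intrinsics. */
    #define _GNU_SOURCE
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define ARCH_REQ_XCOMP_PERM 0x1023  /* request use of an XSAVE feature   */
    #define XFEATURE_XTILEDATA  18      /* AMX tile data state component     */

    /* 64-byte tile configuration consumed by LDTILECFG (_tile_loadconfig). */
    typedef struct {
        uint8_t  palette_id;
        uint8_t  start_row;
        uint8_t  reserved[14];
        uint16_t colsb[16];             /* bytes per row for each tile */
        uint8_t  rows[16];              /* rows for each tile          */
    } __attribute__((packed)) tilecfg_t;

    int main(void) {
        /* On Linux the OS must first grant use of the AMX tile data state. */
        if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
            perror("AMX permission denied");
            return 1;
        }

        tilecfg_t cfg;
        memset(&cfg, 0, sizeof(cfg));
        cfg.palette_id = 1;
        cfg.rows[0] = 16; cfg.colsb[0] = 64;   /* tmm0: C, 16 x 16 int32      */
        cfg.rows[1] = 16; cfg.colsb[1] = 64;   /* tmm1: A, 16 x 64 int8       */
        cfg.rows[2] = 16; cfg.colsb[2] = 64;   /* tmm2: B, 16 rows x 64 bytes */
        _tile_loadconfig(&cfg);

        static int8_t  a[16][64], b[16][64];
        static int32_t c[16][16];
        memset(a, 1, sizeof(a));               /* all-ones inputs as a smoke test */
        memset(b, 1, sizeof(b));

        _tile_loadd(1, a, 64);                 /* load A, 64-byte row stride      */
        _tile_loadd(2, b, 64);                 /* load B (VNNI-style layout)      */
        _tile_zero(0);                         /* clear the accumulator tile      */
        _tile_dpbssd(0, 1, 2);                 /* C += A * B, signed int8 dot products */
        _tile_stored(0, c, 64);                /* write the 16x16 int32 result    */
        _tile_release();                       /* free the tile state             */

        printf("c[0][0] = %d\n", c[0][0]);     /* expect 64: sum of 64 products 1*1 */
        return 0;
    }
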
A 4th Gen Intel Xeon Scalable processor can perform 2048 INT8 or 1024 BF16 operations per cycle:[8][9] the maximal input sizes are 16×J for A and J×16 for B, where J is 64 for INT8 and 32 for BF16. The matrix multiplication requires 16×16×J multiplications and as many additions, thus performing 2×16×16×J operations in 16 cycles.[9]

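A quick check that the quoted per-cycle figures follow from these dimensions:

    \[
      \text{INT8 } (J = 64):\quad \frac{2 \cdot 16 \cdot 16 \cdot 64}{16~\text{cycles}} = 2048~\text{ops/cycle},
      \qquad
      \text{BF16 } (J = 32):\quad \frac{2 \cdot 16 \cdot 16 \cdot 32}{16~\text{cycles}} = 1024~\text{ops/cycle}.
    \]
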
Software support

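AMX support was added to the LLVM compiler[10][11][12] and to the GNU Assembler in 2020,[13] and GCC 11 added support for the AMX-TILE, AMX-INT8 and AMX-BF16 extensions.[14][15][16] The Linux kernel gained detection and state management for AMX,[17][18] shipping with the 5.16 release.[19] Accessing the Sapphire Rapids AMX instructions from virtual machines on VMware vSphere has also been documented.[20]
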
References

  1. ^ Hemsoth, Nicole (August 19, 2021). "With AMX, Intel Adds AI/ML Sparkle to Sapphire Rapids". The Next Platform.
  2. ^ online, heise (28 June 2020). "Intel AMX: Erste Informationen zur Advanced Matrix Extensions Architecture" [Intel AMX: First information on the Advanced Matrix Extensions architecture]. heise online.
  3. ^ Cutress, Ian. "Intel Xeon Sapphire Rapids: How To Go Monolithic with Tiles". AnandTech.
  4. ^ a b "Intel® Architecture Instruction Set Extensions and Future Features".
  5. ^ Schor, David (June 29, 2020). "The x86 Advanced Matrix Extension (AMX) Brings Matrix Operations; To Debut with Sapphire Rapids".
  6. ^ Larabel, Michael (July 12, 2023). "Intel Granite Rapids D Support Merged Into GCC 14". Phoronix.
  7. ^ "Advanced Matrix Extension (AMX) - x86 - WikiChip". en.wikichip.org.
  8. ^ "Accelerate Artificial Intelligence (AI) Workloads with Intel Advanced Matrix Extensions (Intel AMX)" (PDF). Intel. Retrieved 2023-04-13.
  9. ^ a b "Intel® 64 and IA-32 Architectures Optimization Reference Manual Volume 1". Intel.
  10. ^ "What's New in LLVM for 4th Gen Intel® Xeon® & Max Series CPUs". Retrieved 21 April 2023.
  11. ^ Larabel, Michael (2020-07-02). "Intel AMX Support Begins Landing In LLVM". Phoronix. Retrieved 2020-07-02.
  12. ^ "[X86-64] Support Intel AMX instructions". GitHub. 2020-07-02. Retrieved 2020-07-02.
  13. ^ a b Larabel, Michael (2020-07-02). "Intel AMX Support Lands In The GNU Assembler". Phoronix. Retrieved 2020-07-02.
  14. ^ "GCC 11 Release Series — Changes, New Features, and Fixes - GNU Project". Retrieved 21 April 2023.
  15. ^ "[PATCH] Enable GCC support for AMX". 2020-07-06. Retrieved 2020-07-09.
  16. ^ "Enable GCC support for AMX-TILE,AMX-INT8,AMX-BF16. · gcc-mirror/gcc@5c60984". GitHub. Retrieved 2022-09-05.
  17. ^ "commits with Intel AMX". 2020-07-02. Retrieved 2020-07-02.
  18. ^ "x86: Detect Intel Advanced Matrix Extensions". 2020-07-02. Retrieved 2020-07-02.
  19. ^ "Linux 5.16 Features Include FUTEX2, Intel AMX, Folios, DG2/Alchemist, More Apple Silicon Support". Phoronix.
  20. ^ "Accessing Sapphire Rapids AMX instructions on vSphere". Earl C. Ruby III. 2023-08-24.

External links