Spatial architecture

Spatial architectures are a kind of computer architecture leveraging many collectively coordinated and directly communicating processing elements (PEs) to quickly and efficiently run highly parallelizable kernels. The term "spatial" comes from the processing elements typically being arranged in an array or grid, both logically and physically in the silicon design. Their most common workloads consist of matrix multiplications, convolutions, or, in general, tensor contractions. As such, spatial architectures are often used in AI accelerators.[1][2]
The key goal of a spatial architecture is to reduce the latency and power consumption of running very large kernels through the exploitation of scalable parallelism and data reuse. For a kernel, i.e. a function to be executed, expressed as one or more loops, this means distributing its computations among the processing elements while ensuring that data dependencies land either within the same element or within the same region of elements.[3][4]
While spatial architectures can be designed or programmed to support different algorithms, each workload must then be mapped onto the processing elements using specialized dataflows. Formulating a mapping involves assigning each operation to a processing element and scheduling the ensuing data movements, all tuned to maximize data parallelism and reuse.[5][2]
Spatial architectures can be classified as SPMD (single program, multiple data) array processors, in that each processing element runs the same operations on a different subset of data, yet they are all programmed through a single mapping[6]. The architecture of an individual processing element can then itself belong to any class of Flynn's taxonomy. In particular, spatial architectures are well suited to applications whose dataflow exhibits producer-consumer relationships (e.g., parallel reduction) or can leverage efficient data sharing among a region of PEs[1].
Spatial architectures can typically be found as hardware accelerators in heterogeneous systems, under the broader category of manycore processor.[7]
Design details
The core element of a spatial architecture is its multidimensional array of processing elements. Each processing element is simple, typically a multiply-and-accumulate functional unit, a stripped-down core, or application-specific logic[4]. Processing elements are connected to each other and to the memory hierarchy through buses, a network on chip, or even asynchronous logic[2][8]. The memory hierarchy is explicitly managed and may consist of multiple on-chip buffers, such as register files, scratchpads, and FIFOs, backed by large off-chip DRAM and non-volatile memories[1].
The number of processing elements, interconnect bandwidth, and amount of on-chip memory vary widely between designs and target applications, ranging from thousands of processing elements and tens of megabytes of memory for high-performance computing[9] to tens of elements and a few kilobytes for edge devices[10].
The key performance metrics for a spatial architecture are its energy consumption and latency when running a given workload[7]. Due to technology and bandwidth limitations, the energy and latency required to access larger memories, such as DRAM, dominate those of computation, being hundreds of times higher than those of storage near the processing elements[11][12]. For this reason, a spatial architecture's memory hierarchy is intended to localize most repeated value accesses in faster and more efficient on-chip memories, exploiting data reuse to minimize costly accesses[4].
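As an illustration of these proportions, the following sketch (in Python, with purely hypothetical per-access energy figures and access counts) compares a kernel that fetches every operand from DRAM against one whose mapping serves most accesses from on-chip memories.

# Hedged illustration: the per-access energies below are placeholders,
# chosen only to reflect the "hundreds of times" gap mentioned above.
E_DRAM = 200.0       # energy per DRAM access (arbitrary units)
E_SCRATCHPAD = 6.0   # energy per on-chip scratchpad access
E_REGFILE = 1.0      # energy per PE-local register file access

accesses = 10_000_000    # operand accesses issued by a hypothetical kernel

# Every access goes to DRAM: no reuse exploited.
no_reuse = accesses * E_DRAM

# A mapping that serves 95% of accesses from register files, 4% from the
# scratchpad, and only 1% from DRAM.
with_reuse = accesses * (0.95 * E_REGFILE + 0.04 * E_SCRATCHPAD + 0.01 * E_DRAM)

print(f"energy without reuse: {no_reuse:.3e}")
print(f"energy with reuse:    {with_reuse:.3e}")
print(f"saving factor:        {no_reuse / with_reuse:.1f}x")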
Data reuse
The mechanisms that enable reuse in spatial architectures are multicast and reduction, each of which can be either spatial or temporal. A spatial architecture's interconnect can support spatial multicast, where a single read from an outer memory serves multiple writes to inner memory instances, and spatial reduction, where reads from inner memories are accumulated into a single outer write[13]. These can be implemented either with direct element-to-element forwarding, as in systolic arrays, or on the interconnect during memory accesses[7]. Temporal reuse occurs when the same value is retained in a memory while being read (multicast) and/or updated in place (reduction) multiple times, without it being re-fetched from another memory[13].
Consider the case of kernels that can be computed with parallel ALU-like processing elements, such as matrix multiplications and convolutions. Direct inter-processing-element communication can be used effectively for passing partial sums to achieve spatially distributed accumulation, or sharing the same input data for parallel computation without repeated accesses to outer memories[14].
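A minimal sketch of these two mechanisms, assuming a hypothetical row of four multiply-and-accumulate processing elements: one input value is multicast to every element (a single outer read serving many uses), while the partial products they compute are reduced into a single value before being written back.

# Minimal sketch: a 1x4 row of multiply-and-accumulate PEs (hypothetical sizes).
weights = [2, 5, 7, 11]    # one weight held stationary in each PE
x = 3                      # input value read once from the outer memory

partials = [w * x for w in weights]    # spatial multicast of x across the PEs
result = sum(partials)                 # spatial reduction of the partial sums

print(partials)    # [6, 15, 21, 33]
print(result)      # 75, written back to the outer memory as a single value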
The amount of data reuse that can be exploited is a property of the kernel being run, and can be inferred by analyzing its data dependencies. When the kernel can be expressed as a loop nest, reuse arises from subsequent iterations accessing, in part, the same values. This overlap is a form of access locality and constitutes a reuse opportunity for spatial architectures often called "stationarity"[15][1]. For kernels presenting affine transformations of indices, like convolutions and, more generally, stencil patterns, the partial overlap arising from the sliding window of the computation also yields a reuse opportunity, taking the name of "ghost zone" or "halo"[16].
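The halo overlap can be quantified directly. The sketch below, for a hypothetical one-dimensional convolution with a window of three elements, counts how many input reads are issued with and without exploiting the sliding-window overlap.

# Hedged illustration with made-up sizes: a 1-D convolution
# out[i] = sum_j w[j] * in[i + j], with window size R and N outputs.
N, R = 16, 3

# Without reuse, every output re-fetches its whole input window.
reads_naive = N * R

# With halo reuse, only the one new element per slide is fetched
# (the first window is fetched in full).
reads_reused = R + (N - 1)

print(reads_naive, reads_reused)               # 48 vs 18
print(f"reuse factor: {reads_naive / reads_reused:.2f}")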
Naturally, spatial architectures are more effective the more reuse opportunities are present. At the same time, limited hardware resources mean that not all opportunities can be leveraged at once, requiring proper planning of the computation to exploit the most effective ones[5].
Mapping computations
To run a kernel on a spatial architecture, a mapping must be constructed detailing how the execution will unfold. Mapping a workload to a spatial architecture requires binding each of its computations to a processing element and then scheduling both the computations and the data movements required to support them[17]. A good choice of mapping is crucial to maximizing performance[5]. The starting point for a mapping is the loop-nest representation of a kernel. To leverage parallelism and data reuse simultaneously, iterations must be divided between processing elements while taking care that inter-iteration data dependencies are handled by the same element or neighborhood of elements[3].

For illustrative purposes, the following mapping example focuses on a matrix multiplication, but everything remains generalizable to any data-parallel kernel. This choice stems from the fact that most works on spatial architectures focus on neural network support and related optimizations[18][19]. Note that a similar parallelization of matrix multiplication has also been discussed in the context of multiprocessor systems.
All mappings can be constructed through three loop transformations of the original kernel[20]:
- Loop tiling (or blocking), resulting in smaller and smaller block matrices that can fit in inner memories. This is repeated for every memory in the hierarchy, creating a nested copy of the original loops for each one. At any moment during execution, a memory only needs to store the data required by the iterations of its own copy of the loops and of the inner ones.
- Parallelization, similar to tiling, but different tiles are processed concurrently across multiple processing elements, rather than sequentially.
- Computation ordering, whereby loops can be arbitrarily reordered inside each copy of the original loop nest. This alters the data access patterns, changing which values are used in consecutive iterations and when.
Each memory hierarchy level is in charge of progressing through its assigned iterations. After parallelization, each processing element runs the same inner loops on partially different data. Exploited data reuse is implicit in this formalism. Tiling enables temporal reuse, while parallelization enables spatial reuse. Finally, the order of computation determines which values can actually undergo reuse[3][13].
Consider, for this example, a spatial architecture comprising a large scratchpad storing the operands in their entirety and a 16x16 array of processing elements, each with a small register file and a multiply-and-accumulate unit. Then, let the original kernel be this matrix multiplication:
M, K, N = 128, 64, 256

for m in [0:M):
    for k in [0:K):
        for n in [0:N):
            Out[m][n] += W[m][k] * In[k][n]
A complete mapping, with a tiled and parallelized kernel and a fully specified dataflow, can be written as follows. Iterations that are distributed across processing elements to run concurrently are marked with pfor:

# ================ outer memory =================
for m_mem in [0:8):
    for k_mem in [0:1):
        for n_mem in [0:16):
            # ========= across processing elements ==========
            pfor m_par in [0:16):
                pfor k_par in [0:16):
                    # ========= inside processing elements ==========
                    for n_pe in [0:16):
                        for m_pe in [0:1):
                            for k_pe in [0:4):
                                m = m_mem*16 + m_par + m_pe
                                k = k_mem*16*4 + k_par*4 + k_pe
                                n = n_mem*16 + n_pe
                                Out[m][n] += W[m][k] * In[k][n]
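The transformed loop nest visits exactly the same iteration space as the original kernel. The following sketch (plain Python, with the pfor loops executed sequentially) checks this by running both versions and comparing their outputs; the tolerance only accounts for floating-point rounding caused by the different accumulation order.

import random

M, K, N = 128, 64, 256
W = [[random.random() for _ in range(K)] for _ in range(M)]
In = [[random.random() for _ in range(N)] for _ in range(K)]

# Reference: the original, untransformed kernel.
ref = [[0.0] * N for _ in range(M)]
for m in range(M):
    for k in range(K):
        for n in range(N):
            ref[m][n] += W[m][k] * In[k][n]

# Mapped version: same iteration space, tiled and reordered as above.
# The pfor loops (m_par, k_par) are run sequentially here.
out = [[0.0] * N for _ in range(M)]
for m_mem in range(8):
    for k_mem in range(1):
        for n_mem in range(16):
            for m_par in range(16):
                for k_par in range(16):
                    for n_pe in range(16):
                        for m_pe in range(1):
                            for k_pe in range(4):
                                m = m_mem * 16 + m_par + m_pe
                                k = k_mem * 16 * 4 + k_par * 4 + k_pe
                                n = n_mem * 16 + n_pe
                                out[m][n] += W[m][k] * In[k][n]

assert all(abs(out[m][n] - ref[m][n]) < 1e-9
           for m in range(M) for n in range(N))
print("mapped loop nest matches the original kernel")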
Temporal data reuse can be observed in the form of stationarity, with outputs accumulated in place during the k_pe iterations and weights remaining stationary during the n_mem ones. Spatial reuse occurs as the reduction of outputs over k_par and the multicast of inputs throughout m_par. Each processing element instead sees a unique tile of weights. When inferring the reuse opportunities exploited by a mapping, any loop with a single iteration must be ignored, while, for each operand, only loops affecting its dimensions need to be considered[15].
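These rules can be applied mechanically. In the simplified sketch below, the loop bounds of the example mapping and the dimensions each operand depends on are transcribed by hand, and each operand is reported as reused across every multi-iteration loop that does not index it; a real mapping tool would derive the same information from the loop nest automatically.

# Simplified reuse analysis for the example mapping above.
loops = [              # (name, trip count, level, dimension it indexes)
    ("m_mem", 8,  "outer memory", "m"),
    ("k_mem", 1,  "outer memory", "k"),
    ("n_mem", 16, "outer memory", "n"),
    ("m_par", 16, "spatial",      "m"),
    ("k_par", 16, "spatial",      "k"),
    ("n_pe",  16, "PE",           "n"),
    ("m_pe",  1,  "PE",           "m"),
    ("k_pe",  4,  "PE",           "k"),
]
operands = {"Out": {"m", "n"}, "W": {"m", "k"}, "In": {"k", "n"}}

for op, dims in operands.items():
    for name, trip, level, dim in loops:
        if trip == 1:          # loops with a single iteration are ignored
            continue
        if dim not in dims:    # the loop does not index this operand
            kind = "spatial reuse" if level == "spatial" else "temporal reuse"
            print(f"{op}: {kind} across {name} ({level})")

# Expected output: reduction of Out over k_par and in-place accumulation over
# k_pe, stationarity of W across n_mem and n_pe, multicast of In over m_par,
# and temporal reuse of In across m_mem.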
Mapping optimization
The number of possible mappings varies depending on the target hardware, but easily reaches into the billions due to the large number of combinations resulting from the above decisions[21]. As a result, finding the set of transformations that yields the highest data reuse, and thus the lowest latency and power consumption for a given spatial architecture and kernel, is a challenging optimization problem[17][22][19].
Most spatial architecture designs have been developed together with a tailored mapping technique[1]. The complexity of the problem has, however, motivated the development of dedicated mapping tools that can work for a variety of spatial architectures and implement general heuristics that consistently find good mappings within reasonable time[5]. Techniques successfully applied to the problem include[7]:
- Pruned search, which exploits the fact that many mappings yield identical behavior and can be pruned as redundant (e.g., those that merely reorder a loop with a single iteration); a random search over the remaining space is then often able to find good mappings (see the sketch after this list)[5][22].
- Genetic algorithms have been used to iteratively improve an initial set of diverse random mappings by transplanting between them the most successful loop transformations[20].
- Simulated annealing also starts from a random pool of mappings, iteratively applying to them random transformations. Each transformation is then retained with a probability that is proportional to its performance improvement and inversely proportional to the elapsed time since the start of the exploration[23].
- Integer programming can be applied by reformulating the problem of assigning kernel iterations to different loops and reordering them as a generalized assignment problem. It can then be solved through dedicated tools, like Gurobi[17].
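As a hedged illustration of the first approach, the sketch below performs a pruned random search over tile sizes for the matrix multiplication example: candidate tilings whose tiles cannot all fit in a hypothetical on-chip scratchpad are pruned, the remaining space is randomly sampled, and each sample is scored with a deliberately crude proxy for off-chip traffic. Real mapping tools also prune provably equivalent mappings and use far more detailed cost models.

import random

M, K, N = 128, 64, 256
ON_CHIP_WORDS = 16 * 1024     # hypothetical scratchpad capacity, in words

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def fits_on_chip(Mt, Kt, Nt):
    # One tile of each operand must be resident at the same time.
    return Mt * Kt + Kt * Nt + Mt * Nt <= ON_CHIP_WORDS

def traffic(Mt, Kt, Nt):
    # Toy proxy for off-chip traffic, assuming the output tile stays on chip
    # while its reduction completes: W is re-read once per column of output
    # tiles, In once per row of output tiles, and Out is written back once.
    return M * K * (N // Nt) + K * N * (M // Mt) + M * N

# Enumerate candidate tilings and prune those whose tiles do not fit on chip.
space = [(Mt, Kt, Nt)
         for Mt in divisors(M) for Kt in divisors(K) for Nt in divisors(N)
         if fits_on_chip(Mt, Kt, Nt)]

# Random search over the pruned space.
sample = random.sample(space, min(200, len(space)))
best = min(sample, key=lambda t: traffic(*t))
print("best sampled tiling (Mt, Kt, Nt):", best, "traffic:", traffic(*best))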
Examples
Spatial architecture platforms
ASICs, i.e. fully custom hardware accelerator designs, are the most common form in which spatial architectures have been developed, mainly because they mesh well with the efficiency goals of spatial architectures.[7]
FPGAs can be seen as fine-grained and highly flexible spatial architectures, and the same applies to CGRAs[1]. However, neither is limited to the spatial architecture paradigm, as both may, for example, be reconfigured to run almost arbitrary tasks; they should therefore only be considered spatial architectures when set up to operate as one. In fact, several spatial architecture designs have been developed for deployment on FPGAs[24][19].
Systolic arrays are a form of spatial architecture, in that they employ a mesh of computing nodes with a programmable interconnect, allowing computations to unfold while data moves in lock-step from node to node. The computational flow graph of systolic arrays naturally aligns with the pfors of spatial architecture mappings[25].
Asynchronous arrays of simple processors are a precursor of spatial architectures following the MIMD paradigm and targeted towards digital signal processing workloads[1][26].
Not spatial architectures
Digital signal processors are highly specialized processors with custom datapaths that perform many arithmetic operations quickly, concurrently, and repeatedly on a series of data samples. Despite commonalities in target kernels, a single digital signal processor is not a spatial architecture, as it lacks the inherent spatial parallelism of an array of processing elements. Nonetheless, digital signal processors can be found in FPGAs and CGRAs, where they may form part of a larger spatial architecture design instantiated there[19].
Tensor Cores, present in Nvidia GPUs since the Volta series, do not qualify as spatial architectures either, despite accelerating matrix multiplication, as they are hardwired functional units that do not expose spatial features by themselves. Likewise, a streaming multiprocessor containing multiple tensor cores is not a spatial architecture but an instance of SIMT, since its control is shared across several GPU threads.[7]
Emergent or unconventional spatial architectures
In-memory computing proposes to perform computations on data directly inside the memory it is stored in. Its goal is to improve a computation's efficiency and density by sparing costly data transfers and reusing the existing memory hardware[27]. For instance, one operand of a matrix multiplication could be stored in memory while the other is gradually brought in, and the memory itself produces the final product. When each group of memory cells that performs a computation between the stored and an incoming operand, such as a multiplication, is viewed as a processing element, an in-memory computing-capable memory bank can be seen as a spatial architecture with a predetermined dataflow, the bank's width and height forming the characteristic pfors.[28]
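As a hedged illustration of this view, the sketch below models a hypothetical memory bank whose columns hold one operand of a matrix-vector product: each incoming input element is broadcast along a row of the bank, and every column accumulates its own dot product in place, acting as a processing element with a fixed dataflow.

# Hedged model of an in-memory computing bank (not any specific device).
W = [[1, 2, 3],      # bank contents: rows = wordlines, columns act as "PEs"
     [4, 5, 6],
     [7, 8, 9],
     [10, 11, 12]]
x = [1, 0, 2, 1]     # streamed-in operand, one element per row activation

acc = [0, 0, 0]                    # one accumulator per column
for xi, row in zip(x, W):          # activating a row broadcasts xi along it
    for c, w in enumerate(row):
        acc[c] += w * xi           # each column updates its partial sum in place

print(acc)   # [25, 29, 33] = x . W, computed without moving W out of the bank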
Cognitive computers developed as part of research on neuromorphic systems are instances of spatial architectures targeting the acceleration of spiking neural networks. Each of their processing elements is a core handling several neurons and their synapses. It receives spikes directed to its neurons from other cores, integrates them, and eventually propagates produced spikes. Cores are connected through a network-on-chip and usually operate asynchronously. Their mapping consists of assigning neurons to cores while minimizing the total distance traveled by spikes, which acts as a proxy for energy and latency. [8][29]
Specific implementations
Spatial architectures that have been produced or prototyped as independent accelerators:
- Eyeriss[1]: a deep learning accelerator developed by MIT's CSAIL laboratory, in particular Vivienne Sze's team, and presented in 2016. It employs a 108 KB scratchpad and a 12x14 grid of processing elements, each with a 0.5 KB register file. A successor, Eyeriss v2[14] has also been designed, implementing a hierarchical interconnect between processing elements to compensate for the lack of bandwidth in the original.
- DianNao[10]: a family of deep learning accelerators developed at ICT that offers both edge and high-performance computing oriented variants. Their base architecture uses reconfigurable arrays of multipliers, adders, and activation-specific functional units to parallelize most deep learning layers.
- Simba[2]: an experimental multi-chip-module spatial architecture developed by Nvidia. Each chip has roughly 110 KB of memory and features 16 processing elements, each containing a vector multiply-and-accumulate unit capable of performing a dot product between 8-element vectors. Up to a 6x6 grid of chips has been installed in the same module.
- NVDLA[30]: an open-source, parametric, unidimensional array of processing elements specialized for convolutions, developed by Nvidia.
- Tensor Processing Unit (TPU): developed by Google and internally deployed in its datacenters since 2015, its first version employed a large 256x256 systolic array capable of 92 TeraOps/second and a large 28 MB software-managed on-chip memory[9]. Several subsequent versions have been developed with increasing capabilities[12].
- TrueNorth[8]: a neuromorphic chip produced by IBM in 2014. It features 4096 cores, each capable of handling 256 simulated neurons and 64k synapses. It has no clock, and its cores operate in an event-driven manner using both synchronous and asynchronous logic.
- Reconfigurable match-action tables[31]: a network pipeline leveraging spatial parallelism through the use of many programmable ALUs and memory blocks. Developed and published as a paper in 2013, the idea was acquired by Intel and sold as a programmable switch, the Intel Tofino. Thanks to its pipelined design and logic duplication, it is able to process packets at line rate, offloading some of the logic from data center CPUs to the NICs or switches[32][33].
Spatial architectures integrated into existing products or platforms:
- Gemmini[24][25]: a systolic array-based deep learning accelerator developed by UC Berkeley as part of their open-source RISC-V ecosystem[34]. Its base configuration is a 16x16 array with 512 KB of memory, and it is intended to be controlled by a tightly coupled core.
- AI Engine[35]: an accelerator developed by AMD and integrated in their Ryzen AI series of products. In it, each processing element is a SIMD-capable VLIW core, increasing the flexibility of the spatial architecture and enabling it to also exploit task parallelism.
Workloads demonstrated to run on these spatial architectures include AlexNet[1], ResNet[2][24], BERT[21][24], and scientific computing kernels[18].
See also
- Digital signal processor
- Loop nest optimization
- Manycore processor
- Neural processing unit
- Polytope model
- Symmetric multiprocessing
- Systolic array
- Vision processing unit
References
[edit]- ^ a b c d e f g h i j Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2016). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA): 367–379. doi:10.1109/ISCA.2016.40.
- ^ a b c d e f Shao, Yakun Sophia; Clemons, Jason; Venkatesan, Rangharajan; Zimmer, Brian; Fojtik, Matthew; Jiang, Nan; Keller, Ben; Klinefelter, Alicia; Pinckney, Nathaniel; Raina, Priyanka; Tell, Stephen G.; Zhang, Yanqing; Dally, William J.; Emer, Joel; Gray, C. Thomas; Khailany, Brucek; Keckler, Stephen W. (2021). "Simba: scaling deep-learning inference with chiplet-based architecture". Commun. ACM. 64 (6). New York, NY, USA: Association for Computing Machinery: 107–116. doi:10.1145/3460227. ISSN 0001-0782.
- ^ a b c Kao, Sheng-Chun; Kwon, Hyoukjun; Pellauer, Michael; Parashar, Angshuman; Krishna, Tushar (2022). "A Formalism of DNN Accelerator Flexibility". Proc. ACM Meas. Anal. Comput. Syst. 6 (2). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3530907.
- ^ a b c Sze, Vivienne; Chen, Yu-Hsin; Yang, Tien-Ju; Emer, Joel (2017). "Efficient processing of deep neural networks: A tutorial and survey". Proceedings of the IEEE.
- ^ a b c d e Parashar, Angshuman; Raina, Priyanka; Shao, Yakun Sophia; Chen, Yu-Hsin; Ying, Victor A.; Mukkara, Anurag; Venkatesan, Rangharajan; Khailany, Brucek; Keckler, Stephen W.; Emer, Joel (2019). "Timeloop: A Systematic Approach to DNN Accelerator Evaluation". 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS): 304–315. doi:10.1109/ISPASS.2019.00042.
- ^ Quinn, Michael J. (2003). Parallel Programming in C with MPI and OpenMP. McGraw-Hill Education Group. ISBN 0071232656.
- ^ a b c d e f Silvano, Cristina; Ielmini, Daniele; Ferrandi, Fabrizio; Fiorin, Leandro; Curzel, Serena; Benini, Luca; Conti, Francesco; Garofalo, Angelo; Zambelli, Cristian; Calore, Enrico; Schifano, Sebastiano; Palesi, Maurizio; Ascia, Giuseppe; Patti, Davide; Petra, Nicola; De Caro, Davide; Lavagno, Luciano; Urso, Teodoro; Cardellini, Valeria; Cardarilli, Gian Carlo; Birke, Robert; Perri, Stefania (2025). "A Survey on Deep Learning Hardware Accelerators for Heterogeneous HPC Platforms". ACM Comput. Surv. 57 (11). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3729215. ISSN 0360-0300.
- ^ a b c Akopyan, Filipp; Sawada, Jun; Cassidy, Andrew; Alvarez-Icaza, Rodrigo; Arthur, John; Merolla, Paul; Imam, Nabil; Nakamura, Yutaka; Datta, Pallab; Nam, Gi-Joon; Taba, Brian; Beakes, Michael; Brezzo, Bernard; Kuang, Jente B.; Manohar, Rajit; Risk, William P.; Jackson, Bryan; Modha, Dharmendra S. (2015). "TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 34 (10): 1537–1557. doi:10.1109/TCAD.2015.2474396.
- ^ a b c Jouppi, Norman P.; Young, Cliff; others (2017). "In-Datacenter Performance Analysis of a Tensor Processing Unit". SIGARCH Comput. Archit. News. 45 (2). New York, NY, USA: Association for Computing Machinery: 1–12. doi:10.1145/3140659.3080246. ISSN 0163-5964.
- ^ a b Chen, Yunji; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (2016). "DianNao family: energy-efficient hardware accelerators for machine learning". Commun. ACM. 59 (11). New York, NY, USA: Association for Computing Machinery: 105–112. doi:10.1145/2996864. ISSN 0001-0782.
- ^ Horowitz, Mark (2014). "1.1 Computing's energy problem (and what we can do about it)". 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC): 10–14. doi:10.1109/ISSCC.2014.6757323.
- ^ a b Jouppi, Norman P.; Yoon, Doe Hyun; others (2021). "Ten lessons from three generations shaped Google's TPUv4i". Proceedings of the 48th Annual International Symposium on Computer Architecture. ISCA '21. Virtual Event, Spain: IEEE Press: 1–14. doi:10.1109/ISCA52012.2021.00010. ISBN 9781450390866.
- ^ a b c Kwon, Hyoukjun; Chatarasi, Prasanth; Sarkar, Vivek; Krishna, Tushar; Pellauer, Michael; Parashar, Angshuman (2020). "MAESTRO: A Data-Centric Approach to Understand Reuse, Performance, and Hardware Cost of DNN Mappings". IEEE Micro. 40 (3): 20–29. doi:10.1109/MM.2020.2985963.
- ^ a b Chen, Yu-Hsin; Yang, Tien-Ju; Emer, Joel; Sze, Vivienne (2019). "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices". IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 9 (2): 292–308. doi:10.1109/JETCAS.2019.2910232.
- ^ a b Mei, Linyan; Houshmand, Pouya; Jain, Vikram; Giraldo, Sebastian; Verhelst, Marian (2021). "ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators". IEEE Transactions on Computers. 70 (8): 1160–1174. doi:10.1109/TC.2021.3059962.
- ^ Hagedorn, Bastian; Stoltzfus, Larisa; Steuwer, Michel; Gorlatch, Sergei; Dubach, Christophe (2018). "High performance stencil code generation with Lift". Proceedings of the 2018 International Symposium on Code Generation and Optimization; Vienna, Austria. CGO '18. New York, NY, USA: Association for Computing Machinery: 100–112. doi:10.1145/3168824. ISBN 9781450356176.
- ^ a b c Huang, Qijing; Kang, Minwoo; Dinh, Grace; Norell, Thomas; Kalaiah, Aravind; Demmel, James; Wawrzynek, John; Shao, Yakun Sophia (2021). "CoSA: Scheduling by Constrained Optimization for Spatial Accelerators". 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA): 554–566. doi:10.1109/ISCA52012.2021.00050.
- ^ a b Moon, Gordon Euhyun; Kwon, Hyoukjun; Jeong, Geonhwa; Chatarasi, Prasanth; Rajamanickam, Sivasankaran; Krishna, Tushar (2022). "Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication". IEEE Transactions on Parallel and Distributed Systems. 33 (4): 1002–1014. doi:10.1109/TPDS.2021.3104240.
- ^ a b c d Serena, Curzel; Fabrizio, Ferrandi; Leandro, Fiorin; Daniele, Ielmini; Cristina, Silvano; Francesco, Conti; Luca, Bompani; Luca, Benini; Enrico, Calore; Sebastiano, Fabio; Cristian, Zambelli; Maurizio, Palesi; Giuseppe, Ascia; Enrico, Russo; Valeria, Cardellini; Salvatore, Filippone; Francesco, Lo; Stefania, Perri (2025). "A Survey on Design Methodologies for Accelerating Deep Learning on Heterogeneous Architectures". arXiv preprint. arXiv:2311.17815.
- ^ a b Kao, Sheng-Chun; Krishna, Tushar (2020). "GAMMA: automating the HW mapping of DNN models on accelerators via genetic algorithm". Proceedings of the 39th International Conference on Computer-Aided Design; Virtual Event, USA. ICCAD '20. New York, NY, USA: Association for Computing Machinery. doi:10.1145/3400302.3415639. ISBN 9781450380263.
- ^ a b Sehoon, Kim; Coleman, Hooper; Thanakul, Wattanawong; Minwoo, Kang; Ruohan, Yan; Hasan, Genc; Grace, Dinh; Qijing, Huang; Kurt, Keutzer; Michael, W.; Yakun, Sophia; Amir, Gholami (2023). "Full Stack Optimization of Transformer Inference: a Survey". arXiv preprint. arXiv:2302.14017.
- ^ a b Symons, Arne; Mei, Linyan; Verhelst, Marian (2021). "LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-based Memory Allocation". 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS): 1–4. doi:10.1109/AICAS51828.2021.9458493.
- ^ Jung, Victor J.B.; Symons, Arne; Mei, Linyan; Verhelst, Marian; Benini, Luca (2023). "SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators". 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS): 1–5. doi:10.1109/AICAS57966.2023.10168625.
- ^ a b c d e Genc, Hasan; Kim, Seah; Amid, Alon; Haj-Ali, Ameer; Iyer, Vighnesh; Prakash, Pranav; Zhao, Jerry; Grubb, Daniel; Liew, Harrison; Mao, Howard; Ou, Albert; Schmidt, Colin; Steffl, Samuel; Wright, John; Stoica, Ion; Ragan-Kelley, Jonathan; Asanovic, Krste; Nikolic, Borivoje; Shao, Yakun Sophia (2021). "Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration". 2021 58th ACM/IEEE Design Automation Conference (DAC): 769–774. doi:10.1109/DAC18074.2021.9586216.
- ^ a b "Gemmini Tutorial at IISWC 2021". berkeley.edu. 2025-07-07.
- ^ Baas, Bevan; Yu, Zhiyi; Meeuwsen, Michael; Sattari, Omar; Apperson, Ryan; Work, Eric; Webb, Jeremy; Lai, Michael; Mohsenin, Tinoosh; Truong, Dean; Cheung, Jason (March–April 2007). "AsAP: A Fine-Grained Many-Core Platform for DSP Applications". IEEE Micro. 27 (2): 34–45. doi:10.1109/MM.2007.29. S2CID 18443228.
- ^ Jaiswal, Akhilesh; Chakraborty, Indranil; Agrawal, Amogh; Roy, Kaushik (2019). "8T SRAM Cell as a Multibit Dot-Product Engine for Beyond Von Neumann Computing". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 27 (11): 2556–2567. doi:10.1109/TVLSI.2019.2929245.
- ^ Andrulis, Tanner; Emer, Joel S.; Sze, Vivienne (2024). "CiMLoop: A Flexible, Accurate, and Fast Compute-In-Memory Modeling Tool". 2024 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS): 10–23. doi:10.1109/ISPASS61541.2024.00012.
- ^ Davies, Mike; Srinivasa, Narayan; Lin, Tsung-Han; Chinya, Gautham; Cao, Yongqiang; Choday, Sri Harsha; Dimou, Georgios; Joshi, Prasad; Imam, Nabil; Jain, Shweta; Liao, Yuyun; Lin, Chit-Kwan; Lines, Andrew; Liu, Ruokun; Mathaikutty, Deepak; McCoy, Steven; Paul, Arnab; Tse, Jonathan; Venkataramanan, Guruguhanathan; Weng, Yi-Hsin; Wild, Andreas; Yang, Yoonseok; Wang, Hong (2018). "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning". IEEE Micro. 38 (1): 82–99. doi:10.1109/MM.2018.112130359.
- ^ "NVDLA Primer". nvdla.org. 2025-07-07.
- ^ Bosshart, Pat; Gibb, Glen; Kim, Hun-Seok; Varghese, George; McKeown, Nick; Izzard, Martin; Mujica, Fernando; Horowitz, Mark (2013-08-27). "Forwarding metamorphosis: fast programmable match-action processing in hardware for SDN". Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM. SIGCOMM '13. New York, NY, USA: Association for Computing Machinery. doi:10.1145/2486001.2486011. ISBN 978-1-4503-2056-6.
- ^ Kaufmann, Antoine; Peter, Simon; Sharma, Naveen Kr.; Anderson, Thomas; Krishnamurthy, Arvind (2016-03-25). "High Performance Packet Processing with FlexNIC". Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. ASPLOS '16. New York, NY, USA: Association for Computing Machinery. doi:10.1145/2872362.2872367. ISBN 978-1-4503-4091-5.
- ^ Lin, Jiaxin; Patel, Kiran; Stephens, Brent E.; Sivaraman, Anirudh; Akella, Aditya (2020). "PANIC: A High-Performance Programmable NIC for Multi-tenant Networks". ISBN 978-1-939133-19-9.
- ^ "Chipyard". slice.eecs.berkeley.edu. 2025-07-07.
- ^ "AMD AI Engine Technology". amd.com. 2025-07-07.
Further reading
- Sze, Vivienne; Chen, Yu-Hsin; Yang, Tien-Ju; Emer, Joel S. (2022). Efficient Processing of Deep Neural Networks. Morgan & Claypool Publishers. ISBN 978-3-031-01766-7.
External links
- Eyeriss project (MIT)
- Tutorial on Hardware Accelerators for Deep Neural Networks (MIT)
- Timeloop/Accelergy Overview
- Nvidia Research: Simba