Programming Data Parallel FPGA Applications Using the SIMD/Vector Model (Chapter 10)

Elsevier Inc.
DOI: 10.1016/b978-012370522-8.50015-7
  • Computer Science
  • Design
  • Mathematics


Publisher Summary

This chapter explores parallel processing on reconfigurable computers using the single instruction, multiple data (SIMD)/vector model. Reconfigurable computers can exploit parallelism at many levels of granularity, from coarse-grained parallel tasks to fine-grained instruction-level parallelism. The massive parallelism available in a reconfigurable computer more than compensates for its slower clock rate, roughly one-tenth that of modern microprocessors. Raw spatial parallelism is plentiful in reconfigurable processors, especially those based on field programmable gate arrays (FPGAs). The challenge is to partition and map the application onto the inherently parallel fabric of lookup tables, DSP blocks, and memories. Parallel activities can be explicitly described and scheduled by the programmer or hardware designer, or inferred through analysis of the source code.

SIMD/vector parallelism is well suited to the spatial parallelism of FPGAs and other coarse-grained arithmetic logic unit (ALU) arrays. In this programming model, aggregate data such as vectors and matrices are processed in parallel on arrays of function units. The data parallel model maps naturally to the physical structure of FPGAs, with dedicated hardware blocks of arithmetic units and memories tiled regularly in a two-dimensional array, along with a flexible interconnect. An FPGA implementation offers many degrees of freedom: the data parallel engine can be customized to the datasets being processed in terms of geometry (one-dimensional versus multidimensional arrays), interconnect (linear, mesh, torus), and even the processing element (PE) instruction set.
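The data parallel style the summary describes can be made concrete with a small sketch. The C fragment below is not from the chapter; NUM_PES and vec_mac are illustrative names chosen here. It strip-mines a vector multiply-accumulate so that the inner loop's iterations are independent; under the SIMD/vector model, a data parallel FPGA compiler could realize each inner iteration as a physical processing element, with all lanes executing the same operation on different data in lockstep.

    #include <stddef.h>

    #define NUM_PES 16  /* hypothetical width of the PE array */

    /* Element-wise multiply-accumulate: out[i] += a[i] * b[i].
     * Written as a strip-mined loop: the inner loop's NUM_PES
     * iterations are independent, so on an FPGA each lane could
     * become a dedicated function unit, all applying the same
     * instruction to different data. */
    void vec_mac(const float *a, const float *b, float *out, size_t n)
    {
        for (size_t base = 0; base < n; base += NUM_PES) {
            for (size_t lane = 0; lane < NUM_PES; lane++) {
                size_t i = base + lane;
                if (i < n)                 /* guard the final partial strip */
                    out[i] += a[i] * b[i]; /* same op, different data */
            }
        }
    }

In this sketch, NUM_PES stands in for the customizable PE array width; the same loop structure could be retargeted to a different geometry or interconnect by changing how lanes are indexed and connected.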
