High-level abstractions separate algorithm design from platform implementation, allowing programmers to focus on algorithms while building increasingly complex systems. This separation also provides system programmers and compilers an opportunity to optimize platform services for each application. In FPGAs, this platform-level malleability extends to the memory system: unlike general-purpose processors, in which memory hardware is fixed at design time, the capacity, associativity, and topology of FPGA memory systems may all be tuned to improve application performance. Since application kernels often use few memory resources, substantial memory capacity may be available to the platform for use on behalf of the user program. In this work, we perform an initial exploration of methods for automating the construction of these application-specific memory hierarchies. Although exploiting spare resources can be beneficial, naively consuming all memory resources may cause frequency degradation. To relieve timing pressure in large BRAM structures, we provide microarchitectural techniques to trade memory latency for design frequency. We demonstrate, by examining both hand-assembled and HLS-compiled benchmarks, that our application-optimized memory system can improve pre-existing application runtime by 25% on average.
This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to firstname.lastname@example.org.