Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor

Modern throughput processors such as GPUs employ thousands
of threads to drive high-bandwidth, long-latency memory systems.
These threads require substantial on-chip storage for registers, cache,
and scratchpad memory. Existing designs hard-partition this local
storage, fixing the capacities of these structures at design time. We
evaluate modern GPU workloads and find that they have widely
varying capacity needs across these different functions. Therefore,
we propose a unified local memory which can dynamically change
the partitioning among registers, cache, and scratchpad on a per-application
basis. The tuning that this flexibility enables improves both performance and energy efficiency, and broadens the scope
of applications that can be executed efficiently on GPUs. Compared
to a hard-partitioned design, we show that unified local memory
provides a performance benefit as high as 71% along with an energy
reduction up to 33%.

Mark Gebhart (NVIDIA)
Stephen Keckler (NVIDIA)
Ronny Krashinsky (NVIDIA)
Publication Date: Saturday, December 1, 2012