Accelerating Dependent Cache Misses with an Enhanced Memory Controller

On-chip contention increases memory access latency for multicore processors. We identify that this additional latency has a substantial effect on performance for an important class of latency-critical memory operations: those that result in a cache miss and are dependent on data from a prior cache miss. We observe that the number of instructions between the first cache miss and its dependent cache miss is usually small. To minimize dependent cache miss latency, we propose adding just enough functionality to dynamically identify these instructions at the core and migrate them to the memory controller for execution as soon as source data arrives from DRAM. This migration allows memory requests issued by our new Enhanced Memory Controller (EMC) to experience a 20% lower latency than if issued by the core. On a set of memory-intensive quad-core workloads, the EMC results in a 13% improvement in system performance and a 5% reduction in energy consumption over a system with a Global History Buffer prefetcher, the highest-performing prefetcher in our evaluation.
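For illustration, the dependent cache misses the abstract describes commonly arise in pointer-chasing code such as a linked-list traversal, where the address of one load is produced by a preceding load. The sketch below (not from the paper; the struct and function names are hypothetical) shows the pattern in C:

    #include <stddef.h>

    struct node {
        long value;
        struct node *next;   /* pointer stored in memory */
    };

    /* If loading n->next misses in the cache, the next iteration's
       loads cannot even be issued until that miss returns from DRAM,
       because their addresses depend on the missed data. Only a few
       instructions separate the first miss from its dependent miss,
       which is the property the EMC exploits. */
    long sum_list(const struct node *n) {
        long sum = 0;
        while (n != NULL) {
            sum += n->value;
            n = n->next;     /* dependent load: address comes from the prior load */
        }
        return sum;
    }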

Authors

Milad Hashemi (UT-Austin)
Khubaib (Apple)
Eiman Ebrahimi (NVIDIA)
Onur Mutlu (ETH Zurich and Carnegie Mellon University)
Yale N. Patt (UT-Austin)
