Context:
Relative to processor speed, the access time to main memory
has been growing exponentially over time, so each cache miss
costs ever more processor cycles.
Problem:
How do we design data structures or prefetchers that
hide memory latency?
What is a good way to evaluate a data structure or a
prefetcher?
Solution:
Understanding: a notion of memory-level parallelism (MLP):
a prefetcher is good only if it can generate many
useful outstanding prefetches (see the sketch below).
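The point about outstanding prefetches can be made concrete by contrasting two traversals: dependent pointer chasing serializes cache misses (at most one in flight, MLP near 1), while independent accesses let the hardware overlap many misses. A minimal C sketch, with hypothetical types and function names:

#include <stddef.h>

struct node { struct node *next; long payload; };

/* Low MLP: each load depends on the previous node, so misses
   are serialized -- at most one outstanding miss at a time. */
long sum_list(struct node *head) {
    long sum = 0;
    for (struct node *n = head; n != NULL; n = n->next)
        sum += n->payload;
    return sum;
}

/* High MLP: the addresses are independent, so the core (or a
   prefetcher) can keep many misses in flight concurrently. */
long sum_array(const long *a, size_t len) {
    long sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += a[i];
    return sum;
}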
Prefetching:
jump pointers, streams (see the jump-pointer sketch below)
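A minimal sketch of jump-pointer prefetching for a linked list: each node stores a pointer several nodes ahead, and the traversal prefetches through it so the miss overlaps with the work on the current node. Types, names, and the jump distance are hypothetical; a GCC/Clang-style __builtin_prefetch is assumed.

#include <stddef.h>

#define JUMP_DISTANCE 4   /* hypothetical: how many nodes ahead the jump pointer targets */

struct node {
    struct node *next;
    struct node *jump;    /* points JUMP_DISTANCE nodes ahead, or NULL */
    long payload;
};

/* One-time pass that installs the jump pointers. */
void build_jump_pointers(struct node *head) {
    struct node *ahead = head;
    for (int i = 0; i < JUMP_DISTANCE && ahead; i++)
        ahead = ahead->next;
    for (struct node *n = head; n != NULL; n = n->next) {
        n->jump = ahead;
        if (ahead)
            ahead = ahead->next;
    }
}

/* Traversal: issue a prefetch several nodes ahead on each step,
   keeping multiple useful misses outstanding. */
long sum_with_jump_prefetch(struct node *head) {
    long sum = 0;
    for (struct node *n = head; n != NULL; n = n->next) {
        if (n->jump)
            __builtin_prefetch(n->jump, 0 /* read */, 1 /* low temporal locality */);
        sum += n->payload;
    }
    return sum;
}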
Papers:
Stream-based prefetching, [coming soon] Chilimbi et al.