Memory hierarchy

The term memory hierarchy is used in the theory of computation when discussing performance issues in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. A memory hierarchy in computer storage distinguishes each level in the hierarchy by response time. Since response time, complexity, and capacity are related,[1] the levels may also be distinguished by their controlling technology.
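Locality of reference is what makes a hierarchy pay off: access patterns that reuse nearby addresses hit in the fast levels. As an illustration (not from the source; the block size, cache size, and access patterns here are arbitrary assumptions), a toy LRU block cache shows how a pattern's locality determines its hit rate:

```python
from collections import OrderedDict

def hit_rate(addresses, cache_blocks=64, block_size=16):
    """Fraction of accesses that hit in a tiny LRU cache of fixed-size blocks."""
    cache = OrderedDict()              # block number -> present, in LRU order
    hits = 0
    for addr in addresses:
        block = addr // block_size
        if block in cache:
            hits += 1
            cache.move_to_end(block)   # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used block
    return hits / len(addresses)

sequential = list(range(10_000))                        # strong spatial locality
strided = [(i * 4096) % 10_000 for i in range(10_000)]  # poor locality
print(hit_rate(sequential))   # → 0.9375 (15 of every 16 accesses reuse a cached block)
print(hit_rate(strided))      # → 0.0    (cycles through 625 blocks; only 64 fit)
```

The sequential scan misses only once per 16-byte block, while the large-stride scan rotates through far more blocks than the cache holds, so under LRU every access misses.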

The many trade-offs in designing for high performance include the structure of the memory hierarchy, i.e. the size and technology of each component. The various components can be viewed as forming a hierarchy of memories (m1, m2, ..., mn) in which each member mi is in a sense subordinate to the next higher member mi−1 of the hierarchy. To limit waiting by higher levels, a lower level responds by filling a buffer and then signaling to activate the transfer.
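The subordinate relationship can be sketched as a lookup that walks down the hierarchy and fills the faster levels on the way back up. This is a minimal sketch under stated assumptions: level contents are plain Python sets with no capacity limit or eviction, and the last level is assumed to hold everything:

```python
def access(levels, block):
    """Return the index of the hierarchy level (m1 = index 0) that served
    `block`, filling every faster level on the way back up so the next
    access to the same block hits sooner."""
    for i, level in enumerate(levels):
        if block in level or i == len(levels) - 1:
            for upper in levels[:i]:
                upper.add(block)   # the fill that limits waiting by higher levels
            return i

# l1 and l2 start empty; the backing store holds blocks 0..99.
l1, l2, main = set(), set(), set(range(100))
print(access([l1, l2, main], 5))   # → 2  (served by the backing store)
print(access([l1, l2, main], 5))   # → 0  (now hits in l1)
```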

There are four major storage levels.[1]

  • Internal – processor registers and cache
  • Main – the system RAM and controller cards
  • On-line mass storage – secondary storage
  • Off-line bulk storage – tertiary and off-line storage

This is the most general structuring of a memory hierarchy; many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture.

Example use of the term

Here are some quotes.

  • Adding complexity slows down the memory hierarchy.[2]
  • CMOx memory technology stretches the Flash space in the memory hierarchy.[3]
  • One of the main ways to increase system performance is minimising how far down the memory hierarchy one has to go to manipulate data.[4]
  • Latency and bandwidth are two metrics associated with caches and memory. Neither of them is uniform; each is specific to a particular component of the memory hierarchy.[5]
  • Predicting where in the memory hierarchy the data resides is difficult.[5]
  • ...the location in the memory hierarchy dictates the time required for the prefetch to occur.[5]

Application of the concept

The memory hierarchy in most computers is:

  • Processor registers – fastest possible access (usually 1 CPU cycle), only hundreds of bytes in size
  • Level 1 (L1) cache – often accessed in just a few cycles, usually tens of kilobytes
  • Level 2 (L2) cache – higher latency than L1 by 2× to 10×, often 512 KiB or more
  • Level 3 (L3) cache – higher latency than L2, often 2048 KiB or more
  • Main memory – may take hundreds of cycles, but can be multiple gigabytes. On a NUMA machine, access times may not be uniform.
  • Disk storage – millions of cycles latency if not cached, but very large
  • Tertiary storage – several seconds latency, can be huge
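The listed latencies combine into an average memory access time once a hit rate is attached to each level. The cycle counts and hit rates below are illustrative assumptions, not figures from the text:

```python
def amat(latencies, hit_rates):
    """Expected access cost in cycles: an access pays the latency of every
    level it reaches, and the last level is assumed to always hit."""
    time, p_reach = 0.0, 1.0   # p_reach: probability the access gets this far down
    for latency, hit in zip(latencies, hit_rates + [1.0]):
        time += p_reach * latency
        p_reach *= 1.0 - hit
    return time

# Hypothetical machine: L1 = 4, L2 = 12, L3 = 40, main memory = 200 cycles,
# with hit rates of 95%, 80%, and 50% at the three cache levels.
print(amat([4, 12, 40, 200], [0.95, 0.8, 0.5]))   # ≈ 6.0 cycles on average
```

Even though main memory costs 200 cycles, the high L1 hit rate keeps the average near the L1 latency, which is why minimising how far down the hierarchy an access travels matters so much.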
