What is Cache Memory in Computer Architecture?

July 15, 2018 Author: munishmishra04_3od47tgp

Processors can generally operate on operands faster than large-capacity main memory can supply them. Semiconductor memory that runs at speeds comparable to the processor does exist, but it is not economical to build all of main memory from such very high-speed parts. Instead, the most frequently used data and instructions are kept in a cache so that they can be accessed at a very fast rate, improving the overall performance of the computer. There are multiple levels of cache, ranging from the first level, which is the smallest and fastest, to the last level, which is the largest and slowest.



Basic Description of Cache Memory

Cache memory is a small-sized type of volatile computer memory that provides high-speed data access to a processor and stores frequently used computer programs, applications and data. It is the fastest memory in a computer apart from the registers, and is typically embedded directly in the processor or placed between the processor and main random access memory (RAM).

Cache memories are small, fast memories used to temporarily hold the contents of portions of main memory that are (believed to be) likely to be used. The idea is similar to virtual memory in that some active portion of a low-speed memory is stored in duplicate in a higher-speed cache memory. When a memory request is generated, it is first presented to the cache, and if the cache cannot respond, the request is then presented to main memory.
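The lookup flow described above can be sketched in a few lines of C. The sketch below assumes a hypothetical direct-mapped cache with made-up parameters (64 lines of 64 bytes); real caches are usually set-associative and managed entirely in hardware, so this is only an illustration of the hit/miss decision, not an implementation of any particular processor's cache.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direct-mapped cache: 64 lines of 64 bytes each (4 KB total). */
#define NUM_LINES 64
#define LINE_SIZE 64

struct cache_line {
    bool valid;
    uint32_t tag;
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a cache hit; on a miss, "fetches" the line from main
 * memory by recording its tag, as a real cache controller would. */
bool access_cache(uint32_t address)
{
    uint32_t index = (address / LINE_SIZE) % NUM_LINES; /* which cache line */
    uint32_t tag   = (address / LINE_SIZE) / NUM_LINES; /* rest of the address */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                /* hit: served from the cache */

    cache[index].valid = true;      /* miss: go to main memory, then keep a copy */
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    uint32_t addr = 0x1234;
    printf("first access:  %s\n", access_cache(addr) ? "hit" : "miss");
    printf("second access: %s\n", access_cache(addr) ? "hit" : "miss");
    return 0;
}

On the second call with the same address the line is already present, so the request never reaches main memory, which is exactly the behaviour described above.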

Because the CPU has to fetch instructions from main memory, its effective speed depends on how quickly those fetches complete. The CPU contains registers, which have the fastest access of all, but they are few in number and costly. Cache is cheaper per byte than registers yet much faster than main memory. Cache memory is therefore a very high-speed memory placed between the CPU and main memory, so that memory accesses can proceed closer to the speed of the CPU.

It is used to reduce the average time to access data from main memory. The cache is a smaller and faster memory that stores copies of the data from frequently used main memory locations. Most CPUs have several independent caches, including separate instruction and data caches.

Figure: Cache memory

Types of Cache Memory

Cache memory improves the speed of the CPU, but it is expensive. It is divided into levels, namely L1, L2, and L3, described below (a sketch for inspecting these sizes on a running system follows the list):

  • Level 1 (L1) cache or primary cache: L1 is the primary cache. It is very small compared to the other levels, typically between 2 KB and 64 KB depending on the processor, and it is built directly into the processor core. Instructions required by the CPU are searched for in the L1 cache first. (Registers, by contrast, include the accumulator, address registers, and the program counter; they are even faster but far fewer.)
  • Level 2 (L2) cache or secondary cache: L2 is the secondary cache. It is more capacious than L1, typically between 256 KB and 512 KB, and sits on or very close to the processor, connected to it by a high-speed bus. If an instruction is not found in the L1 cache, the processor searches the L2 cache next.
  • Level 3 (L3) cache: The L3 cache is larger but slower than L1 and L2, with a typical size between 1 MB and 8 MB. In multicore processors each core may have its own L1 and L2 caches, while all cores share a common L3 cache. Even so, the L3 cache is still roughly twice as fast as main RAM.
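On a Linux system these levels can be inspected directly; the sketch below reads the level and size files that the kernel exposes under /sys/devices/system/cpu/cpu0/cache/. The exact set of entries depends on the processor, and the paths assume a typical Linux installation.

#include <stdio.h>

/* Print the level and size of each cache reported for CPU 0 via Linux sysfs.
 * Indices beyond the last cache simply fail to open and end the loop. */
int main(void)
{
    for (int i = 0; ; i++) {
        char path[128], level[16], size[16];
        FILE *f;

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", i);
        f = fopen(path, "r");
        if (!f)
            break;                          /* no more cache indices */
        fgets(level, sizeof level, f);
        fclose(f);

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/size", i);
        f = fopen(path, "r");
        if (!f)
            break;
        fgets(size, sizeof size, f);
        fclose(f);

        printf("L%.1s cache: %s", level, size);   /* e.g. "L1 cache: 32K" */
    }
    return 0;
}

Each physical cache (for example, separate L1 instruction and data caches) appears as its own index entry, so the same level may be printed more than once.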




Cache Performance

The performance of a cache can be quantified in terms of the hit and miss rates, the cost of a hit, and the miss penalty, where a cache hit is a memory access that finds data in the cache and a cache miss is one that does not. When reading, the cost of a cache hit is roughly the time to access an entry in the cache. The miss penalty is the additional cost of replacing a cache line with one containing the desired data.

The performance of cache memory is frequently measured in terms of a quantity called the hit ratio:

Hit ratio = hits / (hits + misses) = number of hits / total memory accesses
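Combining the hit ratio with the hit cost and miss penalty defined above gives the average memory access time. The sketch below uses made-up numbers (950 hits out of 1000 accesses, a 1 ns hit time, and a 100 ns miss penalty) purely to show the arithmetic.

#include <stdio.h>

int main(void)
{
    /* Made-up numbers purely for illustration. */
    long hits = 950, misses = 50;        /* 1000 total accesses          */
    double hit_time = 1.0;               /* ns: cost of a cache hit      */
    double miss_penalty = 100.0;         /* ns: extra cost of a miss     */

    double hit_ratio = (double)hits / (hits + misses);
    double miss_rate = 1.0 - hit_ratio;

    /* Average memory access time = hit time + miss rate * miss penalty. */
    double amat = hit_time + miss_rate * miss_penalty;

    printf("hit ratio = %.2f\n", hit_ratio);   /* 0.95 */
    printf("AMAT      = %.1f ns\n", amat);     /* 1.0 + 0.05 * 100 = 6.0 ns */
    return 0;
}

With these made-up numbers the hit ratio is 0.95 and the average access time is 6 ns, far closer to the 1 ns hit time than to the 100 ns miss penalty.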

We can improve cache performance by using larger cache block sizes and higher associativity, and by reducing the miss rate, the miss penalty, and the time to hit in the cache.
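Of these, the miss rate is the one most easily influenced from software, simply by choosing a cache-friendly access pattern. The sketch below traverses the same (arbitrarily sized) array in row-major and then column-major order; the first order reuses each cache line before moving on, while the second touches a new line on almost every access.

#include <stdio.h>

#define N 1024

static double a[N][N];

int main(void)
{
    double sum = 0.0;

    /* Row-major traversal: consecutive elements share cache lines,
     * so most accesses hit in the cache. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-major traversal: each access jumps N * sizeof(double) bytes,
     * touching a new cache line almost every time and raising the miss rate. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%f\n", sum);   /* keep the loops from being optimized away */
    return 0;
}

Both loops touch exactly the same elements; only the order changes, which is why the miss rate is largely a property of the access pattern rather than of the data itself.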



