What’s L2 Cache?

L2 cache is part of a tiered storage strategy for improving computer performance. The CPU anticipates data requests and checks the L1, L2, and L3 caches in order. Multiple levels of cache optimize overall performance by keeping the most frequently used instructions in the small, fast L1 cache, backed by the larger L2 and L3 caches. Cache design is an important factor in CPU and system performance.

Level 2 or L2 cache is part of a tiered storage strategy to improve computer performance. This model uses up to three levels of cache, designated L1, L2, and L3, each bridging the gap between the computer’s very fast central processing unit (CPU) and the much slower random access memory (RAM). Historically, L1 cache was integrated into the CPU, while L2 cache (along with L3 cache, when present) was typically located on the motherboard. As the design has evolved, however, most CPUs now incorporate both L1 and L2 cache on the processor itself, and many include L3 cache as well.

The CPU cache’s job is to anticipate requests for data so that, when the user clicks on a frequently used program, for example, the instructions needed to run that program are already cached and ready. When this happens, the CPU can process the request without delay, dramatically improving computer performance. The CPU checks the L1 cache first, followed by L2 and L3. If it finds the necessary bits of data, that’s a cache hit; if none of the caches anticipated the request, that’s a cache miss, and the data has to be pulled from the slower RAM or the even slower hard drive.
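To make that lookup order concrete, here is a minimal Python sketch of a tiered lookup. The "cycle" costs and the behavior on a miss are made-up illustrative assumptions, not real hardware figures:

# Toy model of the L1 -> L2 -> L3 -> RAM lookup order described above.
# The "cycle" costs are illustrative placeholders, not real hardware figures.

LEVELS = [
    ("L1", 2),     # name, assumed access cost in arbitrary "cycles"
    ("L2", 10),
    ("L3", 40),
]
RAM_COST = 200

# Each level holds a set of addresses it currently caches -- a stand-in
# for real cache lines, tags, and replacement policies.
caches = {name: set() for name, _ in LEVELS}

def access(address):
    """Return (result, cost) for fetching address, checking L1, then L2, then L3."""
    for name, cost in LEVELS:
        if address in caches[name]:
            return name + " hit", cost
    # A miss at every level: pay the RAM penalty, then fill the caches
    # so a repeat access becomes an L1 hit.
    for name, _ in LEVELS:
        caches[name].add(address)
    return "miss (fetched from RAM)", RAM_COST

print(access(0x1000))   # ('miss (fetched from RAM)', 200)
print(access(0x1000))   # ('L1 hit', 2)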

Since it’s the CPU cache’s job to hold bits of data, you might wonder why there is more than one level of cache. Why have L2 cache, much less L3, when you can just make L1 cache bigger?

The answer is that the larger the cache, the higher the latency: small caches are faster than large ones. The best overall performance comes from putting the smallest, fastest cache closest to the CPU itself, followed by a slightly larger pool of L2 cache and an even larger pool of L3 cache. The idea is to keep the most frequently used instructions in L1, with L2 holding the data most likely to be needed next and L3 following suit. If the CPU needs to process a request that isn’t in the L1 cache, it can quickly check the L2 cache, then L3.
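A rough back-of-the-envelope calculation shows why this tiered arrangement pays off. The Python sketch below estimates an average access time for a three-level hierarchy versus a single large, slower cache; all of the latencies and hit rates are assumptions chosen for illustration, not measurements of any real CPU:

# Average-access-time estimate showing why a small, fast L1 backed by larger
# L2/L3 caches beats one big cache. All numbers below are illustrative assumptions.

def average_access_time(levels, ram_latency):
    """levels: list of (hit_latency, hit_rate) tuples ordered L1 -> L3."""
    time = 0.0
    miss_probability = 1.0
    for hit_latency, hit_rate in levels:
        time += miss_probability * hit_latency      # every lookup reaching this level pays its latency
        miss_probability *= (1.0 - hit_rate)        # only misses continue to the next level
    return time + miss_probability * ram_latency    # remaining misses go all the way to RAM

# Three small-but-fast levels...
tiered = average_access_time([(1, 0.90), (4, 0.80), (20, 0.70)], ram_latency=200)
# ...versus one big, slower cache with the same overall hit rate (99.4%).
single = average_access_time([(15, 0.994)], ram_latency=200)

print(f"tiered caches : ~{tiered:.1f} cycles per access")   # ~3.0
print(f"one big cache : ~{single:.1f} cycles per access")   # ~16.2

With these assumed numbers, the tiered design averages about 3 cycles per access while the single large cache averages about 16, even though both satisfy the same fraction of requests without going to RAM.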

Cache design is a key strategy in the highly competitive microprocessor market, as it is directly responsible for improving CPU and system performance. The multilevel cache is built from static RAM (SRAM) chips, which are faster but more expensive than the dynamic RAM (DRAM) chips used for main memory. DRAM and its synchronous variant, SDRAM, are what we usually just call RAM; despite the similar acronyms, SRAM and SDRAM should not be confused.
When looking at new computers, check the amounts of L1, L2, and L3 cache. All else being equal, a system with more CPU cache will perform better, and synchronous cache is faster than asynchronous cache.
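If you want to see these figures for a machine you already own, the operating system will usually report them. As one example, the short Python sketch below reads the per-level cache information that Linux exposes under /sys; on Windows the same numbers appear in Task Manager, and on macOS they are available through the sysctl command.

# Print each cache level reported by the Linux kernel for CPU 0.
# Linux-specific: these sysfs paths do not exist on Windows or macOS.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
    with open(f"{path}/level") as f:
        level = f.read().strip()
    with open(f"{path}/type") as f:
        kind = f.read().strip()      # Data, Instruction, or Unified
    with open(f"{path}/size") as f:
        size = f.read().strip()
    print(f"L{level} {kind}: {size}")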



