It's surprising that one of the most important aspects of performance in today's computers is also one of the least understood. Cache sits at the very top of the memory hierarchy pyramid, and all data the processor uses must pass through it. Without a well-designed cache, a high-dollar computer filled with the latest and greatest hardware will run slower than a three-legged cow. Maybe one of the reasons this topic is so little understood is its complexity.
So first, let's explain why cache is so important. Imagine you need a tool. If that tool is near you, it only takes a second to grab it and put it to use. If it isn't, you'll have to get in the car and drive to the store, and no matter how fast your car is, the trip is still hundreds if not thousands of times slower than reaching out and grabbing the tool. Think of cache as a toolbox: it sits next to the processor and is really fast to get data from. If the data is not there, the processor has to go to other, slower types of memory; think of DRAM as the corner store and your hard drive as that hardware store across town.
How it Works
Cache design is pretty complex, so let me simplify it as much as I can here. Again, if cache is the toolbox, imagine it having a set of labeled drawers, called blocks. Every time your processor needs data, it uses part of the address, called the index, to pick a drawer. It then opens that drawer and compares the tag stored inside with the tag bits of the address to see if the data there belongs to the location it wants. If the tags do not match, it goes out to the other memory, which in our example means driving to the store. As you can imagine, if you have a small toolbox, say only eight drawers, and a large task, you'll be driving quite a bit. This is why computer designers created multi-word blocks; it's like adding organizers to each of the drawers in your toolbox. Designers do this instead of simply adding more blocks because data used close together in time tends to sit close together in memory, so up to a point a larger block size increases the chances of finding the correct data, but past that point blocks get so large that they crowd out other useful data and decrease those chances.
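To make the drawer analogy concrete, here is a toy direct-mapped cache in Python. The sizes, names, and the address split are illustrative assumptions for the sketch, not the layout of any real processor; real caches work on bytes and bits, but the lookup logic (split the address into tag, index, and offset, pick a drawer by index, compare tags) is the same idea.

```python
# Toy direct-mapped cache: 8 "drawers" (blocks) of 4 words each.
# Sizes and the word-addressed memory are illustrative assumptions.

BLOCK_WORDS = 4   # words per block (the "organizers" in each drawer)
NUM_BLOCKS = 8    # number of blocks (the "drawers")

class ToyCache:
    def __init__(self):
        # Each entry is (tag, list_of_words); None means the drawer is empty.
        self.blocks = [None] * NUM_BLOCKS

    def split(self, addr):
        # A word address breaks into: offset within the block,
        # index (which drawer), and tag (the label checked inside).
        offset = addr % BLOCK_WORDS
        block_number = addr // BLOCK_WORDS
        index = block_number % NUM_BLOCKS
        tag = block_number // NUM_BLOCKS
        return tag, index, offset

    def access(self, addr, memory):
        tag, index, offset = self.split(addr)
        entry = self.blocks[index]
        if entry is not None and entry[0] == tag:
            return "hit", entry[1][offset]          # tool was in the box
        # Miss: "drive to the store" -- fetch the whole block from memory.
        base = (addr // BLOCK_WORDS) * BLOCK_WORDS
        block = [memory[base + i] for i in range(BLOCK_WORDS)]
        self.blocks[index] = (tag, block)
        return "miss", block[offset]

memory = list(range(100, 200))      # pretend main memory: word i holds 100+i
cache = ToyCache()
first = cache.access(5, memory)     # ("miss", 105): first touch fetches the block
second = cache.access(6, memory)    # ("hit", 106): neighbor word came along for free
```

Notice that the miss on address 5 pulls in words 4 through 7, so the very next access to address 6 is a hit; that is exactly the payoff of multi-word blocks described above.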
So you got curious, looked up your processor, and noticed that your cache comes in various levels. Don't be confused; there is a good reason for that. Level 1 cache actually resides in the processor core and runs at the processor's speed, very fast compared to the other RAM. Due to physical space constraints this cache is small; on the Intel Yonah dual-core processor the L1 cache is 32 KB, while others can be up to 128 KB. Level 2 cache sits outside the CPU core and before the DRAM. It typically runs at speeds below the processor's, but it is still faster than DRAM and far larger than L1 cache. Think of this as having a small toolbox next to you and a larger one in the garage: whichever one you use, it is still faster than driving to the store.
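The payoff of layering caches this way can be quantified with the standard average memory access time (AMAT) formula: each level's misses pay the next level's cost. The latencies and hit rates below are made-up illustrative numbers, not measurements of any particular chip.

```python
# Average memory access time for a two-level cache hierarchy.
# All latencies (ns) and hit rates are illustrative assumptions.

L1_HIT_TIME = 1.0    # L1 runs at core speed
L2_HIT_TIME = 5.0    # L2 is slower than L1, but far faster than DRAM
DRAM_TIME = 60.0     # the "drive to the corner store"

L1_HIT_RATE = 0.90
L2_HIT_RATE = 0.95   # of the accesses that miss in L1

def amat_two_level(l1_hit, l2_hit):
    # Misses in L2 pay the DRAM cost; misses in L1 pay the L2 cost.
    l2_miss_penalty = L2_HIT_TIME + (1 - l2_hit) * DRAM_TIME
    return L1_HIT_TIME + (1 - l1_hit) * l2_miss_penalty

with_l2 = amat_two_level(L1_HIT_RATE, L2_HIT_RATE)          # -> 1.8 ns
without_l2 = L1_HIT_TIME + (1 - L1_HIT_RATE) * DRAM_TIME    # -> 7.0 ns
```

With these (assumed) numbers, adding the "garage toolbox" cuts the average trip from 7.0 ns to 1.8 ns, even though L2 itself is five times slower than L1.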
One might ask, “if cache is so much faster than any other type of memory, why not build a system that uses only cache?” The answer is money: the cost of SRAM (what cache is made of) ranges from $4,000 to $10,000 per gigabyte, and not many people want to buy a computer that costs more than their house. As the technology advances, we find DRAM access speeds becoming quicker and the price dropping.
So now you know the basics of how cache works and why it is important. Even though manufacturers do not allow for many choices, if any, in how much cache you can buy with that new system, informed consumers can arm themselves with the tools necessary to sift through cryptic spec sheets and marketing hype to find the components that meet their needs.