Memory Timings Analysis
These are some general trends I noticed while analyzing my data:
Memory Clock Speed:
The speed of memory is most commonly measured by its clock speed, the number of cycles per second. RAM running at 133 MHz goes through 133 million clock cycles each second.
The performance gain from increasing the memory clock speed appears to be subject to the law of diminishing returns, with larger gains when moving up from lower clock speeds.
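To put clock speeds in perspective, the cycle time is simply the inverse of the clock. A quick Python sketch (the clock values are generic examples from the era, not figures from our test data):

```python
# Cycle time for several common memory clocks of this era.
# Illustrative only: these numbers are generic, not from the article's tests.
for mhz in (100, 133, 166, 200):
    cycle_ns = 1000.0 / mhz  # one clock cycle in nanoseconds
    print(f"{mhz} MHz -> {cycle_ns:.2f} ns per cycle")
```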
CAS Latency (CL):
CAS latency is the number of clock cycles (or ticks, denoted with T) between the receipt of a "read" command and the moment the RAM chip actually starts reading. Lower numbers mean less of a delay when memory is read. Corsair's website claims a low single-digit percentage gain from CAS-3 to CAS-2. Memory can be visualized as a table of cell locations; the CAS delay is invoked every time the column changes (which happens far more often than a row change).
The differences in memory bandwidth from adjusting CAS latency were non-existent: any recorded gains were inconsistent, and just as likely attributable to random variation.
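One way to see why a one-cycle CAS difference barely registers: the absolute delay is the cycle count times the cycle time. A small Python sketch (a simplified model with illustrative values, not our measured data):

```python
def cas_delay_ns(cas_cycles, clock_mhz):
    """Absolute CAS delay: latency in cycles times the cycle time."""
    return cas_cycles * 1000.0 / clock_mhz

# At 133 MHz, dropping from CAS-3 to CAS-2 saves exactly one cycle
# (about 7.5 ns), a tiny fraction of any real memory access pattern.
saving = cas_delay_ns(3, 133) - cas_delay_ns(2, 133)
print(f"CAS-3 -> CAS-2 at 133 MHz saves {saving:.2f} ns per column access")
```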
Bank Interleave:
In layman's terms, bank interleaving changes the way "banks" (chunks of memory) are accessed and refreshed. A staggered effect is created to minimize the overall refresh and access delays: a read/access command is sent to one bank of memory while the controller waits on the results of a previous read/access command. All memory chips over 64 MB have 4 banks (and can utilize this option).
Performance gains concerning bank interleave were very consistent, with a 40-50 point increase across the board, completely independent of all other settings. Of course, at higher speeds this gain is less significant (1% at 166 FSB compared to 4% at 100 FSB), meaning the relative increase shrinks at faster speeds.
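The staggering described above can be sketched with a deliberately simplified Python model; the activate and read costs below are made-up illustrative values, not measured ones:

```python
def total_time(accesses, activate=3, read=1, banks=1):
    """Toy model: cycles to perform `accesses` back-to-back reads.
    With one bank, every access pays activate + read serially.
    With interleaving, the next bank's activation overlaps the current
    read, so after the first activation only the reads remain on the
    critical path (assuming consecutive accesses hit different banks)."""
    if banks == 1:
        return accesses * (activate + read)
    return activate + accesses * read

print(total_time(100, banks=1))  # serial: 100 * (3 + 1) = 400 cycles
print(total_time(100, banks=4))  # interleaved: 3 + 100 * 1 = 103 cycles
```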
Precharge to Active (tRP):
The Precharge to Active timing controls the length of the delay between the precharge and activation commands. This influences row activation time, which comes into play when memory has hit the last column in a specific row, or when an entirely different memory location is requested.
The gain from optimizing the tRP value seemed to scale with higher FSBs (10 MB/sec at 100 FSB, 20 MB/sec at 166 FSB), giving a consistent 0.1% increase in performance. I highly doubt that this 0.1% in memory bandwidth would translate to a noticeable (or significant) real-world increase.
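The role tRP plays in a row miss can be sketched as follows; the model and the example timing values are illustrative assumptions, not measurements:

```python
def row_miss_latency(trp, trcd, cl, clock_mhz):
    """Simplified model of reading from a brand-new row: precharge the
    old row (tRP), activate the new one (tRCD), then wait out CAS
    latency (CL). Real controllers pipeline some of this."""
    cycles = trp + trcd + cl
    return cycles, cycles * 1000.0 / clock_mhz

cycles, ns = row_miss_latency(trp=3, trcd=3, cl=2, clock_mhz=133)
print(f"row miss: {cycles} cycles, {ns:.1f} ns")
```

Shaving tRP by one cycle only helps on row misses, which is consistent with the small bandwidth gain observed.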
Active to Precharge (tRAS):
The Active to Precharge timing controls the length of the delay between the activation and precharge commands -- basically, how long after activation the access cycle can be started again. This influences row activation time, which comes into play when memory has hit the last column in a specific row, or when an entirely different memory location is requested.
As with CAS latency, the performance gain was inconsistent, and could possibly be attributed to random variation.
Active to CMD (tRCD):
This timing controls the length of the delay between when a memory bank is activated and when a read/write command is sent to that bank. It comes into play when memory locations are not accessed in a linear fashion (with linear access, the current bank is already activated).
This option gave a consistent 20-30 MB/sec gain in memory bandwidth, with the results hinting at slight scaling at lower CAS latencies and higher FSBs.
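The open-row case versus the tRCD-penalized case can be sketched in Python (a simplified model with assumed example values):

```python
def read_latency(cl, trcd, row_open):
    """If the row is already open (linear access), only CAS latency
    applies; otherwise tRCD must elapse first (bank activate -> read)."""
    return cl if row_open else trcd + cl

print(read_latency(cl=2, trcd=2, row_open=True))   # linear access: 2 cycles
print(read_latency(cl=2, trcd=2, row_open=False))  # non-linear: 4 cycles
```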
DRAM Command Rate (self-abbreviated DRC):
I'm going to take a quote from Adrian's Rojak Pot in order to explain this setting:
This BIOS feature controls how long the memory controller latches on and asserts the command bus. The lower the value, the faster the memory controller can send commands out.
A faster DRAM Command Rate resulted in a consistent 30 MB/sec gain in memory bandwidth.
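A minimal sketch of why a 1T command rate frees up command-bus time, using a deliberately naive model (the command count is an arbitrary example):

```python
def command_bus_cycles(n_commands, command_rate_t):
    """Toy model: each command occupies the command bus for
    command_rate_t cycles (1T vs 2T), delaying the next command."""
    return n_commands * command_rate_t

print(command_bus_cycles(1000, 1))  # 1000 cycles of bus time at 1T
print(command_bus_cycles(1000, 2))  # 2000 cycles of bus time at 2T
```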
DRAM Burst Length (self-abbreviated DBL):
This option controls the amount of data that can be "burst" in one read/write. A burst has the advantage of invoking the CAS latency only once, allowing for less delay than an equivalent series of non-burst transactions. However, bursts can only be used for contiguous blocks of data (as only one column address is sent in the burst).
Our results showed no performance increase at all with changing the DRAM Burst Length, no matter what the circumstances were.
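The amortization of CAS latency across a burst can be sketched with a toy Python model (it assumes one word streamed per cycle after the CAS delay, an illustrative simplification):

```python
def cycles_for_words(words, burst_len, cl):
    """Toy model: each burst pays CAS latency once, then streams
    one word per cycle for burst_len words (contiguous data only)."""
    bursts = -(-words // burst_len)  # ceiling division
    return bursts * (cl + burst_len)

print(cycles_for_words(64, burst_len=4, cl=2))  # 16 bursts * 6 = 96 cycles
print(cycles_for_words(64, burst_len=8, cl=2))  # 8 bursts * 10 = 80 cycles
```

In this model a longer burst length should help slightly, which makes the flat results above somewhat surprising; the benchmark's access pattern may simply not have been burst-limited.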
Write Recovery Time (tWR):
The Write Recovery Time memory timing determines the delay between a write command and a subsequent precharge command to the same bank of memory. According to Adrian's Rojak Pot, this option improves memory performance as well as providing increased overclockability.
Our results showed no performance increase at all from changing the Write Recovery Time. This option had no influence on the overclockability of our test RAM (i.e. the test bed still crashed at more aggressive memory timings).
DRAM Access Time (self-abbreviated DAT):
I personally have no idea what this BIOS setting does (and the motherboard manual gives no clues either). My references also have no information on this setting.
From our results, the most important factor in memory bandwidth is the speed of the memory clock. This would suggest a certain desirability to sacrifice the other memory timings in hopes of pushing the memory speed higher. However, as our results revealed, the combination of memory speed and CAS latency has the most effect on the overclockability of a memory stick (our test memory would not run at the more aggressive speeds and CAS latencies). The memory timings on our particular setup that had the most impact on performance were setting the Bank Interleave to 4 Way, decreasing the DRAM Command Rate to 1T, and decreasing tRCD to 1T. Just as other websites have suggested gains from certain memory timings (the engineers at OCZ suggest that a tRAS of 3T or 4T gives a very significant increase in performance), I should remind you that these are my personal results from my test setup, and my particular combination of hardware created these "patterns."
Again, we'd like to thank Crucial for making this article possible.
:: Copyright © 2002-2008 Techware Labs, LLC :: All Rights Reserved