Memory Timings Analysis

Review by Harry Lam on 05.16.03
Test RAM provided by Crucial, MSRP: $26.00 (per stick)


  • This can't be shown in the recorded results, but from my observations during testing I noticed a general trend: as the memory timings were set more and more aggressively, Sandra reached its steady-state score more and more quickly (a gradual decrease from 5-7 trial runs down to about 2-3 runs to reach a steady-state value).
  • The overall bandwidth increase from the slowest memory timings to the fastest (1333 --> 2303 MB/sec) was approximately 73% (970 MB/sec).
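As a quick sanity check on the second bullet (assuming the 1333 and 2303 figures are Sandra bandwidth scores in MB/sec, which is how the results read):

```python
# Check of the quoted overall gain: Sandra bandwidth rose from roughly
# 1333 MB/sec (slowest timings) to 2303 MB/sec (fastest timings).
slow, fast = 1333, 2303  # MB/sec, from the recorded results

gain_mb = fast - slow            # absolute gain in MB/sec
gain_pct = gain_mb / slow * 100  # relative gain

print(f"{gain_mb} MB/sec ({gain_pct:.0f}%)")  # → 970 MB/sec (73%)
```

Both quoted numbers check out against each other.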

General Trends:

These are some general trends I noticed while analyzing my data:

Memory Clock Speed:

The speed of memory is most commonly measured by its clock speed, the number of cycles per second. RAM running at 133 MHz goes through 133 million clock cycles every second.

Clock Speed:  Performance Gain:  % Increase:  Theoretical %:
100 to 133    ~500 MB/sec        35-40%       33%
133 to 166    ~200-300 MB/sec    10-15%       25%
100 to 166    ~750-800 MB/sec    55-60%       66%

The performance gain from increasing the memory clock speed looks to be subject to the law of diminishing returns, with larger performance gains when moving from lower clock speeds.
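The "Theoretical %" column above is simply the ratio of the clock speeds. A small sketch of where those numbers come from, assuming DDR memory (two transfers per clock) on a 64-bit data bus (the review does not state the module type, so the DDR factor is an assumption; for single-data-rate modules drop the factor of 2):

```python
# The "Theoretical %" column is just the ratio of memory clock speeds.
# Peak bandwidth assumes DDR (two transfers per clock) on a 64-bit
# (8-byte) bus -- an assumption, since the module type isn't stated.
def peak_mb_per_sec(mhz, ddr=True):
    return mhz * 8 * (2 if ddr else 1)

for lo, hi in ((100, 133), (133, 166), (100, 166)):
    pct = (hi - lo) / lo * 100
    print(f"{lo} -> {hi} MHz: +{pct:.0f}% theoretical "
          f"({peak_mb_per_sec(lo)} -> {peak_mb_per_sec(hi)} MB/sec peak)")
```

The computed ratios (+33%, +25%, +66%) match the table; the measured gains fall short of theory at the top end, as noted above.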


CAS Latency:

CAS latency is the number of clock cycles (or ticks, denoted with T) between the receipt of a "read" command and the moment the RAM chip actually starts reading. Obviously, lower numbers result in less delay when memory is being read. Corsair's website claims a low single-digit % gain from CAS-3 to CAS-2. Memory can be visualized as a table of cell locations, and the CAS delay is invoked every time the column changes (which happens far more often than the row changing).

CAS Latency:                    Performance Gain:  % Increase:
3.0 to 2.5                      ~0-2 MB/sec        0%-0.001%
2.5 to 2.0                      ~0-3 MB/sec        0%-0.002%
2.0 to 1.5                      ~0-3 MB/sec        0%-0.002%
3.0 to 2.0 (166 MHz mem clock)  ~0-4 MB/sec        0%-0.002%
3.0 to 1.5 (100 MHz mem clock)  ~0-4 MB/sec        0%-0.002%

The differences in memory bandwidth from changing CAS latency were essentially non-existent; any recorded gains were inconsistent and just as likely attributable to random variation. There was no significant memory-bandwidth benefit from adjusting CAS latency alone.
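To put the setting in wall-clock terms, the CAS delay is just the CL value times the clock period. A minimal sketch (clock speeds from the review; the conversion itself is standard):

```python
# CAS latency in wall-clock time: CL cycles at the memory clock.
# The absolute differences are only a few nanoseconds, which fits the
# negligible bandwidth effect observed in testing.
def cas_delay_ns(cl, mhz):
    cycle_ns = 1000 / mhz  # one clock period in nanoseconds
    return cl * cycle_ns

print(f"CL3 @ 133 MHz: {cas_delay_ns(3, 133):.1f} ns")  # ~22.6 ns
print(f"CL2 @ 133 MHz: {cas_delay_ns(2, 133):.1f} ns")  # ~15.0 ns
print(f"CL2 @ 166 MHz: {cas_delay_ns(2, 166):.1f} ns")  # ~12.0 ns
```

Note that the same CL at a lower clock means a longer absolute delay, which is why CL and clock speed interact.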

Bank Interleave:

In layman's terms, bank interleaving changes the way "banks" (chunks of memory) are accessed and refreshed. A staggered pattern is created to minimize the overall refresh and access delays: a read/access command is sent to one bank of memory while waiting for the results of a previous command to another. All memory chips over 64 MB have four banks (and can use this option).

Bank Interleave:   Performance Gain:  % Increase:
Disabled to 2-Way  40-50 MB/sec       1%-4%
2-Way to 4-Way     40-50 MB/sec       1%-4%
Disabled to 4-Way  80-100 MB/sec      2%-8%

Performance gains from bank interleaving were very consistent, with a 40-50 MB/sec increase across the board, completely independent of all other settings. Of course, at higher speeds this gain is less significant (1% at 166 FSB compared to 4% at 100 FSB), meaning the increase does not scale with faster speeds.
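The staggering described above can be sketched as a toy address-to-bank mapping. The block size and bank count here are illustrative, not taken from the review:

```python
# Toy sketch of 4-way bank interleaving: consecutive blocks of memory
# map to rotating banks, so back-to-back sequential accesses land in
# different banks and their precharge/activate delays can overlap.
BLOCK = 64   # bytes per interleave block (illustrative)
BANKS = 4    # 4-way interleave

def bank_of(addr):
    return (addr // BLOCK) % BANKS

# Four consecutive blocks land in four different banks:
print([bank_of(a) for a in range(0, 4 * BLOCK, BLOCK)])  # → [0, 1, 2, 3]
```

With interleaving disabled, those same accesses would queue up behind a single bank's access and refresh delays.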

Precharge to Active (tRP):

The Precharge to Active timing controls the length of the delay between the precharge and activation commands. This influences row activation time, which comes into play when memory has hit the last column in a row, or when an entirely different memory location is requested.

tRP:      Performance Gain:  % Increase:
3T to 2T  10-20 MB/sec       0.1%

The gain from optimizing the tRP value seemed to scale with higher FSBs (10 MB/sec at 100 FSB, 20 MB/sec at 166 FSB), giving a consistent 0.1% increase in performance. I highly doubt that 0.1% more memory bandwidth would translate to a noticeable (or significant) real-world increase.

Active to Precharge (tRAS):

The Active to Precharge timing controls the length of the delay between the activation and precharge commands -- basically, how long after activation the access cycle can be started again. This influences row activation time, which comes into play when memory has hit the last column in a row, or when an entirely different memory location is requested.

tRAS:     Performance Gain:  % Increase:
7T to 6T  ~0-3 MB/sec        0%-0.001%

As with CAS, the performance gain was inconsistent, and possibly could be attributed to random variables.
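Together with tRP from the previous section, tRAS bounds how quickly a row can be closed and reopened: the minimum row cycle time tRC is roughly tRAS + tRP. A small sketch, with the timing values from the two tables and an illustrative 133 MHz clock:

```python
# Minimum row cycle time: tRC ≈ tRAS + tRP, converted to nanoseconds
# at the memory clock. Clock speed here is illustrative (133 MHz).
def trc_ns(tras, trp, mhz=133):
    cycle_ns = 1000 / mhz
    return (tras + trp) * cycle_ns

print(f"tRAS=7, tRP=3: {trc_ns(7, 3):.1f} ns")  # 10 cycles, ~75.2 ns
print(f"tRAS=6, tRP=2: {trc_ns(6, 2):.1f} ns")  # 8 cycles,  ~60.2 ns
```

A ~15 ns difference per full row cycle only matters when rows are being closed and reopened frequently, which helps explain why streaming bandwidth tests barely move.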

Active to CMD (tRCD):

This timing controls the length of the delay between when a memory bank is activated and when a read/write command is sent to that bank. It mainly comes into play when memory locations are not accessed in a linear fashion (with linear access, the current bank is already active).

tRCD:     Performance Gain:  % Increase:
3T to 2T  20-30 MB/sec       1.0%-1.5%

This option gave a consistent 20-30 MB/sec gain in memory bandwidth, with the results hinting at slight scaling at lower CAS latencies and higher FSBs.
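The linear-versus-non-linear distinction above can be made concrete: a read that hits the already-open row pays only the CAS latency, while a read to a different row pays precharge, activate, and then CAS (tRP + tRCD + CL). The tick values below are illustrative:

```python
# Why tRCD only matters for non-linear access (illustrative timings,
# in clock ticks):
CL, tRCD, tRP = 2, 2, 2

open_row_hit = CL           # row already active: just the CAS delay
row_miss = tRP + tRCD + CL  # close old row, open new row, then read

print(f"open-row hit: {open_row_hit}T, row miss: {row_miss}T")  # 2T vs 6T
```

A mostly-sequential bandwidth benchmark sees few row misses, so the measured gain from tightening tRCD stays small.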

DRAM Command Rate (self-abbreviated DRC):

I'm going to take a quote from Adrian's Rojak Pot in order to explain this setting:

This BIOS feature controls how long the memory controller latches on and asserts the command bus. The lower the value, the faster the memory controller can send commands out.

DRC:      Performance Gain:  % Increase:
2T to 1T  ~30 MB/sec         1.1%-2.1%

A faster DRAM Command Rate results in a consistent 30 MB/sec gain in memory bandwidth.

DRAM Burst Length (self-abbreviated DBL):

This option basically controls the amount of data that can be "burst" in one read/write.  A "burst" has the advantages of only needing to invoke the CAS latency one time, allowing for less delay than a "non-burst" transaction.  However, "burst" transactions can only be used for contiguous blocks of data (as only one column address is sent in the burst).

DBL:    Performance Gain:  % Increase:
4 to 8  0 MB/sec           0%

Our results showed no performance increase at all with changing the DRAM Burst Length, no matter what the circumstances were.
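The flat result makes sense if bursts to an open row are pipelined. In isolation a longer burst amortizes the CAS delay better, but when the next column command is issued during the current burst, sustained throughput approaches one word per clock regardless of burst length. A sketch with an illustrative CL value:

```python
# Per-word cost of an isolated burst: one CAS delay plus one word per
# clock for the burst itself. CL here is illustrative (2 ticks).
CL = 2

def isolated_cycles_per_word(bl):
    return (CL + bl) / bl  # CAS delay amortized over the burst length

print(f"BL=4 isolated: {isolated_cycles_per_word(4):.2f} cycles/word")  # 1.50
print(f"BL=8 isolated: {isolated_cycles_per_word(8):.2f} cycles/word")  # 1.25
print("pipelined streaming: ~1.00 cycles/word at either burst length")
```

So the naive model predicts a gap, but back-to-back pipelined bursts hide the CAS delay either way, which is consistent with the 0% measured difference.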

Write Recovery Time (tWR):

The Write Recovery Time memory timing determines the delay between a write command and a precharge command issued to the same bank of memory. According to Adrian's Rojak Pot, this option improves memory performance as well as providing increased overclockability.

tWR:      Performance Gain:  % Increase:
3T to 2T  0 MB/sec           0%

Our results showed no performance increase at all from changing the Write Recovery Time. This option also had no influence on the overclockability of our test RAM (i.e. the test bed still crashed at more aggressive memory timings).

DRAM Access Time (self-abbreviated DAT):

I personally have no idea what this BIOS setting does (and the motherboard manual gives no clues either).  My references also have no information on this setting.

DAT:      Performance Gain:  % Increase:
3T to 2T  0-3 MB/sec         0%-0.002%

As with CAS latency and tRAS, any gain from this setting was tiny and inconsistent.

From our results, the most important factor in memory bandwidth is the speed of the memory clock. This suggests it may be desirable to sacrifice the other memory timings in hopes of pushing the memory clock higher. However, as our results also revealed, the memory clock combined with the CAS latency has the greatest effect on the overclockability of a memory stick (our test memory would not run at the more aggressive speeds and CAS latencies).

The memory timings on our particular setup that had the most impact on performance were setting Bank Interleave to 4-Way, decreasing the DRAM Command Rate to 1T, and decreasing tRCD to 2T. Other websites have suggested gains from different timings (the engineers at OCZ suggest that a tRAS of 3T or 4T yields a very significant performance increase), but I should remind you that these are my personal results from my test setup, and my particular combination of hardware created these "patterns."


Again, we'd like to thank Crucial for making this article possible.

Discuss this article in our forums!



:: Copyright © 2002-2008 Techware Labs, LLC :: All Rights Reserved
