- DRAM and SRAM
- DRAM Types - EDO, SIMMs, DIMMs, etc.
- Fetch that Bit!
- Access Times
- Block Memory Access
- Bit Width and RAM Pairing
DRAM and SRAM
Let's now try and put this into the context of RAM chips. RAM stands for Random Access Memory. The word 'random' denotes that you can access any given bit or bits of information at will (or at random). This is a completely different method of information access from that which you would find in, say, a hard disk, which uses serial data access. With serial access, a mechanism (such as a rotating disk) presents data to you in a fixed sequence and some other mechanism (such as the drive heads) can access this data only as it is presented; as an analogy, think of a needle sitting over a record turntable. Obviously, random access is a much quicker method than serial access.
There are different types of RAM in any given machine. The 'main' memory of a computer system is made up of dynamic RAM (DRAM). This type of memory relies on capacitance: each memory cell stores charge, and this is how data is held. The drawback of this approach is that the capacitors in the RAM 'leak'. That is, they lose charge over time and must therefore be continually refreshed with pulses of current, otherwise the data stored is lost.
This refreshing is handled by refresh circuitry. Recall from the previous page that memory cells in a given memory chip are organised as rows and columns... Well, the refresh circuitry refreshes the cells a row at a time. Once a refresh is initiated, all the rows are refreshed, one after another. However, while a row is being refreshed, it is unavailable for access.
Static RAM (SRAM) does not need to be continually refreshed. These cells retain their state as long as power is supplied. SRAM chips provide much quicker access times than DRAM and draw less power. This makes SRAM a perfect candidate for fast cache memory, such as that internal to modern CPUs. (Make sure you visit the CPU Guru page to learn about cache memory.)
However, SRAM chips are much more expensive to produce because they require more transistors per memory cell. Furthermore, this means that they occupy more space than an equivalent amount of DRAM. For this reason, SRAM chips are used sparingly, which is why cache memories are so small compared to system main memory.
DRAM Types - EDO, SIMMs, DIMMs, etc.
DRAM is assembled into memory 'sticks', much like the one shown on the right. A few years ago, SIMMs (Single In-Line Memory Modules) were the standard type of main memory used in PCs. EDO (Extended Data Out) RAM is a type of asynchronous DRAM packaged as SIMMs.
A couple of years ago, SIMMs gave way to the much quicker synchronous DRAM (SDRAM) DIMMs (Dual In-Line Memory Modules). These can be distinguished from SIMMs as they are noticeably longer.
(Installation note: SIMMs were installed by inserting them into the SIMM slots at an angle of 45 degrees and then levering them until they were perpendicular to the motherboard, at which point they would click into place. DIMMs, on the other hand, are inserted by pushing them straight into the slot while perpendicular to the motherboard.)
Synchronous RAM is quicker than asynchronous RAM by virtue of the fact that its operation is synchronised with the clock of the memory bus, so data transfers happen on predictable clock edges. With the earlier asynchronous RAM, the CPU effectively 'wastes' cycles waiting for the memory to respond. This will all become clearer when you delve into the CPU Guru.
Fetch that Bit!
How do SIMMs and DIMMs fit in with the memory cells and memory chips mentioned earlier? Basically, it's all about economy of space and efficient use of pins. Combining memory chips into such memory 'sticks' saves space and provides a common interface for memory. Let's consider a 32MB SIMM (yes, I know they're antiques, but it provides a gentle introduction). Note that here, we are talking about megabytes, not megabits. If you refer to the Bits and Bytes page, you will notice that a byte is 8 bits. Therefore 32MB is actually 268,435,456 bits (8 x 32 x 1,048,576) - that's a lot of memory cells!
A typical SIMM has 8 memory chips (housed in the small black plastic casings) attached to the stick's circuit board. Each one of these memory chips has 32 megabits of storage, i.e. 32 x 1,048,576 tiny memory cells in each chip. 32 megabits multiplied by 8 chips gives 32 megabytes. Once upon a time, that was considered a huge amount of storage! I remember, in the early 90s, paying over £200 for that amount of RAM for my 486 PC. And that was a bargain! Now you can buy hundreds of megabytes for a tiny fraction of that amount. My current PC has a gigabyte of RAM and that's not a particularly large amount.
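The arithmetic above is easy to sanity-check in a few lines of Python (a quick illustration, using the figures from the text):

```python
MEGA = 1_048_576           # one 'mega' in the binary sense: 2**20
CHIPS_PER_SIMM = 8         # memory chips on a typical SIMM
BITS_PER_CHIP = 32 * MEGA  # 32 megabits of storage per chip

total_bits = CHIPS_PER_SIMM * BITS_PER_CHIP
total_megabytes = total_bits // 8 // MEGA  # 8 bits per byte

print(total_bits)       # 268435456 memory cells in total
print(total_megabytes)  # 32 MB
```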
So if there are so many millions and millions of bits stored on a machine, how do we manage to get hold of the data we want, rather than all the other stuff that we don't want? Well, each memory chip (recall that there are usually eight such chips on a single SIMM) has a large number of pins, which make electrical contact with the module and, through it, the motherboard. Setting aside power and control pins, these are the address pins, plus a single data pin. To access a bit stored in a particular place on the SIMM or DIMM, you must specify the address of that memory cell. This address is encoded by applying a voltage to a combination of address pins. Just like all the other microcomponents mentioned so far, each address pin can be either a 0 or a 1. It is the number of combinations of 0s and 1s that can be formed across all the address pins that defines how much memory can be addressed. (To learn more about addressing, check out the Bus section.)
So... you specify an address in memory by applying voltage to a combination of address pins. This address refers to one specific memory cell (in each memory chip). The data held in that memory cell (either a 0 or a 1) is then passed to the single data pin. In summary, you specify an address and, as if by magic, the bit value held at that address appears at the data pin. But we fetch more than just the one bit, as you'll soon see...
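As a toy model of a single memory chip, we can treat the 0/1 states of the address pins as a binary number that selects one cell, whose bit then appears at the data pin. The pin count and cell contents below are invented and scaled down for illustration (a real 32-megabit chip would need 25 address lines):

```python
ADDRESS_PINS = 10            # scaled-down chip: 2**10 = 1024 one-bit cells
CELLS = 2 ** ADDRESS_PINS

# The chip's cells, one bit per address (contents here are arbitrary)
memory = [0] * CELLS
memory[345] = 1

def read_bit(address_pins):
    """Decode the 0/1 states on the address pins (most significant pin
    first) into a cell address, and return the bit at the data pin."""
    address = 0
    for pin in address_pins:
        address = (address << 1) | pin
    return memory[address]

# The address 345 expressed as voltages on the 10 address pins:
pins = [(345 >> i) & 1 for i in reversed(range(ADDRESS_PINS))]
print(read_bit(pins))  # 1
```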
The time it takes for the bit value to appear at the data pin after applying the address to the address pins is called the memory access time. This is the standard method of defining the speed of memory. The old EDO RAM SIMMs have access times between 60 and 80 nanoseconds (ns). As already mentioned, much time is wasted waiting for the memory refresh.
SDRAM DIMMs are much quicker, although modern DIMM speeds are typically rated in MHz rather than access time, since they are synchronous with the memory bus speed - this will all become clearer later. (As a rough guide, PC133 RAM - which runs synchronously at a clock speed of 133MHz - is approximately twice as fast as 60ns EDO RAM.)
While 60ns or less may seem like a very short amount of time (and indeed it is; hard disk access times are roughly 150,000 times slower!), this access time is only the time required to access just a single bit in memory... Isn't it? Well, no. Remember that a typical SIMM has eight memory chips. When you apply a memory address to the SIMM, the bit value appears at the data pin of all eight chips at once. I.e. you specify an address and then 8 bits (or 1 byte) appear simultaneously at the output pins. The point here is that every byte has a unique address in memory.
Modern computers have massive data buses and can fetch 64 bits - that's 8 bytes - in a single read operation. This is done by specifying the memory address of the first of the eight bytes. Older 8 bit machines could only access a single byte at a time. (Note that although most modern computers have a 64 bit data bus, most are still 32 bit CPU architectures, meaning that the registers this data will be placed in can only handle 32 bits at a time. The upshot of this is that 64 bits at a time are fetched and stored in a cache, to be used in 32 bit chunks by the processor when it is ready for the data.)
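The idea of fetching 64 bits and consuming them in 32 bit chunks can be sketched numerically (this illustrates only the arithmetic, not any real memory controller or cache):

```python
def split_64bit_word(word):
    """Split one 64 bit fetch into the two 32 bit halves that a
    32 bit CPU would consume, one register-load at a time."""
    low = word & 0xFFFF_FFFF          # bottom 32 bits
    high = (word >> 32) & 0xFFFF_FFFF  # top 32 bits
    return high, low

word = 0x1122334455667788  # an arbitrary 64 bit value fetched from memory
high, low = split_64bit_word(word)
print(hex(high))  # 0x11223344
print(hex(low))   # 0x55667788
```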
Block Memory Access
Remember the discussion of the RAS (Row Address Strobe) and CAS (Column Address Strobe) which accompany read operations of DRAM? In short, when accessing memory, the system first performs a RAS, which 'activates' all the memory cells of a row, and then performs a CAS to obtain the correct bit. Each strobe incurs a delay, which is clearly a major component of the memory access time.
However, during block memory access, i.e. obtaining sequential lumps of data from memory, it is typically only necessary to specify the row address once, since all the bits required can be obtained from the same row. For this reason, the access time for each subsequent access in a block is typically half that of the first access. This also means that during block memory access (which predominates in a well optimised program), the CAS delay is a more important performance factor than the RAS delay.
Bit Width and RAM Pairing
Old 30 pin SIMMs have a bit width of 8 bits (more on this later). This means that for 32 bit data access (such as in 486 PCs, which used this kind of memory), memory banks must be populated in multiples of four SIMMs (since 4 x 8 = 32). Therefore at least four 30 pin SIMMs must be used. EDO SIMMs are usually 'double-sided' with 72 pins and have a bit width of 32 bits. Hence these can be used in any combination on machines with 32 bit data access (such as a 486). I.e. a single EDO SIMM would be supported. However, Pentium and later machines have 64 bit data access and therefore a Pentium system utilising 72 pin SIMMs would require the memory to be installed in pairs (since 2 x 32 = 64).
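The pairing rule is simply the bus width divided by the module's bit width. A tiny helper (hypothetical, just to make the combinations in the text explicit):

```python
def modules_required(bus_width, module_width):
    """Minimum number of identical modules needed to fill the data bus
    of one memory bank."""
    if bus_width % module_width != 0:
        raise ValueError("module width must divide the bus width")
    return bus_width // module_width

print(modules_required(32, 8))   # 486 with 30 pin SIMMs: 4 needed
print(modules_required(32, 32))  # 486 with 72 pin EDO SIMMs: 1 is fine
print(modules_required(64, 32))  # Pentium with 72 pin SIMMs: pairs
print(modules_required(64, 64))  # Pentium with DIMMs: any number works
```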
168 pin DIMMs have a bit width of 64 bits! Hence any number of DIMMs can be used in modern PCs using 64 bit data buses.
The next section looks at DDR and RDRAM, the types most commonly found in today's PCs.
Last updated: June, 2006 (DJL)