DDR and DDR

Slashhead

Newcomer
Is there any difference between local video RAM DDR and system DDR RAM?
There must be a difference (besides the BGA package), but I can't find an article on it. I checked Lost Circuits, but they only talk about system DDR.
 
Difference = latency? What? Lower, higher, be specific, people!

I'd think it would be lower, since not having to deal with multiple loads on the memory bus, stubs, trace capacitance, etc. makes things easier; but then again, the chips are clocked so much faster too, so that would probably make up for much of the difference...

But this is just my speculation. Feel free to fill in the blanks, guys. Anyone can say "latency" just to try to look good. :devilish:

Also, bit errors. You don't want those happening on your video card either, because they can crash the GPU just as easily as a CPU, and they will corrupt textures, models, display lists, and all kinds of other stuff.


*G*
 
Someone will correct me if I'm wrong...but I believe graphics DRAM has the additional ability to write out an entire row of data at a time (i.e. with only one command); this is useful for writing the framebuffer out to the RAMDAC and from there to the monitor every frame. PC main memory isn't called on to regularly dump its entire contents, so standard DRAM doesn't have this ability.

This capability was what differentiated so-called "SGRAM" from normal SDRAM in the olden days; I think it's now standard with graphics-card RAM, but I really don't know.
 
Dave H said:
This capability was what differentiated so-called "SGRAM" from normal SDRAM in the olden days; I think it's now standard with graphics-card RAM, but I really don't know.
Some cards use DDR-SDRAM, others use DDR-SGRAM. But not all chips can use the extended SGRAM features.
 
The latency difference between system and video RAM isn't that big. The video RAM usually runs at a higher clock speed, but memory latencies (such as CAS latency), as measured in RAM clock cycles, are much higher for video RAMs than for system RAMs.
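
To put rough numbers on that (illustrative figures, not from any datasheet): absolute latency is cycles divided by clock, so the two can land in the same ballpark:

system DDR: CL 2.5 at 166 MHz -> 2.5 / 166 MHz ≈ 15 ns
video DDR: CL 5 at 300 MHz -> 5 / 300 MHz ≈ 17 ns

The video RAM needs twice the cycles, but it runs at nearly twice the clock, so in nanoseconds it comes out about even.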

As for the capability to access entire rows at once: this functionality isn't necessarily very useful for RAMDAC operation - 100% bus utilization can easily be achieved with normal read/write commands, as long as you don't get bus turnarounds and too many page breaks. It may be useful for fast clearing of frame/Z buffers, though. (Although modern GPUs have other tricks available for fast clearing of these buffers.)
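
To get a feel for the numbers (an illustrative mode, not any particular card): scan-out bandwidth is just width x height x bytes per pixel x refresh rate, e.g.

1024 x 768 x 4 bytes x 85 Hz ≈ 267 MB/s

That's a small slice of the several GB/s a contemporary card's memory bus delivers, so ordinary burst reads keep the RAMDAC fed without any special row-at-a-time command.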

SGRAM (including the DDR version) also had an operation mode where you could mask memory writes on a per-bit basis - useful to avoid read-modify-write cycles when you need to modify only some bits in each byte/pixel. Nice for 2d graphics, but rarely very useful in modern 3d graphics.
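
A little C sketch of the idea (conceptual only - the merge here stands in for what the SGRAM does internally, and none of it is a real driver API):

#include <stdint.h>

/* Without a write mask the controller must read-modify-write:
   fetch the old word, merge the new bits, write it back. */
uint32_t rmw_merge(uint32_t old_word, uint32_t new_bits, uint32_t mask)
{
    return (old_word & ~mask) | (new_bits & mask); /* costs a read AND a write */
}

/* With SGRAM-style per-bit write masking, the mask register is set
   once and plain writes follow; the DRAM performs the merge
   internally, so the read (and the bus turnaround it causes) goes
   away entirely. */

That's why it helped masked 2D operations, and why it matters less once 3D chips mostly write whole pixels anyway.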
 
As for the question of what makes system and video DDR RAMs different:

Video DDR RAMs are placed in a much more tightly controlled environment (with respect to trace lengths, trace length matching, line loads, termination/impedance matching, and signal reflections) than system DRAMs. Also, they usually have to drive a much smaller load (shorter traces, fewer chips, no slot connectors or other crap on each signal line). And video DRAMs only need to work correctly in one configuration, while system DRAMs are required to work in a wide range of possible configurations.

Together, these factors allow video DRAMs to be run at much higher clock speeds than system DRAMs before bit errors start to appear on the bus. The price you pay for the higher speed is loss of upgradability.
 
Tahir said:
Does anyone remember the day you could upgrade Video RAM? Me neither!

But I do.
I had a Trident 8900 512KB (upgradable to 1MB) and later an S3 805 1MB (upgradable to 2MB).
Not that I ever upgraded either of them.

The fact that none of the RAM chips were soldered onto the Trident card came in handy when I bought a Gravis Ultrasound and put that 512KB in it.
 
My Verite 2100 card (I'm pretty sure) had slots for memory upgrades. Took standard DIMMs.

Or maybe that was my soundcard at the time. (It wasn't a Gravis, though.)

edit: it was my soundcard. A Turtle Beach something or other. (But the G200 (not too old) took a memory upgrade module, and so did the ATI Pro.)
 
Hyp-X said:
Tahir said:
Does anyone remember the day you could upgrade Video RAM? Me neither!

But I do.
I had a Trident 8900 512KB (upgradable to 1MB) and later an S3 805 1MB (upgradable to 2MB).
Not that I ever upgraded either of them.

The fact that none of the RAM chips were soldered onto the Trident card came in handy when I bought a Gravis Ultrasound and put that 512KB in it.

Hehe, yeah, I had a Rage Pro upgradable from 4MB to 8MB, which I did upgrade at the time. :)
 
I think most VESA-based cards were upgradable with extra memory. The extra memory just cost too much for it to be feasible, and it was too hard to get hold of.
 
My old S3 Trio64V+ had 1MB of EDO DRAM, upgradeable to 2MB.

My 4MB Xpert@Play (Rage Pro) was also upgradeable to 8MB of SGRAM, I believe.
 
Tahir said:
Does anyone remember the day you could upgrade Video RAM? Me neither!

Oh, there are quite a lot of cards.

Such as most Matrox 2D cards (Impression, Millennium, Mystique) and the G200, most Permedia cards (but only if the board manufacturer wanted it), some ATI Rage cards up to the Rage Pro, most S3 ViRGE cards, and most ISA, VLB, and some 2D PCI cards such as 1MB S3 Trio64/V+ cards.
 
Grall said:
Difference = latency? What? Lower, higher, be specific, people!

I'd think it would be lower, since not having to deal with multiple loads on the memory bus, stubs, trace capacitance, etc. makes things easier; but then again, the chips are clocked so much faster too, so that would probably make up for much of the difference...

But this is just my speculation. Feel free to fill in the blanks, guys. Anyone can say "latency" just to try to look good. :devilish:

Also, bit errors. You don't want those happening on your video card either, because they can crash the GPU just as easily as a CPU, and they will corrupt textures, models, display lists, and all kinds of other stuff.


*G*

1. No, the latency is higher, much higher.
Normal PC RAM has a latency of up to 3-3-3 (CL-tRCD-tRP, in memory clock cycles).
But take a look for yourself here.

2. No, with some bit errors you only get 'defective' pixels, nothing really bad.
But maybe that has changed.

BTW: only SGRAM supports block write; it's a feature that may make a graphics chip a little faster, as in the difference between a Radeon 8500 with TSOP RAM and an 8500 with 128MB FBGA RAM.
Does anybody know if the 8500/128 with FBGA RAM always uses SGRAM?

But SGRAM is useless if the chip doesn't support block write, as is the case with nVidia chips...
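
For anyone wondering what block write actually buys you, here's a conceptual C sketch (hypothetical interface, not a real API; the 16-pixels-per-command figure matches the Voodoo2 excerpt quoted in the next post):

#include <stdint.h>

#define PIXELS_PER_BLOCK 16 /* pixels filled by one SGRAM block-write command */

/* Ordinary clear: one bus write per pixel. */
void clear_per_pixel(uint16_t *fb, int n, uint16_t color)
{
    for (int i = 0; i < n; i++)
        fb[i] = color;
}

/* With block write, the chip loads 'color' into the SGRAM color
   register once, then issues only n / PIXELS_PER_BLOCK fill commands
   - roughly a 16x cut in the cycles spent on screen/Z clears. */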
 
Dave H said:
Someone will correct me if I'm wrong...but I believe graphics DRAM has the additional ability to write out an entire row of data at a time (i.e. with only one command); this is useful for writing the framebuffer out to the RAMDAC and from there to the monitor every frame. PC main memory isn't called on to regularly dump its entire contents, so standard DRAM doesn't have this ability.

This capability was what differentiated so-called "SGRAM" from normal SDRAM in the olden days; I think it's now standard with graphics-card RAM, but I really don't know.

SGRAM could also "color-expand" data on writes. The following is a quote from the 3dfx Voodoo2 docs:

Memory Architecture: The frame buffer controller of Voodoo2 Graphics (Chuck) has a 64-bit wide interleaved datapath to RGB and alpha/depth-buffer memory with support for up to 75 MHz SGRAMs or SDRAMs. For Gouraud-shaded or texture-mapped polygons with depth buffering enabled, one pixel is written per clock -- this results in a 75 MPixels/sec peak fill rate. For screen or depth-buffer clears using the standard 2D BitBLT engine, two pixels are written per clock, resulting in a 150 MPixels/sec peak fill rate. For screen or depth-buffer clears using the color expansion capabilities specific to SGRAM, sixteen (16) pixels are written per clock, resulting in a 1.2 GPixels/sec peak fill rate.
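
The arithmetic in that quote checks out neatly:

75 MHz x 1 pixel/clock = 75 MPixels/s (shaded/textured fill)
75 MHz x 2 pixels/clock = 150 MPixels/s (2D BitBLT clear)
75 MHz x 16 pixels/clock = 1200 MPixels/s = 1.2 GPixels/s (SGRAM color expansion)

So the entire 16x clear speedup comes straight from the block-write/color-expansion feature.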
 
1/3 answer, 1/3 question, 1/3 OT:
I thought that the latency of video RAM wasn't as important as in system RAM, the reason being that typical video operations have largely consisted of reading/writing huge chunks of data, where initial latency (latency for the first bit; I can't remember the technical term) isn't really important. Not sure if that will change with wacky pixel shaders, though.
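
A rough illustration of why that holds for streaming (made-up but plausible cycle counts): if the first access costs 8 cycles of latency and the burst then delivers data for 64 cycles, the bus is busy 64 / (8 + 64) ≈ 89% of the time - and since the controller can issue the next request while the current burst is still running, the latency can be hidden almost completely. Latency only really hurts on short, dependent accesses.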

To add to the choir: back in '95 I upgraded my S3 Trio64 from 1 to 2 MB and saw a huge practical benefit! Part of the reason was that the memory bus width was doubled; with 1 MB, it was effectively a Trio32.
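
That bus-width effect is easy to quantify: peak bandwidth = bus width x clock, so with an illustrative 60 MHz memory clock:

32-bit x 60 MHz = 240 MB/s
64-bit x 60 MHz = 480 MB/s

The second megabyte literally doubled the card's memory bandwidth.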
 
Fairly related, but not quite important / interesting enough to start a new thread:
In the early days of AGP cards, I never fully understood why card-/chipmakers didn't make it possible to put standard memory onto the cards, as with the SB AWE 32, where you could add your own SIMMs. Back then, system RAM was as fast as or faster (MHz-wise) than video RAM. I can understand if latency issues or suchlike prevented that from being a solution for the entire video memory, but remember that AGP texturing was considered viable! System RAM on the board can't possibly have been worse than that!
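
Some ballpark figures for that era (the AGP rates are from the spec; the local-memory example is just illustrative):

AGP 1x: 32-bit x 66 MHz ≈ 266 MB/s
AGP 2x: 32-bit x 66 MHz x 2 ≈ 533 MB/s
local: 64-bit SDRAM at 100 MHz = 800 MB/s

So bandwidth-wise, plain system-type memory on the card would indeed have compared respectably with AGP texturing.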

Addendum:
Maybe that was what Real3D / Intel effectively did with their PCI i740 boards, if anyone remembers those. But I'm thinking that it can't have been terribly expensive to add the possibility to other boards, and I think that in certain market segments (such as the B3D / enthusiast / spend-way-too-much-on-hardware market) the possibility to expand memory to unjustified but bragging-inducing levels should have sold cards.
 