SDRAM/SGRAM...

Murakami

...what are the differences? And what about the abandonment of dual-ported RAM (VRAM/WRAM)? Maybe the need for an external RAMDAC?
 
IIRC (it's been a long time since I last saw the SGRAM specs), SGRAM has two major features over plain SDRAM:
  • SGRAM allows you to program a bitmask (of the same bit width as the SGRAM chip), which is then used to mask out bits for subsequent write operations. This is useful if you want to write, for each pixel, a datum smaller than the SGRAM bus width: a single color component out of {Red, Green, Blue}, say, or a stencil buffer value, or a pixel format where each pixel holds only 4 or 8 bits of data. Without this write mask, you need a read-modify-write cycle every time you perform such an operation (see the sketch after this list). In practice, though, this doesn't seem to be much of a problem for modern, high-performance 3D graphics.
  • SGRAM has a special operation mode (block write) where you can initialize an entire DRAM memory page to the same value at once, instead of writing the value to every byte of the SGRAM. With older GPUs, this could dramatically reduce the time needed to clear the framebuffer; more modern GPUs use other tricks to achieve or simulate fast framebuffer clears, so this SGRAM functionality helps much less than it used to.
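For illustration, here is a minimal C sketch of what those two features buy you, in software terms. The function names, the 32-bit word width, and the 256-word "page" size are assumptions made up for this example, not anything from the SGRAM spec; on real SGRAM, both operations are single commands handled inside the chip.

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

/* Without a hardware write mask, updating only some bits of a word
 * costs a full read-modify-write cycle on the memory bus. */
static void rmw_write(uint32_t *word, uint32_t value, uint32_t mask)
{
    uint32_t old = *word;                   /* read */
    *word = (old & ~mask) | (value & mask); /* modify + write */
}

/* SGRAM block write, sketched in software: fill a whole page with one
 * value. In hardware this is one command per page rather than one
 * write per word; the loop below stands in for that single command. */
static void block_fill(uint32_t *page, uint32_t value, size_t words)
{
    for (size_t i = 0; i < words; i++)
        page[i] = value;
}

int main(void)
{
    uint32_t framebuffer[256]; /* a "page" of framebuffer memory */

    /* Fast clear: one block-write command instead of 256 writes. */
    block_fill(framebuffer, 0x00336699u, 256);

    /* Touch only the low 8 bits of one pixel (say, a blue channel).
     * Plain SDRAM needs the read-modify-write above; with SGRAM's
     * write mask the chip masks the bits itself, so the controller
     * issues a single write and the read disappears. */
    rmw_write(&framebuffer[7], 0x000000FFu, 0x000000FFu);

    printf("pixel 7 = 0x%08" PRIX32 "\n", framebuffer[7]);
    return 0;
}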
As for dual-ported VRAMs, they gradually disappeared when graphics chipsets became single-chip solutions. Once the RAMDAC moved onto the graphics chip, the VRAM's second (serial) port had to connect to the graphics chip as well, which meant nearly twice as many memory pins. At that point it was better to either drop those pins and save some money, or spend them on a wider general-purpose memory bus, giving you nearly twice the performance.
 
Thank you very much, arjan de lumens.
My cheap, old Sapphire Radeon 9700 uses SGRAM instead of SDRAM... :?
 
arjan de lumens said:
As for dual-ported VRAMs, they gradually disappeared when graphics chipsets started to become single-chip solutions...
That's true, but there's another reason. VRAM originally appeared when DRAMs got so dense that the number of DRAM chips on a board dropped, narrowing the aggregate memory bus to the point where there wasn't enough bandwidth for both rendering and display refresh. I recall a device back in the early '80s that only rendered during retrace, since display update took 100% of the bandwidth during active line time. Once DRAM bandwidth improved enough that a single port could both render and refresh the display, VRAMs became more expensive and less flexible (as well as adding cost to the graphics chip, as noted above). So VRAMs went away... and the wheel of reincarnation keeps turning, what fun!
 