Embedded Memory in GPUs too expensive?

Killer-Kris said:
MfA said:
At 90nm they'd have to dedicate something like 1/4-1/3 of the die to memory.

So going by die size alone (ie. ignoring process problems with eDram) how much does that add to the cost?
Chips are charged per wafer so (ignoring the fact that rectangular chips don't pack very nicely onto circles) the cost is
  • partly proportional to area because you get fewer chips per wafer, and
  • as errors in the silicon are ~proportional to area, you get a more than linear increase in the failure rate

Actually, that last bit might not really apply to RAM because you can put in 'spare' storage rows. A BIST (built in self test) can then check for bad rows and remap them to a spare row.
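The spare-row idea can be sketched as a tiny remap table. This is just a toy model (the function name and data structure are mine, not any real BIST interface), assuming the self-test has already identified the bad rows:

```python
# Toy model of BIST row remapping: bad rows found at test time are
# redirected to spare rows, so a die with a few defective rows still yields.
def build_remap(bad_rows, num_spares):
    """Return a bad-row -> spare-row mapping, or None if there are
    more defective rows than spares (die must be scrapped)."""
    if len(bad_rows) > num_spares:
        return None  # not enough redundancy to repair this die
    # Spare rows sit outside the normal address space; assign them in order.
    return {row: ("spare", i) for i, row in enumerate(sorted(bad_rows))}

remap = build_remap(bad_rows={17, 983}, num_spares=4)
print(remap)  # two bad rows mapped onto spares 0 and 1
```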
 
3DO / Matsushita MX chipset - first proposed consumer use of embedded memory in graphics processor - canned

This turns out not to be the case. Before 3DO was notes on a napkin, Alliance Semiconductor designed embedded graphics memory into the "ProMotion" line of GPUs. I think they actually shipped in 1994. (I worked at Alliance 1993-1997, and it was definitely in the first half of that time frame.)

Turned out that OEMs were interested in embedded memory on GPUs only when memory prices were high. When memory prices plummeted, OEMs wanted discrete commodity RAMs.

The situation may be totally different now. This was back when Intel wanted to push this newfangled bus called AGP, and there were still 30 players in PC graphics. With AGP/PCI express speeds, embedded cache and main RAMs may be more effective than the NVIDIA solution of putting Winbond RAMs on-chip for the MAP product line.
 
Dave B(TotalVR) said:
Simon said:
as errors in the silicon are ~proportional to area, you get a more than linear increase in the failure rate

Presumably it would follow the form x^2?
Not quite. If you have no redundancy in the chip and model chip defects as a so-called 'Poisson process', the yield will fall off exponentially with area, like roughly yield=e^(-a*d), where a is the die area and d is the average number of defects per unit of die area.

If you do have redundancy (which it would be rather foolish not to have if you have an eDRAM in the first place, as the cost of building a couple of redundant rows/columns into the eDRAM is practically zero), the formula gets a lot more complex, as you can now sometimes tolerate a certain number of defects before you need to throw the chip away. You can, however, make a first-order approximation like yield=e^(-b*d), where b is the chip area NOT protected by redundancy; this is usually close enough to reality to be useful.
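The two approximations above are easy to play with numerically. This is just the stated formulas; the die areas and defect density below are made-up, illustrative numbers:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """No-redundancy yield: fraction of dies with zero defects,
    i.e. yield = e^(-a*d) for a Poisson defect model."""
    return math.exp(-area_mm2 * defects_per_mm2)

def yield_with_redundancy(unprotected_mm2, defects_per_mm2):
    """First-order approximation: only defects landing in the
    non-redundant area kill the die, i.e. yield = e^(-b*d)."""
    return math.exp(-unprotected_mm2 * defects_per_mm2)

d = 0.005      # assumed defects per mm^2 (illustrative)
logic = 150.0  # mm^2 of logic, no redundancy (illustrative)
edram = 50.0   # mm^2 of eDRAM, protected by spare rows (illustrative)

print(poisson_yield(logic + edram, d))      # whole die unprotected
print(yield_with_redundancy(logic, d))      # eDRAM defects mostly repairable
```

With these numbers the redundancy-protected case always yields better, since b < a for the same defect density.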
 
arjan de lumens said:
Dave B(TotalVR) said:
Simon said:
as errors in the silicon are ~proportional to area, you get a more than linear increase in the failure rate

Presumably it would follow the form x^2?
Not quite. If you have no redundancy in the chip and model chip defects as a so-called 'Poisson process', the yield will fall off exponentially with area, like roughly yield=e^(-a*d)
Whew! That saves me having to work it out again :)
 
Killer-Kris said:
JohnH said:
Due to the large cost of embedded memory, embedding FB memory only really works where you are able to constrain things like target resolution and bit depth, i.e. it's reasonably suited to a closed system like a console, but is problematic in the desktop space where these things can't be constrained...

John.

Well, I imagine that they'd be able to put 32MB of eDRAM on chip relatively soon, which is more than enough for 1600x1200 or lower resolutions with AA. This would almost certainly be feasible and accepted in something like the budget market, where consumers don't have such lofty expectations of the product. This sort of setup would allow board producers to put only 32-64MB of on-board RAM for textures behind a 64-bit bus and see very little (if any) performance drop.
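As a sanity check on the 32MB figure, here's rough framebuffer arithmetic, assuming uncompressed 32-bit colour plus 32-bit Z/stencil per sample and single-buffered colour. Real designs vary (compression, double buffering, resolve buffers), so treat the numbers as ballpark:

```python
def framebuffer_mb(width, height, samples=1):
    """Rough on-chip framebuffer footprint in MB: per sample,
    4 bytes of colour + 4 bytes of Z/stencil, no compression."""
    bytes_total = width * height * samples * (4 + 4)
    return bytes_total / (1024 * 1024)

print(framebuffer_mb(1600, 1200, samples=1))  # ~14.6 MB, fits easily
print(framebuffer_mb(1600, 1200, samples=2))  # ~29.3 MB, just under 32
print(framebuffer_mb(1600, 1200, samples=4))  # ~58.6 MB, overflows 32MB
```

By this crude count, 2x AA at 1600x1200 squeezes into a 32MB pool but 4x does not, which illustrates the resolution/AA constraint being debated here.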

It wasn't that long ago that we all owned video cards that were physically limited to resolutions below 1600x1200; even now there are not many people with monitors capable of that resolution (especially in the target market). So any limit placed on resolution or AA by the eDRAM is not likely to affect anyone buying those cards.

The only real con to eDRAM is its cost versus the money saved by using smaller buses, less/cheaper RAM, and the reduced PCB size that both allow. Once that price threshold is crossed I believe that eDRAM will be adopted in a heartbeat, since it will be able to hold performance constant (possibly increasing it) while prices drop.

You're seriously underestimating the cost of that much embedded memory; it's nowhere near cheap enough for the mid to low end, and it's likely to remain that way for some time. It would only be viable in high-end parts. However, most people buying a high-end part have the reasonable expectation that they can run at even higher resolutions with AA enabled, so you have to put down even more eDRAM; so even at the high end it doesn't work.

All imho of course.
John.
 
Simon F said:
Chips are charged per wafer so (ignoring the fact that rectangular chips don't pack very nicely onto circles)

Heh, curious: is there any reason chips couldn't be made as hexagons to decrease the wasted wafer area? More of a pain to cut, of course, which I guess might not be worth the space saved.
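On the squares-on-a-circle waste: there is a standard gross-dies-per-wafer approximation, DPW ≈ π(D/2)²/S − πD/√(2S), where D is the wafer diameter, S the die area, and the second term estimates the partial dies lost along the wafer edge. A sketch with illustrative numbers (the wafer diameter, die sizes, and function name are mine):

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard approximation: wafer area over die area, minus an
    edge-loss term for partial dies along the circumference."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Same wafer, die area doubled: you get fewer than half as many dies,
# because bigger dies also waste more of the wafer edge.
print(gross_dies_per_wafer(300, 100))  # ~640 dies
print(gross_dies_per_wafer(300, 200))  # ~306 dies
```

This is the geometric loss only; the yield losses discussed above come on top of it.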
 
Colourless said:
Triangles then :)
Apart from the minor fact that all (AFAIK) layout tools work in horizontal and vertical lines ( ;) ), I suspect that the corners would be more susceptible to breaking off.

Anyway, why stop there? I think the chips should use Penrose tiles :)
 
Sounds cool, but it's gotta make cooling a beotch. Anyone know how well a memory chip conducts heat compared to Arctic Silver? ;)
 