Next-gen tech? Rambus targets 1TB/s memory for 2010

Interesting article on an initiative Rambus has started to hit 1TB/s of memory bandwidth to a single memory controller by 2010.

Given that it builds on XDR2 tech, Rambus's relationship with Sony, and the timeframe, it's tempting to consider the possible role of such technology in a next-generation console (which the article also notes).

http://www.realworldtech.com/page.cfm?ArticleID=RWT120307033606&p=1

The Terabyte Bandwidth Initiative is still in development, hence there are no shipping products, but the goals are now public and Rambus will demonstrate a test board that achieves 1TB/s of bandwidth over this signaling technology. This article will provide an in-depth look at the history, target market, technical innovations and test vehicle for Rambus’ Terabyte Bandwidth Initiative (TBI).

[Image: Rambus-TBI-1.jpg]


If they actually hit that goal, it would offer double the typical scaling in such a timeframe (as per this chart, although I think some of the numbers might be slightly off; for the PS2, for example, they're only counting main memory BW).
 
Of course they do, as it makes them look even better than they already are. ;)

The average guy in the street (or someone in PC space with limited or no console knowledge) isn't going to know that.
 
Does the average guy in the street even know what 1TB/s means at all?

:p
 
.. and for PS3 they count XDR+GDDR together.
I think it's actually a count of Rambus's memory performance, as it scales up to 50GB/s now, although PS3 chose to go half and half for cost.
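
As a rough sanity check on those figures, here's a sketch assuming PS3's first-generation XDR runs at 3.2Gbps per pin on a 64-bit interface, and that the "~50GB/s" figure corresponds to a doubled-width bus (my assumptions, not from the article):

```python
# Rough sanity check on the XDR figures above. Assumptions (not from the
# article): 3.2 Gbps per pin, PS3 using a 64-bit XDR interface, and the
# "~50GB/s" figure corresponding to a doubled, 128-bit interface.

def xdr_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return pin_rate_gbps * bus_width_bits / 8

print(f"PS3 XDR, 64-bit:  {xdr_bandwidth_gbs(3.2, 64):.1f} GB/s")   # ~25.6
print(f"XDR, 128-bit:     {xdr_bandwidth_gbs(3.2, 128):.1f} GB/s")  # ~51.2
```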

1TB/s would be so awesome! You'd have no memory bottlenecks on the rendering, which would mean maximum IQ and effects. I doubt it'll happen though; a bandwidth-unlimited system is too much of a pipe-dream. Whatever tech they might have, I don't expect it to be commercially viable, though of course I hope it is!
 
I'm pretty confident (and so is the article) that they will meet their targets. However, I'm not so sure how high we'd expect the memory bandwidth of the PS4 (if it did use this tech) to be, for cost reasons.

Maybe they'd opt for only 8 DRAMs instead of 16? (for half the bandwidth)

500GB/s to main memory would be quite useful for a lot of things, I'm sure.
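
As a back-of-the-envelope, purely under the assumption that the 1TB/s target is split evenly across 16 DRAMs (which the 8-vs-16 suggestion above implies):

```python
# Back-of-the-envelope for the DRAM-count trade-off above. Assumption:
# the 1 TB/s TBI target is split evenly across 16 devices (~64 GB/s each),
# so halving the device count halves the aggregate bandwidth.

TARGET_GBS = 1024        # 1 TB/s expressed in GB/s
FULL_COUNT = 16          # assumed DRAM count at the full target

per_device = TARGET_GBS / FULL_COUNT  # ~64 GB/s per DRAM

for devices in (16, 8, 4):
    print(f"{devices:2d} DRAMs -> {devices * per_device:6.0f} GB/s")
# 16 -> 1024 GB/s, 8 -> 512 GB/s (the ~500 GB/s above), 4 -> 256 GB/s
```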
 
That might be doable. We ought to be looking at 200+ GB/s using standard technologies, so somewhere between there and loads more BW at high cost ought to be a good compromise. If you compare the cost of faster main RAM with large eDRAM, would it come out as an economical alternative (ignoring implementation issues)? I suppose we won't know until nearer the time. To me, 500GB/s sounds like an easier full-function choice.
 
Don't forget that computational speed increases faster than memory bandwidth most of the time for parallel computation. If you want to feed these computation units, there's a possibility that you'll be even more bandwidth starved than today.

Longer pixel shaders aren't necessarily going to give you better graphics (and IMO they won't). For example, imagine KZ2 with 1000 local lights spread amongst the surfaces to achieve a dynamic GI approximation (like instant radiosity). BW per op doesn't go down.

We already have Xenos consuming 256 GB/s in the worst case, and devs probably wouldn't mind an order of magnitude faster fillrate for photorealistic grass/trees/smoke at 1080p. EDRAM cost only scales with capacity, and BW is almost free.
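
For reference, one common way to account for that 256 GB/s worst-case figure; a sketch assuming Xenos's 500MHz daughter die touching colour and Z, read and write, for 8 pixels at 4 samples each per clock:

```python
# One way to account for Xenos's quoted 256 GB/s eDRAM figure. Assumptions:
# 500 MHz daughter die, 8 pixels/clock at 4xMSAA, 32-bit colour and 32-bit Z
# each read and written per sample.

CLOCK_HZ = 500e6
SAMPLES_PER_CLOCK = 8 * 4            # 8 pixels x 4 MSAA samples
BYTES_PER_SAMPLE = (4 + 4) * 2       # (colour + Z) x (read + write)

bandwidth_gbs = CLOCK_HZ * SAMPLES_PER_CLOCK * BYTES_PER_SAMPLE / 1e9
print(f"{bandwidth_gbs:.0f} GB/s")   # -> 256 GB/s
```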
 
If this tech provides 1TB/s at an equivalent price to the standard RAM of the day, how do you think it'll compare price-wise with eDRAM sufficient for 1080p + 4xMSAA at 16-bit HDR? One of the problems with Xenos is that the trade-off for all that BW is the need for tiling, which hasn't been all roses. If enough eDRAM for no tiling is possible on die then it's a contender (I've no clue on prices!), but if you're needing the devs to tile again, and so limit their rendering choices versus no tiling, wouldn't you prefer the straight BW?
 
.. and for PS3 they count XDR+GDDR together.


Yes, the Rambus XDR memory in PS3 is only ~25GB/sec, the minimum announced bandwidth that (first-generation) XDR could have, not counting XDR2.

I hope Rambus reaches their 1 TB/sec target by 2010.

The PS3 was a smaller leap over PS2 than PS2 was over PS1. I hope PS4 is at least as far beyond PS3 as PS2 was over PS1, in bandwidth and overall graphics performance.


Also, if main memory bandwidth can be 1TB/sec (or somewhat more) by 2011-2012, I wonder how high eDRAM bandwidth could go. Tens of TB/sec?
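
On-die eDRAM bandwidth is essentially bus width times clock, so as an illustration only (these configurations are made up, not announced parts):

```python
# Illustration of how far on-die eDRAM bandwidth could scale: peak bandwidth
# is just bus width (in bytes) times clock. These configurations are made up
# for illustration; only the 4096-bit/500MHz line matches a real part (Xenos).

def edram_bw_gbs(bus_width_bits: int, clock_ghz: float) -> float:
    """Peak on-die bandwidth in GB/s."""
    return bus_width_bits / 8 * clock_ghz

for bits, ghz in [(4096, 0.5),    # Xenos-class daughter die: 256 GB/s
                  (16384, 1.0),   # ~2 TB/s
                  (65536, 2.0)]:  # ~16 TB/s, i.e. "tens of TB/s" territory
    print(f"{bits:5d}-bit @ {ghz:.1f} GHz -> {edram_bw_gbs(bits, ghz):7.0f} GB/s")
```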
 
If enough eDRAM for no tiling is possible on die then it's a contender (I've no clue on prices!), but if you're needing the devs to tile again, and so limit their rendering choices versus no tiling, wouldn't you prefer the straight BW?
Well, obviously we don't have enough information to make that call right now. I don't think tiling is a bad thing. Also, now that TSMC can do eDRAM, we wouldn't have to put it on a separate die, so it could be used for texturing as well.

The other side of the coin is the amount of RAM. If this exotic technology costs a lot more per Mbit, I would go for more slow RAM instead.

The last point to consider is that we may see diminishing returns next gen. If software and art gets good enough this gen, building a system with 20x the power of this gen may not be economically wise, since a competitor with a much cheaper product wouldn't show weaknesses as glaringly as Wii vs. 360/PS3.
 
Talking about these levels of power or bandwidth now, it almost seems like a solution looking for a problem. But I guess it is early days even in this current generation, and we're still waiting for devs to catch up to the current hardware IMO. It might be clearer how such power would be used in five years' time... and I think there probably needs to be a large R&D effort on the software side, on the part of the platform holders, to determine how any such substantial leap would manifest itself in a manner that is obvious to users.

It'll be interesting to see the approaches taken, that's for sure.
 
if you're needing the devs to tile again and so limit their rendering choices versus no tiling, wouldn't you prefer the straight BW?
I'd prefer whatever gives me the prettiest games from the good developers... they could integrate an 8MB scratchpad buffer, which can be used for tiling or simply as local storage for GPGPU, for next to no area cost, even if they use SRAM!

Why the hell would you design an architecture at that time, aimed at developers who know what they are doing, without scratchpad RAM? (This goes for both processors and GPUs.) Next to no cost, huge benefits for those who choose to exploit it.

The only way not having it would help developers is by not showing up how poor the work of most of them is.
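
To put a number on that 8MB suggestion, here's a quick sketch of how many tiles such a scratchpad would imply for a couple of framebuffer configurations (assuming colour and Z both live in the scratchpad, with no compression):

```python
# What an 8 MB scratchpad implies for tiling. Assumptions: colour and Z are
# both kept in the scratchpad per MSAA sample, with no framebuffer compression.

import math

SCRATCHPAD_BYTES = 8 * 1024 * 1024

def tiles_needed(width: int, height: int, msaa: int, bytes_per_sample: int) -> int:
    """Minimum tile count to fit the framebuffer through the scratchpad."""
    framebuffer_bytes = width * height * msaa * bytes_per_sample
    return math.ceil(framebuffer_bytes / SCRATCHPAD_BYTES)

print(tiles_needed(1920, 1080, 4, 8))   # 32-bit colour + 32-bit Z: 8 tiles
print(tiles_needed(1920, 1080, 4, 12))  # FP16 colour + 32-bit Z: 12 tiles
```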
 
Yeah, but I didn't ask how much eDRAM could be put on a chip (besides, 32MB of eDRAM on a chip was done in 2000 with the GS I-32 for GSCube); what I was asking was how much BANDWIDTH could be achieved.

Had SCE and Toshiba outfitted the BE or a Visualizer-type GPU with eDRAM, they would have hit, at minimum, a TB/s.
 
Well, my post was not a reply to yours, it was to Shifty's. ;) It's just what kind of technology is available at the forefront of eDRAM chip technology for game consoles in 2009. Another important characteristic of NEC's UX8GD is its operating clock speed: up to 800MHz. So that's your limit; if you want a chip at 40nm running around 800MHz, you are pretty much capped at 32MB of eDRAM.

For 1080p + 4xMSAA at 16-bit HDR:

16-bit-per-channel HDR colour (64 bits) + Z (32 bits) = 96 bits = 12 bytes per sample

1920 * 1080 * 12 bytes * 4 samples = 99,532,800 bytes

so it requires about 95MB of eDRAM.
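
The same calculation, generalised (still assuming colour and Z are stored per sample, with no framebuffer compression):

```python
# The framebuffer-size calculation above, generalised. Assumption: colour
# and Z are stored per MSAA sample, with no framebuffer compression.

def framebuffer_mb(width: int, height: int, msaa: int,
                   colour_bits: int, z_bits: int = 32) -> float:
    """Framebuffer size in MB for the given resolution, MSAA level and formats."""
    bytes_per_sample = (colour_bits + z_bits) // 8
    return width * height * msaa * bytes_per_sample / (1024 * 1024)

print(f"{framebuffer_mb(1920, 1080, 4, 64):.1f} MB")  # FP16 HDR + Z: ~94.9 MB
print(f"{framebuffer_mb(1920, 1080, 4, 32):.1f} MB")  # 32-bit colour + Z: ~63.3 MB
print(f"{framebuffer_mb(1280, 720, 4, 32):.1f} MB")   # 720p Xenos-style: ~28.1 MB
```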
 