Xbox 360 eDRAM

Dave Baumann

Just found out that the eDRAM bandwidth between the shader core and eDRAM core is 256GB/s, real bandwidth (not extrapolated) - the interlink between the two is running at 2GHz.
 
:oops: Thanks Dave. So what is the effective bandwidth?

More importantly, with this insane bandwidth what eyecandy can we expect? I read on HardOCP that it has 192 floating-point units in the eDRAM. I guess I will wait for Jaws to see the thread ;)

I am just totally confused by MS at this point. R500/C1/Xenos is sounding killer, yet MS has hardly said a word about it. Same goes for some of the games: we already heard PDZ was killer according to IGN, and we are now hearing the CoD trailer sucks compared to the actual gameplay.
 
therealskywolf said:
This is the part about the eDRAM bandwidth, from the Anand article.
But keep in mind that Anand is a douche.
Remember the 256GB/s bandwidth figure from earlier? It turns out that that's not how much bandwidth is between the parent and daughter die, but rather the bandwidth available to this array of 192 floating point units on the daughter die itself. Clever use of words, no?
 
Anand says it's the bandwidth between the eDRAM's own logic and memory, not between the GPU and the eDRAM.

edit - Vysez beat me to it.
 
DaveBaumann said:
Just found out that the eDRAM bandwidth between the shader core and eDRAM core is 256GB/s, real bandwidth (not extrapolated) - the interlink between the two is running at 2GHz.


256 GB/s / 2 GHz = 128 bytes transferred each clock = a 1024-bit bus interlink. Umm, no, this doesn't sound right, Dave.

Or do you mean gigabits per second? That would mean it's a 128-bit bus @ 2 GHz, and the real bandwidth is 32 GB/s.

How in the hell do they have a 1024-bit interconnect? I can't see it as being possible. It has to be effective bandwidth, meaning the real signalling is much less, but the amount of data being transferred works out to 256 GB/s.
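
To make the arithmetic explicit, here's a quick back-of-envelope sketch in Python (nothing assumed beyond the quoted figures; the helper name is just mine):

# Back-of-envelope: what bus width a quoted bandwidth implies at a given clock.
def implied_bus_width_bits(bandwidth_bytes_per_sec, clock_hz):
    bytes_per_clock = bandwidth_bytes_per_sec / clock_hz
    return bytes_per_clock * 8  # bits moved per clock = bus width

GHZ = 1e9
GBYTE = 1e9  # decimal gigabyte, as marketing figures usually are

# Reading Dave's figure as 256 gigaBYTES/s on a 2 GHz interlink:
print(implied_bus_width_bits(256 * GBYTE, 2 * GHZ))      # -> 1024.0 bits

# Reading it as 256 gigaBITS/s instead (i.e. 32 GB/s real):
print(implied_bus_width_bits(256 * GBYTE / 8, 2 * GHZ))  # -> 128.0 bits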
 
Brimstone said:
Both consoles are built on a 90nm process, and thus ATI's GPU is also built on a 90nm process at TSMC. ATI isn't talking transistor counts just yet, but given that the chip has a full 10MB of DRAM on it, we'd expect the chip to be fairly large.

I thought it was NEC fabbing it on a 90 nm process.

It is; they got it wrong.
 
thatdude90210 said:
therealskywolf said:

This is probably not the kind of quotes Sony wants to see: "In fact, NVIDIA stated that by the time PS3 ships there will be a more powerful GPU available on the desktop."

In terms of raw performance, that's true of all the console GPUs. The stuff coming out later this year/early next year on PC will match or exceed their raw power, even if they have different features/architectures. You have to understand, there's a big gap in transistor/dollar budget between a high-end PC card and a mass-volume console GPU.
 
Hey, can we stick to GB/s meaning gigabytes per second and Gb/s meaning gigabits per second?...

Jawed
 
Titanio said:
In terms of raw performance, that's true of all the console GPUs. The stuff coming out later this year/early next year on PC will match or exceed their raw power, even if they have different features/architectures. You have to understand, there's a big gap in transistor/dollar budget between a high-end PC card and a mass-volume console GPU.
Well supposedly R520 is slower than R500...

Jawed
 
In terms of raw performance, that's true of all the console GPUs. The stuff coming out later this year/early next year on PC will match or exceed their raw power, even if they have different features/architectures. You have to understand, there's a big gap in transistor/dollar budget between a high-end PC card and a mass-volume console GPU.

Right.

On the console side, they have to make a box that costs $500 and includes an optical drive, CPU, GPU, RAM and everything else.

In the add-in card market, they only have to provide a PCB, GPU and RAM, so the budget for those is bigger.
 
Jawed said:
Well supposedly R520 is slower than R500...

Jawed

And just imagine if the R520 is faster than the NVIDIA G70. Imagine the windfall that ATI & MS would have then.
 
512-bit bus @ 2 GHz = 128 GB/s real bandwidth, and with a minimum of 2xAA that gives a 256 GB/s effective bandwidth.
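
If that's the setup, the numbers do check out. A quick sketch (the 2x factor is just the assumed effective gain from handling 2xAA on the eDRAM die, not a confirmed figure):

bus_bits = 512
clock_hz = 2e9                            # 2 GHz
real_bw = (bus_bits / 8) * clock_hz       # 64 bytes/clock * 2 GHz = 128 GB/s real
effective_bw = real_bw * 2                # assumed 2:1 effective gain with 2xAA
print(real_bw / 1e9, effective_bw / 1e9)  # -> 128.0 256.0 (GB/s)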
 
DemoCoder said:
DaveBaumann said:
Just found out that the eDRAM bandwidth between the shader core and eDRAM core is 256GB/s, real bandwidth (not extrapolated) - the interlink between the two is running at 2GHz.


256 GB/s / 2 GHz = 128 bytes transferred each clock = a 1024-bit bus interlink. Umm, no, this doesn't sound right, Dave.

Or do you mean gigabits per second? That would mean it's a 128-bit bus @ 2 GHz, and the real bandwidth is 32 GB/s.

How in the hell do they have a 1024-bit interconnect? I can't see it as being possible. It has to be effective bandwidth, meaning the real signalling is much less, but the amount of data being transferred works out to 256 GB/s.


If they're gonna have a 1024-bit bus, then it'll have to be on-die with the R500, similar to the GS with its 2560-bit bus to 4MB of eDRAM, and not a separate eDRAM module with interconnects.

Otherwise that TRUE 256 GB/s figure is the 1024-bit bus on the eDRAM module itself, between its custom logic and the eDRAM, and NOT between the R500 and the eDRAM module, imo...

The actual interconnect between separate R500 and eDRAM ICs would most likely still be 32 GB/s read and 16 GB/s write...

Anyway, it was discussed somewhere here when I posted the patent,

http://www.beyond3d.com/forum/viewtopic.php?t=22260
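
For comparison, a quick sketch of the bus widths those read/write figures would imply, assuming (purely for illustration) the same 2 GHz clock as the quoted interlink:

clock_hz = 2e9  # assumed, just to mirror the quoted 2 GHz interlink

def width_bits(bw_gb_per_sec, clock):
    return bw_gb_per_sec * 1e9 / clock * 8  # bytes per clock -> bits

print(width_bits(32, clock_hz))  # read:  128.0 bits
print(width_bits(16, clock_hz))  # write:  64.0 bits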
 
I'm going to double-check this. I got this from a press conference they just held - I specifically asked if they were separate die (knowing the answer already) and then asked what the bandwidth was between them.
 