The most detailed tech information on the Xbox 360 yet

Rockster said:
Who decides what bandwidth should be counted?

The person with an agenda to push at a guess :D

Sooner or later it's going to have to sink in that these number games are all flawed, PR-agenda-filled pieces of bullcrap, sewn with just enough "truth" for plausibility.

Or to paraphrase:

"Lies, Damned Lies, And Performance Metrics"
 


::cough::

While he does touch on some meaningful observations, he totally discredits what he is saying with fubar stuff like that. I mean, WTF?! The eDRAM bandwidth cannot just be arbitrarily compared!! Yes, the eDRAM bandwidth has some advantages, but the fact that CELL has 256MB of low-latency XDR to itself while the RSX has as much bandwidth as the entire Xbox 360 UMA puts things into perspective. Different designs, different philosophies.

Every time he makes a good point he totally screws up and misinterprets stuff. Like he makes an excellent point about the general processing power of the xCPU (which has really been ignored and downplayed), but then makes the idiotic statement that CELL is ill-suited for many of the tasks. Some, yes... but some code can be adapted, that which cannot can run on the PPE, and the intense stuff like PHYSICS will SCREAM on the SPEs.

Too bad the few good tidbits are totally overblown and twisted. There are outright ERRORS about CELL (a lot of misinformation about the SPEs, which DO have their own local store, 256k each)--just sad. Typical of this time of year.

But of interest:

With the number of transistors being slightly larger on the Xbox 360 GPU (330M) it’s not surprising that the total programmable GFLOPs number is very close.

I assume that is with eDRAM. Pretty impressive number... and therefore $$$.
 
Vaan, please don't post huge retarded smilies like that. You can see no-one else is doing it!


The one interesting thing that I could see on there was the Xenon GPU transistor count. It's somewhat larger than the unsourced, unconfirmed 150 million figure that's been doing the rounds on the internet.

[edit]I can see now that quest55720 already mentioned this![/edit]
 
Rockster said:
Who decides what bandwidth should be counted?

Reality.

Reality tells you that even if a machine has 512GB/s of bandwidth, if it is to a 1MB memory segment, then comparing it against a 512MB memory block with 100GB/s of bandwidth is skewed.

Especially when they serve different purposes.

That is why you must look at the machines HOLISTICALLY.

One extreme number in an area does not make a machine more powerful. e.g. Cell's 218GFLOPs or the xCPU's 6 hardware threads--these things alone do not make the machine more powerful. What memory architecture do they have? What GPU? What development tools?

And most significantly, where are the bottlenecks? Great example: the N64 was really quite powerful, but it had a 4k texture limit :oops: Throw in the decision to force the blur filter on all games and you go from a much more powerful machine to a crippled one.

Again, when comparing specs you need to be holistic in your approach.

As for me, I want to know more about WHO is making games and what they are actually able to get OUT of the machine. The end product is more important than some PR numbers.
 
Haha, that's the silliest article I've ever seen. Good god, the morons commenting on and believing it on that site as well. Well, laughing is supposed to make you live longer, so I guess I have to thank Major Nelson in that regard. :D
 
Geeforcer said:
Well let's see.

Even if the 256GB/s shader core->mem core figure is correct (Dave, did you have a chance to double-check that?), you don't just add up the two and pass them off as "total system bandwidth"

For example, with simple color rendering and Z testing at 550 MHz the frame buffer alone requires 52.8 GB/s at 8 pixels per clock.

What? Either I am misinterpreting what he is saying, or this is way off.

Nope, that all checks out. 8 pixels per clock is 64 bytes, 8 Z-tests per clock is 32 bytes, so that's 550MHz * 96 bytes = 52.8GB/s. Xbox 360 can do this (though at 500MHz, i.e. 48GB/s).
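The arithmetic above can be sketched directly. The byte counts imply 32-bit color with read-modify-write (8 B per pixel) and a 32-bit Z read per test (4 B); those per-pixel assumptions are inferred from the 64 + 32 byte figures, not stated in the article:

```python
# Hedged sketch of the frame-buffer bandwidth sum quoted above.
# Assumes 8 B/pixel of color traffic (4 B read + 4 B write) and
# 4 B per Z test, matching the 64-byte and 32-byte figures.

def framebuffer_gbps(clock_mhz, pixels_per_clock=8):
    color_bytes = pixels_per_clock * 8  # 64 B/clock of color traffic
    z_bytes = pixels_per_clock * 4      # 32 B/clock of Z traffic
    return (color_bytes + z_bytes) * clock_mhz * 1e6 / 1e9

print(framebuffer_gbps(550))  # 52.8 GB/s, the article's figure
print(framebuffer_gbps(500))  # 48.0 GB/s at the 360 GPU's actual clock
```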

Jawed
 
It should be noted that the article was not written by Major Nelson, and should be viewed as a document from Microsoft with all the associated PR/BS/FUD in place. That said, they can be just as legit as any numbers Sony has released.
 
Rockster said:
It should be noted that the article was not written by Major Nelson, and should be viewed as a document from Microsoft with all the associated PR/BS/FUD in place. That said, they can be just as legit as any numbers Sony has released.

The total memory bandwidth chart is a joke. Which numbers from Sony even begin to compare with that lunacy?
 
Rockster said:
That said, they can be just as legit as any numbers Sony has released.

I'm not sure about that....

Sony has never released such FUDdy comparisons.

Some of the numbers disagree with MS's own spec (or appear to do so).

And he's coming up with numbers for PS3 seemingly out of thin air, and making comparisons between very different things (or at least things which aren't known to be the same).

I shan't hear any more complaints about Sony's "numbers" after this little trick by MS ;) It's more clearly misleading than anything I've seen from Sony.
 
Titanio said:
Rockster said:
That said, they can be just as legit as any numbers Sony has released.

I'm not sure about that....

Sony has never released such FUDdy comparisons.

Some of the numbers disagree with MS's own spec (or appear to do so).

And he's coming up with numbers for PS3 seemingly out of thin air, and making comparisons between very different things (or at least things which aren't known to be the same).

I shan't hear any more complaints about Sony's "numbers" after this little trick by MS ;) It's more clearly misleading than anything I've seen from Sony.

lest u be Major Nelson'ed

I think up to this point, there have been good discussions on both GPUs without wasting space talking about PR bs.
 
To be clear: it is fair to compare and show the world that the PS3 has 48GB/sec of system bandwidth vs. the Xbox 360's 22.4GB/sec and just ignore the eDRAM bandwidth altogether, when clearly some of the PS3's memory bandwidth is being used by those functions which are handled by eDRAM in the 360's case? Is size what makes it unfair, even if the 360 design spec calls for a 720p frame buffer and the eDRAM was sized to accommodate exactly that? And is it fair to claim 136 shader ops/sec vs. 92 if you aren't counting the same types of ops? I want to make sure I understand the difference.

I'm not defending the Nelson article. I don't like the tone or the opinions, and would have preferred a straightforward presentation of the facts as they see them. The architectures are so different they should never have been compared the way they have been. Sony did it to demonstrate their console's superiority. I want to know where the line is. Is it because Microsoft is now presenting some new numbers after the fact? If Microsoft had used these new numbers from the beginning, do you not think Sony would have used some other calculations?
 
If the embedded memory bandwidth is going to be counted wouldn't it be fair to count memory bandwidth of the local store memory in the CELL SPEs?

I believe the local stores run 128-bit at 3.2GHz. There are seven of them, so that would total 358.4 GB/sec. So total system bandwidth for the PS3 would be 406.4 GB/sec compared to 278.4 GB/sec for the X360.
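The tally above works out as follows, assuming each SPE local store moves one 128-bit word per 3.2 GHz cycle (an illustrative simplification; the actual local-store port arrangement may differ):

```python
# Sketch of the local-store bandwidth tally quoted above.
# Assumption: one 128-bit access per cycle per SPE, seven SPEs.
LS_BYTES_PER_CYCLE = 128 // 8   # 16 B from a 128-bit port
CLOCK_GHZ = 3.2
SPES = 7

per_spe = LS_BYTES_PER_CYCLE * CLOCK_GHZ   # 51.2 GB/s per SPE
all_spes = per_spe * SPES                  # 358.4 GB/s aggregate
system = all_spes + 48.0                   # plus the 48 GB/s external figure
print(per_spe, all_spes, system)
```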
 
Bowie said:
If the embedded memory bandwidth is going to be counted wouldn't it be fair to count memory bandwidth of the local store memory in the CELL SPEs?

I believe the local stores run at 128-bit at 3.2Ghz. There's seven of them, so that would total 358.4 GB/sec. So total system bandwidth for the PS3 would be 406.4 GB/sec compared to 278.4 GB/sec for X360.

Are we going to begin counting the L1 and L2 cache on the xCPU then :rolleyes:

The eDRAM bandwidth is very relevant, but only in a limited way (it definitely is not total system bandwidth). The eDRAM alleviates the framebuffer bandwidth. That means the bandwidth the frame buffer would usually eat away from the UMA is now isolated.

What does this mean? It means we cannot compare PS3-to-Xbox bandwidth in a 1-to-1 way. They are DIFFERENT DESIGNS.

Better to just talk about the PROS and CONS.
 
Bowie said:
If the embedded memory bandwidth is going to be counted wouldn't it be fair to count memory bandwidth of the local store memory in the CELL SPEs?

I believe the local stores run at 128-bit at 3.2Ghz. There's seven of them, so that would total 358.4 GB/sec. So total system bandwidth for the PS3 would be 406.4 GB/sec compared to 278.4 GB/sec for X360.

That's internal bandwidth. The eDRAM chip is external, as it is a daughter chip to the GPU. AFAIK, all the numbers he added up were for external bandwidth. But still, it doesn't mean anything.
 
a688 said:
That's internal bandwidth. The eDRAM chip is external, as it is a daughter chip to the GPU. AFAIK, all the numbers he added up were for external bandwidth. But still, it doesn't mean anything.

The external bandwidth of the eDRAM chip is not going to be close to 256GB/s, as that would imply a 2,048-bit (or thereabouts) wide external bus.
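The implied width can be divided out directly. Assuming the GPU's 500 MHz clock (a figure from earlier in the thread, not the article), 256 GB/s would need a 4,096-bit bus at one transfer per clock, or roughly the quoted 2,048 bits with double-data-rate signaling:

```python
# Rough check of the bus-width claim above. Assumes a 500 MHz clock;
# the second figure further assumes DDR (two transfers per clock).
bandwidth = 256e9   # bytes/s
clock = 500e6       # Hz

width_bits_sdr = bandwidth / clock * 8   # bits, one transfer per clock
width_bits_ddr = width_bits_sdr / 2      # bits, DDR signaling
print(width_bits_sdr, width_bits_ddr)    # 4096.0 2048.0
```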
 
Acert93 said:
Bowie said:
If the embedded memory bandwidth is going to be counted wouldn't it be fair to count memory bandwidth of the local store memory in the CELL SPEs?

I believe the local stores run at 128-bit at 3.2Ghz. There's seven of them, so that would total 358.4 GB/sec. So total system bandwidth for the PS3 would be 406.4 GB/sec compared to 278.4 GB/sec for X360.

Are we going to begin counting the L1 and L2 cache on the xCPU then :rolleyes:

We could throw in the L1/L2 caches on the xCPU and the Cell, but that would actually increase the Cell's lead in internal bandwidth. :)

edit: I should back away from that statement before anybody holds me to it, because I don't know how many L1 caches are on the xCPU. Probably one for each core, which would help the xCPU close or even surpass the gap, if we want to be serious about this numbers game.
 
Yeah, I smelt that bull*bleep* a mile away. Suckers playing up the 'power' advantage again. And folks make it out to seem Sony is sooooooooooo dirty. Sony and Revolution, man. They're getting my money....
 
What puzzles me..

I think the most honest comparison is found by taking the original leaked document..

That one had the path from the GPU to the XCPU, the path from the GPU to the GDDR RAM, and a 32GB/s path from the GPU to the embedded RAM.
It's still an impressive total.

However, if there is 256GB/s of bandwidth because of the subsamples, then that implies each subsample takes 8 bytes (4x memory requirements), which is pretty heavy for the 10MB RAM size.
There have been a few topics about MSAA compression methods recently--and if anything like that is used, its net effect will be to reduce the actual bandwidth utilised.
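As a back-of-the-envelope cross-check, the quoted 256GB/s figure can be divided down, assuming the 500 MHz clock, 8 pixels per clock, and 4x multisampling discussed in this thread (none of which the article itself confirms). It comes out to 16 B per subsample rather than 8, which would correspond to read-modify-write of both color and Z being counted:

```python
# Purely arithmetic decomposition of the quoted 256 GB/s figure.
# Assumptions (from the thread, not the article): 500 MHz clock,
# 8 pixels per clock, 4x multisampling.
bw = 256e9
clock = 500e6
pixels = 8
samples = 4

bytes_per_clock = bw / clock                  # 512 B per clock
bytes_per_pixel = bytes_per_clock / pixels    # 64 B per pixel
bytes_per_sample = bytes_per_pixel / samples  # 16 B per subsample
print(bytes_per_clock, bytes_per_pixel, bytes_per_sample)
```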
 