PS3 GPU 2x more powerful than X360 GPU?

If you bother to read the document I linked to then you will see quite clearly that this is precisely how NVidia counts "shader ops".

I'm simply comparing NVidia's calculation technique with the figure quoted in the leak.

R500's shader pool architecture is hugely different though... Peak shader op capabilities, even if they're defined the same way across architectures, are pretty meaningless.

I'm not claiming these figures are useful. Look at the disparity in X850XTPE between 43.2 and 69.1; ATI's really missing a trick there - NVidia's counting yields roughly 60% more shader ops.

Jawed
 
london-boy said:
For a chip to be 2X more powerful than one coming out less than 6 months before, there would need to be some serious extra cash poured into the project

Or Microsoft could be using a gpu that is a topped out extension of what we currently have, and Sony using a next gen part.
 
But dot products are easy to count, and from that perspective RSX cannot simply be a 6800 Ultra scaled to 24 pipes with no shader architecture changes.
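A quick sketch of that argument, using the per-clock dot4 figures quoted elsewhere in this thread (22 for NV40, 92 for G70/RSX per NVidia's numbers); the vertex-unit count for the hypothetical scaled-up part is my guess:

```python
# Dot4 throughput per clock, per the figures quoted in this thread.
nv40_dot4 = 16 + 6    # 16 pixel pipes + 6 vertex units = 22

# A naive NV40 scaled to 24 pixel pipes
# (the 8 vertex units here are a guess, just to be generous):
scaled_dot4 = 24 + 8  # = 32

# NVidia's quoted G70/RSX figure:
g70_dot4 = 92

# How far short the simple scale-up falls:
print(g70_dot4 / scaled_dot4)  # → 2.875
```

Even with a generous vertex-unit guess, a straight 24-pipe scale-up comes out almost 3x short of the quoted figure, which is the point being made: something in the shader architecture has to change.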
 
Jawed said:
If you bother to read the document I linked to then you will see quite clearly that this is precisely how NVidia counts "shader ops".
I read it
I'm simply comparing NVidia's calculation technique with the figure quoted in the leak.
What I'm trying to say is that counting shader ops 'that way' has no factual meaning. One time they call a shader op an fmadd op, then a mul op, then a reciprocal op, then a bias op, then a complex op (sin/cos).
I can generate any kind of numbers that way, just renaming and grouping different operations.
Once we know what the G70 architecture looks like we might even try to use those 'shader ops'. If your shader op figures are 'right', then explain to me why NV40 does 22 dot4 (16 PS + 6 VS) per cycle while G70 does 92 dot4 per cycle (according to NVidia's numbers), more than a fourfold increase.
Moreover, where did you get the RSX 105.6 Gigaop/s figure? I haven't watched the whole conference, but IIRC NVidia quoted 136 shader ops per cycle * 550 MHz = 74.8 Gigaop/s (whatever that means..)
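For what it's worth, the arithmetic behind both figures is easy to check; a quick sketch (working backwards from the leaked 105.6 number assumes it was also computed at 550 MHz, which is itself a guess):

```python
# Peak "shader op" marketing figures: ops per clock * core clock.

def gigaops(ops_per_clock, clock_mhz):
    """Ops/clock at a given core clock, expressed in Gigaops/s."""
    return ops_per_clock * clock_mhz * 1e6 / 1e9

# NVidia's conference figure for RSX: 136 shader ops/cycle at 550 MHz.
print(gigaops(136, 550))        # → 74.8

# Working backwards from the leaked 105.6 Gigaop/s figure,
# assuming the same 550 MHz clock (a guess):
implied_ops_per_clock = 105.6e9 / 550e6
print(implied_ops_per_clock)    # → 192.0
```

So if the leak assumed 550 MHz too, it was counting 192 ops per clock rather than 136 - i.e. a different (more generous) counting convention, not a different chip.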
 
Fox5 said:
Or Microsoft could be using a gpu that is a topped out extension of what we currently have, and Sony using a next gen part.

Have you followed the R500? It is not an extension but a totally new concept and design. They are not throwing an X800 in there, or even a next-gen G70/R520.

It is a console part designed to be a console part. The performance of this new part is a big unknown and that is what is being discussed.

As discussed before, the R500 looks to be over 2x more powerful in shader performance than the R420. And the R420 was already pretty good in this area. If that holds true (no one knows yet because this is all on paper), a chip 2x+ faster than an R420 compares nicely with an RSX that is 2.5x more powerful than an NV40.

A lot will come down to real-world performance, features, and, on a game-by-game basis, which features and performance are used. Obviously the memory structure of each will have a big impact on motion blur, HDR, and other features as well.
 
Someone had posted that the R500 would be like a 48-pipeline GFX chip in modern games.

If RSX does have 24, and only 50 MHz more, how can it be more powerful? :?: :?: :?:
 
pc999 said:
Someone had posted that the R500 would be like a 48-pipeline GFX chip in modern games.

If RSX does have 24, and only 50 MHz more, how can it be more powerful? :?: :?: :?:

48 pipes? Where'd you hear that one?! Last time I checked it was 48 ALUs, not pipes.
 
Jawed said:
With regard to XB360's system memory bus of 22.4GB/s, X700XT is the most similar conventional ATI part, with 8 pixel pipes:

Code:
GPU     Core   Memory   Fill-rate   Bandwidth
X700XT  475    525      3800MP/s    16.8GB/s
XB360   500    700      4000MP/s    22.4GB/s

Plainly while X700XT would have to share that 16.8GB/s between texturing and raster output, XB360 has more bandwidth for approximately the same fill-rate plus a subset of the raster output workload* (AA sample data read/write). XB360's main blend/Z-test/AA workload leaves the 22.4GB/s system RAM bandwidth untouched.

And:

Code:
GPU     Core   Memory   Fill-rate   Bandwidth
6600GT  500    500      2000MP/s    16GB/s

If we look at the performance we can see that a 6600GT outputs around the same number of pixels per cycle as the X700XT in spite of having half the fill-rate; the 6600GT also requires around the same amount of bandwidth as the X700 to reach this performance level.

The R500 has even more units per pixel pipe than the NV43 and will most likely waste even less fill-rate. I actually think it is conservative to say that the R500 will require at least three times as much bandwidth as the X700 to reach its potential. At 720p the eDRAM buffer should be big enough to make sure that bandwidth is not a problem in most situations.
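One way to make the fill-rate/bandwidth comparison concrete is to divide each chip's bandwidth by its peak fill-rate, giving the bytes of external traffic available per pixel drawn. A rough sketch using the figures from the tables above (it ignores texture caching, compression, and real workloads never saturating both peaks at once):

```python
# Bytes of external bandwidth available per pixel drawn:
# bandwidth / peak fill-rate. Figures from the tables above.
# Note XB360's colour/Z/AA traffic goes to eDRAM, not system RAM,
# so its figure below is mostly free for texturing.

chips = {
    # name: (peak fill-rate in MP/s, bandwidth in GB/s)
    "X700XT": (3800, 16.8),
    "XB360":  (4000, 22.4),
    "6600GT": (2000, 16.0),
}

for name, (fill_mp, bw_gb) in chips.items():
    bytes_per_pixel = bw_gb * 1e9 / (fill_mp * 1e6)
    print(f"{name}: {bytes_per_pixel:.2f} bytes/pixel")
```

That works out to roughly 4.4, 5.6 and 8.0 bytes per pixel respectively, which is the shape of the argument above: XB360 has a bigger per-pixel budget than X700XT even before you account for the eDRAM taking the blend/Z traffic off the system bus.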
 
ninelven said:
For a relative study wrt shader ops, Anand lists 136/clock for RSX and 53/clock for GF6 which is a 2.56x increase. Is this significant or irrelevant due to arch. differences?

Sure would like to know how he gets 53/clock. If I were a marketing guy I would claim up to 86 ops/clock for a GF6.
 
london-boy said:
pc999 said:
Someone had posted that the R500 would be like a 48-pipeline GFX chip in modern games.

If RSX does have 24, and only 50 MHz more, how can it be more powerful? :?: :?: :?:

48 pipes? Where'd you hear that one?! Last time I checked it was 48 ALUs, not pipes.

It depends on how you define pipelines - AFAIK the R500 can actually process 48 pixels at a time internally, but can only output eight pixels per clock. It is not 8 pipes with 6 ALUs each.
 
Tim said:
london-boy said:
pc999 said:
Someone had posted that the R500 would be like a 48-pipeline GFX chip in modern games.

If RSX does have 24, and only 50 MHz more, how can it be more powerful? :?: :?: :?:

48 pipes? Where'd you hear that one?! Last time I checked it was 48 ALUs, not pipes.

It depends on how you define pipelines - AFAIK the R500 can actually process 48 pixels at a time internally, but can only output eight pixels per clock. It is not 8 pipes with 6 ALUs each.

I think it's up to 64 threads (pixel or vertex).
 
tEd said:
I think it's up to 64 threads (pixel or vertex).

With 64 threads R500 could actually perform pretty close to its 96-ops-per-cycle peak - the R500 seems like a very efficient chip. The PS3 GPU seems to have some raw power advantage (40% in shaders, and hundreds of percent in fill-rate), so it's going to be very interesting to see how it plays out in real life.

I am really looking forward to detailed specs of these chips; there are a lot of question marks about the capabilities - things that could make big differences in real-life performance.
 
Who wants to sign a petition against this shader op thing?! :(
R500 doesn't do 96 ops per cycle, it does 480 floating ops per cycle.. :devilish:
EDIT: and RSX should do 644 floating ops per cycle :)
 
The 53bn dot products figure - is this for the CPU, or GPU, or combined? I believe MS's 9bn figure was just for the CPU (?)
 
nAo said:
R500 doesn't do 96 ops per cycle, it does 480 floating ops per cycle.. :devilish:

It does 96 5D operations. Even if an operation works on more than one component it is still one operation in my book.
 
Tim said:
It does 96 5D operations. Even if an operation works on more than one component it is still one operation in my book.
Shader ops are biting you back, I told ya! ;) (don't worry, I'm kidding..)
R500 has 48 ALUs; each ALU performs a vec4 and a scalar operation, so it does 48 5D operations in my book, not 96 ;)
Since ATI and MS count a vec4 op as a shader op and a scalar op as a shader op (those are two very different things from a computational point of view), they say it does 96 (48+48) shader ops per clock cycle.
With shader ops one can't even compare architectures from the same vendor; moreover, shader ops don't tell you how much work the GPU can do for real, so why use them?
Maybe there is a good reason, but I can't think of one at the moment (except inflating marketing figures).
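All three headline numbers in this exchange can be derived from the same 48 ALUs, just by changing the counting convention. A sketch of that; the x2 in the flop count assumes each lane does a MADD (multiply + add), which is the only way I can see to reach the 480 figure:

```python
# Same hardware, three different "ops per cycle" numbers, depending
# on the counting convention. R500: 48 ALUs, each co-issuing a
# vec4 op and a scalar op per clock.

ALUS = 48
VEC_WIDTH = 4   # vec4 unit
SCALAR = 1      # co-issued scalar unit

# Convention 1: one ALU issue slot = one "5D operation"
five_d_ops = ALUS
print(five_d_ops)   # → 48

# Convention 2: the vec4 op and the scalar op each count as a
# "shader op" (the ATI/MS marketing count)
shader_ops = ALUS * 2
print(shader_ops)   # → 96

# Convention 3: count per-component floating-point ops, with each
# lane assumed to do a MADD (multiply + add = 2 flops)
flops = ALUS * (VEC_WIDTH + SCALAR) * 2
print(flops)        # → 480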
 
Titanio said:
The 53bn dot products figure - is this for the CPU, or GPU, or combined? I believe MS's 9bn figure was just for the CPU (?)

Apparently that was for the CPU only, or at least that's what I gathered from the conference. It was one of the big selling points for the CPU. Now, whether it's real or not is another story.
 
After reading the whole thread, I come to one conclusion: even if I'm surprised by your knowledge, it all comes down to one thing - why didn't MS talk about it?

If MS felt that its GPU was as powerful as the RSX (cool name, by the way), wouldn't it have mentioned it? Or is the GPU really a new type of architecture that can't be compared by simple numbers against more traditional tech like the RSX?

It's hard to speculate; we know so little about the R500 in the Xbox 360... It sounds like it's a very customized chip, designed for a console environment and meant to take advantage of it.

And about the transistors: ATI didn't say anything about it, and neither did MS, so why are we assuming it was only 150M transistors?

Carry on, I like to read the things some of you say. :)
 