PS3 GPU 2x more powerful than X360 GPU?

This is nothing but the same pack of marketing lies from Nvidia and Sony that these two companies pull year after year. Now they are just doing it together at the same time.

That won't deter the uneducated masses from falling for it. This is Dreamcast vs. PS2 all over again, except in this case the Xbox 3 is likely a lot better than the PS3.

Which will make this a bigger travesty of cheating and lying to get ahead than before. Nothing Nvidia isn't totally comfortable with, though, as they've done it with practically every product release for the last seven years.
 
@ Acert93:

Wouldn't the Xbox be just as easily saturating its bandwidth, considering the 10MB of eDRAM leaves little space for textures and the GPU has to fetch every sample over the same 22GB/s bus the CPU will be accessing at the same time?

I don't know, but this scenario seems a lot more troublesome, since you effectively have the CPU and GPU fighting for access all the time.
 
Hellbinder said:
...except in this case the Xbox 3 is likely a lot better than the PS3.
:oops: The hyperbole used so far isn't beyond possibility. That the PS3 will be maybe 2x XB360 in overall performance? Perhaps. Perhaps not. But even if that's not the case, I can't see how on earth you can suggest that 'Xbox 3 is likely a lot better than the PS3'. They are, in the worst-case scenario for Sony, comparable in performance.
 
Hellbinder said:
except in this case the Xbox 3 is likely a lot better than the PS3.
I agree. But more seriously, remember that the PS3 is on 90nm, and CELL is quite nicely power-efficient (there was an article that explained that, IIRC).

Uttar
 
Khronus said:
I thought MS said the ATI GPU had 4x MSAA not 2x? (my bold)

IIRC...

I believe it does support 4X AA, but the requirement is that all titles will support 2X. This tells me that 2X is basically "for free", and 4X will come with some performance hit.

So some titles may get 4X, but all will have 2X.
 
Hellbinder said:
This is Dreamcast vs. PS2 all over again, except in this case the Xbox 3 is likely a lot better than the PS3.

Ummm... don't tell me you've actually fallen for the Xbox 360 naming marketing trick? ;) It's still Xbox 2 in my book...
 
I finally saw the Sony conference where they really emphasized HDR and 128-bit precision.

Does the X360 GPU also support HDR and 128-bit pixel precision?

How much will HDR and 128-bit precision affect performance?
 
The specifications seem underwhelming. The full 128-bit precision out to the framebuffer and the HDR they're touting don't leave AA a lot of room. Seems like games will only have a low level of anti-aliasing, like on X360... a shame, since even handheld technology moved on to 4xAA a long time ago.
 
Npl said:
@ Acert93:

Wouldn't the Xbox be just as easily saturating its bandwidth, considering the 10MB of eDRAM leaves little space for textures and the GPU has to fetch every sample over the same 22GB/s bus the CPU will be accessing at the same time?

You won't be putting textures in the framebuffer, though (at least that is my understanding--it is for output). Jaws, Jawed, one, DemoCoder, etc. can do the math better than I can on the bandwidth needs for HDR, AA, etc., but having a very fast pool for the frame buffer should save a fair bit of bandwidth to main memory. Maybe one of them could do some examples (of course, we still do not know exactly what the R500 can do... can it use the high bandwidth of the frame buffer and shuffle the result to main memory if the effects are beyond the scope of its size limitations?).

I think both systems are BW-limited AND memory-limited (compared to their CPU and GPU power). The CPUs alone do 100-200 GFLOPs, and if they are doing physics, real-time destructible environments, advanced AI, advanced audio, etc., they will eat up the bandwidth.
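To put some very rough numbers on the framebuffer side (a back-of-the-envelope sketch in Python; the 720p resolution, 32-bit colour + 32-bit Z, 4x MSAA, 60fps and the overdraw factor are all my assumptions, not figures from either spec):

Code:
# Rough framebuffer-traffic estimate -- every input here is an assumption
width, height, fps = 1280, 720, 60      # assumed 720p @ 60fps
bytes_per_sample = 4 + 4                # 32-bit colour + 32-bit Z
samples = 4                             # 4x MSAA multiplies the sample traffic
accesses = 6                            # ~3 passes of overdraw, each a read + a write

traffic_gb_s = width * height * samples * bytes_per_sample * accesses * fps / 1e9
print(f"~{traffic_gb_s:.1f} GB/s of colour/Z traffic")   # roughly 10-11 GB/s

On XB360 that traffic would hit the eDRAM instead of the 22.4GB/s system bus, which is exactly the saving being discussed here; on a conventional design it all comes out of main memory bandwidth.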
 
Lazy8s said:
The specifications seem underwhelming. The full 128-bit precision out to the framebuffer and the HDR they're touting don't leave AA a lot of room. Seems like games will only have a low level of anti-aliasing, like on X360... a shame, since even handheld technology moved on to 4xAA a long time ago.

It might support FP32 blending, but I doubt anybody would actually want to use it. Too little gain in quality over FP16 blending, and too little bandwidth for it to be fast enough.
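As a rough illustration of the cost (again my own assumed workload, nothing from the specs):

Code:
# Colour traffic for FP16 vs FP32 render targets under an assumed workload
width, height, fps = 1280, 720, 60      # assumed 720p @ 60fps
accesses = 4                            # assumed blend reads + writes per pixel per frame
for name, bytes_px in (("FP16 (64-bit) target", 8), ("FP32 (128-bit) target", 16)):
    gb_s = width * height * bytes_px * accesses * fps / 1e9
    print(f"{name}: ~{gb_s:.1f} GB/s")  # FP32 simply doubles the colour traffic

Same scene, double the framebuffer bandwidth, for a quality difference most people would struggle to see.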
 
With 150 million more transistors, I suppose the RSX should have more features than the R500, right? If you put all the bandwidth constraints/eDRAM aside, what other advantages will it have?
 
hugo said:
With 150 million more transistors, I suppose the RSX should have more features than the R500, right? If you put all the bandwidth constraints/eDRAM aside, what other advantages will it have?

RSX doesn't have 150 million more transistors. Unless somebody can actually give me some solid proof for that rumor.
 
I still think the current R500 transistor count is completely bogus. If they can design such a beast with 150M transistors (just for the main chip), how can an R420 have more transistors than an R500 while being vastly inferior in many areas?
 
With regard to XB360's system memory bus of 22.4GB/s, X700XT is the most similar conventional ATI part, with 8 pixel pipes:

Code:
GPU     Core(MHz)  Mem(MHz)  Fill-rate   Bandwidth
X700XT  475        525       3800MP/s    16.8GB/s
XB360   500        700       4000MP/s    22.4GB/s

Plainly, while the X700XT has to share its 16.8GB/s between texturing and raster output, the XB360 has more bandwidth for approximately the same fill-rate plus only a subset of the raster output workload* (AA sample data read/write). XB360's main blend/Z-test/AA workload leaves the 22.4GB/s of system RAM bandwidth untouched.

* I don't know how much bandwidth the AA sample data reads/writes against system RAM will use... it heavily depends on triangle count, size and overdraw.
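To put that in bytes per pixel (my own back-of-the-envelope using the table above; it lumps all external traffic together, so treat it as illustrative only):

Code:
# External bandwidth available per pixel of peak fill-rate, from the table above
parts = {"X700XT": (16.8e9, 3800e6), "XB360": (22.4e9, 4000e6)}
for name, (bandwidth, fillrate) in parts.items():
    print(f"{name}: {bandwidth / fillrate:.1f} bytes per pixel filled")
# X700XT's ~4.4 bytes/pixel has to cover textures + colour + Z + AA,
# while XB360's ~5.6 bytes/pixel goes almost entirely to texturing,
# since colour/Z/AA blending lives in the eDRAM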

Jawed
 
Hellbinder wrote:

This is nothing but the same pack of marketing lies from Nvidia and Sony that these two companies pull year after year. Now they are just doing it together at the same time.

That won't deter the uneducated masses from falling for it. This is Dreamcast vs. PS2 all over again, except in this case the Xbox 3 is likely a lot better than the PS3.

Which will make this a bigger travesty of cheating and lying to get ahead than before. Nothing Nvidia isn't totally comfortable with, though, as they've done it with practically every product release for the last seven years.

Hellbinder, please don't turn this into a flame thread; you can go to R3D instead if that is your intention.

I think this thread has had very good commentary and people are sticking to the topic. We all know about your hatred of nVidia; it's pretty silly, so if you are going to post more of the same, do it somewhere else.
 
nAo said:
I still think the current R500 transistor count is completely bogus. If they can design such a beast with 150M transistors (just for the main chip), how can an R420 have more transistors than an R500 while being vastly inferior in many areas?
Good question but if true it bodes well for the R6xx.
 
As I understand it (maybe, probably wrong :rolleyes:), the 150M transistors in the Xbox GPU do not include the eDRAM chip, which I guess is another 60M+ transistors :?:
 
Looking at:

Technical Brief: The GeForce 6 Series of GPUs Image Quality

(click on the middle document)

NVidia counts an op per component of an ALU. So in NV40, which has 2 ALUs configured as 3+1 and 2+2, NVidia specifies 8 ops per pixel per clock.

At 400MHz for a 6800 Ultra, that's 51.2G ops per second.

Assuming that RSX is 550MHz with 24 pipes and the same pixel pipeline architecture as NV40, that produces 105.6G ops per second (the figure stated at the PS3 conference last night).

ATI states that X850XTPE operates at 43.2G ops per second (5 ALUs per pipe, 16 pipes, 540MHz).

If X850XTPE ops were counted according to NVidia's style, then it would be 2 ALUs at 4 ops per ALU (each ALU is 3+1), which is 69.1G ops per second. NVidia's style doesn't include texture address calculation as a shader op.

The leak for XB360 claims 96G ops per second. It seems to me that the leak is counting ops in the same way that NVidia does, rather than how ATI does.
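For reference, the arithmetic spelled out (a quick sketch; the RSX pipe count and clock used here are still assumptions):

Code:
# G ops/s under the two counting styles discussed above
def gops(pipes, ops_per_pipe_per_clock, mhz):
    return pipes * ops_per_pipe_per_clock * mhz * 1e6 / 1e9

print(gops(16, 8, 400))   # 6800 Ultra, NVidia-style (2 ALUs, 4 components each): 51.2
print(gops(24, 8, 550))   # assumed RSX (24 pipes, NV40-style pipeline): 105.6
print(gops(16, 5, 540))   # X850XTPE, ATI-style (5 ALUs per pipe): 43.2
print(gops(16, 8, 540))   # X850XTPE recounted NVidia-style: 69.1

The XB360 leak's 96G would then line up with NVidia-style counting, as suggested above.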

So...

Jawed
 
Jawed, interesting. 105.6G and 96G are in the same ballpark. But how the architectures function in the real world will be VERY interesting, especially how efficient the US (unified shader) architecture is and how flexible it is.
 
Jawed said:
Assuming that RSX is 550MHz, 24 pipes, with the same pixel pipeline architecture as NV40, produces 105.6G ops per second (the figure stated in the PS3 conference last night).
I don't think this calculation is meaningful; we don't even know what these 'shader ops' are.
Moreover, the dot products per second quoted by nVidia are completely out of reach of a 24-pipeline NV40.
 