Please Clear This Up - What PC GPU Do the Xbox 360 & PS3 Use?

Oh, I think some of that advantage comes simply from the fact that the machine was only allowed to run at 640x480. If you set up a P3 733EB and a GF3/4 (NV2A is sort of like an NV25) and run at 640x480, you could run Far Cry on PC pretty well, and that would be more than Xbox 1 ever did. The bandwidth and dedicated RAM that the video card has to itself would probably beat up Xbox 1.

Sure, the devs can really tweak down to the hardware, but the result depends on how much missed potential there is in the hardware. NV2x was designed as a PC accelerator, and I think that means it was probably pretty well utilized on PC. It's not like a quirky EE+GS.

I used to have a rig sort of like that...
Athlon XP 2200+, 1GB RAM, GeForce 3, and it could run any game that was on both Xbox and PC (like Halo and Doom 3) at about the same fps, but at 800x600 instead of the 640x480 limit. (Though the Xbox probably had the power to spare to run at 800x600.)
Of course, Xbox exclusives likely would have outdone it, considering that both Halo and especially Doom 3 just treated the Xbox as if it was a low-spec PC. (Well, high-spec when Halo came out.)

Oh, and Far Cry on Xbox looked pretty good.

I agree, although you also have to consider the overhead of the OS/API and background processes as well. Accounting for this, you should probably be comparing it to something like a 1200MHz Athlon with 256MB of system RAM.

That seems a bit extreme on the CPU side, but perhaps a bit low on the RAM side. Windows and its APIs don't seem to eat up that much CPU time, but they do eat a lot of RAM. I'd say you'd probably want to go up to 512MB, and, accounting for the higher IPC, you could get away with something like an 800MHz Athlon.
 
Xbox 1's NV2A is significantly more powerful than the GeForce 3 Ti 500: at least twice the polygon/vertex performance per cycle, thanks to the twin vertex shaders in NV2A compared to the single VS in all GF3 cards. In addition, if I am not mistaken, NV2A has more pixel shaders than either GF3 or GF4, even though NV2A has less overall performance than any GF4 Ti.

The Xbox GPU had 4 pixel pipelines running at 233MHz; the GeForce4 Ti 4200-4600 had 4 pixel pipelines at 250-300MHz. The vertex shader count was 2 for all of them, including the Xbox GPU.
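Back-of-the-envelope on fillrate from those numbers, assuming one pixel per pipeline per clock (real throughput depends on texturing and blending):

NV2A:             4 pipelines x 233 MHz     ≈ 932 Mpixels/s
GF4 Ti 4200-4600: 4 pipelines x 250-300 MHz = 1000-1200 Mpixels/s

So per clock they're the same width; the Ti parts just clock higher.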
 

Maybe he was referring to the Xbox GPU's ability to do... shadow volume buffers, I believe, at twice the speed. The PC GPUs had this as well, but I think it was only exposed in OpenGL.
 

Well, with Cell, I think the biggest hurdle by far is actually taking good advantage of all of those simpler cores. In PCs, you have multiple cores of high complexity and they are identical; even one is quite a powerhouse. In Cell, the most complex core isn't very fast, and the 6 simpler cores are, well, simpler. So it's not only a challenge of getting good parallelized code, but also of leveraging a CPU made up of two kinds of cores. It's hard enough to do the parallelizing part by itself, let alone dishing out work so that the right (fastest-suited) code lands on the proper core. PC CPUs are the most forgiving for sure.
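To make the "push the right code to the right core" point concrete, here's a rough sketch in plain C++11 threads. It's emphatically not real Cell code (real SPE work goes through the SPU toolchain, DMA into local store, mailboxes, and so on); it just shows the shape of the problem: one control thread keeping the branchy logic, and a pool of workers chewing through the regular, streamable part in fixed chunks.

Code:
// Plain C++11 sketch of a heterogeneous work split, NOT actual Cell/SPE code.
#include <thread>
#include <vector>
#include <cmath>
#include <cstdio>
#include <cstddef>

// The kind of regular, branch-free loop you'd want to push to an SPE-like core.
void stream_job(std::vector<float>& data, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] = std::sqrt(data[i]) * 0.5f;
}

int main() {
    std::vector<float> data(1 << 20, 4.0f);
    const unsigned workers = 6;                  // stand-in for 6 usable SPEs
    const std::size_t chunk = data.size() / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w == workers - 1) ? data.size() : begin + chunk;
        pool.emplace_back(stream_job, std::ref(data), begin, end);
    }

    // Meanwhile the "PPE-like" control thread handles the irregular, branchy
    // part (game logic, scene traversal, I/O...) -- here just a placeholder.
    std::printf("control thread doing branchy work while workers crunch\n");

    for (auto& t : pool) t.join();
    std::printf("first element after processing: %f\n", data[0]);
    return 0;
}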

Do it poorly and you get a Quake 4 or Prey port as shown on 360.

At least that's what I've read.
 
Prey was a pretty good port, better than what the early marketplace demo led us to believe. It really didn't suffer from anything, imo.
 

NV2A was arguably slower than a Ti 500, at least in non-vertex-limited scenarios.

They both had the same 4 pixel pipelines, except NV2A ran at 233MHz and the Ti 500 ran a little faster at 240MHz.

The Ti 500 also had a dedicated 64MB providing 8GB/s, compared to NV2A, which only had shared access to 64MB running at 6.4GB/s.

So in those ways the Ti 500 is faster. NV2A's major advantage was of course the second vertex shader; the Ti 500 only had one, so NV2A would have had close to double its vertex shading power. I'm not sure whether that would have actually affected polygon setup rates, though, as I understand the vertex shaders and the triangle setup engine to be different parts of the GPU. So doubling the vertex shaders wouldn't necessarily double your peak polygon throughput, just what you could do to those polygons in terms of shading.

I know NV2A was always rated for a much higher polygon output than the Ti 500, but I think they were actually measuring triangle shading rates rather than setup rates. I could be way off with all this of course, but what better place to get some good answers... ;)
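Just on unit count x clock, and assuming both vertex shader designs retire vertices at the same per-unit rate (which I'm not certain of):

NV2A:  2 VS x 233 MHz = 466
Ti500: 1 VS x 240 MHz = 240

That's a ratio of about 1.94, so "close to double" holds at the peak-rate level; whether triangle setup can actually consume that many transformed vertices is the separate question above.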
 
The newly released midrange PC cards from NVIDIA don't sound a hell of a lot more impressive than the RSX. Perhaps this means that the PS3 will remain competitive with mid-range PCs for quite some time. Check it out:

NVIDIA GeForce 8-series

Model                             8600 GTS     8600 GT      8500 GT
Stream processors                 32           32           16
Core clock                        675 MHz      540 MHz      450 MHz
Shader clock                      1.45 GHz     1.18 GHz     900 MHz
Memory clock                      1.0 GHz      700 MHz      400 MHz
Memory interface                  128-bit      128-bit      128-bit
Memory bandwidth                  32 GB/sec    22.4 GB/sec  12.8 GB/sec
Texture fill rate (billion/sec)   10.8         8.6          3.6

Same 128-bit memory interface! With 32GB/s of memory bandwidth on a 128-bit controller, the 8600 GTS sounds like it would have been a better choice than RSX for the PS3. I'd like to see some more info on these cards to see how they compare to the RSX (especially the 8600 GT, which seems very similar).
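For reference, the 32 GB/s figure falls straight out of the bus width and memory clock, assuming the listed 1.0 GHz is the physical GDDR clock and data moves twice per clock (DDR):

128-bit bus = 16 bytes per transfer
16 bytes x 2 transfers/clock x 1.0 GHz = 32 GB/s

Which is why the interface width is the headline point of similarity to RSX.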
 

The 8600 GTS vs RSX is quite an interesting comparison. On paper RSX has some key advantages; however, the 8600 has the benefit of the much improved architecture. Seeing how it compares next to the 7900GT and 7950GT should give a good indication of how it would compare to RSX. I'm definitely looking forward to those benchmarks.

I'm guessing that as a console GPU it would have been a better choice than RSX, but it certainly wouldn't have been feasible to include in the PS3 without delaying it at least 6 months and at a higher cost.
 
Does RSX display G7x's texture filtering issues? I recently got a 7800 Go GTX for my notebook and was rather stunned to see how obvious the filtering is vs. ATI's and 8800's. In Oblivion, for example, there are these "lines" that I think are mip level separations that can be seen when running around. The levels just aren't filtered well enough. Never saw those before! I think it's worse than the 6800 I had in here before.

The G80 line has just amazingly good texture filtering though. ATI has also been very good. NV went a bit far in their tweaking of filtering.
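A toy way to picture why that tweaking shows up as visible lines (this is an illustration, not NVIDIA's actual filtering algorithm): full trilinear blends between two mip levels across the whole LOD fraction, while a "brilinear"-style optimisation pins the weight to a single mip over most of the range and only blends in a narrow band, so the switch becomes an abrupt boundary on the ground.

Code:
// Toy comparison of a full trilinear blend weight vs a narrowed
// ("brilinear"-style) blend weight between two mip levels.
#include <algorithm>
#include <cstdio>

// frac = fractional part of the computed LOD, i.e. where we are between
// mip level N (0.0) and mip level N+1 (1.0).
float trilinear_weight(float frac) {
    return frac;   // smooth blend across the whole interval
}

// Only blend inside a narrow band around the midpoint; elsewhere the weight
// is pinned to 0 or 1, so the mip switch becomes an abrupt line.
float brilinear_weight(float frac, float band = 0.15f) {
    float t = (frac - 0.5f) / band + 0.5f;
    return std::clamp(t, 0.0f, 1.0f);   // C++17
}

int main() {
    for (int i = 0; i <= 4; ++i) {
        float frac = i * 0.25f;
        std::printf("frac %.2f  trilinear %.2f  brilinear %.2f\n",
                    frac, trilinear_weight(frac), brilinear_weight(frac));
    }
    return 0;
}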

And as for the thoughts of a G8x in the PS3, remember that RSX has things changed, removed, and added compared to its siblings in the G7x family. So yeah, there would've been some serious delays, I'm sure.
 

It shouldn't be present if you choose HQ mode in the NVIDIA control panel. If I use High mode or lower then what you described is visible, but not in HQ mode.
 
GeForce 8600 GTS is good at benchmark suites but terrible at games.
http://www.ocworkbench.com/2007/nvidia/8600GTS/b1.htm
 
Nope, it's 24 pipes, with a 128-bit bus. There's no exact PC comparison, but it's basically a 7900GTX with a 128-bit bus.

No, a 7900GTX runs at 650MHz and its vertex shaders at 700MHz (perhaps RSX's vertex shaders also run at 530 or 550MHz?). Also don't forget that the 7900GT/7900GTX have 16 ROPs, compared to the RSX's 8 ROPs.
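Rough pixel output, taking the ROP counts above at face value and assuming the commonly cited ~500 MHz RSX core clock (that clock is my assumption, not from this thread):

7900GTX: 16 ROPs x 650 MHz ≈ 10.4 Gpixels/s
RSX:      8 ROPs x 500 MHz ≈  4.0 Gpixels/s

So on raw ROP throughput the desktop part is well over double, whatever the shader clocks turn out to be.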

Edit: Oh my, I didn't notice when his post was posted! :LOL:
 