Ken Kutaragi: "I can produce the PS3 anytime"

Didn't read all of this, but what if we define a ("true", as opposed to a custom) console GPU as a GPU that is designed from the ground up to meet the goals of a particular console (goals that differ from a PC's)?

That would make a distinction between Xenos, RSX, and PC GPUs, and everybody gets happy.
 
pc999 said:
Didn't read all of this, but what if we define a ("true", as opposed to a custom) console GPU as a GPU that is designed from the ground up to meet the goals of a particular console (goals that differ from a PC's)?

That would make a distinction between Xenos, RSX, and PC GPUs, and everybody gets happy.

That's the obvious way of defining them.

Basically MS had 2 options:
a) pay ATI to design a clean-slate GPU specifically for the console
b) pay ATI to adapt their latest existing PC GPU for a console

Scenario A costs the console maker more, takes longer, and overall requires more R&D, but it should provide better performance in a closed box than option B, which would be an adapted PC GPU, designed initially with PC goals and PC constraints in mind.

MS chose to do it the hard way; they footed the bill for three years' worth of R&D, and for that they deserve a little credit, that's all.
 
london-boy said:
The fact that there's not much "custom" there apart from eDRAM

Well, apart from the eDRAM and the MEMEXPORT feature. Oh, and the L2 cache locking. So apart from the eDRAM, MEMEXPORT, cache locking... er, and then the fixed-function tessellator. Okay, so, apart from...

Am I the only one who's reminded of "The Life of Brian" here?
 
Phil said:
At the end of the day, Xenos is nothing more than a GPU optimized for its environment and its task within the Xbox 360 system. In that sense, RSX is nothing more and nothing less.
I think what some people may be getting at is the level of optimization of the respective GPUs for their target environments. Of course both share a lot of ancestry with PC GPUs. ATI could have gone off the deep end with an entirely novel design, but they didn't. They used what they had learned in their latest PC GPUs, and made extensive changes, deletions, and additions to suit MS's target. NVIDIA has likely done the same... but the question is: how extensive are the changes, deletions, and additions to each GPU, relatively?

If NVIDIA took a G70, deleted the video-processing functions, and changed the bus to suit FlexIO, that doesn't seem like as deep a level of optimization as what ATI and MS have done. The RSX may still be a more powerful processor... who knows at this point? But if it is, I would venture that it gets there through brute force (more transistors and higher clocks) and higher manufacturing costs. Nothing at all wrong with brute force... but it isn't exactly synonymous with optimized (your term of choice).
 
Laa-Yosh said:
Well, apart from the eDRAM and the MEMEXPORT feature. Oh, and the L2 cache locking. So apart from the eDRAM, MEMEXPORT, cache locking... er, and then the fixed-function tessellator. Okay, so, apart from...

Am I the only one who's reminded of "The Life of Brian" here?

I thought all those features (apart from eDRAM) were present, or would be present shortly, on PC GPUs. :smile:
 
No. Last I heard there was no requirement for MEMEXPORT in DX10; this was something ATI were requesting. Also note that the unified-architecture capabilities of Xenos do not entirely map to SM3.0, nor to SM4.0 - I'm now actually expecting some significant architecture changes between Xenos and R6xx.
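For anyone wondering what MEMEXPORT actually buys you: a conventional pixel shader can read (gather) from arbitrary memory but can only write to its own fixed position in the render target, whereas MEMEXPORT lets the shader compute the destination address itself (scatter). Here's a minimal conceptual sketch of that difference in plain Python - the function names and the toy address computation are made up for illustration, not any real shader API:

```python
# Conceptual sketch (not a real hardware API): the difference between the
# standard GPU output model and a MEMEXPORT-style scatter write.

def shade_fixed_output(inputs):
    # Standard model: invocation i can only write to output slot i.
    # It may *read* from anywhere (gather), but the write location is fixed.
    return [x * 2 for x in inputs]

def shade_with_scatter(inputs, memory_size):
    # MEMEXPORT-style model: each invocation exports (address, value)
    # pairs, with the destination address computed by the shader itself.
    memory = [0] * memory_size
    for x in inputs:
        address = (x * 3) % memory_size  # toy shader-computed destination
        memory[address] = x * 2
    return memory
```

The point of the sketch is just the shape of the capability: scatter output is what makes things like building data structures on the GPU feasible, which the fixed-output model can't express.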

Cache locking will almost certainly not make it to the PC, since it requires co-operation between a known graphics chip and a known CPU, which just won't be the case on a PC. (Or at least, creating a mechanism for it would mean defining a control API and getting all CPU and graphics vendors to implement it in new chips - not gonna happen any time soon.)
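To illustrate the co-ordination problem being described: with cache locking, the CPU reserves part of its L2 as a staging area and streams data into it while the GPU drains it directly, so both sides must agree on the region's size and pacing. The toy simulation below (purely conceptual - there is no public console API like this) uses a bounded ring buffer to stand in for the locked cache ways, and the capacity check to stand in for that producer/consumer handshake:

```python
from collections import deque

# Conceptual simulation (no real console API): a bounded buffer standing
# in for L2 cache ways locked as a CPU-to-GPU streaming region.

class LockedCacheStream:
    def __init__(self, capacity):
        self.capacity = capacity  # size of the "locked" region
        self.buffer = deque()

    def cpu_produce(self, item):
        # The CPU must not overrun the locked region, or it would evict
        # data the GPU has not fetched yet - hence the back-pressure.
        if len(self.buffer) >= self.capacity:
            return False  # GPU is behind; CPU must wait
        self.buffer.append(item)
        return True

    def gpu_consume(self):
        # The GPU drains the locked region in FIFO order.
        return self.buffer.popleft() if self.buffer else None
```

The fixed `capacity` and the lock-step produce/consume protocol are exactly the kind of contract that only works when both chips are known in advance - which is the post's argument for why this stays a console trick.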
 