How Important are FLOPS to Gaming Performance?

BenQ

Newcomer
When it comes to FLOPS, I have heard every opinion under the sun, ranging from them being meaningless all the way to them being extremely important, "proving" that the PS3 is TWICE as powerful as the Xbox 360.

So I looked to this gen of consoles for reference.

The PS2 has a total of 6.2 GFLOPS and the Xbox has a total of 82.93 GFLOPS.

That means the Xbox can manage more than 13 times the amount of FLOPS compared to the PS2... and then I think about the relative gaming performance and graphical differences between the Xbox and PS2.

Now I look at the PS3 having roughly twice the amount of FLOPS of the Xbox 360, and that doesn't seem quite so impressive or relevant anymore.

I have heard many people claim that the PS3 having twice as many FLOPS "proves" that the PS3 is twice as powerful, but who among you would claim that the Xbox is more than 13 times more powerful than the PS2 due to the same spec?

How important are FLOPS?
 
Actually, 82.93 GFLOPS is the combined CPU and GPU figure. I believe the peak performance of the PS2's CPU is actually higher than that of the Xbox's.

And to answer your question, it really depends on the game. FLOPS are important if a game is doing a lot of physics, statistics, sound processing and 3D manipulation. Games like 3D shooters and simulations would benefit the most, while RPGs and platformers not so much.
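
To put a rough number on the physics point, here's a minimal sketch (plain C++, every figure hypothetical) of the floating-point work even a trivial per-frame physics step implies:

Code:
    #include <cstdio>
    #include <vector>

    // Minimal particle state; a real physics engine tracks far more per body.
    struct Body { float px, py, pz, vx, vy, vz; };

    int main() {
        const float dt = 1.0f / 60.0f;   // one 60 Hz frame
        const float g  = -9.81f;         // gravity, m/s^2
        std::vector<Body> bodies(10000, Body{0.0f, 10.0f, 0.0f, 1.0f, 0.0f, 0.0f});

        // One Euler step is ~8 floating-point ops per body, so 10,000 bodies
        // * 8 ops * 60 frames/s is ~4.8 MFLOPs/s for this trivial step alone;
        // collision detection and constraint solving cost far more.
        for (Body& b : bodies) {
            b.vy += g * dt;
            b.px += b.vx * dt;
            b.py += b.vy * dt;
            b.pz += b.vz * dt;
        }
        std::printf("stepped %zu bodies\n", bodies.size());
        return 0;
    }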

Just FYI, there's no point in looking at the peak performance of a CPU/GPU, since they never achieve it in real life. Neither the X360 nor the PS3 will reach their quoted 1 and 2 TFLOPS respectively.
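
A peak figure is really just arithmetic, not a benchmark. A sketch with purely illustrative numbers (the unit count, clock and ops/cycle below are assumptions, not either console's actual spec):

Code:
    #include <cstdio>

    int main() {
        // Theoretical peak = execution units * clock * FLOPs per cycle.
        const double units         = 8.0;   // hypothetical vector units
        const double clock_ghz     = 3.2;   // hypothetical clock
        const double flops_per_cyc = 8.0;   // e.g. one 4-wide multiply-add per cycle
        const double peak_gflops   = units * clock_ghz * flops_per_cyc;

        // Real code stalls on memory, branches and dependencies, so sustained
        // throughput is only some fraction of that ceiling.
        const double assumed_efficiency = 0.25; // purely illustrative
        std::printf("peak %.1f GFLOPS, plausible sustained %.1f GFLOPS\n",
                    peak_gflops, peak_gflops * assumed_efficiency);
        return 0;
    }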
 
Well... the XBox's CPU had the lowest GFLOPS figure and the GameCube's CPU had the highest, with the PS2's in between. Now, which system ended up being the most powerful? "More powerful" is sort of relative... but by general consensus the XBox was the system that ended up with the best graphics of the three. My belief is that in a GAMING system the graphics processor (the GPU) is THE most important factor: the faster the GPU, the better your performance in games is going to be, though a better CPU will help in other areas.

I am not a person who believes the upcoming consoles are roughly equivalent to each other; no console has EVER in history been equivalent to another, and this upcoming generation is no exception. That said, the console that produces the best graphics will be the system with the most powerful GPU, not the system with the better CPU (contrary to what Ken Kutaragi and a lot of the Japanese press believe). This war of graphics will be decided between XENOS and the RSX... but don't forget there is another war yet to be fought between the CPUs, as there are other areas where a more powerful CPU will help, and it is *NOT* graphics.

In this upcoming generation I believe the environment itself will become more important than the graphics that render that environment, and in that case it will come down to which processor is more capable. Which processor that is, I don't know yet, as there is NO information about XENON so far, though I have a great deal of faith in the procedural synthesis technology that Microsoft has been developing and seems to have incorporated into the XBox360 hardware. Only time will answer this for me, however...

I am still of the belief that the Cell CPU is much better suited to Sony Pictures and the movie industry than to the home or to being used as a gaming processor. The Cell's primary strength is its ability to quickly process large amounts of computational data, as well as distributed computing tasks, so it would be extremely well suited to rendering CGI for movies, to SETI-like projects and render farms, and to scientific computational tasks. I only question the Cell's performance in gaming, in your "average" consumer applications, and in server applications... and I don't think anyone would want to use the Cell as a server processor.

In a gaming environment floating-point performance is a small portion of the overall picture, but it is still important... but just because Sony claims its CPU has twice the floating-point performance of the XBox360's CPU does not mean the PS3 system as a whole is twice as powerful as the XBox360; it could actually be quite the opposite, or worse. I would highly suggest ignoring all FLOPS claims made by Microsoft and Sony. In the end, though... the games themselves will prove which system is more powerful, as it has always been.
 
BenQ said:
The PS2 has a total of 6.2 GFLOPS and the Xbox has a total of 82.93 GFLOPS.
82.93 ? :?

 
Couldn't the PS3 still be better at graphics due to its more powerful CPU?

If RSX and Xenos are close, well, Cell is very powerful and very well suited to graphics; it could potentially help out and push RSX well over Xenos. Hey, before hardware T&L units and vertex and pixel shaders, the CPU used to matter quite a bit for graphics performance.
 
Fox5 said:
Couldn't the PS3 still be better at graphics due to its more powerful CPU?

If RSX and Xenos are close, well, Cell is very powerful and very well suited to graphics; it could potentially help out and push RSX well over Xenos. Hey, before hardware T&L units and vertex and pixel shaders, the CPU used to matter quite a bit for graphics performance.

Long story short is basically... no.

I know that Ken Kutaragi of Sony has been alluding to this prospect, but the truth of the matter is we are dealing with a very PC-like GPU here, and it doesn't like to share the graphics pipeline. The second issue at hand is the sheer difficulty of even getting the Cell and the RSX to work together in a graphics pipeline. For the vast majority of purposes the Cell CPU is limited to post-processing of images that have already been rendered. There ARE some things the Cell CPU could do to improve other areas of the game and help performance... but the Cell CPU is going to be quite limited in what it can do to improve graphics on the PS3. You can thank the late change to nVidia for that...

There was a time when CPUs were used for graphics processing, a long long time ago... and there is a reason why we stopped using CPUs to process graphics: the GPU can do a far better job of it than the CPU could ever hope to do. I would have the Cell CPU focus on other tasks... after all, I believe the environment is becoming more important than the graphics that render the environment.

There are a lot of very complex reasons for this that I am not willing to get into right now, as it would require a lot of explaining and simplifying for people to understand them. Maybe sometime in the future... when more is known about the RSX in the PS3 and XENON in the XBox360...
 
Burnout 4



With a virtual showroom full of cars and many new and exciting tracks to be raced, Burnout Revenge is looking like the best new racer. Period. It looks stunning on Xbox, but to my surprise, it even looked better on PS2. Criterion is adamant that the PS2 still does a few things better than the Xbox hardware, one of which is the way it outputs video gamma display levels, leading to a crisper, sharper look and feel on the PS2. Amazing that these guys know the PS2 hardware so well. Both versions run at 60 frames per second in widescreen mode with 480p for those with HDTV displays. It's a true testament to the Renderware technology. (www.gamespy.com)


... "FLOPS" or not,ehh.





... AMD + RIVA TNT ... without 3DNow! SIMD support
... AMD + Voodoo ... with 3DNow! SIMD support
 
The GameMaster said:
There was a time when CPUs were used for graphics processing, a long long time ago... and there is a reason why we stopped using CPUs to process graphics: the GPU can do a far better job of it than the CPU could ever hope to do. I would have the Cell CPU focus on other tasks... after all, I believe the environment is becoming more important than the graphics that render the environment.

Link
link

Don’t limit graphics with CPU processing
Most games are CPU-limited
CPU is the bottleneck
Faster GPU will NOT result in faster graphics

... CPU-GPU batching; read that *.ppt file.
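
For what it's worth, the usual illustration of those bullets is draw-call batching: the CPU-side cost is per call, not per triangle. A minimal sketch in plain C++, with a stub standing in for any real graphics API:

Code:
    #include <cstdio>

    // Stub standing in for a real API's draw submission; the point is that
    // each call has a roughly fixed CPU-side cost regardless of its size.
    static int g_draw_calls = 0;
    void DrawTriangles(int tri_count) { ++g_draw_calls; (void)tri_count; }

    int main() {
        const int objects = 1000, tris_per_object = 50;  // hypothetical scene

        // Naive: one draw call per object -> CPU-limited long before the
        // GPU runs out of triangle throughput.
        g_draw_calls = 0;
        for (int i = 0; i < objects; ++i) DrawTriangles(tris_per_object);
        std::printf("naive:   %d draw calls\n", g_draw_calls);

        // Batched: merge objects that share state into one big submission.
        // Same triangles, a fraction of the per-call CPU overhead.
        g_draw_calls = 0;
        DrawTriangles(objects * tris_per_object);
        std::printf("batched: %d draw calls\n", g_draw_calls);
        return 0;
    }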
 
The GameMaster said:
I know that Ken Kutaragi of Sony has been alluding to this prospect, but the truth of the matter is we are dealing with a very PC-like GPU here, and it doesn't like to share the graphics pipeline.

GPUs don't exist in a vacuum; they take data in.

The GameMaster said:
For the vast majority of purposes the Cell CPU is limited to post-processing of images that have already been rendered.

That's actually quite optimistic, at least relative to some other things it could be helping with, IMO. Vertex processing seems more obvious.

The GameMaster said:
There are a lot of very complex reasons for this that I am not willing to get into right now, as it would require a lot of explaining and simplifying for people to understand them.

Heh, this is B3D :LOL:
 
The GameMaster said:
I know that Ken Kutaragi of Sony has been alluding to this prospect, but the truth of the matter is we are dealing with a very PC-like GPU here, and it doesn't like to share the graphics pipeline. The second issue at hand is the sheer difficulty of even getting the Cell and the RSX to work together in a graphics pipeline. For the vast majority of purposes the Cell CPU is limited to post-processing of images that have already been rendered. There ARE some things the Cell CPU could do to improve other areas of the game and help performance... but the Cell CPU is going to be quite limited in what it can do to improve graphics on the PS3. You can thank the late change to nVidia for that...
Either you've got a final production RSX (or its technical documents) in front of you, or you're just making this up! What evidence have you got that Cell<>RSX usage is difficult? Or that Cell is limited to post-processing? KK (okay, it's apparent you won't believe anything he says...) has already informed us that Cell and RSX can share vertex data directly.

Furthermore, you're off the mark in saying graphics are totally GPU-bound. CPUs generate the polys to feed the GPUs. That needs float power. It's float power that determines how much facial mesh distortion you can use for character expression, and float power that enables physics-based animations instead of precaptured animations. Which would you rate the better console: one with a 700 MHz PIII and a GeForce2, or an X800XT GPU coupled to an 8-bit Z80A running at 1.5 MHz?
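
To put a number on the facial-mesh point: morph-target (blend-shape) animation is just per-vertex float math, so expressiveness scales directly with available FLOPS. A minimal sketch, with all mesh sizes hypothetical:

Code:
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Blend-shape facial animation: final vertex = base + sum(weight[t] * delta[t]).
    // Every extra expression target costs ~6 FLOPs per vertex per frame, so
    // float throughput directly caps how expressive a face can be.
    int main() {
        const int verts = 5000, targets = 8;  // hypothetical face mesh
        std::vector<Vec3> base(verts, Vec3{0.0f, 0.0f, 0.0f}), out(verts);
        std::vector<std::vector<Vec3>> delta(
            targets, std::vector<Vec3>(verts, Vec3{0.01f, 0.0f, 0.0f}));
        std::vector<float> weight(targets, 0.5f);  // driven by the animation system

        for (int v = 0; v < verts; ++v) {
            Vec3 p = base[v];
            for (int t = 0; t < targets; ++t) {
                p.x += weight[t] * delta[t][v].x;  // 2 FLOPs
                p.y += weight[t] * delta[t][v].y;  // 2 FLOPs
                p.z += weight[t] * delta[t][v].z;  // 2 FLOPs
            }
            out[v] = p;
        }
        // 5000 verts * 8 targets * 6 FLOPs * 60 Hz = ~14.4 MFLOPs/s per face.
        std::printf("blended %d verts against %d targets\n", verts, targets);
        return 0;
    }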

PS3 should be as able as, if not more able than, XB360 in terms of procedural synthesis. The machine to me seems better set up for that. Assuming both machines can reach the same percentage of their peak FLOPS performance, PS3 will have 2x the capacity to calculate objects flying around, motion, etc., and these contribute not just to the gameplay (very important, though you seem to think otherwise :? ) but also to the visuals. Which would impress you more: a half dozen nicely rendered, AA'd American footballers with finite animation patterns, or 20 nicely rendered, non-AA'd American footballers with physics-based skeletal animation who react realistically to motion and impacts? The more FLOPS you have available on your CPU, the more animation techniques you can use, and this contributes as much to graphics quality as AA or high resolutions.

I don't know many people who say it's the graphics that make for better games and consoles. I would say that, traditionally, better graphics were important as games were very simplistic, but with the potential for new gaming models I think gameplay has the potential to be very diverse, with games based on complicated physics modelling, fluid dynamics and AI all contributing to the worth of the game. Games with even mediocre graphics can be great fun, and games with amazing graphics can be stinky poo games. If Killzone3 the game looks as good as the E3 showing, but has brain-dead AI like the original, would you rate it a better game than even, say, Halo1?

But second to this, you seem very confident not only that Xenos is more powerful than RSX, but noticeably so, and that RSX is a limited part. As GPUs don't work in isolation but are limited by the rest of the system's ability to supply them meaningful data, it seems rather irrational to be saying now that RSX is second in real-world performance to Xenos, given we have no details on it.
 
Oh no, not again. :LOL:

BenQ said:
the Xbox has a total of 82.93 NVFLOPS.

Fixed. ;)

Now I look at the PS3 having roughly twice the amount of FLOPS of the Xbox 360, and that doesn't seem quite so impressive or relevant anymore.

:rolleyes:

If car A can go 200 miles per hour and car B is built to go double that speed (400 miles per hour), then even though car A is pretty fast, car B is significantly faster. Let me put it another way: don't you consider a supercomputer that has double the performance of this to be impressive?
 
The really funny thing is it doesn't matter which system is more powerful. All the PS3 fans are gonna be moaning like the Xbox fans did this gen, because all the multiplatform games are still gonna look the same.
 
Pozer said:
The really funny thing is it doesn't matter which system is more powerful. All the PS3 fans are gonna be moaning like the Xbox fans did this gen, because all the multiplatform games are still gonna look the same.

I would even argue that multiplatform titles will look closer in terms of graphics than they did this generation. However, as we can't yet compare multiplatform next-generation titles, this is a question we'll be able to answer in the near future. Hardware specifications only tell so much (it comes down to real game performance).
 
Shifty Geezer said:
Furthermore, you're off the mark in saying graphics are totally GPU-bound. CPUs generate the polys to feed the GPUs.

No they don't, at least not for the majority of objects. They're stored in their initial space and processed by the vertex shaders. The idea is to minimize the dataflow between the CPU and the GPU, so it goes something like this (rough code sketch after the list):

1. set object data pointers (vertex data, index data)
2. set shaders
3. set parameters (transformation(s), morph strengths, etc)
4. draw command
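
A minimal sketch of that submission pattern (plain C++, with stub functions standing in for whatever real API sits underneath; the names are illustrative, not any console's SDK):

Code:
    #include <cstdio>

    // Illustrative stubs; a real API validates state and writes a command buffer.
    void SetVertexBuffer(const void* verts)  { (void)verts; }
    void SetIndexBuffer(const void* indices) { (void)indices; }
    void SetShaders(int vs_id, int ps_id)    { (void)vs_id; (void)ps_id; }
    void SetParameters(const float* mtx, float morph) { (void)mtx; (void)morph; }
    void Draw(int index_count) { std::printf("draw %d indices\n", index_count); }

    int main() {
        static const float object_verts[3] = {0};  // static mesh data stays put;
        static const int   object_index[3] = {0};  // only pointers cross over
        const float world_matrix[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};

        SetVertexBuffer(object_verts);       // 1. set object data pointers
        SetIndexBuffer(object_index);
        SetShaders(/*vs*/ 1, /*ps*/ 2);      // 2. set shaders
        SetParameters(world_matrix, 0.5f);   // 3. set parameters (transform, morph)
        Draw(3);                             // 4. draw command; the vertex
        return 0;                            //    shaders do the transform work
    }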

On both next-gen consoles, dynamic data can be generated on the CPU/SPEs, but that brings up interesting sync issues: the GPU can only receive the data feed when it's done with the "normal" buffered commands (or some kind of callback can be used, but that would hurt the ideal overlapped-processing situation).

everything is strictly IMHO of course.
 
reptile said:
Shifty Geezer said:
Furthermore, you're off the mark in saying graphics are totally GPU-bound. CPUs generate the polys to feed the GPUs.

No they don't, at least not for the majority of objects. They're stored in their initial space and processed by the vertex shaders. The idea is to minimize the dataflow between the CPU and the GPU, so it goes something like this:

1. set object data pointers (vertex data, index data)
2. set shaders
3. set parameters (transformation(s), morph strengths, etc)
4. draw command

CPUs may not generate all vertices - though vertex creation on the CPU can be done - but for things like collision detection etc. on one level or another, the CPU does have to touch the geometry. Well, with per-vertex collision detection anyway (you'll still only touch some, of course). Some simulation may require explicit contact with the vertices too.

I think a stronger argument is for vertex processing on the CPU in tandem with the GPU - either preprocessing vertices before handing them off to the GPU for further processing, or processing in parallel - but as you say, there are some issues to overcome that'll determine how well this can be done. I think it could be very much worth the effort though.
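
To sketch the "in tandem" idea: double-buffer the vertex stream so the CPU fills one buffer while the GPU consumes the other. Plain C++ with stubs; the real sync would be whatever fence or callback the platform provides:

Code:
    #include <cstdio>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Stubs: in a real engine these are API submission and a GPU fence wait.
    void KickGpuDraw(const std::vector<Vertex>& buf) {
        std::printf("GPU draws %zu verts\n", buf.size());
    }
    void WaitGpuDone(int buffer_index) { (void)buffer_index; }

    int main() {
        // Two buffers: the CPU preprocesses into one while the GPU still
        // reads the other, so the work overlaps instead of serializing.
        std::vector<Vertex> buffers[2] = { std::vector<Vertex>(1024),
                                           std::vector<Vertex>(1024) };
        for (int frame = 0; frame < 4; ++frame) {
            int cur = frame & 1;
            WaitGpuDone(cur);               // reuse only once the GPU is done with it
            for (Vertex& v : buffers[cur])  // CPU-side vertex work (skinning, morphs...)
                v.x += 0.01f;
            KickGpuDraw(buffers[cur]);      // submit; the GPU chews on this while the
        }                                   // CPU starts the next frame's other buffer
        return 0;
    }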
 
Titanio said:
CPUs may not generate all vertices - though vertex creation on the CPU can be done - but for things like collision detection etc. on one level or another, the CPU does have to touch the geometry. Well, with per-vertex collision detection anyway (you'll still only touch some, of course). Some simulation may require explicit contact with the vertices too.

I think a stronger argument is for vertex processing on the CPU in tandem with the GPU - either preprocessing vertices before handing them off to the GPU for further processing, or processing in parallel - but as you say, there are some issues to overcome that'll determine how well this can be done. I think it could be very much worth the effort though.

Right, preprocessing for collision/multipassing is feasible on the CPU, but that'd mean the data has to go through main RAM, so the direct connection between the CPU and GPU is not used. If collision is not required, I can very well imagine methods to have the CPU regenerate the actual "pose" when necessary, but it's very complicated.

My point is that, from a software engineer's point of view, this kind of limited-scope raw power is very difficult to use in a generic way. People coding for a specific platform will be able to build their structures around it, but fanboyism aside, we should realize that the vast majority of games/technologies are multiplatform.
 
Cell has the raw performance edge on Xenon, we all know that. So the real question is: how easily and effectively can this performance be tapped, and once it is, how much of a difference can it make?

As that pulled Anand article made clear, some developers are already moaning about how hard it is to really make use of all the FP power in the three-core Xenon (which is also way beyond what current PC CPUs offer), because of the required paradigm shift in programming. How much harder, if any, will it be in the case of the eight-core Cell, where the 7 SPEs (which hold almost all of that power) are even further removed from the way "traditional" CPUs work?

Nobody can say yet. They certainly offer intriguing potential, and a handful of top developers, mostly first-party ones, will spend their time squeezing every last bit of performance out of them. That should, in theory, translate into a noticeable or in some cases even significant edge in some games. Like this generation, those games will be extremely rare though, and just how much of a difference there will be remains to be seen...
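
To make the "further removed" point concrete: an SPE-style core sees only a small local store and must explicitly stream data in and out, where a traditional CPU just loops over memory and lets the caches cope. A rough sketch of the pattern in plain C++ (the DMA calls are stand-in stubs, not the actual Cell SDK; all sizes hypothetical):

Code:
    #include <cstdio>
    #include <cstring>

    const int LOCAL_STORE = 256;  // elements fitting our pretend local store

    // Stubs standing in for asynchronous DMA between main memory and local store.
    void dma_get(float* local, const float* main_mem, int n) {
        std::memcpy(local, main_mem, n * sizeof(float));
    }
    void dma_put(float* main_mem, const float* local, int n) {
        std::memcpy(main_mem, local, n * sizeof(float));
    }

    // A "traditional" CPU would just loop over data[]; an SPE-style core
    // has to explicitly stream chunks through its local store instead.
    void spe_style_scale(float* data, int count, float k) {
        float local[LOCAL_STORE];
        for (int base = 0; base < count; base += LOCAL_STORE) {
            int n = (count - base < LOCAL_STORE) ? count - base : LOCAL_STORE;
            dma_get(local, data + base, n);  // pull a chunk in
            for (int i = 0; i < n; ++i)      // compute on the local copy
                local[i] *= k;
            dma_put(data + base, local, n);  // push the result back out
        }
    }

    int main() {
        float data[1000];
        for (int i = 0; i < 1000; ++i) data[i] = 1.0f;
        spe_style_scale(data, 1000, 2.0f);
        std::printf("data[999] = %.1f\n", data[999]);
        return 0;
    }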
 
Gollum said:
Nobody can say yet. They certainly offer intriguing potential, and a handful of top developers, mostly first-party ones, will spend their time squeezing every last bit of performance out of them. That should, in theory, translate into a noticeable or in some cases even significant edge in some games. Like this generation, those games will be extremely rare though, and just how much of a difference there will be remains to be seen...

I would mostly agree, and this, to a degree, is true with every generation of consoles.

However, the benefits of the SPEs need not be reserved for an elite few. Alongside any less ambitious SPE usage from "general" devs, middleware that makes effective use of the SPEs could spread more significant benefit beyond just the cream of the crop (think, for example, about how UE3 licensees may benefit with no extra work on their part if the engine itself and AGEIA make good use of the SPEs). Although I expect the cream of the crop will still outperform the majority, obviously, simply because they'll have all that and then some.
 