PDA

View Full Version : Hardware utilization: PC vs console question


bgassassin
09-Jul-2012, 06:05
I hope this is the right board for this, but since it ties in with consoles I figured why not as it will get moved or closed if it's not. This comes from a discussion with a fellow B3D member elsewhere. :smile:

So is it really possible to estimate how much of a PC's hardware is utilized vs a console? This is based in part on a tweet from Carmack (https://twitter.com/ID_AA_Carmack/statuses/50277106856370176) that essentially says a PC's hardware is only used at about half of its power. He makes a comparison based on a PC and console having the same hardware. This is obviously based on things like APIs, inability to focus on one hardware spec, the PC OS, and whatever else I'm forgetting to list. First, is it really possible to estimate how much of a PC's hardware power is utilized for a game? And if so, is it really 50%? That sounds like a lot.

tongue_of_colicab
09-Jul-2012, 06:30
Obviously a PC is using 100% of the hardware available as well; only some power gets "wasted" on the OS, APIs, etc. So it's a question of which platform is utilizing its hardware more efficiently. That's something a PC just cannot win, because it's an open box, and besides the OS, APIs, etc., devs also have to keep in mind their games will likely be running on at least 3 generations of hardware.

As Carmack wrote, most of it probably comes from single-platform focus. I don't believe PC overhead vs consoles means a PC could do 50% more of whatever it's doing just by using more efficient/streamlined APIs etc. Most of it would come from having one single spec.

Kb-Smoker
09-Jul-2012, 06:55
We had a thread kinda about this already. Good post in that thread that goes with the Carmack quote.

http://forum.beyond3d.com/showthread.php?t=61841

Thread got locked btw

Originally Posted by egoless: How many X should be removed from PC for the API overhead, inefficiency, and unoptimized references you mentioned?

All of those factors apply in different ways so you can't just add them up. API overhead would be fairly universal depending on which API you're using. Just a wild guess but maybe the PC loses ~30% performance on average thanks to the API. Possibly less with DX11.

Inefficiency and unoptimised are more or less the same thing. The level of optimisation that's performed for a particular hardware set determines how efficiently that hardware is used. If you want to add it all up then the highest estimate of relative console efficiency I've heard from a reliable source is 2x from Carmack. That no doubt takes into account the API overhead as well as the massive performance gains you can achieve from optimising your code for a specific hardware set.

From personal experience I'd say 2x is about right for reasonable ports later in a console's life. Obviously there will always be exceptions on both sides that break that ratio.

antwan
09-Jul-2012, 07:06
Also lack of optimization. Not because of the PC not being "a single spec", but in general. I believe it has to do with the market/climate: why should developers work hard when the majority of PC gamers pirate anyway? While the ones who do pay for games probably also buy some extra GPUs, RAM, faster HDDs, ...
Not a single PC game out now feels like it really makes use of all the extra hardware. To me that is a lack of optimization. They just port a console game for a little while and then call it a day; that's how PC development feels to me at the moment.

So Carmack is right in a way, but I strongly believe there is also a severe lack of optimization which Carmack is afraid to speak of.

Squilliam
09-Jul-2012, 07:16
Why optimise towards a continually improving target? If developers want a better game on a console they have to optimise; if they want a better game on the PC they just increase the min requirements.

bgassassin
09-Jul-2012, 08:10
To help add some perspective on this using a rough scenario, here is how it sounds to me: let's say both the console and the PC have a 1 TFLOP GPU. It would seem that at best, using Carmack's tweet, the console's GPU would have a max of say 900 GFLOPs used (it still deals with the same things only much leaner) while the PC GPU only has 450 GFLOPs used. I'm not saying this is the case, before anyone believes that. I'm just trying to give an idea of how I'm seeing this to get a better answer.


Side note: I remember seeing that thread, but even saying "kinda" is pushing it IMO. :razz:

antwan
09-Jul-2012, 10:22
Squilliam, that kind of attitude is the reason why PC is lagging behind when you compare the specs to the things that devs do with it.

GraphicsCodeMonkey
09-Jul-2012, 10:57
I think 'power' is a poor term, hardware utilisation would be a better term.

No game could possibly hope to achieve 100% utilisation, ever, fact. There will always be occasions when the CPU is waiting on memory, memory is loaded into the cache which isn't used, wasting bandwidth, SIMD units are unused, branch prediction goes wrong and we get a rollback, etc. Keeping all CPU pipes 100% utilised is near impossible even for the most optimal hand-coded assembly loops; we might get near for some very specific cases but this is extremely rare in practice.
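
As a rough illustration of the "CPU waiting on memory" point, here is a minimal sketch (my own illustration, sizes are arbitrary): a dependent pointer-chasing walk keeps a core looking "100% busy" while most of its cycles are actually spent stalled on cache misses.

// Minimal sketch: a task manager reports this as 100% CPU usage even though
// the core is mostly waiting on memory.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    const std::size_t kCount = std::size_t{1} << 24;   // ~16M entries, far larger than any cache
    std::vector<std::size_t> next(kCount);
    std::iota(next.begin(), next.end(), std::size_t{0});

    // Sattolo's algorithm: turn the identity mapping into one big cycle so each
    // load depends on the previous one and the prefetcher cannot hide latency.
    std::mt19937_64 rng{42};
    for (std::size_t k = kCount - 1; k > 0; --k) {
        std::uniform_int_distribution<std::size_t> pick(0, k - 1);
        std::swap(next[k], next[pick(rng)]);
    }

    auto t0 = std::chrono::steady_clock::now();
    std::size_t i = 0;
    for (std::size_t step = 0; step < kCount; ++step)
        i = next[i];                                    // serial chain of cache misses
    auto t1 = std::chrono::steady_clock::now();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / kCount;
    std::printf("~%.1f ns per dependent load (walk ended at %zu)\n", ns, i);
    return 0;
}

The same data walked as independent streaming reads would run many times faster on the same "fully utilised" core, which is the gap being described here.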

Similarly on the GPU it is impossible to keep texture units, render target bandwidth and all the cores perfectly utilised all the time.

On consoles we have a much thinner API so less 'fat' between the game code and the metal and very little or no contention for the CPU/GPU resources.

Context switching CPU cores to perform other work is incredibly painful.

So without even trying, just by running on a console, your title is getting more out of the machine.

Then of course there is optimisation: we can optimise for the consoles because they are fixed specifications, because we get to know what the hardware is doing and because we get good tools to help with performance tuning. The real killer on PCs is having to spend time coding scalability into your title (particularly when this means different data) rather than spending that time optimising.

ERP
09-Jul-2012, 16:34
On the technical side, trying to support a wide variety of hardware is really hard and introduces a lot of overhead; you spend your optimization dollars on your min-spec machine, not the just-released mega machine. CPU/GPU balance is all over the board, often lower-end machines have "fast" CPUs and terrible GPUs, and even if you were just targeting the high end, it's impossible to answer simple questions like "what is a high end PC".

But to my mind the real reason PCs fall short is money; with a few notable exceptions there just isn't the $ in the PC market to justify a $70M spend on a title aimed at high-end PCs.

Kb-Smoker
09-Jul-2012, 18:04
Does anyone else think it's crazy Carmack can even post on Twitter? :lol: To get him to limit himself to 140 characters is crazy...

Anyway, some good quotes from Carmack about this topic.

http://www.pcper.com/reviews/Editorial/John-Carmack-Interview-GPU-Race-Intel-Graphics-Ray-Tracing-Voxels-and-more/Intervi
Ryan Shrout: Focusing back on the hardware side of things, in previous years’ Quakecons we've had debates about what GPU was better for certain game engines, certain titles and what features AMD and NVIDIA do better. You've said previously that CPUs now, you don't worry about what features they have as they do what you want them to do. Are we at that point with GPUs? Is the hardware race over (or almost over)?

John Carmack: I don't worry about the GPU hardware at all. I worry about the drivers a lot because there is a huge difference between what the hardware can do and what we can actually get out of it if we have to control it at a fine grain level. That's really been driven home by this past project by working at a very low level of the hardware on consoles and comparing that to these PCs that are true orders of magnitude more powerful than the PS3 or something, but struggle in many cases to keep up the same minimum latency. They have tons of bandwidth, they can render at many more multi-samples, multiple megapixels per screen, but to be able to go through the cycle and get feedback... “fence here, update this here, and draw them there...” it struggles to get that done in 16ms, and that is frustrating.

Ryan Shrout: That's an API issue, API software overhead. Have you seen any improvements in that with DX 11 and multi-threaded drivers? Are those improving that or is it still not keeping up?

John Carmack: So we don't work directly with DX 11 but from the people that I talk with that are working with that, they (say) it might [have] some improvements, but it is still quite a thick layer of stuff between you and the hardware. NVIDIA has done some direct hardware address implementations where you can bypass most of the OpenGL overhead, and other ways to bypass some of the hidden state of OpenGL. Those things are good and useful, but what I most want to see is direct surfacing of the memory. It’s all memory there at some point, and the worst thing that kills Rage on the PC is texture updates. Where on the consoles we just say “we are going to update this one pixel here,” we just store it there as a pointer. On the PC it has to go through the massive texture update routine, and it takes tens of thousands of times [longer] if you just want to update one little piece. You start to amortize that overhead when you start to update larger blocks of textures, and AMD actually went and implemented a multi-texture update specifically for id Tech 5 so you can batch up and eliminate some of the overhead by saying “I need to update these 50 small things here,” but still it’s very inefficient. So I’m hoping that as we look forward, especially with Intel integrated graphics [where] it is the main memory, there is no reason we shouldn't be looking at that. With AMD and NVIDIA there's still issues of different memory banking arrangements and complicated things that they hide in their drivers, but we are moving towards integrated memory on a lot of things. I hope we wind up being able to say “give me a pointer, give me a pitch, give me a swizzle format,” and let me do things managing it with fences myself and we'll be able to do a better job.
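
To make the texture-update point concrete, a minimal hedged sketch (my own illustration, not id's code): the PC path below is plain OpenGL, while the console-style path stands in for a platform where the texture's memory layout is known and directly mapped (swizzling ignored for simplicity).

// Sketch of the two paths Carmack contrasts. The PC path is real OpenGL; the
// console-style path is illustrative only.
#include <GL/gl.h>
#include <cstdint>

// PC: even a 1x1 update goes through the driver's generic update routine
// (validation, possible format conversion, staging copy, internal fencing).
void update_one_texel_pc(GLuint tex, int x, int y, std::uint32_t rgba) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, &rgba);
}

// Console-style: with a known pitch the update is just a store through a
// pointer into mapped memory; no API call, no driver in the way.
void update_one_texel_console(std::uint32_t* mapped_texels, int pitch_in_texels,
                              int x, int y, std::uint32_t rgba) {
    mapped_texels[y * pitch_in_texels + x] = rgba;
}

The AMD extension he mentions essentially lets many such small updates be batched into one call, so the per-call overhead is amortized rather than removed.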

To help add some perspective on this using a rough scenario, here is how it sounds to me: let's say both the console and the PC have a 1 TFLOP GPU. It would seem that at best, using Carmack's tweet, the console's GPU would have a max of say 900 GFLOPs used (it still deals with the same things only much leaner) while the PC GPU only has 450 GFLOPs used. I'm not saying this is the case, before anyone believes that. I'm just trying to give an idea of how I'm seeing this to get a better answer.


Side note: I remember seeing that thread, but even saying "kinda" is pushing it IMO. :razz:

I look at it as the console having 2 TFLOPs of "PC power" and the PC having 1 TFLOP. :wink: Again, flops are a very bad measure of "power."

Yeah, that thread wasn't just about this, but by page 3 it was talking about this topic. That is where I got that quote from.

bgassassin
09-Jul-2012, 20:51
Thanks for the responses so far, but the answers are saying what I have a decent understanding of already. I have a good enough grasp on the "why". I'm trying to find out whether it's possible to estimate how poorly utilized PC hardware is vs a console. Because when I see Carmack's comment, I'm pretty much left to believe that in a perfect scenario, with a PC using a 7970 with no bottlenecks to the GPU, the PS4 with its target GPU and no bottlenecks is almost on par with the aforementioned PC because of how underutilized the PC's hardware would be.

I look at it as the console having 2 TFLOPs of "PC power" and the PC having 1 TFLOP. :wink: Again, flops are a very bad measure of "power."

Yeah, that thread wasn't just about this, but by page 3 it was talking about this topic. That is where I got that quote from.

For those wondering, this is the person I had the discussion with. Still, you can't look at it as the GPU surpassing its theoretical target. That's not logical. It's just better utilized.

pjbliverpool
09-Jul-2012, 21:00
It's worth noting that when Carmack says 2x he's talking about DX9. DX11 will reduce that somewhat.

Also, that level of optimisation will only apply to games at least a couple of years into the console lifecycle because of the time it takes developers to optimise for console hardware. So I wouldn't expect a 1.8 TFLOP GPU in PS4 to be matching the 7970 on day 1. Two years down the line in newer games it might, but of course by then the 7970 will be mainstream-level performance.

Finally, when we say PCs have half the efficiency of consoles, that would only be at console-level graphics, i.e. it would take double RSX performance to achieve PS3-level visuals in a modern game. Once you start scaling the graphics up I expect PC games get far less efficient than that due to the lack of optimisation given over to graphics beyond the console level.

ERP
09-Jul-2012, 21:11
No, it's not possible, because it's not utilization of resources the way you're thinking about it.

Carmack is talking about a very specific use case, updating textures. In that one case there is an enormous overhead resulting in a speed penalty of several orders of magnitude; this is especially true if you intend to update only a portion of the texture.
The same is true to a lesser extent of most GPU level resources (index buffers/vertex buffers etc.).

In the more general case, it's not that simple, just because you're running on a PC your pixel or vertex shaders don't magically run at 1/2 speed.
The only thing that the API/driver overhead can do to hurt performance is to starve the GPU, if you are dynamically updating GPU resources this can certainly happen because of fences inserted by the driver in order to respect locks.

In practice, however, if you understand the restrictions the environment places on you, you can get similar utilization to consoles for the general submit-triangles-and-render-them use cases. You have to limit your batch counts and you have to be careful with resource locking, but unless you are trying to do something overly clever, % utilization can be similar.

What you can't do is tailor your art/design to a known quantity, and that is a huge disadvantage, but you can't quantify it in flops, or as a percentage.

FWIW the last time I sat through an MS conference, some 360 games did still optimize shaders by hand, which will buy you something, and it's something you wouldn't see on a PC, but outside of pathological cases where the compiler generates stupid code, it's not going to be a huge saving.
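
As a concrete, hedged illustration of the resource-locking point above (function names and usage are mine, assuming a desktop OpenGL context with a loader such as glad already initialised): mapping a buffer the GPU may still be reading forces the driver to fence and wait, while orphaning the storage lets it hand back fresh memory immediately.

// Sketch only: two ways to stream vertex data into the same VBO.
#include <glad/glad.h>   // assumed loader; any GL function loader works
#include <cstring>

// Risky path: if the GPU is still reading this buffer, the driver must insert a
// fence and block the CPU here, which is exactly how the GPU ends up starved.
void upload_blocking(GLuint vbo, const void* data, GLsizeiptr bytes) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes, GL_MAP_WRITE_BIT);
    std::memcpy(dst, data, bytes);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}

// Friendlier path: "orphan" the old storage so the driver retires it whenever
// the GPU finishes, and gives us fresh memory to write into right now.
void upload_orphaned(GLuint vbo, const void* data, GLsizeiptr bytes) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);   // orphan
    void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    std::memcpy(dst, data, bytes);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}

A console title largely sidesteps the question by managing the memory and the fences itself, which is the "thinner API" advantage described earlier in the thread.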

Kb-Smoker
09-Jul-2012, 21:48
It's worth noting that when Carmack says 2x he's talking about DX9. DX11 will reduce that somewhat.

He answered just that in my quote:

Ryan Shrout: That's an API issue, API software overhead. Have you seen any improvements in that with DX 11 and multi-threaded drivers? Are those improving that or is it still not keeping up?

John Carmack: So we don't work directly with DX 11 but from the people that I talk with that are working with that, they (say) it might [have] some improvements, but it is still quite a thick layer of stuff between you and the hardware. NVIDIA has done some direct hardware address implementations where you can bypass most of the OpenGL overhead, and other ways to bypass some of the hidden state of OpenGL. Those things are good and useful, but what I most want to see is direct surfacing of the memory. It’s all memory there at some point, and the worst thing that kills Rage on the PC is texture updates. Where on the consoles we just say “we are going to update this one pixel here,” we just store it there as a pointer. On the PC it has to go through the massive texture update routine, and it takes tens of thousands of times [longer] if you just want to update one little piece. You start to amortize that overhead when you start to update larger blocks of textures, and AMD actually went and implemented a multi-texture update specifically for id Tech 5 so you can batch up and eliminate some of the overhead by saying “I need to update these 50 small things here,” but still it’s very inefficient. So I’m hoping that as we look forward, especially with Intel integrated graphics [where] it is the main memory, there is no reason we shouldn't be looking at that. With AMD and NVIDIA there's still issues of different memory banking arrangements and complicated things that they hide in their drivers, but we are moving towards integrated memory on a lot of things. I hope we wind up being able to say “give me a pointer, give me a pitch, give me a swizzle format,” and let me do things managing it with fences myself and we'll be able to do a better job.
*
Also, that level of optimisation will only apply to games at least a couple of years into the console lifecycle because of the time it takes developers to optimise for console hardware. So I wouldn't expect a 1.8 TFLOP GPU in PS4 to be matching the 7970 on day 1. Two years down the line in newer games it might, but of course by then the 7970 will be mainstream-level performance.
*
Finally, when we say PCs have half the efficiency of consoles, that would only be at console-level graphics, i.e. it would take double RSX performance to achieve PS3-level visuals in a modern game. Once you start scaling the graphics up I expect PC games get far less efficient than that due to the lack of optimisation given over to graphics beyond the console level.
I agree 100%.



For those wondering, this is the person I had the discussion with. Still, you can't look at it as the GPU surpassing its theoretical target. That's not logical. It's just better utilized.

I never said it would "surpass its theoretical target."

First off, the whole debate was: "The PS4/nextbox would not be able to handle the Square Enix demo, UE4 demo and Star Wars 1313 because those games were running on a high-end PC. Even if they do, they will never look like that, because these games were running on a 680 series card which the console could not match."
Which I said was untrue, because at the given specs the PS4 would be able to handle any PC game running on a 680 GTX and look about the same at console resolutions. I gave many reasons why, and look, they've been repeated many times in this thread.... :cool:

So the real question was: given the PS4 specs, could it match a 680 GTX running a game?

bgassassin
09-Jul-2012, 22:15
No, it's not possible, because it's not utilization of resources the way you're thinking about it.

Carmack is talking about a very specific use case, updating textures. In that one case there is an enormous overhead resulting in a speed penalty of several orders of magnitude; this is especially true if you intend to update only a portion of the texture.
The same is true to a lesser extent of most GPU level resources (index buffers/vertex buffers etc.).

In the more general case, it's not that simple, just because you're running on a PC your pixel or vertex shaders don't magically run at 1/2 speed.
The only thing that the API/driver overhead can do to hurt performance is to starve the GPU, if you are dynamically updating GPU resources this can certainly happen because of fences inserted by the driver in order to respect locks.

In practice, however, if you understand the restrictions the environment places on you, you can get similar utilization to consoles for the general submit-triangles-and-render-them use cases. You have to limit your batch counts and you have to be careful with resource locking, but unless you are trying to do something overly clever, % utilization can be similar.

What you can't do is tailor your art/design to a known quantity, and that is a huge disadvantage, but you can't quantify it in flops, or as a percentage.

FWIW the last time I sat through an MS conference, some 360 games did still optimize shaders by hand, which will buy you something, and it's something you wouldn't see on a PC, but outside of pathological cases where the compiler generates stupid code, it's not going to be a huge saving.

Well see, this is kb's fault because that perspective started with him. :razz: And that's why I'm trying to get the proper understanding. He just posted roughly how the original debate started, but here is the original post, to see why I'm asking that question from that perspective.

http://www.neogaf.com/forum/showpost.php?p=39484743&postcount=5967

We have the specs for the PS4. It has a 1.86 TFLOP GPU. The highest-end PC card out there is the Radeon HD 7970 at 3.79 TFLOPs.

In a closed box, John Carmack said, you can double the performance of a GPU compared to PC. So you have the best GPU out against the PS4 at 3.72 TFLOPs [2x 1.86 TFLOPs]. This is from the API software overhead that you do not get on a console. The ensuing debate led to them changing the thread title. :lol:

And with the last sentence, I didn't feel like there was a dramatic benefit, so you're confirming what I felt but wasn't sure about due to no personal experience.

I never said it would "surpass its theoretical target."

I didn't say you did. I'm saying you can't look at it from that perspective like when you said the GPU in my scenario was "2 TFLOPs" in your view.

Kb-Smoker
09-Jul-2012, 23:23
I didn't say you did. I'm saying you can't look at it from that perspective like when you said the GPU in my scenario was "2 TFLOPs" in your view.
I don't know what is hard to understand. We are talking about running console games vs PC games.

The console gains performance; the PC doesn't somehow lose power. For your example you have a 1 TFLOP PC GPU and a 1 TFLOP PS4. The PS4 would get around double the performance running a game built for it compared to a PC game. The PC cannot lose performance. Like he said, the PC hardware doesn't start running at 50%.

Think of it this way. You have 2 stock mustangs, now you take one and dyno tune it. You still have the same engine but this improves the performance. The stock mustang doesn't lose performance. Now consoles take this one step farther: they design the system just to run games. Using the mustang again, you remove the seats, radio, A/C and improve performance by reducing weight.

There is no debate that there are performance improvements on consoles. The only debate is how much, but even then there is no one answer. Not sure why you are so focused on this when I was talking about the next-gen demos running at E3. I was using John Carmack as an example of how it was possible, not saying it's some golden rule.

Sonic
09-Jul-2012, 23:58
I don't know what is hard to understand. We are talking about running console games vs PC games.

The console gains performance; the PC doesn't somehow lose power. For your example you have a 1 TFLOP PC GPU and a 1 TFLOP PS4. The PS4 would get around double the performance running a game built for it compared to a PC game. The PC cannot lose performance. Like he said, the PC hardware doesn't start running at 50%.

Think of it this way. You have 2 stock mustangs, now you take one and dyno tune it. You still have the same engine but this improves the performance. The stock mustang doesn't lose performance. Now consoles take this one step farther: they design the system just to run games. Using the mustang again, you remove the seats, radio, A/C and improve performance by reducing weight.

There is no debate that there are performance improvements on consoles. The only debate is how much, but even then there is no one answer. Not sure why you are so focused on this when I was talking about the next-gen demos running at E3. I was using John Carmack as an example of how it was possible, not saying it's some golden rule.


I quite dislike your mustang analogy. Why bother ripping out the AC, seats, and other weight adders like power windows when you can keep all this shit in there and still go faster with a little investment? I'd much rather pull up in a fast and functional mustang than one that has taken comfort and shoved it out the window. Pointless to me to tune a stock Mustang when I could spend an extra couple hundred bucks on performance upgrades and get the tune for free. That, and the fact is that if the PS4 is the mustang, then that would make the PC a freaking tank that is pure brute force and is faster than a mustang. So the mustang might be more efficient, but the tank makes up for it in brute power and ends up faster in any case. It's not a stock mustang vs. a tuned mustang argument...it's a tuned stock mustang vs. a loaded tank with speed. Of course most PCs aren't like that, and will be like matchbox cars compared to the mustang at PS4 launch. Still, why do an apples to apples comparison when we can do an apples to oranges comparison?

Kb-Smoker
10-Jul-2012, 00:17
I quite dislike your mustang analogy. Why bother ripping out the AC, seats, and other weight adders like power windows when you can keep all this shit in there and still go faster with a little investment? I'd much rather pull up in a fast and functional mustang than one that has taken comfort and shoved it out the window. Pointless to me to tune a stock Mustang when I could spend an extra couple hundred bucks on performance upgrades and get the tune for free. That, and the fact is that if the PS4 is the mustang, then that would make the PC a freaking tank that is pure brute force and is faster than a mustang. So the mustang might be more efficient, but the tank makes up for it in brute power and ends up faster in any case. It's not a stock mustang vs. a tuned mustang argument...it's a tuned stock mustang vs. a loaded tank with speed. Of course most PCs aren't like that, and will be like matchbox cars compared to the mustang at PS4 launch. Still, why do an apples to apples comparison when we can do an apples to oranges comparison?

Sure, comparing a high-end PC to a console, but I was comparing equal hardware. Like RSX vs a 7800 GT running games. This is hard to do with benchmarks because the 7800 GT cannot even run modern games like BF3.

Again, this was really about PS4 running the next-gen demos at E3; it got twisted into this debate.

bgassassin
10-Jul-2012, 02:23
Sure, comparing a high-end PC to a console, but I was comparing equal hardware. Like RSX vs a 7800 GT running games. This is hard to do with benchmarks because the 7800 GT cannot even run modern games like BF3.

Again, this was really about PS4 running the next-gen demos at E3; it got twisted into this debate.

It wasn't twisted into this debate. You started it that way. You tried to compare differently powered hardware (PS4 and demos on a PC using a GTX 680), and to back it up used a tweet from Carmack comparing similar PC and console hardware.

I don't know what is hard to understand. We are talking about running console games vs PC games.

The console gains performance; the PC doesn't somehow lose power. For your example you have a 1 TFLOP PC GPU and a 1 TFLOP PS4. The PS4 would get around double the performance running a game built for it compared to a PC game. The PC cannot lose performance. Like he said, the PC hardware doesn't start running at 50%.

Think of it this way. You have 2 stock mustangs, now you take one and dyno tune it. You still have the same engine but this improves the performance. The stock mustang doesn't lose performance. Now consoles take this one step farther: they design the system just to run games. Using the mustang again, you remove the seats, radio, A/C and improve performance by reducing weight.

There is no debate that there are performance improvements on consoles. The only debate is how much, but even then there is no one answer. Not sure why you are so focused on this when I was talking about the next-gen demos running at E3. I was using John Carmack as an example of how it was possible, not saying it's some golden rule.

It's very easy to understand, but your explanations aren't logical. Your explanations try to make the console environment sound like it can exceed its capability. You're even trying to twist ERP's post to justify what you are saying. Which really goes against the analogy you just made. I also agree with Sonic's analogy.

Andrew Lauritzen
10-Jul-2012, 02:44
Think of it this way. You have 2 stock mustangs, now you take one and dyno tune it.
More like you take a prius and tune it... then race it against the stock mustang that no one bothered to tune because it's way faster than the prius is ever going to get ;)

The whole topic is a bit silly TBH - outside of very specific cases like Carmack mentioning updating textures (where there's an abstraction penalty precisely because guess what, there's actually implementation differences!) you can't draw any general conclusions. Furthermore since almost no one bothers to optimize for PC (since frankly it's just a lot faster in the places that you'd typically optimize, even the day the new consoles come out), it's hard to compare the "speed of light" in both cases.

I've actually gotten a bit cynical about this entire argument lately since there's so many unsubstantiated comments flying around one way or another that are just outdated or untrue. Hell a lot of people on my twitter feed are just discovering DX11 (presumably finally moving to new console development) so I'm gonna go ahead and claim that the vast majority of game developers are not really even qualified to make a comment on this... again, excepting very specific use cases like Carmack's, but even he admitted to not having tried an API that has been out for years now.

There was talk of some of this a few months back and claims of how many draw calls or state changes could be done in one place or another, most of which turned out to be nonsense when Humus and I and a few others put them to the test on PC. Thus you can understand my cynicism to this entire discussion.

Let's just get to the heart of this - what exactly are you trying to do/figure out here? Because the question is ill-formed, and it makes it sound like you have some sort of agenda that you're just trying to justify with cherry-picked "facts". If that's not true, great, but please enlighten me to the end goal here.

Kb-Smoker
10-Jul-2012, 02:51
It wasn't twisted into this debate. You started it that way. You tried to compare differently powered hardware (PS4 and demos on a PC using a GTX 680), and to back it up used a tweet from Carmack comparing similar PC and console hardware.



It's very easy to understand, but your explanations aren't logical. Your explanations try to make the console environment sound like it can exceed its capability. You're even trying to twist ERP's post to justify what you are saying. Which really goes against the analogy you just made. I also agree with Sonic's analogy.

I'm not even sure what you are debating; are you saying that it doesn't improve performance? Sonic's analogy was again about high-end PC vs console. Not sure how that backs up anything we are talking about....


More like you take a prius and tune it... then race it against the stock mustang that no one bothered to tune because it's way faster than the prius is ever going to get ;)

It's like no one reads the thread and thinks people are talking about high-end PC vs consoles...

:lol:

The funny thing about a prius is that when racing it you get terrible mpg. Like on Top Gear they got around 12 or something silly...

Andrew Lauritzen
10-Jul-2012, 02:53
It's like no one reads the thread and thinks people are talking about high-end PC vs consoles...
I was just kidding around, hence the ;). See the rest of the post for the more serious reply.

Kb-Smoker
10-Jul-2012, 02:55
I was just kidding around, hence the ;). See the rest of the post for the more serious reply.

Oh :lol: That wasn't there when I posted. I thought B3D had gone crazy today... :smile:

:runaway:

The whole debate started over this statement: can the PS4 at the leaked spec handle the games running on a single 680 GTX (UE4, Star Wars 1313, and the Square Enix demo, all at E3) and look about the same at console resolution?

edit: to be fair to Carmack, they do not use DX, they use OpenGL. That is why he does not work with it...

Andrew Lauritzen
10-Jul-2012, 03:02
Upon rereading your original question, I guess I'd answer it like this:


First, is it really possible to estimate how much of a PC's hardware power is utilized for a game?
Outside of specific use cases, no. And even then, it's not about "how much is utilized"; it's utterly trivial to make the task manager say 100% ;) It's about how much time and more importantly power it takes to solve a fixed-size problem.


And if so, is it really 50%?
In the general case, definitely not.

That said, I'd still say I think low level APIs are interesting, and you do lose *something* to the API. Specifically on integrated graphics, as Carmack mentions, the current APIs are not particularly well suited. But that's sort of a separate topic honestly.
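
For what it's worth, the "make the task manager say 100%" point is literally this trivial (a throwaway sketch, nothing more):

// Throwaway sketch: pin every logical core at "100% utilization" while doing
// no useful work at all, which is why the utilization number alone says little.
#include <thread>
#include <vector>

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;                       // the query may legitimately return 0
    std::vector<std::thread> spinners;
    for (unsigned i = 0; i < n; ++i)
        spinners.emplace_back([] { volatile unsigned long x = 0; for (;;) ++x; });
    for (auto& t : spinners) t.join();       // never returns; busy != productive
}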

bgassassin
10-Jul-2012, 03:14
I'm not even sure what you are debating; are you saying that it doesn't improve performance?

If "improve performance" means for example a 7850 in a console environment means that it will perform better than it theoretically can, then yes I disagree with that. It will never perform better then it was designed to in a console. It would just perform closer to how it was intended to perform in a console.

Sonic's analogy was again about high-end PC vs console. Not sure how that backs up anything we are talking about....
It's like no one reads the thread and thinks people are talking about high-end PC vs consoles...

And here is why people think that.

The whole debate started over this statement: can the PS4 at the leaked spec handle the games running on a single 680 GTX (UE4, Star Wars 1313, and the Square Enix demo, all at E3) and look about the same at console resolution?

To me it seems like you're missing your own premise that started this. You compared a high end PC to a console and that's why people respond accordingly. I was trying to find out how logical the premise was from people more experienced because it didn't make sense to me. I'm satisfied with what I've seen from others posting.

Upon rereading your original question, I guess I'd answer it like this:


Outside of specific use cases, no. And even then, it's not about "how much is utilized"; it's utterly trivial to make the task manager say 100% :wink: It's about how much time and more importantly power it takes to solve a fixed-size problem.


In the general case, definitely not.

That said, I'd still say I think low level APIs are interesting, and you do lose *something* to the API. Specifically on integrated graphics, as Carmack mentions, the current APIs are not particularly well suited. But that's sort of a separate topic honestly.

Actually those were my questions from a previous debate with KB. But the general responses given were in line with what I was thinking.

Andrew Lauritzen
10-Jul-2012, 03:17
The whole debate started over this statement: can the PS4 at the leaked spec handle the games running on a single 680 GTX (UE4, Star Wars 1313, and the Square Enix demo, all at E3) and look about the same at console resolution?
"Console resolution" = 720p? With or without AA? 30Hz?

Remember, the difference between (720p, No AA, 30Hz) and (1080p, 4x AA, 60Hz) is ~4-10x more work! Indeed that massive difference in the amount of computational power required to hit the "baseline expected performance" on each platform is what often makes people think PCs are vastly less efficient than they really are.
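
A quick back-of-the-envelope check on that multiplier (my own numbers, assuming cost scales with pixels times frame rate, and treating MSAA as less than a full multiplier since it mostly adds resolve/bandwidth cost rather than shading cost):

// (1080p, 4xAA, 60Hz) versus (720p, no AA, 30Hz), very roughly.
#include <cstdio>

int main() {
    const double base   = 1280.0 * 720.0  * 30.0;   // 720p, no AA, 30 Hz
    const double target = 1920.0 * 1080.0 * 60.0;   // 1080p, 60 Hz, before AA
    const double no_aa  = target / base;            // = 4.5x
    const double aa_max = no_aa * 4.0;              // = 18x if 4xAA multiplied *all* work
    std::printf("%.1fx without AA, %.1fx if AA scaled everything\n", no_aa, aa_max);
    // Real cost lands in between, hence the rough 4-10x figure above.
    return 0;
}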

Kb-Smoker
10-Jul-2012, 03:37
If "improve performance" means for example a 7850 in a console environment means that it will perform better than it theoretically can, then yes I disagree with that. It will never perform better then it was designed to in a console. It would just perform closer to how it was intended to perform in a console.



To me it seems like you're missing your own premise that started this. You compared a high end PC to a console and that's why people respond accordingly. I was trying to find out how logical the premise was from people more experienced because it didn't make sense to me. I'm satisfied with what I've seen from others posting.



Actually those were my questions from a previous debate with KB. But the general responses given were in line with what I was thinking.

"will not perform better theoretically can, but it will perform closer. " Then wouldnt this statement be true. A console hardware would preform better than pc hardware? :lol:

I am not comparing any high-end PC, I am comparing a 680 GTX to a console that will release in 1-2 years. That 680 will not be high end at the time these consoles launch.

Everyone has said it will perform better, exactly what I said... :lol: Now how much better is really up for debate. I don't think we could find an exact answer unless we look at it game by game.


"Console resolution" = 720p? With or without AA? 30Hz?

Remember, the difference between (720p, No AA, 30Hz) and (1080p, 4x AA, 60Hz) is ~4-10x more work! Indeed that massive difference in the amount of computational power required to hit the "baseline expected performance" on each platform is what often makes people think PCs are vastly less efficient than they really are.
With PS360 that could mean below 720p. A lot of big AAA games run under 720p.

But I do not believe that will be the case with PS4/X720. I believe 720p will be the target for next-gen games. I don't see the need for 1080p given that not all TVs even support this res.

bgassassin
10-Jul-2012, 03:51
"will not perform better theoretically can, but it will perform closer. " Then wouldnt this statement be true. A console hardware would preform better than pc hardware? :lol:

I am not comparing any high-end PC, I am comparing a 680 GTX to a console that will release in 1-2 years. That 680 will not be high end at the time these consoles launch.

Everyone has said it will perform better, exactly what I said... :lol: Now how much better is really up for debate. I don't think we could find an exact answer unless we look at it game by game.

I never said the console hardware wouldn't perform better than PC hardware. This is what I'm saying is wrong.

I look at it as the console having 2 TFLOPs of "PC power" and the PC having 1 TFLOP.

A 1 TFLOP GPU is 1 TFLOP no matter what. And in turn your original premise tried to "push" the performance of PS4's target GPU up there with a computer using a 680 based on that thought process I just quoted. So no, nobody has said what you are saying.

Kb-Smoker
10-Jul-2012, 04:14
I never said the console hardware wouldn't perform better than PC hardware. This is what I'm saying is wrong.



A 1 TFLOP GPU is 1 TFLOP no matter what. And in turn your original premise tried to "push" the performance of PS4's target GPU up there with a computer using a 680 based on that thought process I just quoted. So no, nobody has said what you are saying.
So you are saying the console hardware will perform better. But that was my point. :?: I really don't have a clue what you are going on about. Seems you agree with what I'm saying.

Was it the fact I used TFLOPs as a measurement of this power difference?

But I was just using your example comparing the power in GFLOPs in this thread. The reason I used it on GAF is because they think that GFLOPs equal performance. Really, GFLOPs are all they care about, which is silly IMO. Like I said on GAF, I was making it easy to understand.

console's GPU would have a max of say 900 GFLOPs used (it still deals with the same things only much leaner) while the PC GPU only has 450 GFLOPs used.
I was saying the PC doesn't lose the performance [measured in your GFLOPs, for example] like you were saying; the console gains the performance [increases the GFLOPs].

It's worth noting that when Carmack says 2x he's talking about DX9. DX11 will reduce that somewhat.

Also, that level of optimisation will only apply to games at least a couple of years into the console lifecycle because of the time it takes developers to optimise for console hardware. So I wouldn't expect a 1.8 TFLOP GPU in PS4 to be matching the 7970 on day 1. Two years down the line in newer games it might, but of course by then the 7970 will be mainstream-level performance.

Finally, when we say PCs have half the efficiency of consoles, that would only be at console-level graphics, i.e. it would take double RSX performance to achieve PS3-level visuals in a modern game. Once you start scaling the graphics up I expect PC games get far less efficient than that due to the lack of optimisation given over to graphics beyond the console level.

That is the main point I have been making. I believe these games we saw at E3 will be running on PS4/X720. That was the whole point of this debate. I will go even farther and say that by the end of the gen, games will look better than the tech demos. This is why you made this thread: because you do not believe the PS4/X720 at the given specs can handle these games. That is the thread you should have posted....

Acert93
10-Jul-2012, 04:24
Hmmm, what is probably missed in here is that consoles also get to "cheat" at certain workloads/optimizations as the games are designed for them. e.g. the Xbox 360 has issues with texturing (specifically AF) as it is a performance hog. So a PC may not get much, or any, benefit in performance by matching 1-4x AF, but if 16x AF was applied in both scenarios the equation totally flips.

It really depends on what your workloads are and what you are willing to give up. Console developers are willing to give up a ton of IQ to match performance, but may in turn complain because some of the PC overhead makes a 60Hz game difficult on the PC. Closed boxes have a lot of advantages (specific feature set, well understood latencies, larger target audience to justify investment, etc) but the big one is being targeted as a baseline.

Maybe the best question would be to ask a handful of software engineers at a number of game studios making various games--both high end and multiplatform--if they could get better visual results from a 2 TFLOP GPU with an 8-core CPU and 8GB of memory on a PC, or a 1 TFLOP GPU with a 4-core CPU and 4GB of memory on a console?

Of course if you let them cheat and cut texture resolution, disable AF, apply full screen blur (ohhh sorry, post process AA!), cut LOD, and allow a ton of jaggies to run wild and lock at 30Hz BUT call it the "same", then the closed "cheat" box will always win :P

Andrew Lauritzen
10-Jul-2012, 04:28
That is the main point I have been making. I believe these games we saw at E3 will be running on PS4/X720. That was the whole point of this debate. I will go even farther and say that by the end of the gen, games will look better than the tech demos.
Of course they'll run in some form. I doubt it'll be at the same resolution and quality levels, but who really cares? They'll make the best use of the hardware that they can. And frankly, I'll bet they could do even better on the PCs they are demoing on now if they spent some time optimizing. There really is an absurd amount of power in modern high-end GPUs... people just throw a lot of it away by just jacking up resolutions, shadow maps, etc. without actually implementing more efficient algorithms.

bgassassin
10-Jul-2012, 04:30
So you are saying the console hardware will perform better. But that was my point. :?: I really don't have a clue what you are going on about. Seems you agree with what I'm saying.

Was it the fact I used TFLOPs as a measurement of this power difference?

But I was just using your example comparing the power in GFLOPs in this thread. The reason I used it on GAF is because they think that GFLOPs equal performance. Really, GFLOPs are all they care about, which is silly IMO. Like I said on GAF, I was making it easy to understand.


I was saying the PC doesn't lose the performance [measured in your GFLOPs, for example] like you were saying; the console gains the performance [increases the GFLOPs].



That is the point I have been making. I believe these games we saw at E3 will be running on PS4/X720. That was the whole point I was saying. I will go even farther and say that by the end of the gen, games will look better than the tech demos.

That was only a part of it. As I was saying, the problem is the example you were giving, because context-wise it doesn't agree at all.

And secondly, there was you using Carmack's tweet to back up that point, which, given the proper context according to some of the posts in this thread, doesn't work at all.

And to say end-gen PS4/Xbox3 games will look better than those demos, considering the hardware used, is expecting a lot from a fully fleshed-out game even that late. But they seem to want to push console gens longer before a successor is released, so they may figure out some tricks to achieve it. :lol:

Of course if you let them cheat and cut texture resolution, disable AF, apply full screen blur (ohhh sorry, post process AA!), cut LOD, and allow a ton of jaggies to run wild and lock at 30Hz BUT call it the "same", then the closed "cheat" box will always win :P

Of course they'll run in some form. I doubt it'll be at the same resolution and quality levels, but who really cares?

Haha. This is what I'm getting at. I don't see any way the console version will be 1:1 with the PC version based on what we know so far.

Kb-Smoker
10-Jul-2012, 04:59
Haha. This is what I'm getting at. I don't see any way the console version will be 1:1 with the PC version based on what we know so far.

It's funny how little things change; people said the same thing about the FF7 HD tech demo. "No way they will be able to run that." Now go back and watch how dated that thing looks. Maybe you were not around back then to know...

bgassassin
10-Jul-2012, 05:15
It's funny how little things change; people said the same thing about the FF7 HD tech demo. "No way they will be able to run that." Now go back and watch how dated that thing looks. Maybe you were not around back then to know...

That was done on PS3. Not the same comparison as to what we're talking about now.

Kb-Smoker
10-Jul-2012, 05:41
That was done on PS3. Not the same comparison as to what we're talking about now.

That was 2005. PS3 launched at the end of 2006. It was running on SLI 6800s, which were the first PS3 "dev kits." :wink: Not only was it running on the best PC GPU out, it ran them in SLI.

Now you have a tech demo running on the top-of-the-line single GPU.

bgassassin
10-Jul-2012, 05:44
That was 2005. PS3 launched at the end of 2006. It was running on SLI 6800s, which were the first PS3 "dev kits." :wink:

And PS4 dev kits aren't running on GTX 680s are they? :wink:

Acert93
10-Jul-2012, 05:46
And PS4 dev kits aren't running on GTX 680s are they? :wink:

I am beginning to believe you are taking a perverse joy in pointing things like this out :cry:

:wink:

bgassassin
10-Jul-2012, 05:52
I am beginning to believe you are taking a perverse joy in pointing things like this out :cry:

:wink:

I have no idea what you are talking about. :twisted: :razz:

Kb-Smoker
10-Jul-2012, 06:01
And PS4 dev kits aren't running on GTX 680s are they? :wink:

There are no PS4 dev kits. The rumor from June says it's all on paper.

http://www.neogaf.com/forum/showthread.php?t=477540

The PlayStation 4 is still only on paper... :lol:

The GPU rumor is Tahiti, which is the top-of-the-line AMD model... ;)

bgassassin
10-Jul-2012, 06:20
There are no PS4 dev kits. The rumor from June says it's all on paper.

http://www.neogaf.com/forum/showthread.php?t=477540

:lol:

The GPU rumor is Tahiti, which is the top-of-the-line AMD model... ;)

:lol:

Yes, there are no PS4 dev kits despite the target specs coming out last year. :roll:

And the rumor says 18 CUs, the power of which even you've attested to in this very thread. I give you credit for trying though. :smile:

Kb-Smoker
10-Jul-2012, 06:26
:lol:

Yes, there are no PS4 dev kits despite the target specs coming out last year. :roll:

And the rumor says 18 CUs, the power of which even you've attested to in this very thread. I give you credit for trying though. :smile:

Not sure how rumored target specs prove anything. You have to have it on paper before you make the chips. :shock:

So the confirmed leaked target specs/SDK of the Wii U you discount, but the rumored target specs of the PS4 are set in stone? :lol: :wink:

I still think they will have an APU with a GPU, going back to the first PS4 rumors. It may not be a GPU/CPU APU but it's going to have something on there with the CPU. The good thing about PS4/X720 is they should announce all the specs with the console, unlike the Wii U.

Andrew Lauritzen
10-Jul-2012, 06:27
It's funny how little things change; people said the same thing about the FF7 HD tech demo. "No way they will be able to run that." Now go back and watch how dated that thing looks. Maybe you were not around back then to know...
No one's saying it's not going to look good, etc., I don't think; just keep reasonable expectations. Obviously technology moves along, but if we're talking consoles in the next year or two at reasonable price points, it's doubtful they are going to have hardware at the level of a GTX 680. And no, being a fixed console design point is not going to make up that delta in raw performance in general.

But hey like I said I doubt they are really optimizing the hell out of the PC implementation, so it's quite possible you could achieve what they have with much less hardware.

bgassassin
10-Jul-2012, 06:46
Not sure how rumored target specs prove anything. You have to have it on paper before you make the chips. :shock:

So the confirmed leaked target specs/SDK of the Wii U you discount, but the rumored target specs of the PS4 are set in stone? :lol: :wink:

What? LOL.

I also give you credit for being consistent about twisting posts and being all over the place. I've never discounted the Wii U target specs. :smile: I've said that to you the last time you said that to me. You even quoted me in the Wii U GPU thread saying I still expect the base architecture to resemble an R700. So hopefully this time you'll remember that from here on. And now rumored target specs don't prove anything, but you sure harp on Wii U's as proving something. Don't know what Wii U has to do with this thread though.

Regardless PS4 won't have a GTX 680 or AMD equivalent. And expecting 1:1 performance with those demos and eventually surpassing them at any point next gen based on what we know is expecting a lot.

Kb-Smoker
10-Jul-2012, 07:04
No one's saying it's not going to look good, etc., I don't think; just keep reasonable expectations. Obviously technology moves along, but if we're talking consoles in the next year or two at reasonable price points, it's doubtful they are going to have hardware at the level of a GTX 680. And no, being a fixed console design point is not going to make up that delta in raw performance in general.

But hey like I said I doubt they are really optimizing the hell out of the PC implementation, so it's quite possible you could achieve what they have with much less hardware.

They said no hardware optimizing for the Square Enix tech demo. If you notice, at E3 everyone was using a single 680 GTX for their next-gen tech demos. I find it hard to believe everyone just happened to be demoing on this same card. It means something...


RPGSite pushed a little to find out what graphics card was powering Square Enix’s demo and although Hashimoto didn’t reveal its name, he said that ‘what I can say is that what we’re using is about the equivalent as what is being used by any other companies for their tech demos.‘

The equivalent as what is being used by any other companies huh? Well, we do know that Epic Games demonstrated Unreal Engine 4 on a single GTX 680. And we do know that Crytek used a GTX 680 for their CryEngine 3 tech demos. Gearbox has also used Nvidia’s GTX 680 cards to showcase the PC, PhysX accelerated, version of Borderlands 2. It’s also no secret that Nvidia’s GTX 6xx series was heavily used in this year’s E3 and we also know that the freshly released GTX 690 was not used by any company to showcase their tech demos.

Put these things together, and you get the card that powered the Agnis Philosophy Tech Demo. In other words, yes. Agnis Philosophy was running on a single GTX 680. In addition, the build that was demonstrated was not optimized at all, meaning that Square Enix could actually produce these graphics in real-time (when all physics, AI, and animations are added to the mix).

Now guess what Star Wars 1313 was running on? Yes, a 680 GTX. Not hard to connect the dots here, guys....

http://www.dsogaming.com/news/the-impressive-agnis-philosophy-tech-demo-was-running-on-a-single-gtx680/

Andrew Lauritzen
10-Jul-2012, 07:28
They said no hardware optimizing for the Square Enix tech demo. If you notice, at E3 everyone was using a single 680 GTX for their next-gen tech demos. I find it hard to believe everyone just happened to be demoing on this same card. It means something...
I wouldn't read too much into it honestly... they're just picking the fastest single GPU available, nothing magical.

ultragpu
10-Jul-2012, 10:10
Based on the video interviews and stuff I've read about Agni, it was run on a single 680 GTX with fps ranging anywhere from 30-60fps, heavily unoptimized ("polygon modeled toes inside shoes", etc.), all at 1080p resolution.
Now assuming we run it with the rumored PS4 spec, 4GB GDDR5 ("let's be generous") + 1.84 TF GPU, plus properly optimized (as in cutting the unnecessary toes etc.), would you say the demo can retain 1080p at an average 30fps with FXAA and the same fidelity for the rest? This is the billion-dollar question, isn't it? ;)

Andrew Lauritzen
10-Jul-2012, 17:11
Based on the video interviews and stuff I've read about Agni, it was run on a single 680 GTX with fps ranging anywhere from 30-60fps, heavily unoptimized ("polygon modeled toes inside shoes", etc.), all at 1080p resolution.
Now assuming we run it with the rumored PS4 spec, 4GB GDDR5 ("let's be generous") + 1.84 TF GPU, plus properly optimized (as in cutting the unnecessary toes etc.), would you say the demo can retain 1080p at an average 30fps with FXAA and the same fidelity for the rest? This is the billion-dollar question, isn't it? ;)
Impossible to say of course without knowing the details of the demos but I wouldn't be surprised if dropping AA/30Hz and optimizing it could have it running in that sort of spec (or even lower really). But I'm sort of sceptical of that spec TBH... time will tell.

AlexV
10-Jul-2012, 17:41
Based on the video interviews and stuff I've read about Agni, it was run on a single 680 GTX with fps ranging anywhere from 30-60fps, heavily unoptimized ("polygon modeled toes inside shoes", etc.), all at 1080p resolution.

Maybe people need to spend more time asking exactly what was pre-computed and what was computed on the fly in that particular demo, instead of running with the "it was totally unoptimized assets yo" line. Although it does look pretty.

Kb-Smoker
11-Jul-2012, 04:09
Based on the video interviews and stuff I've read about Agni, it was run on a single 680 GTX with fps ranging anywhere from 30-60fps, heavily unoptimized ("polygon modeled toes inside shoes", etc.), all at 1080p resolution.
Now assuming we run it with the rumored PS4 spec, 4GB GDDR5 ("let's be generous") + 1.84 TF GPU, plus properly optimized (as in cutting the unnecessary toes etc.), would you say the demo can retain 1080p at an average 30fps with FXAA and the same fidelity for the rest? This is the billion-dollar question, isn't it? ;)
I think if you change that from 1080p to 720p it's a yes.... :wink: Maybe by the end of the gen you can pull this off at 1080p....


I wouldn't read too much into it honestly... they're just picking the fastest single GPU available, nothing magical.
I dunno... You don't have a single one running SLI or even the brand new 690 GTX. Seems like they targeted this spec for a reason, and every one of these will be next console engines/games. Maybe it is nothing; doubt we would ever find out if this was true or not.

Andrew Lauritzen
11-Jul-2012, 07:58
I dunno... You don't have a single one running SLI or even the brand new 690 GTX.
That's cause SLI/AFR is trash :)

Kb-Smoker
13-Jul-2012, 07:09
That's cause SLI/AFR is trash :)

Epic is saying everyone is showing Sony and MS what they can do with this power. Maybe someone said "here is the target GPU, make what you can"... the target GPU was the 680.

"In determining what the next consoles will be, I'm positive that [Sony & Microsoft are] talking to lots and lots of developers and lots of middleware companies to try and shape what it is. We've certainly been talking with them and we've been creating demonstrations to show what we think.


"And obviously the Elemental demo, same thing. We're certainly showing capability if they give s that kind of power, but so is everybody else."


Epic even says if they can't do that today, then delay the consoles another year.


http://www.videogamer.com/xbox360/gears_of_war_judgement/news/epic_happy_to_wait_for_massive_leap_in_next-gen_console_performance.html

aaronspink
14-Jul-2012, 10:45
:lol:

Yes, there are no PS4 dev kits despite the target specs coming out last year. :roll:

And the rumor says 18 CUs, the power of which even you've attested to in this very thread. I give you credit for trying though. :smile:

If there aren't dev kits, don't expect a PS4 until WELL into 2014. You do realize that dev kits are released well before hardware is even taped out right?

Rangers
14-Jul-2012, 11:45
He was being sarcastic.

bgassassin
14-Jul-2012, 21:54
He was being sarcastic.

I guess the eye roll wasn't clear enough.

user542745831
27-Sep-2012, 18:03
Just came across this:




http://games.on.net/2012/09/why-the-pc-version-of-nfs-most-wanted-will-be-the-best-around-criterion-talks-tech/

[...]

games.on.net: Do you know which features of DX11 you’ll be using?

Leanne Loombe: We’re primarily leveraging the increased efficiency of DX11 to give improved performance. The move to DX11 from DX9 has given us around a 300% improvement in rendering performance.

[...]


:grin:

Rangers
27-Sep-2012, 18:57
Darn, shame Wii U missed out on DX 11 then :lol:

almighty
27-Sep-2012, 23:36
I have my 3x AMD 7950s at a constant 99% load so they're being used to the fullest :wink:

antwan
28-Sep-2012, 09:33
I have my 3x AMD 7950s at a constant 99% load so they're being used to the fullest :wink:

What are they used for though? Running console-ports in a higher resolution? :smile:

If they'd optimise for PC then you would be playing SimCity the size of New York with the detail level of Crysis :cool:

almighty
28-Sep-2012, 09:45
What are they used for though? Running console-ports in a higher resolution? :smile:

If they'd optimise for PC then you would be playing SimCity the size of New York with the detail level of Crysis :cool:

For playing Crysis with SGSSAA :cool:

user542745831
06-Oct-2012, 14:02
Just came across the following interview again and this part appears to fit within this thread quite well:




http://www.eurogamer.net/articles/digitalfoundry-tech-interview-metro-2033?page=4

Digital Foundry: How would you characterise the combination of Xenos and Xenon compared to the traditional x86/GPU combo on PC? Surely on the face of it, Xbox 360 is lacking a lot of power compared to today's entry-level "enthusiast" PC hardware?

Oles Shishkovstov: You can calculate it like this: each 360 CPU core is approximately a quarter of the same-frequency Nehalem (i7) core. Add in approximately 1.5 times better performance because of the second, shared thread for 360 and around 1.3 times for Nehalem, multiply by three cores and you get around 70 to 85 per cent of a single modern CPU core on generic (but multi-threaded) code.

Bear in mind though that the above calculation will not work in the case where the code is properly vectorised. In that case 360 can actually exceed PC on a per-thread per-clock basis. So, is it enough? Nope, there is no CPU in the world that is enough for games!

The 360 GPU is a different beast. Compared to today's high-end hardware it is 5-10 times slower depending on what you do. But performance of hardware is only one side of equation. Because we as programmers can optimise for the specific GPU we can reach nearly 100 per cent utilisation of all the sub-units. That's just not possible on a PC.

In addition to this we can do dirty MSAA tricks, like treating some surfaces as multi-sampled (for example hi-stencil masking the light-influence does that), or rendering multi-sampled shadow maps, and then sampling correct sub-pixel values because we know exactly what pattern and what positions sub-samples have, etc. So, it's not directly comparable.
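
Spelling out the arithmetic in that CPU estimate (my reconstruction; the individual factors are Shishkovstov's own approximations):

// Three Xenon cores vs one Nehalem core, both with their SMT bonus, on
// generic multi-threaded code, using the multipliers quoted above.
#include <cstdio>

int main() {
    const double xenon_core   = 0.25 * 1.5;  // quarter of a Nehalem core, +50% from the shared second thread
    const double nehalem_core = 1.0  * 1.3;  // one Nehalem core, +30% from its second thread
    const double ratio = (3.0 * xenon_core) / nehalem_core;   // ~0.87
    std::printf("3 Xenon cores ~= %.0f%% of one modern core\n", ratio * 100.0);
    // Lands around the top of the quoted 70-85 per cent range; the inputs are rough.
    return 0;
}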

almighty
06-Oct-2012, 16:51
Capcom once said that the 360's Xenon CPU is equal to a 3GHz Pentium dual-core, so it's really slow by today's standards.

Shifty Geezer
06-Oct-2012, 16:58
Depends what you're doing. Maths throughput should still be high, with 3 wide vector units at 3 GHz.

Mobius1aic
08-Oct-2012, 22:24
Depends what you're doing. Maths throughput should still be high, with 3 wide vector units at 3 GHz.

Which is probably why the 360 has managed to stay competitive for so long when it comes to games with good physics, animation, etc. that need GFLOPs.