Gflops: Last-Gen and Current-Gen

Deleted member 86764

Guest
Seeing as we’re getting games that don’t appear to be all that much better than the last generation of consoles, I thought it’d be an interesting idea to compare the theoretical gflops between them. I’m not the most technically savvy person, so feel free to correct anything I may have misinterpreted/misunderstood - hopefully, I haven’t numberfudged anything here.

According to Wikipedia/other sources, we have the following GPU figures (in GFLOPS), in ascending order:

PS3: 192
Xbox 360: 240
Xbox One: 1310
PS4: 1843

Then the same for CPUs:

PS4: 102
Xbox One: 112
Xbox 360: 115
PS3: 230

A couple of things stand out immediately for me: (1) the last-gen CPUs theoretically have more gflops than the current-gen ones(!), and (2) the order of the GPUs is exactly reversed for the CPUs. I understand that the Jaguars in the PS4/One are easier to work with and include out-of-order execution.

Their combined totals are:

360: 355
PS3: 422
Xbox One: 1422
PS4: 1945

This means that the PS4 is ‘only’ 5.5x the power of the 360 and 4.6x the PS3. The Xbox One is 4x the 360 and 3.3x the PS3.
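
For anyone who wants to check the arithmetic, here's roughly how those multiples fall out of the figures above (a quick Python sketch, nothing clever):

Code:
# Combined (GPU + CPU) theoretical GFLOPS, using the figures listed above.
totals = {
    "Xbox 360": 240 + 115,   # 355
    "PS3":      192 + 230,   # 422
    "Xbox One": 1310 + 112,  # 1422
    "PS4":      1843 + 102,  # 1945
}

for new in ("PS4", "Xbox One"):
    for old in ("Xbox 360", "PS3"):
        print(f"{new} vs {old}: {totals[new] / totals[old]:.2f}x")
# PS4 vs Xbox 360: 5.48x, PS4 vs PS3: 4.61x
# Xbox One vs Xbox 360: 4.01x, Xbox One vs PS3: 3.37x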

Considering the last generation had one of the longest cycles, these numbers do seem especially poor. I understand that there are other efficiencies going on in these machines (the eSRAM in the One isn't easy to capture in a figure like this, for example), and I also understand that this isn't the only way to measure the potential power of a console.

Anyway, am I the only one that’s disappointed with this?
 
You can't really compare last and current gen processors though. And floating point operations aren't necessarily the most important factor for measuring the value of a CPU's performance to begin with.
 
You can't really compare last and current gen processors though. And floating point operations aren't necessarily the most important factor for measuring the value of a CPU's performance to begin with.

How about Zodiac signs? The Xbox One is a Scorpio, so it's looking for a brave, trusting partner who will see its inner beauty.
 
According to Wikipedia/other sources, we have the following GPUs in ascending order: PS3: 192, Xbox 360: 240, Xbox One: 1310, PS4: 1843. Then the same for CPUs: PS4: 102, Xbox One: 112, Xbox 360: 115, PS3: 230.

For what it's worth, I think the actual numbers for the last-gen CPUs are:

Cell: 204.8 GF
Xenon: 76.8 GF

Also, the RSX packed 232 GFLOPS total; 192 was just the pixel shaders. And I *think* Xenos was 216 rather than 240.
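
If anyone wants to sanity-check those, the usual back-of-the-envelope for a theoretical peak is clock x vector units x FLOPs per cycle per unit. A quick Python sketch, assuming the commonly cited 8 single-precision FLOPs per cycle per unit (a 4-wide fused multiply-add), which is my assumption rather than an official spec:

Code:
# Theoretical peak single-precision GFLOPS = clock (GHz) x vector units x FLOPs per cycle per unit.
def peak_gflops(clock_ghz, units, flops_per_cycle=8):
    return clock_ghz * units * flops_per_cycle

print(peak_gflops(3.2, 8))  # Cell at 3.2GHz, 8 vector units' worth of SIMD -> 204.8
print(peak_gflops(3.2, 3))  # Xenon at 3.2GHz, 3 cores with VMX128          -> 76.8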
 
You can't really compare last and current gen processors though. And floating point operations aren't necessarily the most important factor for measuring the value of a CPU's performance to begin with.

Yep, not even close. I couldn't tell you how many FLOPS an Intel Haswell core does versus an AMD Vishera one. It's just not a relevant metric for CPUs.

Basically, the OP is making last-gen consoles look unduly good by including the CPU FLOP count, which is the one area last-gen consoles excelled at. And ironically, those were actually regarded as some of the worst CPUs ever, even for their time.

To see the folly of this, imagine a PS3 with two Cells and no RSX. That'd be 460 GFLOPS, so it must be better! No, in reality it would probably produce sub-PS2 graphics... Rumor of course was that Sony almost tried this, but backed out after realizing what a disaster it would have been.

All that said, yes, I think the current gen is a little light, but an argument can still be made that it's the typical ~10X jump in real terms (in raw RAM it's a full 16X; in other specs maybe a little less than 10X). I think we're conflating diminishing returns with weak consoles in some cases; in other cases the complaint may be valid. If diminishing returns are factored in, maybe we needed a 20X power jump to see the kind of onscreen generational difference that 10X used to provide. In that case we might unfairly blame the hardware.

That said, Crytek agrees with you:

“We are delighted with the updates to the next-gen hardware but of course always want more,” Tracey added. “Though the PS4 and Xbox-One don’t offer an enormous jump over the previous generation in terms of raw processing power, the custom AMD APU’s within both platforms represent a huge leap forward in terms of integration and capability.”
 
Seeing as we’re getting games that don’t appear to be all that much better than the last generation of consoles, I thought it’d be an interesting idea to compare the theoretical gflops between them. I’m not the most technically savvy person, so feel free to correct anything I may have misinterpreted/misunderstood

This is not a forum for your opinions, especially when you admit in the next sentence you know nothing of the subject.

Lock/delete thread.
 
Economics.

The days of one-off, exotic, purpose-built hardware are long gone because of the cost involved in bringing it to market (just ask Sony). Not to mention that you also have to get developers to learn the hardware and devote resources to making software for it, with minimal kicking and screaming.

Continuing to raise the bar visually is getting more and more expensive each generation, so the gaps between generations aren't as amazing as in the past.

I personally thought this generation would once again have separate CPU/GPU setups, but AMD bought ATI long ago for the very purpose of what is now in both consoles. You simply cannot beat the price/performance offered when getting both in a package from the same vendor. You also avoid creating an entire development ecosystem from the ground up. All at the cost of mystique (for whatever that is worth) and differentiation.

While the overall computing power is disappointing, at least there is an order of magnitude more memory.

Time will tell whether the purpose-built set-top gaming console continues to be a viable business over this generation, because markets and people's interests constantly change. I personally feel there won't be enough system differentiation to wow consumers into continuing to buy locked systems with what amounts to nearly the same software libraries. I don't think it makes sense for consumers and software developers anymore. IMO, for the industry to continue growing it has to break from that model.

Because after all, the game creators don't give a shit what brand you buy into as long as you play their game.
 
nVidia was able to do more with fewer FLOPS by creating a better scheduler, so I suppose not all FLOPS are equal.
 
The Metro dev suggested in 2010 that the 360's CPU was comparable to PC.
The dev might have been misquoted, or perhaps wanted to flatter Microsoft. The Xenon CPU is INCREDIBLY far from a PC CPU. On general code it allegedly achieves roughly 0.3 instructions per clock; that's terrible by any measure. Anything PC absolutely steamrolls that CPU on every workload except heavily optimized streaming floating-point work, where on that very specific task (mostly audio-related, in gaming) it can compare favorably with Intel CPUs up until roughly the Sandy Bridge generation, IIRC, after which it gets stomped again.

Of course, audio is only a tiny fraction of the work of running a game, so on the whole it is hugely slow and inefficient.
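
To put that ~0.3 IPC figure in some perspective with a rough Python sketch (the ~1.5 IPC for a contemporary desktop core is my own ballpark assumption, not a measured number):

Code:
# Rough per-thread throughput on general code = clock (Hz) x instructions per clock.
xenon_per_thread = 3.2e9 * 0.3   # ~0.96 billion instructions/s, using the ~0.3 IPC quoted above
pc_core          = 3.0e9 * 1.5   # hypothetical desktop core at 3GHz and ~1.5 IPC
print(pc_core / xenon_per_thread)  # roughly 4.7x faster per thread on general code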
 
The previous gen managed to launch prior to a bunch of inflection points.
The 90nm SOI process Cell and Xenon launched with was just about the last high-performance process before shrinks became non-guaranteed for scaling, and process troubles have only gotten worse since then.
The previous gen managed to launch without significant dynamic clock and voltage adjustment, something that is at this point an absolute requirement.

Semiconductor manufacturing has become even more top-heavy and expensive. Cheap shrinks, or shrinks that have an economic benefit, are taking more and more time to come about.

Console budgets and specs were provisioned for more loss-leading hardware, and the game industry in general became massively top-heavy in the meantime.
The last gen hit the cap for power consumption for an entertainment consumer appliance, and that cap hasn't budged.

The room to bloat is gone, and ideal exponential scaling has not materialized.
Then there's just diminishing returns in terms of how much effort it takes to improve on quite high-quality results from the trailing edge of the previous gen.
 
The Metro dev suggested in 2010 that the 360's CPU was comparable to PC.

http://www.eurogamer.net/articles/digitalfoundry-tech-interview-metro-2033?page=4

He suggested that vectorised code was a little faster on a per-clock/per-thread basis, not that the CPU was overall faster than a Nehalem.

SSE4 as used by Nehalem was a little behind VMX in terms of featureset as far as I'm aware, but was theoretically capable of the same peak flops. So a 3-core Nehalem at 3.2GHz might be a little slower in vectorised code due to its slightly less advanced featureset, but I'd expect a normal quad Nehalem to be a bit faster at the same clock speed.

And of course, in non-vectorised code the Nehalem would be about 5x faster, as per the same article.

Everything changes with Sandy Bridge though, with AVX both exceeding the VMX featureset and doubling the theoretical throughput. If properly utilised, Sandy Bridge would have stomped all over Xenon in vectorised code and probably performed similarly to Cell.

Haswell has since expanded the featureset and doubled the throughput again. In fact, an 8-core Haswell-E would be closer to the XB1's GPU than its CPU in terms of raw FLOPS.
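
To put some rough numbers on that (the FLOPs-per-cycle figures are the commonly quoted single-precision ones, and the clocks are illustrative picks rather than specific SKUs):

Code:
# Theoretical peak single-precision GFLOPS = clock (GHz) x cores x FLOPs per core per cycle.
def peak_gflops(clock_ghz, cores, flops_per_cycle):
    return clock_ghz * cores * flops_per_cycle

# Commonly quoted FLOPs per core per cycle:
#   Nehalem (SSE: separate 4-wide add + 4-wide mul)  ->  8
#   Sandy Bridge (AVX: 8-wide add + 8-wide mul)      -> 16
#   Haswell (AVX2: two 8-wide FMA units)             -> 32
print(peak_gflops(3.2, 3, 8))    # 3-core Nehalem at 3.2GHz -> 76.8, right around Xenon
print(peak_gflops(3.4, 4, 16))   # quad Sandy Bridge        -> ~218, on paper in Cell territory
print(peak_gflops(3.0, 8, 32))   # 8-core Haswell-E at 3GHz -> 768, closer to the XB1's ~1310 GPU GFLOPS than its ~112 CPU GFLOPS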
 
The Cell SPEs, and to a lesser extent the VPUs in the PPC cores, were also a strong example of the strength of specialization for a specific task.
Cell in particular is an example of how specialization around a decently high-powered vector processor can bring higher performance than multiple process nodes' worth of improvement can for a CPU architecture built around low-power generalist cores. Never mind that it's x86, which had an earlier FP deficit that took multiple chip generations to mostly sort out.

Outside of the ideal work range, however, the benefits of all those years of general performance improvement do show very clearly.
 
As the argument is predicated on the subjective assertion that current-gen isn't much better than last gen, the discussion can't be held as a technical argument, so thread locked.
 