Is PS4's focus on Compute a proper substitute for a more powerful CPU?

Inuhanyou

Veteran
Pardon the stupid question. But it just seems that with Sony talking up HSA, hUMA, APUs and unified memory pool architectures, they are putting a lot of stock into making up for their low-powered CPU with an abundance of leftover GPU compute ability, to do many of the same tasks a CPU would do in games.

I'm just wondering if anyone even knows whether that's actually feasible as a stand-in for not having an i5 or an i7 or some other higher-powered CPU equivalent. I've heard that the CPU inside the PS4, while weak, is decently flexible, so I guess that would help shift tasks somewhat?
 
It depends on what tasks you're going to perform. In a closed system like a console, your main focus is running game code, and certain parts of that code are far better suited to a massively parallel compute stream like the one a GPU gives you. Of course, you can still solve most problems by chucking massive amounts of serial processing at them, but you need far more hardware for the same solution in any problem space of sufficient complexity, and that increases the physical problem of fitting everything into a capped silicon budget with a fixed TDP.
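The serial-versus-parallel distinction above can be sketched in a few lines. This is a toy, CPU-side Python illustration; none of these names come from any real console API:

```python
import random

# Data-parallel task: each output element depends only on its own input,
# so a GPU could compute every element simultaneously across its CUs.
def brighten(pixels, amount):
    return [min(255, p + amount) for p in pixels]

# Serial task: each step depends on the previous result, so extra parallel
# width doesn't help; only faster serial (CPU-style) execution does.
def running_total(values):
    out, acc = [], 0
    for v in values:
        acc += v
        out.append(acc)
    return out

pixels = [random.randrange(256) for _ in range(8)]
print(brighten(pixels, 20))
print(running_total([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

The first function is the kind of work that scales with compute width; the second is the kind that wants fast serial execution, which is the CPU's job.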
 
I think the six additional CUs were a better solution overall compared to the fixed-function logic found in the Xbox One. The GPU compute solution is very flexible.
 
Is it a proper substitute? No, probably not. GPU compute can be very hard to get right even for tasks it's fairly well suited to, let alone the many tasks it isn't. However, strong CPUs take up a lot of silicon die space and consume a lot of power (especially if designed by AMD...), so what we have in today's consoles really is the best we could hope for.

Whatever shortcomings the hardware has, developers will work around it. They always have.
 
Thanks for the information. I'm sure devs will find the right balance for what they have in mind.

I'm just wondering what GPU compute can actually add to the experience for Sony to base a design around it. I'm also assuming that spare GPU FLOPS aren't equivalent to CPU FLOPS for general computation, right? Because 200 or 300% more computational grunt would be insane. Also, I'm a bit in the dark as to what tasks GPU compute can even be used for at this point.
 
So basically, they are stumbling around in the dark..? I know from playing Resogun non-stop for the past week that Housemarque's voxel-physics-based level design is built purely on harnessing GPU compute. In fact, they said they are using GPU compute to the point where they are hardly using the CPU.

But then that seems like a huge thing for GPU compute to actually do. And the applications for it are still unknown. I'm guessing only first-party devs will be able to harness it in a game, because of parity concerns on multiplatform titles. But PC gets platform-specific physics-based features, so who knows, I guess.
 
So are game developers and IHVs...

Erm.. was that a joke?
There's a whole multi-million dollar HPC market that uses a lot of GPGPUs for computing tasks.

Even if you're referring to game code exclusively, we've had real-time physics and texture decompression in GPUs for years.
Sure, they'll need to be creative and find more uses for GPGPU if they want to take full advantage of that hardware, but it's not like they're all completely "in the dark".
 
Erm.. was that a joke?
There's a whole multi-million dollar HPC market that uses a lot of GPGPUs for computing tasks.
HPC != computer games. Devs haven't had (commercially viable) access to GPGPU, so they haven't explored what they can do with it. They'll learn what they can do over the coming years, but at the moment they can't say, "we can do this." Neither can the console companies or the GPU vendors, because they are talking about potential, not specific examples. See Cerny's comments on the topic. There are a few known cases like particle physics, but not (yet) rigid body physics; mostly it's virgin territory to be explored. Devs aren't looking at GPGPU and going, "Aha! We can do this, this and this!" They're looking at it and going, "okay, let's poke around and see what we can do," much as they do with any new hardware, like Cell.

Obviously they're not completely in the dark. That was tongue-in-cheek. But they are not at all clear on what they can do with compute yet.
 
I disagree to some extent. Stuff like SPE programming was harder last gen, and you needed to jump through hoops to even be able to do it (devkits, simulators). Housemarque have already indicated that using CUs is way, way easier and they'd never want to go back.

Crucially, CUs are mostly the same as they are in PC GPUs, and they are easily available. Programming them on PC just the way you can on the PS4 will be harder, but in general we can expect some high-end stuff early (and I consider Housemarque's work pretty high end already, by the way).

And speaking of the SPEs, I think work done for those translates quite well to CUs. Sure, there are some differences, but there are also similarities, especially when they are used for the 'proper' thing that SPEs and CUs have in common, rather than recompiling PPU code for the SPUs just because the PS3 had only one PPU core while the 360 had three.

Right away, I think we should see an easy increase in physics next gen, and on consoles in particular. The Mantle API should give a nice boost to CPU/GPU combined rendering as well.
 
I think PS devs are probably finding the compute resources easier to use, as they were doing the same kind of thing with the Cell. They will probably have a harder time utilising multiple threads on the CPU efficiently.

As for benefits: physics is the easy one, plus temporal-based effects, certain elements of AI, Brownian motion, etc.
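Brownian motion is a good illustration of why that list maps onto compute: every particle takes its own independent random step, so the work splits across as many threads as the hardware has. A toy, CPU-side Python sketch (illustrative names, not any real API):

```python
import random

# Brownian motion is "embarrassingly parallel": every particle takes an
# independent random step each frame, so a GPU could advance one particle
# per thread with no communication between them.
def brownian_step(positions, sigma, rng):
    return [p + rng.gauss(0.0, sigma) for p in positions]

rng = random.Random(42)      # seeded so the run is repeatable
positions = [0.0] * 1000     # all particles start at the origin
for _ in range(100):         # 100 independent diffusion steps
    positions = brownian_step(positions, 0.1, rng)
```

On a real GPU the per-particle step would be a compute kernel rather than a list comprehension, but the dependency structure is the point: no particle ever reads another particle's state.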
 
I disagree to some extent. Stuff like SPE programming was harder last gen
I never said GPGPU was hard, nor that it would go un(der)utilised. Only that, to date, devs aren't entirely sure what they can and can't do with it. Housemarque used it because they tried. If you had asked them two years ago, "what can you use GPGPU for?", they'd probably have replied, like everyone else so far, "you can probably use it for this, or that. Looks like you could. We're not sure. We'll give it a go." There's a reason they said, "we offer this resource for devs to probably use in later years," and not "compute provides this, that and the other for all devs to use." ;)

No-one looks at a new programming paradigm and instantly sees what can and can't be done with it. There are notions as to what it might be good for in general terms, but devs need hands-on time to really understand and master it. Case in point: physics. At first glance, rigid body physics seemed like it might work. Turns out it's not a great fit, but particle physics is. Expect stuff on compute that no-one anticipates, much like no-one looked at Cell and thought 'post-processed antialiasing'.
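The particle-versus-rigid-body point comes down to data dependence: each particle's update reads only its own state, while a rigid-body solver has to resolve contacts that couple bodies together. A toy Python sketch of the particle side (illustrative only, not any real engine code):

```python
# Each particle's update reads only its own state, which is why particle
# physics maps well onto thousands of GPU threads (one per particle).
# Rigid bodies, by contrast, need contact resolution between bodies,
# which couples their updates together and serializes part of the work.
GRAVITY = -9.8  # m/s^2, acting on the y axis

def step_particles(particles, dt):
    # particles: list of (x, y, vx, vy) tuples; every update is independent
    return [(x + vx * dt, y + vy * dt, vx, vy + GRAVITY * dt)
            for (x, y, vx, vy) in particles]

# One simulation step for a small cloud of particles:
cloud = [(0.0, 10.0, 1.0, 0.0), (5.0, 10.0, -1.0, 0.0)]
cloud = step_particles(cloud, 0.1)
```

Scaling this from two particles to two million changes nothing in the code, only in how much parallel hardware you can throw at it, which is exactly the shape of workload the extra CUs are good for.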
 
Not to troll the MS PR department, but I'd think a "more powerful CPU" would be unbalanced.
Sure, there are some tasks that would get an immediate boost, but when you think about how data is manipulated and how much more efficient GPGPU is for a lot of tasks, it makes sense to create a system where all the components work together in perfect tandem. It is then that you can outperform a more powerful CPU. An i5 would be overkill in a lot of situations; it would only help devs who write un-optimised code. And if you train them to program like that, they will need an i7 really fast. Bethesda is a good example: single- and zero-digit framerates.
 
I think you're talking rubbish. It's people who have never coded a line in their life who think GPGPU is 'da bomba'. CPUs are very important still, both as the best approach to many tasks, and as the most sane approach for affordable development. GPGPU provides ancillary performance, like a massive FPU. Cell and Xenon show how massive float performance was a waste of time in a lot of cases and completely inefficient for a lot of workloads, and their performance was mostly used for graphics tasks. We've had many a dev post on this board that, more often than not, the CPU proves the bottleneck in any game.

There are two different, complementary processing resources in the current-gen consoles, each of which brings something specific for devs to use. It's why AMD have invested in HSA instead of just putting in monster GPUs and atrophied CPUs. GPGPU is not a cure-all, not a magic bullet, nor a super-fast CPU that also does graphics, and it's disingenuous to claim that only devs with bad practices and poor training need overpowered CPUs to make up for an inability to run their code on the GPU.
 
Every tech company is putting a big focus on GPGPU. Whether they call it CUDA or OpenCL or something else, the concept is the same.
(PS4) HSA is just a logical result of maximising that GPGPU potential.
Also, TressFX is soft-body physics, so apparently that's a good compute/GPGPU task.
Complex audio is a great fit for compute as well.
Then again, maybe AMD is wrong. Just don't shoot the messenger, that is all :)
 
While the PS3 has a powerful CPU, developers mostly used it to handle graphics work. This is also partly because its GPU was relatively weak.

On PS4, the natural approach is to replace those SPU graphics jobs with GPU ones.


For audio, it would be more about integrating audio into graphics scenes (e.g., using raytracing around graphics objects to decide how to process audio). There is also some ongoing R&D on GPU-based audio processing, e.g., http://dl.acm.org/citation.cfm?id=2484010&bnc=1

For pure compute jobs, I think the GPU has to use a different approach from the CPU. I remember MLAA being done differently on the CPU and GPU; the latter may use an approximation (instead of solving the real equations).

EDIT: As I recall, AMD has a GPU-based pathfinding AI module too.
 
No it's not, there's a huge thread on the topic on this very forum.

Well to be precise, tracing, echo and occlusion stuff should be a good fit.

The biggest unknown right now is that this is the first time hybrid CPU/GPU algorithms are viable (ignoring Cell for a moment), because the two are so tightly linked now.
 