I don't like CPUs. OK *spin*

The move to APU-like systems on PC might take a while.

The sheer amount of brute force a big GPU and big CPU can throw at any problem means a move to a 65, 95, or even 125 W APU/SoC would be a huge net loss, especially in terms of graphics.

In the short term, a fast quad-core PC could probably forgo the GPU simulation that was being run using a sliver of PS4Bone GPU time (doing it on the CPU instead) and still be faster.
 
In the short term, a fast quad-core PC could probably forgo the GPU simulation that was being run using a sliver of PS4Bone GPU time (doing it on the CPU instead) and still be faster.

No. If you design the game correctly, the PC will die very fast synchronizing GPU <-> CPU.

P.S. Not to mention that CPUs obviously need to die already. For gaming PCs, you don't need them; you can obviously run an OS on a modern GPU.
 
Is it not an option to just leave the consoles' GPGPU stuff on CPU SIMD for the PC version? Or would that mean major changes to the game engine?

EDIT: didn't see the above 2 posts before posting
 
No. If you design the game correctly, the PC will die very fast synchronizing GPU <-> CPU.

P.S. Not to mention that CPUs obviously need to die already. For gaming PCs, you don't need them; you can obviously run an OS on a modern GPU.


Run an OS on a GPU? Never heard this before. You need a CPU, even a simple one.
 
That's fine, it can be written. Or better, we write another OS: GameOS! Wait a minute.... :)

If this approach is so much better, why hasn't the latest generation of consoles gone down this route?

P.S. PCs are a bad platform for modern interactive computer graphics products. That's all I want to say.

Someone should tell that to all the devs who keep making the best versions of their multiplatform games for the PC. They don't seem to be aware.
 
That's fine, it can be written. Or better, we write another OS: GameOS! Wait a minute.... :)

P.S. PCs are a bad platform for modern interactive computer graphics products. That's all I want to say.

You're really talking about running the network stack, file system, content store, UI, etc. all on the GPU?
 
You're really talking about running the network stack, file system, content store, UI, etc. all on the GPU?

Yes, the GPU is the new CPU, just without all the backward-compatibility baggage. More than that, each modern Intel/AMD CPU is just a giant x86 emulator on top of totally different hardware, which is really just a "narrower GPU" with more cache. :)
 
If this approach is so much better, why hasn't the latest generation of consoles gone down this route?

To make an easier-to-understand machine? The amount of unjustified whining Sony got for the PS3 was totally ridiculous.
For the same reason the XBO is still DX-only and the PS4 has a whole "scene-emulation" layer: to make it easier for PC-centric people to understand the hardware.

Someone should tell that to all the devs who keep making the best versions of their multiplatform games for the PC

I'm talking about technical stuff; business stuff is totally different. From a business perspective, making a PC game makes a lot of sense right now: the install base is better than ever, there are good content distribution networks, etc.
 
To make an easier-to-understand machine? The amount of unjustified whining Sony got for the PS3 was totally ridiculous.
For the same reason the XBO is still DX-only and the PS4 has a whole "scene-emulation" layer: to make it easier for PC-centric people to understand the hardware.

Good point. But to be fair, for that reason specifically, it's unrealistic for this change to take place in the PC space. If we're ever going to move to a GPU-only model, I'd expect consoles would need to lead the charge, because that's both where the incentive lies and where the dedicated developer support would come from.

I'm talking about technical stuff; business stuff is totally different. From a business perspective, making a PC game makes a lot of sense right now: the install base is better than ever, there are good content distribution networks, etc.

My point, though, was that the platform which is "bad for modern interactive computer graphics products" generally sees the better versions of multiplatform games. That's not because developers put more effort into the PC versions than the console versions, but because raw power (on this scale) can usually trump the ability to run latency-sensitive CPU tasks via GPU async compute. At least, that holds as long as you're not going out of your way to design the game in such a way that would specifically hobble the discrete architecture by running lots of latency-sensitive CPU tasks on the GPU.

There's no denying that at the system-architecture level, an HSA-style/shared-memory APU is superior to a discrete CPU-plus-GPU setup (although I guess it does still come with its own disadvantages, like memory contention). But to say that equates to a more capable machine for running games while completely ignoring the higher performance and feature set of the discrete system doesn't add up to me. What can be gained by running latency-sensitive CPU tasks on the GPU that can't be outweighed by a discrete system with both a CPU and GPU that are twice as powerful (or more)? That's assuming, of course, that you optimise your code for the discrete setup and don't just try to run the HSA-optimised code on it unchanged.
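To make that sync cost concrete, here is a minimal D3D11 sketch (the function and buffer names are hypothetical) of the worst case for a discrete system: the CPU needing a same-frame compute result back from the GPU. The Map() call is where the stall lands; on a shared-memory APU the result would simply already be CPU-visible.

```cpp
// Hypothetical D3D11 sketch: the CPU reads back a same-frame GPU compute result.
// Assumes gpuResult is a DEFAULT-usage buffer written by a compute shader and
// staging is a matching D3D11_USAGE_STAGING buffer created with CPU_ACCESS_READ.
#include <d3d11.h>
#include <cstring>

void ReadBackForGameplay(ID3D11DeviceContext* ctx,
                         ID3D11Buffer* gpuResult,
                         ID3D11Buffer* staging,
                         void* dst, size_t size)
{
    // Queue a GPU-side copy into the CPU-readable staging buffer.
    ctx->CopyResource(staging, gpuResult);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // Blocking map: the CPU sits here until the GPU has executed everything
    // queued before the copy. This is the GPU <-> CPU synchronization stall.
    if (SUCCEEDED(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        std::memcpy(dst, mapped.pData, size);
        ctx->Unmap(staging, 0);
    }
}
```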
 
Yes, the GPU is the new CPU, just without all the backward-compatibility baggage. More than that, each modern Intel/AMD CPU is just a giant x86 emulator on top of totally different hardware, which is really just a "narrower GPU" with more cache. :)

I could see some convergence between the CPU and GPU in the future, with one processor that fits both applications. I just didn't think things were there yet.
 
If we're ever going to move to a GPU-only model, I'd expect consoles would need to lead the charge

I agree. But currently console platform-holders are heavily influenced by developer mindset. And that's why I would like to preach about the change. :)

while completely ignoring the higher performance and feature set of the discrete system

I would argue that the PC is not a hardware platform. The PC platform = DX11. And DX11 does not have better performance than the PS4, for example (it does in some places, but it's not universally better).
More than that, the PC is a combination of DX11 and some hardware quirks that you need to account for (i.e. you cannot build your game around the GTX980, because you will get unbelievably low performance even on the GTX960, for example). The PC has GPUs on the market with a 50x difference in raw power, which means whole programming approaches, not just some effects, do not always carry down to the lower end.
In other words, I would argue that current PC multiplatform games are designed for a "PC with GCN and DX11" architecture that has worse performance than the PS4 in 99% of cases.

would specifically hobble the discrete architecture

But you may need to do it for some "real next-gen" graphics. Take Tomorrow's Children, for example: they do three draw calls for each voxel in the scene! I think a PC would die very, very fast there. And they don't do it to cripple anything; it gets them real-time GI with a fully dynamic environment (destruction and such), and it looks gorgeous.
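For context on why that pattern hurts on DX11 specifically, here is a rough sketch (purely illustrative, not the game's actual renderer) of a per-voxel draw loop. Under DX11 each DrawIndexed call pays CPU-side driver overhead, so tens of thousands of voxels per frame bottleneck the CPU long before the GPU runs out of shading power, whereas a console's thin command-buffer path makes each submission close to free.

```cpp
// Illustrative sketch only: one draw call per voxel, the pattern being discussed.
#include <d3d11.h>
#include <vector>

struct Voxel
{
    unsigned indexCount;   // indices for this voxel's geometry
    unsigned firstIndex;   // offset into the shared index buffer
    int      baseVertex;   // offset into the shared vertex buffer
};

void DrawVoxels(ID3D11DeviceContext* ctx, const std::vector<Voxel>& voxels)
{
    for (const Voxel& v : voxels)
    {
        // Per-voxel constant/state updates omitted. Each iteration is a separate
        // API submission, and on DX11 that per-call cost is paid on the CPU.
        ctx->DrawIndexed(v.indexCount, v.firstIndex, v.baseVertex);
    }
}
```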
 
GPUs as we know them are not capable of performing the necessary accesses to privileged memory locations, nor do they have a very rich interrupt and exception handling scheme beyond "if something happens, defer to Host". Kaveri apparently had some physical capability to write to kernel memory, if it were ever set in firmware. The next APU is hard-wired to prevent this.
Some of the design choices that made it possible to treat certain components of a GPU as user space visible front ends included architecturally defining the GPU as a dependent on the CPU and treating allocations to the GPU's memory space as a guest in an IO virtualization scheme.

The GPU plugs into a CPU-provided memory system managed by a CPU-run OS, and it accepts whatever enumerations and commands a slave device should from the Host.
The code that makes up a post-processing shader does not exist in a space or a level of control that can exist independent of the central processor. At the lowest level of system management where it could, I do not know if the 1000-foot level perception of a GPU would really exist.

That's not to say there aren't systems that lack memory protections or have extremely restricted abilities to handle a dynamic environment. In that regard, however, I would say a GPU-only system would not be so much an abandoning of CPU legacy functionality as a discarding of newfangled system protections and an embrace of the most primitive embedded or earliest electronic computer architectures.
That's not to go into the reality that CPU architectures like the latency-optimized ARM cores can measure their interrupt latencies in microseconds, whereas the delay for dispatch launch under load for the PS4 has been measured in the tens of milliseconds.
 
The GPU plugs into a CPU-provided memory system managed by a CPU-run OS

Modern GPUs have full virtual memory support. They will have full virtualization support very soon (gaming on-demand, i.e. streaming, requires it to be profitable).
All the dependencies on the CPU in the PC architecture were explicitly created by Intel out of fear that GPUs will eventually replace CPUs. My prediction: when Intel designs a GPU that is on par with the newest NV/AMD parts, all the "APU restrictions" will be "magically" lifted.
You can quote my message when it happens. :)

whereas the delay for dispatch launch under load for the PS4 has been measured in the tens of milliseconds

Problems with dispatch are problems of the current legacy architecture. In the end, a dispatch is just a memory write.
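To illustrate what "a dispatch is just a memory write" looks like, here is a rough HSA-style sketch; the structs and field names are purely illustrative (loosely modelled on user-mode dispatch queues), not any real runtime's API. The CPU fills a packet in shared memory and rings a doorbell; no driver call or kernel transition is involved, and whatever latency remains comes from how the GPU front end consumes the queue.

```cpp
// Illustrative only: an HSA-style user-mode queue where launching GPU work is
// a couple of stores to shared memory plus a doorbell write.
#include <atomic>
#include <cstdint>

struct DispatchPacket
{
    uint16_t header;              // packet type and fence/scope bits
    uint16_t workgroupSize[3];
    uint32_t gridSize[3];
    uint64_t kernelObject;        // handle to code the GPU front end understands
    uint64_t kernargAddress;      // kernel arguments, visible to both CPU and GPU
};

struct UserModeQueue
{
    DispatchPacket*       packets;      // ring buffer in CPU/GPU-shared memory
    uint32_t              size;         // power-of-two packet count
    std::atomic<uint64_t> writeIndex;   // next free slot
    volatile uint64_t*    doorbell;     // memory-mapped register the GPU watches
};

void Dispatch(UserModeQueue& q, const DispatchPacket& p)
{
    const uint64_t slot = q.writeIndex.fetch_add(1, std::memory_order_relaxed);
    q.packets[slot & (q.size - 1)] = p;   // plain store of the dispatch packet
    // A real queue would publish the packet header last, with release semantics,
    // so the packet processor never sees a half-written packet.
    *q.doorbell = slot;                   // the memory write that launches the work
}
```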
 
Modern GPUs have full virtual memory support.
They fully support the role of a VM guest. They are not trusted with the equivalent of a ball of yarn in kernel memory.

All the dependencies on the CPU in the PC architecture were explicitly created by Intel out of fear that GPUs will eventually replace CPUs.
The dependencies on the CPU stem from GPUs having a history and a present of being too primitive to handle the full scope of a modern functioning system.

Problems with dispatch are problems of the current legacy architecture. In the end, a dispatch is just a memory write.
It's not, except at the level of abstraction the drivers, OS, CPU, and GPU (or rather a long chain of individual processors, handlers, and embedded firmware programs that look like a GPU if you're far away) have given to a game programmer.
 