Digital Foundry Article Technical Discussion Archive [2013]

Right, and I believe you're talking to developers, of whom there will be thousands, probably tens of thousands, who know the details of the SDK and the PS4's memory allocation. What I don't believe is that the experiences of PS4 developers - all PS4 developers - are known to other PS4 developers.
If you are in discussion with one of the coordination types who receives feedback from devs (or someone who has talked with someone like that, or someone who has attended a meeting on the current state of the development tools and where work needs to be invested based on current developer hardware usage, or someone who has seen the minutes of the meeting and a slide saying only 1 developer out of all SCEW studios is investing in GPU compute), then yes, you can hear the opinions of all of Sony's developers, as it were.

Regardless, this is all very boring and OT. Either believe (((interference))) or not. Every opinion is just an opinion until proven anyway, so it makes little difference whether (((interference))) knows anything or not when discussing what's happening. Believing devs are using compute without any evidence is just as misguided as believing they aren't using compute because some anonymous voice on the internet said they weren't.
 
I had a long and fun talk with some folks who are more "technology savvy" than me.

According to them, it seems that the real problem for the next-gen platforms will be the CPU, on both systems.

So, in brief, it seems we shouldn't expect any huge change or revolution in game design.
It seems that in linear / heavily scripted games like Uncharted, GOW etc... the two new consoles will shine.
In evolved online games, RTS, open-world games, heavy RPGs, persistent worlds, or more evolved game designs that deal with interaction, evolved AI, physics etc.. and in general "CPU-heavy games", the new consoles will struggle a lot, and PC will be a very clear winner. I am not talking in terms of graphics, but purely in terms of game design opportunity.

Moreover, they are sure that the CPU, for several reasons, will in the end act as a bottleneck for the systems.

(((interference))), as you have some connections, have you heard anything about that?
 
This has been my chief concern as well, but people here and elsewhere generally seem unconcerned, ostensibly because of how absolutely terrible the last generation's CPUs were and how reasonable the 8x Jaguar x86 solution seems to be, plus the dedicated silicon to offload burdensome tasks.

I'd love to hear a technical breakdown of why the CPUs will or won't be an issue.
 
No doubt, like Cell's SPUs, it's going to be alien at first, but those who master the hardware may find it solves all sorts of technical challenges for the CPU and frees up time (particularly optimisation) later in the development cycle.
I'm not sure the GPU compute option is going to be as alien as the SPUs were at their introduction, or even now. The parallel mindset that was ramped up with Cell hasn't gone away, and the high-level optimizations for that kind of programming still apply. There is a pool of knowledge for at least some use of compute shaders in PC development already.

While GPU programming does require an awareness of occupancy, there are a lot of things that are now handled more intuitively than on Cell. The CUs have a read/write cache hierarchy, with instruction and data caches like standard CPUs.
The CUs are multithreaded and the GPU has a decent amount of self-scheduling capability, along with whatever architectural tweaks it has that have not been broadly deployed. Besides Sony's own tweaks, there are elements of Sea Islands which make it more amenable to HSA. Even if the PS4 doesn't have that, it would have some of the features that make it easier.

We should have a better idea of how helpful they are once we get past the initial phase of just getting things running for the first time on a new platform.
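
To make that concrete, here is a minimal sketch of the data-parallel style being discussed, using CUDA syntax purely as a stand-in for whatever compute API the consoles actually expose (the kernel name and launch parameters are made up). The point is that, unlike SPU code, there is no hand-managed DMA into a local store: each thread just reads memory through the cache hierarchy, and the hardware schedules the threads itself.

Code:
#include <cuda_runtime.h>

// Illustration only: CUDA as a stand-in for a console compute API.
// Each thread reads global memory through the CU's cache hierarchy
// (no hand-managed local store) and the hardware does the scheduling.
__global__ void scale(const float* in, float* out, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * k;
}

// Host side: launch enough threads to cover the data and let the
// GPU's own schedulers keep the CUs busy, e.g.
// scale<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0f, n);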



So, in brief, it seems we shouldn't expect any huge change or revolution in game design.
It seems that in linear / heavily scripted games like Uncharted, GOW etc... the two new consoles will shine.
At the very least, I'm hoping the expanded RAM amounts and standard HDD installations will give devs the option of reducing the stop-start-play-ten-seconds-then-cutscene-or-ladder-stops-progress-then-the-way-back-to-the-previous-bread-box-sized-room-is-blocked cadence that has been all the rage these days.
 
So devs don't talk to each other?

Haha, no. I spend as much time on Stack as I do writing code, perhaps more.

And I never said the experiences of all PS4 developers are known to other PS4 developers.

No, but you did make a huge sweeping statement about the state of the use of compute by next-gen console developers. There are a lot of folks experienced in this discipline; just take a look on Stack. And they are very willing to help.

Just that the general sentiment is that compute is hard to do.

Where does the difficulty lie? Are they using OpenCL, CUDA, another abstracted HLL or trying to program the GPGPU hardware directly?
 
Can someone please explain exactly what is meant by GPGPU?

I always thought that it means e.g. doing physics simulation on a GPU. On PC, that would be, for instance, PhysX. One of the best examples out there is Borderlands 2, which looks outstanding with PhysX on high. And there are many other games out there, like Mirror's Edge, Batman... doing physics, cloth, destruction etc. on the GPU.

But you guys keep talking about GPGPU being SPU-like alien tech which will be super difficult to get working... which confuses me??

So please, it would be great if someone could clarify this for me. Thanks!
 

I always thought GPGPU basically meant anything not dealing with rendering being done on the GPU. However, it's always been used for eye-candy effects; it never affects actual gameplay.
 
Developers are calling it either Async Compute or Fine-Grain Compute whenever they talk about the PS4; I had never heard either expression before. They describe it in layman's terms, and it seems to be what I know as GPGPU. So are GPGPU and Async Compute any different, or is one just a more precise, or more generic, term?
 
They're both pretty nebulous terms for making the GPU do stuff not traditionally considered graphics.

Asynchronous Compute might be more specific, since GPGPU can be done in a synchronous fashion and also not in a fine-grained way. That those approaches are not generally appealing for a bunch of applications is something of an aside.
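
For illustration, the coarse-grained, synchronous flavour looks roughly like the sketch below (CUDA syntax as a stand-in for any GPGPU API; bigBatchKernel and runSynchronously are made-up names): the CPU submits one large job and simply blocks until the GPU has finished before touching the results.

Code:
#include <cuda_runtime.h>

// Trivial placeholder workload.
__global__ void bigBatchKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

// Coarse-grained, synchronous GPGPU: submit one large job, then block
// until the GPU has finished before using the results.
void runSynchronously(const float* h_in, float* h_out, int n)
{
    size_t bytes = n * sizeof(float);
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in, bytes);
    cudaMalloc((void**)&d_out, bytes);

    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);
    bigBatchKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaDeviceSynchronize();                      // CPU stalls here
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_in);
    cudaFree(d_out);
}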
 
It's an empty term. Some take it to mean any workload done on the GPU that's not directly related to graphics/pixels, but IMO it really just means utilizing the GPU without going through the traditional graphics pipeline (i.e. not having to use pixel/vertex shaders to achieve a result).
 

That's pretty much how I think of it as well. Game developers sometimes use compute for graphics workloads because it's more efficient than going through pixel/vertex shaders.

Regards,
SB
 

I see. Nevermind.
 
Developers are calling it either Async Compute or Fine-Grain Compute whenever they talk about the PS4; I had never heard either expression before. They describe it in layman's terms, and it seems to be what I know as GPGPU. So are GPGPU and Async Compute any different, or is one just a more precise, or more generic, term?

Are people extrapolating Async Compute from the GCN convention of Asynchronous Compute Engines, which serve to (I think) manage both graphics and non-graphics compute tasks simultaneously?
 
The ACEs handle compute shaders only. The graphics command processors can handle both graphics shaders and compute shaders (the only exception so far is the high-priority/VSHELL command processor of the PS4, which can't handle compute shaders). Maybe one could call a compute shader that is handled by the graphics command processor and is dependent on the output of a usual graphics shader (or the other way around) synchronous compute.

Async compute basically tells us that the program fires up a task, can do something else in between, and gets notified when the compute task finishes (or has to poll its status), with no guarantee of any intrinsic synchronisation that isn't manually enforced.
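
In host-code terms, that fire-and-check pattern looks roughly like this sketch. CUDA streams and events are used purely as a stand-in for the console's compute queues, and computeTask / doSomeCpuWork are made-up names for the GPU job and whatever the CPU does in the meantime.

Code:
#include <cuda_runtime.h>

__global__ void computeTask(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

void doSomeCpuWork();   // hypothetical CPU-side work, defined elsewhere

// Asynchronous dispatch in the sense described above: fire off the
// task, keep the CPU busy, and poll for completion.
void runAsynchronously(float* d_data, int n)
{
    cudaStream_t stream;
    cudaEvent_t  done;
    cudaStreamCreate(&stream);
    cudaEventCreate(&done);

    computeTask<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
    cudaEventRecord(done, stream);          // marker after the kernel

    while (cudaEventQuery(done) == cudaErrorNotReady)
        doSomeCpuWork();                    // CPU does other work

    // Note there is no implicit ordering against other queues/streams:
    // any dependency on this result must be expressed explicitly.
    cudaEventDestroy(done);
    cudaStreamDestroy(stream);
}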
 
Can someone please explain exactly what is meant by GPGPU?
General Purpose [computing] on the GPU. However, it's a bit of a misnomer, as GPU architectures aren't great at dealing with traditional general computing problems. What they are good at is crunching through numbers and algorithms that can be addressed in a linear fashion and parallelised.

I always thought that it means e.g. doing physics simulation on a GPU. On PC, that would be, for instance, PhysX. One of the best examples out there is Borderlands 2, which looks outstanding with PhysX on high. And there are many other games out there, like Mirror's Edge, Batman... doing physics, cloth, destruction etc. on the GPU.

Physics and cloth simulation are examples of things that translate well to GPGPU. Havok is using compute on the next-gen consoles, much as it used the SPUs on PS3.

But you guys keep talking about GPGPU being SPU-like alien tech which will be super difficult to get working... which confuses me??

I used "alien" because I didn't think "different" really covered it sufficiently. GPGPU and Cell's SPUs make you think differently about achieving your desired result. Writing code as you would for a CPU, relying on cache, tight loop performance and branch prediction, will yield poor performance on most GPUs, and not all problems translate well or easily.

But it's hardly a dark art. Apple added support for OpenCL on supported AMD and Nvidia GPUs two operating systems back (OS X 10.6 Snow Leopard) and has been making more use of it ever since. It's now fully supported by OS X's task/thread scheduling system, Grand Central Dispatch (GCD), so if you write OpenCL code for OS X, the kernel will distribute it across GPUs as easily and transparently as across CPU cores, and it's being used more in professional applications like Final Cut Pro and Adobe Lightroom.

This is why I am perplexed at people's apparent difficulties with GPGPU. Yeah, it's different, but if developers were parallelising their job code for Cell or Xenon's six threads, they've made it over the biggest hurdle.
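
As a concrete (and entirely made-up) example of the kind of work that maps well, here is a naive particle integration step in CUDA syntax: thousands of independent, regular calculations, which is exactly the shape of problem compute hardware likes. Branch-heavy, pointer-chasing "CPU-style" game code, by contrast, tends to run poorly if transplanted to the GPU unchanged.

Code:
#include <cuda_runtime.h>

// One thread per particle; every thread does the same regular maths,
// so the work parallelises cleanly across the GPU.
__global__ void integrateParticles(float3* pos, float3* vel,
                                   float3 gravity, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].x += gravity.x * dt;
    vel[i].y += gravity.y * dt;
    vel[i].z += gravity.z * dt;

    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}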
 
Are people extrapolating Async Compute from the GCN convention of Asynchronous Compute Engines, which serve to (I think) manage both graphics and non-graphics compute tasks simultaneously?

That's what Sony is calling it. It could be because the hardware units are called Asynchronous Compute Engines, or because Sony wanted to have computation that is asynchronous and needed to call it something.
 
I believe asynchronous compute refers to the ability to interleave compute tasks alongside normal rendering. Before the ACEs, your GPGPU task went through the main graphics command system, which I suppose could result in inefficiencies.
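
A rough sketch of what that interleaving means from the submission side, again with CUDA streams standing in for the separate hardware queues (real rendering obviously isn't submitted through CUDA, and shadeLikeWork / simulateWork are made-up placeholders): work pushed to independent queues can overlap instead of serialising behind a single graphics command stream.

Code:
#include <cuda_runtime.h>

// Made-up placeholders for "rendering-ish" work and a compute job.
__global__ void shadeLikeWork(float* a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 0.5f;
}

__global__ void simulateWork(float* b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] += 1.0f;
}

// Two independent queues: the hardware is free to interleave/overlap
// the submissions rather than running one strictly after the other.
void submitInterleaved(float* d_a, float* d_b, int n)
{
    cudaStream_t gfxLike, compute;
    cudaStreamCreate(&gfxLike);
    cudaStreamCreate(&compute);

    shadeLikeWork<<<(n + 255) / 256, 256, 0, gfxLike>>>(d_a, n);
    simulateWork <<<(n + 255) / 256, 256, 0, compute>>>(d_b, n);

    cudaStreamSynchronize(gfxLike);
    cudaStreamSynchronize(compute);
    cudaStreamDestroy(gfxLike);
    cudaStreamDestroy(compute);
}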
 

This is my understanding, and is how we know it works. Fine-grained may be a reference to the 64-queue enhancement Sony made (or AMD made for Sony), which I'm certain is there to support their prioritisation scheme.
 
it's different, but if developers were parallelising their job code for Cell or Xenon's six threads, they've made it over the biggest hurdle.

This is a gross oversimplification and tbh is wrong. There can be many hurdles to GPGPU, and "spawning many threads" is not the biggest one (in fact, IMO I wouldn't even consider it a hurdle...). Perhaps I am misinterpreting you.
 