Digital Foundry Article Technical Discussion Archive [2013]

Cerny specifically said the PS4 was designed to be easy to learn, hard to master, and that compute was more for 2-3 years down the road, for those teams looking to really dig deep.

So by that timetable we shouldn't be surprised.

I didn't know compute shaders were supposed to be on the SPUs' level of difficulty; you hear a lot about them on PC and I never got that vibe.
That is what Tim Sweeney once stated:
if the cost (time, money, pain) to develop an efficient single-threaded algorithm for a central processing unit is X, then it will cost two times more to develop a multithreaded version, three times more to develop a Cell/PlayStation 3 version, and ten times more to create a current GPGPU version. Meanwhile, considering game budgets, over two times higher expenses are uneconomical for the majority of software companies!
Though that quote is pretty old, and possibly predates fairly "C like" languages making it into the GPU bag of tools.
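To show what I mean by "C like", here is the sort of minimal kernel the CUDA tutorials show. Purely illustrative and made up by me, nothing to do with Sweeney's numbers or the consoles:

Code:
// Illustrative CUDA sketch only: a minimal kernel computing y = a*x + y.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host-side launch: 256 threads per block, enough blocks to cover n elements.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

The syntax is basically plain C with a launch annotation; from what I read, the hard part is everything around it (data movement, occupancy, synchronisation).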
It still might be a significant effort. For reference, I have low qualifications in IT and was looking at what I could learn to make a better living. I know there is demand for developers able to leverage modern hardware (be it many cores or GPUs), so I searched for the courses available in that field.
Here is what I found: you can easily find IT courses for web development, or your usual database, C or Java work, with overall low academic requirements, but to even get started in GPU programming you pretty much have to be well into a tough (strong maths background) engineering/university curriculum.
From that I'd guess it isn't easy at all / one of the toughest fields in IT.
 
What I have been told is that effectively using GPU compute is hard to do and, like with the PS3's SPUs, teams will struggle with it at first.

It depends on the developer and what tools are provided. If Sony provides a general compute library like they did for the SPUs, then the adoption rate of compute will massively increase, although it should be A LOT easier to use than the SPUs; it's a (relative to the SPUs) mature platform and much nicer to work with.


Here is what I found: you can easily find IT courses for web development, or your usual database, C or Java work, with overall low academic requirements, but to even get started in GPU programming you pretty much have to be well into a tough (strong maths background) engineering/university curriculum.
From that I'd guess it isn't easy at all / one of the toughest fields in IT.

It would seem like this could easily include the entire game development side of things. Not just GPU compute.
 
It depends on the developer and what tools are provided. If Sony provides a general compute library like they did for the SPUs, then the adoption rate of compute will massively increase, although it should be A LOT easier to use than the SPUs; it's a (relative to the SPUs) mature platform and much nicer to work with.

The problems associated with compute don't seem to be trivially solvable with a magic library. For SPU Sony just provided a scheduler, but compute already has a scheduler (the GPU).

I haven't done much compute yet, but I wrote a simple copy compute shader (almost the simplest compute shader possible) and noticed while researching that there are a large number of ways to write it, all with wildly varying performance. Scheduling will be a problem as there are many numbers to tweak. Debugging will be a problem. Data layout may be a problem. Ensuring memory is synchronized correctly will be a problem... potentially quite a large one.
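To give a flavour of the "large number of ways to write it", here's a sketch in CUDA rather than the shader language I was actually using (purely illustrative, made-up names). Even a trivial copy can be written one-element-per-thread or grid-strided, and the block size, grid size and elements-per-thread are exactly the sort of numbers that end up needing tweaking per GPU:

Code:
// Naive buffer copy: one element per thread.
__global__ void copy_naive(const float* __restrict__ src,
                           float* __restrict__ dst, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) dst[i] = src[i];
}

// Same job, but each thread walks the buffer with a grid-sized stride,
// so a smaller grid can be reused regardless of n. Throughput between
// the two variants can differ a lot depending on launch parameters.
__global__ void copy_strided(const float* __restrict__ src,
                             float* __restrict__ dst, int n)
{
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        dst[i] = src[i];
}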

If the trade-offs are good, though, compute will see a lot of use. I'm always annoyed by this idea that developers are idiot children and can't wrap their heads around something unless Sony makes Baby Einstein videos about it or something. Every place shipping complicated titles has smart people behind their tech. They'll continue to make informed decisions about what tech to use and not use and people will continue to make uninformed second-guesses online.
 
The problems associated with compute don't seem to be trivially solvable with a magic library. For SPU Sony just provided a scheduler, but compute already has a scheduler (the GPU).

I haven't done much compute yet, but I wrote a simple copy compute shader (almost the simplest compute shader possible) and noticed while researching that there are a large number of ways to write it, all with wildly varying performance. Scheduling will be a problem as there are many numbers to tweak. Debugging will be a problem. Data layout may be a problem. Ensuring memory is synchronized correctly will be a problem... potentially quite a large one.

If the trade-offs are good, though, compute will see a lot of use. I'm always annoyed by this idea that developers are idiot children and can't wrap their heads around something unless Sony makes Baby Einstein videos about it or something. Every place shipping complicated titles has smart people behind their tech. They'll continue to make informed decisions about what tech to use and not use and people will continue to make uninformed second-guesses online.

Exactly, developers are smart people, otherwise they wouldn't be game developers! It's not the easiest problem to solve, but being a closed platform should help a bit; at least you don't need to worry about how the code runs on other platforms.
 
The only dedicated hardware the PS4 lacks compared to the Xbox One is there to solve problems that don't exist on the PS4 (Kinect/ESRAM/Move Engines).

Richard's declaration that the PS4 is unbalanced is complete conjecture and based on assuming everything about the Xbox One is better than it seems, and everything about the PS4 is somehow worse than it seems.

Agree.

Exactly, how can this be?

Considering that Cerny has been very vocal about how well balanced the PS4 hardware is, and MS has been very obscure about anything GPU- and CPU-related on the Xbox One, they talk more about the cloud than the GPU and CPU.

This article is made to fit the Xbox One. The 7850 is 1.76TF yet it performs better than the 1.79TF 7790; even the OC'd one doesn't top the 7850 in benchmarks I have seen. So how could a 12CU Bonaire GPU with a 200MHz lower clock be represented by a 16CU GPU with 32 ROPs and more than 500GFLOPS of difference?


I would have respected it more if they had used a 7790 downclocked to 700MHz or 650MHz to compensate for the 2 extra CUs, rather than what they did, but I guess that would have yielded a much different result. Their representation of the PS4 GPU is just as bad.
 
Agree.

Exactly, how can this be?

Considering that Cerny has been very vocal about how well balanced the PS4 hardware is, and MS has been very obscure about anything GPU- and CPU-related on the Xbox One, they talk more about the cloud than the GPU and CPU.

This article is made to fit the Xbox One. The 7850 is 1.76TF yet it performs better than the 1.79TF 7790; even the OC'd one doesn't top the 7850 in benchmarks I have seen. So how could a 12CU Bonaire GPU with a 200MHz lower clock be represented by a 16CU GPU with 32 ROPs and more than 500GFLOPS of difference?

I would have respected it more if they had used a 7790 downclocked to 700MHz or 650MHz to compensate for the 2 extra CUs, rather than what they did, but I guess that would have yielded a much different result. Their representation of the PS4 GPU is just as bad.


huh? what are you complaining about?

They downclocked the GPUs to get to the correct flop numbers...

Bonaire may have been a better match on the XBO side, because I think it's 16 ROPS, but maybe it was as simple as they didn't have one laying around?

That would have also left it more BW limited than the PS4 stand-in GPU, which right or wrong DF states they wanted to avoid, so it was a tradeoff.

Anyway, the premise of the article, everything equal except for the compute power, was made clear. So why all the kvetching? I suspect that if the results had been different, there would be no complaining, ironically.

A 7790 with the memory OC'd as much as possible and the core downclocked to hit the flops may have been interesting, but I'm sure it still would have got you nowhere near 50%, and it also, gasp, may have underrepresented the XBO since its peak BW is much less.
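For reference, GCN peak throughput is just CUs x 64 lanes x 2 ops per clock (FMA) x clock, so the downclock targets are easy to sanity-check. A quick C++ sketch (it also compiles as CUDA host code; the console figures are the commonly quoted 2013 specs, so treat them as assumptions):

Code:
#include <cstdio>

// GCN peak GFLOPS = CUs * 64 lanes * 2 FLOPs (FMA) * clock in GHz.
static double gflops(int cus, double mhz) { return cus * 64 * 2 * mhz / 1000.0; }

int main()
{
    std::printf("XB1  12 CU @ 800 MHz  : %.0f GFLOPS\n", gflops(12, 800));   // ~1229
    std::printf("PS4  18 CU @ 800 MHz  : %.0f GFLOPS\n", gflops(18, 800));   // ~1843
    std::printf("7850 16 CU @ 860 MHz  : %.0f GFLOPS\n", gflops(16, 860));   // ~1761
    std::printf("7790 14 CU @ 1000 MHz : %.0f GFLOPS\n", gflops(14, 1000));  // ~1792

    // Clock a 16 CU or 14 CU card would need to hit the 12 CU figure.
    double target = gflops(12, 800);
    std::printf("16 CU match: ~%.0f MHz, 14 CU match: ~%.0f MHz\n",
                target * 1000.0 / (16 * 64 * 2),    // ~600 MHz
                target * 1000.0 / (14 * 64 * 2));   // ~686 MHz
    return 0;
}

So a 7790 in the high 600s MHz, or a 7850 around 600 MHz, lands on the same flop number; bandwidth is the variable that doesn't scale along with it.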

Also, Cerny said the PS4 hardware "isn't 100% round". I'm not aware of him talking about how balanced it is (though obviously, he's not going to speak ill of his own hardware if asked anyway).
 
Well, besides the middleware solutions devs aren't using GPGPU for much in their engines - or at least not yet.
With respect, how on earth do you know what developers in hundreds of teams across the globe are doing?

Having compute resources available on both systems as well as PCs is a fairly major difference between the industry's interest in SPUs and CUs today. So progress, if it can be made, will occur more rapidly since all cross-platform games could benefit.
Very true. On current gen, Sony's first party developers and their Advanced Technology Group guys did a great job of helping third parties leverage the SPUs. You could argue that was more out of necessity than benevolence but researching and sharing is now their culture and I hope this will continue with compute next gen. This will also help Xbox One devs.
 
Also, Cerny said the PS4 hardware "isn't 100% round". I'm not aware of him talking about how balanced it is (though obviously, he's not going to speak ill of his own hardware if asked anyway).

What he actually said, in full with context, was:

Mark Cerny said:
"The point is the hardware is intentionally not 100 per cent round. It has a little bit more ALU in it than it would if you were thinking strictly about graphics. As a result of that you have an opportunity, you could say an incentivisation, to use that ALU for GPGPU.".
I don't understand the ambiguity, it's clear as day to me but allow me to translate: If you look at the PS4 from the traditional balance of CPU and GPU resources required for games, there is more ALU than you would expect. This is intentional. We think, in a year or two, the use of compute to achieve tasks will be greater and we don't want devs to have to scale back on graphics to free up ALU resources to make it happen.

He's been saying this for some time - from his April interview with Gamasutra.

Mark Cerny said:
"The vision is using the GPU for graphics and compute simultaneously. Our belief is that by the middle of the PlayStation 4 console lifetime, asynchronous compute is a very large and important part of games technology."

Bottom line: it looks unbalanced now, but it's been designed for developers to grow into. If Sony's prediction pans out, Xbox One devs are either going to have to scale back on compute for whatever task (fewer particles or physics interactions, for example) or scale back on graphics somewhat to accommodate the compute workload.
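The term just means submitting independent work to the GPU down separate queues so it can overlap when resources allow. As a rough analogy in CUDA-stream terms (not console API code; everything here is made up for illustration):

Code:
#include <cuda_runtime.h>

// Stand-in for shading work submitted on one queue.
__global__ void graphics_like_work(float* buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[i] * 0.5f + 1.0f;
}

// Stand-in for a compute job (physics, particles...) on another queue.
__global__ void compute_job(float* buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[i] * buf[i];
}

void submit_frame(float* d_gfx, float* d_sim, int n,
                  cudaStream_t gfxStream, cudaStream_t computeStream)
{
    int block = 256, grid = (n + block - 1) / block;
    // Both launches are asynchronous and independent; the GPU is free to
    // run them concurrently, filling ALU the graphics work leaves idle.
    graphics_like_work<<<grid, block, 0, gfxStream>>>(d_gfx, n);
    compute_job<<<grid, block, 0, computeStream>>>(d_sim, n);
}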
 
Bottom line: it looks unbalanced now, but it's been designed for developers to grow into. If Sony's prediction pans out, Xbox One devs are either going to have to scale back on compute for whatever task (fewer particles or physics interactions, for example) or scale back on graphics somewhat to accommodate the compute workload.

That's not dissimilar to how AMD, back with the Radeon X1900 -> Radeon X1950, was forward-looking in predicting that shaders would start to dominate graphics workloads more and more. That didn't generally happen, however, until years after the lifetime of the product.

Just because something forward-looking is put in doesn't necessarily mean it'll get the amount of use the hardware designer predicted it would.

There's a great opportunity here for GPU compute, and I'm sure AMD, Sony, and Microsoft will all be involved with pushing it to some degree or other.

However, at the end of the day, it's still going to be up to developers to determine whether the benefits (visible or not) are worth it from a development time and cost point of view. For some developers it will be. For some developers it won't be.

Sony 1st party developers will likely do all they can to take advantage of it. Sony as a publisher is greatly invested in making the most of their decisions as a console hardware designer.

3rd party multiplatform developers? That's where it becomes far more questionable as to whether they think it is worthwhile to invest in it. It all depends on the development costs in terms of time and money when compared to their allocated time and money budget.

Regards,
SB
 
The described relationship thus far is that there is bandwidth during phases when the GPU is not contending for memory or eSRAM bandwidth, or if it is running instructions that leave spare bandwidth. It's not so much a question of the GPU devoting its cycles to controlling DMA, but whether it is currently getting in the way.
 
That's not dissimilar to how AMD, back with the Radeon X1900 -> Radeon X1950, was forward-looking in predicting that shaders would start to dominate graphics workloads more and more. That didn't generally happen, however, until years after the lifetime of the product.

Just because something forward-looking is put in doesn't necessarily mean it'll get the amount of use the hardware designer predicted it would.

True, but PC developers can't just go all in on a new architecture or feature on a GPU because, chances are, only a tiny fraction of their potential customer base will have it. The only folks who do this are the likes of Crytek, where the game works as a technical showcase for the company's engine. On a console, the developer knows every customer has the hardware. Compute isn't new, though; for years Apple has been offloading more and more CPU functions to OpenCL so they can run on the GPU.

No doubt, like Cell's SPUs, it's going to be alien at first, but those who master the hardware may find it solves all sorts of technical challenges for the CPU and frees up time (particularly optimisation) later in the development cycle.

The same way I could correct Richard on the details of the PS4 memory reservation...

Right, and I believe you're talking to developers, of which there will be thousands, probably tens of thousands, of individuals who know the details of the SDK and the PS4's memory allocation, but what I don't believe is that the experiences of PS4 developers - all PS4 developers - are known to other PS4 developers.

The people you are talking to may be of the view that leveraging GPU compute is difficult, but it's clearly not that difficult if engines and middleware have leveraged it already. Nor is it technology being introduced with the next-gen consoles. There will be developers who have a lot of experience with compute.
 
GPU compute may not appear to be very straightforward, but with everything we know about the PS4's development it's fair to say they likely thought this through. Cerny has spoken publicly about the approach to the design and the collaboration between hardware and software engineers, so saying the PS4 is unbalanced may be akin to saying modern CPUs use longer pipelines. In other words, perhaps context, and understanding mechanically how data and compute are intended to operate, is paramount to decoding the platform.

It's also important to note that Sony has some very talented developers who were able to get incredible results out of the PS3, and with the simpler design and efficiencies of the new platform they will probably focus much of their attention directly on this issue.

Perhaps we'll see some interesting tools coming out of the ICE team which ironically benefit PS4, XB1 and PC. :LOL:
 
Right, and I believe you're talking to developers, of which there will be thousands, probably tens of thousands, of individuals who know the details of the SDK and the PS4's memory allocation, but what I don't believe is that the experiences of PS4 developers - all PS4 developers - are known to other PS4 developers.

So devs don't talk to each other?

And I never said the experiences of all PS4 developers are known to other PS4 developers.
Just that the general sentiment is that compute is hard to do.
 