PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Digital Foundry: Going back to GPU compute for a moment, I wouldn't call it a rumour - it was more than that. There was a recommendation - a suggestion? - for 14 cores [GPU compute units] allocated to visuals and four to GPU compute...

Mark Cerny: That is a bad leak and not any sort of formal evangelisation. The point is the hardware is intentionally not 100 per cent round. It has a little bit more ALU in it than if you were thinking strictly about graphics. As a result of that you have an opportunity, almost like an incentivisation, to use that ALU for GPGPU.

That reads a lot like the speculation we had going around here ages back that there was extra ALU for compute.

Sweetvar's 'Scalar ALU’s 320'

My old post about it: http://forum.beyond3d.com/showpost.php?p=1699565&postcount=154

New Starsha GNB 28nm TSMC
Milos
Southern Islands

DX11
SM 5.0
Open CL 1.0
Quad Pixel pipes 4
SIMD’s 5
Texture Units 5TCP/2TCC
Render back ends 2
Scalar ALU’s 320

320 / 4 = 80
So 80 ALUs per each of the CUs?
That would give an extra 16 ALUs (vs. the 64 in a standard GCN CU)
And then 5 SIMD "blocks"
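
Rough numbers, purely for illustration, assuming that reading of the leak (80 ALUs per CU instead of the stock 64) and plugging in the known retail PS4 figures of 18 CUs at 800MHz:

STANDARD_CU_ALUS = 64    # a stock GCN CU: 4 x SIMD16
SPECULATED_CU_ALUS = 80  # the "320 scalar ALUs / 4" reading above, not a confirmed figure
extra_per_cu = SPECULATED_CU_ALUS - STANDARD_CU_ALUS          # the "extra 16 ALUs"
CUS, CLOCK_GHZ, FLOPS_PER_ALU = 18, 0.8, 2                    # FMA counts as 2 FLOPs per ALU per clock
print(CUS * STANDARD_CU_ALUS * FLOPS_PER_ALU * CLOCK_GHZ)     # ~1843 GFLOPS, the familiar 1.84TF
print(CUS * SPECULATED_CU_ALUS * FLOPS_PER_ALU * CLOCK_GHZ)   # ~2304 GFLOPS if the extra ALUs were real

So even if the speculation were true, it would only buy about 25 per cent more peak ALU.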
 
Well, AMD would seem to be the only player who could deliver an SoC with a capable CPU and GPU.

Maybe Nvidia could have delivered a design with ARM, but the Forbes piece said ARM CPU performance wasn't ready in time.

Haswell is beefy enough (800+ GFLOPS) that Intel may have been in play too, but, well, they're Intel.

Both Nvidia and Intel don't seem to be the easiest to work with.
 
Big honking Mark Cerny interview, or "B3D asks Cerny questions" :p

http://www.eurogamer.net/articles/digitalfoundry-face-to-face-with-mark-cerny

Some interesting stuff I picked out as I went through it.

Nothing earth-shattering, I guess.

But some new stuff. Ray casting, according to Mark, is a generalised concept rather than a specific implementation.

A slight bias towards ALUs in the hardware is new, I THINK.

Now his comment about the lateness of the memory doubling rumour is interesting. It may mean that the decision wasn't as late as we thought. Does that have any bearing on the 12 and 16 GB upgrades for the XB1 and their probabilities? Sorry, it's a PS4 thread, not an XB1 rumour thread.
 
From the DF article

Digital Foundry: So Jaguar was the best fit?

Mark Cerny: To be honest, when we asked people, we heard absolutely every answer you could think of. One developer even told me their technology could accommodate a thousand cores!

Gee wonder who that was. :LOL:
 
Why do so many developers claim that GPU compute will make next-generation graphics quite different? For example, if we allocate over 1 TFLOPS to GPU compute, we may have less than 1 TFLOPS left for graphics operations. Why do developers say GPU compute will improve graphics a lot when we're just moving resources away from graphics and over to compute?
 
Why do so many developers claim that GPU compute will make next-generation graphics quite different? For example, if we allocate over 1 TFLOPS to GPU compute, we may have less than 1 TFLOPS left for graphics operations. Why do developers say GPU compute will improve graphics a lot when we're just moving resources away from graphics and over to compute?
Not so much to improve graphics, but to handle other stuff previously left to the CPU.
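
Just to make the budget arithmetic in the question above concrete (the 1.84TF total is the only real number here; the reserved slices are hypothetical, nothing Sony has stated):

TOTAL_TFLOPS = 1.84                      # PS4 peak ALU throughput
for compute_share in (0.0, 0.2, 0.5):    # hypothetical reservations, for illustration only
    compute = TOTAL_TFLOPS * compute_share
    graphics = TOTAL_TFLOPS - compute
    print(f"{compute_share:.0%} to compute -> {compute:.2f}TF compute, {graphics:.2f}TF graphics")

The whole argument later in the thread is about how much of that compute slice can come out of ALU time graphics wasn't using anyway.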
 
It's one of my concerns that the compute part won't be tapped, since it eats into another resource. It would be horrible if launch titles looked better than titles some years down the line.
This is something I would have loved to ask Cerny: in the past, unlocking the hidden potential just brought you more. With GPU compute you have to take power from somewhere; if you get close to the peak within 1-2 months, as he claimed in the presentation, then there's not much untapped potential.

Sure, ideally GPU compute could allow smarter algorithms to make up for it, but the brute-force power is just lower. If you use GPU compute for something other than graphics, then a game that focuses everything on graphics will ultimately just have more resources.

Now I would like compute explored, but graphics are a main selling point and I don't believe many will be willing to take a substantial hit in this area.
 
That sounds a lot like PR BS. It seems that Sony decided to unilaterally orient its communication around hardware, hammering (though in a clever manner) every choice MSFT made.
That's what you took from:

GameReactor said:
As part of a longer GRTV interview, which will be shown in full this Sunday on the site, we asked Cerny how the PlayStation 4 would cope with the intense summer heat that Spain's currently sweltering under.

"I think it will be fine," he replied. "They know how to design the console so it doesn't overheat. If you notice that PlayStation 4 is smaller, it's because power consumption is less; simple as that."

A question about the PS4 possibly overheating in intense summer heat? :rolleyes:

It's the 14+4 thing restated.

I don't think it is. He stated back in his April Gamasutra interview:

Mark Cerny said:
"There are many, many ways to control how the resources within the GPU are allocated between graphics and compute. Of course, what you can do, and what most launch titles will do, is allocate all of the resources to graphics. And that’s perfectly fine, that's great. It's just that the vision is that by the middle of the console lifecycle, that there's a bit more going on with compute."

They're giving developers control of the graphics/compute split and the prioritization of tasks within those allocations. They're merely betting big on compute, and they don't want developers to have to cut back too much on graphics performance to make compute effective. But it's clear he thinks compute can be done with a minimal impact on graphics:

Mark Cerny said:
"If you look at the portion of the GPU available to compute throughout the frame, it varies dramatically from instant to instant. For example, something like opaque shadow map rendering doesn't even use a pixel shader, it’s entirely done by vertex shaders and the rasterization hardware -- so graphics aren't using most of the 1.8 teraflops of ALU available in the CUs. Times like that during the game frame are an opportunity to say, 'Okay, all that compute you wanted to do, turn it up to 11 now.'"
 
That's what you took from:

They're giving developers control of the graphics/compute split and the prioritization of tasks within those allocations. They're merely betting big on compute, and they don't want developers to have to cut back too much on graphics performance to make compute effective. But it's clear he thinks compute can be done with a minimal impact on graphics:

Even if the PS4 fully utilises all the idle ALUs so GPU compute can be done without affecting graphics tasks, how many GFLOPS can we actually have for compute? 300 G? And what huge difference can we expect from just using idle ALUs for compute?

On the other hand, if we move a lot of resources to GPU compute (e.g. 50%), how can we not affect graphics tasks? Because graphics can no longer have all of the resources.
 
That sounds a lot like PR BS. It seems that Sony decided to unilaterally orient its communication around hardware, hammering (though in a clever manner) every choice MSFT made.
I would indeed be surprised if the PS4 consumes less than a Radeon 7850: the clock is barely lower, there are 8 CPU cores in there, a lot more memory chips, the HDD, the optical drive.
Though the chip should be bigger (it has to be) than Pitcairn, I think that the "watts per mm^2" you have to dissipate should be lower than in the HD 7850, making cooling easier (the same applies to Durango, possibly to a greater extent). The result is the cooling system should run more efficiently.
I think Cerny is speaking about the power consumption relative to the original PS3; that makes more sense. The elusiveness of his statement is a clever PR twist.

There are other points, like the ease of development. No lies here: the PS4 is the most straightforward system to ever land in the console realm, though there is something underlying his talk that is not exactly fair: they make it sound like MSFT are bad, that the 360 was hell to code for, etc.
Imo that is the reason for his talk about the "other PS4s". I absolutely believe that they considered the option, but as the guy is clever he leverages that to make the competition look bad. In the process he forgets to speak about the API and related tools, and about the cost, in terms of ease of coding, of having low-level access to the hardware.

Anyway I guess it is fair from a business POV. Actually I think he does it really well, in a subtle enough manner; I think he is as clever as he looks ;)

It could be that you read a lot more anti-MS into his comments than there really is? ;-)

He did a great job; he ended up with the most powerful next-gen console at a lower price and in a smaller box than the competition. Imho he doesn't need to take jabs at the competition, he can just point to the PS4 as it is.

I think his development comments all come from his own experiences with the PS3 and aren't in any way aimed at the 360.
 
Even if the PS4 fully utilises all the idle ALUs so GPU compute can be done without affecting graphics tasks, how many GFLOPS can we actually have for compute? 300 G? And what huge difference can we expect from just using idle ALUs for compute?

On the other hand, if we move a lot of resources to GPU compute (e.g. 50%), how can we not affect graphics tasks? Because graphics can no longer have all of the resources.

It's never going to be free; compute tasks will be competing for resources beyond just the ALUs, notably bandwidth and cache on the GPU.

But if you talk to the GPU vendors, there is an awful lot of underutilised ALU capacity when a modern game is running. I'm sure there are figures somewhere in an AMD or Nvidia white paper.
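
For a rough sense of the bandwidth side of that: the PS4's ~176GB/s of GDDR5 against 1.84TF of ALU gives a break-even arithmetic intensity you can sketch in one line (the two peak figures are the public specs; the 20GB/s compute job is an invented example):

PEAK_TFLOPS = 1.84          # peak ALU throughput
BANDWIDTH_GBS = 176.0       # unified GDDR5 bandwidth
flops_per_byte = (PEAK_TFLOPS * 1e12) / (BANDWIDTH_GBS * 1e9)
print(f"break-even arithmetic intensity: ~{flops_per_byte:.1f} FLOPs per byte")
# Invented example: a compute job streaming 20GB/s leaves this much for everything else
print(f"bandwidth left for graphics: {BANDWIDTH_GBS - 20:.0f} GB/s")

So a compute kernel doing much less than ~10 FLOPs per byte is really buying bandwidth, not "free" ALU, and that bandwidth comes straight out of the graphics budget.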
 
It's one of my concerns that the compute part won't be tapped, since it eats into another resource. It would be horrible if launch titles looked better than titles some years down the line.
This is something I would have loved to ask Cerny: in the past, unlocking the hidden potential just brought you more. With GPU compute you have to take power from somewhere; if you get close to the peak within 1-2 months, as he claimed in the presentation, then there's not much untapped potential.

Sure, ideally GPU compute could allow smarter algorithms to make up for it, but the brute-force power is just lower. If you use GPU compute for something other than graphics, then a game that focuses everything on graphics will ultimately just have more resources.

Now I would like compute explored, but graphics are a main selling point and I don't believe many will be willing to take a substantial hit in this area.

I suppose not every developer will want to put all their resources into graphics, whether due to costs or design philosophy.
Some studios/devs will surely focus on unlocking new graphics potential and thus use the PS4 in a "traditional" way, so to speak, but other studios now have a chance to exploit the PS4 in unique ways.
I think developers will see the value in using the GPU for compute, just not every developer, but that is fine in my opinion.
 
It's never going to be free; compute tasks will be competing for resources beyond just the ALUs, notably bandwidth and cache on the GPU.

But if you talk to the GPU vendors, there is an awful lot of underutilised ALU capacity when a modern game is running. I'm sure there are figures somewhere in an AMD or Nvidia white paper.

Isn't that where the Onion and Onion+ buses come into play? Onion+ bypasses the GPU caches.
 