PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

PlayStation Shading Language for PS4

GDC Europe 2013: http://schedule2013.gdceurope.com/session-id/825424

The PS4 is powered by a shading language that is familiar, but specific to and extended for PS4. PSSL enables a degree of cross-compatibility with the PC, but extends far into specific PS4 hardware extensions that truly unlock features of the PS4 GPU in ways not seen before in this class of modern graphics hardware. This talk will cover the PC compatibility features of PSSL, as well as the hardware-oriented extensions that will enable PS4 developers to 'Push the Boundaries of Play' on PS4.

Takeaway
The PlayStation Shading Language, or PSSL, for PS4 is very approachable and features all of the modern PC shader features a developer would expect. However, PSSL also pushes well beyond convention for PS4. Come see a glimpse into the present and future of GPU Shaders for PS4.

Intended Audience
The target audience is any game developer interested in the Graphics Shader Pipeline for PlayStation 4.

Speakers
Richard Stenson | Graphics Software Engineer, SCEA R&D

Richard Stenson is a graphics software engineer at SCEA R&D. He was one of the originators of PSGL, the PlayStation OpenGL ES implementation for PS3, based on Khronos OpenGL ES 1.1. PSGL was one of the first champions of the OpenGL ES specification and has been used by hundreds of PS3 games, AAA and indie alike. Richard was also heavily involved in the development and ratification of the COLLADA format, which is now also a Khronos open standard for graphics. For PS4 he started, and has contributed heavily to, the PlayStation Shading Language project, or PSSL, which all PS4 games will use for graphics.

Chris Ho | Engineer, SCEA

Chris Ho graduated from UC Davis with a degree in Computer Science and remained there as a research assistant in a graphics and visualization lab before joining SCEA as an engineer to work on the PlayStation 4. Since joining SCEA, he has made frequent contributions to the PlayStation Shader Language project, ranging from new feature specifications and researching use cases to building tools for validating the correctness of the shading language.
 
This part?
I'll do a rough translation with my limited Japanese.


――PS4で、OSが占有するメモリーやプロセッサーのリソース量はどうですか?


On the PS4, how much of the memory and processor resources does the OS occupy?



吉田:ゲー ムの占有リソースの問題ですよね? それは決まっていて、開発の方々にはお伝えしてます。しかし、情報としてはオープンにしない、という方針です。ご容赦 ください。噂されている容量についてですか? それについても真偽はノーコメントで(笑)まあ。8GBありますからね……。これは相当に膨大な領地ですよね。


That's a question about the resources available to games, right? That has been decided, and we've told developers. But our policy is not to release that information publicly, so please bear with us. The rumored capacity? No comment on the truth of that either (laughs). Well, there is 8GB there, so... that is an enormous amount of territory.


Hope this clears that up.

Much better now, thanks.
The bolded part is the one I was talking about.
 
Hence an OpenGL API makes sense, to use GCN tech in all its glory.
OpenGL makes it easier for devs, but will be more restrictive than Sony's low-level libraries. I think OGL will be important for a lot of lower tier games though, like download titles. It'll make something like Unity integration much easier.
 
OpenGL makes it easier for devs, but will be more restrictive than Sony's low-level libraries. I think OGL will be important for a lot of lower tier games though, like download titles. It'll make something like Unity integration much easier.

My thoughts too. Goes back to Cerny harping on the much shorter 'time to triangle' for PS4.
 
Tiled resources are a DirectX API feature, not a GCN one. Sony will have to implement that for their own developers to use, which is why it's a DirectX 11.2 feature that will work on all DirectX 11 cards, no new special hardware needed.

As others here have pointed out, it's already implemented in OGL as 'sparse textures', and as the spec text says it began as an AMD extension (which NV can use too, though they call it summat else). Of course Sony/AMD will need to extend libGCM or PSSL to support it, but I'd imagine it's easier to implement in a new language than to design it from scratch.

http://www.opengl.org/registry/specs/AMD/sparse_texture.txt
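
For the curious, the basic flow of the AMD extension linked above looks roughly like this. This is a minimal sketch assuming a driver that exposes GL_AMD_sparse_texture; the entry point and tokens come from the spec text, and the commit/decommit-via-TexSubImage behaviour is my reading of it:

```cpp
// Minimal GL_AMD_sparse_texture sketch (C++). Assumes the extension is present
// and that the glTexStorageSparseAMD pointer was fetched by the usual loader.
#include <GL/gl.h>
#include <GL/glext.h>
#include <vector>
#include <cstdint>

void sparseTextureDemo(PFNGLTEXSTORAGESPARSEAMDPROC glTexStorageSparseAMD)
{
    // Query the hardware's virtual page dimensions so commit regions line up.
    GLint pageX = 0, pageY = 0;
    glGetIntegerv(GL_VIRTUAL_PAGE_SIZE_X_AMD, &pageX);
    glGetIntegerv(GL_VIRTUAL_PAGE_SIZE_Y_AMD, &pageY);

    // Reserve a 16Kx16K virtual texture: address space only, no physical backing.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorageSparseAMD(GL_TEXTURE_2D, GL_RGBA8, 16384, 16384, 1, 1,
                          GL_TEXTURE_STORAGE_SPARSE_BIT_AMD);

    // Uploading real data through TexSubImage commits (and fills) the pages
    // that the region touches...
    std::vector<uint8_t> texels(size_t(pageX) * pageY * 4, 0xFF);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pageX, pageY,
                    GL_RGBA, GL_UNSIGNED_BYTE, texels.data());

    // ...while, per the spec, a NULL data pointer decommits the same region
    // and hands the physical pages back.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pageX, pageY,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}
```

The key idea is that the storage call only reserves GPU address space; physical memory comes and goes page by page as regions are committed and decommitted.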
 
Mynd just posted this on PSU when talking to a poster named Foraeli

4 of the CUs are specifically designed to run as compute units.
They have no equivalently balanced output ROPs. In other words, as a balanced GPU, this part would have 12-14 CUs.

Using the extra 4 CUs will help in possible shader calculations, but only for very, very complex shaders.


Not only that, but you can run these 4 completely separately from the other 14.

In other words, some CUs can be running calculations while others are doing GPU rendering.

This is not an either/or situation.
Both next gen consoles can split the workload between Compute and rendering in parallel.

This statement interests me. I thought the idea that there was some division of GPU resources, and/or that the architecture was not suited to having those other 4 CUs do a normal rendering workload like the other 14, was bunk?

Mynd seems to suggest that they're not really set up to do the same kind of work as the majority of the GPU as optimally, due to some kind of ROP deficiency?

Again, Mynd is going off publicly released information. Not inside knowledge.
 
I think Mynd misunderstands PS4's architecture. Those rumours were never substantiated, and everything official contradicts them. The consensus is that PS4 has 18 CUs without any limits in functionality. At this point it's probably best if his opinions aren't posted in this forum unless he posts them directly, as you can't hold a decent discussion across two different domains. ;)
 
NOOOOOOOOOO! Not this 14+4 CU shit again. Let's leave that dead: all 18 CUs are identical to one another. There are no magical 4 CUs.
 
Mynd just posted this on PSU when talking to a poster named Foraeli



This statement interests me. I thought the idea that there was some division of GPU resources, and/or that the architecture was not suited to having those other 4 CUs do a normal rendering workload like the other 14, was bunk?

Mynd seems to suggest that they're not really set up to do the same kind of work as the majority of the GPU as optimally, due to some kind of ROP deficiency?

Again, Mynd is going off publicly released information. Not inside knowledge.

PS4 is supposed to have 32 ROPs. Should be more than enough relative to the CUs and system bandwidth. If anything, I thought the system bandwidth was supposed to be low for a 32 ROP part. In other words, there should be more than enough ROPs, I think.
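
As an aside, the "compute alongside rendering" idea in the quote doesn't depend on any magical CU split. On any GL 4.3-class stack it just looks like independent work submitted back to back, which a hardware scheduler (such as GCN's async compute engines) is then free to overlap; whether it actually overlaps is up to the driver and hardware. A rough sketch, where the shader and buffer handles are hypothetical:

```cpp
// Hedged illustration (desktop GL 4.3 style, not the PS4 API) of compute work
// submitted alongside rendering work in the same frame.
#include <GL/gl.h>

void frame(GLuint simShader, GLuint sceneShader, GLuint particleSSBO, GLuint vao)
{
    // Kick a GPGPU job (e.g. a particle simulation) writing to a storage buffer.
    glUseProgram(simShader);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, particleSSBO);
    glDispatchCompute(4096 / 64, 1, 1);   // 4096 particles, 64 per work group

    // Queue ordinary rendering that does NOT read the compute results yet;
    // the scheduler is free to run it while the compute job is still in flight.
    glUseProgram(sceneShader);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 36);

    // Only when the results are actually consumed do we insert a barrier.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    // ...now issue the draws that read particleSSBO...
}
```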
 
Why on earth would an OpenGL API make sense when they could use a "new version of libGCM" that is far "closer to the metal"?
IMO they should give up on low-level optimizations that tie them to specific hardware and compromise compatibility with future products.
Even then, some pieces of software actually do a good job of exploiting a system's resources.

IMO I think they won't do it, but granting access that close to the metal is a mistake.
 
IMO they should give up on low-level optimizations that tie them to specific hardware and compromise compatibility with future products.
Even then, some pieces of software actually do a good job of exploiting a system's resources.

IMO I think they won't do it, but granting access that close to the metal is a mistake.

Why? BC is no longer guaranteed. Gaikai is supposedly going to fill their BC gap, and who's to say future hardware won't be identical enough, or that future versions of libGCM can't interpret PS4 game code (or do some dynamic recompilation)?

To discourage to-the-metal optimization is to dispense with one of the advantages of consoles.
 
I think Mynd misunderstands PS4's architecture. Those rumours were never substantiated, and everything official contradicts them. The consensus is that PS4 has 18 CUs without any limits in functionality. At this point it's probably best if his opinions aren't posted in this forum unless he posts them directly, as you can't hold a decent discussion across two different domains. ;)

Apologies, def won't happen again. I can see how that would be problematic. :oops:
 

Probably compute- and scheduling-related extensions?

In his first interview, Cerny mentioned that developers have to code using more than one language system to use the GPGPU efficiently. I assume that means compiling into different binaries and then bundling them together (GPU and CPU parts?) explicitly.

Their goal is to be able to code using just one language system to exploit the GPU "fully".
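
To make the "more than one language system" point concrete, this is roughly what the split looks like with plain OpenCL (an assumption for illustration; the actual PS4 toolchain isn't public). The host logic is C++, while the GPU part is a separate kernel language compiled into its own binary at runtime and only then bound to the host code:

```cpp
// Sketch of a two-language-system GPGPU workflow: C++ host + OpenCL C kernel.
#include <CL/cl.h>
#include <cstdio>

// The "second language" half of the job, compiled separately from the host.
static const char* kKernelSrc = R"(
    __kernel void scale(__global float* data, float k) {
        data[get_global_id(0)] *= k;
    })";

int main()
{
    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    // The kernel source is built into its own GPU binary at runtime.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kKernelSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    float host[256];
    for (int i = 0; i < 256; ++i) host[i] = float(i);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(host), host, nullptr);

    // Binding the two halves together is an explicit, manual step.
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);
    size_t gws = 256;
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &gws, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(host), host, 0, nullptr, nullptr);
    std::printf("host[3] = %f\n", host[3]);   // 6.0 after scaling by 2
    return 0;
}
```

A "single language system" in Cerny's sense would presumably collapse that boilerplate: one source, one compiler, with the CPU/GPU split handled by the toolchain.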
 
Why? BC is no longer guaranteed. Gaikai is supposedly going to fill their BC gap, and who's to say future hardware won't be identical enough, or that future versions of libGCM can't interpret PS4 game code (or do some dynamic recompilation)?
I do not believe in it, and I think running Gaikai will prove incredibly costly; it is IMO a losing approach. You have to burn more power "in the cloud" than running the software locally, then move the data around, etc.
OnLive is not doing well; Gaikai was searching for an acquisition and succeeded, but the business model is as unproven and risky as it gets.
To discourage to-the-metal optimization is to dispense with one of the advantages of consoles.
Well, that is disputable, as Andrew L. argued (at least in some regards), especially in a console environment where you don't have actor X designing an API for everyone and then every other vendor running its own drivers on its own hardware for many different configurations, etc.
I would think that a single actor could do something both thinner/lighter and better.
Then there is the assumption that programmers always do better. I read a presentation about ISPC that Andrew gave a while ago; the compiler usually keeps up with people coding in intrinsics (a really light penalty) and sometimes wins.
Actually, if you run something like Gaikai, you may want to be able to run your code on better hardware when it becomes available without having to port it.
That offers more options than the usual constrained die shrinks console manufacturers have to deal with to reduce prices.

I think that the loss is worth it, cloud or not.
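
For reference, the ISPC comparison boils down to something like this: the same multiply-add written as a plain loop the compiler can auto-vectorize, versus hand-written SSE intrinsics. The claim in the presentation, as recalled above, is that the compiled path usually stays within a small margin of the hand-tuned one and occasionally beats it:

```cpp
// Plain loop vs. hand-written SSE: the two paths the ISPC comparison pits
// against each other.
#include <xmmintrin.h>   // SSE intrinsics

// Plain C++: trivially auto-vectorizable by any modern compiler at -O2/-O3.
void madd_plain(float* out, const float* a, const float* b, const float* c, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] * b[i] + c[i];
}

// Hand-written SSE: the "coding in intrinsics" path (n assumed a multiple of 4).
void madd_sse(float* out, const float* a, const float* b, const float* c, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_loadu_ps(c + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
    }
}
```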
 
Gaikai is probably a parallel track. Shuhei mentioned that its rollout schedule is independent of the PS4's. It doesn't seem like a PS3 BC substitute either.
 
I think Gaikai is more targeted for tablets, tvs and phones. That would be a pretty cool USP for all those Sony products.
 
The thing that surprised me is the PS3 game streaming. They probably have custom-built Gaikai servers for PS3 games.

Those devices you mentioned will need a controller for PS3 games. And their sheer number may require a huge infrastructure from the get go. But yes, that's likely the eventual goal, perhaps for smaller games.
 
They haven't said much about Gaikai; it's puzzling. The 22nm Cell is almost confirmed, and there's a rumour of an RSX at 28nm. That could explain why Gaikai will be deployed only in 2014. They can probably fit a LOT of these in a 42U rack, I'd guess 500 per rack, maybe using an existing x86 backplane for the service side, which Gaikai has already developed? They'd be in high demand only for the first two years of the generation, until people stop being interested in BC. Usually BC is great because the new console doesn't have enough games; it doesn't look like that's going to be the case with the PS4, hopefully.

I'm a bit angry they didn't announce a BC module for the PS4 and only mentioned Gaikai as a solution. Oh well, my PS3 Slim will fit nicely on top of the PS4, and this combo's space requirement isn't any bigger than an Xbone's (if you do the math correctly). So I can't complain much.
 
Gaikai is probably a parallel track. Shuhei mentioned that its rollout schedule is independent of the PS4's. It doesn't seem like a PS3 BC substitute either.
This sounds like a plausible theory. And seeing how small the PS4 is, now that, according to Mr Fox, the 22nm Cell is about to become a reality, they could attach the PS3 motherboard to the PS4 and create an entirely new box from it. It's not like they don't have experience with a similar approach.
 
I think Gaikai is more targeted for tablets, tvs and phones. That would be a pretty cool USP for all those Sony products.

If that really is the long-term plan, then you shouldn't buy any of your games on physical discs. Gaikai will be the carrot to go all-digital.
 