Confirmation: PlayStation 3 will use an in-house GPU (proof)

The successful candidate will develop a state-of-the-art shading language compiler for an advanced fourth-generation graphics processing unit (GPU). With the assistance of other team members, the individual must be capable of designing and implementing the major components of the compiler backend.

To achieve this goal, the individual should have extensive recent experience with backend internal representations suitable for advanced code optimization. Detailed knowledge of modern code optimization techniques, register allocation, and code generation expertise is also required, as well as experience with programming language front ends, assemblers, linkers, and runtime libraries. Exposure to shading languages, such as nVidia CG, Microsoft HLSL, Brook, or StreamIt, and exposure to 3D graphics APIs, such as OpenGL and DirectX, is also desirable.

http://hotjobs.yahoo.com/jobs/CA/Foster-City/Technology/J900906BR;_ylt=AjoV4U.AaTskU1tcllsAlpyxQ6IX
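
(As an aside, for anyone curious what the "register allocation" part of that ad involves in practice, here is a minimal, purely illustrative sketch of linear-scan register allocation over made-up live intervals; the intervals and the register count are invented, and a real allocator would be far more careful about which value to spill.)

```python
# Minimal linear-scan register allocation over hypothetical live intervals.
# Each interval is (name, start, end); the registers and intervals are made
# up purely to illustrate the technique named in the job posting.

def linear_scan(intervals, num_regs):
    """Assign registers to intervals; spill when none are free."""
    intervals = sorted(intervals, key=lambda iv: iv[1])   # sort by start point
    free = [f"r{i}" for i in range(num_regs)]
    active = []            # (end, name, reg) tuples currently live
    assignment = {}

    for name, start, end in intervals:
        # Expire intervals that ended before this one starts, freeing their registers.
        for expired in [a for a in active if a[0] <= start]:
            active.remove(expired)
            free.append(expired[2])
        if free:
            reg = free.pop()
            assignment[name] = reg
            active.append((end, name, reg))
        else:
            # No register left: spill. (A real allocator would choose which
            # interval to spill; this sketch simply spills the new one.)
            assignment[name] = "spill"
    return assignment

print(linear_scan([("a", 0, 4), ("b", 1, 3), ("c", 2, 6), ("d", 5, 7)], 2))
```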

This would not be needed if they relied on GPUs made by ATI, nVIDIA, IMGTech.'s PowerVR, or BitBoys, as all of those would come with a shading language compiler designed by the GPU provider: especially the big boys like ATI, nVIDIA, and IMG Technologies.


Good news:

Well, we know the GPU will have Shaders.

We know the shaders will not be trivial in length: there would be no need to step up from the ASM-level shaders of the DirectX 8.0 era to High Level Shading Languages otherwise.


Bad news:

We still have no official PR or technical documentation giving more detail on where Vertex and Pixel Shading is done: whether it is split between CPU and GPU, or done entirely on the GPU.

Is SCE going it alone? Are they receiving help? Or are they the ones helping their partner (it would have to be either Toshiba or IBM to cut down the costs of adding another partner to the PlayStation 3 project at this point... people on Beyond3D seemed to say it would have to be Toshiba).
 
Panajev said:
Good news:

Well, we know now the GPU will have uber high paper specs, regardless of their practical usability.


Bad news:

We still have no official PR or technical documentation giving more detail on whether the GPU will actually feature a proper (and usable) implementation of any of the basic rendering operations, such as clipping.

Here, fixed it for ya :p
 
Fafalada said:
Panajev said:
Good news:

Well, we know now the GPU will have uber high paper specs, regardless of their practical usability.


Bad news:

We still have no official PR or technical documentation giving more detail on whether the GPU will actually feature a proper (and usable) implementation of any of the basic rendering operations, such as clipping.

Here, fixed it for ya :p

You know it won't feature clipping... :devilish:.
 
Isn't it kind of late in the day for them to be looking for someone to design such an essential component of the system? What are they going to show or talk about at E3 if 6 months out, they're looking for what looks like an architect?

And advertise such a position at Hot Jobs?

Seems like someone that essential would be recruited or acquired by a headhunter.
 
Isn't it kind of late in the day for them to be looking for someone to design such an essential component of the system?
Since when is a compiler a component of the system? :p

And if we want to talk LATE - PS2 didn't get a VU compiler until nearly 3 years after it was released... :oops:
 
Re: Confirmation: PlayStation 3 will use an in-house GPU (proof)

That they are just now getting around to advertising for a shader compiler writer probably indicates that they are two to three years away from having a good shader compiler for PS3.

However, it doesn't tell us anything about the PS3 launch date. Sony can ship without a shader compiler -- that just puts the burden on the game developers to program the chip directly.

Notice this is a Sony-of-America job posting. My guess is that Sony-of-Japan doesn't think a shader compiler is necessary, but Sony's American developer support people have been flamed by American developers over the PS3 lacking a shader compiler. (Since presumably Xbox 2, and certainly PCs, will have HLSL compilers, the PS3 is the odd man out for cross-platform games.)

Alternately, perhaps they had an internal effort, but it didn't pan out.

If they are lucky they will get some NVIDIA cg engineer to jump ship, since Foster City is only a few miles from Santa Clara.

Hey, here's a clever shortcut for Sony (or for third parties trying to get games done before the PS3 compiler is available): use Microsoft's HLSL compiler to generate DX tokens, and then write a DX-token-to-PS3 translator. That's got to be way easier than writing a full HLSL or cg compiler, and probably gets you 50% to 80% of the performance. (It may also break the licensing terms of the Microsoft HLSL compiler, but that's a separate issue.)
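
Something like this toy sketch is all I mean by a translator, by the way; the DX-style tokens and the "native" mnemonics here are completely made up, since nobody outside Sony knows what the real instruction set looks like:

```python
# Hedged sketch of the proposed shortcut: take already-optimized DX-style
# shader tokens (opcode plus operands) and map them one-for-one onto a
# hypothetical PS3-native instruction set. Both formats are invented here
# purely to illustrate the translation idea.

# Hypothetical mapping from DX-style opcodes to made-up native mnemonics.
OPCODE_MAP = {
    "mul": "VMUL",
    "add": "VADD",
    "mad": "VMADD",
    "dp3": "VDOT3",
    "mov": "VMOVE",
}

def translate(dx_tokens):
    """Translate a list of (opcode, dest, src...) tuples into native text."""
    native = []
    for opcode, *operands in dx_tokens:
        mnemonic = OPCODE_MAP.get(opcode)
        if mnemonic is None:
            raise ValueError(f"unhandled DX opcode: {opcode}")
        native.append(f"{mnemonic} {', '.join(operands)}")
    return native

# Example: a two-instruction pixel-shader fragment.
tokens = [("dp3", "r0", "v0", "c0"), ("mad", "r1", "r0", "c1", "c2")]
print("\n".join(translate(tokens)))
```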
 
But isn't the majority of the complexity of writing a shader compiler in the optimizing backend? I mean, most CS students would be able to produce a CG/HLSL compiler (one that can produce working code), so it can't take long to get something up and running.

Cheers
Gubbi
 
Re: Confirmation: PlayStation 3 will use an in-house GPU (proof)

Panajev2001a said:
Well, we know the GPU will have Shaders.

It may be a 'virtual shader': a software interface that exists for porting purposes only rather than mapping to specific hardware, built almost from the ground up by those shading language engineers... 8)

FatherJohn said:
That they are just now getting around to advertising for a shader compiler writer probably indicates that they are two to three years away from having a good shader compiler for PS3.

What's wrong with recruiting an additional task force to handle optimization for forthcoming devkit versions? :rolleyes:
 
Is this a surprise to anyone? Didn't Sony once say in an interview that there was no way they would license graphics technology, since they develop their own?
 
Not in the sense of a physical component necessarily.

Are these shaders purely software, or could they have been committed to silicon? (Or would you even want that, since they could be continually refined if kept in software?)
 
Gubbi said:
But isn't the majority of the complexity of writing a shader compiler in the optimizing backend? I mean, most CS students would be able to produce a CG/HLSL compiler (one that can produce working code), so it can't take long to get something up and running.

Cheers
Gubbi

It depends on how close the GPU's actual architecture is to DX tokens. DX tokens are just a pre-parsed version of DX shader assembly language. The way HLSL is supposed to work is that Microsoft does a ton of optimization in the HLSL compiler, and all the GPU vendor does is translate the DX tokens into their native microcode.

Most vendors do additional optimization during the translation step, to get the utmost performance. But since the tokens are already optimized this additional optimization step doesn't usually gain very much performance. It's done more to guard against being fed unoptimized hand-written assembly language.
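
A made-up example of the kind of cleanup done during that translation step, fusing a dependent multiply and add into a single multiply-add; the token format here is invented, not any real driver's:

```python
# Toy peephole pass over made-up (opcode, dest, src...) shader tokens:
# fuse "mul t, a, b" followed by "add d, t, c" into a single "mad d, a, b, c"
# when the temporary t is not read again. Only the (add d, t, c) operand
# order is handled; a real pass would check both orders and many more rules.

def fuse_mul_add(tokens):
    out = []
    i = 0
    while i < len(tokens):
        cur = tokens[i]
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if (cur[0] == "mul" and nxt is not None and nxt[0] == "add"
                and nxt[2] == cur[1]                                   # add reads the mul result
                and not any(cur[1] in t[2:] for t in tokens[i + 2:])): # temp is dead afterwards
            out.append(("mad", nxt[1], cur[2], cur[3], nxt[3]))
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

print(fuse_mul_add([("mul", "r0", "v0", "c0"), ("add", "r1", "r0", "c1")]))
# -> [('mad', 'r1', 'v0', 'c0', 'c1')]
```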

All this optimization work is unfortunately kind of pointless. It turns out that modern GPU ALUs are really fast relative to memory access speeds. As a result, all the sophisticated optimization doesn't usually buy you very much in terms of real-world performance. Your shaders tend to end up vertex or texture-fetch or fill bound, rather than ALU bound. As a result, extra unoptimized ALU instructions don't affect performance very much.

:idea: Actually, I just realized that Sony probably doesn't have a "modern GPU". They probably just have their Cells with some sort of high-speed frame buffer blend hardware tacked on. If so, their shader compiler is going to have to do something like the Stanford Imagine Reyes renderer: it's going to have to manage all the memory latency itself, in software, rather than having the GPU hardware handle it. (Modern GPUs run hundreds or thousands of threads to hide latency. But Cell's just got 8 hardware threads, so it can't afford to do that.)
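
To give a flavour of what "manage all the memory latency itself, in software" might look like, here's a little double-buffering sketch where the next batch of fragments is fetched while the current one is shaded. The fetch/shade functions and the batch format are stand-ins I made up, not any real Cell API:

```python
# Hypothetical double-buffering sketch: overlap fetching the next batch of
# fragments with shading the current one, so the ALUs are not left stalled
# waiting on memory. fetch_batch/shade_batch are invented stand-ins.

import threading, queue

def fetch_batch(batch_id):
    """Pretend DMA: pull a batch of fragment data from main memory."""
    return [f"fragment{batch_id}_{i}" for i in range(4)]

def shade_batch(batch):
    """Pretend ALU work on a batch that is already resident locally."""
    return [frag.upper() for frag in batch]

def render(num_batches):
    results = []
    pending = queue.Queue(maxsize=1)   # one batch in flight ahead of the shader

    def prefetcher():
        for b in range(num_batches):
            pending.put(fetch_batch(b))      # runs ahead of the shading loop
        pending.put(None)                    # sentinel: no more batches

    threading.Thread(target=prefetcher, daemon=True).start()
    while (batch := pending.get()) is not None:
        results.extend(shade_batch(batch))   # compute overlaps the next fetch
    return results

print(render(3))
```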

Using a normal shading language (like HLSL) is good for compatibility, but it seems like you'd be missing a chance to use all the extra power of the APUs. I suppose they will be extending cg / HLSL to expose the extra power.

Anyway, it sounds like there are a couple of PhD theses' worth of work in getting that architecture to run HLSL/cg fast! :D
 
FatherJohn said:
All this optimization work is unfortunately kind of pointless. It turns out that modern GPU ALUs are really fast relative to memory access speeds. As a result, all the sophisticated optimization doesn't usually buy you very much in terms of real-world performance. Your shaders tend to end up vertex or texture-fetch or fill bound, rather than ALU bound. As a result, extra unoptimized ALU instructions don't affect performance very much.

Could the optimisations be more about scheduling code to max out the ALUs? While it's true that operations are load bound, properly feeding all ALUs (akin to properly feeding the superscalar core in a Pentium) is also crucial to reach theoretical performance. Without proper optimisation, it could be that Cell is fed 'general purpose' code, which is pretty much sequential. That couldn't be good for performance even if computing systems are generally load bound.
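
Roughly what I mean by scheduling, as a toy sketch: interleave two independent dependency chains so that no result is consumed in the very next slot after it's produced. The instructions and the one-slot latency model are invented for illustration, not how any real Cell scheduler works:

```python
# Toy greedy scheduler: pick any ready op whose sources were not produced in
# the immediately preceding slot (pretend one-slot result latency), inserting
# a stall only when nothing independent is available.

def schedule(ops):
    """ops: list of (name, dest, sources); constants are named c*."""
    scheduled, last_dest, remaining = [], None, list(ops)
    while remaining:
        produced = {o[1] for o in scheduled}
        for op in remaining:
            deps_ready = all(s in produced or s.startswith("c") for s in op[2])
            if deps_ready and last_dest not in op[2]:
                break
        else:
            op = ("nop", None, ())          # nothing independent: stall slot
        if op[0] != "nop":
            remaining.remove(op)
        scheduled.append(op)
        last_dest = op[1]
    return [o[0] for o in scheduled]

ops = [("mul_a", "t0", ("c0", "c1")),   # chain A: t0 feeds t1
       ("add_a", "t1", ("t0", "c2")),
       ("mul_b", "t2", ("c3", "c4")),   # chain B, independent of chain A
       ("add_b", "t3", ("t2", "c5"))]
print(schedule(ops))   # -> ['mul_a', 'mul_b', 'add_a', 'add_b'] (no stall slots)
```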

But I have to say, a job to decompose high level shader code to run parallel on Cell... have fun! :)
 
JF_Aidan_Pryde said:
But I have to say, a job to decompose high level shader code to run parallel on Cell... have fun! :)
what if shaders (pixel shaders at least) don't run on CELL?
We still don't know if PS3 GPU is CELL based.

ciao,
Marco
 
nAo said:
JF_Aidan_Pryde said:
But I have to say, a job to decompose high level shader code to run parallel on Cell... have fun! :)
what if shaders (pixel shaders at least) don't run on CELL?
We still don't know if PS3 GPU is CELL based.

ciao,
Marco

If they aren't going to use Cell for shaders, then what the heck is all that floating point capacity going to be used for? There's way too much of it for just vertex shaders.

And the early patents always said "APUs modified for graphics" or something like that, which seems to indicate that they are trying to use Cell for pixel shaders as well as for vertex shaders.
 
FatherJohn said:
nAo said:
JF_Aidan_Pryde said:
But I have to say, a job to decompose high level shader code to run parallel on Cell... have fun! :)
what if shaders (pixel shaders at least) don't run on CELL?
We still don't know if PS3 GPU is CELL based.

ciao,
Marco

If they aren't going to use Cell for shaders, then what the heck is all that floating point capacity going to be used for? There's way too much of it for just vertex shaders.

And the early patents always said "APUs modified for graphics" or something like that, which seems to indicate that they are trying to use Cell for pixel shaders as well as for vertex shaders.

http://www.beyond3d.com/forum/viewtopic.php?p=411740#411740 ...maybe...
 
FatherJohn said:
If they aren't going to use Cell for shaders, then what the heck is all that floating point capacity going to be used for? There's way too much of it for just vertex shaders.
Physics? LOD? Tessellation? And so on... ;)

And the early patents always said "APUs modified for graphics" or something like that, which seems to indicate that they are trying to use Cell for pixel shaders as well as for vertex shaders.
Actually there is no patent that I'm aware of that refers to some kind of customized APUs.

ciao,
Marco
 