Killzone 2 technology discussion thread (renamed)

The Cell is even more flexible than the RSX, meaning that it will enable the PS3 platform to do even more things (graphically).

Yes but that "flexibility" comes at a price.

Deano mentioned something along these lines before

Originally Posted by Deano Calver
I'm close to screaming (partly because I can't say a lot about next-gen GPUs) but hopefully this will explain enough to show GPUs are good at maths as well.

GPUs (not surprisingly) are good at what they do, so why do it on a CPU that isn't designed for the job? On their home turf (doing maths on lots of separate bits of data) they are very good.

GPUs are seriously SIMD. They have a number of units (quads are an old name for them but that distinction will go away) each working on lots of data.

So let's compare an SPU SIMD unit to an imaginary near-future GPU SIMD unit (this is meant as an order-of-magnitude thought experiment, so read nothing into the numbers).

1 FMAC instruction in an SPU will operate on 4 floats per cycle
1 FMAC instruction in a GPU unit will operate on 48 floats per cycle

If we take 4GHz for the SPU and 500MHz for the GPU, then an SPU can perform 8 times as many instructions.
So in 1 GPU cycle:
8 FMAC instructions in an SPU will operate on 32 floats
1 FMAC instruction in a GPU unit will operate on 48 floats

So even this crude back-of-the-envelope calculation shows that a GPU isn't exactly outclassed...

Just in case there's any doubt, GPUs will ship with multiple 'units' as illustrated here, just as Cell ships with multiple SPUs...

And we are not even thinking about all the 'free' data conversion, low-latency memory reads and lerps that are part of the fixed hardware...
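Just to make the arithmetic in that quote explicit, here's a minimal sketch of the back-of-the-envelope comparison using Deano's illustrative numbers (4GHz SPU, 500MHz GPU unit, 4-wide vs 48-wide FMAC). These are his thought-experiment figures, not real specs:

```cpp
#include <iostream>

int main() {
    // Deano's illustrative numbers -- order-of-magnitude only, not real specs.
    const double spu_clock_hz = 4.0e9;      // assumed 4 GHz SPU
    const double gpu_clock_hz = 0.5e9;      // assumed 500 MHz GPU unit
    const double spu_floats_per_fmac = 4;   // one SPU FMAC touches 4 floats
    const double gpu_floats_per_fmac = 48;  // one GPU-unit FMAC touches 48 floats

    // How many SPU instructions fit in one GPU cycle? 4 GHz / 500 MHz = 8.
    const double spu_instructions_per_gpu_cycle = spu_clock_hz / gpu_clock_hz;

    // Floats processed per GPU cycle on each side.
    const double spu_floats = spu_instructions_per_gpu_cycle * spu_floats_per_fmac; // 8 * 4 = 32
    const double gpu_floats = gpu_floats_per_fmac;                                  // 48

    std::cout << "SPU:      " << spu_floats << " floats per GPU cycle\n"; // prints 32
    std::cout << "GPU unit: " << gpu_floats << " floats per GPU cycle\n"; // prints 48
    return 0;
}
```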
 
Terarrim, I apologize for the name calling. It's just that you seemed to tenaciously cling to a false notion despite what actual developers were telling you. Many people in the Console Forum do this, and more often than not it is done merely to advance someone's agenda or just general fan boy drivel.

No problem, thank you for apologising. The reason I may have seemed to be clinging to a false notion was that I genuinely did not understand where he was coming from; it was only when you brought up the RSX that I understood what he meant. I was trying to explain where I was coming from in the hope that something would come out of it that would make sense of what he was saying.

Luckily enough you came out with what you said and then it clicked. Thank you!
 
Shifty, thank you for that very informative (and easily understood) post.
 
Yes but that "flexibility" comes at a price.

Deano mentioned something along these lines before

I wonder if this will be where future PCs will get their physics boost. If the GPUs are that powerful, I wonder if they could be used to calculate physics and graphics, especially as physics and graphics basically go hand in hand with each other. I would imagine that the GPUs would have to be a little less rigid than they are now to do both at the same time.

The new Nvidia CPU/GPU seems to share some similarities with the Cell in that its high-end graphics capability can be applied to more scientific matters and uses outside of "just" graphics processing (just as Cell is being used for medical imaging).

Thanks for the post :).
 
Nerve-Damage:
You really need to get a clue. It's not just a "Microsoft DX API" thing, OpenGL has these restrictions too. Read what DeanoC, Fran, nAo, ERP, et al have said about the flexibility afforded by a fixed platform.

I never stated it was the only API. :rolleyes:

I mentioned DirectX specifically because so many people love to bring it up, especially with PS3 hardware.

Edit: Very nice post Shifty.
 
Great post, Shifty!

I think that in general, it's the overall architecture, how the Cell is able to talk to the GPU and vice versa, how SPUs are completely independent entities that can take care of themselves and each other, and so on, that makes this a flexible system. This is what we'll see happen in the PC space eventually as well, I'm sure (think AMD creating a CPU that is very tightly integrated with a former-ATI type GPU).

In that configuration, it doesn't matter whether CPU/SPU/GPGPU is better at this, or Pixel Shader/Vertex Processor/SPU/CPU is better at that. What matters is that the architecture allows the programmer to determine the best locations for the bits of program he needs to run to achieve this or that effect.
 
Not sure what you're getting at.
Have you ever thought that a lot of GPUs out there have features that were never exposed because APIs (which have to embrace many GPUs) did not allow driver teams to expose them?
For example, the Xbox's GPU (NV2A) has many features that have never seen the light of day on the NV2x series, even though all these GPUs share most of their architecture.

Marco
EDIT: didn't read Shifty's post before writing this, could have saved a couple of minutes :)
 
They should make Shifty's post required reading before someone can post in the Console Technology forum.
 
Thanks for the insight Shifty. I had heard about the "constraints" of making PC games compatible across the multitude of systems before, but it was pretty cool to get a concrete example.

I don't play PC games at all, so this may sound like a rather simple question, but I was wondering if there are ever examples where a high-profile game is DirectX #-compatible while at the same time including settings to access some new GPU directly. I can kind of imagine this happening via marketing tie-ups, etc. Half Life 3 = made for Nvidia Card #! type of thing.

Thanks,
Oninotsume

On a side note, I've been playing more of Killzone 1 on the PS2 recently. I just got to the docks, and I must say I'm really enjoying the game. The AI is kind of weak, and the repeating voices are just terrible, but aside from that I think it's got great atmosphere and some really cool effects (I noticed some faux HDR a la ICO in one of the outside levels), especially for a PS2 game. The PSP Killzone on the other hand is an A+ game, so overall I'm really looking forward to seeing what Killzone 2 has to offer.
 
Thanks for the insight Shifty. I had heard about the "constraints" of making PC games compatible across the multitude of systems before, but it was pretty cool to get a concrete example.

I don't play PC games at all, so this may sound like a rather simple question, but I was wondering if there are ever examples where a high-profile game is DirectX #-compatible while at the same time including settings to access some new GPU directly. I can kind of imagine this happening via marketing tie-ups, etc. Half Life 3 = made for Nvidia Card #! type of thing.

Yup. I know that John Carmack did that for Doom 3... he implemented different rendering paths for different classes of hardware, according to their strengths.

I imagine it's pretty common for developers to do that, though I don't know from any personal experience.
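For what it's worth, here's a rough sketch of what that kind of path selection might look like. The capability flags and path names below are purely illustrative, loosely modelled on the kinds of backends Doom 3 was reported to have, and are not anyone's actual engine or driver code:

```cpp
#include <iostream>
#include <string>

// Hypothetical capability report -- the flags and names are illustrative,
// not a real engine or driver API.
struct GpuCaps {
    bool fragmentPrograms;   // ARB_fragment_program-class hardware
    bool registerCombiners;  // NV2x-class hardware
};

// Pick the most capable rendering path the hardware exposes,
// falling back to progressively simpler ones.
std::string pickRenderPath(const GpuCaps& caps) {
    if (caps.fragmentPrograms)  return "ARB2-style path (per-pixel lighting, fewer passes per light)";
    if (caps.registerCombiners) return "NV20-style path (lighting split across multiple passes)";
    return "fixed-function fallback path";
}

int main() {
    GpuCaps modernCard{true, true};
    GpuCaps olderCard{false, true};
    std::cout << pickRenderPath(modernCard) << "\n";
    std::cout << pickRenderPath(olderCard) << "\n";
    return 0;
}
```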
 
I have to chime in with the praise for Shifty's post, very nice.

Fran said:
The main limitation I've found in a deferred renderer is that when you are happy about the way you have nicely packed all your parameters in the G-buffer... an artist always comes to you with that new parameter you haven't thought about.
If experience has taught me anything, it's that too much freedom with tools stifles artists' creativity.
Hence in such a case, you should either bitchslap them, or train them in advance to never even dare suggest violating the sacred balance of specification you've worked so hard on. :p

jonabbey said:
Yup. I know that John Carmack did that for Doom 3... he implemented different rendering paths for different classes of hardware, according to their strengths.
Those rendering paths were still done under limitations of the respective API used.
 
They should make Shifty's post required reading before someone can post in the Console Technology forum.
:oops: :mrgreen: I'm glad it's been received well, as that means it's done its job. It just occurred to me that a number of people aren't being daft or awkward, but just haven't got a full and proper education. The education they're sure of is sort of right, in a watered-down kind of way. It's like being at school and being told in biology 'such and such happens'. You explain how it works to your mates. Then you go to college and are told 'well, it's not actually like that. Really it's like this...' And then you get to university and are told 'it's not like they taught you in college. It's actually more complex...'! At each step you are educated and discuss things intelligently on the understanding you have, but end up being wrong nonetheless! Most people come to B3D with that school education (me included), but as we're aiming for a higher level of understanding here, they come a cropper over principles and ideas the rest of us probably take for granted.

It may be worth B3D compiling a series of articles accessible to all and sundry explaining certain basics, like clock speeds and APIs and processing terms like IOP and OoOP, and pointing newbies there. That'd probably save a lot of aggro on both sides (if people actually read the articles, that is!), with fewer posts containing (what we consider obvious) technical faults and fewer posts correcting them!
 
If experience has taught me anything, it's that too much freedom with tools stifles artists' creativity.
Hence in such a case, you should either bitchslap them, or train them in advance to never even dare suggest violating the sacred balance of specification you've worked so hard on. :p

I can't help it, when an artist comes to me asking for something I can't say no :D
If it's remotely feasible, of course. I like to give artists as much freedom as they want and then bitchslap a tool programmer to make it easy to use.

And I don't love specifications set in stone, because they are so hard to predict in advance; I like simple and flexible systems that I can easily expand when needed. That's probably the main reason why I completely disagree with that statement about Deferred Rendering.
That said, what I really like about a deferred approach is the clear separation between two different rendering phases: laying down the G-buffer and then lighting. It makes things (apparently) so simple. I would choose a deferred approach for this design simplicity over a forward approach.
But, at the end of the day, to handle special cases you always need a forward renderer, so you need both to achieve that "rich dynamic environment". That's why I currently prefer a forward renderer mixed with some deferred ideas here and there (the main shadow for example).
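To make the two-phase idea concrete, here's a tiny sketch of the structure being described. The G-buffer layout, attribute names and "lighting maths" below are all made up for illustration and are not Fran's actual renderer:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative G-buffer texel -- the packing and parameter names are made up
// for this sketch.
struct GBufferTexel {
    float depth     = 1.0f;                 // view-space depth
    float normal[3] = {0.0f, 0.0f, 1.0f};   // surface normal
    float albedo[3] = {0.5f, 0.5f, 0.5f};   // base colour
    float roughness = 0.8f;
    // A rigid packing: "that new parameter you haven't thought about"
    // has to be squeezed in here somewhere, which is the pain point above.
};

struct Light {
    float colour[3];
};

int main() {
    const std::size_t width = 4, height = 4;             // tiny stand-in framebuffer
    std::vector<GBufferTexel> gbuffer(width * height);
    std::vector<float> frame(width * height, 0.0f);      // accumulated red channel only

    // Phase 1: lay down the G-buffer. A real renderer rasterises the scene
    // here, writing material parameters instead of final colour; in this
    // sketch every texel just keeps its default values.

    // Phase 2: lighting. Each light reads the G-buffer and accumulates into
    // the frame, so its cost scales with screen coverage rather than with
    // scene complexity -- the separation described above.
    const std::vector<Light> lights = {{{1.0f, 1.0f, 1.0f}}, {{0.2f, 0.2f, 0.8f}}};
    for (const Light& light : lights)
        for (std::size_t i = 0; i < gbuffer.size(); ++i)
            frame[i] += gbuffer[i].albedo[0] * light.colour[0]; // trivial stand-in for a BRDF

    std::cout << "centre texel (red): " << frame[width * height / 2] << "\n";

    // Transparencies, subsurface scattering and other special cases would
    // still need a forward pass on top of this, as noted above.
    return 0;
}
```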
 
Actually, there are a few games on the 360 with a deferred approach, be it purely deferred or hybrid. PD0 and Crackdown spring to mind.

I just don't think that a pure deferred renderer is always the best way to go if you want a rich and dynamic environment with lots of dynamic lighting. Too many limitations, too inflexible and you still need a forward renderer to treat many special cases (transparencies, interesting materials like subsurface scattering for example).

The main limitation I've found in a deferred renderer is that when you are happy about the way you have nicely packed all your parameters in the G-buffer... an artist always comes to you with that new parameter you haven't thought about. That's not a position I want to be in.


Knowing next to nothing about renderers, I thought that the main point of a deferred renderer is its simplicity. I've heard that it takes a week to get a deferred renderer up and running compared to a month for a forward one, and that when you want to use advanced lighting with many lights things are much easier with a deferred renderer, where the rendering cost is supposed to be easy to calculate. So is it a flexibility issue that is the negative part of a deferred renderer?...
 
:oops: :mrgreen: I'm glad it's been received well, as that means it's done its job. It just occurred to me that a number of people aren't being daft or awkward, but just haven't got a full and proper education. The education they're sure of is sort of right, in a watered-down kind of way. It's like being at school and being told in biology 'such and such happens'. You explain how it works to your mates. Then you go to college and are told 'well, it's not actually like that. Really it's like this...' And then you get to university and are told 'it's not like they taught you in college. It's actually more complex...'! At each step you are educated and discuss things intelligently on the understanding you have, but end up being wrong nonetheless! Most people come to B3D with that school education (me included), but as we're aiming for a higher level of understanding here, they come a cropper over principles and ideas the rest of us probably take for granted.

It may be worth B3D compiling a series of articles accessible to all and sundry explaining certain basics, like clock speeds and APIs and processing terms like IOP and OoOP, and pointing newbies there. That'd probably save a lot of aggro on both sides (if people actually read the articles, that is!), with fewer posts containing (what we consider obvious) technical faults and fewer posts correcting them!

You could go on with a second post of the month about what a "deferred renderer" is ;) If not, any volunteers, in a few words?
 