Killzone 2 technology discussion thread (renamed)

Status
Not open for further replies.
So if we go back to the original Killzone 2 dev posts, we can speculate that either:

The RSX has some of the Nvidia G80 components.

or

The Cell is able to emulate DX10 functions (or something completely new) for this type of thing to work?
Oh please, not this thing with the software layers that don't even exist in the space you're talking about. Get that clutter out of your brain, it's not useful anymore.

If you have access to the multisample buffer's memory (check), know how it's laid out (check), and are certain that these parameters are never going to change (check), that alone enables you to resolve the buffer, in any way you please. Agree/Disagree?
 
Next generation gaming brought high resolutions, very complex environments and large textures to our living rooms. With virtually every asset being inflated, it’s hard to use traditional forward rendering and hope for rich, dynamic environments with extensive dynamic lighting.

I completely disagree with this statement.
 
I think it'd still be false. Look at all the games out there not using deferred rendering...

Wait, tell me what games I missed so far that had "rich, dynamic environments with extensive dynamic lighting"??? I'll order them right now!
 
I think it'd still be false. Look at all the games out there not using deferred rendering...

Actually, there are a few games on the 360 with a deferred approach, be it purely deferred or hybrid. PD0 and Crackdown spring to mind.

I just don't think that a pure deferred renderer is always the best way to go if you want a rich and dynamic environment with lots of dynamic lighting. Too many limitations, too inflexible and you still need a forward renderer to treat many special cases (transparencies, interesting materials like subsurface scattering for example).

The main limitation I've found in a deferred renderer is that just when you are happy with the way you have nicely packed all your parameters into the G-buffer... an artist always comes to you with that new parameter you haven't thought about. That's not a position I want to be in.
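The packing headache described above can be shown with a toy example (a hypothetical single RGBA8 G-buffer target; real engines use several targets with layouts all their own): once every 8-bit channel is assigned, a new per-material parameter simply has nowhere to live.

```python
import struct

def pack_gbuffer_texel(normal_x, normal_y, spec_power, material_id):
    """Quantize three [0,1] parameters to 8 bits each and pack them,
    with an 8-bit material id, into one 32-bit RGBA8 texel."""
    def to_u8(v):  # map [0.0, 1.0] onto [0, 255]
        return max(0, min(255, int(round(v * 255))))
    return struct.pack("4B", to_u8(normal_x), to_u8(normal_y),
                       to_u8(spec_power), material_id & 0xFF)

def unpack_gbuffer_texel(texel):
    nx, ny, spec, mat = struct.unpack("4B", texel)
    return nx / 255.0, ny / 255.0, spec / 255.0, mat

texel = pack_gbuffer_texel(0.5, 0.25, 1.0, 7)
print(unpack_gbuffer_texel(texel))
```

All four channels are already spoken for here, so the artist's new parameter (a subsurface term, say) forces a wider target, an extra target, or stealing bits from an existing channel.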
 
Oh please, not this thing with the software layers that don't even exist in the space you're talking about. Get that clutter out of your brain, it's not useful anymore.

If you have access to the multisample buffer's memory (check), know how it's laid out (check), and are certain that these parameters are never going to change (check), that alone enables you to resolve the buffer, in any way you please. Agree/Disagree?

Not sure what you're getting at.

The fact is that DX9 does not allow MSAA with deferred rendering. Or, if you do use it, the results are not good, or even worse than what you started with (see my OP).

DX10 allows for MSAA with deferred rendering, as quoted in the OP.

Since the PS3 can also use MSAA with deferred rendering, this means one of three things:

The RSX has some G80 attributes (unlikely); Cell is so flexible it can get around DX constraints and achieve DX10 results (likely); or there is a mixture of both involved (unlikely).

I would state that, because Cell has no constraints whatsoever regarding OpenGL or DX (any version), it can do things that the G80 under DX10 couldn't do. I'm not advocating that Cell is more powerful in the graphics department (with the possible exception of things like raytracing and raycasting), but it is more flexible.

I suppose that's the advantage of Cell, because it combines CPU/GPU/PPU (physics processor) in one package.
 
Actually, there are a few games on the 360 with a deferred approach, be it purely deferred or hybrid. PD0 and Crackdown spring to mind.

I just don't think that a pure deferred renderer is always the best way to go if you want a rich and dynamic environment with lots of dynamic lighting. Too many limitations, too inflexible and you still need a forward renderer to treat many special cases (transparencies, interesting materials like subsurface scattering for example).

The main limitation I've found in a deferred renderer is that just when you are happy with the way you have nicely packed all your parameters into the G-buffer... an artist always comes to you with that new parameter you haven't thought about. That's not a position I want to be in.

Yet KZ seems to have cracked having both dynamic lighting and deferred rendering, even having light shine through holes that have been shot through (i.e. a destructible environment). If KZD are using the whitelight IP from the third-party company, this would certainly go some way to explaining how they are doing this.

Since whitelight seems to be handling lighting (and shadows) dynamically, maybe this is how they are achieving such high levels of realtime lighting changes while still being able to use the deferred rendering technique.
 
So that's basically confirming what I said, yes? That Cell can get around the software restrictions that DX forces devs to work within.

No, it simply says that D3D10 exposes details about the layout of the samples in memory, so any programmer can reliably know these details, even though the actual implementation depends on the hardware and will vary based on the particular hardware.

A console, be it a PS3, an Xbox 360 or an Xbox 1, doesn't have this limitation, because the layout is fixed and we can know it and reliably use it to our advantage.

This has nothing to do with Cell going around restrictions, more to do with Consoles being fixed hardware with a thinner level of abstraction compared to PC GPUs.
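The advantage of a fixed, known layout can be sketched in miniature. Assume, purely for illustration, a 2x multisample buffer stored as interleaved per-pixel samples (this layout is an assumption for the sketch, not an RSX detail); knowing the layout is what lets you write your own resolve instead of relying on the API's fixed-function one:

```python
def custom_resolve(msaa_buf, width, height, samples=2):
    """Box-filter resolve over a buffer whose layout we fixed ourselves:
    samples for pixel p live at msaa_buf[p*samples : p*samples+samples].
    Because we own this step, we could equally resolve *after* lighting,
    which is what MSAA in a deferred renderer requires."""
    resolved = []
    for p in range(width * height):
        base = p * samples
        resolved.append(sum(msaa_buf[base:base + samples]) / samples)
    return resolved

buf = [1.0, 0.0,  0.5, 0.5]  # two pixels, two samples each
print(custom_resolve(buf, 2, 1))  # -> [0.5, 0.5]
```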
 
Wait, tell me what games I missed so far that had "rich, dynamic environments with extensive dynamic lighting"??? I'll order them right now!
All of 'em, according to the PR blurbs that accompany every game release!

Before I accept that this deferred rendering thing is anything special that manages more than conventional renderers, I'll need to see their game doing things other games aren't. Just saying they're achieving all these amazing things isn't enough for me. There are upcoming PS3 titles (cross-platform) that have destructible environments and dynamic lighting, AFAIK, with no word that they're only achieving this through deferred rendering. Then again, perhaps they are?

I'll add, as an example of my skepticism, that Factor 5 did presentations on using maquettes for amazing models, then promptly showed a game that doesn't seem to benefit from it at all. Nothing in Lair looks beyond what artists were achieving with traditional computer modelling techniques.
 
So that's basically confirming what I said, yes? That Cell can get around the software restrictions that DX forces devs to work within.

Pretty much!!

When are people going to learn that Microsoft's DirectX API doesn't have the final say on a hardware architecture's strengths or limitations?!
 
No, it simply says that D3D10 exposes details about the layout of the samples in memory, so any programmer can reliably know these details, even though the actual implementation depends on the hardware and will vary based on the particular hardware.

A console, be it a PS3, an Xbox 360 or an Xbox 1, doesn't have this limitation, because the layout is fixed and we can know it and reliably use it to our advantage.

This has nothing to do with Cell going around restrictions, more to do with Consoles being fixed hardware with a thinner level of abstraction compared to PC GPUs.

So it sounds to me like you're agreeing with me that, even though the RSX may be based on an older architecture than the G80, the PS3 will be able to do graphics effects similar to those possible with the release of DX10, yes?

And your last sentence also agrees with me. You're saying that PCs have a restriction due to the DX software and Cell does not, hence Cell can do DX10-like effects. Which was exactly my point.

Besides, even on a hardware level, while Cell is not more powerful than the G80 (as already stated), Cell is more flexible.

Correct me if I am wrong, but GPUs are for the most part fairly rigid: you feed information in one end, it goes through the processing units inside the GPU, and the result comes out the other end. Now while there is some flexibility there, the program must go through certain processor elements in a certain order.

Whereas Cell can be told to do anything, in any order you want (as long as this results in the finished product, of course). And you can program the SPEs to give almost any graphical effect you want, and you can put other SPEs to work on that program in parallel.

Basically, Cell could probably do certain DX11 tasks too, due to the flexibility of both the hardware and the software. I suppose the question would be how good it would be at those tasks, i.e. would the effects be usable in a game? Knowing Cell's maths calculation power, I would guess that they would.
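The data-parallel pattern being described, one job split across several identical cores, can be sketched generically (this is plain illustrative Python, not SPU code; the kernel, chunk size and worker count are arbitrary assumptions):

```python
def brighten(chunk, amount=0.1):
    """The per-chunk kernel: clamp-add a constant to every pixel value."""
    return [min(1.0, px + amount) for px in chunk]

def run_on_workers(pixels, num_workers=6):
    """Split the buffer into num_workers chunks, run the kernel on each
    (each chunk could execute on its own core), then stitch the results."""
    size = (len(pixels) + num_workers - 1) // num_workers
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    out = []
    for chunk in chunks:
        out.extend(brighten(chunk))
    return out

print(run_on_workers([0.0, 0.5, 0.95, 1.0]))
```

Because the chunks are independent, the per-chunk work can be scheduled on however many cores are available without changing the result.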

Taken from PS3 Forums:


There is NOTHING to support any theory other than that, anywhere on the net; any devs talking about doing effects not already possible on G70/G71 are doing them on Cell, and aren't relevant to my point. You may see certain DX10 effects in use on PS3, and you'll see other ones in use on 360, but neither machine is good enough at those specific kinds of effects to matter at all.

nAo replied:
You're so wrong. Just to give you a practical example, you must know that on NV2A, which was derived from NV20, you could have vertex shaders that generate/write data into vertex constant registers. This was not doable, and is still not doable, with any API out there; nonetheless the hardware supported that feature, and many others you've probably never heard about.


http://www.ps3forums.com/showthread.php?t=77437&highlight=cell+writes+to+rsx+cache&page=13
 
Nerve-Damage:
You really need to get a clue. It's not just a "Microsoft DX API" thing, OpenGL has these restrictions too. Read what DeanoC, Fran, nAo, ERP, et al have said about the flexibility afforded by a fixed platform.

Terarrim:
It doesn't have anything to do with Cell!!! It has to do with a fixed platform! Xbox's NV2A gpu could do things beyond what DX 8 allowed on a PC, and it had nothing to do with the Intel Celeron cpu.

Goddamn you fanboys need to pay attention to the technical discussions on this Forum.
 
So it sounds to me like you're agreeing with me that, even though the RSX may be based on an older architecture than the G80, the PS3 will be able to do graphics effects similar to those possible with the release of DX10, yes?

And your last sentence also agrees with me. You're saying that PCs have a restriction due to the DX software and Cell does not, hence Cell can do DX10-like effects. Which was exactly my point.

No, I'm not agreeing with you on either of these two points.
 
Nerve-Damage:
You really need to get a clue. It's not just a "Microsoft DX API" thing, OpenGL has these restrictions too. Read what DeanoC, Fran, nAo, ERP, et al have said about the flexibility afforded by a fixed platform.

Terarrim:
It doesn't have anything to do with Cell!!! It has to do with a fixed platform! Xbox's NV2A gpu could do things beyond what DX 8 allowed on a PC, and it had nothing to do with the Intel Celeron cpu.

Goddamn you fanboys need to pay attention to the technical discussions on this Forum.

Excuse me, I am here to learn, and I have read these forums for some time. If I am wrong, tell me and maybe I will come back and discuss it with you. I don't take kindly to being called a "fan boy" (or to insulting anyone, for that matter!). While I do love the PS3 architecture, I am not slamming any other platform, now am I?

Besides, the RSX is flexible in that you can go down to the metal (as said at GDC 06, if I remember correctly), and Cell is even more flexible than the RSX, meaning it will enable the PS3 platform to do even more things (graphically).

I do now see, however, the point that was being made (I didn't quite understand where Fran was coming from): what you're saying is that the RSX is probably capable of doing deferred rendering and MSAA at the same time without Cell's help (for the reasons you give).

Please do not call me a "Fan boy" again.

If I am out of line, report me, and if I am in the wrong, I am sure a moderator will send me a PM. However, I have been civil in discussing this with Fran and have not once stooped to name-calling etc.
 
Terarrim, I apologize for the name calling. It's just that you seemed to tenaciously cling to a false notion despite what actual developers were telling you. Many people in the Console Forum do this, and more often than not it is done merely to advance someone's agenda or just general fan boy drivel.
 
And your last sentence also agrees with me. You're saying that PCs have a restriction due to the DX software and Cell does not, hence Cell can do DX10-like effects.
It's not Cell! It's the whole system. You need to understand the idea of a software abstraction layer. On a PC you can have all sorts of different chips in it. The only ways to make games run on all those different builds of PCs are either 1) Write 2000 different versions of your game to cater to all the different hardware combinations (actually way more than 2000!), or 2) Use a software abstraction layer. This allows developers to write general code, and the hardware manufacturers provide translation software (drivers) which turn that general code into specific instructions for each chip in the PC.

A very simple example: to draw a circle on the screen, for GPU 1 you might write something like

Code:
SetPoint 400,300
SetRadius 50,50
DrawCircle

And for GPU 2 you might write

Code:
Do
   x = 400 + sin(angle)*50
   y = 300 + cos(angle)*50
   DrawPoint x,y
   angle = angle + 1
Repeat until angle = 360

These are not at all indicative of GPU programming, but they illustrate the point! If your game draws a circle on the screen, you'll need to include both bits of code in your game and choose the right one for the GPU the player has; the two are totally incompatible otherwise. Now if you add a software abstraction layer, the developer can write:

Code:
DX_DrawCircle(400,300,50,50)

GPU 1's driver translates that into its code and executes that on the GPU. GPU 2's driver takes exactly the same DX instruction but translates it differently, so GPU 2 can understand it and draw a circle exactly the same.
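That translation step can be sketched as a dispatch table: the "driver" decides which GPU-specific routine one generic call maps onto (all names here are hypothetical, continuing the circle example above):

```python
import math

def gpu1_draw_circle(x, y, rx, ry):
    # GPU 1 has a native circle primitive: three commands total.
    return [("SetPoint", x, y), ("SetRadius", rx, ry), ("DrawCircle",)]

def gpu2_draw_circle(x, y, rx, ry):
    # GPU 2 only draws points, so its "driver" emits one per degree.
    return [("DrawPoint",
             x + math.sin(math.radians(a)) * rx,
             y + math.cos(math.radians(a)) * ry) for a in range(360)]

def make_api(driver):
    # The abstraction layer: the game only ever sees dx_draw_circle.
    return {"dx_draw_circle": driver}

for driver in (gpu1_draw_circle, gpu2_draw_circle):
    api = make_api(driver)
    commands = api["dx_draw_circle"](400, 300, 50, 50)
    print(driver.__name__, len(commands))
```

The game code is identical in both cases; only the installed driver changes which command stream the hardware receives.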

The benefit of DirectX is a standard interface for writing to the hardware that works for all the different GPUs etc. as long as the drivers are there (ha ha ha! Well, in theory...). The downside is that it limits you to what you can do on the hardware. Imagine GPU 2 has another feature where it can change the colour of the point drawn...

Code:
Do
   x = 400 + sin(angle)*50
   y = 300 + cos(angle)*50
   ChangeColour (random colour)
   DrawPoint x,y
   angle = angle + 1
Repeat until angle = 360

In this GPU code, the colours that draw the circle are randomized. However, the (imaginary) DX command DX_DrawCircle doesn't include the instruction to change the colour. Using DX thus prevents the developer from using the complete feature set of the GPU.

This is the key point!

The API makes things easier for developers, but also adds limits. GPUs are sold on the level of hardware acceleration they provide for certain features that DirectX implements, e.g. you can use pixel shaders in DirectX, but limited to a certain number of instructions. If DX has a limit of 256 instructions for a pixel shader, and your GPU actually has a limit of 65,536 instructions, DX will prevent you from using the GPU to its full capacity. If you don't use DirectX, you can use a pixel shader of up to the 65,536 instructions the GPU supports.

This isn't an option for PCs. They need an interface layer because all the hardware differs. GPU manufacturers thus target DirectX standards when they promote their hardware: this GPU is DirectX 9 compatible, or SM3.0, or DX10. These are rough bands though, and don't give an exact description of what the hardware is capable of. A part can be called DX9, for example, yet run some parts of DX9 too slowly to be of any use. When you take a part and plug it into a closed box, where the devs don't need to use an abstraction layer, these rough bands mean even less.

Xenos is a great example of this. In the PC space it would be called a DirectX 9 part because it doesn't feature all the bits needed to be DX10, but it has more features than a DX9 GPU. The end result is people trying to call it a DX9.5 or suchlike GPU. This is nonsense! DX is a software interface, not a GPU technical specification; there's no such thing as DirectX 9.5! That's like saying a boxer of 71 kg comes between welterweight (68 kg) and middleweight (74 kg), so he weighs welterweight-and-a-half. The way you'd actually class this boxer's weight is "he weighs 71 kg". The way you'd class a GPU properly is "it has these features", but of course those feature lists are long and complicated and will confuse people anyhow, so the graphics card makers sell them in the broad band of DXn.

Even more importantly, you can use the hardware in the closed box however you want. You know exactly what chips you have, and can write code for them, time how long it takes to do stuff, and pass data between them. If you have the above GPU 2 in there, you can write code for it that uses the colour-changing function, without limiting yourself to the DirectX instruction. If you have a sound processor that can take a sample input, apply an effect, and output it back into memory, you could, if you chose, feed it some texture data, have it process the data as though it were a sample, and take the resultant output and use that as a texture. On the PC you probably couldn't do that, because the software abstraction doesn't allow it, and even if you could, the system isn't set up for nicely sharing data between processors.

On a PC, where a DirectX 9 GPU like RSX doesn't have geometry shaders, a DX10 game that includes them won't get those effects. In the closed-box console, the developer can implement geometry shaders themselves using the CPU in combination with the GPU (or whatever other processors are to hand). A DirectX 9 GPU in the PS3 doesn't limit the whole system to only DirectX 9 capabilities, because DirectX 9 is a software limit of the DirectX PC hardware interface that only has relevance on the PC. For this reason, consoles cannot be considered as DirectX n parts. You can't call RSX DX9 and Cell DX10+; they don't run DirectX! You can categorize RSX as a DirectX 9 level GPU, as that describes certain hardware features, but that doesn't describe the limits of the box. The oft-used and most apt example of DirectX's irrelevance in gaming is the PS2. It doesn't run DirectX, and hasn't got a DirectX compatible GPU, yet the system manages to do lots of things that DirectX compatible GPUs on PCs could do. The PS2 doesn't support any shader model, yet can execute pixel and vertex shader effects.

This is why it's bad practice to refer to consoles with DX part numbers and references. What you really mean to say is "where a DirectX 10 GPU supports features x, y, z in hardware, and RSX doesn't support these features, Cell is in a position to implement these features". From that position you can then debate Cell's efficacy at these features, without referring to a broad and somewhat meaningless metric. You need to talk specifically in terms of hardware features that are independent of the software interfaces, because in a console a developer can write without recourse to software layers at all if they're feeling crazy enough!
 