Devs Speak on PS3.0

ChrisRay said:
What I find curious is that people want the opinions of people not in Nvidia's TWIMTBP program, yet TWIMTBP probably affects the majority of big game titles. Shouldn't people care how the people in this program feel about these features?

I would be more interested in their opinions if I were sure that was really what I was reading.

They are in a situation where they can't really say anything negative about Nvidia features.
 
AlphaWolf said:
ChrisRay said:
What I find curious is that people want the opinions of people not in Nvidia's TWIMTBP program, yet TWIMTBP probably affects the majority of big game titles. Shouldn't people care how the people in this program feel about these features?

I would be more interested in their opinions if I were sure that was really what I was reading.

They are in a situation where they can't really say anything negative about Nvidia features.

Perhaps, but that doesn't mean they have to say anything positive.

You guys read way too much into what these IHV-sponsored programs actually entail for the developer.
 
ChrisRay said:
What I find curious is that people want the opinions of people not in Nvidia's TWIMTBP program, yet TWIMTBP probably affects the majority of big game titles.

Because there are bigger speed differences between ATI and NV on non-TWIMTBP games than on TWIMTBP ones.
 
reever said:
ChrisRay said:
What I find curious is that people want the opinions of people not in Nvidia's TWIMTBP program, yet TWIMTBP probably affects the majority of big game titles.

Because there are bigger speed differences between ATI and NV on non-TWIMTBP games than on TWIMTBP ones.


So a game that isn't optimised for said hardware doesn't perform as well on it? Now that's a big shock. No offense, but what's your point?

I never said to ignore people not in Nvidia's TWIMTBP campaign; I just said it's silly to sit there and pretend the campaign doesn't exist.

It's very real, it's not going anywhere, and ignoring it is like covering your ears and going "la la la la". Plenty of games are in this campaign, and I don't think ignoring them is a good idea at all.
 
I get the feeling those campaigns are very different for devs than they are when aimed at consumers.

Can anyone give me some insight into what it actually means for devs?
I guess they're not trying to impress the game devs with shiny stickers and loose PR mumbo jumbo like you would a consumer...
 
I find it hilarious that Nvidia decides to push PS2.0/3.0 and HDR now that they have hardware that actually supports it, whereas the R300 has supported these features for more than two years already.

It is also distressing to see devs cave in to marketing strategies like the TWIMTBP BS for money. In the end it only hurts the consumer.
 
hstewarth said:
Anybody who still believes that PS2.0 is better than PS3.0 should read this article. From a development standpoint, I get the feeling that PS3.0 is hundreds of times better than PS2.0 because of the flow control and other features like multiple light sources.

I don't think anyone has said, or even thinks, that PS2.0 is better than PS3.0; the interesting question is whether it's a big step forward or a small one. "Hundreds of times better" is certainly an exaggeration. IMO it has one big new thing, and that's flow control. Another interesting addition is the gradient instructions. Beyond that there's a bunch of minor tweaks. In total it ends up, in my opinion, as a smaller step forward than, say, 1.4 to 2.0. There are other features I'm more interested in, like floating-point blending.

Consider also the points Andrey Khonich of Crytek made. Most of these are just performance optimisations and don't enhance rendering quality:

- Handle several light sources in a single pixel shader by using dynamic loops in PS3.0. Convenience, and a little about performance.
- Decrease the number of passes for two-sided lighting using the additional face register in PS3.0. Performance. Could also be done in a single pass in PS2.0, though it's simpler in PS3.0.
- Use geometry instancing to decrease the number of draw calls (remove CPU limitations as much as possible). Performance (see the sketch after this list).
- Unrestricted dependent texture reads to produce more advanced post-processing effects and other complex in-game particle/surface effects (like water). Quality-enhancing, though he's unspecific.
- Full swizzle support in PS3.0 for better instruction co-issue and, as a result, a speedup. Performance.
- Increase the quality of lighting calculations using 32-bit precision in pixel shaders on NV40. Quality, but hardly noticeable.
- Take advantage of the 10 texture interpolators in the PS3.0 shader model to reduce the number of passes in some cases. There are 10 interpolators in PS2.0 too, but two are color registers; the difference will seldom be significant.
- Easily do multiple-pass high-dynamic-range rendering on FP16 targets. A great feature for sure, but it has nothing to do with PS3.0.
- Speed up post-processing a lot by using MRT capabilities and mip-mapping support on non-power-of-two textures. A great feature for sure, but again it has nothing to do with PS3.0.
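
On the draw-call point, geometry instancing is a purely API-side change (under DirectX 9 it is exposed on SM3.0-class hardware). Below is a minimal sketch of how it might be set up; the struct layouts, buffer names and the DrawInstanced function are illustrative assumptions, not code from any actual engine, and the device and buffers are assumed to be created elsewhere.

```cpp
#include <d3d9.h>

// Illustrative vertex layouts; the real ones depend on the vertex declaration.
struct MeshVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[4][4]; };   // e.g. a per-instance world matrix

// Draw `instanceCount` copies of one mesh with a single draw call instead of
// issuing one DrawIndexedPrimitive per copy.
void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexDeclaration9* decl,
                   IDirect3DVertexBuffer9* meshVB,
                   IDirect3DVertexBuffer9* instanceVB,
                   IDirect3DIndexBuffer9* meshIB,
                   UINT vertexCount, UINT triCount, UINT instanceCount)
{
    dev->SetVertexDeclaration(decl);

    // Stream 0: the mesh geometry, replayed once per instance.
    dev->SetStreamSource(0, meshVB, 0, sizeof(MeshVertex));
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instanceCount);

    // Stream 1: per-instance data, advanced by one element per instance.
    dev->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

    dev->SetIndices(meshIB);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore the default (non-instanced) stream frequencies.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```

The vertex shader reads the per-instance data (here a world matrix) from stream 1 instead of from shader constants, so the CPU no longer has to update constants and issue a separate draw call for every copy of the mesh.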
 
I rather think the real issue is whether it will be used to any real extent in the near future.
It stands to reason that SM3.0 is an improvement over SM2.0, but if it's not utilised, it will all be for nothing in the end.

And ATI not supporting it will probably hamper software support in some way, perhaps even to the extent that we won't see any really heavy use of it until the next (next) generation.
By that time there might be a new shader model with hardware support from both Nvidia and ATI, reducing the importance of SM3.0 to a small footnote in history.
 
Except that Shader Model 3.0 really completes shader functionality.

Anything after this is just building on what's there.
We'll see unified vertex and pixel shaders (removing the few asymmetries) and perhaps another programmable unit for HOS (higher-order surfaces), but the only thing really missing from Shader Model 3.0 is arbitrary output (being able to compute your output address), and removing that restriction (which I don't see happening in DX10) basically makes you a general-purpose CPU.

The majority of the functionality we're ever going to see in shaders is available in 3.0 (the one real exception is being able to use the frame-buffer color as an input). 3.0 -> 4.0 just isn't going to be the same leap as 1.0 -> 2.0, or even 2.0 -> 3.0.
 
ChrisRay said:
So a game that isn't optimised for said hardware doesn't perform as well on it? Now that's a big shock. No offense, but what's your point?

I never said to ignore people not in Nvidia's TWIMTBP campaign; I just said it's silly to sit there and pretend the campaign doesn't exist.

It's very real, it's not going anywhere, and ignoring it is like covering your ears and going "la la la la". Plenty of games are in this campaign, and I don't think ignoring them is a good idea at all.

What about developers removing features just because of your device ID, limiting resolutions, lowering details, or loading lower-quality shaders (EA Games)? What about not even letting someone with a true DX9 card run a demo (Gunmetal)? What about the retail game 'Bridge It', which doesn't run on ATI cards at all and requires an Nvidia card? How about releasing a patch that shows Nvidia FX cards in a different light (TRAOD)?

Yes, I ignore it all right; I do my best not to buy their titles... because these little games made sense years ago, when some hardware really was more advanced, but that's not the case with today's hardware.

If I pay $50 for a game title and I own a DX9 card, it had better perform the same as it does for an Nvidia card owner. The product is the game code, and as a consumer I should not be forced to buy 'inferior' hardware because of some cash-payout BS.

TWIMTBP can kiss my ass.
 
ANova said:
I find it hilarious that Nvidia decides to push PS2.0/3.0 and HDR now that they have hardware that actually supports it, whereas the R300 has supported these features for more than two years already.

It is also distressing to see devs cave in to marketing strategies like the TWIMTBP BS for money. In the end it only hurts the consumer.



I bet that if ATI had PS3.0 and Nvidia didn't, they would push it too. It's all about marketing; both companies will lie their asses off to outdo the other.

So are the devs under ATI's GITG program any different from the devs under TWIMTBP? They're all under the same 'BS for money'.
 
Doomtrooper said:
ChrisRay said:
So a game that isn't optimised for said hardware doesn't perform as well on it? Now that's a big shock. No offense, but what's your point?

I never said to ignore people not in Nvidia's TWIMTBP campaign; I just said it's silly to sit there and pretend the campaign doesn't exist.

It's very real, it's not going anywhere, and ignoring it is like covering your ears and going "la la la la". Plenty of games are in this campaign, and I don't think ignoring them is a good idea at all.

What about developers removing features just because of your device ID, limiting resolutions, lowering details, or loading lower-quality shaders (EA Games)? What about not even letting someone with a true DX9 card run a demo (Gunmetal)? What about the retail game 'Bridge It', which doesn't run on ATI cards at all and requires an Nvidia card? How about releasing a patch that shows Nvidia FX cards in a different light (TRAOD)?

Yes, I ignore it all right; I do my best not to buy their titles... because these little games made sense years ago, when some hardware really was more advanced, but that's not the case with today's hardware.

If I pay $50 for a game title and I own a DX9 card, it had better perform the same as it does for an Nvidia card owner. The product is the game code, and as a consumer I should not be forced to buy 'inferior' hardware because of some cash-payout BS.

TWIMTBP can kiss my ass.


I think everyone should read this quote. It's a perfect example of my "covering your ears" analogy, and exactly why people should care what developers have to say.

Thank you, Doomtrooper, for providing a better example than I ever could have.
 
Some of these comments are very disturbing, especially the ones where they say that dynamic flow control allows them to use one shader where they previously had to use several, and hence require fewer state changes, thus improving performance.

It's that kind of flawed thinking that annoys the hell out of me with a lot of game programmers out there :)

What they're doing is taking a very small, per-object cost (state changes), and changing it into a very big, per-pixel cost. So, instead of figuring out exactly which texture and shader path is required to render a chunk of geometry on the CPU (which excels at this type of stuff) on a per-object basis, they are submitting everything to the GPU and letting it figure out which to use on a per-pixel basis; it's doing the same operations as before, only now you're doing it at every single pixel rather than just once.

Great "optimization" there. Now, static flow control on the other hand...
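
For contrast, here is a rough sketch of the per-object approach being described above. Everything in it (the DrawItem record, the submit callback, the sorting scheme) is a hypothetical illustration, not anyone's actual engine code: the material decision is resolved once per object on the CPU, one of a handful of specialized shaders is bound, and draws are sorted so each shader change is paid once per group rather than once per pixel.

```cpp
#include <algorithm>
#include <vector>
#include <d3d9.h>

// Hypothetical per-object record: which specialized shader variant it needs
// and a callback that issues its draw call(s).
struct DrawItem
{
    IDirect3DPixelShader9* shader;
    void (*submit)(IDirect3DDevice9*);
};

// The branch is taken once per object on the CPU; sorting by shader means the
// SetPixelShader state change is paid once per group of objects, instead of
// evaluating the same condition at every pixel inside an uber-shader.
void DrawScene(IDirect3DDevice9* dev, std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.shader < b.shader; });

    IDirect3DPixelShader9* current = nullptr;
    for (const DrawItem& item : items)
    {
        if (item.shader != current)      // small, per-group cost
        {
            dev->SetPixelShader(item.shader);
            current = item.shader;
        }
        item.submit(dev);                // shader itself contains no dynamic branch
    }
}
```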



The biggest benefit I can see coming from PS3.0 is doing normal CPU work on the GPU, especially now that FP32 is required for PS3.0 (so you can almost swap CPU and GPU algorithms and expect 'pretty much' the same results, except for the parts where the FP32 implementation doesn't exactly correspond to the IEEE standard).

With things progressing as they are on both the CPU and GPU ends, within a few years of pretty much any game's release it is CPU-bound on the highest end of both with settings maxed. The solution here is to either A) have the IHVs support higher resolutions with more anti-aliasing, or B) start putting a good amount of that CPU work on the GPU, *even if* it's slightly faster to do it on the CPU. With dynamic flow control and all the extra instructions that PS3.0 provides, you can move a hell of a lot more CPU algorithms to the GPU now. Some may run a good deal slower on the GPU, since dynamic flow control is extremely expensive there, but by the time you release your product the GPUs will have gotten sufficiently fast to overcome that difference in many cases. There is, of course, a line to be drawn here: you certainly wouldn't want to swap out a CPU algorithm for a GPU-based algorithm that's 3x slower, but if you're not quite sure which one is actually faster, erring towards the GPU side is probably a safe bet.

Now, there's still the problem of getting the results of these calculations back to the CPU, but PCI Express should help there.
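
For reference, a sketch of what that readback looks like under Direct3D 9, assuming the GPU results were rendered into an FP32 render-target texture; the function name and its parameters are illustrative, not from any particular engine.

```cpp
#include <cstring>
#include <vector>
#include <d3d9.h>

// Copy a D3DFMT_A32B32G32R32F render-target texture back into system memory
// so the CPU can read values computed on the GPU.
HRESULT ReadBackResults(IDirect3DDevice9* dev, IDirect3DTexture9* rtTex,
                        UINT width, UINT height, std::vector<float>& out)
{
    IDirect3DSurface9* rtSurf  = nullptr;
    IDirect3DSurface9* sysSurf = nullptr;

    HRESULT hr = rtTex->GetSurfaceLevel(0, &rtSurf);
    if (FAILED(hr)) return hr;

    // Matching system-memory surface to receive the copy.
    hr = dev->CreateOffscreenPlainSurface(width, height, D3DFMT_A32B32G32R32F,
                                          D3DPOOL_SYSTEMMEM, &sysSurf, NULL);
    if (FAILED(hr)) { rtSurf->Release(); return hr; }

    // GPU -> system memory transfer; this stalls the pipeline, which is why a
    // faster readback path (PCI Express) matters here.
    hr = dev->GetRenderTargetData(rtSurf, sysSurf);
    if (SUCCEEDED(hr))
    {
        D3DLOCKED_RECT lr;
        hr = sysSurf->LockRect(&lr, NULL, D3DLOCK_READONLY);
        if (SUCCEEDED(hr))
        {
            out.resize(size_t(width) * height * 4);    // 4 floats per pixel
            const BYTE* src = static_cast<const BYTE*>(lr.pBits);
            for (UINT y = 0; y < height; ++y)          // honour the surface pitch
                std::memcpy(&out[size_t(y) * width * 4],
                            src + size_t(y) * lr.Pitch,
                            width * 4 * sizeof(float));
            sysSurf->UnlockRect();
        }
    }

    sysSurf->Release();
    rtSurf->Release();
    return hr;
}
```

GetRenderTargetData forces the GPU to finish and copies the data across the bus, which is exactly the synchronisation cost a faster bus reduces but doesn't eliminate.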
 
All this crazy dissing of SM3.0 is just silly.

Why should anyone impose on others their opinion of which video card to buy? It's your money; you buy what you like. If today's games don't support SM3.0, pick your card and buy it. If today's games support SM3.0, pick your card and buy it. It's that simple.

What is far more important is the availability of these new tools for developers to work with. Yes, developers take time to work with these new tools. Tim Sweeney said so himself: his game will not ship till 2006 or later. But remember, Tim Sweeney gets these new NV40 cards at approximately the same time as we do. They don't get them years before us! Without the hardware, how do you develop features for it? Simulate it?

Early adopters of any card naturally do not reap the immediate benefits of the latest technologies. That's a given. But what we should consider is how these latest cards with the latest technologies work with today's technology. If it works just as well as any competitor's, and gives some frillies, kudos to them. If not, you can always buy the competitor's. It's your money.

However, bashing SM3.0 goes nowhere, because even if you aren't excited about it, the developers are: they get to work on it now and show you the results later. Anyone working with SM2.0 today and expecting to release two years later is behind the curve.

I'm pretty sure that if you saw CGA games today, you'd wonder what kind of crack the developer was smoking. Exaggerated? Yes, but I hope you see my point.
 
Ilfirin said:
What they're doing is taking a very small, per-object cost (state changes), and changing it into a very big, per-pixel cost.

The assumption here is that a) the per-object cost is small (depends on which state you're changing; you could be talking about lots of lost cycles, especially with textures) and b) the frequency of changes per object is small.

I agree with you that you don't want to write a single shader with branches for every material in your entire game. On the other hand, to suggest that there are no scenarios where a dynamic branch (or even a predicated branch, versus using a state change and a different shader) is a performance win is going too far.

Programming is a delicate balance. I don't believe any developers are going to treat the GPU as if every operation is free and performs best with no conditions. Clearly, each effect has to be evaluated as to whether it's more optimal to do it with multiple draw calls and state switching or to switch operations within the shader itself.
 
DemoCoder said:
The assumption here is that a) the per-object cost is small (depends on which state you're changing; you could be talking about lots of lost cycles, especially with textures) and b) the frequency of changes per object is small.

Bit of a vocabulary difference here: when I say 'object', that's after the mesh has already been split up into different surfaces and such. So there is at most one state-block change per object, using this definition.

And I find it very hard to believe that a single state-block change could be slower than doing the equivalent selection at every pixel. State changes only become expensive when you do them a lot. In my experience (and this *is* something that varies wildly from application to application), 1000 batches per frame vs. 100 batches per frame costs only a couple of frames per second; a handful of extra pixel shader instructions, on the other hand, is usually more expensive. And if the initial performance estimates are to be believed, dynamic flow control is a hell of a lot more expensive than a handful of pixel shader instructions.

Now, in an RTS where you're talking about tens to hundreds of thousands of batches per frame, that might be a different story.

This is basically the same old "where's the bottleneck?" question in a different format :)
 
I think a few of the criticisms made concerning what the developers had to say, especially the ones in the vein of 'it's just marketing speak', might be off base.

One thing I found lacking from those posts is the recognition that these developers are speaking as developers, about what they will need, rather than about what's needed now -- except in the sense of laying down a foundation.

For example, Mr. Sweeney's statement that anything less than 32-bit FP is unacceptable these days is correct if one puts it in context: he is a developer making a game due out in two years, by which time anything below 32-bit precision will be unacceptable.

That is to say, many of these comments don't seem all that "slanted". I think an issue that has become pervasive on this board is not so much bias as the failure to apply the principle of charity when interpreting the words of others.
 
Rican said:
I bet that if ATI had PS3.0 and Nvidia didn't, they would push it too. It's all about marketing; both companies will lie their asses off to outdo the other.

So are the devs under ATI's GITG program any different from the devs under TWIMTBP? They're all under the same 'BS for money'.

I understand; however, GITG is nowhere near as big and widespread as Nvidia's TWIMTBP. I have yet to see a game include a GITG video blatantly advertising for ATI in its startup process. Nvidia also pays developers off, or 'offers' to lend them a hand, in exchange for keeping their mouths shut about any disadvantages of Nvidia's products and helping to promote them. Let's face it, Nvidia's PR department is powerful; a very large sum of money gets fed into it.
 
Chalnoth said:
GraphixViolence said:
Um, how exactly does multi-pass HDR rendering require FP16 blending? If you're multi-passing, you can just render to intermediate FP16 render targets and blend in the pixel shader on a subsequent pass. Or FP24 render targets, if you have a Radeon.
Well, first of all, there are no FP24 render targets. It's all FP16 or FP32 (the R3xx would simply convert the internal FP24 value to a FP32/FP16 value before output).
Right, so, the Radeon can still get higher precision for HDR calculations. Correct?

Anyway, blending is a potentially huge performance improvement over static render targets. Specifically, if you don't use blending, you'd need to render each and every triangle to its own texture to properly emulate blending. That's gonna put performance in the toilet no matter which way you slice it.

You can get around this, of course, but not without sacrifices.
You still haven't answered why multi-pass HDR rendering requires FP blending. Remember that it was already demonstrated in one of the Radeon 9700 Pro launch demos in 2002, and that chip doesn't support FP blending. Performance was pretty good if I recall. If you need another example of how it's possible today with solid performance, try this.
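
For anyone curious, here is a rough sketch of that kind of multi-pass HDR accumulation without FP blending, assuming two FP16 render-target textures and a 'combine' pixel shader that samples the running total at the pixel's screen position and adds the current pass's contribution. The names and setup are illustrative assumptions, not taken from the 9700 demo.

```cpp
#include <utility>
#include <d3d9.h>

// Accumulate several HDR lighting passes without framebuffer blending by
// ping-ponging between two D3DFMT_A16B16G16R16F render targets: each pass
// reads the previous running total as a texture and does the "blend" in the
// pixel shader, writing the new total to the other target.
void AccumulateHDRPasses(IDirect3DDevice9* dev,
                         IDirect3DTexture9* accum[2],      // ping-pong FP16 targets
                         IDirect3DPixelShader9* combinePS, // adds pass to sampled total
                         int passCount,
                         void (*drawPass)(IDirect3DDevice9*, int))
{
    int read = 0, write = 1;
    for (int pass = 0; pass < passCount; ++pass)
    {
        IDirect3DSurface9* target = NULL;
        accum[write]->GetSurfaceLevel(0, &target);

        dev->SetRenderTarget(0, target);     // write the new running total here
        dev->SetTexture(0, accum[read]);     // previous total as a shader input
        dev->SetPixelShader(combinePS);

        drawPass(dev, pass);                 // render this pass's geometry

        target->Release();
        std::swap(read, write);              // next pass reads what was just written
    }
}
```

The cost of not having FP blending shows up as the per-pass render-target switch and the extra texture read; whether that matters depends on how many passes are being accumulated.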
 
Devs should program and focus their games solely on APIs, not on a specific vendor's hardware.

All these hardware-focused programs just degrade the experience for other users, because developers concentrate their development on a specific set of hardware.

If developers based their games strictly on an API, it would be good for everyone.

Why split the PC market? OK, TWIMTBP games run fine on ATI cards, but it's the principle.
 