Devs Speak on PS3.0

Princess_Frosty said:
Keeping this thread off topic isn't helping; threatening to post things that I've posted in other forums is also trolling, and generally carrying on about it after the person has said they will no longer comment is just as bad.

Trolling is as trolling does.

Anyway, you can't escape it: Nvidia supports 3.0 shaders and ATI won't (as far as we all know so far). If you want to run games using these features then it will have to be Nvidia until ATI releases something better.

What features??? The next level of features will require significantly longer shader streams than either the R420 or NV40 can handle. When you are doing an average of 100 instructions per shader, these cards will crawl. Currently most of the shaders in use are in the 10-20 instruction range. Short and simple.

The support for VS 3.0 and PS 3.0 will be useful for developers to prototype future engines and effects, but will have minimal to no impact on the actual consumer space for the next couple of years.

The reason we got onto the whole TWIMTBP and GITG track is that the only reason anyone thinks 3.0 shaders are going to have an impact is marketing done by nVidia through their TWIMTBP slaves. These developer comments need to be taken with a large lump of salt because of the marketing influence of the TWIMTBP program.

The performance reality is that to truly take advantage of SM 3.0, you need cards with significantly more performance than will be available in either the R420 or NV40. You may see demos or segments of scenes that utilize the 3.0 shader model, but these examples will not be practical in a game environment with numerous characters, backgrounds, AI, physics, etc.

So if you are a developer working on a next-generation engine, it is to your benefit to have an SM 3.0 card to prototype with. If you are a consumer, it really doesn't matter, because by the time games come out that actually use SM 3.0 effects (and not just to pimp their money men), the current generation of hardware will be less than budget-level cards.

Aaron Spink
speaking for myself inc.
 
jimmyjames123 said:
How about explaining in detail what the advantages of PS 3.0 vs PS 2.0 are?
* Longer programs (512 instruction minimum)
* Dynamic flow control
* Access to vFace and vPos.xy
* And the ever-complicated centroid interpolation...

Oh wait... you weren't even talking to me... ;)
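
To make those bullets concrete, here is a minimal ps_3_0 fragment (entirely hypothetical; only the VFACE/VPOS semantics and the _centroid modifier come from the SM 3.0 spec, the rest is invented for illustration):

    sampler diffuseMap;

    float4 main(float2 uv       : TEXCOORD0_centroid, // centroid interpolation
                float  face     : VFACE,              // sign gives facing: >0 front, <0 back
                float2 pixelPos : VPOS) : COLOR       // window-space pixel coordinates
    {
        float4 color = tex2D(diffuseMap, uv);

        // Dynamic flow control: the condition is per-pixel data, which
        // PS 2.0 cannot branch on (it would have to evaluate both paths).
        if (face < 0)
            color.rgb *= 0.5; // e.g. darken back faces of two-sided geometry

        return color;
    }

None of which says anything about how fast the hardware actually runs it, of course.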
 
jimmyjames123 said:
What a ridiculous reply! Care to expand more, or are you just angry tonight? How about explaining in detail what the advantages of PS 3.0 vs PS 2.0 are?

The move from PS 2.0 to PS 3.0 adds partial abilities to write actual programs; it still isn't close to being Turing complete. Basically the only main advantages are a higher instruction count baseline and theoretical support for branching. The branching will be mainly useful for prototyping engines for future hardware and shading models, and may see some limited use in reducing the number of shaders that are downloaded onto the hardware (see the sketch below), which may or may not result in any performance difference, depending on the architecture of said hardware.
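
As a sketch of that shader-count point (all names invented): one ps_3_0 shader with a branch on a constant can stand in for two separate ps_2_0 shaders the engine would otherwise swap between:

    sampler baseMap;
    sampler shadowMap;
    bool useShadows; // set per-object by the (hypothetical) engine

    float4 main(float2 uv : TEXCOORD0, float4 shadowUV : TEXCOORD1) : COLOR
    {
        float4 color = tex2D(baseMap, uv);
        if (useShadows) // under plain ps_2_0 this would be a second shader
            color.rgb *= tex2Dproj(shadowMap, shadowUV).r;
        return color;
    }

Whether avoiding the shader switch wins anything back is, as said, entirely down to the hardware.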

From what I have read, there is a gain in efficiency when using PS 3.0 vs PS 2.0.

Yes, marketing works. I'm not sure where this efficiency gain from PS 3.0 vs PS 2.0 is coming from. Can you identify it? Or are you just relaying what came out of the mouths of babes?


In other words, the game can be programmed to run more efficiently with a given set of effects using PS 3.0 vs PS 2.0. If this were not the case, then it would be completely pointless for developers like Crytek to even bother to code add-ons for PS 3.0.

Who says that Crytek is even coding an add-on for PS 3.0??? So far there has been no proof of anything except them cross-compiling their shader library to PS 3.0. We haven't seen any new features or other reasons to suspect that specific features which require or rely on PS 3.0 have been or will be added to FarCry. Again, marketing works, but don't be a sponge to it: distrust what you hear (they are lying to you; the only question is what exactly they are lying about).


Aaron Spink
speaking for myself inc.
 
micron said:
jimmyjames123 said:
How about explaining in detail what the advantages of PS 3.0 vs PS 2.0 are?
* Longer programs (512 instruction minimum)
* Dynamic flow control
* Access to vFace and vPos.xy
* And the ever-complicated centroid interpolation...

Oh wait... you weren't even talking to me... ;)

Pretty sure the only two listed that are actually exclusive to PS 3.0 are the vFace and vPos.xy access and the longer programs. Both flow control and centroid sampling are/will be available through PS 2.0 plus the extended PS 2.0 profiles/options.
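
(For reference, the DX9 HLSL compiler already exposes those extended 2.0 profiles as compile targets; the file and entry-point names here are made up:

    fxc /T ps_2_a /E main shader.hlsl    (NV3x-class caps: predication, extra temps)
    fxc /T ps_2_b /E main shader.hlsl    (R420-class caps: 512 instructions, 32 temps)
    fxc /T ps_3_0 /E main shader.hlsl    (full SM 3.0: dynamic branching, vFace/vPos)
)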

Aaron Spink
speaking for myself inc.
 
aaronspink said:
Pretty sure the only two listed that are actually exclusive to PS 3.0 are the vFace and vPos.xy access and the longer programs. Both flow control and centroid sampling are/will be available through PS 2.0 plus the extended PS 2.0 profiles/options.
ps_2_b?

*edit*
We're waiting for DX 9.0c to get dynamic flow control, aren't we?
 
jimmyjames123 said:
What a ridiculous statement. It's completely dependent on what the application is doing.

What a ridiculous reply! Care to expand more, or are you just angry tonight? How about explaining in detail what the advantages of PS 3.0 vs PS 2.0 are?

From what I have read, there is a gain in efficiency when using PS 3.0 vs PS 2.0. In other words, the game can be programmed to run more efficiently with a given set of effects using PS 3.0 vs PS 2.0. If this were not the case, then it would be completely pointless for developers like Crytek to even bother to code add-ons for PS 3.0.
As I said, it's completely dependent on what the application is doing. How much more efficient is "mad r3, r2, r1, r0" under PS 3.0 compared to PS 2.0?
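
For instance (a toy, with invented names), this compiles to the same single mad under either profile; nothing about the instruction gets cheaper:

    float4 main(float4 a : TEXCOORD0,
                float4 b : TEXCOORD1,
                float4 c : TEXCOORD2) : COLOR
    {
        return a * b + c; // one mad instruction, ps_2_0 or ps_3_0 alike
    }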

-FUDie
 
aaronspink said:
Can you think of any reason EA would specifically disable features for non-nVidia products, even when, through external third-party programs, you can enable those features and they work fine, still with higher performance?
I can't. But still, your scenario doesn't make sense. Like I said before, you may be right about those companies and their marketing tactics, but your explanations just don't fit. The world is full of theories that make the right predictions while postulating nonexistent phenomena :)
Maybe you're even right in some cases, but the facts are never so simple, IMHO.
 
Princess_Frosty said:
The problem came from ATI meeting the minimum requirements for the time, which paid off: running at 24-bit was a lifesaver, gave them better performance, and gave them the edge for PS 2.0. However, the time will come when they will HAVE to run at 32-bit, and IF they want to stay competitive they will need to support multiple precision modes, otherwise the frame rates will be abysmal. All this stuff that Nvidia has been doing, the multiple precisions that a LOT of people frowned upon, ATI will also have to do. I'm not saying, however, that Nvidia handled it in an acceptable way; that's down to your own judgement.

Where do you come up with this stuff?

Look at NV40 as an example: it is not always faster when using PP compared to using full precision.
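
For what it's worth, in D3D9 HLSL the whole distinction is just half vs. float; the compiler turns half into _pp modifiers. A hypothetical fragment:

    sampler baseMap;

    half4 main(float2 uv : TEXCOORD0) : COLOR
    {
        // 'half' requests partial precision (FP16 on NV hardware);
        // R3x0-class parts ignore the hint and run everything at FP24.
        half4 color = tex2D(baseMap, uv);
        color.rgb *= 0.5;
        return color;
    }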
 
StealthHawk said:
Princess_Frosty said:
The problem came from ATI meeting the minimum requirements for the time, which paid off: running at 24-bit was a lifesaver, gave them better performance, and gave them the edge for PS 2.0. However, the time will come when they will HAVE to run at 32-bit, and IF they want to stay competitive they will need to support multiple precision modes, otherwise the frame rates will be abysmal. All this stuff that Nvidia has been doing, the multiple precisions that a LOT of people frowned upon, ATI will also have to do. I'm not saying, however, that Nvidia handled it in an acceptable way; that's down to your own judgement.

Where do you come up with this stuff?

Look at NV40 as an example: it is not always faster when using PP compared to using full precision.

For the most part you'd have to say it is, though. Higher precision is in itself less register-friendly on the NV40; judging from most of the tests we've seen, the NV40 still has quite a bit to gain from FP16 usage. Of course, this is all my opinion ;)

Though the NV40 doesn't seem as fickle when it comes to instruction order etc. as the NV35/NV30 is.
 
ChrisRay said:
For the most part you'd have to say it is, though. Higher precision is in itself less register-friendly on the NV40; judging from most of the tests we've seen, the NV40 still has quite a bit to gain from FP16 usage. Of course, this is all my opinion ;)

How so? It seems split about 50/50, I'd say. And from what I've seen I wouldn't say that Princess Frosty is correct in saying that full precision is too slow: certainly it is on NV3x, but not on NV40. And when ATI goes to FP32, who's to say how it will perform?

I'm not saying that partial precision is useless and there are no benefits to it, but I am contesting that it is a strict requirement for getting good performance, as Princess Frosty is saying.

The whole post is just classic. "R420 won't support SM3...or so the rumors say. I don't believe the performance numbers because I haven't seen them" Uh....
 
StealthHawk said:
ChrisRay said:
For the most part you'd have to say it is, though. Higher precision is in itself less register-friendly on the NV40; judging from most of the tests we've seen, the NV40 still has quite a bit to gain from FP16 usage. Of course, this is all my opinion ;)

How so? It seems split about 50/50, I'd say. And from what I've seen I wouldn't say that Princess Frosty is correct in saying that full precision is too slow: certainly it is on NV3x, but not on NV40. And when ATI goes to FP32, who's to say how it will perform?

I'm not saying that partial precision is useless and there are no benefits to it, but I am contesting that it is a strict requirement for getting good performance, as Princess Frosty is saying.

The whole post is just classic. "R420 won't support SM3...or so the rumors say. I don't believe the performance numbers because I haven't seen them" Uh....


Maybe in short shaders that'd probably be the case. From my understanding of the NV40 architecture, it starts off at 32-bit precision internally and then filters down.

But in longer shaders it definitely seems to benefit from lower precision, probably due to the bandwidth requirements of FP32.

That being said, Nvidia is still pushing FP16 on the NV40; why would they do that if the card did not stand to benefit from it? Most of the pixel shader tests we've seen seem to just test the pixel output (fillrate, I guess) of shaders and aren't really pushing long shaders, so the card probably wouldn't benefit from it in such environments.

In regards to the R420 thing, I have no idea. I'm not supporting the idea that the R420 is slower or anything. I just think the NV40 still stands to benefit from FP16 precision ;)
 
aaronspink said:
What features??? The next level of features will require significantly longer shader streams than either the R420 or NV40 can handle. When you are doing an average of 100 instructions per shader, these cards will crawl. Currently most of the shaders in use are in the 10-20 instruction range. Short and simple.

How do you know this?

I bought a GF4, which was the second generation of cards to support programmable pixel and vertex shaders. A year and a half later, it's running Far Cry at reasonable speed.

I would expect the same with NV40 and PS3.0 VS3.0.

I think we can all agree that the NV40 is going to be the first card on which the 3.0 generation of programmability features is fully supported. We can also agree that the industry as a whole is moving to 3.0-level functionality. It stands to reason that developers would baseline their 3.0 code on the functionality available in the first supporting hardware that has a wide installed base. The most reasonable prediction thus seems to be that, if you were to keep the card that long, the NV40's 3.0 functionality would be useful when 3.0 games come out, as they certainly will.

The combination of 3.0-level programmability, 32-bit precision, and PCI Express is very exciting for those of us hoping to try using GPUs as SIMD processors. Since I'm a materials scientist I'm most interested in writing molecular simulation code to run on NV40/R500-type hardware, but I'd imagine that there are a lot of other applications that would benefit from the massive number-crunching ability.
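
As a sketch of the idea (everything here is invented): store your data in a float texture, draw one full-screen quad, and every pixel becomes one SIMD lane:

    sampler positions; // FP32 texture, one particle position per texel
    float4 delta;      // uniform: per-timestep displacement, computed on the CPU

    // Each rendered pixel updates one particle; the render target becomes
    // the position texture for the next pass (render-to-texture ping-pong).
    float4 main(float2 texel : TEXCOORD0) : COLOR
    {
        float4 p = tex2D(positions, texel);
        return p + delta; // toy update rule, purely to show the data-parallel pattern
    }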

GPU powered vehicle for the DARPA Grand Challenge race in 2006 anyone? ;)
 
There are eleventy million quotes on these boards from ATI staff, including at least one person closely associated with the design of R300, stating that FP32 is not a speed thing in their architecture and that therefore _pp is totally uninteresting to them. They have repeatedly stated that it is a transistor-budget thing, and an assessment of where/when in the history of 3D 24-bit "won't be enough". Not spending transistors there allows for spending them on other nifty things.

Some NV types seem constitutionally unable to grasp the idea that the NV architecture is not a physical law of the universe, and that therefore the challenges ATI must overcome are not necessarily the same as the ones NV must overcome in any given situation.

My apologies to the 99% of you who already knew all of the above, at far greater detail than I do.
 
Who says that Crytek is even coding an add-on for PS 3.0???

Crytek already did code a PS 3.0 add-on for FarCry! They have stated in public that it took them 3 weeks to do so. There are also at least a dozen titles already lined up to have PS 3.0 support. Whether this will result in tangible gains or not, who knows.

Again, marketing works, but don't be a sponge to it: distrust what you hear

When ATI comes out with full PS 3.0 support (and trust me, they eventually will), are you still going to say that this is all marketing? ;)
 
As I said, it's completely dependent on what the application is doing.

And as I said, apparently certain effects can be coded more efficiently for PS 3.0 than for PS 2.0. Why else would any developers even be interested in embracing PS 3.0 at all?
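
For example (an invented case), a ps_3_0 shader can loop over all the lights in one pass, where a ps_2_0 path might burn a whole render pass, or a separately compiled shader, per light:

    sampler baseMap;
    int    numLights;      // set by the hypothetical engine, up to 8 here
    float3 lightDir[8];
    float3 lightColor[8];

    float4 main(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
    {
        float3 n = normalize(normal);
        float3 lit = 0;
        for (int i = 0; i < numLights; i++) // dynamic loop count: ps_3_0 only
            lit += saturate(dot(n, -lightDir[i])) * lightColor[i];
        return tex2D(baseMap, uv) * float4(lit, 1);
    }

Fewer passes means less vertex work and framebuffer traffic, which is presumably where the claimed efficiency would come from.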
 
Y'know, I always figured that NV's inability to create a demo that would show the limitations of FP24 in a clear way was proof that it was impossible to create one that would run at acceptable framerates on current NV hardware. Making a slideshow demo would only prove ATI's point.

But that was then -- "current NV hardware" (i.e. NV40) is a hell of a lot brawnier than it was not so long ago. It will be very interesting to see if they can do one now. Early days yet -- but I wonder if it is on the "to do" list?

My main worry about ATI not supporting PS 3.0 in the R420, should the rumors prove true, is not for today or tomorrow or next week. It is for six months from now and a year from now. I well remember ATI representatives in DX8 days loudly bemoaning the fact that a significant portion of their then-current reputation for crappy drivers was in reality due to developers exclusively using NV hardware and DX8 drivers to do their development with. This, according to ATI, resulted in NV's bugs and NV's implementations of ambiguous parts of the standard having a significant negative impact on ATI's products when games developed around NV ran on ATI hardware.

I consider it not in the least a coincidence that ATI's reputation for driver quality began to improve, and NV's to decline, when the shoe went on the other foot with DX9. This, of course, was not the only reason (NV's aggressive "optimizations", ATI's CATALYST program) -- but it must be a contributor.

So, now it appears that ATI may be about to invite a recurrence of the DX8 situation with PS 3.0, to their own detriment and the detriment of future ATI owners. Certainly the lesson is too fresh for them to have forgotten it already. What I don't know is to what degree they felt forced into it by other considerations, or to what degree they feel the move from PS 2.0 to PS 3.0 is not so great from a standards/programming point of view, leaving less room for it to bite them and their customers on the ass down the road.
 
So, now it appears that ATI may be about to invite a recurrence of the DX8 situation with PS 3.0, to their own detriment and the detriment of future ATI owners.

So do you think more people are going to buy an R420 variant (before the PS 3.0 ATI cards come out, which probably won't be long) than people who bought an FX, which is going to get worse and worse as developers add more PS 3.0 code AND 2.0 code for the R420? All while NV most likely isn't going to be babying it around with replacement shaders?
 
I have heard rumors that the R500 may be coming to market earlier than expected. I wonder if this is because ATI wants to have a card out as soon as possible that has full support for Shader Model 3.0.

NVDA was definitely much smarter this time around in designing their chip. Finally, developers are beginning to embrace their technology, instead of steering away from it.
 
reever said:
So, now it appears that ATI may be about to invite a recurrence of the DX8 situation with PS 3.0, to their own detriment and the detriment of future ATI owners.

So do you think more people are going to buy an R420 variant (before the PS 3.0 ATI cards come out, which probably won't be long) than people who bought an FX, which is going to get worse and worse as developers add more PS 3.0 code AND 2.0 code for the R420? All while NV most likely isn't going to be babying it around with replacement shaders?

The problem, if one develops, will be for people who buy the R500 and then find that its PS 3.0 drivers are what they will call (very loudly, whether fairly or not) "buggy". Some portion of that will be because all the development for PS 3.0 up to that point will have been done on NV40 hardware and drivers. Some will be the months of experience with PS 3.0 drivers that ATI will be ceding to NV, and some will be that, rightly or wrongly, the NV40 implementation will become the de facto standard that developers program around. Some NV "bugs" will become standard, and owners of the ATI R500 will bitch that ATI's correct implementation is the "bug" since "it worked fine" on NV40.
 
Malfunction said:
...
Something must be great about it, or else why come out with it? If R500 is to have it, what's wrong with early adoption from nVidia if they already have a use for it now?

Absolutely nothing, provided the nV40 implementation of ps3.0 turns out to be useful, and that's the real question, isn't it? It's similar to the "very early adopters" of ps2.0 cards who bought 9700 Pros back in Sept. '02, as I did. It turns out that those people made wise purchases, because nV30's implementation of ps2.0 was grossly inferior to R300's. So it's going to depend on the efficacy of the nV40 ps3.0 implementation. We can discuss the position of ps3.0 in the API all we like, and its theoretical benefits, but short of a decent ps3.0 implementation in silicon, there's nothing to do *but* discuss it theoretically...;) As to nV40's ps3.0 implementation, we don't know enough about it at this point to make that call, it seems to me.

Besides, R500 is PCI-Express, unless ATi decides to build another AGP card, though I thought R420 was it?

I think all ATi is waiting on there is for PCIe motherboard manufacturers to start churning them out, as well as for Intel and the other chipset manufacturers to finish up their core-logic chipset support for PCIe, etc. R4x0 will be available in PCIe (as well as AGP) when the time comes that people are buying PCIe motherboards, from what I understand.

AGP owners will have an extended life with their SM 3.0 nVidia cards due to just that, SM 3.0 support.

Yes, if the implementation is a good one. But ps2.0 "support" in nV3x didn't do much for the "longevity" of nV3x, right? So it will all depend on the efficacy of the nV40 implementation of ps3.0.

I'd be pissed about the HL2 coupon if that is why I got an ATi product. I'd also be pissed about my $399 or $499 ATi product being replaced by a similarly priced R500 in 8 months because it will then offer SM 3.0 support.

So you'd be pissed that when HL2 comes out you get it free with the coupon? I mean, it isn't as if HL2 coupon owners will be forced to wait longer than anybody else to get the game when it ships. And, people who are overly concerned about buying a card today because of what *might* be shipping in eight months (they think) don't have to buy anything today, do they? So I don't see any reason for anyone to be "pissed" about anything you've mentioned here. Indeed, "8 months" (even 8 theoretical months) seems better to me than the "six-month product cycle" nVidia used to boast about a couple of years ago.

After a $399 or $499 purchase, I'm not so sure I'd wanna jump so quickly to a PCI-E native mobo/videocard, plus new memory, though... that might just be my way of thinking. :D

If that were my way of thinking I'd just put off buying a card, any card, until I could go PCIe. But if I did that, then I surely wouldn't be interested in buying a 3D card which uses an AGP bridge chip to function on the PCIe bus, because you wouldn't be getting the full PCIe functionality.
 