Chalnoth said: Perhaps, but the rest aren't.
Well at the moment there is no rest of the line.
Chalnoth said: At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
K.I.L.E.R said: I don't have any games that take advantage of SM 3.0. Where can I buy some now?
AlphaWolf said: Well at the moment there is no rest of the line.
So? We were talking about 6-12 months down the line.
K.I.L.E.R said: Shouldn't they have run simulations of their hardware and the like, maybe from samples, and have them tested in the least favourable conditions?
They probably did, but I read in an interview that the shader compiler was one of the last things they produced, so those early simulations may not have taken real shaders into account; they may just have been testing throughput possibilities. Apparently it wasn't until the compiler was written that it was discovered how hard floating-point performance would be to attain in a real shader. It seems the hardware developers underestimated how challenging it would be to build that compiler.

There was also the issue of not having enough FP units in the NV30-34, and one really has to wonder what happened there. I still think they were counting on Microsoft supporting integer types in PS 2.0. Otherwise, why would they go to the trouble of supporting so many programmability features but limit the performance so drastically? Something was seriously wrong. It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.
Proforma said: Working with TSMC and IBM is worth something, I think. Maybe I am just stupid and don't know much, but I figure they must have some options (i.e., they don't put all their eggs in one basket).
TSMC is insisting that all eggs are in one basket. It is a seller's market right now, and they will not produce a new design that is fabricated elsewhere.
nelg said: TSMC is insisting that all eggs are in one basket. [...]
Which isn't such a huge deal. Remember that they just don't want specific designs fabricated at their competitors' plants (a practice which can't last very long...). nVidia, for instance, could have their low-end parts fabricated at TSMC and their high-end parts at UMC or IBM.
Chalnoth said: They probably did, but I read in an interview that the shader compiler was one of the last things they produced. [...] It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.
You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse. If you don't know, why speculate? Again and again you repeat the mantra of "process problems".
-FUDie
pocketmoon66 said: Found what was hurting NV cards:
changing
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_ZERO);
to
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
(Early stencil kill doesn't work if you're still writing to stencil??)
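A minimal sketch of the two-pass setup this implies (Direct3D 9; hypothetical code, not pocketmoon66's actual source):

// Pass 1: lay down a stencil mask with a cheap shader.
dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
dev->SetRenderState(D3DRS_STENCILREF, 1);
dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);
// ... draw the cheap masking pass ...

// Pass 2: run the expensive shader only where stencil == 1.
dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
// The fix above: KEEP instead of ZERO, so this pass no longer writes
// stencil and the hardware can reject pixels before the pixel shader runs.
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
// ... draw the expensive lighting pass ...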
OK, revised figures using FRAPS at 1280x960:
FALSE: 51 fps
TRUE: ~121 fps
DB PS2 (cmp): 54 fps
DB PS3 (if/then/else): ~65 fps
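For context, here is what the "cmp" versus "if/then/else" variants might look like at the shader level. This is an illustrative sketch only (HLSL embedded as C++ string literals; the names and shader bodies are hypothetical, not the demo's actual code):

// ps_2_0 has no real flow control: a conditional is flattened into a
// select (the asm "cmp" instruction), so both sides are always evaluated.
const char* g_psFlattened =
    "float4 main(float3 n : TEXCOORD0, float3 l : TEXCOORD1) : COLOR {"
    "    float  ndl = dot(normalize(n), normalize(l));"
    "    float4 lit = ndl.xxxx;  /* stand-in for the expensive path */"
    "    return (ndl > 0) ? lit : float4(0, 0, 0, 1);"
    "}";

// ps_3_0 can compile the same condition as a genuine per-pixel branch
// that skips the expensive path entirely, at the cost of the branch
// latency suggested by the figures above.
const char* g_psBranched =
    "float4 main(float3 n : TEXCOORD0, float3 l : TEXCOORD1) : COLOR {"
    "    float ndl = dot(normalize(n), normalize(l));"
    "    [branch] if (ndl > 0) return ndl.xxxx;  /* expensive path */"
    "    return float4(0, 0, 0, 1);              /* cheap path */"
    "}";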
Proforma said: I can't stand how people with intelligence can just say "branching isn't needed, it's a useless feature". Tell that to Tim Sweeney. Saying Shader 3.0 is useless is just <bleep> talk from people who probably think it's an Nvidia feature rather than a feature of DirectX 9.0c, which ATI should have had in the R420 and in their video chip coming by the end of the year.
trinibwoy said: You expect us to believe that you own an Nvidia card but you didn't purchase a game because it didn't have 'shiny water' on ATI cards? Yeah, right.
Well for one, anybody who 'hates' a video card brand is a retard.
And secondly, you are right: developers should provide the same experience during game development. Some will disagree, but I don't exactly see that stance extending to add-on features. Would Crytek have even given us those features if it weren't for Nvidia? Shouldn't Nvidia benefit from their active involvement in getting them out there? If Crytek had wanted displacement mapping in Far Cry at the outset, wouldn't it have been included in the shipping version in PS 2.0 guise?
FUDie said: You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse.
Right. Because I sure didn't say:
Chalnoth said: They just failed to anticipate the performance problems their architecture would have.
Which could be considered f*cking up, couldn't it?
Humus said: Very interesting information. Thanks. Interesting that it's faster than using ps3.0 dynamic branching even on nVidia hardware. I would have guessed it would be about the same performance, but I guess branching indeed is a bit costly.
Well, considering the pixel shader that does the lighting is pretty short in this case, the latency from branching is probably on the order of the length of the shader. The geometry is also quite simple, so this is pretty close to a worst-case scenario for PS 3.0 branching when compared to your algorithm.
Humus said:
Proforma said: I can't stand how people with intelligence can just say "branching isn't needed, it's a useless feature". Tell that to Tim Sweeney. [...]
I don't think the deal is about people saying that ps3.0 is a useless feature. Did anyone? The deal seems to be that some people have a problem with there being a good alternative. Some people just hate to see that this technique is useful.
Proforma said:"Good alternative" is subjective and its not going to be accepted by
anyone with purposes in the industry as a standard way anyway
since its a hack. Since we can't add features to the hardware,
lets add in desperate hacks
I am sure you can also hack in object instancing via 2.0 as well.
Hell, why not do everything with 1.0 shaders. Who needs progress.
I would sure as hell trust Tim Sweeney who makes real software
thats ahead of the curve than someone who is in Canada and works
for ATI and makes pointless demos all the time.
When your demos are on the cutting edge and are like Epic's
actual in game demos, then call me.
Proforma said: Nvidia did screw up, but... [...] Now ATI has been f*cking up as well. Not an excuse. It is what it is. Not that I hate ATI or love Nvidia, but for god's sake, call a spade a spade.
Where did ATI f*ck up? Looks to me like the R420 is doing exactly what it was designed to do.