New dynamic branching demo

Chalnoth said:
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.

Shouldn't they have run simulations of their hardware and the like, maybe from samples and have them tested in the least favourable conditions?
 
3.0

K.I.L.E.R said:
I don't have any games that take advantage of SM 3.0. :(
Where can I buy some now?

Well, currently FarCry is about the only one so far.
I want to use it in some of the stuff that I am working on.
 
K.I.L.E.R said:
Chalnoth said:
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
Shouldn't they have run simulations of their hardware and the like, maybe from samples and have them tested in the least favourable conditions?
They probably did, but I read in an interview that the shader compiler was one of the last things they produced. So those early simulations may not have taken into account real shaders, but were just testing throughput possibilities. It, apparently, wasn't until the compiler was written that it was discovered that floating-point performance was going to be so hard to attain in a real shader. It seems that the hardware developers underestimated how challenging it would be to build that compiler.

There was also the issue of not having enough FP units in the NV30-34, and one really has to wonder what happened there. I still think that they were counting on Microsoft supporting integer types in PS 2.0. Otherwise, why would they go through the trouble of supporting so many programmability features, but limit the performance so drastically? Something was seriously wrong.

It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.

Regardless, it is a mistake that they have corrected, and I have high hopes for the rest of the NV4x lineup.
 
Re: New poster trying to provide some FUD

Proforma said:
Working with TSMC and IBM is worth something, I think. Maybe I am just stupid and don't know much, but I figure they must have some options.

(ie they don't put all their eggs in one basket)
TSMC is insisting that all eggs are in one basket. It is a seller's market right now and they will not produce a new design that is fabricated elsewhere.
LINK
 
Re: New poster trying to provide some FUD

nelg said:
TSMC is insisting that all eggs are in one basket. It is a seller's market right now and they will not produce a new design that is fabricated elsewhere.
LINK
Which isn't such a huge deal. Remember that they just don't want specific designs fabricated at their competitors' plants (a practice which can't last very long....). nVidia, for instance, could have their low-end parts fabricated at TSMC and their high-end parts at UMC or IBM.
 
Chalnoth said:
K.I.L.E.R said:
Chalnoth said:
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
Shouldn't they have run simulations of their hardware and the like, maybe from samples and have them tested in the least favourable conditions?
They probably did, but I read in an interview that the shader compiler was one of the last things they produced. So those early simulations may not have taken into account real shaders, but were just testing throughput possibilities. It, apparently, wasn't until the compiler was written that it was discovered that floating-point performance was going to be so hard to attain in a real shader. It seems that the hardware developers underestimated how challenging it would be to build that compiler. There was also the issue of not having enough FP units in the NV30-34, and one really has to wonder what happened there. I still think that they were counting on Microsoft supporting integer types in PS 2.0. Otherwise, why would they go through the trouble of supporting so many programmability features, but limit the performance so drastically? Something was seriously wrong.
You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse. :rolleyes:
It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.
If you don't know, why speculate? Again and again you repeat the mantra "process problems".

-FUDie
 
pocketmoon66 said:
Found what was hurting NV cards :

changing
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_ZERO);
to
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);

(Early stencil kill doesn't work if you're still writing to stencil??)


OK revised figures using FRAPS

1280x960 (fps):
FALSE: 51
TRUE: 121 ish
DB PS2 (cmp): 54
DB PS3 (if then else): 65 ish

Very interesting information. Thanks. :)
Interesting that it's faster than using ps3.0 dynamic branching even on nVidia hardware. I would have guessed it would be about the same performance, but I guess branching indeed is a bit costly.
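
For anyone who wants to try the same thing, here is a minimal sketch of the kind of two-pass stencil masking being discussed. This is not the demo's actual source; it assumes dev is an IDirect3DDevice9* and that a cheap "condition" shader and the full lighting shader already exist, so only the render states are shown.

// Clear stencil to 0 beforehand (e.g. as part of the frame's Clear call).

// Pass 1: evaluate the branch condition with the cheap shader and mark the
// passing pixels in the stencil buffer. Colour writes are off, and pixels
// that fail the condition are killed in the shader (texkill/clip), so they
// never reach the stencil write.
dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
dev->SetRenderState(D3DRS_STENCILREF, 1);
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);
// ... draw the geometry with the condition shader ...

// Pass 2: run the expensive lighting shader only where stencil == 1.
dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0000000F);
dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
// Leave the stencil buffer untouched in this pass. Writing to it (e.g. the
// original D3DSTENCILOP_ZERO) is what appears to defeat early stencil
// rejection, per the fix above.
dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
dev->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
dev->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
// ... draw the geometry again with the full lighting shader ...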
 
Re: programming

Proforma said:
I can't stand how people with intelligence can just say "branching isn't needed, it's a useless feature". Tell that to Tim Sweeney.

Saying shader 3.0 is useless is just <bleep> talk from someone who probably thinks it's an Nvidia feature and not a feature of DirectX 9.0c, which ATI should have had in the R420 and their video chip by the end of the year.

I don't think the deal is about people saying that ps3.0 is a useless feature. Did anyone? The deal seems to be that some people have a problem with there being a good alternative. Some people just hate to see that this technique is useful.
 
trinibwoy said:
You expect us to believe that you own an Nvidia card but you didn't purchase a game because it didn't have 'shiny water' on ATI cards? Yeah right.

I wasn't too clear. I meant they lost my $50 for their next games. (because of only coding for nVidia cards and because NwN blew ;)

Well for one, anybody who 'hates' a video card brand is a retard.
And secondly, you are right, the developers should provide the same experience during game development. There are some that will disagree, but I don't exactly see that stance extending to add-on features. Would Crytek have even given us those features if it wasn't for Nvidia? Shouldn't Nvidia benefit from their active involvement in getting them out there? If Crytek wanted displacement mapping in Far Cry at the outset, wouldn't it be included in the shipping version in PS2.0 guise?

Yes, I think it extends to any content that comes in official patches for the game. For the same reason that games released on PC should be released on consoles and vice-versa. What's the point for console fans not wanting their precious games on the PC? What's the point for nvidia fans not wanting their precious offset mapping working on ATi cards? What's the point for ATi fans not wanting their precious HDR effects to work on nvidia (NV3x) cards?

But hey, that's my opinion, not trying to force it down anyone's throat. Just don't complain next time the problem affects you. :p
 
FUDie said:
You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse. :rolleyes:
Right. Because I sure didn't say:
Chalnoth said:
They just failed to anticipate the performance problems their architecture would have.
Which could be considered f*cking up, couldn't it?

I just don't think it's a big deal. Mistakes are made to be fixed. It has been fixed, so I don't care.
 
Humus said:
Very interesting information. Thanks. :)
Interesting that it's faster than using ps3.0 dynamic branching even on nVidia hardware. I would have guessed it would be about the same performance, but I guess branching indeed is a bit costly.
Well, considering the pixel shader that does the lighting is pretty short in this case, the latency from branching is probably on the order of the length of the shader. The geometry is also quite simple, so this is pretty close to a worst-case scenario for PS3 when compared to your algorithm.

If you instead translate the algorithm to a real game with more advanced shaders, and much more complex geometry, I would expect the performance to shift back in favor of PS3.
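
As a rough, deliberately made-up illustration of that trade-off: dynamic branching only wins when the work it skips outweighs the branch's own overhead. A toy C++ calculation (all numbers are assumptions for illustration, not measurements of any hardware):

#include <cstdio>

int main()
{
    // Illustrative assumptions only, not measured values.
    const float shaderLength   = 10.0f;  // instructions in a short lighting shader
    const float branchOverhead = 10.0f;  // per-pixel cost of the dynamic branch
    const float fracSkipped    = 0.5f;   // fraction of pixels taking the cheap path

    // Without a branch, every pixel runs the whole shader.
    float costWithoutBranch = shaderLength;
    // With a branch, every pixel pays the overhead, but skipped pixels avoid the body.
    float costWithBranch = branchOverhead + (1.0f - fracSkipped) * shaderLength;

    std::printf("no branch: %.1f  with branch: %.1f (arbitrary units per pixel)\n",
                costWithoutBranch, costWithBranch);
    // With a 10-instruction shader the branch loses (10 vs 15); make the shader
    // 100 instructions and it wins easily (100 vs 60), which is the point above.
    return 0;
}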
 
Re: programming

Humus said:
Proforma said:
I can't stand how people with intelligence can just say "branching isn't needed, it's a useless feature". Tell that to Tim Sweeney.

Saying shader 3.0 is useless is just <bleep> talk from someone who probably thinks it's an Nvidia feature and not a feature of DirectX 9.0c, which ATI should have had in the R420 and their video chip by the end of the year.

I don't think the deal is about people saying that ps3.0 is a useless feature. Did anyone? The deal seems to be that some people have a problem with there being a good alternative. Some people just hate to see that this technique is useful.

"Good alternative" is subjective and its not going to be accepted by
anyone with purposes in the industry as a standard way anyway
since its a hack. Since we can't add features to the hardware,
lets add in desperate hacks

I am sure you can also hack in object instancing via 2.0 as well.
Hell, why not do everything with 1.0 shaders. Who needs progress.

I would sure as hell trust Tim Sweeney who makes real software
thats ahead of the curve than someone who is in Canada and works
for ATI and makes pointless demos all the time.

When your demos are on the cutting edge and are like Epic's
actual in game demos, then call me.
 
call a spade a spade

FUDie said:
Chalnoth said:
K.I.L.E.R said:
Chalnoth said:
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
Shouldn't they have run simulations of their hardware and the like, maybe from samples and have them tested in the least favourable conditions?
They probably did, but I read in an interview that the shader compiler was one of the last things they produced. So those early simulations may not have taken into account real shaders, but were just testing throughput possibilities. It, apparently, wasn't until the compiler was written that it was discovered that floating-point performance was going to be so hard to attain in a real shader. It seems that the hardware developers underestimated how challenging it would be to build that compiler. There was also the issue of not having enough FP units in the NV30-34, and one really has to wonder what happened there. I still think that they were counting on Microsoft supporting integer types in PS 2.0. Otherwise, why would they go through the trouble of supporting so many programmability features, but limit the performance so drastically? Something was seriously wrong.
You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse. :rolleyes:
It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.
If you don't know, why speculate? Again and again you repeat the mantra "process problems".

-FUDie

Nvidia did screw up, but...

Now ATI has been f*cking up as well. Not an excuse.
It is what it is.

Not that I hate ATI or love Nvidia, but for God's sake, call a spade a spade.
 
Re: programming

Proforma said:
"Good alternative" is subjective and its not going to be accepted by
anyone with purposes in the industry as a standard way anyway
since its a hack. Since we can't add features to the hardware,
lets add in desperate hacks

I am sure you can also hack in object instancing via 2.0 as well.
Hell, why not do everything with 1.0 shaders. Who needs progress.

I would sure as hell trust Tim Sweeney who makes real software
thats ahead of the curve than someone who is in Canada and works
for ATI and makes pointless demos all the time.

When your demos are on the cutting edge and are like Epic's
actual in game demos, then call me.

This technique is useful. Just get over it. You only look like a fool with that lame attitude.
 
All righty... you are a buffoon. You really don't have a clue, do you? Though you seem to drop names and act like you know something... but all you have really said, and it is CRAPPING on this thread, is "ATI is bad for not having SM3.0, and Humus is a hack who works for said IHV, and so must be bad too." Time for you to step back into your hole, troll, or stay on topic. Digi had it best: go to nvnews, but they are too good for you too. Best to just go away. :rolleyes:
"...anyone with purposes in the industry...", "...than someone who is in Canada...", "...then call me."

You're a fucking moron... ;)
 
Re: call a spade a spade

Proforma said:
FUDie said:
Chalnoth said:
K.I.L.E.R said:
Chalnoth said:
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
Shouldn't they have run simulations of their hardware and the like, maybe from samples and have them tested in the least favourable conditions?
They probably did, but I read in an interview that the shader compiler was one of the last things they produced. So those early simulations may not have taken into account real shaders, but were just testing throughput possibilities. It, apparently, wasn't until the compiler was written that it was discovered that floating-point performance was going to be so hard to attain in a real shader. It seems that the hardware developers underestimated how challenging it would be to build that compiler. There was also the issue of not having enough FP units in the NV30-34, and one really has to wonder what happened there. I still think that they were counting on Microsoft supporting integer types in PS 2.0. Otherwise, why would they go through the trouble of supporting so many programmability features, but limit the performance so drastically? Something was seriously wrong.
You can't just admit that NVIDIA f*cked up, can you? Always gotta have some excuse. :rolleyes:
It may have been process problems that caused a change in the final design too late to salvage properly, or any number of other reasons. I don't really know.
If you don't know, why speculate? Again and again you repeat the mantra "process problems".
Nvidia did screw up, but...

Now ATI has been f*cking up as well. Not an excuse.
It is what it is.

Not that I hate ATI or love Nvidia, but for God's sake, call a spade a spade.
Where did ATI f*ck up? It looks to me like the R420 is doing exactly what it was designed to do.

-FUDie
 