New dynamic branching demo

I don't see it as features vs speed..imho NV40 architecture has not sacrificed speed at all (seeing SM2.0 NV40 benchmarks..)
Maybe their SM3.0 implementation is not as fast as it could be..but at least it's here and it works.
 
nAo said:
I don't see it as features vs speed..imho NV40 architecture has not sacrificed speed at all (seeing SM2.0 NV40 benchmarks..)
Maybe their SM3.0 implementation is not as fast as it could be..but at least it's here and it works.

[image: v1600.gif]


If these are in fact correct, then it may be an interesting race.

Looks like the X800 XT is a better buy than the 6800 Ultra, and the GT is better than the Pro.

Funny how I've been saying that for a while now.
 
Re: Demo trying to provide a hack and some FUD

digitalwanderer said:
Proforma said:
I have an ATI 9800 Pro in my machine right now as I type this, but I feel ATI has fallen into the same traps as Nvidia as of this generation.

ATI had Nvidia down for the count, only to basically let them go and regain ground.

ATI didn't have Shader Model 3.0, which isn't an Nvidia feature, it's a DirectX 9 feature. Tons of moronic ATI fanboys think SM3.0 is an Nvidia feature, and since they don't have it currently, they don't need it, and then they try to add in hacks to make it look like they have that feature via SM 2.0.

I don't hate ATI, I own their products, but it makes me angry to see ATI fanboys doing the same things the Nvidia fanboys have done (which is make up crap out of a hack to prove they don't need SM 3.0, when in fact it's a DX 9 feature that should be in ATI video cards to begin with).

It's not about knowledge as much as it's FUD that SM 3 is not needed, which it is, and no hacks will prove otherwise. It's a damn shame that ATI, with its leadership over the years, is letting Nvidia lead in DX 9 technology.

SM 3 is needed for the future and current development of games, and it is NOT an Nvidia-ONLY feature, but a feature that all state-of-the-art, hardcore-marketed video cards should have. Since ATI doesn't want to support SM3.0 until sometime next year, that strategy is a poor one.

All of this crap is why I moved away from Nvidia, and now ATI seems to be doing the same kind of things while Nvidia's products are looking much better in my eyes, and that's pathetic since Nvidia was down for the count and ATI was supposed to be forward thinking.
Fair enough, but you have to remember that the R420 IS just a refresh part for ATi and that they are going to be releasing the R500 a lot sooner than anyone expects. ;)

nVidia's on a new chip design, ATi is on their refresh....next round nVidia will be on their refresh and ATi will be releasing their fresh chip design.

There is a balance to it, patience grasshopper. ;)

This may not be true... Check this out

http://www.myhard.com/image20010518/146255.gif

1) Point number one: Nvidia usually names their refresh parts NVx5, right? Well, now they are naming it the NV48 (which is closer to the NV50 in numbering), and the NV45 is currently a PCI Express version of the NV40.

2) Point number two: check out the roadmap of Nvidia above. You can find the original link on an Asian website. As you can see, the NV48 comes out in the 4th quarter of 2004 and lasts well into 2005, meaning that there won't be a new high-end video chip from Nvidia until fall 2005 (the next true generation).

http://www.myhard.com/news_hard/361696444584820736/20040702/1826468.shtml

Here is the full article as of today with the supposed full roadmap well into next year for Nvidia.

Now this makes a lot of sense, since in late 2005/early 2006 we should start to see betas of DirectX Next (DirectX 10).

So that's one full year from Nvidia with only ONE big hardcore high-end video card.

So while ATI won't put SM 3.0 on their video cards until spring (June 2005), Nvidia will have time to speed things up quite a bit and add in features.

Maybe I am wrong and all of this stuff is made up, but it all makes perfect sense to me. DirectX 9 is lasting around three years or so before the next big leap comes out.

This is why I am angry with ATI: they had Nvidia down for the count, and now Nvidia will probably kick their a$$ again. Nvidia is learning from past mistakes, it seems, and ATI is not being the technology leader it showed itself to be with the Radeon 9700. This has me worried.
 
jvd said:
If these are in fact correct, then it may be an interesting race.

Looks like the X800 XT is a better buy than the 6800 Ultra, and the GT is better than the Pro.

Funny how I've been saying that for a while now.

The Anandtech benchmarks are wrong - they were running without AA applied, hence the large speed deltas. Read this.
 
Also, what happens if?

Nvidia releases drivers that introduce their famous speed-ups, plus an increase in the clock rate of the GPU? ATI won't beat that as easily.

ATI introducing cards with no significant new features, only speed, is not a good thing; it's weak. All Nvidia has to do is clock to higher speeds and improve the speed and efficiency of the drivers, and bam! They are back in the game.

A 5 frames-per-second difference isn't that hard to beat, but not having SM3.0 or 128-bit color isn't going to make ATI any better.

Speed can be added up to a point (à la higher GPU clock rates and driver optimisation), but those hardware features cannot.
 
Re: Demo trying to provide a hack and some FUD

Proforma said:
1) Point number one: Nvidia usually names their refresh parts NVx5, right? Well, now they are naming it the NV48 (which is closer to the NV50 in numbering), and the NV45 is currently a PCI Express version of the NV40.
Uhm, no. The first number has always been their generational one; the nV48 is still gonna be an nV4x series chip.

2) Point number two: check out the roadmap of Nvidia above. You can find the original link on an Asian website. As you can see, the NV48 comes out in the 4th quarter of 2004 and lasts well into 2005, meaning that there won't be a new high-end video chip from Nvidia until fall 2005 (the next true generation).
Yup, that fits in with my thinking... but I think their fall 05 chip will be late by 2-3 months. ;)

So while ATI won't put SM 3.0 on their video cards until spring (June 2005), Nvidia will have time to speed things up quite a bit and add in features.
Why do you think it won't be until June of 2005? That would be awfully late compared to what I'm expecting.

Maybe I am wrong and all of this stuff is made up
That's my thinking. ;)

Proforma said:
Nvidia releases drivers that introduce their famous speed-ups, plus an increase in the clock rate of the GPU? ATI won't beat that as easily.
:LOL:

So nVidia is already pimping another set of "magic" drivers? :LOL:
 
Re: Demo trying to provide a hack and some FUD

Proforma said:
1) Point number one: Nvidia usually names their refresh parts NVx5, right? Well, now they are naming it the NV48 (which is closer to the NV50 in numbering), and the NV45 is currently a PCI Express version of the NV40.

The probability, at the moment, is that DX Next won't be available until 2006/7.

In the last NVIDIA analyst conference they already made note that the low-end NV4x chips would likely last 2-3 years, while there will be another high-end architecture in that time. This fits with the DX Next timescales and says that the next high-end architecture from NVIDIA is likely to be another, ostensibly, SM3.0 part (with presumably a bunch of other enhancements).

As for the "right decision" as to what to support, ATI and NVIDIA are playing two completely different games. After two years of building up their brand and engendering themselves to the gamers, ATI are now chasing the OEM's - the choice to support PS2.0 for another generation, apart from cost reasons (which is a valid point) is a reason why they have managed to churn out 3 ASIC's in time for PCI Express; NVIDIA is in a situation where it need to repair its brand so they have chased the technology rather than chasing PCI Express and they are re-engendering themselves at the high end and to the gamer, to the cost of their OEM business for the time being.

The financials are currently bearing out that ATI's choice was the correct one for their business in the short term, with them seeing their largest revenues, forecasting more for next quarter than NVIDIA have, and basically getting all the Tier-1 OEM positions that use PCI Express. If ATI's choice is going to show issues from a business perspective, it will happen 6 or so months down the line, when the OEMs have more native PCI Express options.
 
digitalwanderer said:
Humus, did ATi give ya a pat-on-the-head or a bonus or anything weird like that for this? (Sorry, the curiosity bug bit me and I had to ask.)

I haven't even shown it internally to anyone yet.
 
trinibwoy said:
I don't believe that Humus intentionally made the optimization ineffective for Nvidia hardware, but I thought the optimization was supposed to be generic? Humus, being the coder, you are in the best position to at least make an informed guess at a reason for the discrepancy. Any ideas?

For one reason or another they just don't do early stencil rejection. I don't know if it's just the driver, or if it's the hardware. Some people claim they had it working, so I guess there's some combination of states that just turns it off, for whatever reason.
Again, I would check it out more in depth if I could.
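
For what it's worth, here is a rough way one could check whether early stencil rejection is actually kicking in on a given card/driver. This is only a sketch under assumed names (an existing GL 2.0 context with a stencil buffer, a deliberately heavy fragment program heavyProgram, and a drawFullscreenQuad() helper), not anything from the demo:

#include <GL/glew.h>
#include <chrono>
#include <cstdio>

// Time a batch of draws whose fragments all fail the stencil test.
// If early stencil rejection works, the expensive shader never runs and
// this is nearly free; if not, the draws cost almost as much as normal
// rendering and only the output is thrown away.
double timeStencilRejectedDraws(GLuint heavyProgram, void (*drawFullscreenQuad)())
{
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NEVER, 0, 0xFF);    // reject every fragment
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glUseProgram(heavyProgram);

    glFinish();                          // drain pending work before timing
    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < 100; ++i)
        drawFullscreenQuad();
    glFinish();
    auto t1 = std::chrono::high_resolution_clock::now();

    glDisable(GL_STENCIL_TEST);
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("100 stencil-rejected quads: %.2f ms\n", ms);
    return ms;
}

Repeating the measurement while toggling other states (stencil write masks, alpha test, and so on) would help narrow down which combination switches the optimization off.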
 
Zeno said:
You mean the same amount of pixel-shader work, right? Obviously there's more vertex work and additional stencil buffer work.

My biggest question about your technique: Do you need one extra pass (sending of scene to card) per 'if' statement that you wish to emulate, or can you do multiple independent 'ifs' with one pass?

Yes, same amount of pixel-shader work, provided that early stencil rejection works. More vertex work indeed, and more stencil buffer work (though the stencil buffer work is probably more or less free). In the general case you'll need as many if-passes as there are paths in the shader. But in many cases you can probably combine many into the same if-statement with some clever tricks, at least when we're talking about nested ifs.
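
To make the pass structure concrete, here is a minimal sketch of that kind of stencil-driven "if" in plain OpenGL. The program handles and the drawScene() callback are hypothetical placeholders, and depth-buffer setup is left to the application; this is an illustration of the idea, not code from the actual demo:

#include <GL/glew.h>

void branchedPass(GLuint condProgram,   // discards where the condition is false
                  GLuint thenProgram,   // the expensive path
                  GLuint elseProgram,   // the cheap path
                  void (*drawScene)())
{
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: evaluate the condition only. Fragments that survive the
    // shader's discard mark stencil = 1; no color or depth is written.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glUseProgram(condProgram);
    drawScene();

    // Pass 2: the "then" branch, restricted to pixels where stencil == 1.
    // With early stencil rejection, no pixel-shader work happens elsewhere.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glUseProgram(thenProgram);
    drawScene();

    // Pass 3: the "else" branch, restricted to pixels where stencil == 0.
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glUseProgram(elseProgram);
    drawScene();

    glDisable(GL_STENCIL_TEST);
}

Each additional path in the shader would add one more pass like passes 2 and 3, keyed to its own stencil value, which is the "as many if-passes as there are paths" point above.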
 
Evildeus said:
If I was paranoid, I would say there's a correlation between the release of the demo and the SM3.0 test @ Anand :LOL:

I didn't even realize this until your post. Maybe that's the reason why so many people got pissed? I wasn't trying to squeeze this in just before that. I didn't even know there was going to be an SM3.0 test @ Anand. The reason was rather that the topic was brought up in another thread, and I had it working at work, and there was a day off yesterday due to Canada Day, so I had time to do a demo of it. Entirely coincidental, but I can understand if it looks like it was intentional.
 
Humus said:
digitalwanderer said:
Humus, did ATi give ya a pat-on-the-head or a bonus or anything weird like that for this? (Sorry, the curiosity bug bit me and I had to ask.)

I haven't even shown it internally to anyone yet.
For some strange reason I have a feeling they're aware of it already. ;)
 
Sigma said:
This is a demo about what? If anything, it is about a stencil hack of a multipass technique to emulate branching for any shader model available on earth (not just PS3.0).

The thing is: it is a stencil hack -> useless. Everyone knows about this, just like everyone knows, for example, that a RenderMan shader can be broken down to run on any hardware with PS1.1 capabilities (not regarding precision, of course).

Since the stencil gets used, it is impossible to have stencil shadows in the scene...

It's not a hack, and it's not useless, and it should work fine with stencil shadows if you're careful.
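
On the stencil-shadow point, one possible way to be "careful" (an assumption for illustration, not necessarily how the demo would do it) is to partition the stencil bits: keep the branch mask in the top bit and leave the lower bits to shadow-volume counting. A rough sketch:

#include <GL/glew.h>

const GLuint BRANCH_BIT  = 0x80;   // branch mask lives in the top bit
const GLuint SHADOW_BITS = 0x7F;   // shadow-volume counting uses the rest

void writeBranchMask(GLuint condProgram, void (*drawScene)())
{
    glEnable(GL_STENCIL_TEST);
    glStencilMask(BRANCH_BIT);                     // only touch our bit
    glStencilFunc(GL_ALWAYS, BRANCH_BIT, BRANCH_BIT);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glUseProgram(condProgram);
    drawScene();
}

void drawThenBranch(GLuint thenProgram, void (*drawScene)())
{
    // Test only the branch bit; the shadow bits are ignored and untouched.
    glStencilMask(0);
    glStencilFunc(GL_EQUAL, BRANCH_BIT, BRANCH_BIT);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glUseProgram(thenProgram);
    drawScene();
}

Shadow volumes would then increment/decrement and test with glStencilMask(SHADOW_BITS), so the two uses never touch each other's bits, at the cost of a smaller shadow counter range.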

Sigma said:
Another thing I would like to know Humus is why did you forget to implement any kind of occlusition to the part where it uses branching. That does shift the results a lot because the stencil is the "occlustion"...

Implement what?
 
Re: Demo trying to provide a hack and some FUD

digitalwanderer said:
Fair enough, but you have to remember that the R420 IS just a refresh part for ATi and that they are going to be releasing the R500 a lot sooner than anyone expects. ;)

What is this R500 you speak of?
 
samker said:
Well, this "dynamic branching" demo is about nothing else than stealing the show to nvidia like Humus already stated because it isn't dynamic branching at all. It's just a method which does a different job but nevertheless gives the same result as dynamic branching for some rare lighting situations without the flexbility of what PS3.0 has to offer. But contrary to SM3.0, "Humus new technique" will not be used in any upcoming game.

It's semantically equivalent to dynamic branching and gives the same performance benefits, so in what sense is this not dynamic branching? Does it have to be expressed as the ASCII letters i and f for it to be valid?

And no, it's not about any "rare lighting situation". Just because I only produced one demo doesn't mean it's the only situation where it would work. It works with any form of if-statement structure, nested or not. It will likely be able to implement 90% of the ps3.0 usage we'll see in the next 6 months.
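
As a rough illustration of the "semantically equivalent" point, here is how a single ps3.0-style branch could map onto the multipass form, with hypothetical GLSL sources (these are not the demo's shaders):

// The SM3.0-style shader being emulated:
//     if (dot(N, L) > 0.0)  color = expensiveLighting();
//     else                  color = ambientOnly();
// split into three fragment shaders, used with the stencil passes described earlier.

static const char* conditionFS =        // pass 1: mark pixels where the branch is taken
    "varying vec3 N, L;\n"
    "void main() {\n"
    "    if (dot(normalize(N), normalize(L)) <= 0.0) discard; // no stencil mark where the condition is false\n"
    "}\n";

static const char* thenBranchFS =       // pass 2: only runs where stencil == 1
    "varying vec3 N, L;\n"
    "void main() { gl_FragColor = vec4(1.0); /* stand-in for the expensive path */ }\n";

static const char* elseBranchFS =       // pass 3: only runs where stencil == 0
    "void main() { gl_FragColor = vec4(0.1); /* stand-in for the cheap path */ }\n";

A nested if just adds more condition passes and more stencil values; the per-pixel result matches the single-pass ps3.0 shader, which is the sense in which it is dynamic branching.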

samker said:
Humus, I must really say that I am deceived of you and the way you're spreading misinformation around different forums. You really should know better.

Are you using a sense of the word "deceived" that I'm not aware of? And wtf are you talking about, misinformation? Put up or shut up.
 
Humus said:
Sigma said:
Another thing I would like to know Humus is why did you forget to implement any kind of occlusition to the part where it uses branching. That does shift the results a lot because the stencil is the "occlustion"...

Implement what?
I thought it was "occlusion", but I may be off base.

And you don't need to take every criticism into account, but I think the way you advertised the demo was not the best one, IMO.
 
Alstrong said:
"nVidia can consider themselves owned
tongue.gif
"


sounds more like a joke to me, especially with the smilie there.:rolleyes:

It took 8 pages, but finally someone notices the smilie. Thank you very much. Now I have hope for mankind.
 
Re: Demo trying to provide a hack and some FUD

Uhm, no. The first number has always been their generational one; the nV48 is still gonna be an nV4x series chip.

That's how it's been previously, but without a new high-end hardcore video card for the next year and without a change to DirectX until DirectX 10 in late 2005/early 2006, this can change, you know.

The problem is that you are sticking to ideas that have been with Nvidia since the beginning, I think, where the NVx0 is always the next-generation video card and the NVx5 is always the refresh, but this can change, like I said.

When is the last time that Nvidia went a year without two new video cards for the hardcore market (one for the next generation and one for the refresh)?

The answer is never, yet if that roadmap is correct, then the NV48 will be the first hardcore-market video chip that's spread over a one-year period, which according to the past would not happen.

So I know what you're thinking, but I think you need to rethink it. I understand your line of thinking, but it may not hold true.

Remember that DirectX used to get a new version every year, and that thinking has changed. DX 8 went to every two years; now, with DX 9, it's three years.

Not everything stays the same; as time goes on, things change, and what structures apply in the video card market today may not apply in the future.
 
Now I have hope for mankind.
I find mankind the best life-form yet also the worst, but that's just me.

When is the last time that Nvidia went a year without two new video cards for the hardcore market (one for the next generation and one for the refresh)?

The answer is never, yet if that roadmap is correct, then the NV48 will be the first hardcore-market video chip that's spread over a one-year period, which according to the past would not happen.

What about NV3x? It left an entire year with no new video card from nVIDIA. If NV5x gets delayed till DirectX Next, then I expect this to be like GeForce 4 vs. Radeon 9700 Pro.
 