New dynamic branching demo

Re: It is, what it is.

pat777 said:
Add 3x more detail to those models and watch those framerates fall.

Detail isn't the issue, not if they are part of the same mesh - in fact, this would serve BRiT's point more. The batch overhead issue is about the number of CPU cycles wasted just processing more individual objects: the smaller the objects are, the more overhead there is in processing them. So the problem occurs when you have lots of small "batches" of objects - you can end up wasting more time switching between objects than actually processing them in the vertex shader.
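
To make that concrete, here's a rough C++/OpenGL sketch of the difference (hypothetical types and names, nothing from an actual renderer):

Code:
#include <GL/gl.h>
#include <vector>

struct Object {
    GLuint texture;           // per-object render state
    GLsizei indexCount;
    const GLushort *indices;  // client-side index array, for brevity
};

// Many small batches: every object pays the per-draw-call CPU cost in the
// driver (validation, state flush), no matter how few vertices it holds.
void drawUnbatched(const std::vector<Object> &objects) {
    for (const Object &obj : objects) {
        glBindTexture(GL_TEXTURE_2D, obj.texture);  // state change per object
        glDrawElements(GL_TRIANGLES, obj.indexCount,
                       GL_UNSIGNED_SHORT, obj.indices);
    }
}

// One merged mesh: the vertex shader processes exactly as many vertices,
// but the per-call overhead is paid only once.
void drawBatched(GLuint sharedTexture, GLsizei totalIndexCount,
                 const GLushort *mergedIndices) {
    glBindTexture(GL_TEXTURE_2D, sharedTexture);
    glDrawElements(GL_TRIANGLES, totalIndexCount,
                   GL_UNSIGNED_SHORT, mergedIndices);
}

Same triangles either way; the difference is how many times you cross the API per frame.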
 
Re: It is, what it is.

Proforma said:
<snip>Long rant</snip>

If you want a 6800, just go ahead and buy one and be happy instead of crapping up this thread.
If you don't want a 6800, then be happy that there's an alternative that enables some of its features on other hardware, while being fully compatible with nVidia's hardware.

What I don't get is in what way this technique made the world a worse place to live in. More choice is always a good thing. At worst this technique didn't offer you anything, but it's not like everything went to hell because of it. If you don't like this technique, then don't use it.
 
Chalnoth said:
Well, he was kinda asking for it with the statement he posted on his website, no matter how joking he may have been when he wrote it.

This is what I don't get. How the heck can anyone get so friggin upset by that line? Is it because it was me, and not the regular fanboy who wrote it? It's not like we don't see people taking cheap shots at each other's favourite IHV constantly on this forum.
 
jvd said:
Humus, is this something that can be added into the compiler (HLSL or whatever the correct name is) so that the Radeons can benefit?

No. It partly operates outside of shading; shaders don't carry any render state attributes.
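
To illustrate the render-state part: the general idea is to split the "branch" across two passes and let the stencil test skip pixels for you. A rough OpenGL sketch (hypothetical helper names, simplified compared to what the demo actually does):

Code:
#include <GL/gl.h>

// Hypothetical helpers standing in for the application's own pass setup.
void bindCheapSelectorShader();     // evaluates the condition, discards the rest
void bindExpensiveLightingShader(); // the heavy per-pixel work
void drawLitGeometry();

void renderWithStencilEarlyOut() {
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    // Pass 1: a cheap shader evaluates the "branch condition" (say, pixel
    // within the light's range) and discards pixels that fail it, so only
    // the surviving pixels write a 1 into the stencil buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // stencil only
    glStencilFunc(GL_ALWAYS, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    bindCheapSelectorShader();
    drawLitGeometry();

    // Pass 2: the expensive shader runs only where stencil == 1; early
    // stencil rejection culls the rest before the pixel shader executes.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    bindExpensiveLightingShader();
    drawLitGeometry();

    glDisable(GL_STENCIL_TEST);
}

None of that setup lives in a shader, which is why a compiler can't do it for you behind your back.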
 
Re: humus

ZenThought said:
I had lots of respect for Humus until this latest episode.

First of all, his immaturity in flaming nVidia rather than pointing out you could do SM3 emulation with SM2.

Second, it is a hack (although it's a good hack, I admit) and is no replacement for the real thing.

So I gather ATI will never implement SM3.0 since it's useless?

But pleeeeaaasee! Can't we get past this soon?

Flaming??? WTF? If that comment now counts as flaming, I'd better hand over my posts to my fluffy pink rabbit, who can soften them up so they're readable without hurting any 7-year-old's intarweb feelings.

Hack! SM3.0 = teh useless! Can we be done with that? It's been addressed 100 times already.
 
Re: SM3

ZenThought said:
So is SM3.0 useful or not useful?

There's never any feature that's useless (well, ignoring stuff like pixel zoom and a bunch of other leftover crap). The question is rather: how much does it bring? Can it be done another way? What are the performance characteristics? How much work does it require? How's the market penetration?
 
Re: It is, what it is.

Humus said:
Proforma said:
<snip>Long rant</snip>

If you want a 6800, just go ahead and buy one and be happy instead of crapping up this thread.
If you don't want a 6800, then be happy that there's an alternative that enables some of its features on other hardware, while being fully compatible with nVidia's hardware.

What I don't get is in what way this technique made the world a worse place to live in. More choice is always a good thing. At worst this technique didn't offer you anything, but it's not like everything went to hell because of it. If you don't like this technique, then don't use it.
The thing people are forgetting is that this technique helps nVidia too. I think both sides are resorting too much to name calling.
 
Humus said:
This is what I don't get. How the heck can anyone get so friggin upset by that line? Is it because it was me, and not the regular <bleep> who wrote it? It's not like we don't see people taking cheap shots at each other's favourite IHV constantly on this forum.
I'd say it's mostly because you did it when posting the demo. It certainly didn't help your case that there were performance problems unrelated to the algorithm on nVidia cards, but that's hardly your fault (since you couldn't have tested the demo on nVidia cards).
 
I can't see why people are flaming Humus for showing us his dynamic branching demo. Methods like this are great for covering the gap between SM2.0 and SM3.0. Not only will it benefit R300/R420 owners but NV30 owners too. Myself, I believe there is potential in SM3.0. I was disappointed with the Far Cry 1.2 benchmarks; somehow I expected more from it. The more relevant question than whether SM3.0 is a useless feature is: does the difference really matter given today's graphics limits?

Dave: I gotta commend you on your patience. I would have banned certain people in here a long time ago, what with these NvHeads and AtiHeads bashing each other.
 
digitalwanderer said:
bitwise xor said:
Just out of curiosity, as people seem to have become rather single-mindedly focused on DX9 for Far Cry: does this particular game even show a difference between shader 2.0 and shader 1.1 that's worth caring about?
Yes.

Got a link to some info?

When I was following the discussion before (I haven't for a long time), the best anybody could come up with was 'shinier pipes'. Does it go beyond this?

I found a FiringSquad article saying that pretty much everything is shader 1.1 anyway (which would make sense given when the game was made and the platform it would have been developed on). Is that true?

On the surface it seems to me the differences just don't amount to much. I'm wondering if this whole Far Cry DX9 thing is a case of fanboys getting too carried away with themselves, desperate for some material to keep arguing about ATI vs. nVidia?
 
Kain said:
I can't see why people are flaming Humus for showing us his dynamic branching demo. Methods like this are great for covering the gap between SM2.0 and SM3.0. Not only will it benefit R300/R420 owners but NV30 owners too. Myself, I believe there is potential in SM3.0. I was disappointed with the Far Cry 1.2 benchmarks; somehow I expected more from it. The more relevant question than whether SM3.0 is a useless feature is: does the difference really matter given today's graphics limits?

Dave: I gotta commend you on your patience. I would have banned certain people in here a long time ago, what with these NvHeads and AtiHeads bashing each other.

People are flaming because he is trivializing SM3.0. Yes, it might not give a night-and-day difference, but it will simplify development and it's a nice feature to have.

So what if it can be emulated? Anything on a CPU/GPU can be emulated if you have the time and patience, but that's not the core issue.

Since he's so pro-ATI (no surprise, since he's their employee), we need a balancing point of view. BTW, I am not an nVidia employee but a neutral bystander; in fact, for fair disclosure, I own a Radeon 9800 Pro.

Here's what I see:
- SM3.0 is a nice feature to have
- yes, some of its features can be emulated with SM2.0

This is what the pro-ATI factions are saying:
- SM3.0 is useless because nobody is using it
- there is no proof of any performance improvement

Now who has the fairer point of view?

I wish Humus would spend his time improving ATI's products rather than disparaging the competition's features.
 
Most people can't take a joke. This joke seems to have hit you guys hard, judging by this kind of behaviour. Humus must have hit a really weak spot. 19 pages and growing, and most of it is fanboyism. Shame on ye, Greenies and Redhats.
 
This is what the pro-ATI factions are saying:
- SM3.0 is useless because nobody is using it
- there is no proof of any performance improvement
I don't think anyone is saying that.

Some of SM3.0 will speed things up; some will slow things down. So like any feature, at times it may give a minimal loss and at times a small gain.

Most of us are saying it's not very important to support, as all the big-name games for the next two years will be focusing on SM2.0, since that has the largest installed base for at least the next two years (the Radeons getting almost all the top-tier OEM contracts will make that statement very true). So while SM3.0 is nice and by no means a drawback, it isn't much of a plus.

Especially if, as Far Cry has shown, the X800 XT is faster than the 6800 Ultra even when it's running SM3.0.

Btw, with all nVidia has done in the last two years, it's very hard to trust them on SM3.0 support.
 
I'm somewhat surprised that this thread hasn't been locked... what more is there to discuss :?:
 
SM 3.0 is great.

SM 2.0 has yet to be fully utilised by the industry.

I see more games out using SM 1.x than anything.
 
jvd said:
Most of us are saying it's not very important to support, as all the big-name games for the next two years will be focusing on SM2.0, since that has the largest installed base for at least the next two years (the Radeons getting almost all the top-tier OEM contracts will make that statement very true). So while SM3.0 is nice and by no means a drawback, it isn't much of a plus.
The more SM2 support there is, the more opportunity for further optimizations using SM3 there will be. Of course, that doesn't mean I don't still think ATI's lack of SM3 support is highly disappointing. I still think they're holding games back.
 
Chalnoth, PS 2.0 hasn't even picked up yet and how long have cards supported it?

Since the R300, which was nearly two years ago or something?

Neither ATI nor nVidia is holding back the games industry. Coders code for the lowest common denominator, and if they have time they knock in some extras like PS 2.0 or whatnot.

Look at how many people have low-end cards these days.
Shit, my brother still has an NV20 and a P3 800MHz and all his games run fine on them; even Far Cry runs nice and smooth.

Until we see everyone upgrade their systems to something high-end, things will continue to move slowly.
 
ZenThought said:
This is what the pro-ATI factions are saying:
- SM3.0 is useless because nobody is using it
- there is no proof of any performance improvement
No one is saying that but you.

Bolded for emphasis. =P
 