New dynamic branching demo

Re: programming

Proforma said:
"Good alternative" is subjective and its not going to be accepted by
anyone with purposes in the industry as a standard way anyway
since its a hack. Since we can't add features to the hardware,
lets add in desperate hacks

I am sure you can also hack in object instancing via 2.0 as well.
Hell, why not do everything with 1.0 shaders. Who needs progress.

I would sure as hell rather trust Tim Sweeney, who makes real software
that's ahead of the curve, than someone in Canada who works
for ATI and makes pointless demos all the time.

When your demos are on the cutting edge and are like Epic's
actual in game demos, then call me.

No offense, but you kind of came off sounding like a total asshole there... which is really unnecessary.

I don't think anyone on these forums has gone to the extreme of saying that PS3.0 is some checkmark feature that serves no useful purpose. No one here is shunning technical progress when it comes to hardware advances, either. All that's been said is that certain aspects of PS3.0 can be achieved with alternative methods on hardware without native support.

There have been countless examples of this being done in the past, and I really can't see how it's ever a bad thing to get more use out of existing hardware... at least devs end up having the option and freedom to implement similar features across a wider range of products, whether the hardware natively supports it or it can be done through some creative programming. In the end, all that really matters is whether the result is the same. Technology certainly isn't going to stand still because of it.
 
Re: call a spade a spade

FUDie said:
Where did ATI f*ck up? Looks to me that the R420 is doing exactly what it was designed to do.
-FUDie
I don't think ATI fucked up, but your argument is a non-argument. Even the NV30 does exactly what it was designed to do, LOL :LOL:
 
Re: programming

hughJ said:
I don't think anyone on these forums has gone to the extreme of saying that PS3.0 is some checkmark feature that serves no useful purpose. No one here is shunning technical progress when it comes to hardware advances, either. All that's been said is that certain aspects of PS3.0 can be achieved with alternative methods on hardware without native support.

This is not exactly true. A lot of people on these forums have said that PS3.0 (NV40) is pretty much a checkbox feature: too slow to be used, and so forth.

But I'd suggest going to the 3D Architecture & Coding thread instead (make sure that you read the disclaimer in the first post), which I'm guessing will be really interesting to follow. Not just for information about this technique (which seems to work great in this particular case), but it might lead to a better understanding of the penalties of DB on the NV40.
 
Re: call a spade a spade

nAo said:
FUDie said:
Where did ATI f*ck up? Looks to me that the R420 is doing exactly what it was designed to do.
I don't think ATI fucked up, but your argument is a non-argument. Even the NV30 does exactly what it was designed to do, LOL :LOL:
If you're saying that NVIDIA deliberately designed the NV30 to be inferior in PS 2.0 performance, I seriously doubt it. The design was hosed, that was the f*ck up. I don't see any design issues with the R420.

-FUDie
 
Obviously NVIDIA knew, or do you believe they built a chip without doing extensive simulations?
They knew NV30's PS2.0 performance from day one, I have no doubt about that.
They simply made the wrong choices, they delivered late, and they didn't expect such a good architecture (R300) from ATI.
Moreover, every architecture has flaws, even R420, but most of the time those 'flaws' are not bugs but trade-offs the designers made.
IMHO R420's lack of support for SM3.0 is a 'flaw', but I know that ATI wanted it that way. As a developer I have no doubt about SM3.0 not being a marketing checkbox.
Humus's technique is just an old trick that, in light of SM3.0, could be seen as a stop-gap measure. SM3.0 makes the developer's life easier; the trick is easy to implement in some simple cases but is not very elegant, and well, it's just that: something developers have been doing since alpha and stencil tests appeared on GPUs.
I'm sure Humus didn't want to spread FUD in the name of ATI, because I believe ATI wouldn't have let him present that trick in such an unprofessional and laughable way, as if he had made some wonderful new discovery, imho.

ciao,
Marco
 
Guys, what is going on lately?

Humus posted a demo with a new technique, and then a Nvidiot made 12 posts over 2 pages with absolutely no technical info, just arguing against it because it could make his IHV less desirable. And now we have 15 pages of mostly garbage about Nvidia/ATI fights again?
Is this forum a sub-forum of the Inquirer?

And when I read the post just above mine, I wonder if people read carefully, because it's not Humus who made a bad presentation; he just reacted to all the garbage thrown at him.
 
Re: programming

Proforma said:
"Good alternative" is subjective and its not going to be accepted by
anyone with purposes in the industry as a standard way anyway
since its a hack.

By your logic, the fast inverse square root would be considered a "hack" too... but it's about 4 times faster than doing the normal 1/sqrt and has proven extremely useful, which made it widely accepted... hope you see where I'm going with this...
 
People may have taken some offense at Humus' manner in another thread, and simply chose to "extend" that "discussion" by basically thread-crapping here. They should have attacked Humus in the relevant thread, not brought their agenda into this one.

Anyway, this thread has outlived its intended usefulness, IMO. Those interested in the technical details have Mint's new thread to frolic in. Those interested in flaming should just do something more constructive, or at least satisfying, like grilling up a burger or watching what promises to be two entertaining Wimbledon finals. :)
 
Humus said:
Very interesting information. Thanks. :)
Interesting that it's faster than using PS3.0 dynamic branching even on nVidia hardware. I would have guessed it would be about the same performance, but I guess branching indeed is a bit costly.

Would the stencil trick be faster if you made it depth/stencil-only and then added an extra pass for the ambient?

Also, what would be cool in PS3 is to pass the light info in as a texture and loop through all of the lights in the pixel shader in one pass :)
 
pocketmoon66 said:
Also, what would be cool in PS3 is to pass the light info in as a texture and loop through all of the lights in the pixel shader in one pass :)
to free some PS constants?
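To make the "loop over lights packed into a texture" idea concrete, here is a CPU-side sketch of what such a PS3.0 loop would compute for one surface point. The flat array stands in for the light texture, and the attenuation formula and all values are made up purely for illustration.

```cpp
#include <vector>

// A "light texture": each texel holds one light's position and color.
struct Light { float pos[3]; float color[3]; };

// What a PS3.0 shader loop over the light texture would compute for a
// single surface point p: accumulate distance-attenuated lighting from
// every light in one pass, one fetch per "texel".
void shadePoint(const std::vector<Light>& lights, const float p[3], float out[3]) {
    out[0] = out[1] = out[2] = 0.0f;
    for (const Light& l : lights) {
        float dx = l.pos[0] - p[0], dy = l.pos[1] - p[1], dz = l.pos[2] - p[2];
        float att = 1.0f / (1.0f + dx * dx + dy * dy + dz * dz); // falloff
        for (int c = 0; c < 3; ++c) out[c] += att * l.color[c];
    }
}
```

Since the light data lives in a texture rather than in shader constants, the light count is no longer bounded by the constant register file, which is presumably what "to free some PS constants?" is getting at.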
 
What percentage of the cards out there are PS3 cards? What percentage are PS2 cards? Sub-PS2 cards are most likely the clear majority, but PS2 is gaining more and more ground.

What I fail to see in this thread (the worst I've read on B3D) is why some people don't see the benefits of Humus's work. Do you really believe that the mass market goes out and buys the latest PS3 HW once it's out? I believe there are many like myself who have a PS2 card and will keep it for quite some time, until it's really necessary to buy better (too slow, or something that person wants is only in PS3; for me that'll be R5xx or NV5x). Therefore what Humus has done should help other developers, in the form of an example of how to create PS3-type dynamic branching with older PS2 functionality, thus widening the market for a game/app from PS3 to PS2. This should be easy to understand?

And so what if the code is not as clean as it would be with PS3? If you have problems comprehending slightly more difficult or less elegant source code, then maybe you're in the wrong field of engineering. It may be that eventually the HLSL compiler will do it for you automatically (I don't know if this is possible, since I'm not in 3D graphics development) in case you're writing PS3 but the end hardware only possesses PS2.

Dave, I hope that you use your "magic wand" and don't let this kind of pointless arguing go out of proportion again (name calling, etc.). This level of discussion (most of this thread, anyway) really belongs on nVnews and Rage3D.
 
paju said:
Therefore what Humus has done should help other developers, in the form of an example of how to create PS3-type dynamic branching with older PS2 functionality, thus widening the market for a game/app from PS3 to PS2. This should be easy to understand?
It should be easy to understand, but what you fail to understand is that Humus has not used any PS2 feature. Obviously this is not a flaw, but inexperienced people have to understand that we're not hearing anything new here. Every average game developer knows that stuff. Rebranding it that way is what irritates me. Look how it has fooled a lot of people here.
Humility please..

ciao,
Marco
 
nAo said:
paju said:
Therefore what Humus has done should help other developers, in the form of an example of how to create PS3-type dynamic branching with older PS2 functionality, thus widening the market for a game/app from PS3 to PS2. This should be easy to understand?
It should be easy to understand, but what you fail to understand is that Humus has not used any PS2 feature. Obviously this is not a flaw, but inexperienced people have to understand that we're not hearing anything new here. Every average game developer knows that stuff. Rebranding it that way is what irritates me. Look how it has fooled a lot of people here.
Humility please..

ciao,
Marco

What I meant with the "easy to understand" statement was that many more customers are reached, not just those with PS3 functionality. Not everyone can afford or is willing to buy the latest HW, therefore it's in developers' (well, game houses') best interest to write the SW for the masses to get the biggest profits.

What is the minimum level of PS then required for the dynamic branching demo? If it's lower than PS2, then it sure should get the attention of any developer who was not aware of this.
 
To be quite honest it saddens me to see some of the personal remarks made in here... we are discussing technology and some people think they are justified to extend their disagreement on a more personal level, to put it mildly.

Why someone would resort to name-calling in order to defend his argument always eluded me... :(
 
paju said:
What I meant with the "easy to understand" statement was that many more customers are reached - not just those with PS3 functionality.
Wait... for a moment, clear your mind of this 'new way of performing dynamic branching without PS3 shaders' stuff.
There are probably plenty of games that used, use, and will use tricks along these lines, and there have been since GPUs with alpha (or destination alpha...) and stencil tests appeared on the market, many many years ago. Oh man... this stuff is so basic that surely someone has already patented it 8)
What is the minimum level of PS then required for the dynamic branching demo? If it's lower than PS2 then it sure should get the attention of any developer who was not aware of this.
There is no attention to raise here; that's basic stuff imho, not the second coming of Jesus Christ. It should be clear by now.

ciao,
Marco
 
The optimization can also be applied to NV3x/NV40 now: just change "dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_ZERO);" to "dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);" and clear the stencil buffer after the drawLighting call.

I got 80FPS with optimization off and 230-250FPS with optimization on.
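For readers less familiar with the technique being tweaked above, here is a software sketch of the stencil early-out idea: a cheap first pass writes a per-pixel mask, and the expensive lighting shader then runs only where the mask passes. The predicates and shader stand-ins are hypothetical; real code would issue the D3D calls quoted above instead of looping over pixels itself.

```cpp
#include <vector>

// Minimal stand-in for a render target with a stencil buffer.
struct Framebuffer {
    int w, h;
    std::vector<unsigned char> stencil;
    std::vector<float> color;
    Framebuffer(int w_, int h_) : w(w_), h(h_), stencil(w_ * h_, 0), color(w_ * h_, 0.0f) {}
};

// Pass 1: cheap per-pixel test (e.g. "inside the light's range?")
// writes the stencil mask.
template <class InRange>
void stencilPass(Framebuffer& fb, InRange inRange) {
    for (int i = 0; i < fb.w * fb.h; ++i)
        fb.stencil[i] = inRange(i % fb.w, i / fb.w) ? 1 : 0;
}

// Pass 2: the expensive lighting shader runs only where stencil == 1.
// Returns how many pixels actually paid for full lighting.
template <class Shade>
int lightingPass(Framebuffer& fb, Shade shade) {
    int shaded = 0;
    for (int i = 0; i < fb.w * fb.h; ++i)
        if (fb.stencil[i]) { fb.color[i] = shade(i % fb.w, i / fb.w); ++shaded; }
    return shaded;
}
```

When most pixels fail the cheap test, most of them never run the expensive shader, which is where speedups like the 80 to 230-250 FPS jump reported above come from.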
 
Ruined said:
The branching effect also exhibits the 10% slowdown on 6800 series cards. Nvidia cards get a performance *hit* when Humus' "branching" is enabled, not a gain. It has nothing to do with the card, but with the coding.

Humus said:
Well you can just put that BS right back where the sun don't shine.

Do you have any proof of a flaw in the coding? It's open source. I challenge you to find any flaw...

In fact, I believe that this IS the card...

991060 said:
the optimization can also be applied to NV3x/NV40 now, just change "dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_ZERO);" to "dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);" and clear the stencil buffer after drawLighting call.

got 80FPS with optimization off and 230-250FPS with optimization on

See the above post, someone fixed your code for you :LOL:


Humus, maybe you can answer one basic question about your demo that kind of contradicts the reasoning behind your statements.

The technique you have used in your demo has been around for a while. People who program shaders for games like Halo, Far Cry, etc., are not morons. In fact, they have made complex, incredible-looking games far beyond the scope of a simple graphical demo. That being said, odds are they know of this technique or, even if they didn't, could have figured it out. Since this would be the case in reality, if the technique has not already been used in games even though it has been around for a while, why is it not being used, and why are developers choosing Shader Model 3.0 instead, with over 10 titles slated to support SM3.0 already?

Logic would dictate that in reality it's either not practical to implement, would have problems in larger-scale games, or Shader Model 3.0 is simply a better answer. If none of these were the case, it would have already been used by now, because it is nothing new.
 
Ruined said:
Logic would dictate that in reality it's either not practical to implement, would have problems in larger-scale games, or Shader Model 3.0 is simply a better answer. If none of these were the case, it would have already been used by now, because it is nothing new.


As with all kinds of graphical techniques, they are often known about for a long time before we get hardware powerful enough to use them in realtime, mainstream games.

The alternative answer is that only with R420/NV40-class hardware do we have the power to use this technique in realtime. Even the NV40 can benefit from it.

SM3.0 is probably a better technical solution, but when even Nvidia recommends limited use of SM3.0 branching because of the performance hit, I'm sure there are many possible situations where this technique might be preferable on both manufacturers' hardware.
 
Technology is moving so fast in the PC graphics space that many developers have neither the time nor the capacity to spend an age thinking about the various nuances of the elements available on every single platform - often they are more concerned with the time and budget constraints their publishers are putting on them, and with dealing with scalability issues for the PC platform. Console developers can often afford to invest time in more esoteric solutions that may not be immediately obvious, because their platform is not going to change in six months' time but will often last for 5 or so years - this is why the quality of console titles keeps rising over the lifetime of the console, as developers find ways to get more out of the platform.

PC developers often have to look for the banner features, rather than digging deeper, because of what they are dealing with on the platform, and they usually look to the IHVs to provide them support and insight into those - which is why ATI and NVIDIA spend heaps of cash on developer conferences and have developer relations teams assisting developers in their work, often writing chunks of code for them.
 
DaveBaumann said:
Once a game has been released, you've made the majority of the money you are going to make from it - patches are usually released purely for compatibility purposes; significant development time isn't spent creating new features, as it usually goes to your next project, which will be a revenue earner from actual sales. It's already been demonstrated that the 6800 can run the Radeon path fairly easily, so if you were patching just to get the 6800 to, well, the best the game has, then why would you not just alter the title a little in order to do this?
Someone already stated they were trying to sell their engine, and thus I think the time was not a waste of resources, since in the future ATI cards will support it as well...
 