New dynamic branching demo

Humus said:
You could render intermediate values to a render target and use that in all shaders thereafter. It'll cost some, but if there are enough shared parts it would be a gain.

That's what I was afraid of. First, the technique needs a render target that's the size of the front/back buffer to store the variable. Will the GPU allow reading from the render target and writing to it if more "if-then-else" statements occur after the first one? Would you need ping-pong buffers?

And an extra texture lookup would be required to read the variables back in the next passes. That's going to cost bandwidth unless the variables are zero a lot of the time. And if you need to compute more than four variables, it's MRT time.

But I guess on hardware that does not support dynamic branching, beggars can't be choosers. A simple emulated "if-then-else" is better than nothing, as you've already shown. Techniques often exist before most people have even thought about the problem, but it's the people who really popularise them that eventually get remembered.
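
To be concrete about what I mean, here's roughly how that intermediate-value pass would look in D3D9. This is only a sketch under my own assumptions (the device, shader handles, and variable names are placeholders, not anything from Humus's demo), and as far as I know D3D9 won't let a shader read a texture that is currently bound as the render target, so updating the values again in a later pass really would mean ping-ponging between two targets.

Code:
// Sketch only: one float4 render target holds up to four per-pixel "variables"
// that later passes read back with an extra texture lookup.
#include <d3d9.h>

IDirect3DTexture9* pVarTex  = NULL;   // stores the shared intermediate values
IDirect3DSurface9* pVarSurf = NULL;

void CreateVariableTarget(IDirect3DDevice9* pDevice, UINT width, UINT height)
{
    // Same size as the back buffer, float format so the values survive intact.
    pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                           D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT,
                           &pVarTex, NULL);
    pVarTex->GetSurfaceLevel(0, &pVarSurf);
}

void RenderFrame(IDirect3DDevice9* pDevice,
                 IDirect3DPixelShader9* psComputeVars,
                 IDirect3DPixelShader9* psUseVars)
{
    IDirect3DSurface9* pBackBuf = NULL;
    pDevice->GetRenderTarget(0, &pBackBuf);

    // Pass 1: a cheap shader evaluates the shared expressions once per pixel.
    pDevice->SetRenderTarget(0, pVarSurf);
    pDevice->SetPixelShader(psComputeVars);
    // ... draw scene ...

    // Pass 2: later shaders bind the target as a texture and read the values
    // back (the extra lookup and bandwidth cost mentioned above).
    pDevice->SetRenderTarget(0, pBackBuf);
    pDevice->SetTexture(0, pVarTex);
    pDevice->SetPixelShader(psUseVars);
    // ... draw scene ...

    pBackBuf->Release();
}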
 
Re: New poster trying to provide some FUD

Yeah, they will. It will be based on NV40 and be hot, large and power-hungry, and OEMs will ignore it because of that. That's why ATI has already signed up to supply companies like Dell, and Nvidia is getting small niche suppliers like Alienware (alongside ATI).

Oh crap, I thought you could run a GeForce 6800 Ultra on a good 350 watt power supply, just like the many people who have tested the card have shown, and just like ATI's latest video cards. :rolleyes:

You are wrong - a lot of people are waiting to go to 64bit CPUs with PCIe. Intel in particular will be pushing it very hard indeed. Why do you think both ATI and Nvidia are getting ready to ship PCIe cards now if nobody wants them? Why do you think Intel is launching PCIe motherboards and Via and Nvidia are already sampling PCIe chipsets?

Well, Intel won't have desktop 64-bit power until late 2005. AMD already has 64-bit CPUs out, but not many people have them compared to 32-bit chips. It's a hardcore minority (not a market for many current OEMs yet). You need a 64-bit OS to take advantage of this, and the drivers are still far too young.

Look at the world of PCI Express video cards this year: there sure are a ton of them. AGP is obsolete this year, so they should stop selling AGP cards, because of course nobody is going to buy them this year. :rolleyes:

You seem to be saying that ATI is making big mistakes in how it is running its business. The inroads made into Nvidia's business and mindshare over the last two years say otherwise. The advance OEM contracts for PCIe say otherwise. The better yields and better profit margins on R3x0 and R42x say otherwise. The increased profits and market share say otherwise.

I am not talking about events that happened two years ago, I am talking about the present and the future. I don't see Nvidia doomed at all.

As you can see from when Nvidia lost what it had to begin with, things can change quickly, as they did with the Radeon 9700 versus Nvidia, but I feel ATI is making some of the same mistakes.

Maybe I should ask you what you have been smoking, because you seem to be seeing the world through some kind of psychedelic mental state that's different from the rest of us.

An objective person does see things differently from a room of ATI fanboys.

When Nvidia goes out of business, please let me know. In fact, when ATI has a product on the shelves that's more than a snore compared to what I already own, please tell me.
 
Re: New poster trying to provide some FUD

Chalnoth said:
Bouncing Zabaglione Bros. said:
Yeah, they will. It will be based on NV40 and be hot, large and power-hungry, and OEMs will ignore it because of that. That's why ATI has already signed up to supply companies like Dell, and Nvidia is getting small niche suppliers like Alienware (alongside ATI).
Right. Because they're obviously not ever going to reduce the number of pipelines, or ever bother with a different process. :rolleyes:

In the next 6-12 months? Sure, maybe they'll reduce pipes, power, heat, and MHz, and you can pretend that its lower performance is relevant to the discussion and that ATI won't be doing the same. :rolleyes:

We already know that low-k needs a new design; it's not like Nvidia will just be able to decide to start making NV40 on low-k just like that.
 
Re: New poster trying to provide some FUD

Proforma said:
When Nvidia goes out of business, please let me know. In fact, when ATI has a product on the shelves that's more than a snore compared to what I already own, please tell me.
Your own words show that you are not as unbiased as you claim. You don't think twice the performance of your current card is significant? What about long shader support? New compressed texture format?

-FUDie
 
Re: New poster trying to provide some FUD

Bouncing Zabaglione Bros. said:
Chalnoth said:
Bouncing Zabaglione Bros. said:
Yeah, they will. It will be based on NV40 and be hot, large and power-hungry, and OEMs will ignore it because of that. That's why ATI has already signed up to supply companies like Dell, and Nvidia is getting small niche suppliers like Alienware (alongside ATI).
Right. Because they're obviously not ever going to reduce the number of pipelines, or ever bother with a different process. :rolleyes:

In the next 6-12 months? Sure, maybe they'll reduce pipes, power, heat, and MHz, and you can pretend that its lower performance is relevant to the discussion and that ATI won't be doing the same. :rolleyes:

We already know that low-k needs a new design; it's not like Nvidia will just be able to decide to start making NV40 on low-k just like that.

Working with TSMC and IBM is worth something, I think. Maybe I am just stupid and don't know much, but I figure they must have some options.

(ie they don't put all their eggs in one basket)
 
Re: New poster trying to provide some FUD

FUDie said:
Proforma said:
When Nvidia goes out of business, please let me know. In fact, when ATI has a product on the shelves that's more than a snore compared to what I already own, please tell me.
Your own words show that you are not as unbiased as you claim. You don't think twice the performance of your current card is significant? What about long shader support? New compressed texture format?

-FUDie

No, that's nice, but not as nice as having almost all of that (minus the compressed texture format), plus 128-bit color and full Shader Model 3.0 support.

I develop software for a living and also for my own use.

If I am going to spend 400-500 dollars on a video card, it had better have the latest technology in there already. I'm speaking about DX/OGL.

I wanted Shader Model 3.0 for developing geometry instancing, and ATI totally ignores the latest technology from DirectX, not just once but (according to sources on this forum) will do it again later this year.
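
To be clear, the instancing I care about is what D3D9 exposes through SetStreamSourceFreq on SM 3.0 hardware. A bare-bones sketch of the draw call follows (buffer creation, the vertex declaration, and the shaders are assumed to exist elsewhere; the strides and names are just examples):

Code:
// Draw instanceCount copies of a mesh in one call (D3D9 hardware instancing).
#include <d3d9.h>

void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* meshVB,  // per-vertex data
                   IDirect3DVertexBuffer9* instVB,  // per-instance data
                   IDirect3DIndexBuffer9*  meshIB,
                   UINT vertexCount, UINT triCount, UINT instanceCount)
{
    // Stream 0 is re-read for every instance; stream 1 steps once per instance.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | instanceCount);
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

    dev->SetStreamSource(0, meshVB, 0, 32);  // e.g. position + normal + uv
    dev->SetStreamSource(1, instVB, 0, 16);  // e.g. one float4 offset per instance
    dev->SetIndices(meshIB);

    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore default (non-instanced) stream frequencies afterwards.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}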

Quoting from Dave's review...
"This part is certainly not revolutionary, it's hardly even evolutionary, but more of a refinement on R300’s weak points, placed on a more advanced process with double the number of pipelines and effectively we have an R300 on steroids!"
 
I fail to see why ATI not supporting these features at the moment is an issue for your "development", as you can develop with a 6800.
 
I wanted Shader Model 3.0 for developing geometry instancing, and ATI totally ignores the latest technology from DirectX, not just once but (according to sources on this forum) will do it again later this year.

Latest tech from DX? Where were you when the R3x0 and NV3x platforms were released? Why weren't you complaining about SM 3.0 support then?
 
Re: New poster trying to provide some FUD

Bouncing Zabaglione Bros. said:
We already know that low-k needs a new design; it's not like Nvidia will just be able to decide to start making NV40 on low-k just like that.
Fine, but the rest of the NV4x line aren't the NV40, are they?
 
Re: New poster trying to provide some FUD

Chalnoth said:
Bouncing Zabaglione Bros. said:
We already know that low-k needs a new design; it's not like Nvidia will just be able to decide to start making NV40 on low-k just like that.
Fine, but the rest of the NV4x line aren't the NV40, are they?

Isn't the NV45 just the NV40 with the bridge chip on the package?
 
Re: New poster trying to provide some FUD

Bouncing Zabaglione Bros. said:
Chalnoth said:
Bouncing Zabaglione Bros. said:
Yeah, they will. It will be based on NV40 and be hot, large and power-hungry, and OEMs will ignore it because of that. That's why ATI has already signed up to supply companies like Dell, and Nvidia is getting small niche suppliers like Alienware (alongside ATI).
Right. Because they're obviously not ever going to reduce the number of pipelines, or ever bother with a different process. :rolleyes:

In the next 6-12 months? Sure, maybe they'll reduce pipes, power, heat, and MHz, and you can pretend that its lower performance is relevant to the discussion and that ATI won't be doing the same. :rolleyes:

We already know that low-k needs a new design; it's not like Nvidia will just be able to decide to start making NV40 on low-k just like that.

OEMs use mid-range cards far more than high-end cards. I'm sure a 6800NU or another NV4x (not the NV40, NV45, or NV48) will do well for OEMs.
 
This thread is getting ridiculous.

NV40 dynamic branching has a cost. Nvidia themselves say the best use is branching on coherent per-vertex attributes or, as in the Nalu demo, a simple ordinary texture lookup, to choose a shader path. Humus is showing how to make a PS 2.0 path to handle this case of shader selection, nothing more, nothing less. You can let your imagination go wild if you want.

If dynamic branching were so universally useful, why wouldn't they use it in a more creative way in the Nalu demo? If NV40 were so good at dynamic branching performance, why would it be slower than this method? It's not a "useless hack that will never find its way into games".

Humus, pocketmoon66, and other coders, maybe even Chalnoth, let's start a thread in the architecture/coding forum so we don't have to bother with all this utterly useless bickering.
We can talk about this technique in conjunction with stencil shadowing, NV40 performance in all three modes, stencil early-out restrictions, when it would be more/less useful, etc.

Sorry for the big font, but it's really hard to find the constructive posts.
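
To keep things concrete, the kind of PS 2.0 shader-selection path I'm describing looks roughly like this in D3D9 terms. It's only my own sketch with placeholder shader handles (psCondition, psIfPath, psElsePath), not Humus's actual code: a cheap condition shader kills pixels where the test fails so the stencil buffer ends up marking the "true" pixels, and early stencil rejection then keeps each expensive shader off the pixels that don't need it.

Code:
// Two-pass (plus condition pass) emulation of a per-pixel if/else via stencil.
#include <d3d9.h>

void DrawWithEmulatedBranch(IDirect3DDevice9* dev,
                            IDirect3DPixelShader9* psCondition, // kills pixels where the test fails
                            IDirect3DPixelShader9* psIfPath,    // expensive path A
                            IDirect3DPixelShader9* psElsePath)  // expensive path B
{
    dev->Clear(0, NULL, D3DCLEAR_STENCIL, 0, 1.0f, 0);

    // Pass 1: tag pixels where the condition holds (stencil = 1), no colour output.
    dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);
    dev->SetRenderState(D3DRS_STENCILREF, 1);
    dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    dev->SetPixelShader(psCondition);
    // ... draw geometry ...

    // Pass 2: "if" branch only where stencil == 1; early stencil rejects the rest.
    dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0xF);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
    dev->SetRenderState(D3DRS_STENCILREF, 1);
    dev->SetPixelShader(psIfPath);
    // ... draw geometry ...

    // Pass 3: "else" branch where stencil == 0.
    dev->SetRenderState(D3DRS_STENCILREF, 0);
    dev->SetPixelShader(psElsePath);
    // ... draw geometry ...

    dev->SetRenderState(D3DRS_STENCILENABLE, FALSE);
}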
 
Proforma said:
pat777 said:
Why not just give Nvidia the market right now? Screw the future!

No, I want ATI to be competitive, and right now they aren't. It's pretty much that simple. They had Nvidia down for the count and now they are just going to let them catch back up and maybe even surpass them, while they spread their resources too thin just like Nvidia did.

They are losing focus and not learning a damn thing from what Nvidia has done; that's why I am angry.

Did it occur to you that maybe ATI DID learn something from the mistakes NVidia made? That maybe ATI made the strategic decision to create an R300 on steroids that was competitive in performance with the NV40 so that they could focus on the Xenon project... and thus take the product hit now, instead of losing ground after the next Microsoft console ships, as NVidia did? It seems to me that is what they chose to do. Remember, they actually HAD a different R400 product and dropped it as a release chip in favor of the R420. I think they did that for a reason.
 
programming

Mintmaster said:
This thread is getting ridiculous.

NV40 dynamic branching has a cost. Nvidia themselves say the best use is branching on coherent per-vertex attributes or, as in the Nalu demo, a simple ordinary texture lookup, to choose a shader path. Humus is showing how to make a PS 2.0 path to handle this case of shader selection, nothing more, nothing less. You can let your imagination go wild if you want.

If dynamic branching were so universally useful, why wouldn't they use it in a more creative way in the Nalu demo? If NV40 were so good at dynamic branching performance, why would it be slower than this method? It's not a "useless hack that will never find its way into games".

Humus, pocketmoon66, and other coders, maybe even Chalnoth, let's start a thread in the architecture/coding forum so we don't have to bother with all this utterly useless bickering.
We can talk about this technique in conjunction with stencil shadowing, NV40 performance in all three modes, stencil early-out restrictions, when it would be more/less useful, etc.

Sorry for the big font, but it's really hard to find the constructive posts.

Any serious program needs to have some kind of branching or you might as well toss it for anything flexible and important.

I can't stand how people with intelligence can just say "branching isn't needed, it's a useless feature". Tell that to Tim Sweeney.

Saying Shader Model 3.0 is useless is just fanboy talk from people who probably think it's an Nvidia feature and not a feature of DirectX 9.0c, which ATI should have had in the R420 and should have in their video chip by the end of the year.

It's not that I can't stand ATI; I want them to produce competitive products. But these outrageous fanboys act like Shader Model 3.0 is a useless feature made by Nvidia that will only ever work on their cards, spreading FUD. It's not a marketing ploy, but an actual feature of a Microsoft API that ATI seems to neglect lately.

Loops and conditionals are important in programming languages; you have a very crappy language without them. You can do all the workarounds you want, but they are still hacky workarounds.

Yes, it's true: using branching ***might*** have a cost. However, the cost without it is far worse.

"If shaders are usefull why don't they make a demo of 'Final Fantasy - The Spirits Within' instead of Nalu." - Sounds like some statement the author above would make.
 
OICAspork said:
Proforma said:
pat777 said:
Why not just give Nvidia the market right now? Screw the future!

No, I want ATI to be competitive, and right now they aren't. It's pretty much that simple. They had Nvidia down for the count and now they are just going to let them catch back up and maybe even surpass them, while they spread their resources too thin just like Nvidia did.

They are losing focus and not learning a damn thing from what Nvidia has done; that's why I am angry.

Did it occur to you that maybe ATI DID learn something from the mistakes NVidia made? That maybe ATI made the strategic decision to create an R300 on steroids that was competitive in performance with the NV40 so that they could focus on the Xenon project... and thus take the product hit now, instead of losing ground after the next Microsoft console ships, as NVidia did? It seems to me that is what they chose to do. Remember, they actually HAD a different R400 product and dropped it as a release chip in favor of the R420. I think they did that for a reason.

So what you are saying is that the Radeon 9800 Pro is really a refresh of the 9700, the 9800 XT is really just a refresh of the 9800, the R420 (X800) is really just another refresh of the 9800 XT, and the R480 is just another refresh of the X800.

What a waste, sounds like 3DFX all over again.

As you said, didn't Nvidia lose their competitiveness when they worked on the Xbox chipset? Doesn't this seem a little ironic?

Doesn't sound to me like they learned all that much from Nvidia, as by the time they come back, Nvidia will have surpassed them again.

Nvidia doesn't have the distractions that ATI does and this is a problem.
 
Any serious program needs to have some kind of branching or you might as well toss it for anything flexible and important.

Yeah, it's amazing how gaming, 3D rendering, and 3D workstations flourished without any branching or loops.

What a waste, sounds like 3DFX all over again.

Or more like Nvidia all over again, or every company :rolleyes:

As you said, didn't Nvidia lose their competitiveness when they worked on the Xbox chipset?

That depends on who you ask, because some people can't stand to blame things on incompetence and bad decisions, and would rather point a finger at a third party.

Doesn't sound to me like they learned all that much from Nvidia, as by the time they come back, Nvidia will have surpassed them again

Not unless they reorganize their R&D teams and actually get R&D funds from the company making the console, like Nintendo or Microsoft; perhaps you should read up on the terms of their contract.

Nvidia doesn't have the distractions that ATI does and this is a problem.

That's why companies have things called CEOs and managers; they aren't just blobs of R&D teams with no direction.

but an actual feature of a Microsoft API that ATI seems to neglect lately.

Sorta like how Nvidia neglected PS 2.0? But I guess that didn't hold back the industry, right?

However, the cost without it is far worse.

I'm sorry, but how long has this entire industry lived on without these features?
 
At least nVidia didn't "neglect" PS 2.0. They just failed to anticipate the performance problems their architecture would have.
 