new demo(s) coming that put Double Cross (Ruby) to shame?

So they all achieve the same result; it's just a matter of how efficient the given implementation is?

Which implementation do you believe is the best, Neeyik?
 
frost_add said:
I don't get it. I have a single HLSL source file; when compiled to ps 3.0 it uses dynamic branching and static flow control, texture loads are ordered differently, and the generated code uses some extra ps 3.0 features I already mentioned (like arbitrary swizzles - even with no flow control, ps 3.0 code tends to be a little shorter). By using a single #define I made it use texldl instead of texld on shadow maps (so that it can be put inside a conditionally executed block).
The same source file compiles to ps_2_b, but the generated code is completely different (loops are unrolled, etc.). How else would you define "version transparency"?
I never expected anyone to be satisfied with that kind of shader. Yes, HLSL can emulate dynamic branching in a PS2.0b shader, but do you really want to be doing that? IIRC, the compiler will emit code that executes all paths and then uses cmp or lrp to pick between them. Again, I didn't think anyone would seriously use that as a shader.

In your case, you do get version transparency at the compiler level. At the performance level, though, you might not.
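What that cmp/lrp fallback amounts to can be sketched as a toy model in Python (purely illustrative - the instruction costs and shading values are made up, and this is not actual compiler output): with predication, both sides of the branch are evaluated for every pixel and a cmp-style select picks one result afterwards.

```python
# Toy model of how a ps_2_x-class compiler flattens a dynamic branch:
# both sides execute unconditionally, then cmp/lrp selects one result.

def shade_with_real_branch(in_shadow: bool) -> tuple[float, int]:
    """ps_3_0-style dynamic branch: only one path runs."""
    cost = 0
    if in_shadow:
        cost += 4          # pretend the shadow path costs 4 instructions
        color = 0.2
    else:
        cost += 10         # pretend the lit path costs 10 instructions
        color = 1.0
    return color, cost

def shade_flattened(in_shadow: bool) -> tuple[float, int]:
    """ps_2_b-style predication: both paths execute, cmp picks the result."""
    cost = 4 + 10 + 1      # shadow path + lit path + the final cmp select
    shadow_color, lit_color = 0.2, 1.0
    # models: cmp r0, cond, a, b  ->  a if cond >= 0 else b
    color = shadow_color if in_shadow else lit_color
    return color, cost

# Same answer either way, but the flattened version always pays for both paths.
for in_shadow in (True, False):
    c1, cost1 = shade_with_real_branch(in_shadow)
    c2, cost2 = shade_flattened(in_shadow)
    assert c1 == c2 and cost2 >= cost1
```

That is why emulated branching gives the same image but not the same performance: the "skipped" path is still paid for on every pixel.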
 
XxStratoMasterXx said:
Which implementation do you believe is the best, Neeyik?
Not easy to say, given the dearth of games fully supporting HDR across a range of implementations. Logically you'd want to go for hardware FP blending, but Far Cry doesn't exactly fly when using it; the game doesn't offer an alternative method, so one can't tell if it's "the best" method. If it were only 1 fps faster, would one still class the blending way as the best?
 
Guden Oden said:
Chal,
Don't you think ATi knows that? :p What you may need to consider is that they're concerned with what will be most useful at a particular point in time, not what will look best on a marketing spec sheet.
No, I think they're more concerned with making their hardware look good.
 
Neeyik said:
Not easy to say, given the dearth of games fully supporting HDR across a range of implementations. Logically you'd want to go for hardware FP blending, but Far Cry doesn't exactly fly when using it; the game doesn't offer an alternative method, so one can't tell if it's "the best" method. If it were only 1 fps faster, would one still class the blending way as the best?
Of course FP blending is the best way to support HDR. There's just no other way to do general HDR rendering. Any other technique will depend upon special cases.

Keep in mind that there is apparently a problem with the NV40 and its support of FP render targets. This problem was fixed with the GeForce 6600, which sometimes outperforms the GeForce 6800 Ultra with FP blending enabled in Far Cry.
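Why FP render targets matter for HDR in the first place can be shown with a tiny Python sketch (a toy model; the particle intensities are invented and "the framebuffer" is one pixel): additive blending into a fixed-point target clamps every result to [0, 1], while a floating-point target preserves overbright values for later tone mapping.

```python
# Toy model: additively blending bright particles into one framebuffer pixel.
# An 8-bit fixed-point target clamps each blend result to [0, 1], so HDR
# intensity is lost; an FP16 target keeps values above 1.0 for tone mapping.

def blend_add_fixed(dst: float, src: float) -> float:
    """Additive blend into a fixed-point target: clamped to [0, 1]."""
    return min(1.0, dst + src)

def blend_add_fp(dst: float, src: float) -> float:
    """Additive blend into a floating-point target: no clamp."""
    return dst + src

# Three overlapping particles, each with (made-up) intensity 0.75.
particles = [0.75, 0.75, 0.75]

fixed_result = 0.0
fp_result = 0.0
for intensity in particles:
    fixed_result = blend_add_fixed(fixed_result, intensity)
    fp_result = blend_add_fp(fp_result, intensity)

print(fixed_result)  # 1.0  -> overbright detail is gone
print(fp_result)     # 2.25 -> true intensity survives for tone mapping
```

The clamp is exactly the "special case" problem: without FP blending, anything brighter than white has to be faked around, per effect.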
 
Chalnoth said:
Neeyik said:
Not easy to say, given the dearth of games fully supporting HDR across a range of implementations. Logically you'd want to go for hardware FP blending, but Far Cry doesn't exactly fly when using it; the game doesn't offer an alternative method, so one can't tell if it's "the best" method. If it were only 1 fps faster, would one still class the blending way as the best?
Of course FP blending is the best way to support HDR. There's just no other way to do general HDR rendering. Any other technique will depend upon special cases.

Keep in mind that there is apparently a problem with the NV40 and its support of FP render targets. This problem was fixed with the GeForce 6600, which sometimes outperforms the GeForce 6800 Ultra with FP blending enabled in Far Cry.

Is that the reason Carmack mentioned in his .plan that he wanted floating point framebuffers with a blending operation? So he wouldn't have to hack anything and just do a generalized HDR rendering thingy?

(psst...Chalnoth oh great B3D guru, check your private messages :))
 
Inane_Dork said:
I never expected anyone to be satisfied with that kind of shader. Yes, HLSL can emulate dynamic branching in a PS2.0b shader, but do you really want to be doing that? IIRC, the compiler will emit code that executes all paths and then uses cmp or lrp to pick between them. Again, I didn't think anyone would seriously use that as a shader.

In your case, you do get version transparency at the compiler level. At the performance level, though, you might not.

Data-dependent branching is not really a language feature; it is a hardware feature. You do not think about it when writing high-level code; you think about the algorithm you are implementing. Actually, if you had to think about assembly when writing high-level code, that would kind of defeat the reason the high-level approach to shaders appeared in the first place.

The shaders I am talking about were not created "for" ps 3.0, just as they were not created "for" ps_2_b. It just turned out that with the 3.0 compile target they have the possibility of running faster, and that was spotted by the compiler, not by some special code magic.

If the HLSL compiler is capable of taking advantage of ps 3.0 features on existing shaders (and it is), then everyone should be happy. NVIDIA should be happy because more games will have a "shader 3.0" checkbox (it is just too easy to support), ATI will be happy because no game will show lower-quality shaders on their hardware (and they still seem quite capable of competing in raw performance), developers will be happy because they will only write one version of their shaders (for this hardware generation), and gamers will be happy because games will have shaders that are optimal (performance-wise) for their hardware, even if the developers did not spend time hand-tuning every shader for specific hardware (and the more complex shaders get, the less chance of that).

Of course, this discussion is kind of pointless. It is obvious that ATI will eventually have ps 3.0 hardware, just as it is obvious that ps 3.0 is just a step on the way towards whatever comes after it.

My only point here is that developers will use that kind of "unified" approach for _this_ hardware generation (simply because it's the practical one). I base that opinion on personal experiments with relatively complex shaders (compiling to 300+ instructions on the 2_a/2_b profiles). Experiments done by other people may lead to different conclusions. "Write once, run anywhere" is kind of a developer's holy grail, but I really think we are getting very close to that, at least in terms of shader programming. Other hardware features are much more of a problem now.
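The "same source, different generated code" idea can be illustrated with a toy compiler model in Python (purely illustrative - the "instructions" are made up and real fxc behavior is more involved): a profile with flow control keeps the loop, while a profile without it gets the loop unrolled, so code size and structure diverge from one source.

```python
# Toy model of single-source, multi-target shader compilation: a target
# with flow control keeps the loop; one without it gets the loop unrolled.

LIGHT_LOOP_BODY = ["sample light", "accumulate"]   # pretend shader ops

def compile_shader(num_lights: int, target: str) -> list[str]:
    """Emit a pretend instruction stream for the given compile target."""
    if target == "ps_3_0":
        # Flow control available: keep the loop; code size stays constant.
        return ["loop i, num_lights"] + LIGHT_LOOP_BODY + ["endloop"]
    elif target == "ps_2_b":
        # No real loops: unroll; code size grows with the light count.
        code = []
        for _ in range(num_lights):
            code.extend(LIGHT_LOOP_BODY)
        return code
    raise ValueError(f"unknown target: {target}")

looped = compile_shader(8, "ps_3_0")
unrolled = compile_shader(8, "ps_2_b")
print(len(looped))    # 4   -> constant size regardless of light count
print(len(unrolled))  # 16  -> 8 iterations x 2 ops
```

The source never mentions either target; the divergence is entirely the compiler's doing, which is the "version transparency" being argued for.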
 
frost_add said:
Actually, if you had to think about assembly when writing high-level code, that would kind of defeat the reason the high-level approach to shaders appeared in the first place.
I have never, ever gotten this impression from ATi, nVidia or developers in general. Since the rest of your post basically beats this point home, I'll just leave my argument here.
 
RejZoR said:
Actually, HDR in Half-Life 2 was used in the most realistic way.
That's an impossibility.
1. It didn't use FP blending, so it's clearly not a general technique.
2. Even if it used FP blending, there's no way it was the most realistic HDR possible.

And don't think of Far Cry as the be-all and end-all of FP-blending HDR. It simply cannot have that great an implementation, because it was tacked on after the game was finished (hell, after it was released).
 
Chalnoth said:
RejZoR said:
Actually, HDR in Half-Life 2 was used in the most realistic way.
That's an impossibility.
1. It didn't use FP blending, so it's clearly not a general technique.
2. Even if it used FP blending, there's no way it was the most realistic HDR possible.

And don't think of Far Cry as the be-all and end-all of FP-blending HDR. It simply cannot have that great an implementation, because it was tacked on after the game was finished (hell, after it was released).

Chalnoth,
Yes, it can't really have that good an implementation, BUT I think it's supposed to be a selling point for the engine. Say, if someone licensed CryEngine, wouldn't they be able to use the FP-blending HDR correctly if the content was made for it?

RejZoR said:
Actually, HDR in Half-Life 2 was used in the most realistic way.
Far Cry was nice, but way too intensive.

Half-Life 2 didn't use HDR, AFAIK.
 
RejZoR said:
Actually, HDR in Half-Life 2 was used in the most realistic way.
Far Cry was nice, but way too intensive.

HL2 used a bloom effect à la Deus Ex 2 / Thief 3. No actual HDR.
 
Waltar said:
RejZoR said:
Actually, HDR in Half-Life 2 was used in the most realistic way.
Far Cry was nice, but way too intensive.

HL2 used a bloom effect à la Deus Ex 2 / Thief 3. No actual HDR.

...hopefully Valve is going to release an HDR patch, though, to enable HDR as seen in the tech demos.
 
Chalnoth said:
FP blending is the best way to support HDR. There's just no other way to do general HDR rendering. Any other technique will depend upon special cases.
So you're saying there are blending states in hardware that are impossible to do via a pixel shader with two floating-point source textures and one floating-point destination texture?
 
maven said:
So you're saying there are blending states in hardware that are impossible to do via a pixel shader with two floating-point source textures and one floating-point destination texture?
Practically, yes. To do this in general, for instance, you would need to run each triangle through the pipeline in a separate pass. This is utterly impossible to do efficiently, and thus it is not practical.

How do you get around this? You introduce limitations. This is what I meant by FP blending being the only general way to implement HDR rendering.
 
Chalnoth said:
maven said:
So you're saying there are blending states in hardware that are impossible to do via a pixel shader with two floating-point source textures and one floating-point destination texture?
Practically, yes. To do this in general, for instance, you would need to run each triangle through the pipeline in a separate pass.

Why do you need a separate pass for each triangle? (Serious question)
 
Snyder said:
Why do you need a separate pass for each triangle? (Serious question)
Because blending combines the incoming fragment with what's already in the framebuffer. So, unless you can guarantee that you aren't going to be overwriting neighboring triangles, you have to copy/swap buffers after each triangle.

Edit:
One way you can get around this, of course, is to render nothing transparently. But there are many, many things in games that are rendered transparently. Take particles, for instance: sometimes there are hundreds, or even thousands, on screen at once. Then there are physical objects made of glass, water, or other transparent substances.
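The read-after-write hazard being described can be sketched in Python (a toy model, not real GPU behavior: "the framebuffer" is one pixel, and the particle intensities are invented). Shader-emulated blending samples a copy of the framebuffer taken at the start of the pass, so overlapping triangles in the same pass never see each other's writes:

```python
# Toy model of why emulating blending in a pixel shader needs a buffer
# copy between overlapping primitives: the shader samples a *copy* of the
# framebuffer, so it never sees what earlier triangles just wrote.

def hardware_blend(framebuffer: float, triangles: list[float]) -> float:
    """Hardware blend unit: each triangle reads the up-to-date value."""
    for src in triangles:
        framebuffer = framebuffer + src     # additive read-modify-write
    return framebuffer

def shader_emulated_blend_one_pass(framebuffer: float,
                                   triangles: list[float]) -> float:
    """Shader 'blend' with the framebuffer bound as a texture: every
    triangle samples the same stale copy taken at the start of the pass."""
    stale_copy = framebuffer
    for src in triangles:
        framebuffer = stale_copy + src      # overlapping writes are lost
    return framebuffer

overlapping_particles = [0.5, 0.5, 0.5]     # three particles on one pixel

print(hardware_blend(0.0, overlapping_particles))                  # 1.5
print(shader_emulated_blend_one_pass(0.0, overlapping_particles))  # 0.5
```

To get the correct 1.5 without a blend unit, the copy has to be refreshed (a new pass) after every overlapping triangle - which is the "separate pass per triangle" cost in the general case.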
 
One question: if the R9800 could handle all the instructions needed for these shaders, do you think it would have enough "raw" power to run them? :?:
 