Unreal 3 Engine and SM 3.0

Re: sm3.0

Bjorn said:
davepermen said:
but even so, they only added features. How you can use them for performance, I don't know, in the case of VS.

Dynamic branching ?

Here's an explanation of what static/dynamic branching is:
http://msdn.microsoft.com/library/default.asp?url=/nhp/Default.asp?contentid=28000410
-----------
The most common branching support in current hardware shading models is static branching. Static branching is a capability in a shader model that allows for blocks of code to be switched on or off based on a Boolean shader constant. This is a very convenient method for enabling/disabling potentially expensive code paths based on the type of object currently being rendered. Between Draw calls, you can decide the various features you want to support with the current shader and then set the Boolean flags required to get that behavior. The best part about this method is that any instructions that are 'disabled' by the Boolean constant are completely skipped during execution. The disadvantage is that you can only change which if blocks are enabled/disabled at a low frequency (i.e. between draw calls). In contrast, using the execute-both-sides approach, it is possible to choose between the outputs of the two paths dynamically at a per-pixel or per-vertex level.

The most familiar branching support is dynamic branching. The dynamic branching support offered by some shader models is very similar to that offered by a standard CPU. The performance hit is the cost of the branch plus the cost of the instructions on the side of the branch taken. This execution cost is comparable to what most people are familiar with optimizing for in CPU-side code. The problem with this form of branching is that it is not available on most hardware and is currently only available for vertex shaders. Optimizing shaders that work with these models is very similar to optimizing code that runs on a CPU.
-----------

So basically, by their explanation, static branching can do everything that dynamic branching can do, except that in the worst case you incur the penalty of also executing the branch not taken.
In the best case you're not using dynamic branching at all, so there is no penalty.
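
To make that concrete, here is a rough HLSL sketch of the three styles being contrasted (the helper functions, constants, and inputs are invented for illustration; this is not from any real engine):

-----------
// Hypothetical helpers, defined up front so the sketch is self-contained.
float4 ComputeDiffuse(float3 n)
{
    return float4(saturate(dot(n, float3(0, 1, 0))).xxx, 1);
}
float4 ComputeSpecular(float3 n, float3 h)
{
    return pow(saturate(dot(n, h)), 32).xxxx;
}

// (1) Static branching: the bool is a shader constant set between Draw
// calls; instructions on the disabled side are skipped entirely.
bool useSpecular;

float4 PSStatic(float3 n : TEXCOORD0, float3 h : TEXCOORD1) : COLOR
{
    float4 c = ComputeDiffuse(n);
    if (useSpecular)                 // decided per draw call, not per pixel
        c += ComputeSpecular(n, h);
    return c;
}

// (2) Execute-both-sides (the SM 2.0 fallback): both paths always run,
// and the result is selected per pixel.
float4 PSBothSides(float3 n : TEXCOORD0, float3 h : TEXCOORD1,
                   float mask : TEXCOORD2) : COLOR
{
    float4 plain = ComputeDiffuse(n);                 // always executed
    float4 spec  = plain + ComputeSpecular(n, h);     // always executed
    return lerp(plain, spec, step(0.5, mask));        // per-pixel select
}

// (3) Dynamic branching (SM 3.0): a true per-pixel branch; the cost is
// the branch itself plus only the side actually taken.
float4 PSDynamic(float3 n : TEXCOORD0, float3 h : TEXCOORD1,
                 float mask : TEXCOORD2) : COLOR
{
    float4 c = ComputeDiffuse(n);
    if (mask > 0.5)                  // decided per pixel at run time
        c += ComputeSpecular(n, h);
    return c;
}
-----------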
 
Re: sm3.0

lyme said:
Bjorn said:
davepermen said:
but even so, they only added features. How you can use them for performance, I don't know, in the case of VS.

Dynamic branching ?

Here's an explanation of what static/dynamic branching is:
http://msdn.microsoft.com/library/default.asp?url=/nhp/Default.asp?contentid=28000410
..

Thanks for the explanation, but I already had a good idea of what it is. I was just interested to hear whether davepermen didn't consider that to be a feature that could add performance.
 
Re: sm3.0

Bjorn said:
lyme said:
Bjorn said:
davepermen said:
but even so, they only added features. How you can use them for performance, I don't know, in the case of VS.

Dynamic branching ?

Here's an explanation of what static/dynamic branching is:
http://msdn.microsoft.com/library/default.asp?url=/nhp/Default.asp?contentid=28000410
..

Thanks for the explanation, but I already had a good idea of what it is. I was just interested to hear whether davepermen didn't consider that to be a feature that could add performance.

You're welcome. Personally, I needed to look that up to get the difference between static and dynamic branching straight in my mind.

While I don't know what daveP could/would/will use it for, there are at least a few concocted scenarios where it would provide both greater ease and a performance benefit.
For example, say you had a light source, a ball, and a box, where the ball is between the light and the box. Now suppose you're writing a shader that will put a shadow on the box; or, even better, it is a crystal ball and you want to write a shader for the refraction of the light through the ball onto the box. Using dynamic branching, you could write one shader that either lights the box normally or paints the refracted light. Such a shader is a good fit because the curvature of the ball projects a curved area onto the box.
If you're using a 2.0 shader, you could use static branching and/or some tricks, which 'can' impose quite a penalty, while with a 3.0 shader you would just use a dynamic branch.
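
In rough HLSL, the contrast looks something like this (the mask/caustic textures and function names are made up to illustrate the idea, not a real shader):

-----------
sampler BoxBase;       // the box's ordinary surface texture
sampler Caustic;       // hypothetical: the refracted-light pattern
sampler CausticMask;   // hypothetical: marks where the refraction lands

float4 ExpensiveRefraction(float2 uv)   // stand-in for the costly path
{
    return tex2D(Caustic, uv);
}

// SM 3.0: a per-pixel dynamic branch, so the expensive refraction path
// costs something only on the pixels the caustic actually touches.
float4 PS30(float2 uv : TEXCOORD0) : COLOR
{
    float4 lit = tex2D(BoxBase, uv);        // light the box normally
    if (tex2D(CausticMask, uv).r > 0.5)     // dynamic branch
        lit += ExpensiveRefraction(uv);     // paint the refracted light
    return lit;
}

// SM 2.0 fallback: both paths always execute and the mask blends them,
// which is the penalty mentioned above.
float4 PS20(float2 uv : TEXCOORD0) : COLOR
{
    float4 lit       = tex2D(BoxBase, uv);
    float4 refracted = lit + ExpensiveRefraction(uv);   // always paid for
    return lerp(lit, refracted, tex2D(CausticMask, uv).r);
}
-----------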
 
DaveBaumann said:
Isn't this also the case for the vast minority of NV hardware being sold currently... :?:

I was under the impression that NV didn't have any SM2.0 hardware up until NV40.

Was I wrong?
 
vb said:
DaveBaumann said:
Isn't this also the case for the vast minority of NV hardware being sold currently... :?:

I was under the impression that NV didn't have any SM2.0 hardware up until NV40.

Was I wrong?

Yep, should have been "what many people don't consider to be SM2.0 hardware" :)
 
max-pain said:

Interesting:

Advanced Dynamic Shadowing. Unreal Engine 3 provides full support for three shadow techniques:

  • Dynamic stencil buffered shadow volumes supporting fully dynamic, moving light sources casting accurate shadows on all objects in the scene.
  • Dynamic characters casting dynamic soft, fuzzy shadows on the scene using 16X-oversampled shadow buffers.
  • Ultra high quality and high performance pre-computed shadow masks allow offline processing of static light interactions, while retaining fully dynamic specular lighting and reflections.

I wonder where NVidia's UltraShadow technology will fit in. Will it be usable here, or will it probably be a Doom 3 engine-only feature? (That of course depends on what other developers are doing.)
 
Bjorn said:
I wonder where NVidia's UltraShadow technology will fit in. Will it be usable here, or will it probably be a Doom 3 engine-only feature? (That of course depends on what other developers are doing.)
Depends on whether it's exposed in Direct3D (I think it's the EXT_depth_bounds_test extension in OpenGL). If it is, then I'm sure Epic will use it, as it is definitely quite useful.
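
For reference, the OpenGL side of it is just an enable and a pair of bounds. A minimal C sketch, assuming the driver exports EXT_depth_bounds_test and the entry point has already been fetched via wglGetProcAddress/glXGetProcAddress:

-----------
#include <GL/gl.h>

/* Token and entry point from the EXT_depth_bounds_test spec. */
#define GL_DEPTH_BOUNDS_TEST_EXT 0x8890
typedef void (*PFNGLDEPTHBOUNDSEXTPROC)(GLclampd zmin, GLclampd zmax);
extern PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT;   /* fetched at startup */

void draw_shadow_volumes_for_light(double zmin, double zmax)
{
    /* Fragments whose stored framebuffer depth falls outside
     * [zmin, zmax] are discarded before any stencil update, so the
     * stencil fill is skipped in regions this light's shadow volumes
     * cannot possibly affect (the "z scissor" idea). */
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zmin, zmax);

    /* ... render this light's shadow volumes into the stencil buffer ... */

    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}
-----------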
 
A generalization of the scissor test to the Z coordinate makes perfect sense and is cheap to implement, but MS's adoption of it depends on the IHVs fighting it out.

However, even without the z-scissor, the shadow buffer algorithms in UE3 will be accelerated by the 32x0 mode, which uses 16x supersampling to achieve soft shadows. MSAA can't be used on render targets, so other cards will be forced to run it at their non-AA stencil fillrate.
 
DemoCoder said:
However, even without the z-scissor, the shadow buffer algorithms in UE3 will be accelerated by the 32x0 mode, which uses 16x supersampling to achieve soft shadows. MSAA can't be used on render targets, so other cards will be forced to run it at their non-AA stencil fillrate.
No, MSAA can be used on render targets, but not textures. You can, however, copy the contents of an MSAA'ed buffer to a texture.
 
Of course, I was talking about render-to-texture. DX imposes the restriction that the depth-stencil surface must have the same multisample type as the render-target texture, so you can't create a 4xMSAA depth buffer and bind it as the depth surface of a render target with no MSAA set. And depth textures can't be MSAA'ed either.

Correct me if I'm wrong, but you also cannot just call SetRenderTarget() with the result of CreateDepthStencilSurface(); instead, you must call SetDepthStencilSurface(). You also can't get away with a "null" render target from what I've read: you have to bind a color buffer even if you're not going to use it. At least, people on the game developer forums have reported that you can't do it.
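
A C++ / Direct3D 9 sketch of the rules being described here (error handling omitted; an illustration of the API shape under those restrictions, not anyone's actual code):

-----------
#include <d3d9.h>

// Key points: an offscreen *surface* render target can be multisampled, a
// render-target *texture* cannot, the bound depth-stencil must match the
// render target's multisample type, and depth surfaces are bound with
// SetDepthStencilSurface(), never SetRenderTarget().
void RenderMultisampledPass(IDirect3DDevice9* dev, UINT w, UINT h)
{
    IDirect3DSurface9* msaaColor = 0;
    IDirect3DSurface9* msaaDepth = 0;
    IDirect3DTexture9* tex = 0;
    IDirect3DSurface9* texSurf = 0;

    // A multisampled offscreen color surface is allowed...
    dev->CreateRenderTarget(w, h, D3DFMT_A8R8G8B8,
                            D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                            &msaaColor, NULL);
    // ...with a depth-stencil of the *same* multisample type (required).
    dev->CreateDepthStencilSurface(w, h, D3DFMT_D24S8,
                                   D3DMULTISAMPLE_4_SAMPLES, 0, TRUE,
                                   &msaaDepth, NULL);
    dev->SetRenderTarget(0, msaaColor);       // color surface goes here
    dev->SetDepthStencilSurface(msaaDepth);   // depth goes via its own call

    // ... draw the scene ...

    // The render-target texture is created without multisampling, and the
    // MSAA surface is resolved into it with StretchRect.
    dev->CreateTexture(w, h, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                       D3DPOOL_DEFAULT, &tex, NULL);
    tex->GetSurfaceLevel(0, &texSurf);
    dev->StretchRect(msaaColor, NULL, texSurf, NULL, D3DTEXF_NONE);

    texSurf->Release(); tex->Release();
    msaaDepth->Release(); msaaColor->Release();
}
-----------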
 
# 64-bit color High Dynamic Range rendering pipeline. The gamma-correct, linear color space renderer provides for immaculate color precision while supporting a wide range of post processing effects such as light blooms, lenticular halos, and depth-of-field.

Hmm, 64-bit, not 128-bit? Weren't they complaining earlier that FP24 isn't any good?
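
For what it's worth, "64-bit color" presumably means four 16-bit floating-point channels (FP16 per component) rather than FP32. A hand-wavy HLSL sketch of what a gamma-correct, linear-space pipeline over such a target does (names invented):

-----------
sampler Albedo;     // stored gamma-2.2; converted to linear before lighting

// Pass 1: light in linear space, write unclamped HDR to an FP16
// (D3DFMT_A16B16G16R16F, i.e. "64-bit") render target.
float4 PSLight(float2 uv : TEXCOORD0, float3 n : TEXCOORD1) : COLOR
{
    float3 albedo = pow(tex2D(Albedo, uv).rgb, 2.2);   // gamma -> linear
    float3 lit = albedo * saturate(dot(n, float3(0, 1, 0))) * 4.0;
    return float4(lit, 1);   // values above 1.0 survive in FP16
}

sampler HdrScene;   // the FP16 result of the pass above

// Pass 2: post-processing (bloom, halos, depth-of-field would sample
// HdrScene here), then tone-map and convert back to gamma space for
// the 8-bit framebuffer.
float4 PSResolve(float2 uv : TEXCOORD0) : COLOR
{
    float3 hdr = tex2D(HdrScene, uv).rgb;
    float3 ldr = hdr / (hdr + 1.0);           // simple Reinhard-style curve
    return float4(pow(ldr, 1.0 / 2.2), 1);    // linear -> gamma
}
-----------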

The other thing is that the picture here is supposedly what it is going to look like in-game: http://www.unrealtechnology.com/screens/character_creation3.jpg

Now, back at the NV40 launch we saw some other pictures, such as http://cc.usu.edu/~roblb/unreal3.jpg

Now, it could just be me, but in the NV40 launch picture it looks like there is self-shadowing of the bump/parallax map, while in the picture on the website I can't see any. I guess I could just be blind.
 
DemoCoder said:
A generalization of the scissor test to the Z coordinate makes perfect sense and is cheap to implement, but MS's adoption of it depends on the IHVs fighting it out.
You mean IHVs fighting it out with MS, or with each other? I think in one of the GDC 2004 presentations ATI did mention something about EXT_depth_bounds_test... It seems they really like the idea, and the fact that the extension is EXT and not a proprietary NV one is very encouraging indeed.
 
Oooo, looks nice. I want to be awed by graphics again. I haven't felt that "Wow" factor since first firing up Unreal.

Stupid gradual graphics... take away my wow...
 
Okay, so can anybody sum up the shadowing in UE3?

It seems to have precalculated irradiance stuff for static geometry, depth maps for shadows cast by dynamic objects, and volumetric shadows for self-shadowing dynamic objects, right? What's the reason to mix these three? Wouldn't it make the system overly complicated and the visuals look silly?
 
Laa-Yosh said:
Okay, so can anybody sum up the shadowing in UE3?

It seems to have precalculated irradiance stuff for static geometry, depth maps for shadows cast by dynamic objects, and volumetric shadows for self-shadowing dynamic objects, right? What's the reason to mix these three? Wouldn't it make the system overly complicated and the visuals look silly?
Some guesses:

Depth shadows are being used for shadows cast by dynamic objects because they're much easier to make fuzzy and have a relatively low cost associated with them (as compared to other techniques).

Volumetric shadows for self-shadowing on dynamic objects because such shadows would naturally be sharper anyway. Since only the individual models are being multipassed for each light, there's less of a worry about fillrate saturation, as opposed to full-scene multi-passing.

Precalculated irradiance for static geometry because it's a more realistic lighting design and quite practical to implement for static geometry.
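
On the "easier to make fuzzy" point: with a shadow buffer, softening is just averaging several depth comparisons per pixel (percentage-closer filtering). A 4-tap HLSL illustration with invented names; presumably the "16X-oversampled" buffers quoted above do the same thing with 16 taps:

-----------
sampler ShadowMap;   // light-space depth written in an earlier pass
float2  TexelSize;   // 1 / shadow-map resolution

static const float2 offsets[4] = { float2(-0.5, -0.5), float2( 0.5, -0.5),
                                   float2(-0.5,  0.5), float2( 0.5,  0.5) };

float SoftShadow(float4 lightPos)   // position projected into light space
{
    float2 uv    = lightPos.xy / lightPos.w;
    float  depth = lightPos.z  / lightPos.w;  // receiver depth (bias omitted)

    float lit = 0;
    for (int i = 0; i < 4; i++)     // average four binary depth tests
        lit += (depth <= tex2D(ShadowMap, uv + offsets[i] * TexelSize).r)
               ? 0.25 : 0.0;
    return lit;   // 0 = fully shadowed, 1 = fully lit, in between = soft edge
}
-----------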
 
Ante P said:
Diplo said:
If you search around the net you can find some unofficial shaky-cam footage which looks absolutely amazing - miles ahead of any other engine I've seen. However, Tim Sweeney has stated that games based around the engine won't appear until at least 2006...

Actually, from what I understood of what Mark Rein said, it was rather that "the next Unreal game based on UnrealEngine3 won't be here 'til 2006", but it was very possible that other games using the engine would surface earlier.

They aren't going to let their engine debut in any game but their own. When was the last time you saw that happen? They could be on a deal like Valve's, where everyone else who uses the engine can't ship before HL2.
 