Some Questions For You Guys

No, it's not. It's an example of the X800 beating the 6800 in HDR using SM2.0, not SM3.0 with FP blending.

So what if it's done in PS2.0? If it can already be done at good performance using PS2.x, considering Valve is adding it to their game, what is the problem? Using PS3.0 may or may not give a performance improvement, just like we saw with the PS3.0 patch for Far Cry :rolleyes:

No one is really doubting the X800 can do HDR, but people are doubting it can do it so the effect looks impressive enough to warrant the lower poly count and resolution.

Perhaps you need to take another look at the video Valve released a few months ago showcasing HDR running on R3xx.
 
The installed user base is dominated by PS 2.0; what developer in their right mind would limit their game to 0.05% of the market? Sure, they may include another path. Far Cry shows PS 2.0 can hang with PS 3.0 both visually and in speed if instancing is in use. Tell your programmer to spend less time on [H] during his work hours and start getting their products out on the market on time ;)
 
gkar1 said:
Cryect said:
muted said:
What about rthdribl?

Isn't that an indication it can do HDR? The X800 is beating the 6800...

No, it's not. It's an example of the X800 beating the 6800 in HDR using SM2.0, not SM3.0 with FP blending.

So what if it's done in PS2.0? If it can already be done at good performance using PS2.x, considering Valve is adding it to their game, what is the problem? Using PS3.0 may or may not give a performance improvement, just like we saw with the PS3.0 patch for Far Cry :rolleyes:

[quote="Cryect
No one is really doubting the X800 can do HDR, but people are doubting it can do it so the effect looks impressive enough to warrant the lower poly count and resolution.

Perhaps you need to take another look at the video Valve released a few months ago showcasing HDR running on R3xx.

This is different from the multi-light shader ;)

No one is saying that it doesn't work on cards without SM 3.0.

Valve's demo doesn't show it working in the game itself. Did anyone download the leaked beta? I have heard it was in there.
 
I think everyone is talking about something else when they say HDR here :)

Which is no surprise, since HDR in itself only means 'High Dynamic Range', and all it requires is a data format that can store this high dynamic range. R3x0's floating point rendertargets will do fine for this.
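
To make that concrete: displaying such a target is just a matter of a shader that reads the floating point texture and compresses it into displayable range. A minimal sketch (the sampler and 'exposure' constant names are made up for illustration):

Code:
// A floating point rendertarget stores the scene in HDR; this ps_2_0
// shader maps it to the displayable [0,1] range with a simple
// exponential tone map. 'exposure' is assumed to be set by the app.
sampler2D hdrScene : register(s0);
float exposure;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float3 hdr = tex2D(hdrScene, uv).rgb;       // HDR value, may be > 1
    return float4(1 - exp(-hdr * exposure), 1); // compress into [0,1]
}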

And I think that where people say 'it requires SM3.0', they really mean: 'if you want to render translucent objects in one pass, you need hardware that supports floating point blending, and the only card that supports this at the moment happens to be the only card supporting SM3.0, namely the GeForce 6800'. If I'm not mistaken, neither floating point blending nor floating point texture filtering is tied to SM3.0, and both could be implemented on SM2.0 cards as well.

You can do the same effects on R3x0, but it requires multiple render passes, because you have to emulate floating point blending by repeatedly rendering to a texture and using that texture in the next pass to blend with the new framebuffer of that pass.
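
A rough sketch of what the shader for such an emulated blend could look like, assuming the application has copied the rendertarget so far into a texture (all names here are illustrative):

Code:
// Emulated alpha blending for hardware without FP blending (R3x0).
// The app has copied the current FP rendertarget into 'sceneTex';
// the vertex shader passes the projected position in TEXCOORD1.
sampler2D sceneTex   : register(s0); // copy of the framebuffer so far
sampler2D diffuseTex : register(s1); // the translucent object's texture

float4 main(float2 uv        : TEXCOORD0,
            float4 screenPos : TEXCOORD1) : COLOR
{
    // Clip space -> texture space, to fetch the pixel "behind" us.
    float2 screenUV = screenPos.xy / screenPos.w * float2(0.5, -0.5) + 0.5;
    float4 dst = tex2D(sceneTex, screenUV); // what blending would read
    float4 src = tex2D(diffuseTex, uv);     // what blending would write

    // The classic blend equation, done manually at shader precision.
    return src * src.a + dst * (1 - src.a);
}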

The difference is not that large, really, since you still render the same amount of geometry; you just need to change rendertargets more often, which could cause considerable overhead if you do it many times.
When no translucent objects are present, no blending is required at all, and R3x0 is just as good as any SM3.0 card in most cases.
In fact, the method described above is actually better than using the blending on NV40, since NV40 only has 16-bit FP blending.
The above method could be implemented at full 32-bit precision on NV40, and runs at 24-bit precision on R3x0 when using a 32-bit rendertarget.

Another difference is when doing HDR image-based lighting (aka envmapping and such). R3x0 doesn't support texture filtering on floating point textures (not a problem in the above scheme, since you sample texels 1:1 at every screen pixel); NV40 does, so this may look better. On R3x0 it could be implemented in a shader, of course at considerable extra processing cost.
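
If you do want filtered HDR lookups on R3x0, the shader version is just four point samples and a lerp. A sketch, assuming the application supplies the texture size in a constant (names are made up):

Code:
// Manual bilinear filtering of a floating point texture on hardware
// that can only point-sample FP formats (e.g. R3x0).
sampler2D hdrEnv : register(s0); // FP texture, point sampling only
float2 texSize;                  // texture dimensions in texels

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float2 texel = uv * texSize - 0.5;             // texel-space position
    float2 f     = frac(texel);                    // sub-texel weights
    float2 base  = (floor(texel) + 0.5) / texSize; // top-left sample
    float2 ofs   = 1.0 / texSize;                  // one-texel offset

    float4 t00 = tex2D(hdrEnv, base);
    float4 t10 = tex2D(hdrEnv, base + float2(ofs.x, 0));
    float4 t01 = tex2D(hdrEnv, base + float2(0, ofs.y));
    float4 t11 = tex2D(hdrEnv, base + float2(ofs.x, ofs.y));

    // Blend the four nearest texels, as the filtering hardware would.
    return lerp(lerp(t00, t10, f.x), lerp(t01, t11, f.x), f.y);
}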

As for the X800's value... Well, it happens to be faster at executing most shaders than the 6800, so in a lot of cases you can probably get away with some extra processing for HDR, or with less elegant implementations than in SM3.0. NVIDIA did exactly the same in the GeForce4 era: the R8500 had more features, better quality, etc., but since the GeForce4 was just so damn fast, nobody remembers the 8500 today :)
In that same light, the X800 may well be able to hold its own against the 6800 until ATi comes out with the next generation.
While I would personally buy a 6800 at this time (as a developer), I certainly don't find the X800 useless, and its extra speed may be a good reason for gamers to buy it, especially since most games don't take advantage of SM3.0 yet anyway.
 
You know, people are always worried about their PC hardware being future proof, lasting, whatever.

I ask, why do you even consider this? It is nearly pointless. 2 years ago we were using GF3 Tis and Radeon 8500s. Did they last to today, a mere 2 years later? No. You don't want those cards today. Same thing with an AMD Athlon XP Palomino or a P4 Willamette. Were millions of people blowing big wads of cash on top-of-the-line-for-that-month stuff? Yes. Did they think they were future proofing themselves? Yup.

It's hopeless. It must be human nature or something.

I've been messing with PC hardware since the Sound Blaster 2.0, Cirrus Logic video domination, the birth of 3D, whatever. You CANNOT be future proof. Your hardware is something that will be useful for basically today. So, hell, overclock the shit out of it and burn it out 10 years earlier than it's supposed to die.

WHO CARES?!!?! In our market, we aren't dealing with collectible Chevy Corvettes or Ford Mustangs here. These are state-of-the-art silicon chips that are hopelessly obsolete the moment they get out the door.

Hell, when the X800 and 6800 hit the streets, NV50 and R500 were nearly done.

Have fun for today, don't try too hard to plan for tomorrow.

Worrying about PC hardware being future proof is the universal NOOB sign :)
 
swaaye said:
You know, people are always worried about their PC hardware being future proof, lasting, whatever.

I ask, why do you even consider this? It is nearly pointless. 2 years ago we were using GF3 Tis and Radeon 8500s. Did they last to today, a mere 2 years later? No. You don't want those cards today. Same thing with an AMD Athlon XP Palomino or a P4 Willamette. Were millions of people blowing big wads of cash on top-of-the-line-for-that-month stuff? Yes. Did they think they were future proofing themselves? Yup.

It's hopeless. It must be human nature or something.

I've been messing with PC hardware since the Sound Blaster 2.0, Cirrus Logic video domination, the birth of 3D, whatever. You CANNOT be future proof. Your hardware is something that will be useful for basically today. So, hell, overclock the shit out of it and burn it out 10 years earlier than it's supposed to die.

WHO CARES?!!?! In our market, we aren't dealing with collectible Chevy Corvettes or Ford Mustangs here. These are state-of-the-art silicon chips that are hopelessly obsolete the moment they get out the door.

Hell, when the X800 and 6800 hit the streets, NV50 and R500 were nearly done.

Have fun for today, don't try too hard to plan for tomorrow.

Worrying about PC hardware being future proof is the universal NOOB sign :)

I see you are still using a 9700 ;) A good card that lasted a good 2 years. Future proofing is a good thing, but as you said, for avid gamers it's a non-issue.
 
Scali said:
You can do the same effects on R3x0, but it requires multiple render passes, because you have to emulate floating point blending by repeatedly rendering to a texture and using that texture in the next pass to blend with the new framebuffer of that pass.

The difference is not that large, really, since you still render the same amount of geometry; you just need to change rendertargets more often, which could cause considerable overhead if you do it many times.
The difference is large, because for ping-ponging you need much more fillrate. Blending only touches the pixels which are really covered. Ping-ponging requires you to either combine all pixels per pass, or calculate "dirty rectangles" and use stencil to save a good part of the blending work, but that adds overhead in another area.
 
swaaye said:
You know, people are always worried about their PC hardware being future proof, lasting, whatever.

I ask, why do you even consider this? It is nearly pointless. 2 years ago we were using GF3 Tis and Radeon 8500s. Did they last to today, a mere 2 years later? No. You don't want those cards today. Same thing with an AMD Athlon XP Palomino or a P4 Willamette. Were millions of people blowing big wads of cash on top-of-the-line-for-that-month stuff? Yes. Did they think they were future proofing themselves? Yup.

It's hopeless. It must be human nature or something.

I've been messing with PC hardware since the Sound Blaster 2.0, Cirrus Logic video domination, the birth of 3D, whatever. You CANNOT be future proof. Your hardware is something that will be useful for basically today. So, hell, overclock the shit out of it and burn it out 10 years earlier than it's supposed to die.

WHO CARES?!!?! In our market, we aren't dealing with collectible Chevy Corvettes or Ford Mustangs here. These are state-of-the-art silicon chips that are hopelessly obsolete the moment they get out the door.

Hell, when the X800 and 6800 hit the streets, NV50 and R500 were nearly done.

Have fun for today, don't try too hard to plan for tomorrow.

Worrying about PC hardware being future proof is the universal NOOB sign :)

Quoted for truth, buy what you want to use today, not in 2 years :)
 
Razor1 said:
I see you are still using a 9700 ;) A good card that lasted a good 2 years. Future proofing is a good thing, but as you said, for avid gamers it's a non-issue.


I bought the 9700PRO last summer after I busted my 8500 :) I would have had the 8500 a good while longer had that not happened.

The 9700 is probably the longest-lasting graphics processor ever developed. Like I said in another thread, how many cards from 3 years ago can run Doom 3 at 1280/High and still be basically totally playable? It's a stunning design from ATI. It obviously blew NV's mind too, judging by what happened with NV30.
 
Xmas said:
The difference is large, because for ping-ponging you need much more fillrate. Blending only touches the pixels which are really covered. Ping-ponging requires you to either combine all pixels per pass, or calculate "dirty rectangles" and use stencil to save a good part of the blending work, but that adds overhead in another area.

I render the geometry itself and read the texture containing the framebuffer, so I only touch the pixels which are really covered (see http://www.flipcode.com/cgi-bin/msg.cgi?showThread=01-16-2004&forum=iotd&id=-1 for a short description).
No extra fillrate required, just a bit of extra texture access, which is not a big deal, and the swapping of rendertargets, which could be a big deal when done often, as I said before: you have to copy the rendertarget to a texture, which is reasonably inexpensive these days, so just don't do it too often :) You could use a rectangle based on the bounding volume of the object you're going to render.
And of course, if you render the same pixel more than once in one pass, it doesn't work 100% correctly. Then again, you already had that problem with the regular blending variation if the order wasn't strictly back-to-front.
My solution to that is to subdivide all translucent meshes into convex sub-meshes. These can then be rendered by sorting the meshes back-to-front on their bounding volumes, and drawing backfaces first, then frontfaces.
The same method works fine with the fake blending described above.
The only remaining problem is intersecting translucent objects. Those would have to be handled by sorting per-poly, and even then it may not be 100% correct (intersecting polys cannot be sorted properly), and it is not really an option in realtime anyway.
So effectively, the fake blending method works about as well as real blending does.
 
Scali said:
I render the geometry itself and read the texture containing the framebuffer, so I only touch the pixels which are really covered (see http://www.flipcode.com/cgi-bin/msg.cgi?showThread=01-16-2004&forum=iotd&id=-1 for a short description).
No extra fillrate required, just a bit of extra texture access, which is not a big deal, and the swapping of rendertargets, which could be a big deal when done often, as I said before: you have to copy the rendertarget to a texture, which is reasonably inexpensive these days, so just don't do it too often :)
Well, copying the render target to a texture is basically like drawing a single-textured full-screen quad. Even if there is specialized hardware for the copy (I doubt that), it takes the same bandwidth. And it needs a pipeline flush. Of course, how big an impact this is depends on the complexity of your shaders.
 
A couple of thoughts.

I have very limited knowledge of shaders, but it doesn't seem to me that SM3.0 is going to do a lot for overall performance in the shader department. My thinking is that even in a shader-heavy game like Far Cry, the vast majority of pixel shaders have a very small instruction count. One site, 3dchips, counted the shaders (I think on one level) and ~2/3 of the pixel shaders are PS1.1 -- shaders limited to an instruction count of 8 or less. Most of the FP shaders in PS2.0 are probably quite similar, i.e. a relatively small instruction count, well under even 30-40. I gather SM3.0 is not going to do anything performance-wise for all these small shaders. So even if Far Cry has a couple of shaders ~100 instructions long, and maybe SM3.0 could help those a little, since 98% of the shaders you're running can't be helped by SM3.0, the overall speed improvement from adding SM3.0 to that couple of shaders is going to be minuscule. In the case of Far Cry, NV/Crytek knew they couldn't get any significant speed improvement by toying with the shaders, so they made other speed improvements and passed them off as SM3.0.

Even looking to the future, is there going to be a lot of use for shaders 100+ instructions long? I mean, what kind of effect is going to use a shader 200-300 instructions long? Even with the limited instruction count of 8, PS1.1 produces some very good lighting effects, like in the 3DMark2001 Nature demo. Game producers are going to be using more and more shaders for effects, but likely mostly small ones. So while SM3.0 'can' be used for performance gains in very long shaders, the rub is that these long shaders are likely only ever going to make up a fraction of the shader effects in use. This scenario doesn't translate into SM3.0 having any significant performance advantage over SM2.0b -- even in the fairly long run.
 
Blastman, we still have a long way to go until we get photorealistic surfaces. There's still a lot to do even for relatively simple materials. Currently, shaders of more than 50 instructions are quite rare, but I think they will be common two years from now.
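
For perspective, a basic PS1.1-class effect really is only a handful of instructions. A rough sketch (written as HLSL for readability; all names are made up):

Code:
// A PS1.1-class effect: per-pixel diffuse bump lighting. This compiles
// to only a few texture and arithmetic instructions; the light vector
// is assumed to be normalized by the vertex shader, since PS1.1 can't
// normalize per-pixel.
sampler2D normalMap  : register(s0);
sampler2D diffuseMap : register(s1);

float4 main(float2 uv       : TEXCOORD0,
            float3 lightDir : TEXCOORD1) : COLOR // tangent-space light
{
    float3 n   = tex2D(normalMap, uv).xyz * 2 - 1; // unpack the normal
    float  ndl = saturate(dot(n, lightDir));       // N.L, clamped
    return tex2D(diffuseMap, uv) * ndl;            // lit albedo
}

Add shadowing, a decent specular model and HDR environment lookups on top of that, and you blow past 50 instructions very quickly.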
 
Xmas said:
Well, copying the render target to a texture is basically like drawing a single-textured full-screen quad.

True, but I suppose it is still better than what you suggested, since your suggestion required reading two textures and writing the result to the rendertarget. And your other suggestion required a per-pixel stencil test.
The suggestion I made does the minimum amount of work: just a read and write of the pixels, with no extra bandwidth required for a second texture or for stenciling (the blending is done while shading, so the overhead of reading the texture and blending is usually covered mostly by the rest of the shader).

Of course we can probably find pathological cases where either method is bad. One possible disadvantage of my method, as mentioned in the link I posted earlier, is with very high-poly models, where the vertex processing and triangle setup overhead will nullify the gain from rendering only the exact pixels without stenciling.

But in my average cases, this method works fine.
 
Xmas said:
Blastman, we still have a long way to go until we get photorealistic surfaces. There's still a lot to do even for relatively simple materials. Currently, shaders of more than 50 instructions are quite rare, but I think they will be common two years from now.

This presentation from ATi might be interesting: http://www2.ati.com/developer/gdc/GDC04_Yee_Hart_Preetham.pdf

It compares the average game with the average Shrek scene. Gives you an idea of how far games still are from movies.
Then again, games have to be realtime... Hardware is getting quite powerful, and for non-realtime rendering it is already quite useful. I believe Maya has a hardware-accelerated offline renderer now, and NVIDIA has its Gelato.
 