
As for the rest of the engines, I think that SSAA is virtually compatible with any rendering technique (render at 2x2 the size, then shrink the image). Of course, this is a brute-force method, but the image quality is awesome.
That's only going to give OGSS, which won't give the best results.
 
StretchRect() ?

Well, in D3D9, yes, but BR2 is D3D8, so I used CopyRects().

That's only going to give OGSS, which won't give the best results.

Yes, you are right, but it's still better than no AA.

In BR2, SSAA has some advantages with transparent textures, because the game does not use alpha blending, only alpha testing (and the multisample/supersample transparency AA from the NVIDIA control panel does not work :()
 
StretchRect() ?
Yes. You can call StretchRect() from a render target to a texture. But you can't create a texture with multisampling, so this is your only way to get MSAA onto a texture.

Initially, DX9 specified that StretchRect() could only copy non-AA surfaces to non-AA surfaces and AA surfaces to AA surfaces. That was quickly remedied in DX9a, I believe.
 
but OpenGL has since become so incredibly outdated and irrelevant that there's simply no argument anymore.

I would claim that it is still highly relevant for Linux, OS X, and 3D-capable mobile phone platforms other than Windows Mobile. Perhaps its 'outdatedness' somewhat enhances its cross-platform portability by virtue of demanding less of the OS driver model.
 
I would claim that it is still highly relevant for Linux, OS X, and 3D-capable mobile phone platforms other than Windows Mobile. Perhaps its 'outdatedness' somewhat enhances its cross-platform portability by virtue of demanding less of the OS driver model.
Certainly yes it's relevant on platforms where you have no other options :) That said, in discussion on what a modern, efficient rendering pipeline looks like, the OpenGL API is not relevant... that's all I was saying.
 
Just curious, how do you think the z-buffer affects image quality? And what does it have to do with clipping? I think you should really look up what a z-buffer actually does.
A z-buffer stores the depth values of a scene. It either does not give enough precision, or game developers simply don't feel like making the most of it.

You rarely see games with huge draw distance above or below.
 
A z-buffer stores the depth values of a scene. It either does not give enough precision, or game developers simply don't feel like making the most of it.

You rarely see games with huge draw distance above or below.
So how would you determine whether an object was in front or behind another that was previously drawn?
staying with the z-buffer (they seriously need to ditch it, and just use HW programmable clipping with 6 boundaries that can be of any size)
Clip planes won't work in place of Z buffers.
 
A z-buffer stores the depth values of a scene. It either does not give enough precision, or game developers simply don't feel like making the most of it.

You rarely see games with huge draw distance above or below.

The problem with huge draw distances is mainly a performance problem, not a z-buffer limitation. As the game world is (usually) a 3D space, the volume covered by the view cone rises as viewRange^3. The further you go, the more new geometry becomes visible for each additional meter of draw distance you add. At a 100 kilometer range, a single extra meter of view distance would add more newly visible objects to the camera view than all the objects inside the first kilometer. The cost of processing all the scene data becomes a huge obstacle as the view distance rises.

With current graphics processing performance, z-buffer precision is enough for almost all game types. The only game types I can think of where z-buffer precision is a limiting factor are space simulations, where you can fly near highly detailed space ships / stations (and see inside the windows) and at the same time see the whole universe at proper scale. However, even this scenario can be handled with current z-buffer precision by rendering the scene in multiple passes (background, nearby planets, all near objects), as the distance between objects in space is so large (it's easy to partition the scene into depth layers).

Personally, I would like to have access to a 64-bit z-buffer in the future (when we have the performance to model all the leaves on the ground as polygons, etc.). A full 32 bits is also a big improvement over the usual 24-bit + 8-bit stencil combination.
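The cubic growth described above can be made concrete with a quick back-of-the-envelope calculation (a hypothetical sketch assuming a uniform object density; the `cone_factor` constant is made up, but any constant gives the same conclusion):

```python
# Rough model: objects visible inside a view cone of a given range,
# assuming a uniform density of objects per cubic meter.
# The cone's volume -- and hence the object count -- grows as range^3.

def objects_in_view(view_range_m, density_per_m3=1e-6, cone_factor=0.5):
    """Objects inside a view cone of the given range (toy model)."""
    volume = cone_factor * view_range_m ** 3  # volume ~ range^3
    return density_per_m3 * volume

first_km = objects_in_view(1_000)
one_more_m = objects_in_view(100_001) - objects_in_view(100_000)

# At 100 km, one extra meter of draw distance uncovers more new
# objects than everything within the first kilometer.
print(f"objects in first km:    {first_km:,.0f}")
print(f"added by 1 m at 100 km: {one_more_m:,.0f}")
assert one_more_m > first_km
```

The shell of new volume at range r has thickness 1 m and area proportional to r^2, so at 100 km it is roughly 3 * (100,000)^2 / (1,000)^3 = 30x the entire first kilometer's volume.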
 
So how would you determine whether an object was in front or behind another that was previously drawn?

Clip planes won't work in place of Z buffers.
Fully programmable clipping would work in its place.

The scene could be rendered and then closed off by setting the clip planes to whatever sizes are necessary for the scene. Or, if that's not possible, you could do it backwards (i.e., define a clip space as large as you think the scene may be, and then put the scene inside it).

It would have a huge amount of overdraw depending upon how complex the scene is, but it would look better and wouldn't take up that much more performance.

It would be easier to just render the scene and then enclose it.
 
Erm, 2008 IQ, you're really confused on the clipping bit.
The point of the z-buffer is to determine the closest fragment to the viewer at a particular pixel. Clipping planes don't help; if you want to go down a non-z-buffer route, you essentially have to move to a geometric solution. Two you may have heard of are the BSP tree and Warnock subdivision.

Both have massive issues with complex dynamic scenes; by the time you had implemented the hardware to do it, your z-buffer system could be so fast and precise as to make it the obvious choice.

Also, when you try to extend these systems to partial coverage for AA or translucency, you hit a brick wall. Extensions of the z-buffer (namely the A-buffer and MSAA) have already solved these problems to an acceptable standard (for example, watch a Pixar movie and tell me the IQ isn't good enough).
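For concreteness, the per-pixel comparison that the z-buffer performs can be sketched in a few lines (a toy software version, not any particular hardware's implementation):

```python
# Toy z-buffer: per pixel, keep the fragment closest to the viewer.
# This is what clip planes cannot replace: visibility is resolved
# independently at every pixel, regardless of the order in which
# overlapping triangles happen to be drawn.

W, H = 4, 4
zbuf = [[float("inf")] * W for _ in range(H)]   # depth per pixel
color = [[None] * W for _ in range(H)]          # color per pixel

def draw_fragment(x, y, depth, col):
    """Write the fragment only if it is nearer than what's stored."""
    if depth < zbuf[y][x]:
        zbuf[y][x] = depth
        color[y][x] = col

# Draw a far fragment first, then a near one at the same pixel ...
draw_fragment(1, 1, depth=0.9, col="far")
draw_fragment(1, 1, depth=0.2, col="near")
# ... then try a middle-distance fragment: it fails the depth test.
draw_fragment(1, 1, depth=0.5, col="middle")

print(color[1][1])  # -> near
```

Note that correctness here doesn't depend on draw order, which is exactly the property that geometric schemes struggle to retain for dynamic scenes.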
 
It would have a huge amount of overdraw depending upon the how complex the scene is, but it would look better and wouldn't take up that much more performance.

It would be easier to just render the scene and then enclose it.
"huge amount of overdraw" and "wouldn't take up that much performance" are contradictory statements, even if your scheme could work, which I doubt.
 
2008 IQ, we already pointed you to several papers and threads in your dedicated Z-buffer whining thread that demonstrated how a complementary 32-bit float z-buffer provides as much precision as you need (i.e. 32-bit vertices are now the limitation... and not much of one at that). fp32 depth buffers are all you practically need for now, and they are a requirement (and often a default) of D3D10 hardware.

As has been mentioned, the problem with huge draw distances is one of LOD not Z-buffer precision. The former is a difficult problem to solve in general, but progress is being made.

... but of course if Microsoft wasn't constantly holding the industry back here, I'm sure we'd have infinite precision everything by now. ;)
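To get a feel for the numbers, here is a small, simplified calculation (my own illustration, not taken from the linked paper) comparing the depth spacing of fp16 versus fp32 storage near a distant far plane, using only the mantissa widths of the two formats:

```python
import math

def ulp(value, mantissa_bits):
    """Spacing between adjacent representable values near `value`,
    for a float format with the given number of mantissa bits."""
    exp = math.floor(math.log2(abs(value)))
    return 2.0 ** (exp - mantissa_bits)

# A w-buffer stores eye-space depth directly.  Take a far plane at
# 100 km; near it, fp16 (10 explicit mantissa bits) vs fp32
# (23 bits) give wildly different depth spacing:
far = 100_000.0  # meters
print(f"fp16 w-buffer step near far plane: {ulp(far, 10):.1f} m")   # -> 64.0 m
print(f"fp32 w-buffer step near far plane: {ulp(far, 23):.4f} m")   # -> 0.0078 m
```

So two surfaces up to ~64 m apart can land in the same fp16 depth value at that range, which is why an fp16 w-buffer is far below what a complementary fp32 z-buffer delivers (the sketch ignores the extra distribution benefits of the complementary/reversed mapping itself).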
 
2008 IQ, we already pointed you to several papers and threads in your dedicated Z-buffer whining thread that demonstrated how a complementary 32-bit float z-buffer provides as much precision as you need (i.e. 32-bit vertices are now the limitation... and not much of one at that). fp32 depth buffers are all you practically need for now, and they are a requirement (and often a default) of D3D10 hardware.

As has been mentioned, the problem with huge draw distances is one of LOD not Z-buffer precision. The former is a difficult problem to solve in general, but progress is being made.

... but of course if Microsoft wasn't constantly holding the industry back here, I'm sure we'd have infinite precision everything by now. ;)
Thanks for the kind reply=]

There are a few reasons why I think the z-buffer isn't enough, or isn't being taken advantage of. First, Rayman 3 used the w-buffer, and its bonus stages were quite unlike anything I've ever seen done with the z-buffer.

In PoP3D, IIRC, there was at least one area that was huge; it looked like it was 1,000 ft high from the bottom, and at the top it looked like there was 1,000 ft below you.

There's this game on the M2 called Battle Tryst (by Konami, in 1998) which has draw distances so huge and open that I haven't seen anything quite like it in a game that uses the z-buffer. The M2 uses an fp16 w-buffer, according to its specs on Wikipedia.

Fourth, in some of the screenshots I've seen of BloodRayne 2, some scenes have a huge view upwards that I haven't seen in many other games; I heard it had z-fighting issues, which suggests that those scenes where you can see really high up can't be done easily with the z-buffer.

Finally, I can't figure out why more PS2 games have bigger, more open environments than DX10 games. Look at GameSpot's screenshots for the PS2 version of Mercenaries 2 and then look at the PC version. The PS2 version looks much better in terms of draw distance and far-away precision.
 
Sorry for the snideness, but really, you have to believe facts when they are pointed out to you :)

Well there's a few things to note, the most relevant of which being that a huge "sense of scale" is an artistic thing, not a technical one. The only technical relevance is being able to uniquely resolve depths, and for that a fp32 complementary z-buffer is more than enough. IIRC from the paper that I linked, it's at least as good as a 24-bit fixed-point w-buffer if not better. It's certainly much better than a fp16 w-buffer (floating point w-buffer seems like a huge waste...).

So again, there is objectively no technical problem here. Any perceived differences that you're seeing are either artistic, or technically out-of-date.

And as far as games go, I think you're just being selective here. Crysis, Far Cry 2 and a number of other recent games all put the draw distances and senses of scale in the games that you mentioned to shame, so I'm not sure I buy your argument there (even from an artistic point of view).

[Edit] And I'm not sure what you're saying with Mercs 2... I checked out the IGN images that you linked and the PS2 version does not have a single screenshot that shows even a moderate view range let alone a big one. Perhaps you can link me specific examples?
 
As for the rest of the engines, I think that SSAA is virtually compatible with any rendering technique (render at 2x2 the size, then shrink the image).

According to nHancer, SSAA is D3D-only; you can't force it on OpenGL games.

There are a few reasons why I think the z-buffer isn't enough, or isn't being taken advantage of. First, Rayman 3 used the w-buffer, and its bonus stages were quite unlike anything I've ever seen done with the z-buffer.

In PoP3D, IIRC, there was at least one area that was huge; it looked like it was 1,000 ft high from the bottom, and at the top it looked like there was 1,000 ft below you.

There's this game on the M2 called Battle Tryst (by Konami, in 1998) which has draw distances so huge and open that I haven't seen anything quite like it in a game that uses the z-buffer. The M2 uses an fp16 w-buffer, according to its specs on Wikipedia.

Independence War has a draw distance of, well, 384,403 km.
How do I know? Well, I can see the Moon from Earth's orbit and can actually fly into it if I so wish.
I set my speed to 3,000,000 m/s and it took just under 2 minutes, just to check it was to scale.
 
According to nHancer, SSAA is D3D-only; you can't force it on OpenGL games.
I think NVIDIA could do it in OGL if they wanted to, but they aren't supporting SSAA any more.

I can't think of a reason why SSAA would not be 100% compatible unless you exceed the API's resolution limit. Oh, what is the limit in OGL?
 
...
Fourth, in some of the screenshots I've seen of BloodRayne 2, some scenes have a huge view upwards that I haven't seen in many other games; I heard it had z-fighting issues, which suggests that those scenes where you can see really high up can't be done easily with the z-buffer.
...

From my debug version of the BR2 FSAA Patch:

IDirect3D8::CreateDevice(Adapter: 0, DeviceType: 1, hFocusWindow: 1115026, BehaviorFlags: 66|D3DCREATE_FPU_PRESERVE|D3DCREATE_HARDWARE_VERTEXPROCESSING, PresentationParameters: (0x00A013F4 BackBufferWidth: 1920, BackBufferHeight: 1200, BackBufferFormat: D3DFMT_X8R8G8B8, BackBufferCount: 2, MultiSampleType: 4, SwapEffect: D3DSWAPEFFECT_DISCARD, hDeviceWindow: 1115026, Windowed: false, EnableAutoDepthStencil: true, AutoDepthStencilFormat: D3DFMT_D24S8, Flags: NONE, FullScreen_RefreshRateInHz: D3DPRESENT_RATE_DEFAULT, FullScreen_PresentationInterval: D3DPRESENT_INTERVAL_IMMEDIATE), ReturnedDeviceInterface: 0x00000001)

As you can see, BR2 uses D3DFMT_D24S8. The view distance is great, yes. Overall, this game does a lot of weird things with the D3D8 API that I still do not understand.

According to nHancer, SSAA is D3D-only; you can't force it on OpenGL games.

That is why I said 'virtually': in theory, you as the programmer could render your scene 'internally' to a larger buffer, and then shrink it at the end with an averaging shader. This is exactly what I'm trying to add to my BR2 patch, to let ATi users enjoy some sort of SSAA (yeah, I know about the ATI texture size limit).
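The 'render larger, then average' idea amounts to a 2x2 box downsample. A minimal CPU sketch of just the averaging step (the actual patch would do this in a pixel shader; this is only to illustrate the math on a grayscale image):

```python
def downsample_2x2(img):
    """Average each 2x2 block of a (2H x 2W) grayscale image into
    one output pixel -- the 'shrink' step of 2x2 SSAA."""
    h, w = len(img) // 2, len(img[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = (img[2*y][2*x] + img[2*y][2*x+1] +
                         img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
    return out

# A hard black/white edge in the supersampled image becomes an
# intermediate gray after averaging, which is the anti-aliasing:
big = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
]
print(downsample_2x2(big))  # -> [[0.25, 1.0], [0.25, 1.0]]
```

Because alpha testing resolves per sample before the average, this is also why SSAA smooths alpha-tested cutouts where plain MSAA does not.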
 
That is why I said 'virtually': in theory, you as the programmer could render your scene 'internally' to a larger buffer, and then shrink it at the end with an averaging shader. This is exactly what I'm trying to add to my BR2 patch, to let ATi users enjoy some sort of SSAA (yeah, I know about the ATI texture size limit).

I'm curious, do you have independent control over FOV and rendering resolution for BR2?
 
I'm curious, do you have independent control over FOV and rendering resolution for BR2?

Yes, I have full control over the rendering resolution and FOV, but the game clips the 'objects' outside of the 4:3 area before sending them to the D3D8 pipeline. I'm still trying to fix this with another 'technique'.
 