occlusion culling benchmarks?

3dcgi

Does anyone know of a review that benchmarks the 8500 and GF4 with and without occlusion culling? Hierarchical Z in the case of the 8500. Heck, B3D might have done this at some point, but I don't remember. I'm wondering how much of a performance improvement hierarchical Z, etc. really gives, since cards like Parhelia and Trident's XP4 don't seem to perform occlusion culling.
 
Try Villagemark. I think it'd be pretty fair for a test between IMRs.
 
DaveBaumann said:
http://www.beyond3d.com/reviews/ati/radeon8500p2/index3.php

You may also wish to look at Kert's Radeon article at Tech Report. I'm not sure anyone has done the same for GF4 since it's difficult to disable the overdraw removal routines.

Difficult?! :eek:

You can't disable them at all! The main components responsible still continue to work!
 
Actually, it was very easy to turn the overdraw removal off on a GF3 at least (not sure about 4) - you just run it in 16bit.
 
DaveBaumann said:
Actually, it was very easy to turn the overdraw removal off on a GF3 at least (not sure about 4) - you just run it in 16bit.

Dunno about GF3, but in the benchmarks I've seen they weren't able to disable it, since the main components responsible continued functioning.

But then again, they weren't using 16bit color...
 
Dave:
Not sure about the PC parts, but overdraw removal does work in 16bit on the NV2X.
However, it does affect how efficiently ZCompression works, to the point where in most cases you aren't getting any useful compression. The same is true for enabling Quincunx.

There are switches in the hardware that control the functions of early Z, but I have no idea how you'd set them on a PC. To completely disable ZCompression you have to create the surface differently.
 
I haven't tested it recently, ERP, so I don't know what they are doing in newer drivers. However, I know that when GF3 was first around there were a couple of render-order benchmarks floating about - when the GF3 rendered in 32bit it showed a great increase with front-to-back ordering (and a smaller increase with random ordering), but in 16bit there was little to no difference between the render orders, suggesting the early Z check was turned off.
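
A minimal sketch of that sort of render-order test might look like this (OpenGL 1.x with GLUT; the layer count, resolution and timing method are illustrative choices on my part, not the settings of the benchmarks mentioned above):

/* Render-order overdraw test: draws full-screen layers front-to-back and
   back-to-front and compares the times.  A large ratio suggests overdraw
   removal (early Z) is doing its job. */
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

#define LAYERS 32   /* full-screen layers drawn per frame */
#define FRAMES 200  /* frames timed per ordering */

static int front_to_back = 1;

static void draw_frame(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (i = 0; i < LAYERS; i++) {
        /* front-to-back: nearest layer first, so early Z can reject the rest.
           back-to-front: every layer passes the depth test (worst-case overdraw). */
        float z = front_to_back ? (float)i / LAYERS
                                : (float)(LAYERS - 1 - i) / LAYERS;
        glBegin(GL_QUADS);
        glVertex3f(-1.0f, -1.0f, z);
        glVertex3f( 1.0f, -1.0f, z);
        glVertex3f( 1.0f,  1.0f, z);
        glVertex3f(-1.0f,  1.0f, z);
        glEnd();
    }
    glFinish(); /* block until the GPU has actually finished */
}

static double time_ordering(int f2b)
{
    int i, start;
    front_to_back = f2b;
    draw_frame(); /* warm-up */
    start = glutGet(GLUT_ELAPSED_TIME);
    for (i = 0; i < FRAMES; i++)
        draw_frame();
    return (glutGet(GLUT_ELAPSED_TIME) - start) / 1000.0;
}

static void display(void)
{
    double f2b = time_ordering(1);
    double b2f = time_ordering(0);
    /* ratio near 1.0 suggests the overdraw removal isn't helping */
    printf("front-to-back: %.3fs  back-to-front: %.3fs  ratio: %.2fx\n",
           f2b, b2f, b2f / f2b);
    exit(0);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(1024, 768);
    glutCreateWindow("render-order overdraw test");
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Run it with the desktop set to 16bit and again at 32bit colour to compare the two cases being discussed here.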
 
It's possible it could just have been the Z compression.
Draw order dictates to some extent how efficiently it works.
I can't imagine why they'd turn off the early Z in 16 bit mode <shrug>.
 
My assumption was always that early Z checks in 16bit would cost quite a lot of bandwidth and potentially stall the pipeline more than just rendering in 16bit - the savings come in at 32bit because it's more bandwidth dependent.
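
To put rough numbers on that (purely illustrative arithmetic, assuming the usual formats of the era: 16bit colour with a 16bit Z buffer versus 32bit colour with a 24bit Z stored in a 32bit word):

/* Back-of-the-envelope only: assumed per-pixel framebuffer sizes, not
   measured figures from any benchmark. */
#include <stdio.h>

int main(void)
{
    const int bytes16 = 2 /* 16bit colour */ + 2 /* 16bit Z */;
    const int bytes32 = 4 /* 32bit colour */ + 4 /* 24bit Z in a 32bit word */;

    /* rejecting an occluded pixel before it is shaded saves roughly its
       colour and Z writes; the early Z read costs about the same either way */
    printf("~%d bytes saved per rejected pixel in 16bit\n", bytes16);
    printf("~%d bytes saved per rejected pixel in 32bit\n", bytes32);
    return 0;
}

So each rejected pixel is worth roughly twice as much bandwidth at 32bit, which would fit the pattern Dave describes.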
 
That's certainly possible.
NVidia could have noted it running slower with early Z in PC applications.
I haven't personally seen any cases where the early Z is slower.
 
DaveBaumann said:
I haven't tested it recently, ERP, so I don't know what they are doing in newer drivers. However, I know that when GF3 was first around there were a couple of render-order benchmarks floating about - when the GF3 rendered in 32bit it showed a great increase with front-to-back ordering (and a smaller increase with random ordering), but in 16bit there was little to no difference between the render orders, suggesting the early Z check was turned off.

That's actually quite a good proof of why 16bit color was completely useless with the GF3 generation of hardware, as many sites noted.
 