Rightmark 3D

demalion

Veteran
I've been waiting for this to be translated from Russian.

Any thoughts on the benchmarking results, and/or the individual tests?

I like the "Pixel Filling" test, since it offers the options for comparison I thought facilitated anisotropic filtering quality comparison (though I still wish Xmas's application had offered all of these options for worst case analysis).
 
The test clearly shows that the Early-Z units of the GF4 Ti cards do not work at all! They were not able to switch on the Early-Z functionality, not even via the registry.

At 3dcenter.de they had a nice thread clearly showing that too.


So now, how will the GFfx perform when its Early-Z units (hopefully) work?
 
I think what we're seeing here is yet another Digit-Life screwup. Previous benchmarks elsewhere have shown that the GeForce4 definitely shows improvement when rendering front-to-back. For example, here are some tests on my GeForce4 through GLExtreme:

Overdraw factor 3, back to front: 329.79 fps
Overdraw factor 3, front to back: 685.52 fps
Overdraw factor 3, random order: 475.36 fps

Overdraw factor 8, back to front: 127.10 fps
Overdraw factor 8, front to back: 400.36 fps
Overdraw factor 8, random order: 262.76 fps

Yes, the early occlusion detection is indeed working, at least in OpenGL. I don't think the benchmark results differ enough between OpenGL and Direct3D for the occlusion detection to be missing in D3D.

The decent Villagemark (D3D) scores of the GeForce4 line also seem to refute this.
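
For what it's worth, here's a quick sanity check on those figures (a minimal Python sketch, using only the fps values quoted above):

```python
# Sanity check on the GLExtreme numbers above: if early occlusion
# detection works, front-to-back should pull further ahead of
# back-to-front as overdraw grows, because more occluded fragments
# get rejected before they are ever shaded.
results = {  # overdraw factor -> fps, as quoted above
    3: {"back_to_front": 329.79, "front_to_back": 685.52, "random": 475.36},
    8: {"back_to_front": 127.10, "front_to_back": 400.36, "random": 262.76},
}

for overdraw, fps in results.items():
    speedup = fps["front_to_back"] / fps["back_to_front"]
    print(f"overdraw {overdraw}: front-to-back = {speedup:.2f}x back-to-front")

# overdraw 3: front-to-back = 2.08x back-to-front
# overdraw 8: front-to-back = 3.15x back-to-front
```

The gap widening with overdraw is exactly what you'd expect if occluded fragments really are being culled before shading.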
 
Chalnoth said:
I think what we're seeing here is yet another Digit-Life screwup.
I don't think so.

Those who can read German might find this link interesting:
http://www.forum-3dcenter.net/vbulletin/showthread.php?s=&threadid=44301

Of course front-to-back sorting increases performance, just as it does on any card except a TBDR, because it saves a framebuffer write (plus a read, if alpha blending is enabled) and a Z-buffer write for every pixel that is rejected by the 'late' Z test.

But tests clearly show that the GF3 can reject 16 pixels per clock through early Z, while the GF4 Ti can't. The GF4 MX seems to reject up to 8 pixels per clock (2 pipes). That's with AA disabled; if AA is enabled, there's no early Z test at all, supposedly because the same Z-compare units that do the early-Z check are also used for the per-sample Z test.
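
To put rough numbers on that (a back-of-the-envelope Python sketch; the byte counts and the 200 MHz core clock of a stock GeForce3 are my assumptions):

```python
# Back-of-the-envelope on the bandwidth argument above.
# Assumptions: 32-bit colour, 32-bit (24/8) Z, alpha blending off.
COLOR_WRITE = 4  # bytes: framebuffer write skipped for a rejected pixel
Z_WRITE = 4      # bytes: Z-buffer write skipped for a rejected pixel

# 'Late' Z rejection (any card): the fragment is still textured and
# shaded, but these writes are saved (plus a 4-byte framebuffer read
# per pixel if alpha blending were enabled).
late_z_saving = COLOR_WRITE + Z_WRITE  # = 8 bytes per occluded pixel

# Early Z goes further: the fragment never enters the pixel pipeline,
# and per the figures above a GF3 can discard 16 such pixels per
# clock -- far more than its 4 pipes can actually draw.
CORE_CLOCK = 200e6    # Hz, original GeForce3 (assumption)
REJECTED_PER_CLOCK = 16
DRAWN_PER_CLOCK = 4   # pixel pipelines

print(f"late-Z saving: {late_z_saving} bytes per occluded pixel")
print(f"peak early-Z rejection: {CORE_CLOCK * REJECTED_PER_CLOCK / 1e9:.1f} Gpixels/s")
print(f"peak fill rate:         {CORE_CLOCK * DRAWN_PER_CLOCK / 1e9:.1f} Gpixels/s")
# -> 3.2 Gpixels/s rejected vs 0.8 Gpixels/s drawn
```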
 
It's possible that early-Z can get disabled by certain pipeline states. For example, writing to the depth register from a shader, or turning on alpha test, or perhaps other combinations of states. This might be different on the GF4 than GF3.
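
Conceptually, the driver-side decision might look something like this (a toy Python sketch; the parameter names are invented for illustration and don't correspond to any real driver logic):

```python
# Toy model, purely illustrative -- not any real API or driver.
def early_z_allowed(shader_writes_depth: bool,
                    alpha_test: bool,
                    aa_enabled: bool) -> bool:
    """Can fragments be Z-rejected before the pixel shader runs?"""
    if shader_writes_depth:
        # Final depth isn't known until the shader has run, so there
        # is nothing to test early against the Z-buffer.
        return False
    if alpha_test:
        # A fragment that fails alpha test must not update Z, but an
        # early-Z stage would already have committed its Z write.
        return False
    if aa_enabled:
        # Per the post above: with AA on, the Z-compare units are
        # needed for per-sample tests, so early rejection is off.
        return False
    return True

print(early_z_allowed(False, False, False))  # True: early-Z active
print(early_z_allowed(True, False, False))   # False: shader writes depth
```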

Still, if the GF4 is broken, how can it do so well vs. the GF3 Ultra?
 
In what benchmarks? Perhaps someone could provide a comparison with recent drivers (I assume you mean GF 3 Ti 500? I didn't look up clock speeds).

EDIT: Hmm, the question is about Early Z. :LOL:

Even so, it does seem odd that it would happen in one API and not the other...perhaps it is a driver issue with the DX 9 drivers? That would certainly be understandable...seems like the most reasonable explanation at this time.

Say, could someone on the staff here try and see if they could get access to the Rightmark in its current form? I'd think they'd be interesting in having a site like this being an early adopter.
 
Say, could someone on the staff here try and see if they could get access to the Rightmark in its current form? I'd think they'd be interesting in having a site like this being an early adopter.
A beta should be released soon, but they won't give it to anyone before that.
 
I think it's rather weird that you guys are discussing the GF4 here, when the R9500 is in no better a position... I'll let the following quote speak for itself:
HyperZ does work in all RADEON 9700 and RADEON 9500 PRO chips and performs perfectly. But it is not enabled on all RADEON 9500 chips! It's disabled, probably at the driver level again. Why? Maybe to create an additional gap in real applications? But the more believable explanation is that such dies have defects, which is why they are used for RADEON 9500 cards. Besides, to lower performance relative to the RADEON 9500 PRO and 9700, such dies have half of their pipelines disabled.

In the first part we discussed turning the RADEON 9500 into a RADEON 9700 (9500 PRO) with RivaTuner, i.e. at the software level. Subsequent events showed that it isn't as smooth as we had hoped. First of all, not all R9500 chips work without artefacts after the modification: about 28% have bugs which point to problems in the HyperZ unit. Isn't that the unit that controls the HSR? I think ATI disables the crippled HSR unit at the software level along with half of the pipelines, and then uses such chips as RADEON 9500.
 
Why would we discuss the 9500 non pro? There is no mystery. What they propose as an explanation has been confirmed by comments elsewhere.

This is not nVidia versus ATI. This issue does not change the performance the GF 4 actually gets (EDIT: and, in fact, might even point toward the possibility of improved performance for it).
 
demalion said:
Why would we discuss the 9500 non pro? There is no mystery. What they propose as an explanation has been confirmed by comments elsewhere.

This is not nVidia versus ATI. This issue does not change the performance the GF 4 actually gets (EDIT: and, in fact, might even point toward the possibility of improved performance for it).
Oh... well then, I'll just hide in the corner now :)
 
alexsok said:
Say, could someone on the staff here try and see if they could get access to the Rightmark in its current form? I'd think they'd be interesting in having a site like this being an early adopter.
A beta should be released soon, but they won't give it to anyone before that.

I still think Dave, for example, could ask (if he thinks it is a good idea). It can't hurt, and I think it would be in their interest. They could just stipulate that he not release an article using it, if they want to concentrate the site-traffic benefit on certain websites before it is officially released. It would be handy to let someone respected, who would understand it being a beta, develop a positive impression beforehand.

Kristof has already expressed being amenable to being paid to physically abuse Dave... I wonder how much a prod would cost?
 
Not exactly 100% on-topic, but... while nosing through ATI's site for some other info, the comments about the Hyper-Z business earlier in this thread sprang to mind when I saw this:
HYPERZ™ II
Lossless Z-Buffer Compression and Fast Z-Buffer Clear reduce memory bandwidth by up to 25%
Oops! No wonder they've been switching it off! :D
 