Vulpine GLmark????

Please don't take this as a fanboy rant, but I am really starting to wonder about the Vulpine benchmark.

This is a decent OpenGL benchmark and it allows for some good results, but I think they are skewed. Their primary development platform was a GF3, so they used custom NV calls to handle the OpenGL pixel/vertex shaders. Which is fine, because two years ago when they were developing this benchmark it was the only card to support PS/VS. But benchmarking a non-NV card on it will not use that card's own OpenGL PS/VS calls (Matrox, P10, 8500+ cards), so those cards get a lower score than their GF3/4 counterparts. For example, consider this:

http://www.tech-report.com/reviews/2002q3/radeon-9000pro/index.x?pg=8

Notice how all of the 8500/9000 scores are in the same ballpark (60ish) at the two lower resolutions, indicating a CPU or some other limitation, and yet their GF3/4 counterparts are almost double those scores. It's not until memory bandwidth bottlenecks take over that we see any shift. Even the slower 7500 is able to keep up at the lowest resolution! Something is not right there. Looking at any other OpenGL game benchmark does not show this; in fact it's what we would expect, with the ATI cards much closer to their NV counterparts.

For a benchmark this is not the most ideal situation, but I suppose I can understand it given that they don't update it often. Ideally you want your benchmark to be up to date and reflect the new cards accurately. However, it has been a year or so now and they still have not bothered to fix this issue. Until today/yesterday, that is, when they announced a patch to support the NV30 only?

I guess I don't understand why they don't patch their benchmark to use all available OpenGL calls and give a more accurate assessment of current video cards, versus supporting a brand new card that's not even out yet. Please don't treat this as a fanboy post. I made it based off the info I have. I could be (and usually am) wrong in my so-called "facts", and if so please feel free to correct me. But did I miss something here? Have they slipped in a patch to address this that's not listed on their site: http://www.vulpine.de/demos_benchmark.html

Or is it driver issues that are holding back the ATI scores? Anybody got some good answers why (and I don't want to hear "because NV is evil, has them in their pockets", etc.)...
 
Well, I really have no idea about the ATI scores, but it looks like the new "NV30 patch" is only there so that the program doesn't crash while checking extensions when an NV30 is used (the NV30 has the longest extension string yet).
 
Good points. Again, I'm not a pro with these things; that's why I thought to post it here to get more info about it.
 
The reason NV cards perform so much better is that this benchmark uses GL_NV_vertex_array_range but doesn't support GL_ATI_vertex_array_object.
So while the NV cards will have all their vertices in video memory, the ATI cards and others will have theirs in system memory, which unavoidably means a lot of AGP traffic.
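
Roughly, the two paths look like this. This is only a sketch, not Vulpine's actual code: it assumes the extension entry points have already been fetched with wglGetProcAddress at startup, and the vertex data and function names are placeholders.

/* Sketch only: needs <GL/glext.h> for the extension tokens/typedefs; the
 * function pointers below are assumed to be fetched via wglGetProcAddress. */
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

static PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV;
static PFNGLNEWOBJECTBUFFERATIPROC glNewObjectBufferATI;
static PFNGLARRAYOBJECTATIPROC     glArrayObjectATI;
static void *(*wglAllocateMemoryNV)(GLsizei size, GLfloat readfreq,
                                    GLfloat writefreq, GLfloat priority);

#define NUM_VERTS 1024
static GLfloat verts[NUM_VERTS * 3];   /* placeholder vertex data */

/* NV path (the one the benchmark takes, per the above): allocate fast
 * video/AGP memory, copy the vertices into it, and declare the range so
 * the GPU can pull them directly. */
static void setup_nv_path(void)
{
    GLfloat *fast = (GLfloat *)wglAllocateMemoryNV(sizeof(verts),
                                                   0.0f, 0.0f, 1.0f);
    memcpy(fast, verts, sizeof(verts));
    glVertexArrayRangeNV(sizeof(verts), fast);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    glVertexPointer(3, GL_FLOAT, 0, fast);  /* pointer must lie inside the range */
    glEnableClientState(GL_VERTEX_ARRAY);
}

/* ATI path (the one the benchmark skips): hand the vertices to a
 * driver-managed object buffer, then point the array at buffer + offset. */
static void setup_ati_path(void)
{
    GLuint buf = glNewObjectBufferATI(sizeof(verts), verts, GL_STATIC_ATI);
    glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, 0, buf, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
}

/* With neither path, a plain glVertexPointer(3, GL_FLOAT, 0, verts) leaves
 * the data in system memory, and it gets dragged across AGP every frame. */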

As for the 2048-byte bug: it screams "newbie", but unfortunately it's very common. There's no need to copy the extension string into another buffer, yet loads of apps do this, and depending on how large their fixed-size buffer is, it's just a matter of time before a driver gets released with an extension list long enough to overflow it. I wouldn't be surprised if they fixed this bug by changing "char exts[2048]" to "char exts[4096]".
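
The safe way is just to search the string glGetString returns in place. A minimal sketch (generic, not Vulpine's code; the has_extension name is my own):

#include <string.h>
#include <GL/gl.h>

/* Look for an extension name directly in the driver's extension string.
 * No fixed-size copy, so there is nothing to overflow however long the
 * list grows; the boundary checks avoid false hits on substrings. */
int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    const char *p;
    size_t len;

    if (!exts || !name || !*name)
        return 0;

    len = strlen(name);
    for (p = exts; (p = strstr(p, name)) != NULL; p += len) {
        int starts = (p == exts) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
    }
    return 0;
}

/* e.g. if (has_extension("GL_ATI_vertex_array_object")) ... */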
 
I don't use it anymore either...

I used to use it back when the GF3 was top of the line, simply because it was hard to find any benchmarks utilizing any form of VS/PS back then, even though it was biased towards the GF3...

but now it just doesn't hold up as a worthy benchmark with current cards
 