3D Labs Wildcat Realizm 100

Zvekan

I had a chance to play a bit with a Wildcat Realizm 100 and got some pretty surprising results - it is slow as hell with texturing.

Here is MDolenc's fillrate tester:

Code:
Fillrate Tester
--------------------------
Display adapter: 3Dlabs Wildcat Realizm 100
Driver version: 6.14.1.0
Display mode: 1024x768 A8R8G8B8 75Hz
Z-Buffer format: D24S8
--------------------------

FFP - Pure fillrate - 203.344040M pixels/sec
FFP - Z pixel rate - 120.152512M pixels/sec
FFP - Single texture - 200.705460M pixels/sec
FFP - Dual texture - 196.121979M pixels/sec
FFP - Triple texture - 186.629608M pixels/sec
FFP - Quad texture - 131.722549M pixels/sec
PS 1.1 - Simple - 203.333054M pixels/sec
PS 1.4 - Simple - 203.327988M pixels/sec
PS 2.0 - Simple - 203.330093M pixels/sec
PS 2.0 PP - Simple - 203.330887M pixels/sec
PS 2.0 - Longer - 203.330292M pixels/sec
PS 2.0 PP - Longer - 203.327972M pixels/sec
PS 2.0 - Longer 4 Registers - 203.298737M pixels/sec
PS 2.0 PP - Longer 4 Registers - 203.325272M pixels/sec
PS 2.0 - Per Pixel Lighting - 21.384666M pixels/sec
PS 2.0 PP - Per Pixel Lighting - 21.384680M pixels/sec

Also surprisingly, changing optimizations from professional to entertainment had almost no effect, unlike the VP series, where the texture vs. geometry preference had a large impact on fillrate. Maybe the drivers are still very bad (the latest beta version, dated 20/8, was used).
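
For anyone curious how such a tester works: it basically just draws a pile of overlapping full-screen quads each frame and divides pixels written by elapsed time. Here is a rough sketch of the idea in C with GLUT - an illustration of the principle only, not MDolenc's actual code, and the window size, overdraw count and timing interval are made-up values:

Code:
/* Fillrate-test sketch: draw OVERDRAW full-screen quads per frame and
 * report Mpixels/sec.  Illustration only - not MDolenc's actual code.
 * Run with vsync disabled, or the swap rate caps the result. */
#include <stdio.h>
#include <GL/glut.h>

#define WIDTH    1024
#define HEIGHT   768
#define OVERDRAW 32   /* full-screen quads per frame (arbitrary) */
#define FRAMES   100  /* frames per measurement interval */

static int frames = 0, t0 = 0;

static void display(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);
    for (i = 0; i < OVERDRAW; ++i) {
        /* with identity matrices this quad exactly covers the viewport */
        glVertex2f(-1.f, -1.f); glVertex2f(1.f, -1.f);
        glVertex2f( 1.f,  1.f); glVertex2f(-1.f, 1.f);
    }
    glEnd();
    glutSwapBuffers();

    if (++frames == FRAMES) {
        int t1;
        double sec, mpix;
        glFinish();                    /* wait until the GPU is really done */
        t1   = glutGet(GLUT_ELAPSED_TIME);
        sec  = (t1 - t0) / 1000.0;
        mpix = (double)WIDTH * HEIGHT * OVERDRAW * FRAMES / sec / 1e6;
        printf("Pure fillrate: %.1f Mpixels/sec\n", mpix);
        frames = 0;
        t0 = t1;
    }
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("fillrate sketch");
    glDisable(GL_DEPTH_TEST);          /* pure colour fill, no Z involved */
    t0 = glutGet(GLUT_ELAPSED_TIME);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

The Z pixel rate and the N-texture numbers presumably come from the same loop with depth writes or extra texture stages enabled.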

3DMark03 ran very slowly - under 2 fps on GT2 and GT3 - and it crashed trying to run GT4. 3DMark01 also crashed, on the advanced pixel shader test.

SPECviewperf 8.1 of course ran flawlessly, but the entertainment optimizations were, surprisingly, generally faster. Using 8x AA and 8x AF had a big effect on performance, which was expected - 8x multisample AA must be very memory intensive. Surprisingly, 4x AA is ordered grid.
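
As a back-of-envelope check (assuming colour and Z/stencil are stored per sample with no compression, which the hardware may well have): at 1024x768 with 32-bit colour and D24S8, the framebuffer is about 1024 x 768 x 8 bytes = ~6 MB at 1x, but roughly 8 x 6 MB = ~50 MB at 8x, and every pixel written can touch up to eight times the memory, so a large hit from 8x multisampling is exactly what you'd expect.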

AF is also angle-dependent - full quality is offered only at 0, 90, 180, and 270 degrees.
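
If anyone wants to reproduce the angle dependence: the usual trick is to give every mip level its own solid colour and watch where the colour bands fall on a receding plane as you roll it around the view axis - with angle-dependent AF the bands creep noticeably closer at angles in between 0 and 90 degrees. A rough C/GLUT sketch of that idea (an illustration only; the texture sizes and the 8x anisotropy value are arbitrary choices):

Code:
/* Angle-dependent AF probe: colour-coded mip levels on a receding
 * plane.  Roll the view with the arrow keys and watch where the
 * colour bands fall.  Requires GL_EXT_texture_filter_anisotropic. */
#include <string.h>
#include <GL/glut.h>

#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#endif

static float angle = 0.0f;

static void make_colored_mips(void)
{
    /* one solid colour per mip level so the LOD bands are visible */
    static const unsigned char colors[10][3] = {
        {255,0,0}, {255,128,0}, {255,255,0}, {0,255,0}, {0,255,255},
        {0,0,255}, {128,0,255}, {255,0,255}, {255,255,255}, {0,0,0}
    };
    static unsigned char buf[512 * 512 * 3];
    int level, size, i;
    for (level = 0, size = 512; size >= 1; ++level, size >>= 1) {
        for (i = 0; i < size * size; ++i)
            memcpy(buf + 3 * i, colors[level], 3);
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, size, size, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, buf);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
    glEnable(GL_TEXTURE_2D);
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(angle, 0, 0, 1);           /* roll around the view axis */
    glTranslatef(0, -1, 0);
    glBegin(GL_QUADS);                   /* large plane receding to z=-50 */
    glTexCoord2f( 0,  0); glVertex3f(-20, 0,  -1);
    glTexCoord2f(64,  0); glVertex3f( 20, 0,  -1);
    glTexCoord2f(64, 64); glVertex3f( 20, 0, -50);
    glTexCoord2f( 0, 64); glVertex3f(-20, 0, -50);
    glEnd();
    glutSwapBuffers();
}

static void special(int key, int x, int y)
{
    (void)x; (void)y;
    if (key == GLUT_KEY_LEFT)  angle -= 5.0f;
    if (key == GLUT_KEY_RIGHT) angle += 5.0f;
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("AF angle probe");
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60, 800.0 / 600.0, 0.1, 100);
    glMatrixMode(GL_MODELVIEW);
    make_colored_mips();
    glutDisplayFunc(display);
    glutSpecialFunc(special);
    glutMainLoop();
    return 0;
}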

In short, 3DLabs/Creative have completely abandoned any idea of conquering the gaming market.

Zvekan
 
In the professional arena, you aren't very concerned about texturing. Quite simply put, I spend only about 5% of my time texturing, and 99% of that is plain flat shading.

Texturing is just not that important. Our main concerns are vertices and edges. Quite simple, isn't it? Extreme accuracy in line drawing is probably at the top of our priority list, since the designs we have to draw have to be precise. A one-pixel error is a major error at times. So in our criteria for purchasing a professional OpenGL card, our first thoughts are vertex speed and accuracy. Texturing simply takes a backseat in our priorities.
 
All I know is that many have switched to off-the-shelf gaming cards, and if that trend continues then 3DLabs will go bankrupt. Didn't Creative buy them so that they could use the P10 in the consumer market? Creative will also become less relevant as better onboard sound solutions come out. I still use a Permedia 2 card in my backup machine, and the drivers were less than stellar back then. I ran my code through NVIDIA hardware and found some bugs that the 3DLabs driver let through. Kind of weird to see this from the OpenGL leader, but maybe they didn't take gaming cards seriously. Remember the restrictive blending ops? Everyone harped on that.
 
Zvekan - what are the clock speeds of the card? Could you test out some of the vertex processing capabilities of it using RightMark too?

Anybody notice that despite being slow, its PS throughput is the same regardless of the shader revision used? Looks somewhat odd to my eyes, and this is not the first case where we've seen a new-ish card struggle in Marko's test (Volari?).
 
Neeyik said:
Zvekan - what are the clock speeds of the card? Could you test out some of the vertex processing capabilities of it using RightMark too?

Anybody notice that despite being slow, its PS throughput is the same regardless of the shader revision used? Looks somewhat odd to my eyes, and this is not the first case where we've seen a new-ish card struggle in Marko's test (Volari?).

I didn't know of a way to find out the clock speeds :(

Sorry, I only had it for a day. I can say that the vertex test in 3DMark03 returned around 7 fps at normal settings (from memory, I didn't write it down as the card was quite unstable), and that is very strange given what SPECviewperf 8.1 returned (professional optimizations):

Code:
Run All Summary 

---------- SUM_RESULTS\3DSMAX\SUMMARY.TXT
3dsmax-03 Weighted Geometric Mean =   34.06

---------- SUM_RESULTS\CATIA\SUMMARY.TXT
catia-01 Weighted Geometric Mean =   24.67

---------- SUM_RESULTS\ENSIGHT\SUMMARY.TXT
ensight-01 Weighted Geometric Mean =   18.70

---------- SUM_RESULTS\LIGHT\SUMMARY.TXT
light-07 Weighted Geometric Mean =   19.36

---------- SUM_RESULTS\MAYA\SUMMARY.TXT
maya-01 Weighted Geometric Mean =   43.55

---------- SUM_RESULTS\PROE\SUMMARY.TXT
proe-03 Weighted Geometric Mean =   48.10

---------- SUM_RESULTS\SW\SUMMARY.TXT
sw-01 Weighted Geometric Mean =   22.21

---------- SUM_RESULTS\UGS\SUMMARY.TXT
ugs-04 Weighted Geometric Mean =   26.79

It was run on an Athlon 64 3400+, 1 GB DDR400, ABIT KV8-MAX3.

Regarding MDolenc's test - it could be wrong, but 3DMark also returned around 200 Mpixels/s (and around 700 Mtexels/s I think, again from memory).

The HPC tests in 3DMark2001 were also terribly slow - no numbers again, as it crashed :( - but the DirectX drivers are probably very immature.

Zvekan
 
Shame you only had it for such a short time. PowerStrip might have been able to read off the clock speeds (it's sort of built into 3DMark03 too). I think you're probably right about the DX drivers being borked at the moment.
 
Neeyik said:
Shame you only had it for such a short time. PowerStrip might have been able to read off the clock speeds (it's sort of built into 3DMark03 too). I think you're probably right about the DX drivers being borked at the moment.

I tried various utilities when I had a Wildcat VP series model to test, but neither PowerStrip nor any other utility had any luck getting the clock speed out.

I tried 3DMark03, but it returned funny numbers like 8 MHz and so on... If I remember correctly, 3DMark03 also has problems with some GeForce FX based cards - luckily CoolBits solves the problem.

Zvekan

edit: CoolBits instead of CoolStrip :)
 
Quite impressive. Seems like NVIDIA will need their SLI to compete with the Realizm 800.
 
If everything goes well we will get a Realizm 200 next week at work. The main purpose is to test our software on these boards, but if I have the time I will do some more benchmarks.

So if you want to see some specific tests or have suggestions on what I should test let me know :)
 
Zeross said:
If everything goes well we will get a Realizm 200 next week at work. The main purpose is to test our software on these boards, but if I have the time I will do some more benchmarks.

So if you want to see some specific tests or have suggestions on what I should test let me know :)

Games! (I know this card isn't meant for games, but still. I just want to see what it could do.)
 
The last time I used a 3DLabs card was a couple of years ago, and it was the dual-processor Oxygen card (I forget the exact model number). It ended up running our OpenGL code slower than another computer with an original GeForce 256 card (same era) which cost about $1000 less.

The only big advantage for us at the time was that it could accelerate OpenGL on two displays, but otherwise I was not impressed, especially with the buggy and crash-prone drivers. It certainly didn't merit the $1500 list price tag.

Nite_Hawk
 