8800GTX Shadermark results

There's something wrong with the VS speed tests. Why is 1 point light as fast as 8 point lights?
This is what I would expect to see from a unified architecture when it is given a workload that is heavily limited by the vertex shader: it becomes limited by triangle setup.
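To make the setup-limit argument concrete, here is a rough sketch. The one-triangle-per-clock setup rate is an assumption for illustration, not a published spec; only the 575 MHz core clock is a known 8800GTX figure. Under a setup cap, the vertex rate is the same whether the VS computes 1 light or 8.

```python
# Back-of-the-envelope: a setup-limited cap on triangle throughput.
# tris_per_clock = 1 is an assumption, not a published figure.
core_clock_hz = 575e6      # 8800GTX core clock
tris_per_clock = 1         # assumed setup rate
setup_cap = core_clock_hz * tris_per_clock
print(f"setup cap: {setup_cap / 1e6:.0f} Mtris/s")  # ~575 Mtris/s, VS load irrelevant
```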
 
Is there any indication whether there is a performance split between full and half precision?
Just curious...
 
Is there any indication whether there is a performance split between full and half precision?
Just curious...
The Fillrate Tester results show that PP runs at identical speed to normal, and the Shadermark results say that both PP and normal have full FP32 (s23e8) precision. So no, no difference (and it doesn't seem you can even force it to a lower precision).
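For anyone unfamiliar with the s23e8 notation: it denotes 1 sign bit, an 8-bit exponent, and a 23-bit mantissa, i.e. standard IEEE 754 single precision (FP32). A small Python sketch to make the bit layout concrete:

```python
import struct

def fp32_fields(x: float):
    """Split an IEEE 754 single into its s23e8 fields:
    1 sign bit, 8 exponent bits (bias 127), 23 mantissa bits."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

print(fp32_fields(1.0))  # (0, 127, 0)
```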
 
I had noted the Shadermark results, but missed the fillrate numbers. Now the only thing left to wait for is up-to-date drivers. I'm not sure how many conclusions I should draw given how old those drivers must be.
 
I had noted the Shadermark results, but missed the fillrate numbers. Now the only thing left to wait for is up-to-date drivers. I'm not sure how many conclusions I should draw given how old those drivers must be.

Yeah, the drivers that shipped with the cards must be over a month old at this point.
 
This is what I would expect to see from a unified architecture when it is given a workload that is heavily limited by the vertex shader: it becomes limited by triangle setup.
In that case, why is the Plain Vertices rate so much higher than the VS speed rates? If the VS were actually limited by setup, then those numbers should be the same, no?
 
If the VS were actually limited by setup, then those numbers should be the same, no?
The benchmark having no guarantee of being bug-free would be a possible explanation, I guess... :) Some of these numbers for pre-G80 SKUs clearly go against common sense or well-known information.

Uttar
 
In that case, why is the Plain Vertices rate so much higher than the VS speed rates? If the VS were actually limited by setup, then those numbers should be the same, no?
Interpolants can affect triangle setup speed. Whether you have 8 lights or 1, the number of interpolants is the same, but plain vertices may have fewer.
 
Man, this sucker is a BEAST.

wtf are you going to do with 70 GPix/s of z-fillrate? :LOL: You could have 10x true overdraw (without culling) on a 10K x 10K uberscreen and you'd still be realtime! Not that I'm complaining...
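A quick back-of-the-envelope check of that claim, using the numbers above (the 10K x 10K screen is of course hypothetical):

```python
# Sanity check: frames per second at 10x overdraw on a 10K x 10K screen.
z_fill = 70e9            # ~70 GPix/s Z-only fillrate
screen = 10_000 ** 2     # hypothetical 10K x 10K screen, pixels
overdraw = 10            # 10x true overdraw, no culling
print(f"{z_fill / (screen * overdraw):.0f} fps")  # ~70 fps, still realtime
```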

Arithmetic and texturing perf looks to be over 2 times the G71 (128 scalar MAD+MUL units at 1350 MHz would explain it), but I'm not seeing enough differences in shading characteristics from G71 that would suggest a very different base architecture. Texture decoupling, if it did happen, doesn't seem to have helped one shadermark test over the other (relatively speaking), at least not to the degree of R420->R520. For similar reasons I'm doubtful about truly independent scalar ALUs.
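Rough math behind that estimate, as a sketch. The G80 figures are the hypothesis stated above; the G71 side assumes the usual characterization of 24 pixel pipes with two vec4 MAD ALUs each at 650 MHz:

```python
# Peak pixel-shader flops under the assumptions stated above.
g80_flops = 128 * (2 + 1) * 1.35e9   # 128 scalar units, MAD(2) + MUL(1), 1350 MHz
g71_flops = 24 * 2 * 4 * 2 * 0.65e9  # pipes * ALUs * vec4 * MAD(2) * 650 MHz
print(g80_flops / 1e9, g71_flops / 1e9, g80_flops / g71_flops)
# ~518 vs ~250 GFLOPS -> roughly 2.1x, matching the observed gap
```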

Not that any of this really matters, as it's insanely fast. The VS also looks speedy for a non-unified design. IMO, R600 will have a tough time matching this unless ATI also made GHz shader engines. Hopefully AMD can help them out in this area for next-next gen.

I might make the switch to green this winter! :oops:
 
Make sure you have a beast of a processor to back it up too. I have found that the "lowly" GTS is heavily CPU bound.
 
Right now, I am frantically looking for either Shadermark G71 results or conclusive evidence indicating what effect, if any, resolution has on the results, in order to directly compare the G70 and G80. The architecture does seem to give a very good account of itself vs. the R580. BTW, I think the preponderance of evidence at this point suggests a unified design.
 
Make sure you have a beast of a processor to back it up too. I have found that the "lowly" GTS is heavily CPU bound.

Meaningless without context. There are a number of applications that are CPU-bound for a particular CPU-GPU-resolution-quality combination, but, by itself, there is no such thing as a CPU-bound GPU.

(Pedantic? Maybe, but these blanket statements are annoying and have no higher signal/noise ratio than the articles of you-know-who.)
 
Meaningless without context. There are a number of applications that are CPU-bound for a particular CPU-GPU-resolution-quality combination, but, by itself, there is no such thing as a CPU-bound GPU.

(Pedantic? Maybe, but these blanket statements are annoying and have no higher signal/noise ratio than the articles of you-know-who.)

Pedantic? Most certainly. But I would think it's possible for you to at least imagine the GPU at its typical job (playing games!), where you would certainly need a CPU that can keep up with it.
 
Pedantic? Most certainly. But I would think it's possible for you to at least imagine the GPU at its typical job (playing games!), where you would certainly need a CPU that can keep up with it.

I just upgraded from a 19" to a 24" screen. Suddenly, all my games became GPU limited. Not surprising, of course, but it's the best way to show that a CPU-bound GPU doesn't exist.
I wonder how many common, popular games out there are CPU bound at 19x12 on an 8800GTX. Probably not many.
 
I just upgraded from a 19" to a 24" screen. Suddenly, all my games became GPU limited. Not surprising, of course, but it's the best way to show that a CPU-bound GPU doesn't exist.
I wonder how many common, popular games out there are CPU bound at 19x12 on an 8800GTX. Probably not many.

What is your CPU? How do you know, unless of course you've done a wide range of testing, that you would not gain from the extra CPU power?
 
What is your CPU? How do you know, unless of course you've done a wide range of testing, that you would not gain from the extra CPU power?

So you think someone can't see what the bottleneck in a single system is by running some tests? You don't need to compare a gazillion systems to see if a system is CPU bound; one indication would be that, at resolution X, you suddenly get GPU-specific functions nearly for free, and the next best riddle is what a four-legged animal walking on a hot tin roof might be ;)
 
What is your CPU? How do you know, unless of course you've done a wide range of testing, that you would not gain from the extra CPU power?


Yup. You can gain additional performance at high resolutions with a faster CPU, but the effect is much smaller than at lower resolutions, where the application is purely CPU bound.
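To make the diagnostic concrete, a minimal sketch of the resolution-scaling test being argued about here; the framerates and the 10% tolerance are hypothetical:

```python
# If framerate barely moves as resolution rises, the limiter at the
# lower resolution is the CPU (or something else upstream of the GPU).
def looks_cpu_bound(fps_low: float, fps_high: float, tol: float = 0.1) -> bool:
    """True if framerate is roughly flat across the resolution change."""
    return (fps_low - fps_high) / fps_low < tol

print(looks_cpu_bound(120.0, 115.0))  # True: the higher res is nearly free
print(looks_cpu_bound(120.0, 60.0))   # False: GPU-limited at the high res
```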

Chris
 