AMD: R7xx Speculation

Interesting that there's been an increase in fillrate despite a decrease in clock speed and no increase in execution unit count. I'm assuming no AA was in use here, so this can only mean Z-fill has increased as expected.

AFAIK single-texturing fillrate in Futuremark products has been a function of bandwidth.

Proof: ST-Fill on HD2900XT was ~8k.
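For reference, here's the arithmetic behind that "~8k" figure: a rough bandwidth-limit estimate (my own sketch, not from this thread), assuming 32-bit texels and an alpha-blended 32-bit framebuffer, i.e. roughly 12 bytes of bus traffic per output pixel:

```python
def st_fill_limit_mtexels(bandwidth_gbs, bytes_per_pixel=12):
    """Bandwidth-limited single-texture fill rate in MTexels/s.

    Assumes each pixel needs a 4-byte texel fetch plus an alpha-blend
    read-modify-write of a 4-byte framebuffer value (12 bytes total).
    """
    return bandwidth_gbs * 1e9 / bytes_per_pixel / 1e6

# HD2900XT: 512-bit bus, ~106 GB/s of bandwidth
print(round(st_fill_limit_mtexels(106)))  # ~8833, i.e. the "~8k" figure
```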
 
In that case something is probably just borked on your system =(

I know for certain that my old X1950Pro ran like the wind with KotOR (with AA and AF of course).
It's gotten faster since I last tried it, anyway; it's pretty stable at 50 FPS in that section now, whereas before it hovered between 30 and 40.

Ah well, its days are numbered anyway.

I'm getting a 4870 when they come out in the UK.
 
OGL in the desktop space most certainly is meant for gaming, however, so in this context I am correct. If we were discussing development in a workstation environment you would be correct.
Well, you were talking about the API, and there's no separate OGL gaming API and OGL workstation API... But yes, I won't disagree with you that the drivers used on desktop PCs are tuned for gaming performance.

ShaiderHaran said:
I certainly can't disprove your theory at this point in time, but I doubt the inclusion of any new compression scheme would provide the results we're seeing at this point in the life of GPU design. IOW: all the low-hanging fruit in this area has already been picked. It would take a radical new paradigm to achieve results like this, and I just don't think that would've been possible during the design phase of RV770.
I agree it seems a bit odd that they could improve it this much, but I don't see a different explanation (well, maybe a tuned memory controller, if it was previously "wasting" bandwidth). AFAIK the 3DMark06 single-texture fillrate test does single texture + alpha blend (z-tests might not even be enabled, but even if they are, it won't require more than one z-test per output pixel). The HD3850/3870 only reach about half their theoretical maximum there, and R600 was way faster in this test.
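A back-of-envelope check of that "about half" figure, under the same sort of assumption: a single 32-bit texel fetch plus an alpha-blend read-modify-write of a 32-bit framebuffer, i.e. ~12 bytes per pixel. The clocks below are the commonly quoted HD3870 reference specs, so treat them as my assumptions:

```python
core_mhz, tex_units = 775, 16               # HD3870 reference specs
theoretical = core_mhz * tex_units          # 12400 MTexels/s peak
bandwidth_gbs = 72.0                        # 256-bit GDDR4 @ 2.25 GT/s
bytes_per_pixel = 12                        # texel fetch + blend RMW (assumed)
bw_limit = bandwidth_gbs * 1e9 / bytes_per_pixel / 1e6   # in MTexels/s

print(theoretical, round(bw_limit), round(bw_limit / theoretical, 2))
# 12400 6000 0.48 -> about half of theoretical, matching the observation
```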
 
Ok, so it's most certainly not due to increased z-fill then. Thanks for the clarification. I look forward to finding out the answer to this mystery!
 
4850 or 4870?

[attached benchmark screenshots]
 
Thanks for the info. Perhaps I'm wrong after all!

I don't think you're wrong, just that this piece of the puzzle doesn't lead to that particular conclusion.

Since it's slower in Crysis (which it isn't supposed to be...), it should be 4850.
 
If RV770 has 32 TUs, then it presumably also has 512KB of L2 texture cache, i.e. double RV670's. That might make a difference.

Jawed
 
The 9800GTX has a bit more bandwidth (70.4GB/s) than HD4850 and I dare say we're seeing general signs that HD4850 is bandwidth limited, so that Crysis performance seems reasonable.

Though the earlier report of 34fps for GTX280 indicates that 9800GTX, with half GTX280's bandwidth, is being a smidgen wasteful. Though I think these framerates are in rounding-error territory.
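For anyone checking these numbers, the bandwidth figures are just effective memory data rate times bus width; a quick sketch (the memory clocks are the commonly quoted reference specs, so treat them as my assumptions):

```python
def bandwidth_gbs(mem_mhz, bus_bits, pump=2):
    """Peak memory bandwidth in GB/s (pump=2 for DDR-style signalling)."""
    return mem_mhz * 1e6 * pump * bus_bits / 8 / 1e9

print(bandwidth_gbs(1100, 256))  # 9800GTX, GDDR3 @ 1100 MHz: 70.4
print(bandwidth_gbs(993, 256))   # HD4850, GDDR3 @ 993 MHz: ~63.55
```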

Jawed
 
What? Everybody too busy analyzing these data to post?

I assume those are all Windows Fister numbers? I hate when sites leave out XP numbers just to run a DX10 bench (especially when said bench is just Vantage). Fister 3dmark 06 scores mean nothing to me.

Too bad about Grid though, would've been nice to see some numbers...
 
Didn't we do that already?

Nevertheless, SM3/HDR performance is the only sub-score I can compare directly, since I only have a dual-core CPU.

So on that particular playing field I see a performance drop-off of about 32.1 percent on the HD4850 going from 1x to 8x AA. My HD2900 XT loses 47.9 percent; given the allegedly better 8x AA performance of the HD3800 series, I am not as surprised as I perhaps should be. Good, but not great.

In absolute terms the "HD4800 series" card is 7.7 percent faster than my HD2900 XT - not much. The SM2 scores are evenly matched, probably thanks to my CPU's higher clock frequency.
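For clarity, those drop-off percentages are just the relative loss of the sub-score when enabling 8x AA; a trivial sketch with hypothetical scores (the real sub-scores aren't given in this post):

```python
def aa_dropoff_pct(score_no_aa, score_8x_aa):
    """Percent of performance lost going from no AA to 8x AA."""
    return (score_no_aa - score_8x_aa) / score_no_aa * 100

# Hypothetical sub-scores chosen to illustrate a 32.1% drop:
print(round(aa_dropoff_pct(1000, 679), 1))  # 32.1
```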
 