AMD: R7xx Speculation

I had a bit of a rummage to compare 2560x1600 16xAF/8xAA performance. This is an 8800GTX:

http://service.futuremark.com/resultComparison.action?compareResultId=3876467&compareResultType=14

SM2.0 - 1342
Graphics test 1 - 11.3
Graphics test 2 - 11.1
SM3.0 - 1063
HDR test 1 - 11.7
HDR test 2 - 9.6

HD4850:
SM2.0 - 1930
Graphics test 1 - 15.5
Graphics test 2 - 16.6
SM3.0 - 1735
HDR test 1 - 19.6
HDR test 2 - 15.1
I have to say the AA performance of the HD4850 is insane, even if we know that 8xMSAA is unkind to G80.
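For scale, those overall scores put the HD 4850 roughly 44% ahead in SM2.0 and 63% ahead in SM3.0. A trivial sketch of the arithmetic:

```cpp
// Relative advantage of the HD 4850 over the 8800 GTX, computed from the
// 3DMark06 overall scores quoted above (2560x1600, 16xAF/8xAA).
#include <cstdio>

int main() {
    const double gtx_sm20 = 1342.0, hd4850_sm20 = 1930.0;
    const double gtx_sm30 = 1063.0, hd4850_sm30 = 1735.0;
    std::printf("SM2.0: +%.0f%%\n", (hd4850_sm20 / gtx_sm20 - 1.0) * 100.0); // ~+44%
    std::printf("SM3.0: +%.0f%%\n", (hd4850_sm30 / gtx_sm30 - 1.0) * 100.0); // ~+63%
    return 0;
}
```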

Jawed
 

Probably just a silly question...

Isn't 2560x1600 bandwidth-bound?

Doesn't the 8800GTX have higher bandwidth than the 256-bit bus of RV770? If so, what would be the source of this kind of insane AA performance from the HD4850? :?:
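For reference, peak bandwidth is just bus width times effective data rate. A back-of-the-envelope sketch, assuming the commonly cited 8800 GTX clocks and the rumoured RV770 ones (the HD 4850 figures weren't confirmed at this point):

```cpp
// Peak memory bandwidth = (bus width in bytes) * (effective data rate).
#include <cstdio>

double peak_gb_per_s(int bus_bits, double effective_mt_per_s) {
    return (bus_bits / 8.0) * effective_mt_per_s / 1000.0;
}

int main() {
    // 8800 GTX: 384-bit bus, 900 MHz GDDR3 (1800 MT/s effective).
    std::printf("8800 GTX: %.1f GB/s\n", peak_gb_per_s(384, 1800)); // ~86.4 GB/s
    // HD 4850 (rumoured): 256-bit bus, ~993 MHz GDDR3 (~1986 MT/s effective).
    std::printf("HD 4850:  %.1f GB/s\n", peak_gb_per_s(256, 1986)); // ~63.6 GB/s
    return 0;
}
```

So yes, the 8800 GTX does have considerably more raw bandwidth, which makes the HD 4850's 8xAA results all the more surprising.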
 
Well, I'd want to make sure 8x AA is actually being applied there.

Just being OT for a second but was there a conclusion for 8xAA actually being applied for the HD38x0? (remember the computerbase.de chart where it took almost no performance hit, whereas the 8800GT took a nose dive)
 
I think you'd just need to be careful what you call "8xAA" these days. Both nvidia and AMD have modes afaik which might look like something they aren't - i.e. a mode simply called 8xAA might have 4 color/z samples and use a wide tent filter on AMD hardware (and nvidia might have similar simple names for csaa modes). Of course, easiest comparison is with "true" multisampled modes using a box filter.
 
In D3D, if the application asks for 8xAA then they always get 8xAA on AMD HW. The end user would have to explicitly override that via the control panel.

OpenGL used to have some fallbacks in case they ran out of memory so that the application would continue to run, but I don't know if that's still the case. In D3D, we return an error if we can't allocate an AA buffer.
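To make the distinction concrete, here's a minimal D3D9 sketch of an application explicitly requesting true 8x box-filtered MSAA (pD3D and d3dpp are assumed to come from the usual device setup):

```cpp
#include <d3d9.h>

// Sketch: ask the runtime whether real 8-sample MSAA is available, and
// request it at device creation if so. pD3D is an IDirect3D9* and d3dpp a
// D3DPRESENT_PARAMETERS filled in elsewhere.
void RequestTrue8xMsaa(IDirect3D9* pD3D, D3DPRESENT_PARAMETERS& d3dpp) {
    DWORD qualityLevels = 0;
    HRESULT hr = pD3D->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_A8R8G8B8,
        FALSE /* fullscreen */, D3DMULTISAMPLE_8_SAMPLES, &qualityLevels);
    if (SUCCEEDED(hr)) {
        // Quality level 0 gives plain box-filtered MSAA, avoiding the
        // vendor-specific CSAA/tent-filter variants mentioned above.
        d3dpp.MultiSampleType    = D3DMULTISAMPLE_8_SAMPLES;
        d3dpp.MultiSampleQuality = 0;
    }
    // Otherwise there's no true 8x support; as noted above, D3D returns an
    // error rather than silently substituting a different mode.
}
```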
 
Presumably just efficient use of memory and flexible compression.

Jawed

Thank you for the kind reply :smile:

Also, isn't G80 known as an architecture that uses its bandwidth efficiently? If so, could it be that this time around RV770 has a more efficient design than R600 did? :cool:
 
Hard to say if G80 is more efficient... while the G80 GTX/Ultra never seemed to be memory-bandwidth limited, the G92 GTX/GTS were easily bandwidth limited by their 256-bit bus. The 3870 never seemed to suffer like the G92s did at 1920x1200 and higher settings (by suffer I mean drop in performance as much), despite also having a 256-bit bus and fairly similar memory clocks.

So either G80 and its derivatives actually utilize a lot of their memory bandwidth, or they use it inefficiently.
 
The 4800-series Crysis benches in the new chart are going to be totally unplayable. Looking at Tom's, at 1680x1050 with 4xAA the 9800GTX scores just 12.9 FPS (although without AA that kicks up to 24.3 FPS), so 1920x1200 with AA is going to be that much lower. I just wonder if these 4870/4850 > 9800GTX results scale to the lower resolutions, or if they're taking advantage of Nvidia's poor memory management a bit.
 

Those are probably real. But the problem with those slides is that they're comparing against older and cheaper models. The 9800 GTX now has a street price below $250 (and in some cases barely above $200), while the 8800GT now has a street price below $150 (and in some cases barely above $100). So the 9800 GTX is currently priced at the HD 4850's price point, while the 8800 GT is priced well below the 4850.

Of course, NV had the same problem in their slides, comparing the GTX 260 and 280 against the 3870X2. Comparisons are always dubious when made against older models whose street prices are well below their retail prices.
 
So either G80 and its derivatives actually utilize a lot of their memory bandwidth, or they use it inefficiently.

Can't use a relative performance drop to judge bandwidth efficiency. If the other architecture is more bottlenecked in some other area, the performance drop in a high-bandwidth-demand scenario will obviously be lower.

http://www.tomshardware.com/reviews/nvidia-geforce-9600-gt,1780-12.html

The only way to do a proper analysis is when there is equal performance in the noAA scenario.
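As a toy illustration of that methodology (made-up numbers, purely to show why equal no-AA performance matters):

```cpp
// The relative AA hit only isolates bandwidth/ROP behaviour if both cards
// start from (roughly) equal no-AA performance.
#include <cstdio>

double aa_hit_percent(double fps_no_aa, double fps_aa) {
    return (1.0 - fps_aa / fps_no_aa) * 100.0;
}

int main() {
    // Hypothetical: both cards render 60 fps without AA...
    std::printf("Card A: %.0f%% drop\n", aa_hit_percent(60.0, 45.0)); // 25% drop
    std::printf("Card B: %.0f%% drop\n", aa_hit_percent(60.0, 30.0)); // 50% drop
    // ...so the difference in the drop can be pinned on how each one copes
    // with the extra bandwidth/ROP load, not on some unrelated bottleneck.
    return 0;
}
```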
 
The 9800 GTX now has a street price below $250 (and in some cases barely above $200), while the 8800GT now has a street price below $150 (and in some cases barely above $100).

Please link me these $200 9800 GTXs.
 
One interesting thing that I noticed on some leaked ATI Radeon slides on the 48xx series is that they were advertising "high performance anisotropic filtering". Looks like they must have made nice improvements in that area, but it's interesting that the latest leaked slides from AMD only showed 8xAF. Any reason to believe that there would be a significant performance drop from 8xAF to 16xAF?
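For context, the AF level is just a per-sampler state that the application (or a driver override) picks, so 8x vs 16x is roughly a question of how many extra texture samples get taken on steeply angled surfaces. A minimal D3D9 sketch (hypothetical helper name):

```cpp
#include <d3d9.h>

// Clamp the requested anisotropy (e.g. 8 or 16) to the hardware maximum and
// enable anisotropic minification on sampler 0.
void SetAnisotropy(IDirect3DDevice9* dev, DWORD desired) {
    D3DCAPS9 caps;
    dev->GetDeviceCaps(&caps);
    DWORD level = desired < caps.MaxAnisotropy ? desired : caps.MaxAnisotropy;
    dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
    dev->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, level);
}
```

Whether 16xAF costs much more than 8xAF then depends on how many pixels actually need the higher sample counts.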
 
On the home page at nvnews.net, you can see 9800 GTX listing for ~$230-$245. Also, I recall seeing some 8800 GT's listed there for ~$120-$140.

I see the linked one supposedly for $234, but when I follow the link they want $294 after rebate. All of the ones at Newegg seem to be in the $275+ range (counting the rebate).
 
If you just search for 9800 GTX on PriceGrabber, you can find several eVGA 9800 GTXs selling for $269 without rebate, so it's already pretty close to the $250 price point, and I'm sure it will come down even more when the new Radeons are released.
 
That's still $70 more than the mooted 4850 price point, for a card that's supposedly 10% slower. And that's probably from dodgy vendors; we should generally use Newegg for prices.

And I agree G92b prices will fall; the question is what that's going to do to Nvidia's profit margins, especially since it was supposedly already struggling with margins on G92b even at current prices.

More (4850) slides, didn't see these posted:

[three attached HD 4850 slide images]
 