radeon 3870 lose at 4AA/16AF against 8800GT but win at 8AA/6AF?

Discussion in '3D Hardware, Software & Output Devices' started by revan, Nov 18, 2007.

  1. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me

No, those are percentages... the graphs were made in Flash MX or something; when you hold your mouse over a bar they change from frames/s to percentages relative to the card you chose... try this link:
    http://www.google.com/translate?u=http://www.computerbase.de/artikel/hardware/grafikkarten/20
     
  2. Sound_Card

    Regular

    Joined:
    Nov 24, 2006
    Messages:
    936
    Likes Received:
    4
    Location:
    San Antonio, TX
    As pointed out already, those are the %'s with HD 3870 being the base (100%). In that review, if you scroll over the graph, it changes from fps to %.
     
  3. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    3,264
    Likes Received:
    813
That's a very interesting review :shock:

Reading other reviews, I found it odd that fairly often they would say something about how RV670 got slaughtered with AA enabled, when in fact their own results showed the 3870 besting the GT at the highest resolution with the most AA/AF enabled.

So some stuff does seem to have changed in RV670.
Dave previously mentioned some stuff improving latency sensitivity.

An earlier thread (edit: this one) pointed out that cache-hit improvements etc. tend to make the AA/AF workload proportionally smaller at higher resolutions.
Maybe the "HD" bit of the marketing is actually legit, and the R600 architecture is better suited to higher resolutions than the NV architecture.

A much older R600 thread included the suggestion that R600's 64-bit memory channels would often not be fully utilised, resulting in inefficiency.
A much-higher-clocked 256-bit bus made of 32-bit channels (e.g. RV670) may thus actually have much higher usable bandwidth.
Presumably absolute (though not necessarily per-clock) latency is better with both higher core and memory bus clocks too?

The few OC benchmarks at 850MHz+ I've seen look much more like direct competition for the GTX.
(Are the ATI guys allowed to disclose the design target clocks for R600, since it's now EoL?
Or, while we're at it, the rationale for the 512-bit bus?)
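The channel-utilisation point above can be sketched with a toy calculation (the burst length of 8 and the 16-byte request size are illustrative assumptions, not DRAM-datasheet figures): a small request wastes a larger fraction of a wide channel's minimum transaction than of a narrow one's.

```python
def wasted_fraction(request_bytes, channel_bits, burst_length=8):
    """Fraction of one minimum-size memory transaction left unused."""
    txn_bytes = channel_bits // 8 * burst_length  # bytes moved per access
    used = min(request_bytes, txn_bytes)
    return 1 - used / txn_bytes

# A hypothetical 16-byte request, purely for illustration:
print(wasted_fraction(16, 64))  # 64-bit channel moves 64 B -> 0.75 wasted
print(wasted_fraction(16, 32))  # 32-bit channel moves 32 B -> 0.5 wasted
```

On this toy model, four 32-bit channels can serve four small requests in the time one 64-bit pair of channels serves two, which is the "higher usable bandwidth" intuition.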
     
    #43 hoom, Nov 19, 2007
    Last edited by a moderator: Nov 19, 2007
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
Funny, I'd forgotten that I suspect RV670 has 32-bit memory channels - which should have an effect on latency. Dave's also mentioned latency-related changes, but we don't know what they are.

    Jawed
     
  5. fellix

    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,552
    Likes Received:
    514
    Location:
    Varna, Bulgaria
It would be useful to conduct a handful of tests with a GDDR4-based R600 board and an RV670 one (the HD 3870), with similarly clocked memory arrays and GPUs. That would shed some light on the matter.
     
  6. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me
Thanks for the work, BRiT. It will ease the discussion, with all the links posted.
     
  7. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
Sure, it is not playable, but think about a second card: 3870 CF for ~$450 would beat 8800 Ultra SLI for ~$1200 while reaching 30+ FPS...

The problem is definitely on NV's side: FPS are divided by four going from 4xAA to 8xAA, which should not happen...

I see the 3870 drop from 31 FPS at 4xAA to 21 FPS at 8xAA. NV should drop by a similar proportion, but like I said, its FPS are divided by four...
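The proportions quoted above can be checked quickly (the 40 → 10 fps pair is a made-up illustration of an "FPS divided by four" drop, not a measured result):

```python
def aa_drop(fps_4x, fps_8x):
    """Fraction of performance lost going from 4xAA to 8xAA."""
    return 1.0 - fps_8x / fps_4x

print(aa_drop(31, 21))  # HD 3870 figures above: ~0.32, a ~32% hit
print(aa_drop(40, 10))  # hypothetical "divided by four" card: 0.75
```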
     
    #47 AnarchX, Nov 19, 2007
    Last edited by a moderator: Nov 19, 2007
  8. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me
[QUOTE="revan"]
1. We can see here that the largest gap between the 8800 GT and HD 3870 is at 4xAA (19%), but at 8xAA/16xAF the situation turned in ATI's favor, the HD 3870 beating the 8800 GT by 5%!
- How is that possible?
[/QUOTE]

Don't want to respond to my own question, but I've just had an idea....

Maybe DX10 is the culprit...
We've seen that DX10 games are very slow (unoptimized yet?), at least for now...
Much slower than we expected a year ago, when the ATI guys were building the foundation for their new architecture...
It is possible that they optimized the 29xx series for 8xAA+, and this backfired when the DX10 games slowly :!: showed their faces...
In the meantime Nvidia stayed down to earth, targeting 4xAA with their 8th generation...

Probably the 29xx/38xx will work like a charm (8xAA+ must be a beauty) once good DX10 engines are out...
Hope will still be alive that day :grin:
     
  9. skazz

    Newcomer

    Joined:
    Aug 21, 2004
    Messages:
    87
    Likes Received:
    0
    Location:
    The Netherlands
    Do both vendors use the same method of AA at either 4x or 8x?
     
  10. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    The fast answer is yes, they both do gamma corrected Box AA by default at 4x and 8x.

    The not so fast answer is that both vendors also have differing forms of 4x and 8x and everything in between and beyond.

Nvidia also has dedicated hardware resolve for AA. You'll notice lots of hoopla when something doesn't use the hardware and instead uses "software" or "shader based" AA (e.g. CoJ in DX10) to avoid rendering errors when AA is enabled.

ATI uses shader-based AA, with hardware dedicated to a "quick path" from the RBEs to the shaders for this purpose.

    Maybe this is just a case where Nvidia hardware is optimized for 4xAA and as you get to higher levels of AA it equalizes out or is less efficient than ATI's shader based AA. Who knows...

    Regards,
    SB
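As a rough sketch of what a "gamma corrected box" resolve computes: each pixel's AA sub-samples are converted to linear light, averaged with equal weights (the box filter), and converted back. The 2.2 exponent and the two-sample pixel below are simplifying assumptions; real hardware uses the piecewise sRGB curve and 4 or 8 samples.

```python
GAMMA = 2.2

def to_linear(c):
    return c ** GAMMA

def to_gamma(c):
    return c ** (1.0 / GAMMA)

def box_resolve(samples, gamma_correct=True):
    """Average one pixel's AA sub-sample intensities (box filter)."""
    if gamma_correct:
        samples = [to_linear(s) for s in samples]
    avg = sum(samples) / len(samples)
    return to_gamma(avg) if gamma_correct else avg

edge = [1.0, 0.0]                              # half-covered edge pixel
print(box_resolve(edge, gamma_correct=False))  # 0.5, naive average
print(box_resolve(edge))                       # ~0.73, perceptually lighter
```

The difference between the two outputs is why gamma-corrected resolve makes near-black/near-white edges look smoother instead of too dark.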
     
  11. Dooby

    Regular

    Joined:
    Jul 21, 2003
    Messages:
    478
    Likes Received:
    3
Who the hell buys an 8800U to run stuff at 1280x1024? Numbers for top-end cards on anything less than 1600x1200 are pointless IMO. I've not used less than 1600x1200 in 8 years.
     
  12. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    There are a lot of users with nice 19" LCDs with 1280x1024 native resolutions. I have two such displays in dual display.

    And shock and horror, in a game like Crysis I would keep all the settings on high and go down to even 1024x768 (or even worse, 1024x576) to get stable framerates where 90-95% of frames are above 30fps. I used to game on a 1600x1200 21" CRT, and as nice as it was, I think resolution is overrated when it comes to compromising features (notably AA and AF) or framerate.
     
  13. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    So can we pinpoint why the 8800GT loses more performance against the 3870 when 8xAA is enabled?

    Bandwidth? framebuffer? drivers?

However, I wouldn't be surprised if the 16xCSAA results are much better than the pure 8xMSAA mode's. In fact, 16xCSAA is known not only to provide better IQ, but to have a performance impact roughly equivalent to 4xMSAA's.

Reviewers should use CSAA as well in these cases, IMO. Why? Because these modes don't hit bandwidth/framebuffer hard (the very concept behind CSAA) like the traditional modes do. (Look at the performance hit of 8xMSAA on the 8800 GT!)
     
  14. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    CSAA doesn't provide consistent quality across all games (compatibility) and polygon edges (polygon intersections).

    Better compression and ring-bus? :)
     
  15. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
I'm willing to bet it's something as simple as G92's hardware-accelerated/assisted AA being optimized for 4xAA, with little thought given to 8xAA.

When moving to 8xAA it has to spend an inordinate amount of time resolving it, or maybe it's falling back to shader-based AA at that point.

Since RV670 doesn't have dedicated AA hardware (other than the fast path to the shaders), it has a more linear-ish drop-off than G92, and with its abundant ALU power I don't imagine 8xAA is going to be inordinately more taxing than 4xAA.

    Regards,
    SB
     
  16. vertex_shader

    Banned

    Joined:
    Sep 8, 2006
    Messages:
    961
    Likes Received:
    14
    Location:
    Far far away
I think it's bandwidth. Here the test continues with the 3850: at 1280x1024 the 3870 is 21.9% faster than the HD 3850, and 35.4% faster with 8xAA/16xAF enabled; at 1600x1200 the HD 3870 is 26.2% faster, and 56.2% faster with 8xAA/16xAF.

The 3870's 8xAA/16xAF performance looks really great against the competition; the problem is that there are only very few games the card can run at a playable framerate at 8xAA/16xAF.
(The thread name is wrong, it says 6af :wink: )
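The widening of the 3870's lead once 8xAA/16xAF is switched on can be read straight from the figures above (if the main difference between the two boards is memory clock, a widening lead suggests the AA'd workload leans harder on bandwidth):

```python
# HD 3870's percentage lead over the HD 3850 (figures quoted above)
leads = {
    ("1280x1024", "no AA"): 21.9,
    ("1280x1024", "8xAA/16xAF"): 35.4,
    ("1600x1200", "no AA"): 26.2,
    ("1600x1200", "8xAA/16xAF"): 56.2,
}

for res in ("1280x1024", "1600x1200"):
    widening = leads[(res, "8xAA/16xAF")] - leads[(res, "no AA")]
    print(f"{res}: lead widens by {widening:.1f} points with 8xAA/16xAF")
```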
     
    #56 vertex_shader, Dec 2, 2007
    Last edited by a moderator: Dec 2, 2007
  17. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
I've always assumed it was the infamous "memory management bug" on G8x/G9x hardware rearing its ugly head again, given computerbase.de's numbers showing the 3870 beating even the 8800 Ultra with 8xAA @ 2560x1600 across a wide variety of games. Since the Ultra has more bandwidth than any 3870, I'm going to go ahead and say it's not a bandwidth limitation ;)
     
  18. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    How can you say that's not a bandwidth limitation, when both solutions use different compression techniques?
     
  19. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Because one has way more bandwidth than the other yet yields lower performance.

    If we believe it is a bandwidth limitation on the NV side, then why does the Ultra with ~50% more bandwidth than an HD 3870 lose out to it at such a high resolution with 8x AA across so many titles (12% aggregate)? If it is bandwidth, then a G8x/G9x product would need something like 70% more bandwidth than an HD 3870 to recapture the performance crown with these settings, and I just don't buy that. Also, if it is bandwidth, why doesn't the HD 2900 XT with its massive 100GB/s+ bandwidth out-perform G8x/G9x by even more?

    It's not bandwidth.
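The bandwidth figures behind this argument can be reconstructed from the commonly quoted reference specs (bus width × effective memory data rate); the clocks below are the usual published reference numbers, not measurements from any particular board:

```python
def peak_bandwidth_gbs(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

ultra = peak_bandwidth_gbs(384, 2160)    # 8800 Ultra: ~103.7 GB/s
hd3870 = peak_bandwidth_gbs(256, 2250)   # HD 3870:    ~72.0 GB/s
hd2900 = peak_bandwidth_gbs(512, 1656)   # HD 2900 XT: ~106.0 GB/s

print(f"Ultra over HD 3870: {ultra / hd3870 - 1:.0%}")
```

By these reference clocks the Ultra's advantage works out to roughly 44%, in the neighbourhood of the "~50%" quoted above, and the 2900 XT lands slightly above the Ultra.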
     
  20. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,517
    Likes Received:
    24,424
    It's not bandwidth in the AMD/ATI camp.

    It is bandwidth in the Nvidia camp.
     