HD 4870 review thread.

Discussion in '3D Hardware, Software & Output Devices' started by mczak, Jun 24, 2008.

  1. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    8,477
    Likes Received:
    326
    Location:
    Treading Water
    Not really, no, but if you read AnandTech's testing methodology you'll see that they tested single cards on an Nvidia chipset but CrossFire on an Intel chipset; that's probably what is causing the anomaly in CrossFire scaling with regard to CoD4.
     
  2. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    The QuakeWorld issue was fixed recently (xfire support added to all OpenGL titles), and I thought it made it into the release driver, but it is very recent.

    As for super-linear performance, it's possible -- the actual amount of cache is increased as well as the number of functional units. That means it's very possible for data to stay in cache longer in xfire than in regular rendering on a single card. That will lead to performance higher than 2x.

    On average, we are seeing scaling in the 1.75~1.85 range for xfire. Of course, this is at higher resolutions and/or higher settings.
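    The cache argument above can be sketched with a toy model: if adding a second GPU halves each GPU's working set while each keeps a full-size cache, the average cost per memory access drops, so scaling can exceed 2x. A minimal illustration (the hit-rate model and latencies are made-up numbers, not AMD's):

```python
# Toy model of super-linear multi-GPU scaling from cache effects.
# All constants are illustrative, not measured hardware figures.

def avg_access_time(working_set, cache_size, t_hit=1.0, t_miss=20.0):
    """Average memory access time under a simple hit-rate model:
    the cached fraction of the working set hits, the rest misses."""
    hit_rate = min(1.0, cache_size / working_set)
    return hit_rate * t_hit + (1.0 - hit_rate) * t_miss

W, C = 4.0, 1.0                      # working set = 4x one GPU's cache
t_one = avg_access_time(W, C)        # single GPU sees the whole set
t_two = avg_access_time(W / 2, C)    # each of two GPUs touches half
speedup = 2 * t_one / t_two          # two GPUs, each faster per access
print(speedup)                       # > 2.0: super-linear
```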
     
  3. tEd

    tEd Casual Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,094
    Likes Received:
    58
    Location:
    switzerland
    Basically, AI prevents you from getting all the nice things at the same time. If you have AI enabled, you also have texture filtering optimisations enabled. They do decrease filtering quality (this is my opinion, which I base on my testing). Fine, I disable AI and I get the maximum filtering quality of the card, but I lose compatibility fixes, workarounds for games that may not support AA, and of course CrossFire compatibility.


    IMO this isn't competitive anymore. With Nvidia I can do all of that easily: disable filtering optimisations, force any SLI mode, and still have the games' compatibility fixes/AA workarounds and whatnot.

    It's kinda sad really, because now they have a really nice card with the 4850, and with CrossFire you would get a nice performance/price package, but ......

    AI Standard is an OK thing for people who just want things to run, but for more "advanced" users it's not that good.
     
  4. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    Same exact view!
     
  5. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    From that paper:
    A most welcome addition! :) Best would be if we got full MULs for each unit, but that's of course more expensive; with full-performance shifts we can at least implement fast integer multiplications with a little shader wizardry. Can't wait to see the performance of my GPU Texture Compression demo running on the 4870. It has loads of shifts, so this improvement alone should speed things up quite a bit.
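    The shift trick alluded to here is classic shift-and-add multiplication: decompose one operand into its set bits and accumulate shifted copies of the other, the kind of sequence that full-rate shifts make cheap. A minimal sketch (plain Python standing in for shader code; `mul_via_shifts` is a hypothetical name):

```python
def mul_via_shifts(a, b, bits=32):
    """Multiply two unsigned integers using only shifts and adds."""
    result = 0
    for i in range(bits):
        if (b >> i) & 1:          # if bit i of b is set...
            result += a << i      # ...add a shifted left by i places
    return result & ((1 << bits) - 1)  # wrap to 32 bits like GPU ints

print(mul_via_shifts(1234, 5678))  # 7006652, same as 1234 * 5678
```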
     
  6. ChronoReverse

    Newcomer

    Joined:
    Apr 14, 2004
    Messages:
    245
    Likes Received:
    1
    If you can actually see the difference in Standard AI mode, then hats off to you.

    The arguments definitely hold true for Advanced AI, sure.
     
  7. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Hmm, is that a hint that CrossFire on RV770 is using two GPUs to render a single frame and that the caches (L2) are sharing data across both GPUs equally :?:

    Jawed
     
  8. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,151
    Likes Received:
    5,086
    Non-AFR rendering modes would be cause for much whoopin' and hollerin', I would imagine. But I can only assume it's still AFR, so I'm not sure what they are getting at here.

    Now if it IS some non-AFR rendering mode, I may just suddenly be interested in multi-GPU again. :)

    Regards,
    SB
     
  9. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Precisely. I've never seen reductions in filtering quality on any of the Cat AI era Radeons I've used over the years, at least not with AI set to standard.
     
  10. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Hey, mon, always nice to see you around. Nice chip y'all got there.

    I know you don't even play CatalystMaker on TV, but can you at least tell us, given the renewed emphasis on multi-GPU at AMD, that we'll be seeing more robust software support for it in the nearish future? I think most of us are pretty sick of hotfixes to address what a robust profiler should be addressing, particularly now that it's clear AMD has made multi-GPU the centerpiece rather than a sideshow.
     
  11. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    How is the final framebuffer organized in AFR mode? Is the slave card (the one not connected to the monitor) sending its framebuffer to be stored on the master, or is it just driving the display for the frames it renders?

    If the total number of buffers is less than double the normal double/triple buffering, at least one of the cards would have more memory to play with, which in a memory-limited situation (super high resolutions for CF testing, etc.) would increase performance.
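    For scale, the color buffers in question are small next to a card's total memory. A back-of-the-envelope sketch (the 2560x1600 resolution and 32-bit color format are assumed for illustration, not quoted figures):

```python
# Rough framebuffer sizes for the AFR memory question.
# Assumed format: 32-bit color, no AA samples counted.
width, height, bytes_per_pixel = 2560, 1600, 4

buffer_bytes = width * height * bytes_per_pixel      # one color buffer
double_buffered_mb = 2 * buffer_bytes / 2**20        # front + back

print(double_buffered_mb)  # 31.25 MB for two color buffers
```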
     
  12. dizietsma

    Banned

    Joined:
    Mar 1, 2004
    Messages:
    1,172
    Likes Received:
    13
    If ATi have gone for multiple chips to support their high-end product, then does this not limit them in the future to only AMD chipsets, using poor desktop CPUs compared to the competition?

    Staking your future on multiple GPUs when the main supplier of hardware support for this is your number 1 enemy seems rather a risky thing to do.

    Maybe that is part of the reason why Nvidia went large monolithic again; they are already having trouble with licensing for Nehalem chipsets for multi-GPU.
     
  13. ChronoReverse

    Newcomer

    Joined:
    Apr 14, 2004
    Messages:
    245
    Likes Received:
    1
    Intel chipsets support Crossfire. This means even the best CPUs on the (arguably) best chipset can still support ATI's multi-gpu.
     
  14. Mark

    Mark aka Ratchet
    Regular

    Joined:
    Apr 12, 2002
    Messages:
    604
    Likes Received:
    33
    Location:
    Newfoundland, Canada
    Well, there's the 4870 X2 and 4850 X2 (if such things exist), which are multi-GPU on a single card and will work in any motherboard, be it AMD, Intel, or NVIDIA based.
     
  15. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    I seem to recall there is another company that makes Crossfire capable mobo chipsets as well. Starts with an "I" or something.

    Edit: Ah, I missed the "future" point. Well, the thing is, one suspects that a pretty significant portion of Intel's high-end cpu sales are to gamers. Just who would be screwed the more if Intel platforms can't run multi-gpu solutions at all? Then of course there is Ratchet's point re single-card multi-gpu; that is bus agnostic.
     
  16. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    No, it's just saying that with multi-gpu (and also with multi-cpu), there are cases that can be super-linear, and cache bottlenecks are a prime example.
     
  17. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Are we at the point, though, where it's at least feasible that (newer) X2 or GX2 type cards could scale well without having to rely on AFR, or do you see AFR as something that will stick around a bit longer (in relation to X2 and GX2 type cards only, not two separate cards)?
     
  18. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    Well, AFR is still the best way to increase both geometry and pixel processing, in general. Applications are being written nowadays to be more compatible with this type of rendering -- eliminating, say, frame-to-frame dependencies and other elements requiring synchronization and communication.

    So, yes, I expect AFR to stay around.
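    The cost of such a frame-to-frame dependency under AFR can be sketched with toy timings: if frame N must read frame N-1's output, the second GPU stalls until the first finishes, and the overlap vanishes. An illustrative model (the 10 ms figure is invented):

```python
# Sketch of why frame-to-frame dependencies defeat AFR scaling.
# Hypothetical timing: each frame takes 10 ms of GPU work.

FRAME_TIME_MS = 10.0

def afr_time(frames, dependent):
    """Total time for `frames` frames on two GPUs under AFR.
    If every frame reads the previous frame's output (`dependent`),
    the GPUs serialize; otherwise odd/even frames overlap fully."""
    if dependent:
        return frames * FRAME_TIME_MS       # no overlap: serialized
    return (frames / 2) * FRAME_TIME_MS     # ideal 2-way overlap

print(afr_time(100, dependent=False))  # 500.0 ms -> 2x scaling
print(afr_time(100, dependent=True))   # 1000.0 ms -> no scaling
```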
     
  19. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    But how can scaling be superlinear in this case if the caches are independent? In general, if you have two processors and two cache blocks, superlinear scaling can only happen if the processors have access to both blocks.
     
  20. Pete

    Pete Moderate Nuisance
    Moderator Veteran

    Joined:
    Feb 7, 2002
    Messages:
    4,945
    Likes Received:
    348
    Eric, nice to see you got a chance to post here again, and glad to hear you guys have got your QuakeWorld/OpenGL issues sorted out. It's about time. :razz:

    Seriously, congrats on and thanks for the great chip.
     