HD 4870 review thread.

Discussion in '3D Hardware, Software & Output Devices' started by mczak, Jun 24, 2008.

  1. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14

    I noticed that as well, and it really drives home that point.
     
  2. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    It would depend on the title for me, but it would be wonderful to have both options - if that's even possible.
     
  3. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,893
    Likes Received:
    344
    Location:
    PA USA
    So lots of people here hate [H] for some reason, but I found the ending of his review funny.
     
  4. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,153
    Likes Received:
    483
    Location:
    en.gb.uk
    These do look like awesome cards. I think this is the first time since the X800XL that ATI has had a nice mix of price/performance/power at a time when I'm starting to think about an upgrade. Though I'm still waiting to see whether PowerPlay is working properly, and whether it offers idle power comparable to what HybridPower should (in theory, if it works) achieve.

    It's good to see ATI back in the game; competition is always good for us end-users, though I'm kind of dreading the fan-boy wars (which seem to have already started here) and the NV30 references.

    Good stuff ATI.
     
    #44 nutball, Jun 25, 2008
    Last edited by a moderator: Jun 25, 2008
  5. thatdude90210

    Regular

    Joined:
    Aug 9, 2003
    Messages:
    937
    Likes Received:
    6
    It doesn't look like ATI is having much trouble getting enough GDDR5. Newegg is showing three different HD 4870s in stock, and even Bestbuy.com has it available online. I seem to remember it being much harder to get the X800 at launch.
     
  6. Npl

    Npl
    Veteran

    Joined:
    Dec 19, 2004
    Messages:
    1,905
    Likes Received:
    6
    Hmm, now that ATI is concerned with efficiency, how about undoing the bloated .NET Control Center, just like the ill-fated (but very hyped, just like CCC) ring bus was discarded? :wink:
     
  7. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Awesome, then they'll be unneeded on that great day in Zion when the birds sing, the little children play, and we all hold hands and sing kumbaya. In the meantime, interim solutions to a need they admitted to years ago would be most welcome. :smile:
     
  8. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land

    Oooh, you want me to pull out the big gun, eh? Okay, done. See, if Nvidia had delivered on F@H earlier, I could have moved on to the next crusade. :lol: I was actually shocked to discover this morning that Terry's post on the subject - admitting (for the first time, I think) that a CF games profile editor was something they recognized the need for - will be two years old next month.
     
  9. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,146
    Likes Received:
    5,082
    CCC isn't bloated. At least not on my machine. It's taking up less than 10 megs out of 8 gigabytes of memory. Somehow I think I can spare that. :p

    Yeah, something is definitely odd here, as AMD's slides indicate that RV770 should have lower idle power draw than RV670.

    But it's idling at 500 MHz instead of 300 MHz like the RV670. So something is definitely not working correctly. Hopefully it's just a driver or BIOS thing.

    Regards,
    SB
     
    #49 Silent_Buddha, Jun 25, 2008
    Last edited by a moderator: Jun 25, 2008
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    Some say that at least clock gating isn't working in the current drivers; it affects both load and idle watts:
    http://static.pcinpact.com/pdf/docs/03ScottHartog-RV770Architecture-Final.pdf
    Page 20
     
  11. A.L.M.

    Newcomer

    Joined:
    Jun 2, 2008
    Messages:
    144
    Likes Received:
    0
    Location:
    Looking for a place to call home
    Every graphics card feature that isn't open and common to everyone is ill-fated. It doesn't matter how good it is. Just remember the Glide API.
    PhysX is good in theory, but a lot of guys with a PPU experienced a lot of problems. Are you sure that only a few months after the acquisition of Ageia, Nvidia will sort all the problems out?
    In fact, as it is now, it seems that the PhysX boost only works in 3DMark Vantage and in the UT3 mod.
    I will say at least that it can't be a deciding factor, just as the tessellator and DX10.1 are not for ATI products.
    Besides that, Havok will have support from CPUs and GPUs (both AMD and Intel). Who do you think has more chances to win? :wink:
     
  12. JasonCross

    Newcomer

    Joined:
    Jul 14, 2005
    Messages:
    39
    Likes Received:
    4
    You can't make that argument in a vacuum, though. Developing a new ASIC based on the RV770 technology, only twice as big/wide (or whatever...something "monolithic" and "big") requires engineering resources, time, and money. That's engineering resources, time, and money that wouldn't have gone into RV770!

    So you'd be left with a really super big chip based on an architecture that, perhaps, wouldn't be as good as what we've got now.

    I agree that single-GPU solutions suffer from fewer quirks than multi-GPU stuff, whether on one board or multiple boards. It's not all just about how much "scaling" they can get - there are issues like running a game in a window, games that are particularly unfriendly to multi-GPU rendering, or GPGPU stuff that might only recognize or run well with a single GPU and therefore not scale at all with your expensive board.

    But to assume that a bigger, higher-end GPU design does not come at the cost of engineering resources devoted to the smaller chips or new architecture in general is a mistake. It's a fiercely competitive business and even if you have unlimited financial resources (which AMD most certainly does not!), and all the manpower in the world (again, no), engineering time is an issue for everyone.

    AMD put their stake in the ground, and I won't commit to whether it was the right or wrong thing to do. It's far too early to tell. But the decision certainly was not made foolishly, nor was it forced on them.
     
  13. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    For the life of me, I wonder why they didn't add a CF game profile editor. They must have had their reasons, but they're so mysterious at times, heh! :)

    That was the missing link for me with CrossFire! Man, what a core to build the CrossFireX brand name! Damn!
     
  14. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Overclockers UK received 300 and they went on sale today.

    http://forums.overclockers.co.uk/showpost.php?p=11961858&postcount=135

    But as seemingly the only UK retailer with decent stock they're gouging.

    Jawed
     
  15. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Hi,

    First post, so go easy on me.

    Looking at the benchmarks from Anandtech and comparing 4870 CrossFire to the GTX280

    Crysis
    CrossFire scaling: 32%
    Performance over GTX280: 15%

    Call of Duty 4
    CrossFire scaling: 112%
    Performance over GTX280: 75%

    ET: Quake Wars
    CrossFire scaling: 5%
    Performance over GTX280: -9%

    Assassin's Creed
    CrossFire scaling: 23%
    Performance over GTX280: 20%

    Oblivion
    CrossFire scaling: 48%
    Performance over GTX280: 8%

    The Witcher
    CrossFire scaling: 101%
    Performance over GTX280: 58%

    In the instances where CrossFire scaling is in the 100%+ ballpark, it's downright amazing. I mean, is that even possible?

    Also, the fastest-card performance title would go back to AMD after a long time.
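    For anyone wondering how figures like these are derived: a minimal sketch of the arithmetic, using made-up FPS numbers for illustration (not the actual Anandtech results):

    ```python
    def scaling_pct(multi_gpu_fps, single_gpu_fps):
        """Extra performance gained from the second GPU, as a percentage."""
        return (multi_gpu_fps / single_gpu_fps - 1.0) * 100.0

    def lead_pct(a_fps, b_fps):
        """How much faster setup A is than setup B, as a percentage."""
        return (a_fps / b_fps - 1.0) * 100.0

    # Hypothetical numbers: one HD 4870 at 50 fps, 4870 CrossFire at
    # 106 fps, GTX 280 at 60 fps.
    print(round(scaling_pct(106, 50)))  # 112 -> "112% scaling"
    print(round(lead_pct(106, 60)))     # 77  -> "77% over the GTX 280"
    ```

    Apparent scaling above 100% usually means the single-card run was held back by something other than the GPU (driver overhead, CPU, etc.), rather than the second card literally more than doubling throughput.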
     
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Hmm, so the TUs are bigger than we were expecting:

    http://www.techreport.com/articles.x/14990/2

    bottom picture.

    So the "ALUs" (who knows what else might be in that bit of each SIMD) amount to 28% of the die, while the TUs amount to 12%.

    Over on the centre of the right hand edge of the die shot is a bright blue very dense looking interface that I presume is the CrossFireX Sideport. Is it reasonable to guess, based upon the fine structure, that it is clocked relatively slowly and isn't driving over a huge distance? If it's slow, is bandwidth increased by the sheer quantity of lanes?

    Jawed
     
  17. tEd

    tEd Casual Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,094
    Likes Received:
    58
    Location:
    switzerland
    While I would like to go with ATI again, until they reconsider their Catalyst AI options I just can't.
     
  18. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    Not a big fan of Cat AI either.
     
  19. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Wanted to clear up something from the rumor thread:
    I think there is a misunderstanding here. First, "32 bilinear interpolators" doesn't really make sense. RV770 has 40 texture units and each is capable of bilinear filtering at full speed on INT8 textures.

    Second, as already mentioned on this site, you aren't always interpolator limited, even with INT8 textures. Take the 3DMark06 multitexture fillrate test. Each layer consists of 8 textures using 8 2D texcoords. With RV770, you can do an average of 2.5 texture lookups per pixel per clock, so 8 texture lookups would be ~3.2 clocks per pixel. However, it takes 4 clocks to send down 8 texcoords so that ends up being the bottleneck (2 coordinates per clock). If the app packed the coords into 4 4D coords, then it would only take 2 clocks to send down the coords and you'd be texture limited as expected.

    Also, texcoords generated in the shader won't cause you to be interpolator limited.
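    The bottleneck arithmetic above can be sketched in a few lines. The 2.5 lookups per pixel per clock figure is taken as given (it's consistent with 40 texture units shared across an assumed 16 pixels per clock), and 2 texcoords per clock is the interpolator rate quoted in the post - treat both as the post's numbers, not official specs:

    ```python
    # Per-pixel rates from the post above (assumptions, not vendor specs).
    TEX_LOOKUPS_PER_PIXEL_PER_CLOCK = 40 / 16   # 2.5 lookups/pixel/clock
    TEXCOORDS_PER_CLOCK = 2                     # interpolated coords/clock

    def clocks_per_pixel(num_lookups, num_texcoords):
        """Clocks per pixel: the slower of texturing and interpolation wins."""
        tex_clocks = num_lookups / TEX_LOOKUPS_PER_PIXEL_PER_CLOCK
        interp_clocks = num_texcoords / TEXCOORDS_PER_CLOCK
        return max(tex_clocks, interp_clocks)

    # 3DMark06 multitexture layer: 8 lookups via 8 separate 2D coords.
    print(clocks_per_pixel(8, 8))   # 4.0 -> interpolator limited
    # Same 8 lookups with coords packed into 4 4D vectors.
    print(clocks_per_pixel(8, 4))   # 3.2 -> texture limited
    ```

    Packing the eight 2D coordinates into four 4D vectors halves the interpolation cost, which is exactly why the second case becomes texture limited at ~3.2 clocks per pixel.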
     
  20. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,268
    Likes Received:
    1,785
    Location:
    Winfield, IN USA
    Could you elaborate a little please tEd?
     