Radeon HD 4770 Reviews

Discussion in '3D Hardware, Software & Output Devices' started by Arty, Apr 28, 2009.

  1. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
  2. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    Whether this is a good idea or not would depend on a lot of factors - inventory levels, product lifetime before the launch of DX11 parts, projected volumes and margins, et cetera. And of course, the rumours you've heard may be wrong.

    The HD4770 is a pretty nice product, and it could turn out to be very successful at a time when consumers are holding onto their wallets. But then again if consumers really hold on to their wallets, they may instead keep what they have until there is a significant change in either performance or features. It remains to be seen.
    Generally speaking, graphics sales seem to be driven by either performance or feature set (although I'd argue that features are dropping in importance). Introducing both 40nm and DX11-compliant parts at the same time is attractive because it should allow the IHVs to make both arguments at once for a strong sales push. On the other hand, it doesn't leave many openings over the next two to three years to introduce something really new. I predict that the first 40nm DX11 parts will be relatively modest, to leave room for the introduction of new products at the same basic technological level.

    I don't think the RV740 will be the only DX10-level part built on 40nm. Staggering the introduction of new lithography and new features makes a fair bit of sense. And I doubt nVidia will want to yield this segment for too long on the desktop.
     
  3. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,452
    Likes Received:
    10,357
    It's not as bad for Nvidia if they don't currently. While GT200 might be a relative financial disappointment (I hesitate to say failure), G92 has been gangbusters for them. It's only slightly larger than RV770 (as G92b) and continues to sell quite well.

    Factor in that they've already taken the hit by writing down inventory in Q4 '08, and it's pure profit until they finish clearing out all the inventory.

    And even after they clear inventory it isn't at such a huge cost disadvantage versus Rv770 that their margins would take a nosedive.

    If all they had to rely on was GT200, then yes, they'd be in a lot of trouble. G92b does buy them some time though.

    Now if only they would stop trying to pull one over on customers and stop renaming it. The 9800 XX seems to be selling just fine without a new name. :roll:

    Now if only it supported DX10.1. /sigh.

    Regards,
    SB
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,729
    Likes Received:
    2,142
    Location:
    London
    I don't think NVidia wrote down any inventory (at least not 65nm/55nm chips). The hit NVidia took was presumably from loss of interest and warehousing costs.

    But there's little doubt that G92 is so well established that it has generated an awesome amount of profit over the last 20-odd months.

    Jawed
     
  5. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,251
    Likes Received:
    4,468
    Location:
    Finland
    I'm sorry, but using WoW like that to bench is a bad, bad joke.
    Killing a few NPCs in the human starting zone - or any other starting zone - gives you absolutely no idea of how the cards perform in the game.
    If you want to see the performance in WoW, you need to go to Northrend, and you need to go to raids (which, though, removes any real possibility of benchmarking, as the runs can't be repeated even close to identically), etc.

    They said they used 'Ultra' settings, which means they used the latest patch. The only difference between the highest notch and one notch down is the drawing distance of shadows.

    edit:
    To give you some impression of what I mean, you can see their screenshot of a human starting zone.
    I took 5 screenshots, which represent quite typical environments in Northrend.
    You can find them at http://home.akku.tv/~akku38901/B3D/
    And let's not even get started on raids, where you have 25 players + NPCs, most casting fancy light balls and whatnot everywhere.
     
    #25 Kaotik, May 14, 2009
    Last edited by a moderator: May 14, 2009
  6. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    I wouldn't fault them for using slightly lowered settings. Quite the contrary - ultimately, benchmarking is only relevant if it can act as a predictor of application performance. It's a bit ridiculous to see review after review testing cards with, for instance, Crysis at settings that yield the frame rates of a brisk slideshow. What is the point?

    I'm not a big fan of the way HardOCP does their benchmarking, as IMHO the limits of playability are far too fuzzy, too application-dependent, and too personal a concept to be transferable. But they do have a strong point in trying to look at performance at relevant settings rather than settings that yield 1200 fps or 10 fps.

    My favourite among all the reviews of the HD4770 that I have read is the one at TechPowerUp. They test the card against a lot of competition (telling you not only about the 9800GTX competitor, but also what you could gain by upgrading further, while allowing direct comparisons against a lower-end card), across a lot of games (not only the staples), and at different resolutions and levels of AA/AF, so that you can see where a card like the 4870, with similar shader performance but much greater bandwidth, will pay off. I found it very informative, and great at presenting both where the card fits in the marketplace and what it can do for the game player.
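    Summaries of this kind usually boil down to normalising each game's result against a baseline card and then aggregating. A minimal sketch of the idea (the function name and all figures are mine, not taken from any review; a geometric mean is used so no single title dominates the summary):

```python
import math

def relative_performance(card_fps, baseline_fps):
    """Geometric mean of per-game FPS ratios (1.0 == baseline speed)."""
    ratios = [c / b for c, b in zip(card_fps, baseline_fps)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

    With made-up figures, a card scoring 60 and 30 fps in two games against a baseline's 30 and 60 comes out at exactly 1.0 - no overall advantage - where an arithmetic mean of the ratios would misleadingly say 1.25.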

    I realize that "everything maxed" is a simple standard setting to apply, but if it isn't a setting that provides a good gaming experience and won't be used, it doesn't really describe anything of value to the reader. Once you start dialling down the settings, however, different people will make different choices (*), you lose the ability to make comparisons between different articles, and there is greater leeway for biasing the review to favour either of the IHVs. These aspects of the reviews get too little attention, IMHO.


    (*) For instance, personally I always turn off motion blur and DOF if available, since I find their use inappropriate for gaming. Next to go is shadow quality, since I tend to look at objects and people rather than their shadows, so simpler shadows don't bother me much. And so on. But I doubt you would be able to find a consensus on the order to go about things, much less one that is suitable for all games.
     
    #26 Entropy, May 14, 2009
    Last edited by a moderator: May 14, 2009
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,251
    Likes Received:
    4,468
    Location:
    Finland
    Hmm? I said they used "Ultra" settings which means the shadows were highest too, nothing bad about that.
    The problem is using the starting zone as benchmark location as it's not representative of the actual game performance at all.
    (just compare their shot, which is from starting zone, to the shots I linked)
     
  8. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    Yes, I was going off on a tangent a bit. What you are addressing is another angle on the problem of benchmark relevance: a benchmark should try to describe the situations where performance is actually an issue - mapping the irrelevant parts of an application isn't very helpful. This has been an issue for ages, and not only for games. As far as gaming is concerned, one way to address the problem has been to record custom demos of very dense action, combined with focusing on minimum frame rates rather than averages. Even custom recorded demos still neglect that the CPU/memory subsystem typically becomes a lot more stressed in these situations as well. If you take these issues into account, where frame rates can easily drop to a third of the average (and where high frame rates are critical!), the low frame rates shown in a lot of benchmarking become even more absurd.
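    The minimum-versus-average distinction can be sketched in a few lines. Everything here (the function name, the idea of a per-frame timing log, the crude percentile) is my own illustration, not something from any of the reviews discussed:

```python
def frame_stats(frame_times_ms):
    """Summarise a frame-time log: average FPS, worst-frame FPS,
    and 1st-percentile FPS (the rate exceeded 99% of the time)."""
    fps = sorted(1000.0 / t for t in frame_times_ms)  # ascending: worst frames first
    avg = sum(fps) / len(fps)
    worst = fps[0]
    p1 = fps[int(0.01 * len(fps))]  # crude percentile index, fine for a sketch
    return avg, worst, p1
```

    A run that averages near 60 fps but contains a handful of 30-50 ms frames reports a worst case around 20 fps - exactly the gap between "looks fine in the bar chart" and "stutters when it matters".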

    In years gone by these issues were discussed more often on Beyond3D forums, particularly when vendor benchmark cheating was a hot topic. These days the focus of the forums is more on theoretical capabilities, architecture, and speculation around these topics, rather than (and sometimes disturbingly disconnected from) actual application use.
     
  9. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,729
    Likes Received:
    2,142
    Location:
    London
    The quality of data in most reviews these days is so poor (e.g. different drivers for different cards of the same brand) that it's a royal pain trying to interpret performance.

    Jawed
     
  10. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    Yup. Don't know if it has gotten worse actually. It was always a royal pain. :) But awareness of the problems was higher back then - exposing the cheating did have some unintended benefits.

    Didn't mean to say that things were better then at B3D, merely different. It does seem that interest in application performance has dropped quite a bit. In some sense, this is a sign of market maturity. However, it also implies a declining interest in general. One reason that the more theoretical/architectural discussions are more dominant now could simply be that the ones who cared about performance out of gameplay concerns are reasonably satisfied today, and that those who remain are more interested in performance for its own sake (and are much fewer), leaving the field to the more architecturally inclined, such as yourself. What I find a bit lacking now is a discussion of where we want graphics to go and why. Much of the discussion I see is either completely disconnected from graphics, or lacking in questioning the actual value of the features - do they justify their cost in power draw and money? Not including cost/benefit in the analysis feels like a stage CPU design left behind more than two decades ago, with the dismal failure of the iAPX 432 and the subsequent rise of RISC.
     
    #30 Entropy, May 14, 2009
    Last edited by a moderator: May 14, 2009
  11. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Of course the REAL problem there is that the drivers differ so much between versions because of bugfixes and 'optimizations' *cough*cheats*cough*.
    If the drivers were of better quality, and didn't try to patch up games with shader replacements and all that, we wouldn't see such differences.
     
  12. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,729
    Likes Received:
    2,142
    Location:
    London
    What's the nature of the optimisations? Elision/LOD manipulation for textures? What kinds of bugs are common? What proportion are due to over-aggressive optimisation?

    Also, is it reasonable to hand-code replacements for shaders in games? Back when Doom 3 shader replacement was all the rage, it was an easy target; there was basically nothing else to play with. Sure, you can find the 5 most costly shaders in a game these days and theoretically re-code them. But maybe it's easier to take those costly shaders and use them as hints to steer compilation optimisations.

    What proportion of the shaders in games are written by IHV devrel?

    From the experiments I've done with Stream Kernel Analyzer it seems the ISA compiler has a few tricks it needs to learn. But it's subtle stuff...

    Jawed
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Why this barrage of questions?
    My point is just that it would be nice if drivers were good from their first release, and if they just did exactly what you'd expect them to do, nothing more.
     
  14. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,729
    Likes Received:
    2,142
    Location:
    London
    I'm not asking you, specifically, for answers! I'm not even sure if we'll get decent answers. Also, it's a huge subject. I don't have any real insight on the gap between correctness and "high-performance-correctness". Nor any real idea how architecture-dependent these things are.

    This post is interesting, because it relates to something pretty obscure:

    http://castano.ludicon.com/blog/2009/03/19/gpu-dxt-decompression/#more-1142

    yet the end result is a solution to unwanted artefacts in corner cases.

    Maybe someone at Intel can be persuaded to blog about driver development on Larrabee, say after the curtains have been drawn back at IDF... It would be an interesting and hopefully reasonably fresh perspective.

    Jawed
     
  15. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,900
    Likes Received:
    5,342
    Something that no one's mentioned: just because ATI hardware runs DX10.1 faster than DX10 doesn't mean NV hardware would (if it supported it).
     
  16. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,900
    Likes Received:
    5,342

    Whoa, careful there Scali. I suggested things should just work once, and everyone thought I was crazy.
     
  17. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I think that's implicit though.
    With DX10.1 you can write code that works smarter, not harder. It allows you to reduce render passes and such. If there is any DX10.1 hardware that DOESN'T get faster from doing less, there's something wrong :)
     