AMD RV770 refresh -> RV790

Discussion in 'Architecture and Products' started by w0mbat, Nov 10, 2008.

  1. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Well, in the first place it was/is a benchmark (and one the Radeons win hands down).

    I have yet to see marketing guys from either company spring up and say 3DMark Vantage was not a real application and [insert preferred choice of lame excuses].

    It's also the first benchmark I have seen that a company wants to lose on purpose, since that's what has been happening since around Catalyst 8.8 or so.

    Funny, purely fictitious scenario: tomorrow Viva Pinata 2 goes public, this time with (more) real(istic) fur instead of tessellation heating up your GPU to the max. What's going to happen then? An artificial performance drop in upcoming drivers?

    And that's the whole point: who guarantees that something like this doesn't happen?
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,110
    Location:
    New York
    No kidding:

    idle - 150w
    vantage extreme - 270w
    furmark burn - 320w :shock:

    Those figures are for the entire machine, but 50W more is impressive.
     
  3. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    Normal workloads do the same thing. All Furmark shows is that Nvidia and ATI did poor power/thermal testing.
     
  4. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Sounds like a bit of a security concern as well. Like back when CPUs didn't throttle properly in response to overtaxing workloads, and compromised systems could literally be irreparably physically damaged from across the world.

    Not that I really expect the next big worm out there to specifically target overheating the graphics processors in the systems.
     
  5. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    I assume you have at least some evidence for this?
     
  6. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    What, the poor thermal testing, or that normal workloads do the same thing? There are plenty of other programs that push ATI/Nvidia out of spec; Furmark is only remarkable in that it pushes them so far out of spec.

    As far as the power/thermal testing goes, I think that is pretty self-evident.
     
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    The workloads. I'd be curious to see any other software besides Furmark that actually pushes them over PCIe specs and/or TDP.
     
  8. Creig

    Newcomer

    Joined:
    Nov 20, 2006
    Messages:
    57
    Likes Received:
    1
    Wouldn't both a 6+8 and an 8+8 4890X2 card still be in spec? From what I recall (and please excuse me if my memory is faulty), the 6-pin PCIe connectors are rated at 75W max while the 8-pin PCIe connectors are rated for 150W max. And for the PCIe slots, 1.0 and 1.1 can supply 75W max while the 2.0 version can supply 150W.

    So even if you had a PCIe 1.0 board with an 8+8 card installed, you're looking at 75W (board) + 150W (PSU) + 150W (PSU), or 375W max. The 6+8 version would be limited to a 300W draw.

    The current 4870X2 has only a 6+8 connector configuration. And a single 4890 draws less power under load (approx. 10 watts less) than a 4870. So I could easily see a 6+8 4890X2 2GB and an 8+8 overclocked 4890 4GB being within PCIe specs.
     
  9. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Perhaps it's more of a political reason? That ATi doesn't want to have the most power-hungry card on the market, because it would have a negative impact on the lean-and-mean image they have established with their graphics cards?
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    6+8 is in spec, 8+8 isn't.
    The rumor of the PCIe 2.0 slot providing 150W was false; it's still 75W like a 1.0 slot, and the max allowed power is 300W (6+8+slot, 75+150+75).
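    The budget arithmetic above can be sketched as a quick check, using only the per-source limits quoted in this thread (slot 75W, 6-pin 75W, 8-pin 150W); the helper name is just for illustration:

```python
# Per-source power limits as quoted in this thread (assumption: these
# are the PCIe CEM ratings being discussed): slot, 6-pin aux, 8-pin aux.
LIMITS_W = {"slot": 75, "6pin": 75, "8pin": 150}

def board_budget(aux_connectors):
    """Max draw for a card: the slot plus its auxiliary connectors."""
    return LIMITS_W["slot"] + sum(LIMITS_W[c] for c in aux_connectors)

print(board_budget(["6pin", "8pin"]))  # 300 -- the in-spec 6+8 maximum
print(board_budget(["8pin", "8pin"]))  # 375 -- past the 300W spec ceiling
```

    This makes the point of the correction concrete: an 8+8 card could physically pull 375W, which is why it falls outside the 300W maximum the spec recognizes.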
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    The articles I've seen indicate the PCIe spec has a revision for each additional power connector configuration: a 6-pin connector, and 6-pin plus 8-pin connectors.

    Is there a revision for two 8-pins, or are we going to see ATI and Nvidia pushing another revision?
     
  12. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
    Lean-and-mean image? Seems like just about everybody knows the 4870 and up suck power like Dracula even when idling. Regardless of whether or not this is the case, it's what the majority of reviews show.
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Well, people on this forum mainly seem to praise ATi's great performance-per-mm² ratio.
    Which I suppose means lean-and-mean.
    It makes nVidia's GPUs look overly complex and large, requiring bigger, more power-hungry cards.
     
  14. Creig

    Newcomer

    Joined:
    Nov 20, 2006
    Messages:
    57
    Likes Received:
    1
    So the main stumbling block is that even though the power draw would be within spec for both the slot and power connectors, it's invalid because the current PCI-E standards don't recognize 8+8 as a valid configuration? Sounds more like a technicality.

    As long as the card doesn't exceed max draw on any of the connectors, I would think it would get PCI-E approval. Unfortunately, as I'm not a PCI-SIG member, I can't download the spec papers to read just how they spell out what is allowed and what isn't.
     
  15. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Well, it might be troublesome with PSUs that follow PCIe specs and therefore offer pairs of 6+8 connectors: you would run out of 8-pin connectors too soon and be left with 6-pin connectors unused.
     
  16. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    That's IMO a secondary concern compared to marketing something that's not adhering to an open industry standard.

    Many PSUs, and not only mGPU-ready ones, already feature more than one 8-pin cable.
     
  17. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
  18. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
  19. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    #1119 UniversalTruth, Apr 23, 2012
    Last edited by a moderator: Apr 23, 2012
  20. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,535
    Likes Received:
    144
    Please refrain from posting "sky is falling" stuff when it can be avoided with ease. There's a large difference between "dropping Catalyst support" and "dropping FGLRX support in Linux".
     