AMD: R8xx Speculation

Discussion in 'Architecture and Products' started by Shtal, Jul 19, 2008.

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 GPU lineup?

Poll closed Oct 14, 2009.
  1. Within 1 or 2 weeks: 1 vote (0.6%)
  2. Within a month: 5 votes (3.2%)
  3. Within a couple of months: 28 votes (18.1%)
  4. Very late this year: 52 votes (33.5%)
  5. Not until next year: 69 votes (44.5%)
  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,859
    Likes Received:
    2,790
    Location:
    Finland
    Sideport is an extra communications link meant for chip-to-chip communication.
     
  2. turtle

    Regular

    Joined:
    Aug 20, 2005
    Messages:
    279
    Likes Received:
    8
    Somehow, I think this was in development long before Lucid existed, not to mention that in an X2 product there would no longer be a need for an external solution. There's always the possibility a Lucid chip could replace the PLX on an X4 as well.

    A little while ago I posted this tidbit over @ XS:

    http://www.xtremesystems.org/forums/showthread.php?t=212665

    That's my (and others') little rant about the possibilities of Hydra... not to mention that I tripped over what seems to be the addition of the chip to RD890 boards.

    I still find it amusing that such a scenario could happen, considering who one of Lucid's primary investors is. :lol:

    As mentioned in that thread, the idea of sideport used on an X2 plus a Hydra chip on-board sure makes RD890 exciting. In theory, if an X2 is seen as a single GPU, whether because of sideport and/or MCM, an on-board Hydra chip could allow for some bad-ass scaling with four GPUs (5870X2... x2?).
     
  3. turtle

    Regular

    Joined:
    Aug 20, 2005
    Messages:
    279
    Likes Received:
    8
    At first I thought you meant chips, but re-reading it, I'll tell you why it likely won't change. Nvidia didn't for G80, which used every millimeter, and we can't forget this monstrosity, compared to which my idea would be only slightly smaller... just enough that no more indented shims need be invented. :lol: (Yeah yeah, I know it's tilted because it needed equal trace lengths... but GDDR5 doesn't :) )


    (Sorry for the massive sequential posts)
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    NVidia didn't do what for G80?

    Jawed
     
  5. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha

    Joined:
    May 14, 2005
    Messages:
    1,385
    Likes Received:
    299
    Location:
    NY
    Yeah because Hydra is guaranteed to work (especially in all cases)...:razz:
     
  6. bowman

    Newcomer

    Joined:
    Apr 24, 2008
    Messages:
    141
    Likes Received:
    0
    Probably more so than half-assed Crossfire and PCIe bridge chips...
     
  7. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha

    Joined:
    May 14, 2005
    Messages:
    1,385
    Likes Received:
    299
    Location:
    NY
    Yeah you keep telling yourself that.
     
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,346
    Likes Received:
    3,864
    Location:
    Well within 3d
    The comparison is flawed enough to make 10% power savings fall within the noise.

    While a GPU takes up the lion's share of power consumption, there are other pieces of hardware on the PCB that are not duplicated on an X2 board and that consume power. That saving could be counterbalanced by the PCI-E bridge chip, which does burn a bit of power.

    In addition, chips destined for X2 boards would have been binned from the portion of the pool with better-than-average power characteristics, leaving the thermally less desirable chips for the single-chip cards.
    So yes, one could say that the chips on X2 boards burn less power, because the more power-hungry chips aren't allowed on X2 boards.

    Can you give me the exact chips you are comparing: clock speed, stepping, and bus speed?
    The IO wattage saved is not going to amount to 50% of the total power consumption of a single chip. The amount used by the FSB is on the order of maybe 5-10 watts, and we'd only see savings on a portion of that.

    A more straightforward conclusion is that any chip a board will carry two of should not eat up 150 W each when 300 W is the maximum allotted for the card.
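
    As a toy sanity check of that arithmetic (the 300 W ceiling is the board maximum from the post above; the shared-overhead figure is purely an assumed illustration):

```python
BOARD_LIMIT_W = 300      # maximum power allotted for the whole card
SHARED_OVERHEAD_W = 40   # assumed: bridge chip, VRMs, memory, fan, etc.

# Whatever isn't shared overhead must be split between the two GPUs.
per_chip_budget_w = (BOARD_LIMIT_W - SHARED_OVERHEAD_W) / 2
print(per_chip_budget_w)  # 130.0 -> each chip needs to stay under ~130 W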

    A comparison across designs and across a full process transition is a very weak one.

    A more straightforward trend is that GPUs are power- and heat-constrained enough that designers are willing to sacrifice clocks and die space to stay within those limits.
    A successor chip in the same product range as RV670 is going to target roughly the same thermal envelope.

    Given the design freedom of a new generation and what parameters they have to tweak, any attempt at analysis beyond that without more concrete and meaningful data is just banking on happy coincidence.

    If intrachip latency were a factor, a square die would be best. A rectangular one means putting some units further away from one another.

    I think this is a heavily weighted factor.
    Chipsets are frequently pad-limited, thanks to the significant amount of IO they need to support relative to the amount of logic they contain.
    A value GPU with a given amount of silicon needs a certain amount of IO, and that IO may not scale down in size as well as the logic does.
    Intel's Atom processor is a small core, and its length is dominated by its bus pads.
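
    To make the pad-limit point concrete, here's a minimal back-of-the-envelope sketch; the pad count and pitch are made-up round numbers, not real RV770 or Atom figures:

```python
import math

def min_die_area_mm2(logic_area_mm2: float, io_pads: int, pad_pitch_mm: float) -> float:
    """Minimum area of a square die that must fit `io_pads` perimeter
    pads at `pad_pitch_mm` spacing as well as the logic itself."""
    pad_limited_edge = io_pads * pad_pitch_mm / 4   # pads spread over 4 edges
    logic_limited_edge = math.sqrt(logic_area_mm2)  # edge needed for logic alone
    edge = max(pad_limited_edge, logic_limited_edge)
    return edge * edge

# 512 pads at 0.1 mm pitch force a 12.8 mm edge (163.8 mm^2) no matter
# how small the logic shrinks.
for logic in (200.0, 100.0):
    area = min_die_area_mm2(logic, io_pads=512, pad_pitch_mm=0.1)
    print(f"logic {logic:.0f} mm^2 -> die {area:.1f} mm^2")
```

    Halving the logic area only trims the die from 200 mm^2 to 163.8 mm^2: past that point the perimeter pads, not the logic, set the floor.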
     
  9. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    Bump. A little birdy told me samples are expected in Q2... not sure if it's RV870 or RV830 (or whatever the other DX11 chips are called)... So it could be out in Q3... But there's also a DX11 chip due for partner validation for Q4. Supposedly all DX11 chips have True HD Audio support. Oh and someone mentioned a codename... Cypress.
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,859
    Likes Received:
    2,790
    Location:
    Finland
    Regarding that True HD Audio support, it probably means Dolby TrueHD, right? (and DTS-HD and whatever the others were)

    It's unrelated to the thread itself, but what's up with the HD4-series audio support? The HD4850/70 list AC-3, AAC, DTS, DTS-HD & Dolby TrueHD; the HD4830 (which uses the same GPU) lists only AC-3, AAC and DTS; and the lower-end HD4 cards list only AC3, yet still claim 7.1 and the same max bandwidth. Is it just crippled by software, or is some of the info on the ati.amd.com site wrong, or what's going on there?
     
  11. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,479
    Likes Received:
    219
    Location:
    msk.ru/spb.ru
    Why would a GPU need to "support" an audio encoding format?
    Probably something to do with the sample rates and bit depths of audio passed through HDMI. Does RV770 have any limitations there?
     
  12. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha

    Joined:
    May 14, 2005
    Messages:
    1,385
    Likes Received:
    299
    Location:
    NY
    R7x0 can't bitstream DD+, TrueHD, DTS-HD, or DTS-HD MA (although one could decode those bitstreams and then use the R7x0 to send the LPCM; however there is no easy way of doing this with Blu-ray/HD-DVD discs on the computer). I assume R8x0 can now bitstream those formats.
     
  13. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,220
    Likes Received:
    546
    Location:
    en.gb.uk
    Why is streaming these audio formats such an issue? Surely the hardware to support it must be trivial compared to everything else the GPU + card does? Is it merely a case of "oh yeah, I knew there was something we forgot"?!
     
  14. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,859
    Likes Received:
    2,790
    Location:
    Finland
    That's the thing I'm wondering about: why are they all mentioned for the HD4850/4870, while the 4830 is missing some and the lower-end HD4 cards are missing all but AC3?
     
  15. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha

    Joined:
    May 14, 2005
    Messages:
    1,385
    Likes Received:
    299
    Location:
    NY
    I tried to subtly allude to it, but DRM is the reason. An audio device must support an encrypted, protected path (there's no standard for this, btw, which makes things a lot worse) if it wants to be able to bitstream, or even decode, high-resolution audio from Blu-ray/HD-DVD discs on a computer. If one decrypted the Blu-ray/HD-DVD disc using an application like AnyDVD, then one could at least decode the high-resolution audio and use the R7x0. However, as I pointed out, this is not an easy solution.

    Because technically R7x0 has enough bandwidth to support the "decoded LPCM flavors" of the high-resolution formats (but no bitstream support). I'm guessing this was confusing to most, and they decided to drop mention of it on later cards. R6x0 only supports the same formats as S/PDIF does (as does GT200x).
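
    For a sense of scale, here's a quick sketch of the raw bandwidth that decoded LPCM actually needs; the stream parameters are the usual Blu-ray maxima, nothing specific to R7x0:

```python
def lpcm_bandwidth_mbps(channels: int, sample_rate_hz: int, bit_depth: int) -> float:
    """Raw (uncompressed) LPCM bandwidth in megabits per second."""
    return channels * sample_rate_hz * bit_depth / 1e6

print(lpcm_bandwidth_mbps(8, 96_000, 24))   # 7.1 @ 24-bit/96 kHz -> ~18.4 Mbit/s
print(lpcm_bandwidth_mbps(2, 192_000, 24))  # stereo @ 24-bit/192 kHz -> ~9.2 Mbit/s
```

    Both figures sit far below the ~37 Mbit/s HDMI can carry for audio (8 channels at 24-bit/192 kHz), so the gating factor really is the protected path, not raw bandwidth.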
     
  17. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,433
    Likes Received:
    181
    Location:
    Chania
    These are the times when I miss folks like "SA", who used to trigger highly interesting debates here at B3D. IMHLO there's a lot of as-yet-unpicked low-hanging fruit for IHVs when it comes to overall efficiency, and I wouldn't be surprised at all if we're about to enter a new era with D3D11 architectures.

    Anyone willing to make a couple of educated guesses what we might face in the second half of this year?

    ***edit: a quick reminder that there's also a front page at B3D *cough* http://www.beyond3d.com/content/articles/108/1
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,558
    Likes Received:
    600
    Location:
    New York
    With both Nvidia and Intel (for the most part) pushing SoA, should we expect the same from AMD? Or are there good reasons to stick with VLIW going forward?
     
  19. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,346
    Likes Received:
    3,864
    Location:
    Well within 3d
    I don't see there being any contradiction there.
    One is a way to organize the data structures, the other is a way to organize instruction issue.
     
  20. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,558
    Likes Received:
    600
    Location:
    New York
    You can use AoS/SoA to describe instruction issue as well (and Intel does so in the LRB presentation). That's a lot more fungible on LRB since it's all happening in software (not to mention that a 16-wide vector unit is a lot easier to play with than an 80-wide one), but even then Intel is pushing for SoA where possible. I'd be very surprised if the GPU guys provide that level of flexibility; don't expect AoS support from Nvidia, for example.
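
    Since the thread keeps batting AoS/SoA around, here's a minimal sketch of the data-layout side of the distinction (plain Python, made-up vertex fields, nothing vendor-specific). SoA keeps each attribute contiguous, which is what lets a wide SIMD unit fetch, say, sixteen consecutive x values in a single access:

```python
# Array of Structures (AoS): each element keeps its fields together,
# so gathering just the x components means striding across memory.
aos = [
    {"x": 1.0, "y": 2.0, "z": 3.0},
    {"x": 4.0, "y": 5.0, "z": 6.0},
]
xs_gathered = [v["x"] for v in aos]   # strided gather

# Structure of Arrays (SoA): each field is stored contiguously, so a
# wide vector unit can load a whole run of x values in one go.
soa = {
    "x": [1.0, 4.0],
    "y": [2.0, 5.0],
    "z": [3.0, 6.0],
}
xs_contiguous = soa["x"]              # one contiguous load

assert xs_gathered == xs_contiguous
```

    The instruction-issue analogue is the same idea transposed: SoA-style issue runs one operation across many elements (classic SIMD), while AoS-style issue bundles several operations for a single element into one instruction word, which is roughly what a VLIW slot arrangement does.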
     