AMD: Speculation, Rumors, and Discussion (Archive)

Discussion in 'Architecture and Products' started by iMacmatician, Mar 30, 2015.

Thread Status:
Not open for further replies.
  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    You seem to think there's going to be just one Polaris 10 model, while with two chips they pretty much have to release at least three of them. And since they've been consistent about Polaris 10 meeting the VR minimum spec, it should be quite clear that the slowest Polaris 10 hits that spec. From there, add two more models with 20% performance between each and you're already at the 980 Ti. Of course, it could end up a little under, but so might the 1070.
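    The tiering arithmetic above can be sketched in a few lines (Python; the 20% step and the normalized VR-min-spec baseline are the post's assumptions, not published figures):

```python
# Back-of-envelope for the tiering argument: if the slowest Polaris 10 SKU
# sits exactly at the VR minimum spec (normalized to 1.0 here), and each of
# the two faster SKUs adds ~20% over the one below it, the top SKU lands at
# roughly 1.44x the VR floor.
VR_MIN_SPEC = 1.0   # slowest Polaris 10 SKU, normalized (assumption)
STEP = 1.20         # ~20% between adjacent SKUs (the post's assumption)

tiers = [VR_MIN_SPEC * STEP ** i for i in range(3)]
print([round(t, 2) for t in tiers])  # three SKUs: 1.0, 1.2, 1.44
```

    Whether 1.44x the VR floor actually reaches a 980 Ti depends entirely on where that floor sits, which is the contested point in the thread.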
     
  2. First off, Fury has the HBM advantage, and I'd bet Raja Koduri wasn't talking about HBM solutions when he was making those comparisons, because no Polaris has HBM. Most probably he was talking about current GDDR5 solutions such as Hawaii or Tonga. We could move the goalposts even further and suggest he was talking about the Nano, in which case we'd have a miraculous chip with ~GTX 980 performance at less than 75W.

    Secondly, why must there be a 150W solution between Polaris 11 and 10?
    Polaris 11 is bound to be a sub-75W part, probably with no PCI-E power connectors at all. If Polaris 10 is 2.5x more efficient than an R9 390X, then it'll be 275W / 2.5 = 110W.
    Maybe they could push the clocks up and hit Fury performance levels within 150W, but at the same time they would:
    1 - Decrease chip yields
    2 - Increase the cost for PCB and power regulators
    3 - Completely cannibalize current Fiji solutions, even with huge price cuts.
    4 - Shrink the market that Vega will be targeting in the future.


    No, I fully expect to see Polaris 10 Pro and Polaris 10 XT graphics cards. I just don't expect the XT model to significantly exceed the performance of an R9 390X, if at all (and the 390X is already uncomfortably close to a Fury, BTW).
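    The 2.5x perf/W estimate earlier in this post works out as follows (Python; the 275W figure for the R9 390X and the 2.5x efficiency claim are taken from the post itself, and equal performance between the two parts is the simplifying assumption):

```python
# Perf/W sketch: delivering R9 390X performance at 2.5x the efficiency
# means the power budget simply divides by 2.5.
R9_390X_POWER_W = 275.0   # board power used in the post
EFFICIENCY_GAIN = 2.5     # "2.5x more efficient" claim

polaris10_power_w = R9_390X_POWER_W / EFFICIENCY_GAIN
print(polaris10_power_w)  # 110.0 W, matching the estimate above
```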
     
  3. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    If Polaris 10 is the big chip of the two, and is supposedly around 230mm2, where's a chip in the range between 230mm2 and the assumed huge Vega? Polaris 12 next year? A Vega 10 and 11?
     
  4. gamervivek

    Regular

    Joined:
    Sep 13, 2008
    Messages:
    805
    Likes Received:
    320
    Location:
    india
    According to TPU's latest review, at 4K the Fury X is only 26% faster than the 390X, and the reference 980 Ti only 21% faster. If Polaris 10 has a Hawaii-like configuration with the architectural improvements AMD have been talking about, they only need a ~30% clock-speed boost over a 390X (which is long overdue, considering Maxwell was doing better than that on 28nm) to easily get clear of the Fury X, 980 Ti, and Titan X.
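    The clock-speed arithmetic above can be checked quickly (Python; the 26% and 21% gaps come from the cited TPU 4K numbers, and linear performance scaling with clock is the post's simplifying assumption):

```python
# Relative 4K performance, normalized to an R9 390X = 1.00 (per the TPU
# numbers cited above). Assuming performance scales linearly with clock,
# which it rarely does in practice, a Hawaii-like part clocked ~30%
# higher would edge past both the Fury X and the reference 980 Ti.
FURY_X = 1.26      # Fury X vs. 390X at 4K
GTX_980_TI = 1.21  # reference 980 Ti vs. 390X at 4K

boosted = 1.00 * 1.30  # 390X-class configuration with a ~30% clock bump
print(boosted > FURY_X and boosted > GTX_980_TI)  # True under these assumptions
```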

    What would be unrealistic is expecting a 256-bit bus to perform well at 4K, and that clock-speed boost, considering the leaks we've had were 800 and 1050MHz.

    There are two Vega chips just like Polaris, Anandtech confirmed it.
     
  5. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
  6. Yup. gamervivek already mentioned it but here's the quote:

    There might be a Vega 11 that places itself between the GTX1070 and the reference GTX1080, and then a Vega 10 that will counter the rumored GTX 1080 special editions with >2GHz clocks (which may eventually become the GTX1080 Ti).
     
  7. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    Two Vegas, at least one of which should be huge and the only advertised feature being HBM2. It's possible they're both huge but one features FP64 for pro setups. Other possibilities seem to be 490 and Fury tiers respectively.

    I still can't help but think a pair of Polaris 10 dies on an interposer with even 4GB of HBM1 (2GB each, but actually shared via Onion) would be an interesting part. For example, a current Fury X replaced by two FinFET dies that still come in under 200W. I'd have to check the specifics on Onion, but it's possible it only connects two devices. So an APU works, dual cards work, and CPU + discrete over PCIe probably works. Then just step that up with the smaller Vega at around 300mm2 with HBM2. That configuration would definitely be pushing some limits.
     
  8. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
    ...uh? GPUs already have a PCIe connection and an IOMMU, I think, so they can interconnect without hassle. Why would you need an Onion (coherent) bus between GPUs rather than a fast PCIe link?

    Plus, the NUMA-like system you're building would suffer heavily if textures needed by one GPU stay in the other half of the memory...
     
  9. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Let's be honest: using 4K as a performance point is mostly irrelevant, because the fps and frame times are just too weak.
    Case in point: look at The Witcher 3 at 4K, or, as a more level playing field that doesn't push the boat out, GTA V.
    GTA V is still not acceptable in terms of playability at 4K.

    From a benchmarking perspective it may be useful to compare an architecture's performance trend as it goes up through the resolutions, including 4K.
    I appreciate the argument changes slightly if you're playing on mGPU, but that adds a whole other set of variables that skew GPU performance calculations.
    Cheers
     
  10. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
    It's reasonable to expect Nvidia will target 1070-minus-20% performance with its mainstream GPU. AMD has to match or beat that, and pricing does the rest. Simple as that.

    So slightly faster than a 980 for $250. That's your target, AMD. But please, do aim higher.
     
  11. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    To share the actual memory pool, as opposed to duplicating everything. The goal being to texture some stuff from the other memory pool. Two chips sharing an interposer with a ridiculously wide link could probably pull it off. I thought they were replacing Garlic with Onion3, so it seemed practical. It also seems likely they'd already have support built in for it if Zen+Polaris MCMs were in play.
     
  12. If it's slightly faster than a 980 then it certainly won't be $250. If it's $250 then it's certainly not faster than a 980.
    Having both of those at the same time would be just unnecessarily aggressive pricing for AMD.
     
    no-X likes this.
  13. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
  14. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,529
    Location:
    Pennsylvania
  15. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,423
    Likes Received:
    10,316
    You might want to look again.

    http://wccftech.com/amd-polaris-architecture-vr-minimum-spec/

    In there is a quote from AMD

    The Polaris architecture was designed to bring VR to the masses. That could mean either Polaris 10 or 11.

    http://www.gamespot.com/forums/syst...polaris-10-to-replace-fury-x-33136353/?page=1

    Their roadmap also implies that the low-end Polaris 10 will replace the high-end R9 300 series, while the top Polaris 10 will replace the Fury series (X and non-X).

    http://wccftech.com/amd-polaris-10-gpu-pictured/

    Polaris 10 was also demo'd running Hitman in DX12 at greater than 60 FPS at 2560x1440, which is comparable to or faster than a Fury X. I can't find the link since it was something I read months ago, but one of the prominent tech sites had a news blurb while they were at an AMD event, saying Polaris 10 was running some games at 2560x1440 faster than they'd ever seen Fury X run those games. Or maybe it was a Twitter post.

    Of course, there's always the possibility that final shipping hardware won't be as fast as the hardware they demo'd, so we'll have to wait until the actual unveiling and benchmarks before coming to any conclusions.

    Regards,
    SB
     
  16. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    It also implies Vega replaces Polaris too. Do HBM2 cards in the low-end and midrange brackets sound right? I don't think those blocks mean much other than "this is what is coming out at this time."

    http://wccftech.com/amd-polaris-10-gpu-pictured/

    Was the frame rate locked, and at what settings? Settings weren't mentioned...
     
    no-X likes this.
  17. Ext3h

    Regular

    Joined:
    Sep 4, 2015
    Messages:
    428
    Likes Received:
    497
    Yes and no.
    Vega (or rather, the new architecture) will also replace Polaris at some point (though probably not before Navi), but HBM2 on midrange and low-end parts probably won't happen any time soon. Interposers, as well as the extra-thin slices in the HBM stack, are still far too expensive for that. We will probably have to wait until AMD finds a way to get sufficient memory into the package without resorting to stacking, or at least until the costs involved come down.

    So, unless AMD already has a yet-unknown GDDR5(X) relative of the two known Vega chips planned, there will be no midrange Vega for the next 1.5 to 2 years.
     
    Razor1 likes this.
  18. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Considering how tight HBM2 supply is at the moment, and that's with Samsung being first into mass production, surely they are going to have to release this with either GDDR5X or more traditional memory (I doubt they will go with HBM1).
    I would expect NVIDIA probably has a high-volume contract with Samsung for HBM2, so I'm not sure where AMD would find a large amount, and if they did (let's say SK Hynix can manage it), what cost would that add to Vega?
    The plan was to wait until HBM2 was more widely available to keep costs down (words to that effect were used either in a presentation or an interview), which is why it was pushed back to 2017.

    On the plus side, fingers crossed they do release, even if with GDDR5X for now, as that will also push NVIDIA to release their big card; great all round for fans of both companies.
    I guess it comes down to whether they both have back-up plans to go with, say, 12Gb/s GDDR5X if they need to release their large GPUs early in Q4 to keep costs down.
    Cheers
     
  19. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Unless AMD has planned for Vega to use GDDR5X, I don't think they can swap it in as they wish, because that is a fairly large change, lol, not to mention a waste of silicon for the bus. What I think, if this rumor is real, is that Vega will be announced in October, reviews with engineering samples will come out shortly after, and it will be buyable a couple of months later, which puts it at the end of 2016, when HBM2 should be well into mass production.
     
  20. LordEC911

    Regular

    Joined:
    Nov 25, 2007
    Messages:
    877
    Likes Received:
    208
    Location:
    'Zona
    The rumor definitely has something to it; there is something in the works that they are expecting in Q4.

    Here is what I said about it last week-
    http://semiaccurate.com/forums/showpost.php?p=261209&postcount=771
     
    Lightman, CSI PC and Razor1 like this.


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.