AMD Vega Hardware Reviews

Discussion in 'Architecture and Products' started by ArkeoTP, Jun 30, 2017.

  1. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    Different aspect of software scheduling, but yes, it's what they have been doing. This would be the compiler generating short instruction sequences that use temporary registers (the register file cache, if you will). It would be problematic if GCN did it, as each subsequent instruction comes from a different wave, so the temporary registers would get clobbered. It would require each wave to run a handful of instructions, or at least run until it stalled, before the next wave is scheduled. A matrix multiplication, for example, is a commonly repeated set of instructions with a lot of data sharing.
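    As a toy illustration (pure Python, all names made up; not real GCN, compiler, or register-file-cache behavior), here is the effect being described: a tiny LRU operand cache gets hits when dependent instructions from one wave issue back to back, and gets evicted when the waves are interleaved.

```python
from collections import OrderedDict

def run(schedule, cache_size=2):
    """schedule: list of (wave_id, dest_reg, src_regs) issued in order.
    Returns how many source operands were still sitting in the tiny
    LRU operand cache when they were needed."""
    cache = OrderedDict()           # (wave_id, reg) -> True, in LRU order
    hits = 0
    for wave, dest, srcs in schedule:
        for s in srcs:
            if (wave, s) in cache:
                hits += 1
                cache.move_to_end((wave, s))
        cache[(wave, dest)] = True  # the result is the newest temporary
        cache.move_to_end((wave, dest))
        while len(cache) > cache_size:
            cache.popitem(last=False)
    return hits

# The same dependent chain (v1 <- v0, v2 <- v1, v3 <- v2) for two waves.
chain = [("v1", ["v0"]), ("v2", ["v1"]), ("v3", ["v2"])]
back_to_back = [(w, d, s) for w in (0, 1) for d, s in chain]
interleaved  = [(w, d, s) for d, s in chain for w in (0, 1)]
```

    Issuing each wave's chain contiguously keeps every result in the cache for the next instruction; alternating waves evicts it first, which is the problem described above.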

    http://videocardz.com/71280/amd-vega-10-vega-11-vega-12-and-vega-20-confirmed-by-eec
     
    ToTTenTranz likes this.
  2. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,967
    Likes Received:
    4,562
    Lots of news coming from this. According to this certification list, it looks like all other Vega SKUs may be a lot closer than we thought.
    There is:

    Vega 11 XT
    Vega 11 XL
    Vega 12 XT
    Dual-Vega 10 card for the Instinct series
    Vega 20 - but only for Instinct and Pro series, meaning it could be a GV100 competitor not available for consumers.

    Vega 11 sounds like it could be the replacement for Polaris 10/20, if they manage to develop an interposer with GPU + single HBM stack at a cost similar to GPU + 8*32-bit GDDR5 lanes. Since the cards in that performance range are highly inflated in price because of mining, they could get away with selling $300 cards with a small performance boost over Polaris 20. 32 NCUs @ 1.5GHz would probably do well enough.
    Vega 12 might be mostly exclusive to laptops. Probably for MacBook refreshes first, and 4 months later to the plebs.



    EDIT: this obviously belongs in the Vega rumors thread. If a mod would be so kind as to transfer the post there; or maybe I can just repost it there.
     
    #522 ToTTenTranz, Jul 28, 2017
    Last edited: Jul 28, 2017
    Malo likes this.
  3. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,267
    Likes Received:
    1,783
    Location:
    Winfield, IN USA
    Ok, am I an idiot for being excited about the new enhanced v-sync? One that works without a freesync monitor? THAT freaking excites the hell out of me! :D
     
  4. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    Not necessarily, but you could be if you only bought a Vega because of that single feature (instead of a cheaper Polaris-based card for which it is also enabled - which is nice in itself).
     
  5. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,967
    Likes Received:
    4,562
    You're not an idiot, unless you have actually used a Freesync monitor extensively and gotten to know how spectacularly cool it is to have a game that practically feels like 60 FPS to us (non-esports gods) mortals, even though the actual framerate is hovering between ~45 and 55 FPS. And all of this without tearing.

    For mGPU users (didn't you have a Fiji Pro Duo?), who tend to get more dips than most, it's a real game-changer.
     
  6. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,029
    Likes Received:
    3,101
    Location:
    Pennsylvania
    As someone with a 34" Ultrawide non-freesync monitor and likely buying a Vega, it excites me as well.
     
  7. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    178
    Likes Received:
    147
    Yea, this pretty much aligns with the previous slides: https://videocardz.com/65521/amd-vega-10-and-vega-20-slides-revealed
     
  8. xpea

    Regular Newcomer

    Joined:
    Jun 4, 2013
    Messages:
    372
    Likes Received:
    309
    To me, Enhanced Sync looks like a copy of Nvidia's Fast Sync...
     
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
  10. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,029
    Likes Received:
    3,101
    Location:
    Pennsylvania
    Is that a bad thing?
     
  11. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,702
    Likes Received:
    117
    Nope. I guess if one wanted to nitpick, Nvidia gives the end user the option of whether or not to disable sync below the refresh rate, whereas AMD just does it for you. But if one is using Fast/Enhanced Sync anyway, the entire point is the lower latency, so it really doesn't make sense otherwise (unless one just absolutely can't stand any tearing, ever).
     
  12. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,153
    Likes Received:
    928
    Location:
    still camping with a mauler
    I just realized something. It would be trivial for NVIDIA to release a GTX 1075 (basically the GTX 1070M SM configuration + a minor clock bump) at $350 and absolutely ruin Vega. I see no reason for them not to do this.
     
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Huh.

    A wavefront will run on a GCN SIMD until it reaches some kind of barrier (flow control, memory access, explicit barrier...). So this can easily be hundreds of instructions, all from a single wavefront, running contiguously.

    The only meaningful exception is when the instructions won't all fit in instruction cache.
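    A minimal sketch of that run-until-stall policy (a hypothetical Python model of my reading of the post, not actual GCN hardware behavior): one wavefront keeps issuing until a stalling event, so long contiguous runs come from a single wave.

```python
def greedy_schedule(waves):
    """waves: dict wave_id -> list of instrs; 'MEM' marks an instruction
    that stalls the wave for the rest of the current pass. Issues from one
    wave until it stalls or finishes, then hands the SIMD to the next
    resident wave; repeats passes until everything has issued."""
    pcs = {w: 0 for w in waves}
    order = []
    while any(pcs[w] < len(waves[w]) for w in waves):
        for w in waves:
            while pcs[w] < len(waves[w]):
                ins = waves[w][pcs[w]]
                order.append((w, ins))
                pcs[w] += 1
                if ins == "MEM":  # stall: switch to another wave
                    break
    return order
```

    With e.g. `{"w0": ["a", "b", "MEM", "c"], "w1": ["x", "MEM", "y"]}`, w0 issues a contiguous run up to its memory access before w1 gets the SIMD.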
     
  14. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,967
    Likes Received:
    4,562
    What that sounds like is the perfect mining card that no consumer would ever get their hands on.

    AMD, on the other hand, would be delighted to see the RX 580/570 back on the shelves for gamers.
     
  15. BacBeyond

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    73
    Likes Received:
    43
    I pointed it out earlier, but EnhancedSync works with Freesync; it's not any kind of replacement.

    EnhancedSync = active when above max refresh rate

    FreeSync = active when below max refresh rate

    FastSync will cause the same problems as VSync when under the refresh rate, so you'd want to turn it off with Adaptive VSync enabled. EnhancedSync just combines the two into one setting.
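    The split described above can be summarized in a small decision function (a sketch of my reading of the post; the function name, thresholds, and the FreeSync window are illustrative, not anything from AMD's drivers or documentation):

```python
def active_sync_mode(fps, max_refresh_hz, freesync_floor=48):
    """Toy decision table: EnhancedSync handles frames above the monitor's
    max refresh; FreeSync handles the range below it (down to the
    assumed floor of the variable-refresh window)."""
    if fps > max_refresh_hz:
        return "enhanced_sync"  # discard excess frames: no tearing, low latency
    elif fps >= freesync_floor:
        return "freesync"       # variable refresh tracks the framerate
    else:
        return "lfc_or_vsync"   # below the VRR window entirely
```

    So on a 144 Hz panel, 180 FPS would fall to EnhancedSync while 90 FPS would fall to FreeSync; the two mechanisms cover disjoint ranges rather than replacing each other.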
     
  16. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,029
    Likes Received:
    3,101
    Location:
    Pennsylvania
    He never said anything about it replacing Freesync, only that it's a feature that doesn't require a Freesync monitor.
     
    DavidGraham likes this.
  17. BacBeyond

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    73
    Likes Received:
    43
    My point was it won't make his monitor act like a Freesync one, because it has a completely different purpose (above max refresh vs under).
     
  18. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    Hmm. I thought it was a 4-cycle cadence with round robin (with an exception for priorities) through all the waves (or at least the group) on a SIMD, as a wave couldn't schedule back to back except under certain circumstances? I have no doubt what you suggest would be a bit more efficient, as you could carry the output, but my understanding was that everything gets written out each cycle of the cadence and that ability goes largely unused, with the exception of a few complex instructions. All waves in a group attempt to stay relatively in sync, reducing the burden on the instruction cache. The "barriers" only prevent a wave from scheduling within that rotation. Nvidia is doing what you described.
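    The strict rotation being described can be sketched like this (a toy Python model of this poster's claim, not documented GCN behavior): each pass issues at most one instruction per resident wave, so no wave goes back to back while others still have work.

```python
def round_robin(waves):
    """waves: dict wave_id -> list of instrs. Strict rotation: each pass
    over the resident waves issues at most one instruction per wave,
    skipping waves that have finished."""
    pcs = {w: 0 for w in waves}
    order = []
    while any(pcs[w] < len(waves[w]) for w in waves):
        for w in waves:
            if pcs[w] < len(waves[w]):
                order.append((w, waves[w][pcs[w]]))
                pcs[w] += 1
    return order
```

    Contrast this with a run-until-barrier policy: here the issue order interleaves waves every slot, which is exactly the case where carried temporaries from the previous instruction of the same wave would not survive.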
     
  19. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    It wouldn't take too long before the per-wave instruction buffer is empty, and the fetch process is subject to variable latencies and arbitration for instruction fetch. AMD indicated that age, utilization, and priority could factor into which buffer is granted a fetch in a given cycle.
    Perhaps the arbitration factor can be reduced if it's one wavefront per SIMD and the L1 is not thrashed. Sharing within a CU may be insufficient, since the L1 instruction cache is shared between multiple CUs.
     
  20. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,702
    Likes Received:
    117
    Yes.... that is exactly what I wrote in the first place.....
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.