AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by Deleted member 13524, Sep 20, 2016.

  1. xEx

    xEx
    Veteran

    Joined:
    Feb 2, 2012
    Messages:
    1,060
    Likes Received:
    543
The price was for the development kit, which included special tools and support from AMD.

Also, it was the first of its kind; with Vega it should be easier and cheaper to implement.

Sent from my HTC One via Tapatalk
     
  2. gamervivek

    Regular

    Joined:
    Sep 13, 2008
    Messages:
    805
    Likes Received:
    320
    Location:
    india
  3. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
Excuse my ignorance please, but this thread moves too fast for me. When is Vega expected to be released? I'm kind of wanting an upgrade for my 970, but I wanna wait for AMD to show its hand before making a decision. If it's months away I'll probably go ahead with the 1070, but I can wait a few weeks.
     
  4. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    1st half 2017
     
  5. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
That is one of my biggest problems with AMD. When they sent the R9 390 against the 970/980, 8GB was a must; when they sent Fiji against the 980 Ti, 4GB was enough; when they sent the RX 480 against the 1060, 8GB was okay and 6 was not; and now that they send Vega against GP102, suddenly 4GB is enough and 8 is plenty.
     
  6. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
That's why I'm leaning towards the APU for FP64 environments. In those scientific fields, having relatively small (still huge) APUs with HBM and 4-channel DDR4 might make sense: far more bandwidth and capacity, with the ability to lean on ROCm for acceleration. It might be easier than getting C and Fortran compiled to graphics hardware in some instances. Four APUs in a 1U rack could be an effective design there. I've only had to mess with large 32-bit workloads, but I could see FP64 being even more memory/data intensive considering the required accuracy. Peak throughput might be less important than reliably feeding the chip and getting code to execute.

    Virtualization probably benefits from it. No evidence to support that, but dumping the context to local memory as opposed to system would seem beneficial.

An SSD for consumers seems unlikely. While it may have some uses, the performance likely wouldn't be there. The SSG demo was using RAID0 with Samsung(?) NVMe drives that retail at ~$350 each and provide 3-5GB/s of bandwidth together. More likely would be some sort of ramdrive with DDR3 or, eventually, Optane/3D XPoint NVRAM: less capacity, cheaper, more bandwidth, and higher IO rates. Even GDDR would work, but it would be a bit wasteful as it would be constrained by the PCIe 3 bandwidth. Using cheap DDR3, even at retail prices it would only be ~$100 to add 16GB of RAM. That would probably come close to maxing the bandwidth capabilities and provide ample capacity for most current games. That's roughly 8+16GB of memory, which should cover most scenes being rendered.
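A quick back-of-the-envelope sketch of the trade-off described above. The PCIe 3.0 figure and the GDDR5 bandwidth number are assumptions for illustration; the drive prices and capacities are the post's own rough estimates, not sourced specs:

```python
# Back-of-the-envelope sketch of the on-card pool options discussed above.
# All prices and bandwidths are rough illustrative figures, not sourced specs.

PCIE3_X16_GBPS = 15.75  # assumed usable PCIe 3.0 x16 bandwidth, GB/s

def effective_bandwidth(pool_gbps, slot_gbps=PCIE3_X16_GBPS):
    """Sustained refill of an on-card pool is capped by the PCIe slot,
    so a pool much faster than the slot (e.g. GDDR) is partly wasted."""
    return min(pool_gbps, slot_gbps)

# name: (approx cost in USD, capacity in GB, raw pool bandwidth in GB/s)
pools = {
    "2x NVMe RAID0 (SSG demo)": (700, 1024, 5.0),
    "16 GB DDR3 ramdrive":      (100, 16, 12.8),
    "8 GB GDDR5 pool":          (None, 8, 224.0),  # cost n/a; slot-limited
}

for name, (cost, cap, bw) in pools.items():
    price = f"${cost}" if cost is not None else "cost n/a"
    print(f"{name}: {price}, {cap} GB, ~{effective_bandwidth(bw):.2f} GB/s effective")
```

The point of the `min()` is the post's "a bit wasteful" remark: any pool faster than the slot can only be refilled at slot speed, so the cheap DDR3 option already sits near the practical ceiling.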

    Released and available probably have different answers. I saw one report from someone talking to AMD at CES suggesting between first and second quarters, but not necessarily towards the end of that period.
     
  7. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
Why would you need any special-purpose RAM? Just page on demand from the DDR4 main system RAM. 16 GB of DDR4 is common in new gaming computers (a 2x 8GB DDR4 kit = $80). By the time games become complex enough to use 32 GB of system RAM, the price will have halved.

    Seems to work pretty well on Pascal P100 (esp with prefetch hints):
    https://devblogs.nvidia.com/parallelforall/beyond-gpu-memory-limits-unified-memory-pascal/

The game data set (at 60 fps) changes very little from one frame to the next. Just take two consecutive frame screenshots and you'll notice that most texture surfaces are identical and the visible geometry (including LOD level) is mostly the same. My experience (with custom virtual texturing) shows only ~2% of the texture data set changing per frame in the common case (of a 256 MB active data set cache). I'd say that automated on-demand paging (with prefetch hints) from system RAM should work very well for games.
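The ~2% figure above implies very modest bus traffic; a quick sanity check (the PCIe 3.0 bandwidth is an assumed approximation, the 2%/256 MB/60 fps numbers are from the post):

```python
# Sanity check of the paging-bandwidth argument: if ~2% of a 256 MB active
# texture set changes per frame at 60 fps, how much bus traffic is that?

ACTIVE_SET_MB = 256       # active data set cache from the post
CHANGE_PER_FRAME = 0.02   # measured ~2% change per frame
FPS = 60
PCIE3_X16_GBPS = 15.75    # assumed usable PCIe 3.0 x16 bandwidth

def paging_traffic_mbps(active_mb=ACTIVE_SET_MB, frac=CHANGE_PER_FRAME, fps=FPS):
    """Sustained MB/s needed to stream the changed fraction every frame."""
    return active_mb * frac * fps

traffic = paging_traffic_mbps()  # ~307 MB/s
share = traffic / (PCIE3_X16_GBPS * 1024)
print(f"~{traffic:.0f} MB/s of paging, i.e. {share:.1%} of a PCIe 3.0 x16 link")
```

Roughly 300 MB/s, i.e. a couple of percent of the link, which supports the claim that on-demand paging from system RAM is viable for this workload.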
     
    #667 sebbbi, Jan 7, 2017
    Last edited: Jan 7, 2017
    Lightman, BRiT, fellix and 1 other person like this.
  8. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
The only response I have got is that the professional market will be served first... so yes, first half, without much detail. But it's quite possible that he was only speaking about the 16GB version? (Honestly, I don't know.)
     
  9. jacozz

    Newcomer

    Joined:
    Mar 23, 2012
    Messages:
    90
    Likes Received:
    23
    Here's a rather depressing take on Vega:
    http://techbuyersguru.com/ces-2017-amds-ryzen-and-vega-revealed?page=1

"Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11."
    :-(

Is Vega another 7970 that will take years before it gets competitive?
    Really? Haven't they learned anything?
     
  10. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
It would still offer a performance advantage, as it wouldn't be limited by the bandwidth of the PCIe slot and contention for system resources. While I'd agree it's overkill for most users, there is still an advantage that likely makes sense for an enthusiast or prosumer product. The server benefits for scaling are obvious.

That still required NVLink, an application process (excluding IBM's Power8 CPUs, to my understanding), and effective prefetching to overcome some latency. If brought to consumer products in its current form it should work well for most needs, but it still has some limitations.

AMD's APU design theoretically allowed the GPU to utilize ALL available system memory bandwidth, as opposed to just that of the PCIe link. That's about as unified as you can get. Discrete cards would still have the PCIe bottleneck. The separate pool, as mentioned above, works around that limitation. While likely not an issue for most gamers (someone will likely do this anyway), scaling with many (say 8-16) GPUs would create significant contention. That's the very reason "Network Storage" likely showed up on the Vega slides, as opposed to going through a host network. Costs aside, the separate pool is technically superior in the same way that having all resources in VRAM should be superior to paging anything.
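The bandwidth gap behind the APU argument can be illustrated with rough peak numbers. Both figures are assumptions chosen for illustration (a DDR4-2666 channel and a PCIe 3.0 x16 link), not measured results:

```python
# Rough illustration of the unified-memory argument: an APU's GPU can draw on
# the full multi-channel system-memory bandwidth, while a discrete card's
# paging is bounded by the PCIe link. All figures are approximate peaks.

DDR4_CHANNEL_GBPS = 21.3   # assumed: one channel of DDR4-2666, GB/s
PCIE3_X16_GBPS = 15.75     # assumed: usable PCIe 3.0 x16, GB/s

def apu_paging_gbps(channels):
    """APU case: GPU shares the full multi-channel system-memory bandwidth."""
    return channels * DDR4_CHANNEL_GBPS

def discrete_paging_gbps():
    """Discrete case: paging is capped by the PCIe link, whatever the DRAM."""
    return PCIE3_X16_GBPS

print(f"4-channel APU: ~{apu_paging_gbps(4):.0f} GB/s "
      f"vs discrete paging: ~{discrete_paging_gbps():.2f} GB/s")
```

With four DDR4 channels the APU's paging path is several times wider than the slot a discrete card pages through, which is the contention argument made above for multi-GPU scaling.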

I'm not suggesting the technique won't work well, but that one implementation will be superior to the other, and probably better than the current method, at the very least from the standpoint of making developers' lives easier. The cost of that implementation is a separate matter, but it would still be marketable. It should also make the actual GPU the primary component of performance, similar to how DX12/Vulkan reduced reliance on the CPU.

It will probably be competitive, but using "Primitive Shaders", for example, which to my knowledge don't exist in any of the APIs and aren't supported by the competition, likely limits their use a bit. That statement seems more about it taking time for new techniques to really take hold. It's simply forward-looking hardware with more capabilities than are currently practical.
     
  11. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
No, that's not depressing... it's completely normal, and I find it much more encouraging: it means that AMD is leveraging new technology in its GPUs. (And Nvidia does it too.) I think you're misreading a bit what they wrote.

Nvidia and AMD have introduced FP16/INT8, but they are not used in games, so in fact this is a factor we still need to set aside when speaking about gaming. Some other features need to be implemented as a specific path, but those are additional features, and that's already the case for both AMD and Nvidia: see GPUOpen: http://gpuopen.com/

The good thing is that, with Vulkan and DX12, developers are more than ever in the driver's seat and can pick up new techniques and include them really fast.
     
    #671 lanek, Jan 7, 2017
    Last edited: Jan 7, 2017
    RootKit, chris1515 and BRiT like this.
  12. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
I am not sure it is a good thing, because now implementing such technology has to be justified for each application, and given the installed base of capable graphics cards this will be very hard to do.
     
  13. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
Why?

You forget that every console runs the GCN architecture, and the new versions will almost certainly use Vega. Console game developers are always seeking the best methods and performance (in contrast to PC, where most rely on brute force). I just have to look at how they seek every possible optimization in their code. We have learned more about GCN from them in one year than in the five years after it was introduced.
     
    #673 lanek, Jan 7, 2017
    Last edited: Jan 7, 2017
    RootKit, Lightman and chris1515 like this.
  14. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me
  15. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    13,878
    Likes Received:
    4,724
DX12 is just software, and if Vega or even Navi?? (I doubt it) is in Scorpio, then it will be in MS's best interest to introduce any of its features into DX12. It may not happen at launch, but if GCN 4.0, or whatever this is called now, is the base of GCN going forward, it will still benefit AMD and end users. You get a Vega that is competitive with other cards in its price range, and then, if these other features are taken advantage of, you get performance increases when they are implemented.
     
    RootKit likes this.
  16. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
Yes, but this is all in the future. Before Scorpio has achieved a big enough user base, we are probably into 2018. And then it still must be worth the effort to write extra code paths for the Vega-based chips compared to the older GCN chips. If it is worth it, I fear that the launch benchmarks of Vega could look bad, because it would mean that the new architecture needs new code. I just hope it will not be the typical AMD GPU that destroys the NV competition under new APIs, where unfortunately the software for those APIs only appears years after the GPU launched.
     
  17. pTmdfx

    Regular

    Joined:
    May 27, 2014
    Messages:
    416
    Likes Received:
    379
If you meant hot page migration from system memory, it is already available for all allocations through the CUDA runtime on Windows and Linux for all supported devices. For OS-allocated memory it requires OS support, though. NVLink is not essential in this regard.
     
    Razor1 likes this.
  18. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

It isn't required, but it can definitely help performance-wise.
     
  19. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
Not natively, which is what I meant when I said application process. In the case of a Vega APU, it should be able to access the memory controller on its own to migrate pages, no different than a CPU core accessing memory. OS support shouldn't be required, but it probably helps. The Nvidia solution required the CUDA runtime, which isn't that different from having the application page in memory as required, and not that different from letting the driver handle memory management. The exception, to my understanding, was the Power8 with NVLink, which could handle the operation in hardware. Software vs. hardware solutions to the same problem.
     
  20. ImSpartacus

    Regular

    Joined:
    Jun 30, 2015
    Messages:
    252
    Likes Received:
    199
    I haven't seen this small leak from r/AMD posted yet, but there have been plenty of links in the past couple of pages.

[IMG]

    The OP:
My guess is that the 570 would be 2017's rebranded Polaris 10 XT, due to the presence of an 8GB variant. Or maybe part of the rebrand is making all Polaris 10 XT use 8GB and then having the "option" of 8GB move down to the formerly 4GB-only Polaris 10 Pro? That might make more sense, since the 470 already shows up in certain laptops (Alienware 15, etc.), so a rebranded version of it would be a natural fit for a laptop, whereas a rebranded 480 might not be quite as good a fit.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.