AMD Radeon VII Announcement and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Jan 9, 2019.

  1. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
And that’s exactly the problem with the 2080: it's poor value compared to the 1080 Ti today, and the only redeeming factor may be the advanced features down the road. The VII doesn't have the RTX moat, shallow as that moat may seem. Everyone who had $700 to spend on 1080 Ti-level performance from a Pascal/Vega-generation product has already done so.
     
  2. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    136
    Likes Received:
    27
    What..?

    I just bought an RTX 2080 because my Ti was going bad. I didn't really want to buy it, but I already had a 3440x1440 G-Sync gaming display; otherwise I would've regretted buying the RTX, since I can clearly see that AMD's new 7nm "Radeon Seven" is a much better buy than NVIDIA's RTX 2080.

    It is far better because it will hold its performance and value better in the next-gen games to come, given its 16GB of VRAM and 1TB/s of bandwidth. That's forward thinking for future 1440p-4K titles for years to come. FreeSync 2 is the standard, and the Vega 20 silicon supports PCIe 4.0.
     
    ToTTenTranz likes this.
  3. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,061
    Likes Received:
    2,717
    Location:
    Well within 3d
    The reasons for the shift were, to my knowledge, not given by AMD.
    Implementation bugs may not be the only reasons why the feature was delayed or cancelled. The original direction was that the new path would have been transparent to programmers, being managed by the hardware and driver.
    A late abandonment of that direction may have left any API design effort in a position where resources had already been allocated elsewhere, or where pulling together a robust API would simply have taken too long even with resources freely available.

    Also, not every flaw or weakness needs to be a bug that can be fixed with a correction to some logic.
    A design might make decisions that in hindsight hindered the concept. If the next generation were to make fundamental changes to how the design worked, pouring resources into a design that is being replaced might hinder the new direction that learned from the lessons of the initial concept. If the time it took to get an API for Vega's methods would have dragged into Navi's launch period, it may have been deprecated to make way.
    There have been silicon designs with early versions of functionality that were never exposed to the general market. Intel's Willamette had the first version of Hyper-Threading, but it would not be exposed until the second-generation Northwood.
    Possibly, elements of NGG were disclosed on a more aggressive schedule than could be sustained, and it would take a more fleshed-out version of the technology to be considered acceptable.
     
  4. bdmosky

    Newcomer

    Joined:
    Jul 31, 2002
    Messages:
    166
    Likes Received:
    21
    Maybe you should at least wait for reviews before making such a bold declaration.
     
    Heinrich4, vipa899 and pharma like this.
  5. nnunn

    Newcomer

    Joined:
    Nov 27, 2014
    Messages:
    28
    Likes Received:
    23
    Being a bandwidth-limited sort of guy, I had one big wish for 2019:
    a card that can do 1,000 GB/s for less than $1,000.
    Actually, quad-GPU (4 TB/s) for less than $4,000.
    AMD delivers in the first week?
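    Quick sanity check on that wish (assuming the announced $699 price and AMD's quoted 1 TB/s per card; the aggregate numbers naively sum over four cards):

    ```python
    # Back-of-the-envelope: does Radeon VII hit the 2019 wish list?
    # Assumed figures: $699 launch price, 1,000 GB/s per card (AMD's quoted 1 TB/s).
    price_per_card = 699   # USD
    bw_per_card = 1000     # GB/s

    for cards in (1, 4):
        print(f"{cards}x: {cards * bw_per_card:,} GB/s for ${cards * price_per_card:,}")
    # 1x: 1,000 GB/s for $699    -> under the $1,000 target
    # 4x: 4,000 GB/s for $2,796  -> under the $4,000 target (aggregate)
    ```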
     
  6. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
    I will take “Things that never happened” for $2,000, Alex.

     
    Ike Turner, OlegSH, vipa899 and 3 others like this.
  7. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    136
    Likes Received:
    27
    I don't need to; I posted facts and based my opinions on those specs.

    If you doubt my speculation about future games, perhaps. But I don't need to see hardware reviews to understand where games are headed (i.e., games are not going to use less VRAM in the future).
     
  8. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    136
    Likes Received:
    27

    You sound befuddled. I don't live in a small apartment with only one room; I maintain four personal computers in my house, and my Newegg account is close to 20 years old. Don't hate; I've worked hard my whole life for my enjoyment/hobby.
     
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,662
    Likes Received:
    2,362
    The numbers posted by AMD have the VII doing barely a 25% uplift in performance over Vega 64, sometimes less. That's not enough to touch the RTX 2080. We shall see when reviews come out.
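    To see the arithmetic (the 2080's margin over Vega 64 below is a hypothetical ballpark for illustration, not a measurement):

    ```python
    # Illustrative only: where does a +25% Vega 64 land relative to an RTX 2080?
    vega64 = 100.0                # normalized baseline
    radeon_vii = vega64 * 1.25    # AMD's ~25% claimed uplift
    for margin in (0.30, 0.35):   # assumed 2080 lead over Vega 64 (hypothetical)
        rtx2080 = vega64 * (1 + margin)
        print(f"2080 at +{margin:.0%}: VII/2080 = {radeon_vii / rtx2080:.2f}")
    # 2080 at +30%: VII/2080 = 0.96
    # 2080 at +35%: VII/2080 = 0.93
    ```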
     
    vipa899, pharma, del42sa and 3 others like this.
  10. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    171
    Likes Received:
    143
    Gotta laugh seeing how "futureproof" the VII is. Sure, it has a gimmicky 16GB of VRAM, but the performance? It ties 2017's 1080Ti. How relevant will it be in titles actually requiring 16GB of VRAM? Lol

    TBH this launch is not as disappointing as the hyped Vega 64 one. This GPU is here just to reuse salvage dies from the Pro cards. It also serves the purpose of saying "we are not dead".

    Sure, many kids were totally expecting Navi. Youtubers might say it's coming any day now, but it's hard to show a card without drivers or any other sign of life. So don't buy hype that isn't based on evidence.

    In fact I'm glad they didn't hype (no Koduri, huh) the last gen of GCN-based stuff. Given the history of differences between GCN iterations, it's safe to assume Navi will have almost the same characteristics as Vega. At least they might go beyond 4 SEs with ~6k SPs.
     
    del42sa likes this.
  11. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,093
    Likes Received:
    530
    Location:
    France
    Well, some games already use more than 8GB and perform well with Vega-like raw power. Same thing with some NVIDIA cards having 11 or 12GB of VRAM. Yeah, you may need to cut some effects here and there, but at least you can in most cases push the textures to the max, and even get fewer "stutters" due to asset swapping. That's why I have a Vega FE (this, and a good deal on it at the time), because the 4GB on my Fury was just dumb...
     
  12. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,650
    Likes Received:
    4,315
    If HBCC works as advertised, 16GB on the Radeon VII seems like overkill for whatever gaming scenario may come in the next handful of years, which is why I think AMD should have promoted the card a lot more for Pro applications.


    The Radeon VII is using 4GB stacks, though, and I don't think anyone is making 2GB stacks. So the choice of 16GB may be more related to raw bandwidth than to VRAM size.
    However, 1 TB/s sounds like total overkill compared to Vega 64's bandwidth/performance ratio, which is why I really thought they'd go with 3 stacks for 12GB at 768GB/s.
    At first I thought the 4 HBM PHYs were directly connected to the back-end and that was the only way to enable all 128 ROPs, but it seems they're decoupled.
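    Back-of-the-envelope on the stack math, assuming 1024-bit HBM2 stacks at the 2.0 Gbps pin rate implied by AMD's 1 TB/s figure:

    ```python
    # HBM2 bandwidth = stacks x bus width (bits) x pin rate (Gbps) / 8 bits-per-byte.
    # Assumes 1024-bit, 4GB stacks at 2.0 Gbps (what AMD's 1 TB/s total implies).
    bus_bits, pin_gbps, stack_gb = 1024, 2.0, 4

    for stacks in (2, 3, 4):
        gb_s = stacks * bus_bits * pin_gbps / 8
        print(f"{stacks} stacks: {stacks * stack_gb:>2} GB @ {gb_s:>4.0f} GB/s")
    # 2 stacks:  8 GB @  512 GB/s  (Vega 64's layout, though it runs a lower pin rate)
    # 3 stacks: 12 GB @  768 GB/s  (the 12GB option above)
    # 4 stacks: 16 GB @ 1024 GB/s  (Radeon VII as announced)
    ```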

    In the end, there's a bunch of things about the Radeon VII I don't get for a gaming card that would make a lot more sense on a Pro card. Not cutting down the FP64 performance is another factor.


    Compared to the Titan V, the Radeon VII has 53% higher bandwidth, 33% more VRAM and 12% higher FP64/FP32/FP16 throughput at less than 25% of the price.
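    Those ratios check out against the launch specs, for what it's worth (figures below are the announced/list numbers; note the throughput ratio only holds against Titan V's base-clock rate, since at boost clocks the Titan V pulls ahead):

    ```python
    # Assumed launch specs: Radeon VII at $699, 16 GB, ~1,000 GB/s, 13.8 TF FP32 peak;
    # Titan V at $2,999, 12 GB, 652.8 GB/s, ~12.3 TF FP32 at its base clock.
    vii     = {"bw": 1000.0, "vram": 16, "fp32": 13.8, "price": 699}
    titan_v = {"bw": 652.8,  "vram": 12, "fp32": 12.3, "price": 2999}

    for k in ("bw", "vram", "fp32", "price"):
        print(f"{k:5}: {vii[k] / titan_v[k]:.2f}x")
    # bw   : 1.53x  (+53% bandwidth)
    # vram : 1.33x  (+33% VRAM)
    # fp32 : 1.12x  (+12% throughput)
    # price: 0.23x  (under 25% of the price)
    ```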

    Perhaps this is AMD again trying to have one chip compete in many markets due to RTG's very limited development resources.
    With a whopping 128 ROPs, Vega 20 was never going to be a chip 100% exclusive to datacenters...
     
    Lightman likes this.
  13. Pressure

    Veteran Regular

    Joined:
    Mar 30, 2004
    Messages:
    1,300
    Likes Received:
    234
    It’s also 20% larger going by transistor count alone. In some compute workloads, Vega 20 will be better.
     
  14. Rys

    Rys AMD RTG
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,140
    Likes Received:
    1,338
    Location:
    Beyond3D HQ
    Vega20 RBE rate isn’t 128 pixels per clock (it’s 64).
     
    milk, Cat Merc, DavidGraham and 10 others like this.
  15. Digidi

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    207
    Likes Received:
    88
    Thank you, Rys, for the information. Can you tell us the status of primitive shaders and NGG on Vega 20?
     
  16. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,093
    Likes Received:
    530
    Location:
    France
    Thx a lot. It makes more sense.
     
    yuri likes this.
  17. Rys

    Rys AMD RTG
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,140
    Likes Received:
    1,338
    Location:
    Beyond3D HQ
    I can't, sorry.
     
    Cat Merc, Entropy, del42sa and 3 others like this.
  18. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    353
    Likes Received:
    240
    https://forum.beyond3d.com/posts/2054467/
    So... it seems the raster engines can still rasterise only 64 pixels per clock.
    Makes a lot of sense, considering that the average triangle will likely be much smaller than 32 pixels when projected on screen in modern games, even at 4K.
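    To put rough numbers on that (frame triangle counts below are made-up round figures, and this ignores overdraw and clipped geometry):

    ```python
    # Rough average on-screen triangle size at 4K: screen pixels / visible triangles.
    pixels_4k = 3840 * 2160  # ~8.3M pixels

    for tris in (250_000, 1_000_000, 4_000_000):
        print(f"{tris:>9,} triangles -> ~{pixels_4k / tris:.0f} px per triangle")
    #   250,000 triangles -> ~33 px per triangle
    # 1,000,000 triangles -> ~8 px per triangle
    # 4,000,000 triangles -> ~2 px per triangle
    ```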
     
    milk, del42sa, pharma and 1 other person like this.
  19. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,029
    Likes Received:
    1,727
    Location:
    Finland
    I'm pretty sure the whole 128 ROPs thing started from someone just applying NVIDIA logic to AMD, forgetting that AMD doesn't tie its ROPs to the memory controllers.
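    As a toy illustration of that (mistaken for AMD) inference: scale Vega 10's ROP count by memory-bus width, as an NVIDIA-style coupled design would suggest:

    ```python
    # Vega 10: 2 HBM2 stacks (2048-bit bus), 64 ROPs. Vega 20: 4 stacks (4096-bit).
    # Naively tying ROPs to bus width "predicts" the rumored figure.
    vega10_bus_bits, vega10_rops = 2048, 64
    vega20_bus_bits = 4096

    naive_rops = vega10_rops * vega20_bus_bits // vega10_bus_bits
    print(naive_rops)  # 128 -- the rumor; Rys confirms the actual rate is 64 px/clk
    ```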
     
  20. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,093
    Likes Received:
    530
    Location:
    France
    Anandtech corrected their article.
     
    Malo, Lightman and pharma like this.