AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

Thread Status:
Not open for further replies.
  1. CarstenS

    CarstenS Legend Subscriber

    I was told by an AMD rep that there is a reason other than marketing why salvage parts have CUs disabled according to the number of SEs. Nothing more specific, though, and this info is a couple of years old, so it may already have been obsolete with Vega.
     
  2. del42sa

    del42sa Newcomer



    Buildzoid's thoughts about Navi. I agree with most of them.
     
  3. Bondrewd

    Bondrewd Veteran

    Everything after is still GCN.
    Or maybe we're back to square one and they're contractually obliged not to show anything relevant until E3.
     
  4. 3dilettante

    3dilettante Legend Alpha

    If this were a CPU architecture like x86-64, being listed as the same class of machine target for a compiler would go a very long way towards saying it's a member of the same overall architecture or a close derivative.
    x86 would be viewed holistically as a combination of instruction formats, behaviors, and properties the vendors and industry have either spelled out or have over time committed to upholding.
    That level of definition and abstraction from the implementation is why some wildly different microarchitectures from Athlon, Pentium, Pentium Pro, P4, Zen, and Core are still counted under the same overall umbrella of x86 even though for practical reasons there are architectural shifts like the various modes and corner cases between cores and vendors that make compatibility less than perfect.

    GPUs and GCN as an example of them are messier. At an ISA level, AMD has made changes many times that would be problematic for CPUs. In part, it's because GPUs often have intervening compilation and software layers that allow for less than ironclad adherence. What GCN is would be a complicated question. The LLVM changes, for example, reflect a more CU and compute shader focused view of the GPU overall. GPUs are themselves not as tightly defined as CPUs, as the basic operation of a GPU runs through many processors and cores versus the architected single pipeline of a CPU.
    Also GCN has committed to various behaviors or features within the scope of the CUs that many CPU architectures would not. Whether that's a side effect of marketing or a choice made purposefully, it increases the list of things that might then be considered architectural changes.

    We also don't have all the information yet about what has been changed, so significant alterations might be disclosed. The question about how many changes is enough is at least a little arbitrary and usually a judgement call, and as the vendor AMD has more judgement over it than I do.
    The more changes there are, the more I think it would be clearly justified, but I don't think there's any absolute arbiter over these matters and the situation has started out murky.

    I'd be delighted if AMD took the opportunity to outline what they consider the set of design-defining elements of GCN, so that we could get closer to the heart of the matter. Although one side effect of not being rigorous all along is that it's possible that AMD could have generated one or more counterexamples for some of them, which would weaken the credibility of the model put forward now.

    That's part of the distinction between architecture and implementation. IP blocks can be different, and every new process means reimplementing things as well. That's where there's usually a defined set of outcomes and behaviors that the implementation is supposed to adhere to that helps categorize things.
     
    BRiT and del42sa like this.
  5. del42sa

    del42sa Newcomer

    Perhaps (and I hope so) they made a sufficient number of changes that it deserves the new RDNA name, despite being GCN at the base of the architecture. It remains to be seen once more architectural detail emerges...
     
  6. Picao84

    Picao84 Veteran

  7. DieH@rd

    DieH@rd Legend

    Minus the power for the GDDR6, either of the two chips could end up in the PS5. [I still hope that consoles will break the traditional 200W total power limit]

    In the PS4's case, it was an HD 7870 chip with 20 CUs, 2 of them turned off, so in reality it sat in the middle between the 7870 [20 CU] and the 7850 [16 CU]. [with modifications for more granular async compute]
     
  8. del42sa

    del42sa Newcomer


  9. Wccftech's full quote:


    So the 180W TDP chip goes onto 225W TBP boards, and the 150W TDP chip onto 180W TBP boards.
    The TDP-to-TBP difference for the higher-end model is 45W, whereas for the lower-end model it's only 30W.

    AMD is part of the VirtualLink Consortium so these cards may be coming with a USB-C connector that must provide at least 15W to power VR headsets.
    This difference could come from the fact that only the higher-end model bundles a USB-C VirtualLink port, which would explain the 15W gap between the two models' TDP-to-TBP deltas.
    It would also put the power consumption of the 8 GDDR6 chips, plus power circuitry and video outputs, at 30W.

    Using some (VERY!) rough calculations, assuming the power regulators have a ~95% efficiency, then ~200W x ~0.05 = ~10W are lost in voltage converters, leaving ~20W for the 8 GDDR6 chips, or 2.5W per chip.
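    The rough arithmetic above can be sketched as follows. All figures are the quote's own estimates (225W/180W board and chip power, 15W VirtualLink minimum, ~95% regulator efficiency), not measured values:

```python
# Rough power-budget breakdown for the hypothetical higher-end Navi board.
# All numbers are the forum post's own estimates, not measured values.

tbp = 225         # total board power, W
tdp = 180         # GPU chip TDP, W
virtuallink = 15  # minimum USB-C VirtualLink power budget, W

non_gpu = tbp - tdp                   # power for everything besides the GPU die
mem_and_vrm = non_gpu - virtuallink   # memory + regulators + video outputs

# Assume ~95% regulator efficiency on ~200 W of delivered power.
vrm_loss = round(200 * 0.05)          # ~10 W lost in voltage conversion
gddr6_total = mem_and_vrm - vrm_loss  # what's left for the memory chips
per_chip = gddr6_total / 8            # 8 GDDR6 chips

print(non_gpu, mem_and_vrm, gddr6_total, per_chip)  # → 45 30 20 2.5
```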
     
  10. Alexko

    Alexko Veteran Subscriber

    Interesting point about the USB-C connector. But it could also just be that the higher-end variant uses significantly faster memory.
     
    Last edited: May 28, 2019
  11. Bondrewd

    Bondrewd Veteran

    It's maybe a 14 vs 16 Gbps difference.
    Shouldn't be that expensive in terms of power.
     
  12. Love_In_Rio

    Love_In_Rio Veteran

    What are the estimated power draws for:

    - 8-core Zen 2
    - 16 GB of HBM2

    ?

    I'm now leaning towards believing the "leak" about the PS5 going HBM2 to save power.
     
    Pete likes this.
  13. iamw

    iamw Newcomer

    GCN.png
    Navi appears to have broken many of the GCN rules in this picture.
     
  14. 3dilettante

    3dilettante Legend Alpha

    Currently, the primary changes I can think of that are documented in code changes are no VGPR port conflicts and back-to-back wavefront instruction issue.
    There are also rumors indicating that something like the specific SIMD configuration might change.

    To return one more time to my opinion that this is influenced by marketing or product evangelism: I think many of these aren't really rules, or at least shouldn't be.

    Everything from standard compiler down is more of a description of AMD's design preferences or aspirations, rather than a quantified element of an architecture.

    Some of the others, including No VGPR conflicts, could be construed as more firm rules, though my opinion is that a number of them allow extant low-level details to constrain future design changes.
    Some, like describing the number of SIMDs, MACs, or instruction bandwidth are just raw numbers that are nice to know but do not or should not bind anything.

    3 SRC GPRs is something I think would be a more fundamental architectural rule, though I don't think the code changes hinted at that being modified.
     
    Lightman, BRiT, yuri and 1 other person like this.
  15. del42sa

    del42sa Newcomer

    Actually, 1.5x efficiency doesn't look that great if we consider the 14nm-to-7nm jump plus architecture changes and 1.25x IPC...
     
  16. itsmydamnation

    itsmydamnation Veteran

    Why doesn't it? What does IPC have to do with efficiency? I watched the keynote, read the published descriptions, and read the round-table transcript; nowhere has AMD actually stated what the 1.5x efficiency relates to.

    But on another note: you posted earlier that you agree with Buildzoid, but his entire rant was that he doesn't care about uarch (and that really shows, because a lot of the stuff he said is really quite stupid), only about end performance, yet here you are trying to use those things to shit on a product you know almost nothing about. So which way is it?
     
    Lightman likes this.
  17. Bondrewd

    Bondrewd Veteran

    Either way, what's with the total lack of any relevant Navi leaks?
    Was the RDNA acronym ever actually leaked?
    The "stupidity is contagious" one.
     
  18. anexanhume

    anexanhume Veteran

    Haven’t had a chance to fully dig into the links being made here, but they suggest RDNA is indeed Super-SIMD as conceived of in the patent.

     
    OCASM, w0lfram, Lightman and 7 others like this.
  19. del42sa

    del42sa Newcomer

    Actually, they said: "And then, when you put that together, both the architecture – the design capability – as well as the process technology, we're seeing 1.5x or higher performance per watt capability on the new Navi products"

    Although they did not specify which chip they used for comparison (which makes the statement a bit vague), it's clearly 14/12nm, and it's already been posted here in this forum.

    Higher IPC has one thing to do with efficiency: if the chip can do more instructions per clock, and thus deliver more performance, it can be clocked lower and consume less power. Wasn't that the problem for most previous GCN chips? High power consumption?
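    The clock-versus-power trade described above can be illustrated with the textbook first-order dynamic-power model, P ≈ C·V²·f. The voltage and frequency numbers below are invented for illustration; they are not AMD figures:

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# All concrete numbers here are made up purely for illustration.

def dynamic_power(cap, volts, freq_ghz):
    """Relative dynamic power for a given switched capacitance, voltage, clock."""
    return cap * volts ** 2 * freq_ghz

# Baseline design at 1.8 GHz and 1.2 V.
base = dynamic_power(1.0, 1.2, 1.8)

# A hypothetical 1.25x-IPC design reaches the same throughput at
# 1.8 / 1.25 = 1.44 GHz, which we assume permits dropping to 1.05 V.
improved = dynamic_power(1.0, 1.05, 1.44)

print(f"relative power at equal throughput: {improved / base:.2f}")
# → 0.61, better than the 0.8x that the clock cut alone would give
```

    Because voltage enters quadratically, a frequency cut that also allows a voltage cut saves more power than the frequency reduction alone.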

    On the Buildzoid note: I said I agree with most of them, not with all of them! But I am curious, what did he say that was really quite stupid? That GCN didn't scale well with high SP counts? Or what?

    End note: I think you should choose your words carefully when communicating with others...
     