AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Why would it be anything else?
The one in Frontier is also GCN.

When was the last time a GPU vendor did that?

Because GCN is clearly lagging behind what Nvidia is doing, and it seems harder and harder to get good gains with each GCN iteration. They need to make a big jump, like TeraScale to GCN, to stay in the game IMO. The hope was that Navi might be that jump.
 
Isn't GCN an ISA? I doubt AMD will drop it completely in the foreseeable future. Every iteration of GCN changes some functions, but the core of it still gets called GCN. Switching off GCN is kind of like switching off x86 for CPUs: even after x64 came out, people still call it x86.

They can probably redo the whole core and its blocks and it can still be GCN. The only reason to change the name would be a rebrand to signal a significant change.
 

Edit: the tweet was deleted. It alleged Navi uses a 2x32 SIMD arrangement (or 2x16x2) instead of Vega's 4x16.
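For context, here is a toy model of what that change would mean for issue cadence, assuming GCN's documented behavior of executing a 64-wide wavefront on a 16-lane SIMD over four cycles; the 2x32 figures are only what the deleted tweet alleged, nothing confirmed:

```python
# Toy model: cycles for one wavefront to issue through one SIMD.
# GCN executes a wave64 on a 16-lane SIMD over 4 cycles; the rumored
# 32-lane SIMDs could retire a 32-wide wave every cycle. The second
# configuration is speculative, matching the deleted tweet only.
def cycles_per_wave(wave_size, simd_lanes):
    return -(-wave_size // simd_lanes)  # ceiling division

print(cycles_per_wave(64, 16))  # Vega-style 4x16 SIMD: 4 cycles per wave64
print(cycles_per_wave(32, 32))  # rumored 2x32 SIMD: 1 cycle per wave32
```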

more "rumors":

[attached rumor screenshots]


Take it with a mountain of salt (as usual).
 
Because GCN is clearly lagging behind what Nvidia is doing, and it seems harder and harder to get good gains with each GCN iteration. They need to make a big jump, like TeraScale to GCN, to stay in the game IMO. The hope was that Navi might be that jump.

What part of GCN's architecture do you consider to be such a bottleneck compared to what Nvidia is doing?
 
What part of GCN's architecture do you consider to be such a bottleneck compared to what Nvidia is doing?

I look at the absolute performance, and the way AMD is always late to the party. Plus, performance per watt is often not great at all; it seems to me they're pushing their GPUs past the sweet spot (sorry for my English).
I mean, am I the only one seeing that AMD hasn't challenged Nvidia in the high end for years now? And if they can't produce a good high-end card with GCN, that's a problem.
 
Bandwidth efficiency is pretty obvious too.
Is it? Bandwidth efficiency is one thing; providing more bandwidth to compensate for insufficient overall performance caused by low GPU clocks is another. E.g. Raven Ridge with 47 GB/s shared with four CPU cores offers gaming performance comparable to the GeForce GT 1030, which has 48 GB/s just for the GPU. How is that possible if there's an issue with bandwidth efficiency?
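The quoted figures check out with simple peak-bandwidth arithmetic, assuming dual-channel DDR4-2933 for Raven Ridge and the 64-bit 6 Gbps GDDR5 variant of the GT 1030:

```python
# Peak bandwidth = transfer rate * bus width in bytes.
def peak_bandwidth_gbs(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits // 8) / 1000

# Dual-channel DDR4-2933 = 128-bit bus, shared with the CPU cores.
print(f"Raven Ridge: {peak_bandwidth_gbs(2933, 128):.1f} GB/s")  # ~46.9
# GT 1030 (GDDR5 variant): 6000 MT/s on a 64-bit bus, GPU-only.
print(f"GT 1030:     {peak_bandwidth_gbs(6000, 64):.1f} GB/s")   # 48.0
```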
 
Is it? Bandwidth efficiency is one thing; providing more bandwidth to compensate for insufficient overall performance caused by low GPU clocks is another. E.g. Raven Ridge with 47 GB/s shared with four CPU cores offers gaming performance comparable to the GeForce GT 1030, which has 48 GB/s just for the GPU. How is that possible if there's an issue with bandwidth efficiency?
Isn’t it pretty much universally accepted that Nvidia has better compression and better utilized/larger caches?
 
Lisa's keynote is unrelated to the Navi session.
The title of the keynote doesn't say much about its content, which is why I specifically referenced Anandtech, which stated that she would indeed have Navi as (one of) her topics.
And of course Navi has a session as well.
It would be odd for AMD to dwell so much on Navi if all they have to say is that it's the same old thing, only on 7nm.
 
Isn’t it pretty much universally accepted that Nvidia has better compression and better utilized/larger caches?
What does compression have to do with the ALUs (you know, the GCN part)? And for Nvidia having better caching, what part of that affects GCN? Nvidia's caching and bandwidth advantages seem related particularly to the ROPs.

The anecdote I have for Vega (I own a 56) is that games that run well clock lower and games that don't clock higher; to me this points to a feeding-the-beast problem, not a reinvent-the-wheel problem.

AMD have come out and said Zen is front-end limited; they aren't throwing away x86 and the complete Zen uarch now, are they? They're just fixing those pieces.

Fix the major bottlenecks, make general incremental improvements across the board, add the 7nm gains, and it should be a pretty good product.
If you give AMD 3-4 years for a new, from-the-ground-up GPU design, with a start date after the Zen launch (R&D/revenue constraints), then we are still 1-2 years away from anything like that, if they feel they need to replace GCN at all.
 
I look at the absolute performance, and the way AMD is always late to the party.
Being late to the party and choosing not to compete with >750 mm² GPUs is, AFAICS, an AMD problem, not a GCN one.


Well the obvious one is power usage.
Plus, performance per watt is often not great at all; it seems to me they're pushing their GPUs past the sweet spot.
AMD GPUs do have a problem with the clocks they can reach within the optimal power curve, which is why AMD always positions their desktop parts clearly above that curve.
But is that an intrinsic GCN problem?
Would changing, e.g., from the RISC SIMD to the "Super SIMD" that we saw in a patent solve the clock/power-curve problem?
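A crude sketch of why running above the sweet spot is so expensive: dynamic power scales roughly as C·V²·f, so the extra voltage needed to chase frequency makes perf/W fall off quadratically. The V/f points below are made up for illustration, not measured Vega data:

```python
# Dynamic power ~ C * V^2 * f; perf/W therefore falls as 1/V^2 once
# higher clocks demand higher voltage. Hypothetical V/f points only.
vf_points = [(1200, 0.90), (1400, 1.00), (1600, 1.15)]  # (MHz, volts)
for mhz, volts in vf_points:
    rel_power = volts**2 * mhz          # arbitrary units (C dropped)
    print(f"{mhz} MHz @ {volts:.2f} V -> perf/W ~ {mhz / rel_power:.2f}")
```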

Because to me this seems to be more of a process engineering problem than an architectural one, which is why they sent Zen engineers to help RTG.

Bandwidth efficiency is pretty obvious too.
To be honest, this is the only architectural problem that I recognize, though I wonder how much of it is inherent to GCN (cache hit rates, I guess?) and how much is just less effective color compression.
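For what it's worth, the basic idea behind delta color compression is easy to sketch, even though neither vendor documents its actual format; this toy version only shows why smooth gradients compress well while noisy blocks fall back to raw storage:

```python
# Toy delta color compression: store a block as one base pixel plus
# small deltas when they fit in 4 bits, else keep it raw. Real DCC
# formats are proprietary; this is purely illustrative.
def compress_block(pixels):
    base, deltas = pixels[0], [p - pixels[0] for p in pixels[1:]]
    if all(-8 <= d < 8 for d in deltas):
        return ("compressed", base, deltas)   # roughly half the bytes
    return ("raw", pixels)

print(compress_block([100, 101, 103, 102])[0])  # smooth block -> "compressed"
print(compress_block([100, 200, 15, 240])[0])   # noisy block  -> "raw"
```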

Frontend scaling too - the same config is found in both the RX 570 and the VII.
Would the Radeon VII benefit from a wider front end?
It seems to me they didn't address that question because no product in their pipeline really needed a wider front end: Vega 10 would have gotten too big for a part that had to scale down, and Vega 20 would have gotten too big for a 7nm pipecleaner.

Vega 10 would have been competitive with the 1080 Ti if it could also turbo up to 1750 MHz, and if it did we wouldn't be talking about the 4-shader-engine "limit".
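Back-of-the-envelope numbers for that what-if; the 1750 MHz clock is hypothetical, while the shader counts and the 1080 Ti boost clock are the public specs:

```python
# Peak FP32 = shader count * 2 ops per clock (FMA) * clock in GHz.
def tflops(shaders, ghz):
    return shaders * 2 * ghz / 1000

print(f"Vega 10 @ 1.75 GHz (hypothetical): {tflops(4096, 1.75):.1f} TFLOPS")  # ~14.3
print(f"GTX 1080 Ti @ ~1.58 GHz boost:     {tflops(3584, 1.58):.1f} TFLOPS")  # ~11.3
```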
 
Was Komachi's tweet deleted? It doesn't seem to be part of the thread anymore.

Yes, it was deleted very quickly, so there may be some truth to it... Who knows; take it with the usual amount of salt. The strange thing about this info is why they would make such a big change with the last GCN and not with Fiji/Vega already.


It got removed quickly, while I was writing this post... but I have a backup: https://uploads.disquscdn.com/image...f008d89a5810ca72ae0155f87a805.png?w=800&h=514
 
If this is true, will it help with driver development too? Like, will it be easier to get good utilization/performance, or won't it matter much because this is handled at the hardware level?
 
If this is true, will it help with driver development too? Like, will it be easier to get good utilization/performance, or won't it matter much because this is handled at the hardware level?

Well, yes and no.

A) Why bother with primitive shaders and the NGG gimmick? Why push developers to do special coding in the game engine to support primitive shaders if you have 8 geometry engines and a wider raster engine?
B) It won't solve the high power consumption of GCN chips.
C) You still need to optimize drivers for games.

However, we still don't know if the 8 SE figure is true.
 
Why bother with primitive shaders and the NGG gimmick?
It's as much a gimmick as mesh shaders are (i.e., not at all).
It won't solve the high power consumption of GCN chips.
You haven't seen anything remotely resembling actual Navi chips or their power targets.
You still need to optimize drivers for games.
Well, I can't believe IHVs will do the thing they have been doing for over 2 decades now!
 