AMD: Navi Speculation, Rumours and Discussion [2019-2020]

OK guys, tell me if I am crazy here: is Navi using a chiplet setup?

[Attached image: COMPUTEX_KEYNOTE_DRAFT_FOR_PREBRIEF.26.05.19-page-0122.jpg]

Navi uses Infinity Fabric:
https://pics.computerbase.de/8/8/1/1/7/16-1080.9ce6ffcb.jpg

Ryzen 3000 chiplets:
http://www.comptoir-hardware.com/images/stories/_cpu/7nm_amd/ryzen3000-package.jpg
Vega has IF too.

David Wang said Navi will only be monolithic, and that a chiplet GPU is essentially Crossfire.
 
Cost per mm^2 yielded is also going up.
Is client dGPU big enough of a market to amortize several sizeable N5 dies across the product stack?
Damned if I know.
That said, I did ask an AMD engineer about the costs of doing "derivative" chips, for instance halved or doubled from a given design, and he confirmed that such projects are a lot less costly to do. Given that there is a sizeable low-end and laptop market, I would assume those chips will pay off. The larger the chip, the murkier the proposition, of course. Then again, the PC market is an upgraders' market, and AMD and nVidia need to keep those upgrades happening. That the tail end of a lithographic process has the largest chips makes sense on many levels.
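To put some rough numbers on the cost argument, here's a back-of-the-envelope Python sketch using the classic dies-per-wafer approximation and a simple Poisson yield model; the wafer cost and defect density below are placeholder assumptions, not real foundry figures:

```python
# Rough illustration of why cost per yielded mm^2 climbs with die size.
# Wafer cost and defect density are placeholders, not real foundry data.
import math

WAFER_COST = 17000.0      # assumed cost of one leading-edge 300 mm wafer, USD
WAFER_DIAMETER = 300.0    # mm
DEFECT_DENSITY = 0.001    # defects per mm^2 (i.e. 0.1 per cm^2, assumed)

def dies_per_wafer(die_area_mm2: float) -> float:
    """Classic gross-dies-per-wafer estimate accounting for edge loss."""
    d = WAFER_DIAMETER
    return (math.pi * (d / 2) ** 2) / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2)

def yield_rate(die_area_mm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * DEFECT_DENSITY)

def cost_per_good_mm2(die_area_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return WAFER_COST / (good_dies * die_area_mm2)

for area in (150, 250, 500):
    print(f"{area} mm^2 die: ~${cost_per_good_mm2(area):.2f} per good mm^2")
```

With these made-up inputs the cost per good mm^2 rises from roughly $0.32 at 150 mm^2 to $0.50 at 500 mm^2, which is the whole "larger chip, murkier proposition" point.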
 
Volta's transistor count is actually higher than Turing's: 21.1B for 5120 SPs.
Oh, right. I just realized a simple Turing vs. Volta comparison is made a bit harder by the fact that Volta basically only has a single chip, and one with a completely different memory PHY and far greater DP floating-point acceleration at that. That being said, Turing did gain Volta's "dual-issue" SM, Tensor cores and the NVLink connector (not on the specific model chosen for the comparison, but the larger dies did get NVLink), doubled the L2 cache size, increased the TMU count (a bit), reorganized the SM to contain half the number of cores (so some things, like total GPU L1 cache, more than doubled when comparing similar core counts; Pascal's core count per SM was already half of Maxwell's), and there is now an L0 (micro-ops?) cache.

So the cores themselves are already a bit harder to compare, especially since Pascal doesn't have separate INT32 cores, whereas Volta and Turing do (Volta also has separate FP64 cores).

At any rate, I now venture Turing's RT cores probably don't take up all that much space, though Tensor cores probably do.

Even the Turing chip that has all of that stripped out, TU116, shows a ~25% increase in transistor count per "core."
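As a back-of-the-envelope check on that ~25% figure, here's the arithmetic using the commonly cited public transistor and FP32 core counts (treat the exact values as approximate):

```python
# Transistors per FP32 core, using commonly cited public figures.
chips = {
    "GP106 (Pascal)":               (4.4e9, 1280),
    "TU116 (Turing, no RT/Tensor)": (6.6e9, 1536),
    "GV100 (Volta)":                (21.1e9, 5120),
    "TU102 (Turing)":               (18.6e9, 4608),
}

for name, (transistors, cores) in chips.items():
    print(f"{name}: {transistors / cores / 1e6:.2f} M transistors per core")

gp106 = 4.4e9 / 1280
tu116 = 6.6e9 / 1536
print(f"TU116 vs GP106: {tu116 / gp106 - 1:+.0%}")  # roughly +25%
```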

I'd also want to see a TU117 vs. Pascal-analog comparison, since the smallest Turing also removes the HEVC B-frame encoder in favor of the Volta HEVC encoder. I'd have to imagine that was done to save die space, not licensing costs, though the licensing costs are reportedly a factor in Google, Amazon, Microsoft, Netflix, etc. choosing to fund their own competitor.

---

Rats, I cannot edit yet. I think my previous post came off as being sarcastic in the first few lines. I'm sorry, that wasn't the intent.

I also intended to link a Nvidia page comparing the GP106 to TU116, to support the second-to-last "paragraph."
https://www.nvidia.com/en-us/geforc...ti-advanced-shaders-streaming-multiprocessor/

---

Whoops, attempting to add a link in my apology & clarification nuked it.


Indeed, RDNA 1 / Navi seems like an architecture that was originally developed for release in GPUs in late 2017 / early 2018, probably targeting one or more cancelled nodes. E.g. during early 2016 they had planned it for a late 2017 release on GF 10nm, before knowing the fab would skip directly to 7nm; then until mid 2017 they had planned it for Q3/Q4 2018 on GlobalFoundries' 7nm DUV, before knowing the fab would drop out of high-end nodes altogether.
Mix that with the incredibly low R&D budget they had for GPUs until 2018 and they just couldn't keep up with the redesigns, so the GPU kept getting delayed over and over, forcing AMD to compete in the gaming segments with GCN GFX9 GPUs.

The end result is a GPU that (finally) competes well on power because it's gaming-focused and on a recent node, but isn't bundling any technology or standard that would be expected of a GPU released in Q3 2019, like HDMI 2.1, VirtualLink, variable rate shading or hardware acceleration for DXR.

The pricing is a bit of a let-down to me, but it seems to be based solely on nvidia's offerings. I can't see anything that says the 5700 XT couldn't be sold for $300, and it probably will be after 3 or more quarters, once the higher-end RDNA 2 card and nvidia's 7nm cards come out.
Without any new hardware features, I wonder if people won't find the currently discounted Vega 10 cards to be a better price/performance proposition than the 5700 family, and whether reviewers will point out the poor price/performance relative to their predecessors, like they did with the RTX series against Pascal.
Every single AMD RTG graphics card release has been a monkey's paw wish (i.e. there's always some negative factor that takes center stage), and I'm thinking those prices might be their undoing, especially with the rumors of heavy price cuts on nvidia parts.


I wonder what the originally planned "January 2018 Navi" would have looked like. There would have been no 7nm and no GDDR6, and GDDR5X seems to have been almost exclusive to nvidia. 384-bit GDDR5 and more CUs to compensate for lower clocks?
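For a rough sense of what that hypothetical configuration would have offered, here's the bandwidth arithmetic; the 8 Gbps GDDR5 data rate is an assumption, while 14 Gbps GDDR6 is what the 5700 XT actually shipped with:

```python
# Peak memory bandwidth: hypothetical 384-bit GDDR5 vs the 256-bit GDDR6
# configuration Navi 10 shipped with. Data rate per pin is in Gbps.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 8.0))   # 384.0 GB/s - hypothetical 384-bit GDDR5 at 8 Gbps
print(bandwidth_gb_s(256, 14.0))  # 448.0 GB/s - RX 5700 XT's GDDR6 at 14 Gbps
```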


OK guys, tell me if I am crazy here: is Navi using a chiplet setup?
What exactly in a high resolution picture of a single-chip GPU makes you believe it's using a chiplet setup?
 
By far my biggest disappointment is the price: a ~250 mm² die with 8 GB of memory. The last time AMD had one of those, it sold for $240 USD.

Instead of Navi, maybe I will toss up the idea of strapping an AIO onto my Vega 56...

Completely agree. I was looking for a GPU between 200 and 300 bucks; it seems the mid-range is now 400-500 dollars. Fuck, if this continues I will just pay for something like Stadia in the future.

With this pricing I'm going for a 1660 or 1660 Ti... especially now that they support FreeSync.
 
OK guys, tell me if I am crazy here: is Navi using a chiplet setup?

[Attached image: COMPUTEX_KEYNOTE_DRAFT_FOR_PREBRIEF.26.05.19-page-0122.jpg]

Navi uses Infinity Fabric:
https://pics.computerbase.de/8/8/1/1/7/16-1080.9ce6ffcb.jpg

Ryzen 3000 chiplets:
http://www.comptoir-hardware.com/images/stories/_cpu/7nm_amd/ryzen3000-package.jpg

IF is more about a coherent interconnect between components (e.g. within an APU, or even just between core/uncore blocks on a single die). Chiplet/multi-die setups are just another use case for Infinity Fabric, across physical dies.

https://www.overclock3d.net/news/gp...ega_utilises_their_new_infinity_fabric_tech/1

https://en.wikichip.org/wiki/amd/infinity_fabric
 
Completely agree. I was looking for a GPU between 200 and 300 bucks; it seems the mid-range is now 400-500 dollars. Fuck, if this continues I will just pay for something like Stadia in the future.

With this pricing I'm going for a 1660 or 1660 Ti... especially now that they support FreeSync.

On the positive side, GPUs age slowly nowadays. I have a 2+ year old 1080 Ti and, on the performance side of things, it really is holding up very well. It probably takes around 5+ years now to get double the performance at the same price point (fire-sale/crypto-bust discounts excluded).

To me, the smart money is in getting a higher-end GPU and upgrading less often. Also, a G-Sync/FreeSync monitor makes a giant difference to the gaming experience.
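Purely as illustrative arithmetic, a five-year doubling at the same price point works out to a fairly modest annual gain:

```python
# Implied year-over-year improvement if perf/$ doubles roughly every 5 years.
doubling_years = 5
annual_gain = 2 ** (1 / doubling_years) - 1
print(f"~{annual_gain:.0%} faster per year at the same price point")  # ~15%
```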
 
On the positive side, GPUs age slowly nowadays. I have a 2+ year old 1080 Ti and, on the performance side of things, it really is holding up very well. It probably takes around 5+ years now to get double the performance at the same price point (fire-sale/crypto-bust discounts excluded).

To me, the smart money is in getting a higher-end GPU and upgrading less often. Also, a G-Sync/FreeSync monitor makes a giant difference to the gaming experience.

I disagree. For me, the smart buy is in the mid-range, between 200 and 300. You spent how much, 800 bucks? I can upgrade my, let's say, $250 card 3 times and keep up with new tech like RT. But with the new mid-range at 400 bucks, this is getting ridiculous. It's now completely the opposite of before: on CPUs AMD is killing it, while on GPUs it's killing us.
 

What exactly in a high resolution picture of a single-chip GPU makes you believe it's using a chiplet setup?
It was the Infinity Fabric. I think it's the first time it's been used in a monolithic GPU without HBM RAM?

OK, I have another question. Is the 10-CUs-per-shader-engine design mandatory for RDNA? Is it possible to use, for instance, 8 CUs (well, 4 dual compute units) per SE?
 
Completely agree. I was looking for a GPU between 200 and 300 bucks; it seems the mid-range is now 400-500 dollars. Fuck, if this continues I will just pay for something like Stadia in the future.


I think a more pressing concern with IHVs pushing these prices on mid-range cards is a mass migration of PC gamers to consoles. These "neo mid-range" $500 cards, paired with $200-300 CPUs and $150 worth of RAM, aren't offering a lot more than the mid-gen consoles that are now selling for less than $400.

With these prices, I feel like both IHVs are shooting themselves in the foot.

And if nvidia came out admitting their RTX line didn't make them nearly as much money as they had hoped, AMD trying to hit the same price range on the back of a 5-10% performance advantage seems a bit stupid IMO.
Unless they're producing Navi 10 in low volumes and aren't really interested in selling it to the masses.
 
I disagree. For me, the smart buy is in the mid-range, between 200 and 300. You spent how much, 800 bucks? I can upgrade my, let's say, $250 card 3 times and keep up with new tech like RT. But with the new mid-range at 400 bucks, this is getting ridiculous. It's now completely the opposite of before: on CPUs AMD is killing it, while on GPUs it's killing us.

I paid a little under $700, planning to keep it for ~5 years. As of today, I would guess the 1080 Ti is only bested by the 2080 Ti and trades blows with the 2080 and Radeon VII. Pretty good for an old piece of junk.

I'm much more in the camp of buying high end and using it for a long time to get the value, rather than getting new mid-tier crap every 1.5 years. There was a time when upgrading often made sense, but that is no longer the case.
 
I paid a little under $700, planning to keep it for ~5 years. As of today, I would guess the 1080 Ti is only bested by the 2080 Ti and trades blows with the 2080 and Radeon VII. Pretty good for an old piece of junk.

I'm much more in the camp of buying high end and using it for a long time to get the value, rather than getting new mid-tier crap every 1.5 years. There was a time when upgrading often made sense, but that is no longer the case.

Purchasing top-end graphics cards has only ever been about the premium experience of getting the best of the best, and never because the price/performance per year made more sense than buying mid-range cards more often.

These past 3 years are the sole historical exception, due to the crypto boom inflating everything and then IHVs trying to ride the inflated price wave after the crypto crash, to recoup the costs of competing with their own products that are being sold on the second-hand market.
 
So, in the end, we have our next-gen super-SIMD architecture? RDNA 2, the so-called next gen, is then this plus ray tracing? It makes sense to evolve the rasterizing architecture first and add RT support afterwards.
 
Also, streaming services... IIRC you can pay for 6 years of Google Stadia with the money they ask for a Radeon VII (to play at 4K). And we all know it will evolve with new tech over time, so you are not limited by a fixed piece of hardware. Yes, we will have to see the latencies, but the difference is absurd.
 
Is a big Navi (64 CUs or more) really a coming thing? I mean, they are already at 225 W with the 40-CU version on 7nm; I don't get how they can do a bigger chip without it consuming around 300 W...

The prices are what I predicted; AMD needs to make money. nVidia will have to move the 2060 and 2070 a bit, but it's still not a threat.

Meh, waiting for the "true" or "full" next gen now. Navi 10 seems, as always, 1 or 2 years late...

The RX 5700 (non-XT) is a 36-CU, 180 W board.

64/36 × 180 = 320 W.

Replace the GDDR6 with HBM2/3, maybe reduce the clock speed a bit, and you've got a PCIe-compliant, 64-CU RDNA GPU.
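As a sketch of that scaling argument: scale board power linearly with CU count, then optionally apply a rough P ∝ f·V² correction for a wider part run at lower clocks. The 10% clock and 5% voltage reductions below are purely illustrative assumptions, not leaked figures:

```python
# Scale the RX 5700 (non-XT) board power linearly with CU count, then apply
# a rough P ~ f * V^2 correction for a lower-clocked, lower-voltage wide part.
BASE_CUS, BASE_POWER_W = 36, 180  # RX 5700 (non-XT) figures from the post above

def scaled_power(target_cus: int, clock_scale: float = 1.0, voltage_scale: float = 1.0) -> float:
    linear = BASE_POWER_W * target_cus / BASE_CUS
    return linear * clock_scale * voltage_scale ** 2

print(scaled_power(64))             # 320 W: naive linear scaling
print(scaled_power(64, 0.9, 0.95))  # ~260 W: 10% lower clocks, 5% lower voltage
```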
 
The RX 5700 (non-XT) is a 36-CU, 180 W board.

64/36 × 180 = 320 W.

Replace the GDDR6 with HBM2/3, maybe reduce the clock speed a bit, and you've got a PCIe-compliant, 64-CU RDNA GPU.
If the consoles both use RDNA 2 with more CUs, they must have secured lower-voltage chips, or there is no way to fit the power envelope.
 
So, in the end, we have our next-gen super-SIMD architecture? RDNA 2, the so-called next gen, is then this plus ray tracing? It makes sense to evolve the rasterizing architecture first and add RT support afterwards.
And VRS.
Don't think we know anything more about RDNA 2.

I'm assuming HDMI 2.1.
 