AMD: Navi Speculation, Rumours and Discussion [2019-2020]

I'm still skeptical about this magical cache. If it were so easy to just add more cache, why hasn't it been done before, even on a smaller scale? But at the same time, what I find more intriguing is why Sony is being so extremely secretive about its GPU. We have no pics of the chip, or even the PCB, or anything. Maybe AMD is prohibiting its reveal until they officially launch the architecture? Or maybe these specs are AMD doing the same as Nvidia, "leaking" false specs to hide the real ones.

For me, I just want to be able to play at 1440p at 120 FPS for less than 400 bucks.
 
In my analysis, a "256-bit double Navi 21" comes out at ~363 mm². Wouldn't it be funny if the rumoured die sizes were all for the "next chip up in size"...
That is an interesting thought. The 128 CU CDNA chip with 4096-bit HBM would be the ~500 mm² ASIC in that case.
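Just to show the shape of that kind of estimate, here is a back-of-the-envelope sketch. Every input below (uncore area, density gain) is an assumption I've made up for illustration, not a measurement, and "double" is read as doubling Navi 10's shader array while keeping a 256-bit uncore:

```python
# Back-of-the-envelope estimate: double Navi 10's shader array, keep a single
# 256-bit uncore. Every input here is a made-up assumption, not a measurement.
navi10_total = 251.0   # mm^2, Navi 10 (RX 5700 XT) die size
uncore = 70.0          # mm^2, assumed: 256-bit GDDR6 PHYs, display/media, misc IO
shaders = navi10_total - uncore      # area that roughly scales with CU count

density_gain = 0.80    # assumed layout/density improvement for the newer chip
estimate = uncore + 2 * shaders * density_gain
print(f"~{estimate:.0f} mm^2")       # ~360 mm^2 with these inputs
```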

More GDDR6 PHYs alone don't give the chip more effective bandwidth. They'd need to pair them with more memory chips, which come at a (very unpredictable, lately) cost. Plus, it seems that GDDR6 is especially picky in regards to signaling and PCB placement, which is probably why 384-bit arrangements are now reserved for >$1400 graphics cards (Nvidia had 384-bit GDDR5 cards in the $650 range).
I don't think you can draw any conclusions about GDDR6 pricing based on an enthusiast+ segment GPU that has a newly developed signaling spec and uses 24 GDDR6X ICs.
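For what it's worth, the raw arithmetic behind "more PHYs need more ICs" is simple: peak bandwidth is just bus width times per-pin data rate. The speeds below are illustrative:

```python
# Peak GDDR6 bandwidth is just bus width (bits) x per-pin rate (Gbps) / 8.
def peak_bw_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8.0  # GB/s

print(peak_bw_gbs(256, 14.0))  # 448.0 GB/s -- 8 ICs at 32 bits each
print(peak_bw_gbs(384, 14.0))  # 672.0 GB/s -- needs 12 ICs and tighter routing
```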

For me, I just want to be able to play at 1440p at 120 FPS for less than 400 bucks.
IMO, that has been possible for over a year now with the RTX 2060S and RX 5700/XT, though it still depends on the game and using sweet-spot IQ/performance settings. As long as you have realistic expectations for your minimums, it is doable, and FreeSync definitely helps.
 
IMO, that has been possible for over a year now with the RTX 2060S and RX 5700/XT, though it still depends on the game and using sweet-spot IQ/performance settings. As long as you have realistic expectations for your minimums, it is doable, and FreeSync definitely helps.

Yes, but none are less than 400 bucks new, and you still need to lower the quality and forget about some effects. I want a more flawless experience, with some performance headroom to maintain it for a reasonable time.
 
Yes, but none are less than 400 bucks new, and you still need to lower the quality and forget about some effects. I want a more flawless experience, with some performance headroom to maintain it for a reasonable time.
5700 XTs can be had new for under $400.

There's an MSI for $375 after rebate at Newegg.
 
I don't think you can draw any conclusions about GDDR6 pricing based on an enthusiast+ segment GPU that has a newly developed signaling spec and uses 24 GDDR6X ICs.
OK. How about all the 384-bit implementations of non-X GDDR6 releasing at $2500 or more?
 
Yes, but none are less than 400 bucks new, and you still need to lower the quality and forget about some effects. I want a more flawless experience, with some performance headroom to maintain it for a reasonable time.

My guess is, since Navi is one feature set all the way through, the lowest-end cards (128-bit bus?) will have RX 590-ish performance for $200-250, but that might only hit 1080p at best. If you want more than that, though, well... hopefully the 6500... 6400? and 3060 will be $299.
 
I'm still skeptical about this magical cache. If it were so easy to just add more cache, why hasn't it been done before, even on a smaller scale? But at the same time, what I find more intriguing is why Sony is being so extremely secretive about its GPU. We have no pics of the chip, or even the PCB, or anything. Maybe AMD is prohibiting its reveal until they officially launch the architecture? Or maybe these specs are AMD doing the same as Nvidia, "leaking" false specs to hide the real ones.

For me, I just want to be able to play at 1440p at 120 FPS for less than 400 bucks.

Or maybe it's hard to have an efficient cache, and you need the right architecture to exploit it fully. So, it was not on the table before RDNA2.
 
My guess is, since Navi is one feature set all the way through, the lowest-end cards (128-bit bus?) will have RX 590-ish performance for $200-250, but that might only hit 1080p at best.

That already exists, and it's called the RX 5500 XT. There's no reason to launch such a similar card unless they can make it significantly cheaper.
 
My guess is, since Navi is one feature set all the way through, the lowest-end cards (128-bit bus?) will have RX 590-ish performance for $200-250, but that might only hit 1080p at best. If you want more than that, though, well... hopefully the 6500... 6400? and 3060 will be $299.

I was thinking 6600/6700 for $350-400, maybe (for me that would be about 600 dollars because of taxes...), as it seems AMD will compete on price, so I expect cheaper prices than Nvidia.


Or maybe it's hard to have an efficient cache, and you need the right architecture to exploit it fully. So, it was not on the table before RDNA2.

Yeah, but something like that out of nowhere? Sure, AMD can innovate, as we saw with Zen, but just creating a big cache to "forget" about the RAM interface? I find it hard to believe that Nvidia, with its huge R&D budget, hasn't tried it, or even AMD with a simpler implementation in RDNA, like going from Zen 1 to Zen 2, where Zen 2 is basically Zen 1 but split into three parts.

But I'm all in for being surprised on this.
 
Hmm... Zen 2 has double the FP units and associated bandwidth compared to Zen; it has improved branch predictors, µOP cache, TLBs, load/store bandwidth, L3 cache size, L2 cache latency, and reorder buffer; no need for NUMA; and a lot of other improvements.

Yes, Zen 2 is an improvement. My point was that AMD first created the modular architecture with the CCX and then moved to the next step of dividing them; they didn't start with the Zen 2 approach. I'm surprised they didn't include a simpler version of this magical cache in the first RDNA and use it to boost performance.
 
Yes, Zen 2 is an improvement. My point was that AMD first created the modular architecture with the CCX and then moved to the next step of dividing them; they didn't start with the Zen 2 approach. I'm surprised they didn't include a simpler version of this magical cache in the first RDNA and use it to boost performance.
They didn't move to chiplets because it would be magically better; they moved to chiplets because of cost and the bad scaling of analog parts like memory PHYs at smaller processes.
Moving to chiplets allowed them to easily fill part of their obligations to GloFo by producing the IO die at 12nm and 14nm (for the X570 chipset), as well as keep the expensive 7nm chips as small as possible. Being able to scale server parts without putting in a ton of CPU chiplets when the customer only wants huge bandwidth didn't hurt either.
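A toy yield model shows why keeping the expensive 7nm chiplet small matters so much. The defect density below is an invented placeholder, not foundry data:

```python
import math

# Toy Poisson yield model: cost per *good* die grows super-linearly with area,
# which is the economic case for small 7nm chiplets.
def die_yield(area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    return math.exp(-area_mm2 * defects_per_mm2)   # fraction of good dies

def relative_cost(area_mm2: float) -> float:
    return area_mm2 / die_yield(area_mm2)          # silicon spent per good die

print(relative_cost(74.0))   # ~80  (one Zen 2 CCD-sized die)
print(relative_cost(500.0))  # ~824 (monolithic die: >10x the cost, not ~6.8x)
```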
 
Moving to chiplets allowed AMD to position themselves well vs Intel's 10nm architectures. AMD did moar-corez (16c desktop and 64c server) to compete with Intel. It's not realistic to do that utilizing NUMA.
 
I'm still skeptical about this magical cache. If it were so easy to just add more cache, why hasn't it been done before, even on a smaller scale? But at the same time, what I find more intriguing is why Sony is being so extremely secretive about its GPU. We have no pics of the chip, or even the PCB, or anything. Maybe AMD is prohibiting its reveal until they officially launch the architecture? Or maybe these specs are AMD doing the same as Nvidia, "leaking" false specs to hide the real ones.

For me, I just want to be able to play at 1440p at 120 FPS for less than 400 bucks.

Intel Iris Pro had an eDRAM cache of 128 MB specifically for the purpose of saving memory bandwidth. I could imagine AMD doing something similar.
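The bandwidth-saving effect is easy to sketch: every cache hit is a DRAM access avoided, so the GPU behaves as if its memory bandwidth were amplified. The hit rates and bandwidth figures below are made up for illustration:

```python
# Sketch: every cache hit is a DRAM access avoided, so a large on-die cache
# makes limited DRAM bandwidth look amplified. Numbers are illustrative only.
def effective_bw_gbs(dram_bw_gbs: float, cache_hit_rate: float) -> float:
    # assumes the cache itself is never the bottleneck
    return dram_bw_gbs / (1.0 - cache_hit_rate)

print(effective_bw_gbs(448.0, 0.0))  # 448 GB/s with no cache
print(effective_bw_gbs(448.0, 0.5))  # 896 GB/s equivalent at a 50% hit rate
```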
 
Intel Iris Pro had an eDRAM cache of 128 MB specifically for the purpose of saving memory bandwidth. I could imagine AMD doing something similar.
That is true.

Also, AMD has talked about bringing over CPU designers; perhaps they are just better at optimizing large amounts of cache?
 
Sure, AMD can innovate, as we saw with Zen, but just creating a big cache to "forget" about the RAM interface? I find it hard to believe that Nvidia, with its huge R&D budget, hasn't tried it

Nvidia aren't really in the business of making the fastest GPUs; they're in the business of making the most profitable ones. And they excel at it.
If Nvidia really wanted to bring the fastest gaming GPU they could possibly make to the PC market, they wouldn't use Samsung's 8nm node. They'd go with TSMC's N7+.
They also wouldn't use GDDR6X; they'd use HBM2.
They can't sell GA100-like gaming GPUs for $10,000 in the gaming market in enough quantities to justify the investment, so there's no such product.


Putting a lot of cache into a GPU or SoC makes it considerably larger. If your economics dictate a maximum die area, using lots of cache will eat away at the available area for execution resources and significantly lower the perf-per-mm² compared to a competitor that just uses faster external memory (e.g. Xbox One with eSRAM vs. PS4 with a wider GPU).

My point is that lots of cache isn't a magic bullet. If the rumour is true, it's just a means to compensate for not using a wider VRAM bus, at the cost of having a larger chip to fabricate.
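As a toy illustration of that trade-off (every number below is invented), each MB of cache spent within a fixed die budget is execution area given up:

```python
# Toy model of the cache-vs-execution-area trade-off; all numbers invented.
DIE_BUDGET = 350.0   # mm^2 the economics allow
FIXED      = 90.0    # mm^2 of uncore (PHYs, display, media, ...)
CU_AREA    = 2.0     # mm^2 per compute unit (assumed)
CACHE_AREA = 1.0     # mm^2 per MB of on-die cache (assumed)

def cus_possible(cache_mb: float) -> int:
    return int((DIE_BUDGET - FIXED - cache_mb * CACHE_AREA) / CU_AREA)

print(cus_possible(0))    # 130 CUs, but starved on a narrow external bus
print(cus_possible(128))  # 66 CUs, fed largely from cache instead
```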
 