AMD: Navi Speculation, Rumours and Discussion [2017-2018]

Status
Not open for further replies.
AMD's David Wang shuts the door on the MCM approach for Navi: it's not happening. Not now, and not for a long time.

“We are looking at the MCM type of approach, but we’ve yet to conclude that this is something that can be used for traditional gaming graphics type of application.”

“To some extent you’re talking about doing CrossFire on a single package. The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance.”

“We’re going down that path on the CPU side, and I think on the GPU we’re always looking at new ideas. But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features... The multithreaded CPU is a bit easier to scale the workload. The NUMA is part of the OS support so it’s much easier to handle this multi-die thing relative to the graphics type of workload.”

“That’s gaming. In professional and Instinct workloads multi-GPU is considerably different; we are all in on that side. Even in blockchain applications we are all in on multi-GPU. Gaming, on the other hand, has to be enabled by the ISVs. And ISVs see it as a tremendous burden.”

https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1
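Wang's contrast (a multithreaded CPU workload scales transparently because the NUMA-aware OS scheduler places the work, while multi-GPU gaming forces each application to partition work explicitly, e.g. alternate-frame rendering) can be boiled down to a toy sketch. This is purely illustrative Python of my own, nothing to do with actual AMD code:

```python
from concurrent.futures import ThreadPoolExecutor

def shade(p):
    # Stand-in for a unit of work; the application never says which
    # core or die runs it -- thread placement is the OS's job.
    return p * p

def cpu_style(workload):
    # CPU side: just submit work; the (NUMA-aware) scheduler spreads
    # threads across cores/dies transparently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(shade, workload))

def gpu_style(frames, num_gpus=2):
    # Multi-GPU gaming (alternate-frame rendering here): the
    # *application* must decide which GPU renders which frame --
    # the explicit per-title partitioning ISVs resist maintaining.
    assignment = {g: [] for g in range(num_gpus)}
    for i, frame in enumerate(frames):
        assignment[i % num_gpus].append(frame)  # explicit split
    return assignment

print(cpu_style([1, 2, 3, 4]))              # [1, 4, 9, 16]
print(gpu_style(["f0", "f1", "f2", "f3"]))  # {0: ['f0', 'f2'], 1: ['f1', 'f3']}
```

The first function never mentions cores or dies; that is what "NUMA is part of the OS support" buys the CPU side. The second puts the split in application code, which is the burden that would have to become "invisible to the ISVs" before multi-die gaming GPUs make sense.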
 
What a total shocker!

But let’s give AMD credit for providing fuel for hundreds of thousands of forum posts around the world to discuss this idea.
How is an article providing evidence to the contrary refuting the design, though? AMD states "we are all in on multi-GPU", and in turn MCM, for professional and workstation workloads. The only catch is gaming graphics, where it is possible but the pipeline needs to adapt, and that is already occurring. This article would seemingly confirm the MCM approach.
 
AMD's David Wang shuts the door on the MCM approach for Navi: it's not happening. Not now, and not for a long time.
https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1

So now that Raja is at Intel, his MCM GPU ideas may come to fruition with EMIB instead of Infinity Fabric.
But as AMD has already concluded this would be a failure for graphics, Intel may surprise us again.

When the previous RTG lead, Raja Koduri, had been waxing lyrical about his Vega baby he had introduced the notion that the Infinity Fabric interconnect would be the perfect system to splice a bunch of discrete GPUs together on a single ASIC design.

"Infinity Fabric allows us to join different engines together on a die much easier than before," Koduri explained. "As well it enables some really low latency and high-bandwidth interconnects. This is important to tie together our different IPs (and partner IPs) together efficiently and quickly. It forms the basis of all of our future ASIC designs."
 
The AMD Vega chips might be the last big Radeon GPUs they ever make.
“We haven't mentioned any multi GPU designs on a single ASIC, like Epyc, but the capability is possible with Infinity Fabric."
Worth noting, though, that that quote goes back to the end of last year.
The same author yesterday wrote the article posted earlier:
Contrary to what most of us in the tech press had hoped, the next-gen AMD Navi graphics cards will use a familiar monolithic GPU design as opposed to the multi-chip layout we'd hoped might deliver an efficient high-end gaming card.


The earliest it will come to market would be as a dedicated HPC/AI/cloud-compute solution, and even then I would say Navi is too soon to get it working as a true unified/coherent memory-cache/compute solution, so its usability is limited beyond possibly one specific product (sort of like how Nvidia has, for now, created a single product solution with NVSwitch/NVLink, applying the tech to one expensive large-node solution for the near future, rather than AMD's IF being fully comparable to NVSwitch/NVLink). Trying to do this for gaming/consumer would be more complex and, IMO, is to be expected afterwards.

Edit:
Added cache to unified-coherent aspect.
 
So now that Raja is at Intel, his MCM GPU ideas may come to fruition with EMIB instead of Infinity Fabric.
But as AMD has already concluded this would be a failure for graphics, Intel may surprise us again.

Except they didn't say that. What they said is that it currently doesn't provide any benefit to the gaming market, as it relies too much on software developers to make it worthwhile there. It is, however, usable for the non-gaming market.

They also mentioned they are still looking at it for gaming, but until it can be made invisible to ISVs it's not suitable for that market.

While it isn't suitable now for the gaming market, they haven't closed the door on it for that market in the future.

Regards,
SB
 
Except they didn't say that. What they said is that it currently doesn't provide any benefit to the gaming market, as it relies too much on software developers to make it worthwhile there. It is, however, usable for the non-gaming market.

They also mentioned they are still looking at it for gaming, but until it can be made invisible to ISVs it's not suitable for that market.

It could also dovetail with the rumour of Navi being designed primarily for Sony. Consoles don't have to wait for software to catch up.
 
If Navi is being "designed primarily for Sony", what does AMD have in store for MS? Surely MS won't take something "primarily designed for someone else", considering they've both been influencing the architecture's development so far.
 
If Navi is being "designed primarily for Sony", what does AMD have in store for MS? Surely MS won't take something "primarily designed for someone else", considering they've both been influencing the architecture's development so far.
Stock Navi without Sony's unique patented features, maybe?
And after the Nvidia-Nintendo deal, I think Nvidia could provide viable graphics plus an AMD Ryzen CPU via MCM.
Sony could go the MCM route too, to help yields initially.
 
And after the Nvidia-Nintendo deal, I think Nvidia could provide viable graphics plus an AMD Ryzen CPU via MCM.
Microsoft's quest for console superiority could sway them to NVIDIA's side to outmaneuver Sony. Backward compatibility would suffer, but that's something they would be willing to sacrifice to get a leg up. And their successful effort in maintaining X360 compatibility on the XOne ensures they can effectively mitigate that sacrifice.
 
I'm pretty sure MS isn't interested in a new console deal with NVIDIA after the original Xbox shenanigans. Also, I'm pretty sure AMD wouldn't allow NVIDIA to do MCM with their CPUs.
And lastly, how does NVIDIA's deal with Nintendo change anything about their possible future console business? They did nothing but dump their inventory of old SoCs, with no customisations, on them.
 
Aiming for a $400 or $300 sweet spot means Microsoft and Sony don't have many options regarding GPU specs, die size, frequency, etc., meaning they would both end up with very similar GPUs. Microsoft knows this very well; one of their viable options here is to aim for the more power-efficient architectures on NVIDIA's side, so they can one-up Sony with a more powerful GPU in the same power budget.

And businesses hold no grudges. They go where their interests lie, regardless of anything else. Intel's Vega M is a prime example of that.
 
Aiming for a $400 or $300 sweet spot means Microsoft and Sony don't have many options regarding GPU specs, die size, frequency, etc., meaning they would both end up with very similar GPUs. Microsoft knows this very well; one of their viable options here is to aim for the more power-efficient architectures on NVIDIA's side, so they can one-up Sony with a more powerful GPU in the same power budget.

And businesses hold no grudges. They go where their interests lie, regardless of anything else. Intel's Vega M is a prime example of that.
Oh, businesses can hold grudges alright. They are run by human beings, after all. And we learn from bad experiences.

Be that as it may, though, if Microsoft turns to nVidia for their GPU, how do you propose they solve the CPU/memory part of the package? Going with ARM would seem the only viable path from an economic and system-architecture point of view, but would the lack of binary backward compatibility be worth it?
 
Not the thread for discussing non-Navi items.
 
Be that as it may though, if Microsoft turns to nVidia for their GPU, how do you propose they solve the CPU/memory part of the package?

All I know is that after the fierce, almost one-sided competition this generation, expecting both vendors to go the same route (and thus deliver the same experience) is extremely naive: either Sony will lock Microsoft out (through an exclusive deal with AMD), or Microsoft will diversify, CPU- or GPU-wise, to get an advantage.
 
All I know is that after the fierce, almost one-sided competition this generation, expecting both vendors to go the same route (and thus deliver the same experience) is extremely naive: either Sony will lock Microsoft out (through an exclusive deal with AMD), or Microsoft will diversify, CPU- or GPU-wise, to get an advantage.

What dictates the experience is the games first and foremost, and then the console's features. Whether shaders are run on Compute Units or Streaming Multiprocessors, however, changes little. I doubt most console gamers even know what's in their machine.

There are plenty of ways to differentiate with the hardware: different CPU-to-GPU ratios, different absolute performance levels, different amounts of memory, different accessories (VR, controllers, etc.), and different kinds of video support (4K, 8K, HDR, adaptive sync, stereoscopy, etc.).
 