AMD: Navi Speculation, Rumours and Discussion [2017-2018]

Do you think a chip with 2080 Ti performance in "classic" rasterisation 3D, without "RT cores", but at a 1080 type of price, could be the right move? Like "OK, right now hardware RT is not ready for prime time, so let's give rasterisation one last big push"?

But typing that, I realise Navi won't be here before H2 2019... so maybe they'll need even more performance by then (if they chase high-end gaming...)
 
I don’t think Navi needs to break the 4096 shader barrier.

I expect Navi to be around the same size as Polaris, but at 7nm, thus allowing for a doubling of cores. I don't think they need more than that; a better idea, I think, would be to increase clock speeds.

A small chip with 4k shaders clocked at 1.5+ GHz with a new microarchitecture could potentially match 1080 Ti performance at a decent price. I think that would be a very big hit.
 
I don’t think Navi needs to break the 4096 shader barrier.
....
A small chip with 4k shaders clocked at 1.5+ GHz with a new microarchitecture could potentially match 1080 Ti performance at a decent price. I think that would be a very big hit.
Vega 10 already has 4096 SPs and is clocked at 1.5-1.6+ GHz. Still, the 1080 Ti is ~30% faster. So that 'new microarchitecture' of yours would have to deliver exactly that 30% improvement just to match the Ti...
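
Quick back-of-the-envelope numbers for the shader-count argument (the boost clocks below are approximate, and FMA is counted as 2 FLOPs per clock):

```python
# Back-of-the-envelope FP32 throughput; boost clocks are approximate.
def tflops(shaders, clock_ghz, flops_per_clock=2):  # FMA = 2 FLOPs/clock
    return shaders * flops_per_clock * clock_ghz / 1000

vega_64    = tflops(4096, 1.55)   # ~12.7 TFLOPS
gtx_1080ti = tflops(3584, 1.58)   # ~11.3 TFLOPS

print(f"Vega 64: {vega_64:.1f} TFLOPS, 1080 Ti: {gtx_1080ti:.1f} TFLOPS")
# Despite the paper-FLOPS deficit, the 1080 Ti is ~30% faster in games,
# so the gap is per-FLOP gaming efficiency rather than ALU count.
```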
 
Do you think a chip with 2080 Ti performance in "classic" rasterisation 3D, without "RT cores", but at a 1080 type of price, could be the right move? Like "OK, right now hardware RT is not ready for prime time, so let's give rasterisation one last big push"?

Definitely yes.

I haven't seen a single review of the RTX cards saying "go buy GeForce RTX because raytracing looks great in all these games". There aren't any games with it so far, and the predicted performance hit from turning RT on is so big that it's questionable whether you'd even want to enable it.
OTOH, I've seen some reviews even telling people to purchase a 1080 Ti instead of a 2080 because it performs similarly but comes with more memory.


That said, as mentioned above, Navi doesn't really need to break the 64 CU / 4 Compute Engine barrier. If they're able to clock 64 Vega-ish CUs at ~2.1 GHz with 8 GB of 256-bit GDDR6, they're already hitting 2080/1080 Ti performance in rasterisation.
To be honest, at TSMC's 7nm there's a really good chance they can clock it even higher. There was a ~20% difference between TSMC's 16FF+ and GF's 14LPP, and now there's a ~30% difference between TSMC's 16FF+ and TSMC's 7FF.
It's not that far-fetched to think Vega 20 may reach well over 2 GHz at a 250-300 W TDP, and Navi could be very close to a Vega 20 with GDDR6.
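
Rough numbers behind that scenario, just as a sanity check (the 14 Gbps GDDR6 data rate is an assumption on my part; 12 and 16 Gbps bins also exist):

```python
# Numbers behind "64 Vega-ish CUs at ~2.1 GHz with 256-bit GDDR6".
CUS, SP_PER_CU = 64, 64
CLOCK_GHZ      = 2.1
BUS_BITS       = 256
GDDR6_GBPS     = 14   # assumed per-pin data rate

tflops    = CUS * SP_PER_CU * 2 * CLOCK_GHZ / 1000   # FMA = 2 FLOPs/clock
bandwidth = BUS_BITS / 8 * GDDR6_GBPS                # GB/s

print(f"FP32: {tflops:.1f} TFLOPS")        # ~17.2 TFLOPS
print(f"Bandwidth: {bandwidth:.0f} GB/s")  # 448 GB/s, same as the RTX 2080
```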
 
Vega 10 already has 4096 SPs and is clocked at 1.5-1.6+ GHz. Still, the 1080 Ti is ~30% faster. So that 'new microarchitecture' of yours would have to deliver exactly that 30% improvement just to match the Ti...

One would expect Navi on 7nm to clock at least as high as Turing on 12nm, so 1.8 GHz or more. That's 20% right there. The remaining ~8-9% shouldn't be too hard to get from micro-architectural improvements (or fixes).
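
Sketching that split out, using the 1.5 GHz baseline and ~30% deficit quoted above:

```python
# Splitting the ~30% deficit into a clock part and an IPC part.
base_clock, target_clock = 1.5, 1.8   # GHz, per the posts above
required_speedup = 1.30               # 1080 Ti is ~30% faster

clock_gain = target_clock / base_clock        # 1.20 -> +20% from frequency
ipc_gain   = required_speedup / clock_gain    # ~1.083 -> ~+8% from architecture

print(f"From clocks: +{(clock_gain - 1) * 100:.0f}%")
print(f"Still needed from IPC/fixes: +{(ipc_gain - 1) * 100:.1f}%")
```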
 
The thing is, by the time Navi arrives on the market (Q3 2019?), Nvidia will also have switched to 7nm. So it'll be the same story again.

Honestly, I don't think that will happen at all. I think Nvidia will give Turing at least 18 months on the market.

If Navi can get the performance, I think they could provide a very competitive solution in the 2060/2070 tier at a lower price. I doubt it will be competitive with the 2080/2080 Ti, so Nvidia can wait and release a big chip on 7nm in 2020.
 
Honestly, I don't think that will happen at all. I think Nvidia will give Turing at least 18 months on the market.

Nvidia will have 7nm parts next year, probably RTX 2050/2050 Ti parts. Also, a die shrink of the larger Turing dies would help make them more profitable.

I don't see Nvidia sitting out for a year+ when 7nm is available in mass quantities.
 
Do you think a chip with 2080 Ti performance in "classic" rasterisation 3D, without "RT cores", but at a 1080 type of price, could be the right move?
Wolfenstein 2 is generally regarded as the current best-case scenario for Vega 64. Recent reviews put the 2080 Ti anywhere between 80-100% faster than Vega 64 in that game at 4K. That's a lot of ground for Navi to cover.

Granted, in other games the difference might be closer to ~70-80% or more at 4K (85% at TPU, 74% at CB). Still, a very large gap to cover.
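
To put that in multiplier terms (the percentages are just the review spreads quoted above, and the 2.1 GHz clock figure is an optimistic assumption):

```python
# If the 2080 Ti is X% faster than Vega 64 at 4K, Navi needs a (1 + X/100)x
# overall speedup; clocks alone can only cover part of that.
gaps = {"Wolfenstein 2 (best case)": (0.80, 1.00),
        "Typical 4K spread":         (0.74, 0.85)}

clock_gain = 2.1 / 1.5   # optimistic 7nm clock bump over Vega 64's ~1.5 GHz

for name, (lo, hi) in gaps.items():
    for gap in (lo, hi):
        leftover = (1 + gap) / clock_gain   # what clocks alone can't cover
        print(f"{name}: x{1 + gap:.2f} total, x{leftover:.2f} from IPC/CUs")
```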
 
Their first 7nm chips (Vega 20) will be shipped to customers when? In H1? So there's no way they'll have gaming chips before Q3. Though I wish I were wrong.

Supposedly, it's an H2'18 product. So H1'19 seems plausible for Navi. Plus, it's really hard to say because we know little about Vega 20 and Navi 10, but the latter might actually be smaller (hence potentially better yields).
 
Supposedly, it's an H2'18 product. So H1'19 seems plausible for Navi. Plus, it's really hard to say because we know little about Vega 20 and Navi 10, but the latter might actually be smaller (hence potentially better yields).

I think it is very likely, as I think it will have low FP64.
I also don't expect it to have dedicated AI cores or RT cores.
I expect (just a guess) its CUs to have additional logic/structures for AI, and hopefully for RT (assuming Sony/MS wanted some kind of tracing acceleration); how that stacks up to NV I have no idea.
I expect Navi to be GCN-based but a solid incremental step, more than the changes from GCN 1 to 4.
I also expect Navi to clock way higher than most people will be game to say, as I think GF 14nm lost 10-15% to TSMC 16nm in the clock race, so I think 7nm HPC could see clocks in the 2.0 to 2.2 GHz range (yes, a 50% clock increase).

Let's see in ~6 months how wrong I am... rofl
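
For what it's worth, compounding the claimed node percentages from this thread (the 10-15% and ~30% figures are posters' guesses, not vendor specs, and the ~1.4 GHz sustained Vega 64 baseline is my assumption) lands in roughly that range:

```python
# Compounding the claimed node-to-node clock gains from this thread.
vega_sustained_ghz = 1.4            # assumed typical Vega 64 sustained clock
gf14_vs_tsmc16     = (1.10, 1.15)   # claimed GF 14LPP deficit vs TSMC 16FF+
tsmc16_to_7ff      = 1.30           # claimed TSMC 7FF gain over 16FF+

for step in gf14_vs_tsmc16:
    total = step * tsmc16_to_7ff
    print(f"x{total:.2f} overall -> ~{vega_sustained_ghz * total:.1f} GHz")
# Prints ~2.0 and ~2.1 GHz, roughly the 2.0-2.2 GHz range guessed above.
```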
 
Has there ever been a PC game that actually used all 4096 SPs efficiently, through async compute via parallelism?

Games like Forza 7 and Forza Horizon 4 play very well on GCN, as do other console ports (UWP games in particular, it seems). Wolfenstein 2 isn't the only PC game using async compute.

IMO Navi's HW scheduler might be a bit more robust, and its ACEs could be reorganised around a lower SP count, something that would take them beyond async-shader quick-response queuing.
 
Here is a review of Horizon 4 (UWP title).
https://wccftech.com/forza-horizon-4-pc-graphics-card-performance-comparison-and-core-scaling/

I have to wonder how many SPs are actually in use here. Granted, the 1% lows could be better, and I can only assume that some fine-tuning in the drivers will improve on this.
I posted this because IMO this looks to be the best-case scenario for GCN in Vega. However, I'm not convinced it's fully utilising the ACEs.

If this is what we can see from Vega, then how much more could we see from Navi? Again, best-case scenario.
As it stands right now, those results could be at 2070 levels.
 
Hmm, I didn't know there was an "issue" with WCC. However, I used that site because they specifically used the Adrenalin 18.9.3 drivers.

Be that as it may, having seen results from another benchmark and from pcgameshardware, WCC's numbers show similar results, which only reinforces the same outcome. So no harm done. AMD's GCN performance really shines when given the opportunity. I recall another UWP title, Forza 7, which showed similar results.

The point is that if Navi continues with GCN and is able to secure more titles like this, it's possible it could be a very competitive ultra-high-end product. As shown in this title, Vega is capable of reaching Turing levels of performance (assuming that's where the 2070 will land). GCN seems to be very good in games that use async compute through parallel execution, be it in DX12 or Vulkan.
 
Hmm, I didn't know there was an "issue" with WCC.
They're generally a click-bait site that takes content from other sites and posts it as "news". They sometimes do their own stuff but more often it's simply re-posting from other journalists for page views.
 