AMD: Navi Speculation, Rumours and Discussion [2017-2018]

I wonder if there is potential for AMD to add dual rasterizers to each shader engine, similar to what they did for the HD 5870, and whether this would be of any benefit.

However, I suspect the HD 5870 was essentially already a 2-SE design, so it is probably not possible to add two rasterizers to one shader engine.
 
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2

These are AMD's statements. You may believe them or you may not.

The text leading up to that sentence and the referenced 2013 Fiji article indicate there is a set of architectural limits for GCN. There isn't a physical reason why GCN could not be extended, modified, or replaced to expand the limits, but they haven't been.

AMD decided there are limits to how much GCN implementations can vary in unit counts without revisiting the foundations of the architecture, and chose not to revisit it.
 
The text leading up to that sentence and the referenced 2013 Fiji article indicate there is a set of architectural limits for GCN.

I don't think it does. Here is the text you mention:

Back in 2013 we learned that the then-current iteration of GCN had a maximum compute engine count of 4, which AMD has stuck to ever since, including the new Vega 10. Which in turn has fostered discussions about scalability in AMD’s designs, and compute/texture-to-ROP ratios. Talking to AMD’s engineers about the matter, they haven’t taken any steps with Vega to change this. They have made it clear that 4 compute engines is not a fundamental limitation – they know how to build a design with more engines – however to do so would require additional work.

The text claims there was a set of architectural limits to 2013's GCN2. That barrier has since been crossed, but RTG chose not to pursue that path.
Anyone is free to question AMD's honesty regarding the answer they gave to @Ryan Smith, but according to the article they researched how to go beyond 4 SEs. Otherwise they wouldn't claim they know (present tense, not "will know") how to go past that threshold.
 
I don't think it does. Here is the text you mention:

The text claims there was a set of architectural limits to 2013's GCN2. That barrier has since been crossed, but RTG chose not to pursue that path.
Anyone is free to question AMD's honesty regarding the answer they gave to @Ryan Smith, but according to the article they researched how to go beyond 4 SEs. Otherwise they wouldn't claim they know (present tense, not "will know") how to go past that threshold.

Knowing how something ought to be modified to allow for some specific feature is very different from having actually modified it.
 
AMD continues to be evasive on the subject of Ray Tracing in Navi:

John Pitzer

One question I get asked: it's our opinion that we're probably too early on the content side for ray tracing to be really important in the near term, but with NVIDIA’s new architecture around Turing, how do you think about your ability to compete with ray tracing? I think you guys do it vis-à-vis software emulation. Is that an effective solution? Are there plans to try to bring out a more silicon-based solution, or how do you think about the share dynamics as ray tracing becomes a bigger part of the content side of gaming?


Lisa Su

Yes. And if I take a step back and just talk about the overall GPU market I think we believe we will be very competitive overall and that includes the high-end of the GPU market. Obviously, there are some new products out there from our competition. We will have our set of new products as well and we'll be right there in the mix. As it relates to ray tracing in particular, I think it's an important technology. But as with all important technologies it takes time to really have the ecosystem adopt. And we are working very closely with the ecosystem on both hardware and software solutions and expect that ray tracing will be an important element especially as it gets more into the mainstream frankly of the market.

https://seekingalpha.com/article/42...d-annual-credit-suisse-technology?part=single
 
Well, I'm not sure what answer they're after; a timeframe for when they'd have a gaming GPU out with some kind of dedicated silicon? They're not going to give any indication of that, nor of how they plan to support DXR.

It seems to me they prefer to wait for more widespread adoption and see how the market responds over the next couple of years. It's not as if DXR is going to become mainstream in games any time soon. New tech like this, especially with such a high performance hit, isn't going to be widespread quickly. It's not as if they're heavily competing in gaming GPUs and are going to lose market share if they don't adopt it in earnest fast.

edit: fixed errors from mobile
 
Even though there will be a sprinkling of games supporting DXR coming out over the next couple months, I don't think game producers will be sitting on their "haunches" waiting a few years to experiment with DXR. With the fallback in place for hardware not supporting ray tracing, it's advantageous for game producers to gain experience/exposure by testing different DXR techniques that might best suit their game engine. In the end, the incentive for sidelined producers to build in-house knowledge and remain competitive with other gaming studios embracing the technology at a more rapid pace may be the deciding factor.
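For what it's worth, whether a given card gets the hardware DXR path or the fallback is just a caps query in D3D12. A minimal sketch, assuming an already-created ID3D12Device* and a Windows SDK new enough to expose DXR; the helper name is my own illustration, not anything from the thread or the vendors:

```cpp
#include <d3d12.h>

// Hypothetical helper: true if the device reports hardware DXR support,
// false if the engine should use a fallback (compute emulation or plain raster).
bool SupportsHardwareDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;  // runtime/driver too old to even report the cap
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```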
 
Well, I'm not sure what answer they're after; a timeframe for when they'd have a gaming GPU out with some kind of dedicated silicon? They're not going to give any indication of that, nor of how they plan to support DXR.

It seems to me they prefer to wait for more widespread adoption and see how the market responds over the next couple of years. It's not as if DXR is going to become mainstream in games any time soon. New tech like this, especially with such a high performance hit, isn't going to be widespread quickly. It's not as if they're heavily competing in gaming GPUs and are going to lose market share if they don't adopt it in earnest fast.

I'd wager they are waiting for the console chips. It's the easiest way for AMD to ensure widespread adoption of their standard / hardware implementation. Put it in a PS5 / Xbox 2 and you'll have tens of millions of units out there for devs to target.
 
AMD continues to be evasive on the subject of Ray Tracing in Navi:



https://seekingalpha.com/article/42...d-annual-credit-suisse-technology?part=single
They largely did not manage to keep up with DirectX features either. I wouldn't be surprised if they have no answer yet.

There would be no answer that she can give at this time.

Lisa Su knows what both Sony and MS want in their console RFPs, but if what they desire is different she cannot disclose it. Thus she has to answer from AMD's perspective.
 
They largely did not manage to keep up with DirectX features either. I wouldn't be surprised if they have no answer yet.
Huh? What features hasn't AMD been able to keep up with? D3D12 feature levels? Yes, NVIDIA got to 12_1 first, but that's about it in recent history.
 
The 3DMark ray tracing benchmark will be released on Dec 8; it has been developed in cooperation with AMD and Intel, not just NVIDIA. So I would say AMD has a very good idea of how to accelerate DXR, they're just not ready to implement it yet. As for Intel, they will most likely have DXR support by 2020.
3DMark Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies.

https://videocardz.com/79161/ul-benchmarks-reveals-3dmark-port-royal-for-ray-tracing
 
Nvidia has top to bottom 12_1 products. AMD only Vega.
Vega is pretty much top to bottom though ;) (Vega 8-11 in Raven Ridge, Vega 16/20 in MacBook Pros, Vega 10/20 on desktop/workstation/server)
But what does 12_1 actually mean in the end? AMD has supported stencil ref export from the pixel shader since GCN1, while NVIDIA still doesn't support it on anything, and NVIDIA's resource heap support was only brought to Tier 2 with Turing, which AMD has had since GCN1. Do those mean that NVIDIA couldn't keep up with Direct3D features?
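To make that concrete: feature level 12_1 is just a label over individual caps bits, and the features argued about here each have their own cap in D3D12_FEATURE_DATA_D3D12_OPTIONS. A minimal sketch (my own illustration, assuming an already-created ID3D12Device*):

```cpp
#include <d3d12.h>
#include <cstdio>

// Prints the caps being argued about here. The grouping and comments are mine;
// the fields are standard members of D3D12_FEATURE_DATA_D3D12_OPTIONS.
void PrintFeatureCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return;

    // The two additions feature level 12_1 makes over 12_0:
    printf("Conservative rasterization tier: %d\n", opts.ConservativeRasterizationTier);
    printf("Rasterizer ordered views:        %d\n", opts.ROVsSupported);

    // Caps mentioned above that sit outside the 12_1 definition:
    printf("Stencil ref export from PS:      %d\n", opts.PSSpecifiedStencilRefSupported);
    printf("Resource heap tier:              %d\n", opts.ResourceHeapTier);
}
```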
 
The 3DMark ray tracing benchmark will be released on Dec 8; it has been developed in cooperation with AMD and Intel, not just NVIDIA. So I would say AMD has a very good idea of how to accelerate DXR, they're just not ready to implement it yet. As for Intel, they will most likely have DXR support by 2020.
https://videocardz.com/79161/ul-benchmarks-reveals-3dmark-port-royal-for-ray-tracing
It was developed with Microsoft more than anyone. AMD, Intel, NVIDIA, etc. only gave their input on what they'd like to see in there.
 
AMD has supported stencil ref export from the pixel shader since GCN1, while NVIDIA still doesn't support it on anything, and NVIDIA's resource heap support was only brought to Tier 2 with Turing, which AMD has had since GCN1.
All of these are either secondary or marginal features. Conservative Rasterization and Rasterizer Ordered Views are essential features for image quality improvements in DX12; for example, NVIDIA implemented HFTS in DX12 through Conservative Rasterization. AMD GPUs before Vega can't support these visual enhancements because they lack these features.
 
AMD has supported stencil ref export from the pixel shader since GCN1
That's an understatement; AMD has actually supported that since Evergreen (but of course no DX12 there), so that feature is getting really old (though Intel has only had it since Skylake).
 
Nvidia has top to bottom 12_1 products. AMD only Vega.

Because of an arbitrary decision by Microsoft about what 12_1 should contain. They could just as well have chosen to make it require async compute, and then none of the NVIDIA products up to today would qualify.
 
Because of an arbitrary decision by Microsoft about what 12_1 should contain. They could just as well have chosen to make it require async compute, and then none of the NVIDIA products up to today would qualify.
I don't think there is a reason to get defensive here. We're just noting observations. NVIDIA runs into the same problem when MS picks features that are present on GCN but not yet on NVIDIA hardware.

Intel was the first to be fully compliant.
 
They could just as well have chosen to make it require async compute, and then none of the NVIDIA products up to today would qualify.
Sigh, I thought we were past this already. Most DX12 GPUs are async-capable; whether their architecture stands to benefit from it or not is another matter entirely.
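Worth noting that there is no "async compute" cap bit in D3D12 at all: every device accepts a separate compute queue, and whether work submitted to it actually overlaps with graphics is a driver/hardware scheduling question. A minimal sketch, assuming an already-created ID3D12Device* (the function name is just for illustration):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Creates a dedicated compute queue alongside the usual direct (graphics) queue.
// The API always allows this; concurrent execution with graphics work is up to
// the hardware and driver, which is where architectures differ.
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```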
 
Maybe the point is that things like this are totally pointless and have nothing to do with anyone being slow? Or not having an answer, or whatever?
 