The narrative that AMD was caught by surprise is ridiculous. Will they implement something similar to Nvidia's RT cores (which technically aren't even "cores")?
The described functionality of the RT cores is along the lines of an offloading co-processor or semi-independent sequencer. It's not entirely clear how they are physically integrated relative to the SIMD units or other blocks, but their behavior sounds closer to that of a core than of the SIMD lanes, and they might sit nearer to the polymorph or texturing units.
In that regard, perhaps AMD could benefit from exposing to the outside world a new sequencer domain in their SIMDs, like the texturing block, which does a fair amount of computation on its own and runs on the far side of a set of data and command buses.
The scalar unit as we know it is linked more tightly to the scheduling and flow-control elements, which likely existed in some form in pre-GCN hardware, just not exposed to the programmer.
Adding more such resources could allow more concurrency in non-vector work and take advantage of the pipeline elements that already coalesce accesses and handle long-latency fetch loops, while adding a level of programmability the RT cores lack. That said, this doesn't guarantee the same compactness in hardware, and it may not play well with the much more constrained caches in current GCN.
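To make the "long-latency fetch loop" point concrete, here's a minimal C++ sketch of the kind of BVH traversal loop an RT core is understood to offload. Everything here (BvhNode, intersectAABB, the stack layout) is a made-up simplification, not Nvidia's actual hardware scheme; the point is just the shape of the work: dependent, cache-unfriendly fetches and data-dependent branches that map poorly onto SIMD lanes.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical, grossly simplified node layout; real hardware formats
// are compressed and undocumented.
struct BvhNode {
    float   bmin[3], bmax[3];  // this node's bounding box
    int32_t left, right;       // child indices; left < 0 marks a leaf (~left = triangle id)
};

struct Ray { float orig[3], dir[3], tmax; };

// Slab test against an axis-aligned box: the kind of fixed math an RT core
// bakes into silicon.
static bool intersectAABB(const Ray& r, const BvhNode& n) {
    float t0 = 0.0f, t1 = r.tmax;
    for (int a = 0; a < 3; ++a) {
        float inv   = 1.0f / r.dir[a];  // IEEE inf handles axis-parallel rays
        float tnear = (n.bmin[a] - r.orig[a]) * inv;
        float tfar  = (n.bmax[a] - r.orig[a]) * inv;
        if (tnear > tfar) std::swap(tnear, tfar);
        t0 = std::max(t0, tnear);
        t1 = std::min(t1, tfar);
        if (t0 > t1) return false;
    }
    return true;
}

// The traversal loop itself: a chain of dependent fetches with data-dependent
// branching, which suits a semi-independent sequencer. A real traversal would
// intersect triangles and track the nearest hit; this just returns the last
// leaf whose box the ray entered.
int traverse(const std::vector<BvhNode>& nodes, const Ray& ray) {
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;  // start at the root
    int hitLeaf = -1;
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];  // long-latency fetch
        if (!intersectAABB(ray, n)) continue;
        if (n.left < 0) { hitLeaf = ~n.left; continue; }  // leaf
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return hitLeaf;
}
```

A fixed-function unit hides this loop entirely; a programmable sequencer domain could run something like it while still letting software change the node format or traversal order.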
Saying they have a vision of cloud computing and are working with Microsoft is corporate-speak. It doesn't indicate the scope of the work, what part of the console space it encompasses, or how much has actually been committed to.
From https://seekingalpha.com/article/42...18-deutsche-bank-technology-conference?page=5 (some transcription irregularities, but the relevant sentences seem clear):
Devinder Kumar
We like the semicustom model a lot. Semicustom model is one of those; as you observe the game consoles, you win the designs; some of the engineering expenses get defrayed by the input from the customers; we go ahead and get the chip out; and after that, it's a mutually exclusive deal where you can predict revenue. Going back to 2012, 2013 timeframes, we've had predictably somewhere between $1.5 billion to $2 billion of revenues coming from the game console business, both Sony and Microsoft, and that has allowed us to invest in exactly the roadmap that is delivering right now. We like that business a lot. We are competing for the next generation product. But Sony and Microsoft have to make their decisions and then we'll take it from there. But we like it a lot from an overall standpoint.
In both cases, it's an executive who most likely does not want to speak for a partner, but it also means they are not claiming that the consoles have been committed to AMD hardware.
Competing for a contract and working together with the console makers are also not mutually exclusive. Projects of this scope can involve a lot of cooperative work between a hardware vendor and the platform holders, and that cooperation in setting down a candidate design could broadly count as "working with" Microsoft or Sony even if the design is ultimately rejected. AMD's EPYC processors might be part of the cloud infrastructure for a console that is wholly or partly non-AMD, and a worse scenario for AMD would be working with Sony or Microsoft on a framework for backward/forward compatibility with a different vendor's CPU and/or GPU.
Hasn't AMD mentioned that Navi was going to be their first major GFX revision since GCN was introduced? I.e., marking a significant move away from the derivative GCN architectures we've seen since the 7970?
Possibly yes and no? Perhaps the most significant ISA change came with GCN3, where Volcanic Islands significantly changed the instruction encoding and added scalar memory writes, data-parallel operations (DPP), and sub-dword addressing (SDWA). Also, AMD touted Vega as the biggest jump ever, and probably used similar marketing for other transitions.
And the most anticipated features ended up not working.
One idea that came to mind is that with Nvidia's task and mesh shaders, both vendors have now offered a re-tooling of the geometry front end. There is some overlap of scope, and some of their decisions probably align because they are facing similar challenges. However, I think they also diverge in various parts in ways that may conflict. One of the vendors has gone ahead and committed to offering its new shaders with a clearer API extension for its methods, and the other vendor has for some reason let things drop. The possibility exists that even a functional NGG might be threatened if the design target it was built for was replaced due to outside factors.
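For reference, the clearer API extension on Nvidia's side is VK_NV_mesh_shader in Vulkan (with a GL analogue). A minimal sketch of how an application would probe for it, assuming a Vulkan 1.1 physical device; the helper name supportsMeshShading is mine:

```cpp
#include <vulkan/vulkan.h>

// Probe for the task/mesh stages via the VK_NV_mesh_shader feature struct.
bool supportsMeshShading(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceMeshShaderFeaturesNV meshFeatures{};
    meshFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MESH_SHADER_FEATURES_NV;

    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &meshFeatures;

    vkGetPhysicalDeviceFeatures2(gpu, &features2);
    return meshFeatures.taskShader && meshFeatures.meshShader;
}
```

AMD's NGG/primitive shaders, by contrast, have no equivalent public extension to query, which is the "let things drop" part.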
Compared to the 2080 Ti, the front end of Vega is not bad. I get the same draw-call limits as the 2080 Ti in the API Overhead test.
If Nvidia gets its way, however, there's going to be a path available for a significant fraction of those calls that would bypass that bottleneck.
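To illustrate what that bypass looks like at the API level, a hedged sketch using the VK_NV_mesh_shader extension: many per-object draws collapse into one task dispatch. The function names, parameters, and the assumption that per-object data has been moved into a shader-visible buffer are mine, not anything the API Overhead test actually does:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Today's shape of a draw-call-bound workload: one draw per object, so CPU
// submission cost and the fixed per-draw front-end work scale with object
// count. Everything except the Vulkan calls is a made-up name.
void drawClassic(VkCommandBuffer cmd, uint32_t objectCount, uint32_t indicesPerObject)
{
    for (uint32_t i = 0; i < objectCount; ++i)
        vkCmdDrawIndexed(cmd, indicesPerObject, 1, i * indicesPerObject, 0, 0);
}

// The bypass: fold all of those calls into a single task dispatch. Per-object
// data lives in a buffer that the task/mesh shaders read, so the per-call
// cost largely disappears. The extension entry point is fetched at runtime.
void drawViaMeshTasks(VkDevice device, VkCommandBuffer cmd, uint32_t objectCount)
{
    auto drawMeshTasks = reinterpret_cast<PFN_vkCmdDrawMeshTasksNV>(
        vkGetDeviceProcAddr(device, "vkCmdDrawMeshTasksNV"));
    drawMeshTasks(cmd, /*taskCount=*/objectCount, /*firstTask=*/0);
}
```

Each task workgroup can then cull or expand its own slice of the scene on the GPU, which is why a large fraction of small draws never has to hit the per-call bottleneck at all.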