IHV Business strategies and consumer choice

Ah well, they backtracked on it earlier then, lol. It was originally slated to be a High NA node, I think, back in 2021 when it was first announced.

I guess it kind of goes hand in hand with Intel saying they have the first 'prototype' High NA machine, while their original claim was that they'd have the first 'production' High NA machine, so perhaps this new tech isn't coming online as smoothly as expected (hardly a surprise).
I really don't think 18A was ever High NA (or at least not the first incarnation of it - who knows what 18A-P was originally, which might explain why some of their statements were rather vague). The timeframes never made sense given "5 nodes in 4 years". I agree some of Intel's early statements were confusingly vague, but I am pretty sure they never explicitly said 18A was High NA, and any 3rd party who said so probably just read too much into what they said (if there were an official statement from Intel to that effect somewhere, that'd prove me wrong, obviously).
 
and any 3rd party who said so probably just read too much into what they said
That's possible, though Anandtech is usually pretty reliable about these things. But if Intel were being exceptionally cloudy with their wording, I can see how it'd get misinterpreted.
 
Personal attacks are against community rules.
You mean the strategy that put them in the ~10% AIB market share ballpark? Not a single objective person would call it a success...
man you're really not all that smart huh
 
It's probable that AMD doesn't see a future in which hardware ray tracing can succeed, especially as we go deeper into the current generation. Virtualized geometry could potentially catch on in other applications beyond just UE5-based games.
That would be a great oversight on AMD's part. UE5 titles are not exactly known for their performance-to-visuals ratio right now: you need the trifecta of Lumen, Nanite and Virtual Shadow Maps to achieve the required next-gen visual quality, which just costs too much at native resolution, and you end up needing upscaling to run with acceptable performance on high-end hardware. Worse yet, on consoles you need much heavier upscaling (from 720p/1080p) to maintain acceptable performance, all while still not using the latest visual technologies.

Contrast that with Ray Tracing, which can be selectively applied to Reflections, Shadows or Lighting, or scaled up to full-blown Ray Tracing for everything (with a performance profile similar to UE5), all while looking much better and being technologically superior. Ray Tracing is also available in many more games, and the number is about to explode on PC with initiatives like RTX Remix and Path Tracing. AMD can't afford to look like the technologically inferior option on PC.

Focusing on pure rasterization is a dead end for AMD; we are no longer in 2018. Upscaling is the norm now, games released without upscaling are ridiculed and frowned upon, and in the upscaling department AMD is further behind. DLSS2 is exclusively available in many more games, which means NVIDIA users have access to higher performance that isn't available to others, all while looking virtually indistinguishable from native TAA (especially as most games ship with awful TAA implementations anyway).

Even in the games that support both DLSS2 and FSR2, the two are not equal. DLSS2 offers either higher image quality at the same performance as FSR2, or outright higher performance at the same image quality, as FSR2 Quality is often only equal to DLSS2 Balanced or DLSS2 Performance. DLSS Ray Reconstruction is another dimension entirely.

Either way, AMD has to step up their game: introduce more powerful upscalers to compensate for their rasterization deficit and ensure they don't fall further behind in ray tracing.
 
That would be a great oversight on AMD's part. UE5 titles are not exactly known for their performance-to-visuals ratio right now: you need the trifecta of Lumen, Nanite and Virtual Shadow Maps to achieve the required next-gen visual quality, which just costs too much at native resolution, and you end up needing upscaling to run with acceptable performance on high-end hardware. Worse yet, on consoles you need much heavier upscaling (from 720p/1080p) to maintain acceptable performance, all while still not using the latest visual technologies.

Contrast that with Ray Tracing, which can be selectively applied to Reflections, Shadows or Lighting, or scaled up to full-blown Ray Tracing for everything (with a performance profile similar to UE5), all while looking much better and being technologically superior. Ray Tracing is also available in many more games, and the number is about to explode on PC with initiatives like RTX Remix and Path Tracing. AMD can't afford to look like the technologically inferior option on PC.

Focusing on pure rasterization is a dead end for AMD; we are no longer in 2018. Upscaling is the norm now, games released without upscaling are ridiculed and frowned upon, and in the upscaling department AMD is further behind. DLSS2 is exclusively available in many more games, which means NVIDIA users have access to higher performance that isn't available to others, all while looking virtually indistinguishable from native TAA (especially as most games ship with awful TAA implementations anyway).

Even in the games that support both DLSS2 and FSR2, the two are not equal. DLSS2 offers either higher image quality at the same performance as FSR2, or outright higher performance at the same image quality, as FSR2 Quality is often only equal to DLSS2 Balanced or DLSS2 Performance. DLSS Ray Reconstruction is another dimension entirely.

Either way, AMD has to step up their game: introduce more powerful upscalers to compensate for their rasterization deficit and ensure they don't fall further behind in ray tracing.
UE5 is only outdone in terms of visuals-to-performance ratio by the latest NV GPUs. Going down the HWRT route is just not an option when current consoles are the lead development platform. I suspect this plays into AMD’s HW decisions. IMO they need a proper DLSS competitor more than better RT performance to possibly regain marketshare. They are also going to have to eat some profit margin and undercut NV prices for a while to get people to consider withdrawing from the NV ecosystem.
 
I suspect this plays into AMD’s HW decisions
No, they just focus on PPA-maxing first and foremost, since the primary target for Radeon as IP is your APUs and consoles and Exynoses of the world.
Adreno and Mali are much the same wrt RTRT perf.
IMO they need a proper DLSS competitor more than better RT performance to possibly regain marketshare.
The tricky part is the performance.
Without MFMA it's gonna be kinda (well, dogshit) slow, and even slower on DP4a-only parts.
You've seen DP4a XeSS; it's slow and the IQ is middling.
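To make that gap concrete, here is a minimal CUDA sketch (illustrative only, not any vendor's upscaler code) of int8 dot products built on the __dp4a intrinsic, which is roughly the class of math a DP4a-only upscaler has to lean on: each instruction retires a single 4-element dot product, whereas an MFMA/WMMA-class matrix instruction retires an entire matrix tile.

```
// Minimal int8 dot product via __dp4a (requires sm_61+; build with
// nvcc -arch=sm_61). Illustrative only: a real upscaler runs packed
// int8 convolutions, but the per-instruction throughput story is the same.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dotInt8(const int* a, const int* b, int n4, int* out) {
    // Each int packs four signed 8-bit values; one __dp4a retires a
    // 4-wide multiply-accumulate into a 32-bit integer accumulator.
    // An MFMA/WMMA-class instruction instead retires a whole matrix
    // tile (e.g. 16x16x16) per instruction -- hence the gap.
    int acc = 0;
    for (int i = threadIdx.x; i < n4; i += blockDim.x)
        acc = __dp4a(a[i], b[i], acc);
    atomicAdd(out, acc);
}

int main() {
    const int n4 = 1024;  // 4096 int8 values, packed four per int
    int *a, *b, *out;
    cudaMallocManaged(&a, n4 * sizeof(int));
    cudaMallocManaged(&b, n4 * sizeof(int));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < n4; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }
    *out = 0;
    dotInt8<<<1, 256>>>(a, b, n4, out);
    cudaDeviceSynchronize();
    printf("dot = %d\n", *out);  // 4096 * (1 * 2) = 8192
    return 0;
}
```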
They are also going to have to eat some profit margin and undercut NV prices for a while to get people to consider withdrawing from the NV ecosystem.
They need to just execute.
And build a halo shotgun which opens the door to them moving past 30% market share.
RDNA3 is 0/2 for that; better luck next time!
 
That would be a great oversight on AMD's part. UE5 titles are not exactly known for their performance-to-visuals ratio right now: you need the trifecta of Lumen, Nanite and Virtual Shadow Maps to achieve the required next-gen visual quality, which just costs too much at native resolution, and you end up needing upscaling to run with acceptable performance on high-end hardware.

UE5 will be everywhere though. I wonder if there’s anything AMD can do to give themselves a leg up in that engine or is it truly just flops and bandwidth.
 
UE5 will be everywhere though. I wonder if there’s anything AMD can do to give themselves a leg up in that engine or is it truly just flops and bandwidth.
In the absence of HWRT being used, the consoles will probably take care of that for them.
 
In the absence of HWRT being used, the consoles will probably take care of that for them.

True, UE5 seems to be doing well enough on consoles, but that won't necessarily help in an RDNA 4 vs Blackwell PC matchup. Consoles are pretty old now and UE5 has evolved a bit since their launch. Andrew mentioned 64-bit atomics as a key dependency for Nanite. Curious if there are other things like that where IHVs could lean in to better support the engine.
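For context on the 64-bit atomics point: Nanite's software rasterizer packs depth into the high bits of a 64-bit value so that a single atomic max acts as both the depth test and the visibility-buffer write. A minimal CUDA sketch of the pattern (names are hypothetical, and it assumes reverse-Z so that a larger depth value means closer):

```
// Sketch of the packed 64-bit visibility-buffer write (hypothetical names,
// not Epic's code). Assumes reverse-Z, so a larger depth value is closer.
#include <cstdint>

__device__ void writePixel(unsigned long long* vbuffer, int pixel,
                           float depth, uint32_t clusterAndTri) {
    // Non-negative floats keep their ordering when reinterpreted as
    // unsigned ints, so the packed value sorts by depth first.
    unsigned long long packed =
        ((unsigned long long)__float_as_uint(depth) << 32) | clusterAndTri;
    // One atomic: the write survives only if this fragment is the nearest
    // seen so far, and the cluster/triangle payload rides along for free.
    atomicMax(&vbuffer[pixel], packed);
}
```

Without a native 64-bit atomic, that single instruction becomes a compare-and-swap loop per pixel, which is exactly the kind of cost an IHV can make cheap in hardware.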
 
UE5 will be everywhere though. I wonder if there’s anything AMD can do to give themselves a leg up in that engine or is it truly just flops and bandwidth.
@Bold That's exactly what they intend to do with the upcoming GPU Work Graphs API ...

The hardest problem with Nanite is per-cluster hierarchical LoD selection. The naive solution is to do multiple indirect compute dispatches to calculate the LoDs while traversing ALL levels of the directed acyclic graph, but that just starves the GPU of work. The other solution, used on consoles, is a single-dispatch multi-producer/consumer persistent work queue with a global spinlock to coordinate between the different waves within the dispatch ...
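For readers who haven't seen the persistent-threads pattern, here is a heavily simplified CUDA sketch of such a single-dispatch producer/consumer queue. Everything here (struct layout, names, termination scheme) is hypothetical, and it glosses over the memory-ordering and occupancy details a real implementation must handle:

```
// Heavily simplified persistent-threads work queue (hypothetical names).
// One kernel launch walks the whole cluster DAG: every thread is both a
// consumer (pops a cluster) and a producer (pushes that cluster's children).
// Host-side setup (not shown): write the root cluster IDs into queue[],
// then set gAlloc = gCommit = gInFlight = rootCount and gRead = 0 via
// cudaMemcpyToSymbol. The launch must not exceed the resident block count,
// or the spin loops below can deadlock -- the exact hazard mentioned
// later in the thread for the PC path.
#include <cuda_runtime.h>

struct Cluster { int firstChild, childCount; float error; };

__device__ int gAlloc;     // queue slots reserved by producers
__device__ int gCommit;    // queue slots whose payload is fully written
__device__ int gRead;      // queue slots handed out to consumers
__device__ int gInFlight;  // produced-but-unfinished items

__global__ void lodSelect(const Cluster* clusters, int* queue, int cap,
                          int* visible, int* visibleCount, float threshold) {
    for (;;) {
        int slot = atomicAdd(&gRead, 1);                // claim the next item
        while (slot >= atomicAdd(&gCommit, 0))          // spin until it exists
            if (atomicAdd(&gInFlight, 0) == 0) return;  // DAG exhausted
        int id = queue[slot % cap];
        Cluster c = clusters[id];
        if (c.childCount == 0 || c.error <= threshold) {
            visible[atomicAdd(visibleCount, 1)] = id;   // LoD cut: draw it
        } else {
            // Produce the children: reserve, write, fence, then publish.
            atomicAdd(&gInFlight, c.childCount);
            int base = atomicAdd(&gAlloc, c.childCount);  // assumes cap is large enough
            for (int i = 0; i < c.childCount; ++i)
                queue[(base + i) % cap] = c.firstChild + i;
            __threadfence();                            // make payload visible
            while (atomicAdd(&gCommit, 0) != base) { /* publish in order */ }
            atomicAdd(&gCommit, c.childCount);
        }
        atomicAdd(&gInFlight, -1);                      // this item is done
    }
}
```

The key property is that the GPU never drains between DAG levels; the cost is the cross-workgroup spinning, which is exactly what makes the pattern fragile on PC, as noted further down the thread.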

As a bonus, we could potentially extend GPU Work Graphs with pixel shader nodes, where we could implement the "Loose Tiling" technique (a material classification system) from Horizon Forbidden West. They have a hybrid compute/graphics work queue (compute is the producer & pixel is the consumer) where they shade pixels with the compute shader ...
 
In the absence of HWRT being used, the consoles will probably take care of that for them.
UE5's HWRT will be used. Epic is improving its performance (the aim is to hit 60 fps with HWRT on consoles, IIRC), and it is highly likely that the further we get into the generation, the more "PC exclusive" features games will gain.
This whole idea that h/w RT won't be used because of consoles hasn't been panning out so far, and I don't see why it would be any different in the future.

That's exactly what they intend to do with the upcoming GPU Work Graphs API ...
That's a DX upgrade which will be (in fact, already is) supported by all IHVs. Unless RDNA is somehow better suited to take advantage of that particular approach, all GPUs will get an improvement from it. The baseline will remain the same.
 
We've seen it time and time again: API changes come along to better suit AMD's hardware and improve its performance, and by the time they're used in practice, Nvidia already has improved hardware support that completely negates the difference.
 
We've seen it time and time again: API changes come along to better suit AMD's hardware and improve its performance, and by the time they're used in practice, Nvidia already has improved hardware support that completely negates the difference.
That's assuming that Nvidia has to improve anything at all. The only recent time that was the case was with compute (async or not) on pre-Maxwell h/w, but that arguably happened well into Maxwell's lifecycle anyway.
 
@Bold That's exactly what they intend to do with the upcoming GPU Work Graphs API ...

The hardest problem with Nanite is per-cluster hierarchical LoD selection. The naive solution is to do multiple indirect compute dispatches to calculate the LoDs while traversing ALL levels of the directed acyclic graph, but that just starves the GPU of work. The other solution, used on consoles, is a single-dispatch multi-producer/consumer persistent work queue with a global spinlock to coordinate between the different waves within the dispatch ...

Which approach is used on PC? Anything that keeps work on-chip and keeps the GPU busy would be a godsend.

That's assuming that Nvidia has to improve anything at all. The only recent time that was the case was with compute (async or not) on pre-Maxwell h/w, but that arguably happened well into Maxwell's lifecycle anyway.

Seems it’s still just AMD with a work graphs preview driver, and none of the other IHVs are talking about it. That likely means it’s more of a priority for AMD and/or their hardware is already well suited to it.

It probably needs a beefy scheduler.
 
Going down the HWRT route is just not an option when current consoles are the lead development platform
That's no longer true; the landscape is shifting. I called this back in 2020, and I will call it again now: when consoles are technologically behind PCs, fewer people will be willing to migrate to them; in fact, more people will migrate to PC to enjoy the better visuals and performance features. And now we have publishers and developers migrating to PC as well; PlayStation, Xbox and PC games are all on PC! The biggest market is in fact PC. Consoles are not the sole lead development platform anymore. The console population can't grow any larger to justify the running costs, and may in fact be under threat of shrinking (if it isn't shrinking already).

True UE5 seems be doing well enough on consoles
I wouldn't say well, to be honest. Performance leaves a lot to be desired, and image quality is not always the best.

For a UE5 title to sport the next-gen "look" of the engine, it needs to use Nanite, Lumen and Virtual Shadow Maps, after which the title won't have very good performance on consoles. The title will either have to start from a very low resolution (720p to 1080p), like Immortals of Aveum and Lords of the Fallen, or sacrifice a great deal of visual features, like Ark Survival Ascended or Talos Principle 2 (which omitted Lumen Reflections entirely).

Worse yet, in my opinion Lumen doesn't always elevate the visuals to next-gen quality, leading to mixed results. Lumen does look excellent in certain outdoor scenarios, but it falls apart in most indoor ones, as software Lumen has lots of screen-space elements, especially around emissive lights and ambient occlusion, which break apart as you move the camera. It also has light leaks and light boiling in areas with secondary bounces/emissive lights, and many indoor sections lack indirect lighting entirely. All of this comes on top of the lackluster reflections.

This was shown in the DF analysis of UE5 games.

On PC, there are lots of CPU-related problems: single-threaded bottlenecks, GPU underutilization, PSO stutters and traversal stutters. Most of these PC problems should be fixed in UE5.4, but we shall see.
 
in fact, more people will migrate to PC to enjoy the better visuals and performance features
PC TAM is at a record low this year (~260m units).
What did you mean by that?

(Not like consoles are doing that great; AA/AAA gaming is finally crashing for the first time since what, 2008?)
 
Seems it’s still just AMD with a work graphs preview driver, and none of the other IHVs are talking about it. That likely means it’s more of a priority for AMD and/or their hardware is already well suited to it.

It probably needs a beefy scheduler.
Nvidia has supported the current D3D12_WORK_GRAPHS_TIER_0_1 (I don't think there is anything higher in the API yet? AMD supports the same, at least) since R545 in public driver branches.
 
UE5's HWRT will be used. Epic is improving its performance (the aim is to hit 60 fps with HWRT on consoles, IIRC), and it is highly likely that the further we get into the generation, the more "PC exclusive" features games will gain.
This whole idea that h/w RT won't be used because of consoles hasn't been panning out so far, and I don't see why it would be any different in the future.
I hope I'm wrong, but I don't expect HWRT to become too commonplace in UE5 games on consoles. WRT PC, I only expect a minority of devs to add much in the way of PC-only RT in these titles.
 
I hope I'm wrong, but I don't expect HWRT to become too commonplace in UE5 games on consoles. WRT PC, I only expect a minority of devs to add much in the way of PC-only RT in these titles.
We don't need all or even many releases to use HWRT for it to be a desired feature; several high-profile games will be enough for that.
 
Which approach is used on PC? Anything that keeps work on-chip and keeps the GPU busy would be a godsend.
They currently use the multi-pass ExecuteIndirect approach on PC, because implementing persistent threads with a global spinlock to do inter-workgroup synchronization caused too many deadlocks there, and it's massively undefined behaviour too ...
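For contrast with the persistent-threads sketch earlier in the thread, here is the shape of that multi-pass approach, sketched in CUDA terms (ExecuteIndirect itself is D3D12; here a host loop with a readback stands in for the GPU-driven argument buffer, and the Cluster struct is reused from the earlier sketch):

```
// Level-by-level traversal: one dispatch per DAG level, ping-ponging
// queues. This is the shape of the multi-pass path; D3D12 would use
// ExecuteIndirect with a GPU-side count instead of the readback below.
// Reuses the Cluster struct from the persistent-threads sketch.
#include <cuda_runtime.h>

__global__ void expandLevel(const Cluster* clusters, const int* inQ,
                            int inCount, int* outQ, int* outCount,
                            int* visible, int* visibleCount, float threshold) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= inCount) return;
    Cluster c = clusters[inQ[i]];
    if (c.childCount == 0 || c.error <= threshold) {
        visible[atomicAdd(visibleCount, 1)] = inQ[i];   // LoD cut
    } else {
        int base = atomicAdd(outCount, c.childCount);   // append children
        for (int j = 0; j < c.childCount; ++j)
            outQ[base + j] = c.firstChild + j;
    }
}

void traverse(const Cluster* dClusters, int* qA, int* qB, int* dCount,
              int* dVisible, int* dVisibleCount, int rootCount,
              int maxDepth, float threshold) {
    int inCount = rootCount;  // qA holds the root cluster IDs
    for (int level = 0; level < maxDepth && inCount > 0; ++level) {
        cudaMemset(dCount, 0, sizeof(int));
        int blocks = (inCount + 255) / 256;
        expandLevel<<<blocks, 256>>>(dClusters, qA, inCount, qB, dCount,
                                     dVisible, dVisibleCount, threshold);
        // The readback is a full sync: shallow levels leave the GPU idle,
        // which is the "starves the GPU of work" problem quoted above.
        cudaMemcpy(&inCount, dCount, sizeof(int), cudaMemcpyDeviceToHost);
        int* t = qA; qA = qB; qB = t;                   // ping-pong
    }
}
```

Safe and well-defined, but every level boundary is a potential bubble, which is the gap work graphs aim to close.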

I guess that's what Andrew meant by "forcing the issue", since Epic Games is a major ISV that can't be ignored and has a lot of POWER to outright terrorize IHVs and Microsoft into doing whatever they want. Epic Games absolutely had it their way in the end with GPU Work Graphs ...
 