AMD Execution Thread [2024]

Is it a fact that Vega was targeting the 1080 Ti instead of the 1080?
No one has any idea, but the die is GP102-sized with none of the perf.
I still think RDNA 3 is worse off
No.
DLSS alone makes it hard to consider an AMD GPU without a huge price difference.
Bling.
We're talking core h/w PPA metrics where Vega was an absolute catastrophe and RDNA3 is still manageable, even if it doesn't clock 3.5GHz.
No it doesn't rule anything, and it was never the nature of UE to rule anything.
Yes it does!
UE3 and UE4, despite being used everywhere in lots of games, never dictated anything: they never replaced other engines or pushed them away, and they never dictated how GPUs evolve in major ways.
Engine development costs were far cheaper back then and barrier to entry far lower.
You're forgetting the important bits.
Gamedev got a lot more expensive overall, especially at AA/AAA scale.
UE5 is a cheap pathway to a good AAA engine, so it DID in fact devour the middle market and is now going into AAA just fine.
Like CDPR killed RED engine off in favour of UE5.
Writing's on the wall.
Yet it never influenced anything.
Pretty sure the 7th gen brownwave-core visuals were a distinct UE3 collateral (but don't quote me on that, 15 years ago and all).
 
UE5 with Nanite was released in early access in 2021. So 3 years now.
Sure but it wasn't in a "production ready" state until another year later ...

It's catching on very quickly since its inception. We'll have AAA titles this year such as Silent Hill 2, MGS Delta, STALKER/Bloodlines 2, and I imagine Hellblade 2/Avowed are likely candidates too, all of which are games that IHVs would want to do well in. The most prestigious projects from Square further down the line are using UE5, as is Crystal Dynamics' new Tomb Raider game, and I think both want to use Nanite badly. Then we have CD Projekt RED too, who will definitely want to use Nanite as well ...
 
Right now virtual geometry technology is in its relative 'infancy' (under 2 years old) in comparison to ray tracing (over 5 years old), so it might not just be UE5 that IHVs should be worried about when similar technology could come to other engines ...

AC Red seems to use virtualized geometry


Assassin’s Creed Red to Feature Ray Traced Global Illumination and Virtual Geometry; Stealth to Be Expanded

 
Gamedev got a lot more expensive overall, especially at AA/AAA scale.
Same thing happened with UE4, it amounted to nothing in the end.
UE5 is a cheap pathway to a good AAA engine, so it DID in fact devour the middle market and is now going into AAA just fine.
I disagree with this. Again, the same thing happened with UE4 and in the end nothing came of it. Also, not all UE5 games are the same: some use Nanite alone, some use Lumen alone, some use all of the features, and some don't use any of the features. Lumping all of these together into one group and expecting UE5 to somehow dictate how GPUs are made is the wrong point of view; not even UE5 game developers agree on what's important in UE5.

Sure but it wasn't in a "production ready" state until another year later ...
Same excuse could be made for ray tracing. Despite being released in 2018, APIs, engines, drivers, developer experience and best optimizations took some time to mature. I don't see why Nanite would get a "pass" for not being production ready upon release while ray tracing doesn't get the same pass.

MGS Delta
Does MGS Delta really use Nanite? It certainly doesn't look like it from the trailer.

That's also my point really: not all UE5 games will ship with Nanite. We already have several that don't (The Finals, Tekken 8, Layers of Fear, Slender The Arrival, Stray Souls, Cepheus Protocol, etc.), and some games use Nanite selectively (Satisfactory, Quantum Error). There is a real possibility that most AA UE5 games won't ship with Nanite. So expecting that Nanite will somehow change the hardware landscape because it will be everywhere is not a realistic view of the situation.
 
Same thing happened with UE4, it amounted to nothing in the end.
No? If anything 8th gen was a recovery from the 7th one where UE3 was crawling all over the middle market.
I disagree with this
Subjectivism is nice but UE5 is objectively crawling all over the industry that's already struggling with costs as is.
See all the mass gamedev layoffs.
and expecting UE5 to somehow dictate how GPUs are made is the wrong point of view; not even UE5 game developers agree on what's important in UE5.
Epic has to agree. They make the thing.
3rd parties just use what they provide with optionalities.
 
AC Red seems to use virtualized geometry
We'll have to see some more later on to corroborate that claim ...

I wonder what other engines are candidates to integrate virtual geometry next? Frostbite, Decima, IW, id Tech, perhaps Insomniac?

Does MGS Delta really use Nanite? It certainly doesn't look like it from the trailer.
Well, the same studio doing the project has a GDC presentation in the next couple of days discussing the subject matter, so I can only assume ...
 
I wonder what other engines are candidates to integrate virtual geometry next? Frostbite, Decima, IW, id Tech, perhaps Insomniac?

Anvil, Decima, Insomniac, RE seem like good bets.

Once Nanite is everywhere the other guys will be hard pressed to match the environment detail, so they need to do something.

Haven’t seen much footage of Alan Wake 2. Where does that fall on the spectrum of geometry detail? Anywhere close to Nanite?
 
Competition for Nanite and Lumen would be fantastic. After all, Nanite is still a rasterizer and comes with all the usual limitations of rasterization. The holy grail is high resolution geometry alongside proper light transport. In terms of engine adoption there are far more engines supporting RT today than Nanite-level assets. So the question is: will they all drop RT, or will they evolve their RT implementations over time to work with higher resolution geometry?
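
For anyone who wants the "still a rasterizer" point made concrete: at the bottom of any rasterizer, including a software one like Nanite's compute path, sits a per-pixel coverage test against a projected triangle, which only answers visibility from the camera's point of view, never along arbitrary rays. A minimal, purely illustrative edge-function test (textbook form, not Epic's code):

```cpp
#include <cstdio>

// Edge function: positive when point (px, py) lies to the left of edge (ax, ay) -> (bx, by).
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Coverage test at the heart of every rasterizer: a pixel is inside the triangle
// iff it sits on the same side of all three edges (counter-clockwise winding assumed).
static bool covers(float x0, float y0, float x1, float y1, float x2, float y2,
                   float px, float py) {
    float w0 = edge(x1, y1, x2, y2, px, py);
    float w1 = edge(x2, y2, x0, y0, px, py);
    float w2 = edge(x0, y0, x1, y1, px, py);
    return w0 >= 0 && w1 >= 0 && w2 >= 0;
}

int main() {
    // Rasterization only answers "which pixels does this triangle cover from the camera?"
    // It cannot answer visibility along arbitrary rays, which is the limitation in question.
    bool inside = covers(0, 0, 4, 0, 0, 4, 1.0f, 1.0f);
    std::printf("%s\n", inside ? "covered" : "not covered");
}
```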

Either way there’s no indication RDNA holds a natural advantage in UE5 or any potential virtualized geometry solution. It’s all cache and compute, right? And everyone has that. AMD needs a clear win to steal market and mind share.
I'm not sure where the idea that Nanite is somehow against h/w RT is even coming from. It's not. If anything it should be faster to use h/w RT with something like Nanite instead of using s/w RT with a similar resulting quality - which we see in games already.
 
I'm not sure where the idea that Nanite is somehow against h/w RT is even coming from. It's not. If anything it should be faster to use h/w RT with something like Nanite instead of using s/w RT with a similar resulting quality - which we see in games already.

Nanite as currently implemented isn’t compatible with RT. It’s a rasterizer to start with and the geometry data structure isn’t compatible with current BVH formats.
 
Nanite as currently implemented isn’t compatible with RT. It’s a rasterizer to start with and the geometry data structure isn’t compatible with current BVH formats.
The exact same can be said about its relation to s/w Lumen. None of these are in any way stopping the RT h/w from providing better rendering results still.
 
The exact same can be said about its relation to s/w Lumen. None of these are in any way stopping the RT h/w from providing better rendering results still.

I’m not sure what you mean. Lumen was designed to work with Nanite assets and the Nanite rasterizer so of course it works. RT is a completely different renderer to Nanite.
 
Lumen was designed to work with Nanite assets and the Nanite rasterizer so of course it works.
It works but not with Nanite assets; it works with an SDF representation of such, which is just the same "proxy" hack required for h/w RT to work with Nanite as well. There is no difference in approaches, but there are differences in the end result, where h/w RT provides better fidelity. So this whole talk about how virtualized geometry is incompatible with h/w RT is completely pointless, since even now there is no better way of doing RT against that, and it should be possible to do h/w RT without proxies against Nanite triangles in the future; nothing prevents this from happening.
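
To make the "proxy" point concrete: software Lumen's distance-field path boils down to sphere tracing, stepping along the ray by the sampled distance value until the surface is reached. A minimal sketch, with a placeholder sphere SDF standing in for the streamed per-mesh distance fields a real engine would sample:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Placeholder signed distance function: a unit sphere at the origin.
// A real engine would sample a 3D texture of streamed mesh SDF bricks here.
static float sceneSDF(Vec3 p) { return len(p) - 1.0f; }

// Sphere tracing: advance along the ray by the current distance-field value.
// The SDF guarantees no surface lies closer than that value, so the step is safe.
static bool sphereTrace(Vec3 origin, Vec3 dir, float maxT, float& hitT) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-3f) { hitT = t; return true; }  // close enough: report a hit
        t += d;
    }
    return false;
}

int main() {
    float t;
    if (sphereTrace({0, 0, -3}, {0, 0, 1}, 100.0f, t))
        std::printf("hit at t = %.3f\n", t);  // ~2.0 for the unit sphere
}
```

The proxy nature is right there in sceneSDF: you trace against a blurry distance field of the mesh rather than the Nanite triangles themselves, just as h/w RT currently traces against a simplified proxy mesh.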
 
Epic has to agree. They make the thing.
3rd parties just use what they provide with optionalities
Which is what I am saying: not all 3rd parties are using Nanite.
Subjectivism is nice but UE5 is objectively crawling all over the industry that's already struggling with costs as is.
Let's just agree to disagree and move on, time will tell -as always- who is right.

On another note, Tinycorp is ditching their effort to do AI on consumer AMD hardware, opting for NVIDIA and Intel instead.

 
I doubt he will get any more love from Intel or Nvidia.

Yeah all the problems described in the article will likely be worse on Nvidia’s stuff.

It works but not with Nanite assets; it works with an SDF representation of such, which is just the same "proxy" hack required for h/w RT to work with Nanite as well. There is no difference in approaches, but there are differences in the end result, where h/w RT provides better fidelity. So this whole talk about how virtualized geometry is incompatible with h/w RT is completely pointless, since even now there is no better way of doing RT against that, and it should be possible to do h/w RT without proxies against Nanite triangles in the future; nothing prevents this from happening.

Yes, Lumen uses the SDF proxy for hit tracing. It can also use hardware RT against a proxy mesh for the same hit tracing. In both cases it depends on rasterizing Nanite meshes to populate the surface caches used to bounce light around the scene. And those caches are much lower fidelity than the per-pixel resolution of something like ReSTIR GI.

I think I understand what you’re getting at. Technically you can render primary visibility with Nanite and populate your visibility & gbuffer. Then raytrace your shadow and GI passes. Maybe this is exactly what Wukong is doing.

This is a random guess but I expect RTXDI shadows will be a net quality and performance win over VSMs with a non trivial number of lights.

For GI it’s a tough call. Lumen’s surface cache updates are spread out over multiple frames and sampling the surface cache is probably very cheap. Per-pixel RTGI + denoising is likely much more expensive.
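
A rough illustration of why the cached path stays cheap: a surface cache only refreshes a fixed budget of pages per frame and cycles through the rest over subsequent frames, so the per-frame cost is flat while the lighting lags a few frames behind. A toy scheduler with made-up page counts and budgets:

```cpp
#include <cstdio>
#include <vector>

// Toy amortized cache updater: refresh only `budget` pages per frame,
// cycling through the whole cache over several frames.
struct SurfaceCache {
    std::vector<int> lastUpdatedFrame;
    size_t cursor = 0;

    explicit SurfaceCache(size_t pageCount) : lastUpdatedFrame(pageCount, -1) {}

    void update(int frame, size_t budget) {
        for (size_t i = 0; i < budget && i < lastUpdatedFrame.size(); ++i) {
            lastUpdatedFrame[cursor] = frame;  // pretend we re-lit this page
            cursor = (cursor + 1) % lastUpdatedFrame.size();
        }
    }
};

int main() {
    SurfaceCache cache(4096);           // hypothetical number of surface-cache pages
    const size_t budgetPerFrame = 512;  // hypothetical per-frame refresh budget
    for (int frame = 0; frame < 8; ++frame)
        cache.update(frame, budgetPerFrame);
    // Every page gets touched once per 4096 / 512 = 8 frames: cheap per frame,
    // but the cached lighting is stale compared to per-pixel RTGI traced every frame.
    std::printf("page 0 last updated on frame %d\n", cache.lastUpdatedFrame[0]);
}
```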

Again it begs the question. What’s AMD’s plan to get out ahead on any of these options? More cache and more CUs may not be enough.
 
It works but not with Nanite assets; it works with an SDF representation of such, which is just the same "proxy" hack required for h/w RT to work with Nanite as well. There is no difference in approaches, but there are differences in the end result, where h/w RT provides better fidelity. So this whole talk about how virtualized geometry is incompatible with h/w RT is completely pointless, since even now there is no better way of doing RT against that, and it should be possible to do h/w RT without proxies against Nanite triangles in the future; nothing prevents this from happening.
One major advantage of using offline generated SDFs is that they can be efficiently streamed in and out of the level, as opposed to a rebuild/refit for a BVH ...

Even with proxy geometry, the limits of a BVH become much more obvious when you use virtual geometry with "world position offsets" ...
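
For context on why per-frame deformation hurts the BVH path: once vertices move, the interior node bounds go stale and have to be refit bottom-up (or the tree rebuilt) before tracing, whereas a precomputed SDF volume just streams in as baked. A bare-bones refit pass over an illustrative binary BVH layout, not any particular API's:

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float min[3], max[3];
};

static AABB merge(const AABB& a, const AABB& b) {
    AABB out;
    for (int i = 0; i < 3; ++i) {
        out.min[i] = std::min(a.min[i], b.min[i]);
        out.max[i] = std::max(a.max[i], b.max[i]);
    }
    return out;
}

// Illustrative node layout: leaves hold bounds over their (already deformed) triangles;
// interior nodes reference two children by index.
struct BVHNode {
    AABB bounds{};
    int left = -1, right = -1;  // -1 means leaf
};

// Refit: recompute interior bounds bottom-up so they enclose the moved geometry again.
// Topology stays fixed, which is why refit is cheaper than a rebuild but degrades
// tracing quality when the deformation is large.
static AABB refit(std::vector<BVHNode>& nodes, int index) {
    BVHNode& node = nodes[index];
    if (node.left < 0)  // leaf: bounds assumed already updated from deformed vertices
        return node.bounds;
    node.bounds = merge(refit(nodes, node.left), refit(nodes, node.right));
    return node.bounds;
}

int main() {
    // Tiny tree: root (index 0) with two leaves.
    std::vector<BVHNode> nodes(3);
    nodes[1].bounds = {{0, 0, 0}, {1, 1, 1}};
    nodes[2].bounds = {{2, 0, 0}, {3, 1, 1}};
    nodes[0].left = 1;
    nodes[0].right = 2;
    refit(nodes, 0);  // root now spans [0, 3] on x
}
```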
 
Again it begs the question. What’s AMD’s plan to get out ahead on any of these options? More cache and more CUs may not be enough.
BTW it's not like there's a hard 'consensus' as to what the employees at AMD think will be the one dominant indirect lighting technique in the near/extended future ...

There's one who's considering improving reflective shadow maps (one-bounce diffuse only/light leaking/streaking), but others are working on sparse distance fields (shadowing only/no material data) or two-level radiance caching (prohibitively expensive updates for the data structure), and the most popular AAA PC/console game engine appears to be settling on signed distance fields with surface cards ...

Even AMD's competitor realizes that it's unsustainable to keep integrating unused hardware (extraordinarily so with the same process technology) which is why they've been looking for more use cases like texturing (wasn't put into practice) and especially denoising/radiance caching for ray tracing ... (higher quality temporal upscaling alone may not turn out to be a good enough justification for them keeping hardware for it)
 
So as I was saying yesterday, here are the improvements Epic made to Hardware Lumen in UE5.4.

Substantial improvements have been made to hardware ray tracing (HWRT). These improvements offer speed gains of 2x in the case of primitives and help to ship 60 Hz experiences which use HWRT.
  • GPU instance culling, parallelization for instances and primitives.
  • Additional primitive types.
  • Optimized path tracer with roughly a 15% speed improvement over Release 5.3 and roughly equivalent to Release 5.2, without any reduction in features or need to introduce additional shader permutations.
  • HWRT uses the path tracer light grid and consequently supports very large numbers of lights (a generic sketch of such a light grid follows below).
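
For readers unfamiliar with the term: a light grid here just means binning lights into cells once per frame so that each shading sample only walks a short per-cell list instead of every light in the scene. A generic clustered-lighting sketch, not Epic's actual data layout:

```cpp
#include <cstdint>
#include <vector>

// Generic light grid: lights are binned per screen-space cell once per frame,
// then each shading sample reads only the short list for its own cell.
struct LightGrid {
    int cellsX, cellsY;
    std::vector<std::vector<uint32_t>> cells;  // light indices per cell

    LightGrid(int cx, int cy) : cellsX(cx), cellsY(cy), cells(cx * cy) {}

    // Insert a light into every cell its screen-space bounds overlap.
    void bin(uint32_t lightIndex, int minCellX, int minCellY, int maxCellX, int maxCellY) {
        for (int y = minCellY; y <= maxCellY; ++y)
            for (int x = minCellX; x <= maxCellX; ++x)
                cells[y * cellsX + x].push_back(lightIndex);
    }

    const std::vector<uint32_t>& lightsForPixel(int px, int py, int width, int height) const {
        int cx = px * cellsX / width;
        int cy = py * cellsY / height;
        return cells[cy * cellsX + cx];  // shading cost scales with lights per cell, not total lights
    }
};

int main() {
    LightGrid grid(16, 9);
    grid.bin(/*lightIndex=*/0, 0, 0, 3, 3);    // a light covering the top-left cells
    grid.bin(/*lightIndex=*/1, 10, 4, 15, 8);  // another light elsewhere on screen
    const auto& lights = grid.lightsForPixel(100, 100, 1920, 1080);
    (void)lights;  // a top-left pixel only sees light 0, no matter how many lights exist
}
```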

 
Even AMD's competitor realizes that it's unsustainable to keep integrating unused hardware (extraordinarily so with the same process technology) which is why they've been looking for more use cases like texturing (wasn't put into practice) and especially denoising/radiance caching for ray tracing ... (higher quality temporal upscaling alone may not turn out to be a good enough justification for them keeping hardware for it)
That's not RT h/w, and saying that it's "unused" when everyone is trying to build it into their chips one way or another is quite weird.
 
Haven’t seen much footage of Alan Wake 2. Where does that fall on the spectrum of geometry detail? Anywhere close to Nanite?

AW2 doesn't have geometry detail on any level similar to UE5; it's still traditional geometry, just pushed to what current console hardware can handle.

For their next title, it was mentioned in a tech article that they are looking into some form of geometry rendering similar in density to what can be accomplished by Nanite.

Regards,
SB
 