AMD: RDNA 3 Speculation, Rumours and Discussion

I think I have an idea of what you're asking, and the answer is no: mesh shaders are still tied to the graphics pipeline because their output is fixed entirely for HW rasterizer consumption ...

It'd be a different story if mesh shaders were truly a part of the compute pipeline, exactly like compute shaders are ...
I feel that Mesh Shaders tier 1.1 is incoming.
 
Software rasterisation in UE5 is a nice wake up call. When you go really big you can do something magical.

The software rasterizer isn’t really the magical part. I’m sure it’s trivial to beat it in hardware suited to the task. The real magic in Nanite is the art pipeline and dynamic LOD and culling system. Wouldn’t it be cool though if next gen hardware has both big triangle and micro poly rasterizers.
 
The software rasterizer isn’t really the magical part. I’m sure it’s trivial to beat it in hardware suited to the task. The real magic in Nanite is the art pipeline and dynamic LOD and culling system. Wouldn’t it be cool though if next gen hardware has both big triangle and micro poly rasterizers.

I don't know about PC, but Epic said that on PS5 they used the primitive shader pipeline for big triangles; maybe they use mesh shaders for big triangles on PC/Xbox Series too.
 
The software rasterizer isn’t really the magical part. I’m sure it’s trivial to beat it in hardware suited to the task. The real magic in Nanite is the art pipeline and dynamic LOD and culling system. Wouldn’t it be cool though if next gen hardware has both big triangle and micro poly rasterizers.

Micropoly rasterizers aren't necessary since Nanite's pipeline can exploit more parallelism in comparison to the traditional geometry pipeline. More compute power will allow Nanite to naturally scale with higher performance ...
 
Micropoly rasterizers aren't necessary since Nanite's pipeline can exploit more parallelism in comparison to the traditional geometry pipeline. More compute power will allow Nanite to naturally scale with higher performance ...

Sure about that? Ampere has double the FP32 throughput per compute unit and yet it doesn't perform much better than RDNA2 or Turing.
 
Sure about that? Ampere has double the FP32 throughput per compute unit and yet it doesn't perform much better than RDNA2 or Turing.

There's more to compute performance than just pure addition or multiplication throughput ...

Occupancy, atomics (Nanite uses these to perform depth testing), local memory usage, etc. ...
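
To make the atomics point concrete, here's a rough CPU-side C++ sketch of the general idea (the packing layout, names and depth convention are my own illustration, not Epic's actual code): pack the depth into the high 32 bits of a 64-bit word and the visible triangle/cluster ID into the low 32 bits, and a single atomic max then does the depth test and the visibility write in one step.

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <cstdio>

// One 64-bit word per pixel: depth in the high 32 bits, payload
// (e.g. a cluster/triangle ID) in the low 32 bits. Larger packed
// value == closer fragment here (a reverse-Z style convention is
// assumed for this sketch).
static uint64_t PackFragment(float depth, uint32_t payload) {
    uint32_t depthBits = 0;
    std::memcpy(&depthBits, &depth, sizeof depth); // bit pattern is monotonic for depth >= 0
    return (uint64_t(depthBits) << 32) | payload;
}

// Emulates an InterlockedMax on a 64-bit visibility-buffer texel with a
// compare-exchange loop (std::atomic has no portable fetch_max before C++26).
static void AtomicDepthTest(std::atomic<uint64_t>& pixel, uint64_t fragment) {
    uint64_t current = pixel.load(std::memory_order_relaxed);
    while (fragment > current &&
           !pixel.compare_exchange_weak(current, fragment,
                                        std::memory_order_relaxed)) {
        // 'current' is refreshed by compare_exchange_weak on failure.
    }
}

int main() {
    std::atomic<uint64_t> pixel{0};
    AtomicDepthTest(pixel, PackFragment(0.25f, /*triangleId=*/7));
    AtomicDepthTest(pixel, PackFragment(0.75f, /*triangleId=*/9)); // closer, wins
    std::printf("winning triangle id = %u\n",
                uint32_t(pixel.load() & 0xffffffffu));
}
```

On the GPU the rough equivalent is one 64-bit atomic per covered pixel, which is why atomic throughput and occupancy matter at least as much as raw FLOPs for this kind of rasterizer.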
 
Micropoly rasterizers aren't necessary since Nanite's pipeline can exploit more parallelism in comparison to the traditional geometry pipeline. More compute power will allow Nanite to naturally scale with higher performance ...

There are other game engines besides UE5. Maybe devs want to use mesh shaders to handle their high resolution geometry and hw micro poly rasterizers would come in handy for that.

Thing is, NV/Intel GPUs can do both.

I honestly don’t know why AMD has such a strong preference for inline RT. What about their software or hardware makes it unsuitable for callable shaders? All the other big guys (MS, Nvidia, Intel) seem to be fine with it.
 
I honestly don’t know why AMD has such a strong preference for inline RT. What about their software or hardware makes it unsuitable for callable shaders? All the other big guys (MS, Nvidia, Intel) seem to be fine with it.
I presume the recommendation stems from the expectation of inline RT being "simple" when compared to a more "complex" separate RT state. Meaning that it's not exactly a h/w deficiency in the usage of callable shaders, it's a general lack of RT performance - which in the case of "simple" RT implementations could be hidden by some other bottleneck.
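
For anyone who hasn't looked at the two models, here's a toy CPU-side contrast of what "inline" vs "separate RT state" means for the programmer. All names here are made up for illustration and this is only a loose analogy, not the actual DXR or driver API.

```cpp
#include <cstdint>
#include <cstdio>

struct Ray   { float o[3], d[3]; };
struct Hit   { bool valid; uint32_t materialId; float t; };
struct Color { float r, g, b; };

// Pretend traversal: returns a hit as plain data instead of launching shaders.
static Hit TraceRayInlineToy(const Ray&) { return {true, 1, 4.2f}; }

static Color ShadeOpaque(const Hit&) { return {0.8f, 0.2f, 0.2f}; }
static Color ShadeGlass(const Hit&)  { return {0.2f, 0.2f, 0.8f}; }

int main() {
    Ray ray{};

    // "Inline" style (RayQuery-like): the calling shader owns the control
    // flow, gets the hit back as data, and branches itself. Everything is
    // visible to the compiler, so it stays one compact program.
    Hit hit = TraceRayInlineToy(ray);
    Color a = (hit.valid && hit.materialId == 1) ? ShadeGlass(hit) : ShadeOpaque(hit);

    // "Pipeline" style (TraceRay-like): the hit is routed through a shader
    // table, i.e. an indirect call picked at runtime, and the driver/hardware
    // schedules the hit shader as a separately compiled program.
    Color (*shaderTable[])(const Hit&) = { ShadeOpaque, ShadeGlass };
    Color b = shaderTable[hit.materialId % 2](hit);

    std::printf("%f %f %f / %f %f %f\n", a.r, a.g, a.b, b.r, b.g, b.b);
}
```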
 
There's more to compute performance than just pure addition or multiplication throughput ...

Occupancy, atomics (Nanite uses these to perform depth testing), local memory usage, etc. ...

But then it doesn't scale with compute performance at all. And doubling the number of compute units will make efficiency and scaling worse than doubling per-CU throughput would.
 
There are other game engines besides UE5. Maybe devs want to use mesh shaders to handle their high resolution geometry and hw micro poly rasterizers would come in handy for that.



I honestly don’t know why AMD has such a strong preference for inline RT. What about their software or hardware makes it unsuitable for callable shaders? All the other big guys (MS, Nvidia, Intel) seem to be fine with it.

Lack of hardware? My guess is they will be fine with callable shaders once they have a next-gen / decent RT pipeline.
 
But then it doesn't scale with compute performance at all. And doubling the number of compute units will make efficiency and scaling worse than doubling per-CU throughput would.
I think it's a bit premature to say that whatever benchmarks of UE5 we've seen so far are limited by Nanite performance.
In fact I'm fairly sure that they are not. The most likely limitation is the good old shading.
 
I honestly don’t know why AMD has such a strong preference for inline RT. What about their software or hardware makes it unsuitable for callable shaders? All the other big guys (MS, Nvidia, Intel) seem to be fine with it.

There is a better API design available on AMD HW for the ray tracing pipeline, but it would involve developers resorting to driver extensions like AGS, where you can replace the TraceRay() intrinsic with a TraceRayAMD() intrinsic ...

All ray tracing shaders get compiled as compute shaders on AMD HW, so the only possible advantage of specialized ray tracing shaders is lower register usage. The risk of introducing many different shaders, as opposed to a single "uber-material shader", is that function calls stop getting inlined, which causes spilling in the process ...
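
A small C++ analogy of that inlining argument (the names are mine, and CPU inlining is only a loose stand-in for GPU register allocation): with one "uber" function behind a switch the compiler sees every material path and can inline and allocate registers accordingly, while routing each material through a separately compiled, indirectly called function forces the caller to assume the worst case, which on a GPU shows up as register pressure or spilling.

```cpp
#include <cstdint>
#include <cstdio>

// "Uber-material" style: one entry point with a switch over material IDs.
// Every path is visible at the call site, so the compiler can inline the
// whole thing and budget registers for the branch actually taken.
static float ShadeUber(uint32_t materialId, float x) {
    switch (materialId) {
        case 0:  return x * 0.5f;       // cheap material
        case 1:  return x * x + 0.25f;  // mid material
        default: return x * x * x - x;  // expensive material
    }
}

// "Many separate shaders" style: each material is its own function reached
// through an indirect call (loosely analogous to callable/hit shaders picked
// from a shader table). The call site is opaque to the optimizer, so nothing
// gets inlined and the caller has to be compiled for the worst case - which
// on a GPU means higher register pressure or spilling.
using MaterialFn = float (*)(float);
static float ShadeCheap(float x)     { return x * 0.5f; }
static float ShadeMid(float x)       { return x * x + 0.25f; }
static float ShadeExpensive(float x) { return x * x * x - x; }

int main() {
    const MaterialFn shaderTable[] = { ShadeCheap, ShadeMid, ShadeExpensive };
    float a = ShadeUber(1, 2.0f);
    float b = shaderTable[1](2.0f);
    std::printf("%f %f\n", a, b); // both 4.25
}
```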
 
There are other game engines besides UE5. Maybe devs want to use mesh shaders to handle their high resolution geometry and hw micro poly rasterizers would come in handy for that.

True, but listening to a software developer (Rami Ismail) with connections across the industry (many developers hire him to find talent when they have job openings), he's heard a lot of talk from developers (AAA, AA, indie) at GDC that UE is now significantly ahead of any other engine on the market WRT features as well as ease of implementation, and that many are switching to UE5 or seriously considering dropping plans to implement their own engine, continue iterating on their own engine, or use a competing engine (like Unity).

Basically, until another engine can show that it achieves similar visuals with as little implementation effort (relatively speaking) and better resource usage than UE5, we're going to see a rather large proliferation of UE5-based games over the current console generation.

Regards,
SB
 
Wrt UE5 I feel that it's also a bit early to call doom and gloom on all other engines prior to actually seeing anything remotely close to a shippable game being made on UE5.
Realities of visuals in such games can be quite a bit different from what was shown in UE5 tech demos so far, to a point where UE5's advantage won't be as obvious as people seem to think based on what was shown in the tech demos.
Development backend and availability of assets is a different story of course.
 
Wrt UE5 I feel that it's also a bit early to call doom and gloom on all other engines prior to actually seeing anything remotely close to a shippable game being made on UE5.
Realities of visuals in such games can be quite a bit different from what was shown in UE5 tech demos so far, to a point where UE5's advantage won't be as obvious as people seem to think based on what was shown in the tech demos.
Development backend and availability of assets is a different story of course.

For consumers, this is certainly true. But we've already seen some developers announce that they are dropping their proprietary internal engines in favor of switching to UE5. It may or may not be just the tip of the iceberg, but at least the talk among developers at GDC reflected a growing desire to switch away from other engines to UE5.

Obviously it's far more likely for indie developers to do this as UE5 allows them to make games that are closer to the visuals of AAA games with far less effort than at any time in the past.

For AAA developers it's all about how much effort they would need to bring their internal engines up to UE5's quality along with improvements to their tools to match what is available for UE5.

Regards,
SB
 
For AAA developers it's all about how much effort they would need to bring their internal engines up to UE5's quality along with improvements to their tools to match what is available for UE5.
Well that's the point - we don't really know how much effort it would take. In many areas this effort likely won't need to be huge and in some cases tools may be just as good already.
Who have we seen so far who dropped their own tech in favor of UE5? CDPR? AFAIU they plan to keep developing their own RED Engine; it's just that building a new team for the new Witcher games is easier when using UE.
Crystal Dynamics? It's unclear so far if it's a "switch" or they are doing something similar to CDPR and using UE5 to build up their new Austin studio.
Anyone else?
 
The same doom-and-gloom threats to other engines get thrown around every time there's a major update to Unreal Engine. So far none of them have come true.
I obviously don't have any data, but my feeling is that this past generation the usage of UE(4) in AAA projects has actually been smaller than it was with UE(3) in the PS3/360 era.
So if UE5 wins some AAA projects back from in-house or other engines, it will be more of a return to how the AAA scene was in PS3/360 days than a complete replacement of proprietary tech or engines like Unity.
 