Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

PS4 should be perfectly fine doing it too. 2011 GPUs should as well (Fermi?). It's a question of performance, and that's where hardware acceleration helps. If muscle deformation/DRS is enough then yeah, though I think AI/ML is able to offer much more than that.



Right, I thought it was Fermi. Still, that's over a decade ago too.

Yes. After that I expect to see ML used extensively in animation, VFX, and post-processing.

 
Sampler Feedback requires invasive code changes, and it's a common example to use pixel shaders to compute the lighting pass because you need access to derivatives, which have traditionally only been offered in pixel shaders ... (detailed post)
Traditionally - so is this what is now available in SM6.6?
 
Traditionally - so is this what is now available in SM6.6?
I don't think many developers are just going to drop older hardware just to use one invasive feature. What even is the quantifiable benefit of using sampler feedback that enables content on a renderer like Nanite? With sampler feedback the idea is to implement a decoupled shading scheme so that you can rasterize your scene at a higher rate while reusing shading. With Nanite, there's a path where you can bypass the hardware rasterizer depending on what content you're rendering ...

Is sampler feedback compelling to use in the absence of hardware-accelerated rasterization?
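
For anyone curious what the "invasive code changes" look like on the API side, here's a minimal C++ sketch of creating a D3D12 sampler feedback map (the MinMip flavor) and pairing it with the texture it tracks. It assumes an ID3D12Device8 and a descriptor slot already exist; the 128x128 mip region is an arbitrary choice for illustration, and descriptor heap setup, readback, and the shader-side WriteSamplerFeedback call are all omitted.

```cpp
#include <d3d12.h>

// Sketch only: create an opaque MinMip feedback map paired with `texture`,
// then expose it as a UAV so shaders can record which mips/regions of the
// texture were actually sampled.
HRESULT CreateFeedbackMap(ID3D12Device8* device,
                          ID3D12Resource* texture,  // texture being tracked
                          UINT width, UINT height,  // dimensions of `texture`
                          D3D12_CPU_DESCRIPTOR_HANDLE uavSlot,
                          ID3D12Resource** feedbackOut)
{
    D3D12_RESOURCE_DESC1 desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = width;
    desc.Height           = height;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_SAMPLER_FEEDBACK_MIN_MIP_OPAQUE;
    desc.SampleDesc       = { 1, 0 };
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;
    desc.SamplerFeedbackMipRegion = { 128, 128, 1 }; // one feedback texel per 128x128 region (arbitrary)

    D3D12_HEAP_PROPERTIES heap = { D3D12_HEAP_TYPE_DEFAULT };
    HRESULT hr = device->CreateCommittedResource2(
        &heap, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_UNORDERED_ACCESS,
        nullptr, nullptr, IID_PPV_ARGS(feedbackOut));
    if (FAILED(hr)) return hr;

    // The UAV is what ties the feedback map to the texture it describes.
    device->CreateSamplerFeedbackUnorderedAccessView(texture, *feedbackOut, uavSlot);
    return S_OK;
}
```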
 
I don't think many developers are just going to drop older hardware just to use one invasive feature. What even is the quantifiable benefit of using sampler feedback that enables content on a renderer like Nanite? With sampler feedback the idea is to implement a decoupled shading scheme so that you can rasterize your scene at a higher rate while reusing shading. With Nanite, there's a path where you can bypass the hardware rasterizer depending on what content you're rendering ...

Is sampler feedback compelling to use in the absence of hardware-accelerated rasterization?
I think you misunderstood my post. It was a literal question.
It wasn't in any way talking about the negatives or positives of SFS.
But I take from your post that, yes, it's a newly available feature that was a limitation of the API in the past.
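
For reference, a minimal sketch of how a D3D12 app can query whether the runtime exposes SM 6.6 (the version that adds derivative intrinsics to compute shaders). Names are the stock d3d12.h ones; device creation is omitted.

```cpp
#include <d3d12.h>

// Sketch: ask the runtime/driver whether Shader Model 6.6 is available.
bool SupportsShaderModel66(ID3D12Device* device)
{
    // You pass in the highest model you know about; the runtime lowers it
    // to what is actually supported.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_6 };
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                           &sm, sizeof(sm))))
        return false; // runtime predates SM 6.6 entirely
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_6;
}
```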
 
DX12U does not seem to have much relevance for UE5. We have yet to see whether or not mesh shading for Nanite makes a big difference (if the r.nanite.meshshadingraster command in UE5 is indeed working correctly, then there's close to zero difference in performance).

Sampler Feedback and VRS are not supported either.

The comment from Epic that DX12U is the gold standard of graphics was a blatant lie. They did not announce anything in regard to DX12U and they keep ignoring it.

UE supports VRS, but it seems to be limited to XR, or what Epic calls Extended Reality (VR/AR), based projects. UE XR may default to a forward rendering mode (the XR docs go over forward shading but don't spell out whether it's the default for XR projects), which if true may allow for the use of VRS. For a non-XR project that for some other reason wants to use forward shading, VRS may be available. I don't know, because the XR and forward renderer documentation hasn't been updated for UE5.
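
For context, these are the DX12U caps in question. A minimal C++ sketch of querying the mesh shader, sampler feedback, and VRS tiers on a device (stock d3d12.h names, error handling reduced to the essentials):

```cpp
#include <cstdio>
#include <d3d12.h>

// Sketch: query the three DX12U feature tiers discussed above.
void PrintDx12UltimateCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                              &opts7, sizeof(opts7))))
    {
        // e.g. D3D12_MESH_SHADER_TIER_1, D3D12_SAMPLER_FEEDBACK_TIER_0_9 / _1_0
        std::printf("Mesh shader tier:      %d\n", opts7.MeshShaderTier);
        std::printf("Sampler feedback tier: %d\n", opts7.SamplerFeedbackTier);
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                              &opts6, sizeof(opts6))))
    {
        // Tier 1 = per-draw rates only; Tier 2 adds per-primitive rates
        // and the shading-rate image.
        std::printf("VRS tier:              %d\n", opts6.VariableShadingRateTier);
    }
}
```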
 
No, PS5 doesn't have mesh shaders, VRS, SFS, or the INT4/INT8 ML improvements, but it has an equivalent of mesh shaders in primitive shaders, flexible scale rasterization for VR as a VRS-like feature, and the ID buffer; it has no feature equivalent to the INT4/INT8 ML improvements of DX12U.


SFS is not supported on PS5... and also not needed. So far not a single game has used SFS, and there is no expectation that any ever will.

VRS is not required on PS5, and the PS5 solution seems superior. The changes to the pipeline made for the Geometry Engine on the PS5 allow culling much earlier in the pipeline. As Matt Hargret explained, "VRS doesn't hold a candle to the Geometry Engine's capabilities, since it is shading at half/quarter rate triangles that the PS5 doesn't even need to draw."
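
To make the comparison concrete: per-draw (Tier 1) VRS on the D3D12 side is just this kind of state change, and the point above is that the triangles are still fully rasterized, only shaded coarsely. A sketch with stock d3d12.h names; the draws themselves are omitted:

```cpp
#include <d3d12.h>

// Sketch: shade at 2x2 rate (one pixel-shader invocation per 2x2 quad,
// i.e. quarter shading rate) for draws that tolerate coarse shading,
// then restore full rate.
void DrawAtQuarterShadingRate(ID3D12GraphicsCommandList5* cmdList)
{
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // vs. shading-rate image
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);

    // ... record draws here: the geometry is rasterized as usual, but
    // pixel shading runs once per 2x2 block ...

    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr); // back to full rate
}
```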

As for mesh shaders, I'm not sure what's implemented on Xbox, but I read on a Korean forum a tech guy claiming that mesh shaders on the Xbox are emulated. According to him, primitive shaders are native to RDNA2, and what Xbox is doing is no different from what AMD does in its drivers to allow its RDNA2 cards to run 3DMark's mesh shader test.

As for INT8 and INT4, I find it strange that PS5 does not support them, since the RDNA whitepaper mentions support for both since RDNA 1.1.
 
SFS is not supported on PS5... and also not needed. So far not a single game has used SFS, and there is no expectation that any ever will.
Not needed? Well, an awful lot of niceties are not needed, but they make the systems better overall! If SFS were present in PS5, it'd get more use on both consoles and both would benefit. By its absence, XBSX ends up with a capability that'll be sidelined, as so typically happens with niche hardware features.
 
Not needed? Well, an awful lot of niceties are not needed, but they make the systems better overall! If SFS were present in PS5, it'd get more use on both consoles and both would benefit. By its absence, XBSX ends up with a capability that'll be sidelined, as so typically happens with niche hardware features.
Well, it happens eventually to many hardware features regardless, if we take common examples like geometry shaders, tessellation, MSAA, etc ...

Is a system really comparatively better off if features end up unused ultimately due to the lack of their own merits rather than the lack of support from competitors? Is this what we call the placebo effect?
 
Not needed? Well, an awful lot of niceties are not needed, but they make the systems better overall! If SFS were present in PS5, it'd get more use on both consoles and both would benefit. By its absence, XBSX ends up with a capability that'll be sidelined, as so typically happens with niche hardware features.
I agree. Niche specs are a large obstacle to mass use, especially when they require large engine changes.
If SFS were common to both consoles, I believe we would see it more widely used.
 
I agree. Niche specs are a large obstacle to mass use, especially when they require large engine changes.
If SFS were common to both consoles, I believe we would see it more widely used.
1. PS5 has SF, just not SFS's additional tweaks, I believe.
2. It's still very much a cross-gen period; very few games would have truly benefited from investing in upgrading to SFS.
3. Too early to say how much use it will or won't get, imo.

So I personally don't think we would have seen it used much, even if it were common across both consoles.
To be clear, I'm not saying it will definitely get much use; I'm saying that imo it's too early to say.
 
UE5 titles likely won't use it, but it might be used with Xbox and PC titles like Forza or Fable running on a custom engine.
 
UE5 titles likely won't use it, but it might be used with Xbox and PC titles like Forza or Fable running on a custom engine.
Agree that if you want absolutely broad adoption, it needs to be in UE5.
But there are still other engines in use.
If it were included at the expense of something else, or of performance, then that would be a different issue.
If you're talking internal titles, you could expect to see it used in the ForzaTech, id Tech, etc. engines and whichever titles use them.

Also, I don't know enough about whether it definitely can or can't be used in UE5, but it would take MS to do it if it can, and to feed it back.

But I still stand by "too early to say" for the reasons given in my previous post, as I'm including 1st party, and we've not seen it used there yet either.
 
Hardware features are good, but they can be killed by the evolution of game renderers, or because they are not implemented everywhere, or are slow on some IHVs or older GPUs. MSAA was killed by the fact that most engines went deferred and console developers began to use post-process AA.

Geometry shaders were fast on Intel GPUs but slow on AMD and Nvidia GPUs. Tessellation was not flexible enough and was slow on AMD GPUs. Currently HW tessellation is fast enough on AMD GPUs, but adaptive tessellation is better done with compute shaders.

The visibility buffer will probably make pixel shading obsolete, along with any hardware feature tied to it.

But the mass murderer of hardware features is and will be the compute shader. Nanite is a problem for HW-RT shadows*, and flexible scale rasterization on PS5, HW-VRS, and sampler feedback for texture-space shading are not useful on Unreal Engine 5, because it is a software rasterizer.

*Proxy geometry is not an option for shadows
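
For readers who haven't seen one: the core of a software rasterizer like Nanite's micro-poly path is just the classic edge-function test, run per tiny triangle in a compute shader. Here's an illustrative CPU-side C++ sketch (not Epic's code); fixed-function hardware performs the same test, but its per-triangle setup cost dominates when triangles shrink to pixel size, which is why compute can win there.

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative only: edge-function rasterization of one small triangle
// into an ID buffer, the kind of loop a compute rasterizer runs per
// micro-triangle (Nanite additionally does a depth compare with atomics).
struct Vec2 { float x, y; };

// Signed area of (a, b, p): >= 0 when p lies to the left of edge a->b,
// so all three tests pass for points inside a counter-clockwise triangle.
static float EdgeFunction(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void RasterizeTriangle(Vec2 v0, Vec2 v1, Vec2 v2,
                       uint32_t* idBuffer, int width, int height,
                       uint32_t triangleId)
{
    // Screen-space bounding box, clamped to the target. For pixel-sized
    // triangles this box is only a few texels, so brute force is cheap.
    int minX = std::max(0,          (int)std::min({ v0.x, v1.x, v2.x }));
    int maxX = std::min(width - 1,  (int)std::max({ v0.x, v1.x, v2.x }));
    int minY = std::max(0,          (int)std::min({ v0.y, v1.y, v2.y }));
    int maxY = std::min(height - 1, (int)std::max({ v0.y, v1.y, v2.y }));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Vec2 p = { x + 0.5f, y + 0.5f }; // sample at the pixel center
            if (EdgeFunction(v0, v1, p) >= 0.0f &&
                EdgeFunction(v1, v2, p) >= 0.0f &&
                EdgeFunction(v2, v0, p) >= 0.0f)
                idBuffer[y * width + x] = triangleId; // covered: write the ID
        }
}
```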
 
As long as there is a 3D pipeline there will always be new features added there. If all we needed was compute, our GPUs would have dropped FF hardware by now, but there would have been a long, languishing drought of games for years while entire engines moved to compute only.

I love the idea of compute-only engines; it would certainly open the door to more creative ways to use GPU power. But compute shaders have been around a long time and no one seems interested in fully divesting away from 3D.
 
As long as there is a 3D pipeline there will always be new features added there. If all we needed was compute, our GPUs would have dropped FF hardware by now, but there would have been a long, languishing drought of games for years while entire engines moved to compute only.

I love the idea of compute-only engines; it would certainly open the door to more creative ways to use GPU power. But compute shaders have been around a long time and no one seems interested in fully divesting away from 3D.
Nanite is a gimmick. Unreal Engine 5 doesn't look better than Unreal Engine 4.
 
But the mass murderer of hardware features is and will be the compute shader. Nanite is a problem for HW-RT shadows*, and flexible scale rasterization on PS5, HW-VRS, and sampler feedback for texture-space shading are not useful on Unreal Engine 5, because it is a software rasterizer.

*Proxy geometry is not an option for shadows
Compute shaders are slow. And with raytracing, using HW-RT for one feature means every other one costs less. Lumen reflections in 5.0 look really bad. HW reflections are the way to go, and then you can add nVidia's DDGI and RTXDI, too. And with Lovelace there is support for SER, which improves the performance of HW-RT reflections.
 
Compute shaders are slow. And with raytracing, using HW-RT for one feature means every other one costs less. Lumen reflections in 5.0 look really bad. HW reflections are the way to go, and then you can add nVidia's DDGI and RTXDI, too. And with Lovelace there is support for SER, which improves the performance of HW-RT reflections.

Slow against what? Try to use the hardware rasterizer to do the same thing as Nanite and we will see which solution is faster. Where did I talk about RT reflections? I talked about RT shadows. And HW reflections may be better, but they are not physically accurate, because objects inside a reflection don't lose detail. At least this is only the environment; if Nanite worked for characters, it would have been much more visible.
 