Digital Foundry Article Technical Discussion [2023]

Mesh shaders are basically vertex shaders that aren’t constrained by fixed input formats. Should be super easy to add to an engine.
You have to split your mesh up into clusters/meshlets to get any benefit. Anything that affects the production/tools pipe is not something that can be trivially added. I'm sure more folks will use it over time, but it's not low hanging fruit compared to other stuff.
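For a sense of what that tooling work involves, here's a minimal sketch of a greedy meshlet builder in Python. The 64-vertex / 126-triangle limits are common defaults (e.g. in NVIDIA's mesh shader samples), but the exact budgets and any spatial-locality optimisation are engine-specific assumptions here:

```python
MAX_VERTS = 64   # common meshlet budgets; engines tune these
MAX_TRIS = 126

def build_meshlets(indices):
    """indices: flat index buffer, 3 entries per triangle."""
    meshlets = []
    verts, tris = {}, []              # global->local vertex remap, local tris
    for i in range(0, len(indices), 3):
        tri = indices[i:i + 3]
        new_verts = [v for v in dict.fromkeys(tri) if v not in verts]
        # Start a new meshlet if this triangle would blow either budget.
        if len(verts) + len(new_verts) > MAX_VERTS or len(tris) == MAX_TRIS:
            meshlets.append({"vertices": list(verts), "triangles": tris})
            verts, tris = {}, []
            new_verts = list(dict.fromkeys(tri))
        for v in new_verts:
            verts[v] = len(verts)     # assign meshlet-local index
        tris.append([verts[v] for v in tri])
    if tris:
        meshlets.append({"vertices": list(verts), "triangles": tris})
    return meshlets
```

Each meshlet then maps to one mesh-shader workgroup. Real pipelines typically bake this offline (meshoptimizer does a far smarter job), which is exactly why it touches the production/tools side rather than just the renderer.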
 

It must be easier than Nanite and Nanite already exists :)
 
Wait. Mesh shaders must always write directly to the rasterizers? You can’t save the geometry output from mesh shaders?

Nvm, I already see the problem. If you cull back-face triangles and write that to the BVH, it becomes useless for RT, since you can no longer have correct reflections or bounce GI.
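A toy Möller-Trumbore sketch (plain Python, illustrative only) shows the problem: a reflection or GI ray is free to approach a surface from behind, and if the geometry or intersector rejects backfaces, the hit is simply missed:

```python
# Toy Moller-Trumbore intersector with an optional backface-cull switch,
# to show why geometry stripped of backfaces breaks secondary rays.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def intersect(orig, dirn, v0, v1, v2, cull_backface):
    """Returns hit distance t, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(dirn, e2)
    det = dot(e1, pvec)
    if cull_backface and det < 1e-8:
        return None                 # triangle faces away from the ray: culled
    if abs(det) < 1e-8:
        return None                 # ray parallel to triangle plane
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) / det
    if u < 0 or u > 1:
        return None
    qvec = cross(tvec, e1)
    v = dot(dirn, qvec) / det
    if v < 0 or u + v > 1:
        return None
    return dot(e2, qvec) / det

# CCW triangle in the z=0 plane, front face pointing toward +z.
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
# A bounce ray that approaches the surface from behind (-z side):
print(intersect((0.2, 0.2, -1), (0, 0, 1), *tri, cull_backface=False))  # 1.0: hit
print(intersect((0.2, 0.2, -1), (0, 0, 1), *tri, cull_backface=True))   # None: missed
```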

I'm the least qualified person to say this, but I believe that ideally amplification shaders send shit to mesh shaders, which send info to the rasteriser, all without doing multiple sheep-herding jobs in VRAM. Could be wrong of course
 
Is DF planning to cover the recent mobile Resident Evil port? I'm quite interested in their opinions.
I did some primitive testing, but the iPhone version doesn't seem to support fps readings. To my eyes, it cannot maintain a stable 30fps even at the lowest resolution with MetalFX upscaling in Performance mode. There are a lot of hiccups when wandering through the first indoor area with the baby. Tbh this doesn't look well optimized… or maybe my iPhone is overheating, but then again they should account for that anyway.
Yes, I am looking at it. You are on the right track here sadly...
 
I'm the least qualified person to say this, but I believe that ideally amplification shaders send shit to mesh shaders, which send info to the rasteriser, all without doing multiple sheep-herding jobs in VRAM. Could be wrong of course
Yea that is what it does. It has hardware paths to ensure data is kept in cache instead of needing to write results back to VRAM.
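As a rough illustration of that division of labour (Python standing in for the HLSL amplification stage; the normal-cone fields and their sign convention are illustrative assumptions, not any specific engine's):

```python
# Hypothetical amplification-stage pass, one decision per meshlet. On real
# hardware this runs as an amplification/task shader and the surviving list
# feeds the mesh shader dispatch directly, staying on-chip rather than
# round-tripping through VRAM.
def dispatch_meshlets(meshlets, view_dir):
    survivors = []
    for m in meshlets:
        # Normal-cone test: if the whole cluster faces away from the viewer,
        # skip it before any per-vertex work runs. The cone fields and the
        # sign convention here are assumptions for illustration.
        facing = sum(a * b for a, b in zip(m["cone_axis"], view_dir))
        if facing > m["cone_cutoff"]:
            continue
        survivors.append(m)
    return survivors  # analogous to DispatchMesh(len(survivors), 1, 1) in D3D12
```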
 
Really wish Remedy had dumped the game's GI system in favor of a full-scale path-traced GI solution instead of this mixed hack (i.e., going the full Cyberpunk/Metro route).

Maybe Nvidia will partner with Remedy, as they did with CD Projekt, on developing a full PT solution. Actually, I wouldn't be surprised if Nvidia and Remedy are doing this already.
 
I finished watching the video and yes, he mentions it at the end. A photo mode would be very helpful in the game. The camera is very far away from all the objects.

To return to lighting once again: I already said after a short time of playing that the lighting in Alan Wake 2 is still sometimes inconsistent. Cyberpunk 2077 looks more like path tracing; Alan Wake 2 looks more like a heavy ray tracing game to me. In Cyberpunk 2077, the standard GI was also left enabled when Psycho ray tracing was used.

As mentioned in the video, the mixed path tracing may reduce RT noise and ghosting, whereas Cyberpunk 2077 has more problems with it.
 
Alan Wake 2 suffers from the same lighting issue as Dying Light 2, in that it varies massively.

There are certain times within Dying Light 2's time-of-day cycle where it's the best-looking game around, and other times where it's oversaturated (artistic choice?) and really doesn't look good.

Alan Wake 2 at certain times (typically when it's dark or getting darker) is the best-looking game around, and at other times it doesn't look like a game with RT.
 

Timur has previously explored AMD's NGG pipeline. NGG actually has two shader stages: surface shaders and primitive shaders. The surface shader is only used when tessellation is active.

Timur calls the surface shaders uninteresting, but also said he was unaware of how they work differently from the old pipeline.

He also mentioned that when he benchmarked the new pipeline, it offered no performance gains over the traditional one. It was only after he employed "shader culling" (or "NGG culling") that gains were made, with the caveat that the improvements were limited to games that push a lot of triangles, where the old fixed-function culling hardware was a bottleneck and there was no in-app solution to deal with it.

He actually got shader culling to work on RDNA1 but it offered no gains over the old method.
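The culling itself is conceptually simple. A Python sketch of the standard signed-area back-face test (illustrative only, not Timur's actual RADV code):

```python
# Signed-area back-face test in post-projection 2D, the kind of check NGG
# "shader culling" performs in the geometry stage so the fixed-function
# culling hardware never sees the rejected triangle. Sketch only; RADV's
# real implementation adds frustum and small-primitive tests and differs
# in detail.
def backfacing(p0, p1, p2):
    # p* are (x, y) screen positions after the perspective divide.
    area2 = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return area2 <= 0  # CCW-front convention; zero area is degenerate anyway
```

The win is purely one of throughput: doing this in the geometry stage means rejected triangles never queue up at the fixed-function hardware, which is why it only pays off in triangle-heavy games.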
 
A 1.7 GB/s SSD was tested in Spider-Man 2 on PS5, and it works fine.

Very interested in PC tests once it's ported! It shows how efficient optimised asset streaming and engines are, which once again points to the PS5's SSD being over-engineered: a far slower, cheaper SSD would have sufficed. I wonder if we'll ever see a game max it out such that a slower SSD would fail?
 
On one hand it is over-engineered. On the other, I wonder if all that headroom can be taken advantage of in much more unique and interesting ways.
 
And what would it take to actually do that? What's the limiting factor such that < 3 GB/s is still ample? I dare say efficient streaming engines render the peak performance moot. It's kind of like the SSD choice was made to solve a different software paradigm, but newer engines leave it redundant. Similarly, more RAM isn't needed if you can stream fast enough.

Load and operation times are notably improved, though. Maybe that's its only benefit in the end?!
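Some back-of-envelope arithmetic suggests why a few GB/s goes a long way once streaming is demand-driven. Every figure below is a made-up but plausible assumption, not a measured Spider-Man 2 number:

```python
# Rough streaming-budget arithmetic; all figures are assumptions
# for illustration, not measured Spider-Man 2 numbers.
traversal_speed_m_s = 80    # worst-case web-swinging / wing-gliding speed
tile_size_m         = 100   # streaming cell edge length
tile_payload_gb     = 0.4   # compressed assets newly required per cell

tiles_per_second = traversal_speed_m_s / tile_size_m
required_gb_s    = tiles_per_second * tile_payload_gb
print(f"required: {required_gb_s:.2f} GB/s")  # 0.32 GB/s

# Even tripling the payload or the speed stays under ~3 GB/s: a demand-driven
# streamer only needs the per-second *delta*, not the whole working set.
```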
 
On one hand it is over-engineered. On the other, I wonder if all that headroom can be taken advantage of in much more unique and interesting ways.
I'm curious if the Slim has the same raw performance as the original model. Looking at the early teardowns, it looks like it's only got 2 chips for storage. Perhaps Sony engineered it some more to be less engineered.
 
Cutting bandwidth on the Slim would be very telling. It would be very apparent in load and copy times, though. Easily benchmarked.
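Easily indeed; even something as crude as this Python sketch would show a bandwidth cut (it assumes a multi-GB test file already on the drive, and OS caching will skew repeat runs):

```python
import time

# Crude sequential-read benchmark: stream a large existing file in 4 MiB
# chunks and report throughput. Use a file bigger than RAM (or flush the OS
# page cache between runs) or caching will inflate the numbers.
CHUNK = 4 * 1024 * 1024
PATH = "testfile.bin"  # hypothetical multi-GB file on the drive under test

total = 0
start = time.perf_counter()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"{total / elapsed / 1e9:.2f} GB/s over {total / 1e9:.1f} GB")
```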
 