Digital Foundry Article Technical Discussion [2023]

They have FSR2 sharpening off on the XSX according to that, which I guess explains the IQ difference even though the internal res is exactly the same. According to that it's bugged/produces artefacts on XSX? IMO, FSR2 is kind of oversharpened on PC by default.
I think it's bugged and disabled on One X, which seems to be the base profile for XSX. The XSX overrides don't enable it though, so it's disabled on One X and that setting carries over to XSX.
The other options seem to use identical values where they appear in both configs (PS5/XSX).
The Xbox config has more options whose values on PS5 are unknown.
Hard to draw any conclusions from it.
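
For anyone who hasn't dug into how UE layers these: a minimal sketch of the device-profile inheritance being described, assuming the FSR2 plugin's cvar names (the sections and values here are illustrative, not pulled from the actual game files):

[XboxOneX DeviceProfile]
; base profile: sharpening off (or bugged and left disabled)
+CVars=r.FidelityFX.FSR2.Sharpness=0.0

[XSX DeviceProfile]
BaseProfileName=XboxOneX
; overrides other settings, but nothing touches Sharpness,
; so the 0.0 from the base profile carries over
+CVars=r.FidelityFX.FSR2.QualityMode=1

If the PS5 config sets a non-zero sharpness while the Xbox chain never re-enables it, you'd get exactly the IQ split described above despite identical internal resolution.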
 
This is not what I'd call an indie developer. They spent 5 years on this with over 100 people.
They grew to 100 over the course of those 5 years, though. The average team size across that period was likely much lower.
 
 

So based on that video, the PS5 is performing roughly in line with a 2080, but the R5 3600 CPU looks to be falling behind a bit.

So pretty much in line with what you would expect from the hardware.

Also interesting to note that even on PC, FSR is heavily sharpened, making it appear sharper than DLSS. It seems reasonably conclusive that this is the reason for the PS5's greater sharpness over the XSX, given the ini shows no FSR sharpening on that console.
 
Oh wow, that's pretty bad. Looking forward to Alex's coverage because I couldn't pick out a single visual feature in that video that justifies the performance.
The game started on UE4, so I bet a lot of the art wasn't built with Nanite or Lumen in mind, which explains a lot.
 
Doesn't that make the assets less heavy? I don't get how it can become a problem; I see it more as a benefit.

I would think the issue is more that the art assets/direction weren't designed with Nanite and Lumen in mind. You'd therefore get the performance hit from those features on the backend without leveraging them for the most visual gain.

There's the scaling issue also.
 
Doesn't that make the assets less heavy? I don't get how it can become a problem; I see it more as a benefit.
They are paying the cost of the Nanite geometry system without the visual benefit of art that actually looks high-geometry. It seems like Nanite is being relegated to just smoothing out edges and eliminating LOD transitions, like in those early tessellation demos.
 
They are paying the cost of the Nanite geometry system without the visual benefit of art that actually looks high-geometry. It seems like Nanite is being relegated to just smoothing out edges and eliminating LOD transitions, like in those early tessellation demos.

One thing I'd be interested in seeing is how memory use compares with Nanite vs. a traditional LOD system. You have more data per model (more vertices), but you also don't have to store many LODs. I don't remember how texturing works either; I think it's some form of virtual texturing. What I'm getting at is: at what point do they run out of usable memory if they're trying to dial all of the meshes to 10/10?
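
Rough back-of-envelope, with every number assumed purely for illustration: in a traditional chain where each LOD has ~1/4 the triangles of the one above, the full chain costs only ~1.33x LOD0 (1 + 1/4 + 1/16 + ... ≈ 4/3), so a 100k-triangle LOD0 is ~133k triangles of storage total. Nanite keeps the full source mesh instead; if the ZBrush-derived asset is, say, 1.5M triangles, that's roughly 11x the raw geometry data. What's supposed to claw that back is Nanite's compressed storage format plus cluster streaming, where only the detail actually visible is resident, so where the crossover lands depends on those two factors.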
 
They are paying the cost of the Nanite geometry system without the visual benefit of art that actually looks high-geometry. It seems like Nanite is being relegated to just smoothing out edges and eliminating LOD transitions, like in those early tessellation demos.

From DF interview: “Originally there was a high poly model done in ZBrush, which was baked down into the classic texture and material to get the shape of it while maintaining a low poly count. But when we switched over to Nanite, suddenly we were able to just bump back to the high poly assets.”

I'm not sure what happened, but the assets certainly don't look high poly.
 
One thing I'd be interested in seeing is how memory use compares with Nanite vs. a traditional LOD system. You have more data per model (more vertices), but you also don't have to store many LODs. I don't remember how texturing works either; I think it's some form of virtual texturing. What I'm getting at is: at what point do they run out of usable memory if they're trying to dial all of the meshes to 10/10?

One of the early Epic presentations put the Nanite memory pool at 750MB. They didn't mention what it was for Aveum.

Nanite forces virtual texturing. They mentioned this caused some issues with the fixed pool sizes on consoles.

"Nanite does a good job of using the memory it has available, but the exception to that is that virtual texture pools in UE cannot be resized - they have to be initialised at engine startup and cannot be touched again, [which provides] fully allocated contiguous memory which is wonderful from a performance standpoint but [you can have problems where, for example] there's a goblet way off in the distance, two pixels, and it needs one piece of one texture [from a 500MB pool allocation], and you don't have any of that back until the texture goes away. PC doesn't care [if you run out of memory]; worst case, it goes into virtual memory. Console goes "I don't have virtual memory, I'm done." And it won't crash, but it will cause substantial issues. This caused what was internally known as the infamous landscape bug, where you would just walk into certain parts of the game and it would like someone painted an anime landscape on the ground, because it couldn't allocate for the virtual texture pool."
 