Digital Foundry Article Technical Discussion [2021]

Status
Not open for further replies.
Maybe we should start embracing natural light, as some cinematic approaches do (Dogme 95, etc.). I understand that sometimes a certain visual (or even sound) vision is what the director wants to achieve, but I'm a bit tired of this dissonance between real life and how we think real life MUST look and sound in movies, series, or even static pictures.

No, we should aim for what we think actually looks good rather than what's realistic. That may lead to results similar to cinema, since people in that field also know that's the right approach.
 
I wonder if adding an invisible light source is more expensive than using a surface to bounce light, in the case where you're already doing dynamic global illumination.
At least it'll give DF another category of things to check for: fake light sources, to see what's going on.
 
Great video. As posted in the Unreal thread, I already tested the persistence of the AI earlier when I found one stuck against a lamppost. That should be easy to fix, but it also suggests their movement patterns are pretty simplistic at the moment. They clearly walk predetermined and not always sensible paths; in my video, almost anyone rounding that corner would have taken the obvious shortcut.

But if they still have Epic's ear, I would love to hear it confirmed that even though the AI agents may be persistent, they are swapped between different models.

 
Those lighting tricks are mostly relevant for cut scenes where the devs are in control of the camera and want to touch up the scene just like in a movie. For regular gameplay the fake lights are placed all over the place for a different reason - to fake GI. That’s no longer necessary. There will still be cases where lights are placed for dramatic effect but just like in real life it should require far less of them.

Not all games want to have non-interactive cut-scenes, however. So tricks like these or falling back to non-RT lighting will still be required. Metro: Exodus, for example, lacked these lighting tricks and it really shows in the flat and uninteresting scenes where characters are talking compared to the non-RT version where those scenes are significantly more engaging and impactful due to how the character's faces are lit.

In regular gameplay scenarios you can likely get away with just "natural" lighting, but anytime there's a cut-scene (interactive or non-interactive) you're going to need them. And I'd argue there are some game genres where you're going to require lighting tricks in gameplay as well (horror genre especially).

Regards,
SB
 

You went quite far; could this be a different agent, after the first one got through the pole during low-resolution simulation? (reduced time step and/or world detail)
 
Or it could have been bumped by someone else, which pushed it away from the pole. It's obvious none of them see the pole, so I assume it must not have been baked into the collision geometry. Simple fix.
I have the same issue in my game: a person tries to walk to A but something is blocking the path, so I check whether it's trying to walk to A but has moved less than, say, 0.1m in the last second. If so, it reassesses whether it really wants to go to A and chooses somewhere else.
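The stuck-detection trick described above can be sketched roughly like this. This is a hypothetical minimal version, not the poster's actual code; the 0.1m/1s thresholds are the ones mentioned in the post.

```python
import math

# Assumed thresholds from the post: if an agent heading for a goal has
# moved less than ~0.1 m in the last second, treat it as blocked.
STUCK_DISTANCE = 0.1   # metres
STUCK_WINDOW = 1.0     # seconds

class Agent:
    def __init__(self, x, y, goal):
        self.x, self.y = x, y
        self.goal = goal
        self.last_x, self.last_y = x, y
        self.timer = 0.0

    def update(self, dt, pick_new_goal):
        """Call once per tick; pick_new_goal is a fallback goal chooser."""
        self.timer += dt
        if self.timer >= STUCK_WINDOW:
            moved = math.hypot(self.x - self.last_x, self.y - self.last_y)
            if moved < STUCK_DISTANCE:
                # Barely moved in the last second: reassess and go elsewhere.
                self.goal = pick_new_goal(self)
            self.last_x, self.last_y = self.x, self.y
            self.timer = 0.0
```

The nice property of this approach is that it needs no knowledge of *why* the agent is stuck (missing collision geometry, another agent in the way, a moving object), which is exactly the point made about not trusting the world builder.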

The more videos I see of this demo, the better it looks. I've always said lighting/shading is 50% of an image's quality; I take that back, it's probably closer to 80%. Is it possible in the demo to greatly reduce polygon count, e.g. characters at ~300 triangles (i.e. circa year 2000), texture quality down to a maximum of 256x256, etc., but keep the same lighting?

edit: you would think they would have some mechanism to stop the AIs getting trapped like that. I guess maybe they trust the world builders to construct a perfect world (though what about moving objects?). With me, since I know the worldbuilder (also me) is lazy, there will be errors, so I have to work off the premise that something will go wrong :LOL:
 
I'm more inclined to think they just had to finish it there for the demo and this aspect is still WIP. ;)
 
When content is letterboxed, the borders aren't simply black overlays as is often the case - Epic is concentrating GPU power into the visible area, meaning that we're actually looking at native 1066p-1200p rendering.

In the most intense action, 1080p or perhaps lower seems to be in effect, which really pushes the TSR system hard. At the entry level, it's incredible to see Xbox Series S deliver this at all, but it does so fairly effectively, albeit with some very chunky artefacts. Here, the reconstruction target is 1080p, but 1555x648 looks to be the maximum native rendering resolution in letterboxed content, with some pixel counts significantly below 533p too.
So both PS5 and XSX can render up to four times more pixels than XSS. That's an enormous gap, and they are doing so with higher settings.
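A back-of-the-envelope check of that pixel gap, using the figures quoted above. The ~2.4:1 letterbox aspect ratio is an assumption on my part (it matches 1555x648), so these ratios are approximate:

```python
# Rough pixel-count comparison. Resolutions quoted in the article:
# PS5/XSX render 1066p-1200p natively, XSS tops out at 1555x648 and
# can drop below 533p. Width is assumed to be ~2.4x height (letterbox).
def pixels(width, height):
    return width * height

xss_max = pixels(1555, 648)                # ~1.01 M pixels
ps5_max = pixels(round(2.4 * 1200), 1200)  # ~3.46 M pixels
xss_low = pixels(round(2.4 * 533), 533)    # ~0.68 M pixels

print(ps5_max / xss_max)  # ~3.4x at each console's best case
print(ps5_max / xss_low)  # ~5.1x comparing PS5 max against XSS lows
```

So "up to four times more pixels" is plausible as a rough average of best-case and worst-case XSS output, under these assumptions.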

Objectively, XSS performs awfully here, even more so if we compare against PS5's specs. Usually the scaling is better with lower-spec GPUs: often you'll only get around 50-75% more performance from a GPU that's 100% more powerful. But here the XSS scales incredibly badly. It shows this is a very badly designed console for next-gen engines with heavy RT utilization.

And this sentence from Digital Foundry ("At the entry level, it's incredible to see Xbox Series S deliver this at all but it does so fairly effectively") is really out of place when objectively this is probably the worst scaling ever. Usually we see 2.5x-3x less performance compared to XSX, but here we have 4x worse (at lower settings). It's incredibly bad performance from XSS here.
 
Honestly, even if it looks worse and lower-res on XSS, the fact that it works at all in a very playable form still amazes me for the price and size of this little machine!
Yeah, I think that is the point: it is impressive that full diffuse GI + RT reflections is running at all on the XSS. RDNA2 is not an RT beast, as we know, yet here we are getting a range of RT effects that are extremely heavy on a machine with little RAM and a weak GPU. Given what is visually happening, XSS is doing a good enough job for an entry-level machine.
 
It's one way to look at it, but in performance per dollar, PS5 and XSX are far superior.
 

There's more to performance scaling than just resolution scaling. Not everything scales linearly with resolution; some things scale little, if at all.

A compute-generated BVH tree might, for example, be only loosely tied to resolution and represent something of a fixed cost. A particle system with complex patterns/behaviours (mimicking animals, for example) might also not scale linearly with resolution. Potentially the same with RT GI. Compute-based culling might be another example, and compute-based streaming workloads yet another (the demo appears to take a frame-rate hit during fast traversal).

The actual performance of the XSS might be fantastic; it's just that this isn't reflected in the resolution scaling.
 

00:00:00 Introductions
00:00:33 The Game Awards reaction
00:21:18 Horizon Zero Dawn gets DLSS
00:32:22 PlayStation rumoured to make new subscription service
00:44:23 Bulk Slash English translation releases
00:46:08 DF Content Discussion: Halo Infinite and the physical disc situation
01:03:00 DF Content Discussion: Christmas content
01:05:56 DF Supporter Q1: Tom's videos say "XBOX ONE" while everyone else goes with "XBOX ONE S". Is it fair to assume Tom's the only one using OG Durango?
01:07:12 DF Supporter Q2: What is the general level of support for surround sound setups in modern games?
01:11:09 DF Supporter Q3: How useful is dynamic resolution scaling?
01:14:16 DF Supporter Q4: Why do devs tend to limit dynamic resolution bounds so much?
01:20:19 DF Supporter Q5: Do you foresee more developers making use of or at the very least allowing users to play their games at above 60fps in cases where perhaps 120fps is not feasible?
01:23:23 DF Supporter Q6: Do you think the poor response times of commercial LCD televisions at the time helped standardise 30fps during the Xbox 360 and PS3 era?
01:28:37 DF Supporter Q7: What is everyone's favourite ever controller?

DF Article @ https://www.eurogamer.net/articles/...he-game-awards-horizon-pc-dlss-sony-game-pass
 