Digital Foundry Article Technical Discussion [2020]

Craziest shit I've seen.
Gosh, I am going to take a spin (a LOT of times) in this game. I can hardly believe how good it looks; in fact, this video is better than the promotional videos I've seen.
Can't wait for the ironman mod.
what do you mean by that?
I did half my capture at 4K and the other half at 70% of 4K. It was not a stable 60 fps at Ultra settings; it was either CPU or GPU limited at times. I thought that for this game, showing the best settings for the preview was "better" than showing the most stable framerate.
The video looks superb at 1440p! Maybe that's the sweet spot, at least for my 1080; it should hold 60 fps at that resolution and high settings, which, judging from your words and the images showing what you meant, is imho the ideal setting.
 
Flying around the world in Flight Sim: no pop-in.

Go up a lift in Halo Infinite on PC: pop in.

Seriously, I'd buy a Halo: Pelican game at a high price. You're a Pelican pilot who doesn't die every time you drop someone off. That's it. That's the pitch.

Yes, strange; both games are supposed to scale? Flight Sim 2020 is on PC, so scaling should be important, not only to 2080-and-above users. Hell, the game is going to run on HDD systems too, since it's going to be on the Xone?
 
Yes, strange; both games are supposed to scale? Flight Sim 2020 is on PC, so scaling should be important, not only to 2080-and-above users. Hell, the game is going to run on HDD systems too, since it's going to be on the Xone?

Well, if you do it right and balance the immediate detail against the average detail... yes?

F.S. Engine is built around scaling, performance, and streaming.

Halo Infinite is built around Halo CE. Which is cool as fuck. But after 19 years of hacks? I can't even remember the questionable code I wrote yesterday :/ *

* I blocked it out psychologically. It's terrible.
 
Flying around the world in Flight Sim: no pop-in.

Go up a lift in Halo Infinite on PC: pop in.

Seriously, I'd buy a Halo: Pelican game at a high price. You're a Pelican pilot who doesn't die every time you drop someone off. That's it. That's the pitch.

I would just fly to the center of the air space in the Halo to see if my instruments would blow up.

Me: “Which way is up?”
Pelican’s instruments: “Fuck if I know!”
 
I would just fly to the center of the air space in the Halo to see if my instruments would blow up.

Me: “Which way is up?”
Pelican’s instruments: “Fuck if I know!”

That's proper sci-fi! Trying to use all kinds of different relative navigation methods and switching between them would be really fun.

I've just had an idea for the best big-budget, guaranteed-flop licensed game ever. A two-player co-op VR Pelican crew game. Drop off troops, use the ramp gun, bring a hog, try not to burn up in atmosphere, make emergency repairs, navigate in small spaces (aerial insertion, from underground) with systems knocked out. Most games lose money anyway. Please lose money on this, MS.
 
Flying around the world in Flight Sim: no pop-in.

Go up a lift in Halo Infinite on PC: pop in.

Seriously, I'd buy a Halo: Pelican game at a high price. You're a Pelican pilot who doesn't die every time you drop someone off. That's it. That's the pitch.
A Lucky 13 game in the Halo universe?
Certainly would be interesting.

Would be a nice render target to hit visually as well.
 
Something that I meant to ask about the DLSS video but forgot to:
We had a quick discussion about XSX ML upscaling, but what about Lockhart if it's 4 TF?
Would it be viable to do it from a base of 720p or, better still, 900p, and have enough to render a decent image?
 
Something that I meant to ask about the DLSS video but forgot to:
We had a quick discussion about XSX ML upscaling, but what about Lockhart if it's 4 TF?
Would it be viable to do it from a base of 720p or, better still, 900p, and have enough to render a decent image?

540p to 1080p seems to work really quite well for Nvidia.

First question would be "do MS have a comparable solution?" Nvidia have invested a ton in this, and while MS are also spending on ML tools, no-one seems to be showing anything like Nvidia at the moment. They really do seem to be out there on their own.

Second question is whether MS's INT8 performance is good enough, given that it directly takes away from other performance. I suppose you could schedule it for async compute during some low-usage times of the pipeline, but adding an extra stage would add to latency.

A final thought: what if you did a full-res depth and lighting pass, complete with an object ID and a direction vector stored in another buffer, then used that to up-res individual areas/sections of the screen based on what was there? You could train the machine to up-res individual objects while ignoring local lighting conditions.
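
Something like this, as a very rough sketch (the buffer names and the per-object model hook are all made up, and a real pass would live on the GPU; numpy just shows the idea):

```python
# Hypothetical per-object up-res driven by an object-ID buffer.
import numpy as np

def upscale_by_object(color_lr, object_id_lr, scale=2):
    """Up-res each object's pixels separately, ignoring local lighting.

    color_lr:     (H, W, 3) low-res lit colour buffer
    object_id_lr: (H, W) integer object-ID buffer from the depth/ID pass
    """
    h, w, _ = color_lr.shape
    out = np.zeros((h * scale, w * scale, 3), dtype=color_lr.dtype)

    for obj_id in np.unique(object_id_lr):
        mask = object_id_lr == obj_id
        # Placeholder: a per-object trained network would go here.
        # Nearest-neighbour repeat stands in so the sketch runs as-is.
        patch = color_lr * mask[..., None]
        upscaled = np.repeat(np.repeat(patch, scale, axis=0), scale, axis=1)
        mask_hr = np.repeat(np.repeat(mask, scale, axis=0), scale, axis=1)
        out[mask_hr] = upscaled[mask_hr]
    return out
```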
 
A Lucky 13 game in the Halo universe?
Certainly would be interesting.

Would be a nice render target to hit visually as well.

I'd forgotten about Lucky 13! Probably my favourite of those Love, Death & Robots shorts. But yeah, definitely about one Pelican and one set crew, lucky in the sense that this Pelican doesn't just blow up every time it's near a Spartan! :LOL:
 
Nvidia have invested a ton in this, and while MS are also spending on ML tools, no-one seems to be showing anything like Nvidia at the moment. They really do seem to be out there on their own.

What I'm very curious about is what NV's next architecture (Ampere) will pack. Their RT from 2018 seems comparable to RDNA2 in ray tracing (judging by the XSX and PS5 showcases). I guess that Ampere will only improve from there.
 
Second question is whether MS's INT8 performance is good enough, given that it directly takes away from other performance. I suppose you could schedule it for async compute during some low-usage times of the pipeline, but adding an extra stage would add to latency.
Part of the reason I said 720p or 900p was because it doesn't have the same level of tensor performance as RTX cards.
Would be nice to have an indication of whether it would have enough to realistically do it from 720p or lower.
Even if it was based on what is currently the best-case scenario of DLSS 2.0.

As you say, it would also be interesting to hear how motion vectors can be integrated into engines, and what else they can be used for, etc.
 
What I'm very curious about is what NV's next architecture (Ampere) will pack. Their RT from 2018 seems comparable to RDNA2 in ray tracing (judging by the XSX and PS5 showcases). I guess that Ampere will only improve from there.

I'm finding it hard to get direct comparisons of RT performance, as all we've got at the moment is the MS Minecraft demo, vs lots of stuff online with DLSS involved! I get the feeling that pure RT performance from RDNA2 at the XSX level is probably above a 2060 (maybe?), but I guess we'll find out whether Nvidia's RT cores have less impact on the rest of the GPU for hybrid rendering in due course.

Ampere will be very interesting, and I'm personally eyeing it up for my next GPU. Given the way DLSS is coming on, it's going to be hard to justify RDNA2 unless it's a lot cheaper.

Part of the reason I said 720p or 900p was because it doesn't have the same level of tensor performance as RTX cards.
Would be nice to have an indication of whether it would have enough to realistically do it from 720p or lower.
Even if it was based on what is currently the best-case scenario of DLSS 2.0.

As you say, it would also be interesting to hear how motion vectors can be integrated into engines, and what else they can be used for, etc.

It's really hard to know, isn't it. DF have an interesting DLSS graph showing that even though the 2080Ti has about double the INT8 performance of the 2060S, it's only about 50% faster at 1080p DLSS output from 540p (I think). As the base and output resolutions increase, more of the 2080Ti's Tensor performance seems to come into play. Maybe there's some fixed-function element getting in the way at lower resolutions.
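
One way to sanity-check that fixed-function theory is a toy cost model, time = fixed overhead + tensor work / INT8 throughput (the numbers below are illustrative, not measured):

```python
# Toy model: DLSS frame cost = fixed overhead + tensor work / INT8 throughput.
# Treat the 2060S throughput as 1.0 and the 2080Ti as ~2.0 (per the DF graph),
# and ask what fixed share makes the 2080Ti only ~1.5x faster overall.
fixed = 1.0          # arbitrary units of fixed per-frame overhead
work = 2.0 * fixed   # tensor work chosen so the ratio below lands on 1.5x

t_2060s = fixed + work / 1.0    # 3.0 units
t_2080ti = fixed + work / 2.0   # 2.0 units
print(t_2060s / t_2080ti)       # 1.5 -> matches the ~50% faster figure
```

So if the fixed part were about a third of the 2060S's DLSS time, you'd see exactly that pattern, with the gap widening as resolution (and thus tensor work) grows.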

A 2060S' Tensor cores should be about 7 times faster at INT8 than a hypothetical Lockhart at 4TF FP32, but that's pitting the entire MS GPU against just the Tensor cores in the Nvidia chip. But then again, at a base resolution (before MLSS) of 540p you'd probably have low utilisation of the 3D pipeline and be able to make good use of async compute to regain some of the utilisation lost to 540p rendering (huge pixels compared to polygons so inefficient for rasterisation and all that).
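
Back-of-envelope for that ~7x, using rough public rates (Turing tensor INT8 at ~16x the FP32 FMA rate, RDNA-style dp4a INT8 at ~4x; both figures approximate):

```python
# Approximate INT8 throughput comparison, all numbers rough.
fp32_2060s = 7.2                        # TFLOPS, RTX 2060 Super shader rate
int8_tensor_2060s = fp32_2060s * 16     # Turing tensor INT8 ~16x FP32

fp32_lockhart = 4.0                     # TFLOPS, hypothetical Lockhart
int8_lockhart = fp32_lockhart * 4       # dp4a-style INT8 ~4x FP32

print(int8_tensor_2060s / int8_lockhart)  # ~7.2x
```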

While we're on the speculation train, let's stay on for one more stop!

The 2060S in Death Stranding is giving DF figures of a 0.736 ms cost for DLSS. Taking this at face value (assuming it's not a separate stage with the full cost hidden), if half your Lockhart GPU time was taken up with ML upscaling, and the figures between the two are directly comparable (probably not), that'd be about 14 x 0.736 ms = ~10.3 ms. Or less than one third of a 30 fps frame. Would this be better than native 1080p or 900p with sharpening for a 30 fps game?

Errr ... maybe? (And it might let you get away with shockingly low res textures and less time lost to RT too...)
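
Spelling out the arithmetic above (the 14x being the ~7x tensor gap doubled, since only half the Lockhart GPU is handed over to upscaling; every number here is speculative):

```python
# Frame-budget maths from above, taken entirely at face value.
dlss_cost_2060s_ms = 0.736         # DF's Death Stranding figure
lockhart_slowdown = 7 * 2          # ~7x tensor gap, doubled for half the GPU

cost_ms = dlss_cost_2060s_ms * lockhart_slowdown   # ~10.3 ms
frame_30fps_ms = 1000 / 30                         # ~33.3 ms
print(cost_ms, cost_ms / frame_30fps_ms)           # ~10.3 ms, ~0.31 of a frame
```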
 
540p to 1080p seems to work really quite well for Nvidia.
I'd love to see 540p -> 4K. I think this was shown as a screenshot in the DLSS thread, but I'm not sure if one can set this in the options menu.

IMO, using high sharpening is a bit of a cheat to make an impression. It explains most of the extra detail over previous versions, but personally I don't like sharpening, even if it does not cause crawling.

(attached screenshot)
The most interesting artifact is the background moving left and right, following the arm animation in scenes like this.
As with SSAO and SSR, third-person player characters seem to cause the most issues in practice.

It's nice, but I still hope we find some better use for ML in games, like character animation. Who knows, maybe ML becomes the next big thing and my depression about missing 'new games' comes to an end :D

Aside from that, texture upscaling to reduce storage would be interesting and likely already practical. But without temporal subpixel samples, that would be very different from DLSS.
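
A minimal sketch of what load-time texture up-res could look like, assuming an ESPCN-style single-image SR network (TinySR and its untrained weights are placeholders, not a shipping solution):

```python
# Hypothetical load-time texture up-res: ship 1K textures, rebuild 2K on load.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Minimal ESPCN-style net: conv features + pixel shuffle to 2x."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into 2x resolution
        )

    def forward(self, x):
        return self.body(x)

model = TinySR().eval()                   # placeholder weights; load trained ones in practice
texture_lr = torch.rand(1, 3, 256, 256)   # stands in for a decoded shipped asset
with torch.no_grad():
    texture_hr = model(texture_lr)        # (1, 3, 512, 512); no temporal samples needed
```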
 