Digital Foundry Article Technical Discussion [2020]

Status
Not open for further replies.

milk

Like Verified
Veteran
Scott's post directly above here captured everything, but I wanted to reply since you asked me specifically! Like the quote in Scott's post said, the thing about motion matching is that the system is creating a new animation from the bottom up based on the situation -- rather than the additive "ok, you're playing a running animation, now also play a reload animation and a stumble animation over the top so it looks dynamic" approach of conventional animation blends, motion matching says "the player is moving based on these metrics, sample the relevant parts of 50 animation clips and create a new motion that captures all of the goals the animators laid out"... when you drill all the way down, there's a fundamental similarity in terms of like, what the code is doing at the end (playing multiple animations at once + various code-driven modifications to bone positions), but that doesn't make them the same any more than you'd say two renderers are the same because they both use fragment shaders to fill buffers.
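To make the "sample the relevant parts of many clips" idea concrete, here's a toy sketch of the core motion-matching lookup. Everything here is illustrative, not from any actual engine: the feature dimensions, weights, and database sizes are made up, and real systems use much richer features (future root trajectory, foot positions/velocities, etc.) plus acceleration structures instead of a brute-force scan.

```python
import random

# Toy motion-matching lookup. Every frame of every clip in the animation
# database is described by a feature vector (in real systems: future root
# trajectory samples, current foot positions/velocities, and so on).
# Each tick the runtime picks the database frame whose features best match
# the current gameplay situation, using animator-tuned weights, and plays
# from there -- rather than crossfading whole canned clips.

random.seed(0)
FEATURE_DIM = 8
N_FRAMES = 50 * 60  # pretend: 50 clips of 60 frames each

database = [[random.gauss(0, 1) for _ in range(FEATURE_DIM)]
            for _ in range(N_FRAMES)]

# Hypothetical animator-authored weights: say trajectory terms count double.
weights = [2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0]

def best_frame(query):
    """Index of the database frame minimizing weighted squared distance."""
    def cost(frame):
        return sum(w * (f - q) ** 2 for w, f, q in zip(weights, frame, query))
    return min(range(N_FRAMES), key=lambda i: cost(database[i]))

# A frame that exactly matches the query has zero cost, so it wins.
assert best_frame(database[123]) == 123
```

The interesting design knob is the weight vector: animators tune how much trajectory matching matters versus pose continuity, which is how the "goals animators laid out" end up expressed in code.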

I'm not an animator by trade and only dabble the tiniest bit in rigging, so I unfortunately can't offer any more insight than the quotes and videos linked above, but I know enough to know it's a big change in the way both the art team and the game's code approach putting animations on screen, and the results speak for themselves in games where it's used.

If I can add just a little bit of extra layman's input: all the motion matching research I've seen so far points out that it's not yet a silver bullet for all kinds of common game animation, and it's, at this point, mostly used for locomotion. I remember seeing ND devs themselves pointing out on Twitter that while motion matching was indeed a new trick up their sleeve being employed in LoU2, the game still relied a lot on traditional hand-crafted animation blend points and the like. An extra tidbit one animator shared was that a very effective thing they did was to never do these blends linearly, but use some sort of curve, and that made the blend that much more natural.
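That curved-blend tidbit is easy to show in a few lines. This is a generic sketch of the idea (a smoothstep-shaped crossfade versus a linear one), not the specific curve any ND animator described:

```python
def linear(t):
    return t

def smoothstep(t):
    # Cubic ease-in/ease-out: zero slope at t=0 and t=1, so the transition
    # starts and ends gently instead of snapping like a linear crossfade.
    return t * t * (3.0 - 2.0 * t)

def blend_pose(pose_a, pose_b, t, curve=smoothstep):
    """Crossfade two poses (lists of joint values) with a shaped weight."""
    w = curve(t)
    return [(1.0 - w) * a + w * b for a, b in zip(pose_a, pose_b)]

# Early in the transition the curved blend has barely left pose A,
# while the linear blend has already shifted 10%:
print(blend_pose([0.0], [1.0], 0.1))                # curved weight ~0.028
print(blend_pose([0.0], [1.0], 0.1, curve=linear))  # linear weight 0.1
```

Because the curve's velocity is zero at both endpoints, the incoming animation doesn't visibly "pop" in or out, which is presumably the naturalness the animator was describing.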
 
Last edited:

SumoSaki

Newcomer
An extra tidbit one animator shared was that a very effective thing they did was to never do these blends linearly, but use some sort of curve, and that made the blend that much more natural.

This is very interesting and very noticeable in quite a few current generation games.
 
I'm surprised that during actual gameplay the players have no self-shadowing whatsoever. That can't be that expensive...

It would be expensive to do it right; with a close up the resolution does not need to be that high, but with the default gameplay view it would probably require high shadow resolution in order for it not to flicker/glitch and 'break the illusion'. At least that is my guess
 

milk

Like Verified
Veteran
It would be expensive to do it right; with a close up the resolution does not need to be that high, but with the default gameplay view it would probably require high shadow resolution in order for it not to flicker/glitch and 'break the illusion'. At least that is my guess

Maybe a naive single-shadowmap solution would be too expensive or flickery. But maybe they could get by with a per-player shadowmap tightly fitted to each one's bounding box, like UE3 used to do. Or maybe use a cheap screen-space approximation, since that seems to be the new fad these days.
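The per-player shadowmap idea comes down to texel density: fitting a small ortho frustum to one character's bounds means the whole map resolution is spent on that character. A rough sketch of the arithmetic, in the spirit of UE3's per-object shadows (all names and numbers here are illustrative, not from any engine):

```python
# Per-character shadowmap fit: instead of one big scene shadowmap, each
# character gets its own small orthographic projection fitted to its
# bounding box, so the full texel budget covers just that character.
# For simplicity the box is treated as already being in light space.

def fit_ortho_extents(aabb_min, aabb_max, pad=0.1):
    """Light-space ortho extents (left, right, bottom, top) that just
    cover the box, with a little padding for soft-shadow filtering."""
    return (aabb_min[0] - pad, aabb_max[0] + pad,
            aabb_min[1] - pad, aabb_max[1] + pad)

def texel_size(extents, map_res):
    """World units covered by one shadowmap texel across the width."""
    left, right, _, _ = extents
    return (right - left) / map_res

# A ~2 m wide character into a 512px map: roughly 4 mm per texel,
# far denser than the same 512px stretched over a whole arena.
ext = fit_ortho_extents((-1.0, 0.0, -1.0), (1.0, 2.0, 1.0))
print(texel_size(ext, 512))
```

That density is why a small per-object map can look stable where a scene-wide map of the same resolution would shimmer, though you pay one extra shadow pass per character.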
 

thicc_gaf

Regular
Maybe a naive single-shadowmap solution would be too expensive or flickery. But maybe they could get by with a per-player shadowmap tightly fitted to each one's bounding box, like UE3 used to do. Or maybe use a cheap screen-space approximation, since that seems to be the new fad these days.

Also maybe worth mentioning, time crunch could've been a factor. The team(s) may have had to weigh what gets in against what wasn't as critical, and self-shadowing might've been one of the things they deemed less essential.

Though, I would be surprised if there are none in-game at all -- are they present in the replays/cutscenes? If so, maybe they couldn't get the performance impact low enough and still get it out by launch. Next-gen systems should be able to do it if the devs have the time, resources and budget.
 

KOF

Newcomer
For gaming, both the Micro LED (Samsung The Wall at the Toronto Samsung store) and the Dual Layer LCD (Hisense U9E at an electronics mall in Beijing) pale in comparison with my Panasonic GZ2000 OLED TVs. For aiming in Call of Duty, Battlefront, and Battlefield, I tend to sit closer than I usually sit, which is not ideal for the other two display types. For The Wall, that means the nasty seams between modules become more apparent. For the U9E, that means phostorization becomes even more apparent than it already was. Dual Layer LCDs already have terrible viewing angles and uniformity due to combining two cells, but the worst is phostorization, where a color in one cell is smeared onto the other cell. It was especially bad with the U9E as it used a different resolution for each cell (one at 1080p, one at 2160p). All this for barely better black than a regular FALD.

I also have trouble watching my Panasonic GZ2000 near field, but for a totally different reason. It's because the dynamic range is too good: with APL that can be set lower than any LCD's, the difference between peak luminance and low APL presents spectacular dynamic range at the expense of hurting my eyes. So if I want to prioritize resolution and aim better, I need to sacrifice HDR dynamic range (no matter how good it looks). If I want to prioritize HDR dynamic range, then I need to sacrifice resolution, sit back, and suffer more game deaths. So in that case, I tend to engage Dynamic Tone Mapping and raise the APL more like an LCD's. My Panny is also relatively safe from burn-in due to a custom heatsink design that can combat it for up to 16,000 hours compared with the regular LG OLED. I've seen zero IR while using mine for extended periods of gaming so far at 7,000 hours.

But I do envy this year's Panny JZ2000, which comes with HDMI 2.1 feature levels and lower input lag (mine is relatively fine at 22ms, but it's true I can't engage BFI on top of it, unlike the LG; BFI adds 8ms to both the LG and the Panny). But I think I've had enough of 65 inches for now, so my next TV will be an 83-inch OLED...
 