Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Wouldn't increased radius alleviate some pressure on the system to load things JIT? Assuming future games feature similar player movement speed through the environment a bigger "world cache" should make it a bit easier to swap stuff around in the background without disrupting the player's immediate surroundings.

An issue mentioned is that of scaling. You're likely looking at exponential capacity cost relative to processing speed gains.

An overarching problem, then, as mentioned in a previous post, is that memory scaling is poor on the hardware progression side. In a hypothetical world in which we had, say, 8x more memory capacity at each tier, the cost/benefit analysis would likely be different. The PS5 would really have 64GB or even 128GB of memory, with PC hardware scaled up comparably. At something like 128GB I do wonder if the design paradigm would shift to something like loading essentially the entire game world.
 
Wouldn't increased radius alleviate some pressure on the system to load things JIT? Assuming future games feature similar player movement speed through the environment a bigger "world cache" should make it a bit easier to swap stuff around in the background without disrupting the player's immediate surroundings.
It would seem to, but it doesn't actually :) Think about drawing a circle around a dot (the character) that you move around as the dot moves. The amount of stuff you need to cycle in/out you can think of as roughly the area covered by the leading edge of the swept circle. If you move faster, that area will be larger per unit time. Critically, though, making that circle bigger (i.e. a larger streaming radius) actually makes the amount of things you need to stream in/out slightly *worse* for the same movement speed, because you effectively pick up more area on the "sides" (relative to your movement direction). The only case in which it is a benefit is if the new area you are covering is "off the map", the limit being a circle big enough to cover all the assets in your level.
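A rough back-of-the-envelope sketch of that geometry (plain Python, with made-up asset density and speed numbers, not anything from an actual engine): for a streaming circle of radius r moving at speed v, the leading edge sweeps roughly 2*r*v of new area per second, so the required streaming rate grows linearly with the radius rather than shrinking.

```python
# Rough model of streaming load for a circular streaming radius.
# Assumptions (illustrative only): uniform asset density in MB per square meter,
# player moving in a straight line at constant speed.

def streaming_rate_mb_per_s(radius_m, speed_m_s, density_mb_per_m2):
    """New area swept by the leading edge per second is ~2 * radius * speed,
    so required streaming throughput scales linearly with the radius."""
    new_area_per_s = 2.0 * radius_m * speed_m_s  # m^2 of fresh world per second
    return new_area_per_s * density_mb_per_m2

density = 0.05  # MB per m^2 (made-up number)
speed = 20.0    # m/s, e.g. a fast vehicle

for radius in (100.0, 200.0, 400.0):
    rate = streaming_rate_mb_per_s(radius, speed, density)
    print(f"radius {radius:5.0f} m -> ~{rate:5.0f} MB/s")
# A bigger radius means *more* data to stream per second at the same speed,
# not less - the "more area on the sides" effect described above.
```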

To your point of "it means things that get swapped at the edge of a larger radius might be less noticeable": that could certainly be the case, but it doesn't do anything for the worst case - i.e. a player proceeding at max speed in a single direction. This is also unfortunately a very common case as players traverse from point A to B. If your streaming throughput is not fast enough, then as you continue moving your effective streamed-in radius will decrease to the point where you are back to the original problem. This is similar to SSD SLC caches: a larger cache can mitigate short bursts of I/O, but it doesn't help when someone writes a pile of sequential data, where throughput will eventually revert to the underlying non-cached rate.
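To put a rough number on that worst case, continuing the same made-up model from above: if the available throughput can't keep up with the 2*r*v sweep, the effective streamed-in radius settles at whatever the throughput can sustain, regardless of how big the radius started out.

```python
# Steady-state effective streaming radius under a throughput cap (same toy model).
# If the player keeps moving at speed v, the sustainable radius is where the
# required throughput (2 * r * v * density) equals the available throughput.

def sustainable_radius_m(throughput_mb_s, speed_m_s, density_mb_per_m2):
    return throughput_mb_s / (2.0 * speed_m_s * density_mb_per_m2)

density = 0.05      # MB per m^2 (made-up)
speed = 20.0        # m/s
throughput = 300.0  # MB/s actually available for streaming (made-up)

print(f"sustainable radius at {speed:.0f} m/s: "
      f"~{sustainable_radius_m(throughput, speed, density):.0f} m")
# Starting with a bigger radius only buys a short grace period; keep moving and
# the streamed-in area shrinks back toward this value - the SLC-cache analogy above.
```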

That's fair. However we can't expect gamers to accept the explanation that it's complicated...forever.
And I'm not saying gamers should accept it at all. In these conversations I'm only trying to provide some explanations when people say things like "I don't understand why this isn't the case". I'm not trying to say people should ever judge things by anything other than the results. We have plenty of good and bad examples when it comes to this stuff; it's completely legitimate to call out the bad cases.
 
That all makes sense but the issue is that these have been known problems for multiple console generations now. At what point can we stop using the excuse that it’s complicated? Multi-threading and resource management aren’t going to get any easier as games get bigger in the future. Core counts and memory pools and interface bandwidths will only keep growing. Software needs to catch up.
Just increasing the degree of multi-threading might not help the problem, though, because it introduces different bottlenecks depending on how CPUs handle multi-threaded workloads. There is latency between AMD CCXs, for instance, and in theory you could no longer be limited by the time it takes to calculate your way through a workload, but instead stall waiting for the parts of that workload to be assembled into a usable state. Just making it multi-threaded might not change how the problems present. I think we've seen something similar happen with the current implementations of DirectStorage: we may have had a stutter caused by the CPU being busy decompressing data, and moving that work to the GPU causes a stutter because the GPU is busy decompressing data instead, sometimes even with a larger performance hit.
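A toy way to see that point (my own illustrative model, nothing measured from a real engine): Amdahl's-law style scaling with a per-thread synchronization cost bolted on shows how adding threads can stop helping, and then start hurting, once the cost of assembling results across cores dominates.

```python
# Toy scaling model: Amdahl's law plus a synchronization cost that grows with
# thread count (standing in for cross-CCX latency, assembling results, etc.).
# All numbers are made up for illustration.

def frame_time_ms(threads,
                  serial_ms=4.0,            # work that can't be parallelized
                  parallel_ms=12.0,         # perfectly parallel work
                  sync_ms_per_thread=0.4):  # cost of coordinating each extra thread
    return serial_ms + parallel_ms / threads + sync_ms_per_thread * threads

for t in (1, 2, 4, 8, 16, 32):
    print(f"{t:2d} threads -> {frame_time_ms(t):5.2f} ms per frame")
# Past a certain thread count the sync term dominates and frame time gets worse
# again - "more threads" isn't automatically "faster".
```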

I also think it's important to recognize that you notice stutter a lot more at higher framerates, and console framerates have shifted from mostly 30fps in the 360/PS3 era to mostly 60fps now, with many titles offering a 120Hz mode.
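A quick worked example of why that matters (numbers are just illustrative): a fixed-length hitch eats a much bigger share of the frame budget at higher target framerates.

```python
# How a fixed 10 ms hitch looks at different target framerates (illustrative).
hitch_ms = 10.0

for fps in (30, 60, 120):
    budget_ms = 1000.0 / fps
    spans = (budget_ms + hitch_ms) / budget_ms  # frame periods the long frame covers
    print(f"{fps:3d} fps: budget {budget_ms:5.1f} ms, "
          f"a {hitch_ms:.0f} ms hitch is {hitch_ms / budget_ms * 100:5.1f}% of the budget "
          f"(~{spans:.1f} frame periods)")
# The same stall that fits comfortably inside a 33 ms frame at 30 fps blows well
# past the 8.3 ms budget at 120 fps, which is why it reads as a visible stutter.
```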
 
@Andrew Lauritzen How well does the engineering/programming side of game dev scale with headcount?

From my admittedly rather limited (in games, anyway) experience:
it's less about scaling with headcount and more about scaling with EXPERT headcount.

Tech leads and senior-level programmers are always going to be your limiting factor,
both in terms of gatekeeping the code that gets accepted, and because they will be the hardest people to find and keep!

20 junior and mid-level devs can produce lots of new code and thus new features, AND new bugs.
But it takes expert programmers to review all that code, make and inform larger architectural decisions,
point out pitfalls in certain designs, and highlight problem areas.
And experts are few and far between; for any decent-sized codebase, even a newly hired expert will need months or years to become an expert in that codebase.

This is all assuming you have a really good org around you: fully staffed QA and test systems,
and good DevOps to handle everything that is required but isn't actually writing code.

tldr; imho it really only scales with the top-level expert programmers, and they are VERY hard to find and keep.
My ballpark guess is that somewhere beyond 5-10 expert devs, plus at least as many supporting mid-level devs, is the point where
productivity starts to slow down if you're adding more people - for a single specific project.
Obviously for something like UE you can break it up into many different components.

Thankfully the content side of game dev scales much better with headcount, i.e. art creation.
 
However we can't expect gamers to accept the explanation that it's complicated...forever.
Well, it kinda is. ;) And even where a complicated problem is finally solved in principle, whether the solution actually gets applied always comes down to a financial, business question: what will it cost to solve this, and what is it worth?

Not wanting to derail this great tech talk with business talk! Just... that's what gamers are going to face. The world is imperfect: there aren't enough resources to make things perfect to the standard we want, although if we settled for an older standard, like 12th-century Western Europe or 1990s gaming, we could easily make it 'perfect'. The envelope is always going to be pushed to something shy of ideal until tech plateaus. Tables and wheels are pretty good and reliable these days...
 