> 1700 hours? Which time zone?

CET!
> At the time of the in-engine "gameplay won't look like that" BF6 teaser?

All Frostbite games do this, I swear - it is frustrating that they essentially do in-engine renders for their cinematics and "gameplay" reveals.
> My video should be going live today at 17 and at one point I mention a visual artefact with Nanite that I have seen no one else post about before - I do wonder if it is a feature of how Nanite functions or if it is just an issue in the current EA version of UE5. I would be curious to hear what people think could be the cause! Essentially there is some shuffling of Nanite when the camera changes position, not anything like tessellation boiling or a discrete LOD shift, but more as if the world pieces shuffle into place as the camera comes to a rest. Unfortunately that is the best way I can describe it - it needs to be seen in video form really.

Do you happen to have a link? I'd be curious to see.
> Essentially there is some shuffling of Nanite when the camera changes position ... more as if the world pieces shuffle into place as the camera comes to a rest.

You mean almost as if it's shuffling back and forth between two states, unable to properly settle as the camera starts to rest?
If you want to replace surface triangles with SDF, you also need a 3D texture for material (at least for UVs). So two volumes. That's much more RAM. And you constantly need to search for the surface. SDF is hard to compress (maybe impossible), because doing something like an octree with a gradient per cell breaks the smooth signal, and the resulting small discontinuities break local-maximum search or sphere-tracing methods, I guess.
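To make the "constantly searching for the surface" cost concrete, here is a minimal sphere-tracing sketch against a discretized SDF volume. It is purely illustrative - nearest-neighbour sampling, unit-sized voxels, a toy sphere baked into the grid - and all of the names (SdfVolume, sphereTrace, etc.) are made up for this example, not any engine's actual code:

```cpp
// Minimal sphere-tracing sketch against a discretized SDF volume.
// Illustrative only: nearest-neighbour sampling, unit-sized voxels,
// and a toy analytic sphere baked into the grid.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct SdfVolume {
    int n;                                   // n*n*n voxels
    std::vector<float> d;                    // signed distance per voxel, world units

    float sample(Vec3 p) const {             // nearest-neighbour lookup
        int x = std::clamp(int(p.x), 0, n - 1);
        int y = std::clamp(int(p.y), 0, n - 1);
        int z = std::clamp(int(p.z), 0, n - 1);
        return d[(z * n + y) * n + x];
    }
};

// March along the ray, stepping by the sampled distance each time.
// This is the per-ray "search for the surface": a chain of dependent
// volume fetches until the distance drops below the threshold.
bool sphereTrace(const SdfVolume& v, Vec3 o, Vec3 dir, float tMax, float eps, Vec3* hit) {
    float t = 0.0f;
    while (t < tMax) {
        Vec3 p = add(o, mul(dir, t));
        float dist = v.sample(p);
        if (dist < eps) { *hit = p; return true; }  // close enough: surface found
        t += std::max(dist, eps);                   // stored distance is the safe step size
    }
    return false;
}

int main() {
    // Build a 32^3 volume containing a sphere of radius 8 at the centre.
    SdfVolume v{32, std::vector<float>(32 * 32 * 32)};
    for (int z = 0; z < 32; ++z)
        for (int y = 0; y < 32; ++y)
            for (int x = 0; x < 32; ++x) {
                float dx = x - 16.0f, dy = y - 16.0f, dz = z - 16.0f;
                v.d[(z * 32 + y) * 32 + x] = std::sqrt(dx * dx + dy * dy + dz * dz) - 8.0f;
            }
    Vec3 hit;
    if (sphereTrace(v, {0.0f, 16.0f, 16.0f}, {1.0f, 0.0f, 0.0f}, 64.0f, 0.05f, &hit))
        std::printf("hit at x = %.2f\n", hit.x);    // expect roughly x = 8
}
```

Every ray pays a chain of dependent volume fetches before it finds the surface, and the safe step size relies on the stored values really being distances - which is exactly what lossy compression of the field tends to break.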
I'd really like to know how collision detection between 2 SDF volumes works. Never came across an algorithm. Could be interesting...
Multiple surface representations are just one source of exploding complexity. There is also diffuse vs. specular, transparent vs. opaque, raster vs. RT, static vs. deforming, fluid vs. rigid...
I share this detail-amplification idea and will try something here. My goal is to move 'compression using instances' from a per-object level to a texture-synthesis level. That requires blending of volumetric texture blocks, where SDF is attractive. Instead of duplicating whole rocks, we could just duplicate a certain crack over rocky surfaces - low-frequency repetition vs. high-frequency repetition. Surely interesting but very imperfect. It can't do everything, so it's just another addition to ever-increasing complexity.
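For what it's worth, here is roughly what that "stamp the same crack over many rocks" blending could look like at the SDF level - a standard polynomial smooth-min used as a smooth subtraction, with toy analytic shapes standing in for the volumetric texture blocks. Everything here (rockSdf, crackTileSdf, the blend width) is illustrative, not an actual pipeline:

```cpp
// Sketch of compositing a reusable "crack" detail SDF onto a base rock SDF.
// The crack block is defined once and stamped at arbitrary positions by
// transforming the query point; smoothSubtract blends it into the base.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Polynomial smooth minimum; k controls the blend width.
float smoothMin(float a, float b, float k) {
    float h = std::max(k - std::abs(a - b), 0.0f) / k;
    return std::min(a, b) - h * h * k * 0.25f;
}
// Smooth subtraction: carve b out of a with a soft transition.
float smoothSubtract(float a, float b, float k) {
    return -smoothMin(-a, b, k);
}

// Base "rock": a sphere of radius 2 at the origin.
float rockSdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 2.0f;
}
// Reusable "crack" tile: a thin box, defined once in its own local space.
float crackTileSdf(Vec3 p) {
    float bx = 0.05f, by = 0.5f, bz = 0.5f;        // thin slab
    float qx = std::abs(p.x) - bx, qy = std::abs(p.y) - by, qz = std::abs(p.z) - bz;
    float ax = std::max(qx, 0.0f), ay = std::max(qy, 0.0f), az = std::max(qz, 0.0f);
    return std::sqrt(ax * ax + ay * ay + az * az)
         + std::min(std::max(qx, std::max(qy, qz)), 0.0f);
}

// Composite: stamp the same crack tile at a given offset on the rock surface.
float rockWithCrack(Vec3 p, Vec3 crackPos) {
    Vec3 local{p.x - crackPos.x, p.y - crackPos.y, p.z - crackPos.z};
    return smoothSubtract(rockSdf(p), crackTileSdf(local), 0.1f);
}

int main() {
    // At a point on the plain rock surface, the composite distance becomes
    // positive where the crack has carved material away.
    Vec3 onSurface{2.0f, 0.0f, 0.0f};
    std::printf("plain rock: %.3f  with crack: %.3f\n",
                rockSdf(onSurface), rockWithCrack(onSurface, {2.0f, 0.0f, 0.0f}));
}
```

The same crackTileSdf block can be stamped at any crackPos, which is the high-frequency-repetition idea: the detail is reused, only the placement varies.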
I've chosen surfels because they can represent thin walls pretty well even at aggressively reduced LOD. Voxels or SDF can't do that, and volume representations are always bloated if we only care about the surface, and they require brute-force searching. So they become a win only if geometry is pretty diffuse at the scale of interest. (Generally speaking. It always depends...)
Unfortunately. We already have too much complexity - think of games using thousands of shaders.
It's not that I rule out searching for the ultimate simple solution which solves everything quickly, but I doubt we'll ever find it. It will keep being a matter of adding new things, removing some older ones, and trying for the best compromise.
> Volume encoded UV maps are already a thing though, don't take up a ton of RAM, and use standard texturing! Why bother with 3D texturing when you can use the same art pipeline everyone already has:

I remember the paper pretty well, although I did not understand it in detail. However, it inherits the typical grid problems: if the model has complex sections (mouth inside and outside, space between fingers, etc.), texture leaks across multiple surfaces. To solve this, you need high volume resolution again. It also adds a level of indirection: object space -> UV volume -> 2D texture. Having UVs on triangles avoids this. Finally, it does not really solve any texturing problems: you still need to solve for good UVs, and they still have seams across their boundaries.
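To spell out the extra indirection (object space -> UV volume -> 2D texture), here is a hedged little sketch; the structures, resolutions and point sampling are made up for clarity and are not taken from the paper:

```cpp
// Sketch of the indirection described above for volume-encoded UVs:
// object-space position -> coarse 3D UV volume -> ordinary 2D texture fetch.
// Positions are assumed normalized to [0,1]^3; everything is illustrative.
#include <algorithm>
#include <cstdio>
#include <vector>

struct UV { float u, v; };

struct UvVolume {                    // coarse grid of UV coordinates
    int n;
    std::vector<UV> cells;           // n*n*n entries

    UV lookup(float x, float y, float z) const {   // nearest cell, no filtering
        int ix = std::clamp(int(x * n), 0, n - 1);
        int iy = std::clamp(int(y * n), 0, n - 1);
        int iz = std::clamp(int(z * n), 0, n - 1);
        return cells[(iz * n + iy) * n + ix];
    }
};

struct Texture2D {                   // ordinary 2D texture, point-sampled
    int w, h;
    std::vector<float> texels;
    float sample(UV uv) const {
        int tx = std::clamp(int(uv.u * w), 0, w - 1);
        int ty = std::clamp(int(uv.v * h), 0, h - 1);
        return texels[ty * w + tx];
    }
};

// The extra hop: where per-triangle UVs read the texture directly, this path
// needs a 3D fetch first. Two nearby surface sheets that fall into the same
// coarse cell get the same UV - that is the leaking problem that forces the
// volume resolution up.
float shade(const UvVolume& uvVol, const Texture2D& tex, float x, float y, float z) {
    UV uv = uvVol.lookup(x, y, z);   // indirection 1: object space -> UV
    return tex.sample(uv);           // indirection 2: UV -> texel
}

int main() {
    UvVolume uvVol{4, std::vector<UV>(64, UV{0.5f, 0.5f})};
    Texture2D tex{2, 2, {0.1f, 0.2f, 0.3f, 0.9f}};
    std::printf("albedo = %.2f\n", shade(uvVol, tex, 0.6f, 0.6f, 0.6f));
}
```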
> Others have done everything and anything colliding with SDFs: https://spite.github.io/sdf-physics/

Can't run it in my VirtualBox, but it seems to be about particles colliding against SDF volumes? That's easy, because it's just ray tracing. But to intersect two SDFs, we would need to do something like finding the set where both distances are zero. Maybe a hierarchical top-down method would do, but that sounds like more work than a Minkowski sum over convex polyhedra. I'm puzzled how Atomontage or Claybook do this. Recently I read somewhere that SDF is generally faster than polyhedra, but I can't believe it until I know how it works. It's not as if rigid-body simulators can only handle a small number of objects.
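For the two-SDF case, one scheme that gets used in practice (no claim that this is what Claybook or Atomontage actually do) is to keep surface sample points for one body and query them against the other body's SDF, taking the field gradient as the contact normal. A toy sketch, with an analytic sphere standing in for body B's volume and all names invented for illustration:

```cpp
// Toy SDF-vs-SDF contact generation: surface sample points of body A are
// queried against body B's SDF; a contact is emitted where the distance goes
// negative, with the normal taken from a central-difference gradient of B.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder SDF for body B: a unit sphere at the origin.
float sdfB(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

Vec3 gradB(Vec3 p) {                       // central-difference normal
    const float e = 1e-3f;
    return {
        (sdfB({p.x + e, p.y, p.z}) - sdfB({p.x - e, p.y, p.z})) / (2 * e),
        (sdfB({p.x, p.y + e, p.z}) - sdfB({p.x, p.y - e, p.z})) / (2 * e),
        (sdfB({p.x, p.y, p.z + e}) - sdfB({p.x, p.y, p.z - e})) / (2 * e)};
}

struct Contact { Vec3 point, normal; float depth; };

// Body A is represented here only by surface sample points already placed in
// B's space (a real solver would transform them by A's pose each step).
std::vector<Contact> collide(const std::vector<Vec3>& surfaceOfA) {
    std::vector<Contact> contacts;
    for (Vec3 p : surfaceOfA) {
        float d = sdfB(p);
        if (d < 0.0f)                              // inside B: penetrating
            contacts.push_back({p, gradB(p), -d}); // depth = penetration
    }
    return contacts;
}

int main() {
    // Two sample points of A: one poking 0.2 into the sphere, one outside.
    std::vector<Vec3> samples = {{0.8f, 0.0f, 0.0f}, {1.5f, 0.0f, 0.0f}};
    for (const Contact& c : collide(samples))
        std::printf("contact depth %.2f, normal (%.2f %.2f %.2f)\n",
                    c.depth, c.normal.x, c.normal.y, c.normal.z);
}
```

It sidesteps the "both distances zero" set entirely by making one body a point cloud for collision purposes, which is presumably part of why it can be fast - at the cost of contact quality depending on how densely A is sampled.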
> As for other things, I've been going through those, and I suspect a lot of it can be collapsed. Specular and diffuse, for example: this was separated to begin with out of sheer performance and data representation. However, as area lights, BRDFs, etc. advance, the argument for calculating the two separately seems to be vanishing. After all, you're just integrating a single signal - ultimately the same signal, with energy preservation.

Yeah, but that's not what I meant. It makes sense to use different methods for low-frequency lighting (indirect diffuse bounces) and for high frequencies (specular / sharp reflections, but also hard shadows). I don't know a method which can do both efficiently.
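For what the quoted "one signal" argument means in practice, here is a toy Monte Carlo estimate of outgoing radiance where diffuse and specular are just two lobes of a single integrand (Lambert plus a normalized Phong lobe, weights chosen so kd + ks <= 1). Constant incoming radiance, nothing engine-specific, every constant here is an arbitrary assumption:

```cpp
// One hemisphere integral of brdf * L_in * cos(theta); whether the brdf has a
// diffuse lobe, a specular lobe, or both only changes the integrand.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    const float PI = 3.14159265f;
    const float kd = 0.7f, ks = 0.25f, shininess = 64.0f;    // kd + ks <= 1
    const Vec3 n{0.0f, 0.0f, 1.0f};                           // surface normal
    const Vec3 v{0.0f, std::sin(0.5f), std::cos(0.5f)};       // view direction
    const Vec3 r{0.0f, -v.y, v.z};                            // v mirrored about n
    const float incoming = 1.0f;                              // constant environment

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    double sum = 0.0;
    const int N = 200000;
    for (int i = 0; i < N; ++i) {
        // Uniform hemisphere sample around n, pdf = 1 / (2*pi).
        float z = uni(rng), phi = 2.0f * PI * uni(rng);
        float s = std::sqrt(std::max(0.0f, 1.0f - z * z));
        Vec3 l{s * std::cos(phi), s * std::sin(phi), z};

        // One integrand containing both lobes.
        float diffuse  = kd / PI;
        float specular = ks * (shininess + 2.0f) / (2.0f * PI)
                       * std::pow(std::max(dot(r, l), 0.0f), shininess);
        float brdf = diffuse + specular;

        sum += brdf * incoming * dot(n, l) * (2.0f * PI);     // divide by pdf
    }
    // Expect a value below kd + ks = 0.95: the combined lobes stay energy-preserving.
    std::printf("outgoing ~ %.3f (incoming = %.3f)\n", sum / N, incoming);
}
```

Of course this says nothing about how to evaluate that integral efficiently at very different frequencies, which is the part I'm sceptical about.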
> And signed distance fields are practically built to be deformed, you can just update the texture, easy!

Easy? Maybe, but slow. It's a volume, and you need to transform all of it. If you work with the surface directly, you transform just that. That's n^3 vs. n^2. And this translates to lighting and rasterization as well. The difference is much too big to 'go all volumetric because it's easy'. Notice I actually do so, but only for the editor, so offline. And I'm not sure if I'm maybe a bit crazy with that.
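To put rough numbers on the n^3 vs. n^2 point, a trivial back-of-the-envelope count of cells touched when transforming a whole volume versus only a surface shell; the 6·n² shell figure is just a crude assumption for a roughly cube-shaped object:

```cpp
// Cells touched when deforming a full volume vs. just its surface shell,
// at a few resolutions. Purely illustrative counting, no engine specifics.
#include <cstdio>

int main() {
    const int sizes[] = {64, 128, 256, 512};
    for (int n : sizes) {
        long long volumeCells  = 1LL * n * n * n;   // every voxel must be updated
        long long surfaceCells = 6LL * n * n;       // rough surface-shell bound
        std::printf("n=%4d  volume=%12lld  surface=%9lld  ratio=%6.1fx\n",
                    n, volumeCells, surfaceCells,
                    double(volumeCells) / double(surfaceCells));
    }
}
```

At n = 256 that is already a gap of roughly 40x, before lighting or rasterization even enter the picture.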
> To me, doing a lot of work deciding what it is you're doing, and how you're even going to do it, first and foremost can offer a lot of time saving later on. The appropriate example is of course UE5 itself. It has multiple texturing approximations, with cards and virtual texturing. It has multiple mesh approximations with Nanite and standard geo and distance fields and RT proxies. And multiple people had to take all this into account, had to make tools to make everything work together, and have to maintain compatibility all the way through, along with having all of that as a dependency for any future changes. That's a lot of work, a lot of which might have been largely avoided if they'd communicated, worked together, and figured out the best course for what everyone needs to do.

There's always something to improve and to criticize. Finding such weaknesses is part of our job.
To summarize:
1) Desktop 2060 has similar performance to the 2080 Max-Q;
2) Desktop 2060 runs Valley of the Ancients at 1080p 37FPS;
3) This is without considering possible engine performance improvements over the past year;
4) 2021's Valley of the Ancients demo is less demanding than 2020's Lumen in the Land of Nanite. I.e. the 2020 demo would show lower performance on any GPU.
5) The 2020 demo ran on the PS5 at 1440p30. I believe it averaged 1440p plus reconstruction to 4K (which costs performance too).
Therefore, the claims (or perhaps a botched/hopeful translation) of a laptop with a 2080 Max-Q running the 2020 demo @1440p 40FPS are false.
Tim Sweeney didn't lie and the insults thrown at him in this thread are baseless.
> A year later, turns out that a more advanced demo of UE5 runs "fine" on a desktop with a freaking HDD and not a crazy amount of RAM in editor mode.

Well I wouldn't personally go *too* far with this narrative. You definitely don't need some crazy-fast SSD and DirectStorage or similar, but regular HDDs (and obviously optical drives) are not able to keep up with the random access latency requirements when you move quickly through a detailed Nanite world. You will see pop-in and low-res geometry on conventional HDDs, and for a lot of games that will likely not be shippable.
Does the console version of the demo use the Epic settings or High settings for Lumen?

EDIT: DF video
> Well I wouldn't personally go *too* far with this narrative. You definitely don't need some crazy-fast SSD and DirectStorage or similar, but regular HDDs (and obviously optical drives) are not able to keep up with the random access latency requirements when you move quickly through a detailed Nanite world. ...

True, a SATA SSD should do the trick in terms of cutting down latency. But what about the awesome software rendering that your team has implemented? How does it stack up to mesh/primitive shaders in terms of cache behaviour and minimizing round trips to VRAM? Do you envision a time in the near future where a software renderer's flexibility will de facto trump any efficiency gained from the mesh/primitive shader path (like mesh/task shaders beating the input assembler and tessellator)?

> Does the console version of the demo use the Epic settings or High settings for Lumen?

Epic settings at 1080p TSR at 30 fps.
> Thus I'm not sure complexity need explode unnecessarily, not if it's taken into account as a problem to be worked.

Well, after watching the Lumen stream, I'm no longer sure we disagree about complexity at all.
If the engine is that resource-heavy, they sure need to optimize the hell out of it.
Lumen is very heavy. A 2070 Super comes close to a real locked 30 fps at 1080p TSR. Likewise, a 3080 and a 6800 XT aren't able to reach a locked 60 fps at 1080p TSR. And this demo is heavier than the first one.

I wonder why it's so heavy - the PS5 was rock steady at 1440p30 in the first iteration, and now a 2070 Super struggles with 1080p.