Good old Nyquist sampling theorem applies to displaced surfaces as well.

Since people are still talking about it, here's a small example of the swimming character meshes in Messiah. The scientist on the right. Courtesy of Digital Foundry.
Should be timestamped at 24:16.
Regards,
SB
Good old Nyquist sampling theorem applies to displaced surfaces as well.
Characters were a few cylinders with tessellation, with hole-filling polygons in between.
Pretty sure they didn't double the number of displaced points at each LoD level, so the displacement sampling locations could move around the full-resolution heightmap.
With properly selected mipmapping, and tessellation forced to keep sample point locations, it might become quite stable.
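A minimal sketch of that stability argument (hypothetical names, not Messiah's actual scheme): if the tessellation level only ever doubles, the sample locations of a coarse level are a strict subset of the finer one, and picking the mip whose texel spacing matches the sample spacing keeps the heightmap prefiltered to the Nyquist limit of the current tessellation.

    #include <cassert>

    // With power-of-two tessellation levels, edge samples sit at i / 2^level,
    // so every coarse sample coincides with an even-indexed fine sample:
    // refining adds points without moving the ones already there.
    double sampleU(int level, int i)
    {
        return double(i) / double(1 << level);
    }

    // Pick the heightmap mip whose texel spacing matches the sample spacing,
    // so the displacement signal is prefiltered to the Nyquist limit of the
    // current tessellation instead of being point-sampled and aliasing.
    int mipForLevel(int level, int heightmapLogSize)
    {
        int mip = heightmapLogSize - level;
        return mip < 0 ? 0 : mip;
    }

    int main()
    {
        for (int i = 0; i <= (1 << 4); ++i)
            assert(sampleU(4, i) == sampleU(5, 2 * i));
    }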
Yup. Smooth tessellation morphing is done by doubling the tessellation factor all at once and adjusting the displacement smoothly.
For some reason, the standard DX11 approach was not that. It smoothly moved the new vertices tangentially along the surface out of a "mother" vertex, sampling the heightmap along the way. That approach only maximized the "swimming". Missed opportunity...
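For illustration, a tiny sketch of the morphing idea described above, as hypothetical CPU-side C++ rather than an actual domain shader:

    // A vertex inserted at an edge midpoint starts with the average of its
    // parents' displacements (so the surface is unchanged at the instant the
    // factor doubles), then blends toward its own sampled height. Its
    // tangential (uv) position never moves; only the offset along the normal
    // changes, which is what avoids the swimming.
    float morphedDisplacement(float parentA, float parentB,
                              float ownSampledHeight,
                              float t)  // t goes 0 -> 1 over the transition
    {
        float inherited = 0.5f * (parentA + parentB);  // height on the coarse surface
        return inherited + t * (ownSampledHeight - inherited);
    }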
Nothing about hardware tessellation prevents tessellating an entire mesh at once. Doing that is not performant, though, and has little benefit over just sending a higher-resolution mesh to the GPU in the first place. The main advantage of hardware tessellation is dynamically changing the tessellation factor per patch.
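As a rough sketch of what "per patch" buys you (illustrative names, not any specific engine's hull-shader code):

    #include <algorithm>

    // One subdivision per targetTrianglePixels of projected edge length,
    // clamped to the D3D11 hardware range [1, 64]. Distant patches get few
    // triangles and near patches get many, without touching the rest of the
    // mesh or round-tripping new vertex buffers to the GPU.
    float tessFactorForEdge(float edgeLengthPixels, float targetTrianglePixels)
    {
        return std::clamp(edgeLengthPixels / targetTrianglePixels, 1.0f, 64.0f);
    }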
I'd like to play a game where I can turn from a giant into an ant
Ant-giant feels like a bit of a niche genre.
The theme is irrelevant to the question. It was just an example.
Will Nanite allow for a universally identical detail level?
For example, I'd like to play a game where I can turn from a giant into an ant with no change in graphical detail. Imagine a gameplay situation where you are stomping around as a giant in a densely underbrushed forest. With the press of a key you'd begin to transform into an ant. I say 'begin' because the transition would not be instant but smooth and rather slow, so you'd be able to clearly see everything during the miniaturizing and enlarging process.
The limiting factor there will be content requirements. You'd need the whole world modelled at a scale for a giant while also being modelled in the detail needed for the ant scale. Short of some incredible online streaming tech, you couldn't fit enough data in local storage for that graphical density, plus the costs to produce it are likely prohibitive. Even using photogrammetry, take a street for example: you'd need to photograph the whole thing an inch at a time for ant scale! Capturing an entire city, inside and out, at a detail level suited to an ant would take too much time and effort and, even if that were automated, too much data.
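A rough back-of-envelope along those lines, with made-up but plausible numbers (assumed city area, texel densities, and uncompressed storage):

    #include <cstdio>

    int main()
    {
        double surfaceAreaM2    = 1e8;  // assumed: ~10 km x 10 km of unique surface
        double bytesPerTexel    = 4.0;  // uncompressed RGBA; compression would help
        double humanTexelsPerM2 = 1e4;  // ~1 texel per cm^2
        double antTexelsPerM2   = 1e6;  // ~1 texel per mm^2

        std::printf("human scale: %.0f TB\n",
                    surfaceAreaM2 * humanTexelsPerM2 * bytesPerTexel / 1e12);
        std::printf("ant scale:   %.0f TB\n",
                    surfaceAreaM2 * antTexelsPerM2 * bytesPerTexel / 1e12);
    }

That's roughly 4 TB versus 400 TB: two orders of magnitude from texel density alone, before geometry, and assuming flat coverage rather than the true surface area of foliage and clutter.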
I looked at the Unreal Engine 5 version of Fortnite and I am very satisfied with the TSR upscaling, which runs together with dynamic resolution. I think it raises the low native resolution to a great image quality. In general, what are the experiences and opinions about TSR?

It's OK. There was already DRS before. The problem is that it's hard to compare to other solutions. But I immediately noticed the new Fortnite was running at a lower resolution, with more aliasing. So it's a trade-off: less resolution in exchange for RT reflections, GI, and a higher poly count. In the end the image fidelity obviously improved greatly, as the loss of resolution is not a big deal given the heavily diminishing returns of resolution.
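For anyone unfamiliar with how DRS fits in, here's a toy controller (illustrative only, not Unreal's actual heuristic): it scales the render resolution from measured GPU frame time, holding the frame budget fixed instead of the pixel count, and the upscaler (TSR here) fills the gap back to output resolution.

    #include <algorithm>
    #include <cmath>

    // GPU cost is roughly proportional to pixel count, i.e. to scale^2, so
    // correct the per-axis scale by the square root of the budget/measured
    // ratio, clamped so quality never collapses entirely.
    float nextResolutionScale(float currentScale, float gpuMs, float budgetMs)
    {
        float next = currentScale * std::sqrt(budgetMs / gpuMs);
        return std::clamp(next, 0.5f, 1.0f);  // e.g. never below 50% per axis
    }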
I'm curious as to whether a minimal, very white indoor environment is an easy or challenging use case for Lumen.

I'm not sure about this specific case, but in general arch-vis scenes with clean lines and light colors are more challenging, as a significant portion of the lighting comes from bounced (often multi-bounced) light and there are no diffuse textures or complex geometry/direct lighting to hide noise.
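A crude single-number model of why that is (it ignores geometry entirely and only shows the scaling with surface albedo): each bounce retains a fraction a of the energy, so total indirect light relative to direct light is the geometric series a + a^2 + ... = a / (1 - a).

    #include <cstdio>

    int main()
    {
        for (double a : {0.3, 0.7, 0.9})
            std::printf("albedo %.1f -> indirect/direct = %.1f\n",
                        a, a / (1.0 - a));
    }

At albedo 0.9 the bounced light carries nine times the energy of the direct light, so in a near-white room almost everything you see is the indirect solution, and any noise or lag in it sits front and center; darker, textured scenes bury the same error.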
Interesting differences between offline ray-traced light baking and real-time Lumen.