Unreal Engine 5 Tech Demo, Release Target Late 2021

Discussion in 'Console Technology' started by mpg1, May 13, 2020.

  1. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    967
    Likes Received:
    1,094
Disagree in general. RT is not efficient at integrating the whole visible half-space of a given sample point. For that, a simplified representation of the scene can give results faster and even more accurately.
Not sure if that's the case for Lumen, but they have overall better quality and similar lag to Metro Exodus IMO.
So, if you want path-traced lighting, and compute beats RT in a certain aspect, you would just combine both. They do not exclude each other. E.g. instead of tracing paths of 10 segments to capture 10 bounces, you would trace just one or two segments and at the last hit sample the result from Lumen. You get infinite bounces and less noise in less time.

Mesh shaders would not ultimately help with subpixel triangles. There is no point in drawing a triangle for a single pixel, and mesh shaders are still about feeding the hardware rasterizer, which is not built for tiny triangles.
The mesh shader options to enqueue work and transfer data on-chip instead of through main memory are interesting and could help draw the remaining >1px triangles faster, I guess, but there are not many of those.

So if we want, we could look at it the other way around: Epic has achieved more with flexible compute than others have, so far, with new fixed-function HW like RT and mesh shaders.
It is thus questionable what is being left behind, or which direction is better. But we might need more flexibility on the hardware side so we do not run into incompatibilities and restrictions that hold things back.
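    The combination described in the post, trace one or two real segments, then end the path by sampling a cached GI representation at the last hit, can be sketched with a toy scene. Everything here is hypothetical illustration (the two-surface scene, the cached values), not how Lumen actually stores irradiance:

    ```python
    # Toy sketch of the hybrid idea above: trace only one or two real ray
    # segments, then terminate the path by reading a cached irradiance field
    # (a stand-in for a Lumen-like scene representation). All values are made up.

    # Cached diffuse irradiance per surface, as a simplified GI system might hold it.
    CACHED_IRRADIANCE = {"floor": 0.6, "wall": 0.4}

    # Direct light reaching each surface, and its albedo.
    DIRECT_LIGHT = {"floor": 1.0, "wall": 0.2}
    ALBEDO = {"floor": 0.5, "wall": 0.8}

    def shade(surface, bounces_left):
        """Radiance leaving `surface`: direct light plus one indirect term."""
        direct = DIRECT_LIGHT[surface]
        if bounces_left == 0:
            # Path ends here: instead of tracing more segments, read the cache.
            indirect = CACHED_IRRADIANCE[surface]
        else:
            # Trace one more real segment (toy scene: floor sees wall and vice versa).
            hit = "wall" if surface == "floor" else "floor"
            indirect = shade(hit, bounces_left - 1)
        return ALBEDO[surface] * (direct + indirect)

    # One real bounce, then the cache lookup -- "infinite bounces" for the
    # cost of a single traced segment plus a read.
    print(round(shade("floor", 1), 4))  # -> 0.74
    ```

    The cache already amortises many bounces into its stored value, which is why terminating into it gives the multi-bounce look without the noise of long paths.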
     
  2. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,162
    Likes Received:
    5,463
I'll be curious to see whether the push for 1 polygon per pixel leads to AMD and Nvidia investing more in their rasterizer hardware, or whether this ends up being the point where software rasterizers start to take over. There must be a reason why they've never pushed the fixed-function rasterizers harder, like having two per shader array on AMD instead of one.

Edit: Actually, a good way to think about it is that each raster pipe on RDNA can output 16 pixels from 1 polygon per clock. I'm not sure what the average case would be. Most games probably have a lot of polygons that cover more than 16 pixels. So say the average is something like 8 pixels per polygon (a made-up number just for demonstration): if you want to push for 1 pixel per polygon, you probably need 8 times as many raster units. GPUs would really need to make massive improvements to keep up with software rasterizers for that particular use.
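    The arithmetic in that edit can be made explicit. The only hard number is the ~16 pixels per triangle per clock per RDNA raster pipe; the 8-pixel average is the post's own illustrative figure:

    ```python
    # Back-of-envelope version of the argument above: a raster pipe processes
    # one triangle per clock, scanning out at most 16 of its pixels.
    PEAK_PIXELS_PER_CLOCK = 16

    def pixel_throughput(avg_pixels_per_tri):
        """Pixels per clock one pipe sustains: one triangle per clock,
        capped at the pipe's scan-out width."""
        return min(PEAK_PIXELS_PER_CLOCK, avg_pixels_per_tri)

    today = pixel_throughput(8)   # 8 px/clock with 8-pixel triangles
    nanite = pixel_throughput(1)  # 1 px/clock with pixel-sized triangles

    # To hold the same fill rate with 1-pixel triangles you need 8x the pipes.
    print(today / nanite)  # -> 8.0
    ```

    The triangle-setup rate, not the pixel width, becomes the bottleneck once triangles shrink below the pipe's scan-out width, which is exactly the regime a software rasterizer can target.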
     
    #982 Scott_Arm, May 21, 2020
    Last edited: May 21, 2020
  3. ThePissartist

    Veteran Regular

    Joined:
    Jul 15, 2013
    Messages:
    1,559
    Likes Received:
    507
I just checked out the original video from Epic, towards the beginning where there's one coloured polygon per pixel. If you look in the upper right of the image as the camera pans, you can actually see the pixels transitioning between what appear to be different LODs.

Does that contradict what Epic stated, or was it only DF that suggested LODs wouldn't be present?

    Unless it's just my eyes.

    Here's the link FYI:
     

    Attached Files:

    John Norum and JoeJ like this.
  4. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,162
    Likes Received:
    5,463
@ThePissartist I don't think so. The engine probably adjusts LOD dynamically. What they were talking about was avoiding the creation of multiple static LODs stored on disk and swapped between. This solution should have one model stored on disk as the highest LOD, and the engine will dynamically scale it as it's rendered.
     
    jlippo, Inuhanyou, milk and 4 others like this.
  5. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,679
    Likes Received:
    3,578
    Location:
    Barcelona Spain
Exactly, it is non-authored LOD, or continuous LOD, or what some people would call true LOD.
     
    milk likes this.
  6. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,494
    Likes Received:
    7,746
    Location:
    London, UK
An issue that may arise with this: if you have a crazy large LOD for, say, realtime cutscenes with closeups, and you use that same LOD for gameplay, how much memory overhead do you need for these higher-detail models? Presumably they need to be in memory for the engine to scale them down.

I think it's impressive that UE5 can do this in realtime, but I wonder, in terms of asset size in memory, how desirable it will be to have one incredibly detailed LOD per object. You may still want a high and an intermediate one, one for cutscenes and the other for gameplay.
     
    John Norum likes this.
  7. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,162
    Likes Received:
    5,463
    That's where the "virtual geometry" part comes in. We don't know how it works yet, but it's assumed they don't need to load the entire mesh off disk, or at least if they do they don't need to keep the entire mesh in memory, but only the chunks that are needed to render, like virtual texturing.
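    The virtual-texturing analogy in the post can be sketched as a residency cache: keep only the mesh chunks the current view needs, stream in on a miss, evict the least-recently-used. The class and the chunk granularity are hypothetical, this is the general technique, not Nanite's actual scheme:

    ```python
    from collections import OrderedDict

    # Hypothetical sketch of "virtual geometry": mesh clusters are paged in
    # and out of a fixed-size resident set, exactly as a virtual texturing
    # system does with texture tiles.
    class ClusterCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.resident = OrderedDict()  # cluster_id -> geometry blob

        def request(self, cluster_id, load_from_disk):
            """Return a cluster, streaming it in on a miss."""
            if cluster_id in self.resident:
                self.resident.move_to_end(cluster_id)  # mark recently used
                return self.resident[cluster_id]
            blob = load_from_disk(cluster_id)          # the only disk touch
            self.resident[cluster_id] = blob
            if len(self.resident) > self.capacity:
                self.resident.popitem(last=False)      # evict the LRU cluster
            return blob

    cache = ClusterCache(capacity=2)
    loads = []
    fake_disk = lambda cid: loads.append(cid) or f"mesh-{cid}"
    for cid in ["a", "b", "a", "c"]:   # "a" is reused, "b" gets evicted
        cache.request(cid, fake_disk)
    print(sorted(cache.resident))      # -> ['a', 'c']
    ```

    The point of the analogy: memory cost scales with what is visible, not with how detailed the source asset is on disk.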
     
    jlippo, Inuhanyou, milk and 2 others like this.
  8. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    967
    Likes Received:
    1,094
Looks a bit like triangle patches at different LODs, but still triangles for both, contrary to my assumption of triangles only in the leaves. This could explain their claim to support any platform, and the tech would be even more impressive.
Probably worth downloading from Vimeo to take a closer look... :)
     
    jlippo and chris1515 like this.
  9. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,525
    Likes Received:
    15,981
    Location:
    Under my bridge
    Not at all. They're virtualised and only portions streamed on demand. Sweeney explicitly stated it allows more geometry to be used than can fit in memory. ;)
     
    DSoup likes this.
  10. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,162
    Likes Received:
    5,463
I do think they've stated Nanite will only work on high-end PCs and next-gen consoles. For older platforms and mobile devices, the UE editor will have tools to convert the assets to a traditional render pipeline. I'm assuming it will have configurable options to reduce geometry and bake normal maps if you're targeting low-end platforms. I'm not sure if the barrier is just raw compute power (likely).
     
    PSman1700, JoeJ, milk and 1 other person like this.
  11. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,494
    Likes Received:
    7,746
    Location:
    London, UK
Good call, I'd forgotten that. I do wonder how this will work; perhaps the model exists in more conventional LOD-level-like layers. I get that in a closeup you can use only the bit of the model that's visible, but if you pull back then you have most of the model in view, and you don't want a massive model scaled down if you're removing all the detail anyway.
     
  12. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,414
    Likes Received:
    3,273
Exactly. They'll try to bridge the gap with tools. How effective they'll be, the future will tell.
     
    PSman1700 and pharma like this.
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,525
    Likes Received:
    15,981
    Location:
    Under my bridge
This whole thread (outside of the bits talking about storage speeds and compression ;)) has been considering that. We have interesting developments like the representation of geometry as flat, texture-like data, and expectations of tree structures. The geometry will certainly be subdivided into 'tiles' of some description, possibly 3D tiles, although 2D representations would avoid that. A hierarchy seems a given, with LOD progressing down layers within the object and detail progressively childed. This might be part of the 'cooking' alluded to in the Chinese presentation, with automated tools taking the source data and processing it into a usable format for Nanite.

I guess the immediate interpretation of the description, using the source geometry and not having to create game-level content, makes us assume the same raw vertex info sits in storage and is just streamed, but I guess that's not the case. Alternatively, that is the case, and Epic are somehow fetching the necessary triangles from this raw data. That seems unlikely.
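    One guess at the hierarchy described above: each node is a cluster of triangles at some simplification level, and rendering picks a "cut" through the tree, descend while a node's error is too large for the view, otherwise draw the node. Names, error values, and the tree shape are all hypothetical sketch material, not Nanite's actual structure:

    ```python
    # Hypothetical LOD hierarchy: coarse clusters at the top, finer children
    # below. Rendering selects the coarsest clusters whose geometric error
    # is acceptable for the current view.
    class Cluster:
        def __init__(self, name, error, children=()):
            self.name = name          # cluster id
            self.error = error        # error of this simplification level
            self.children = children  # finer clusters that replace it

    def select_cut(node, max_error, out):
        """Collect the coarsest clusters whose error is within budget."""
        if node.error <= max_error or not node.children:
            out.append(node.name)     # good enough (or already a leaf): draw it
            return
        for child in node.children:   # too coarse: recurse into finer clusters
            select_cut(child, max_error, out)

    leaves = [Cluster("leaf-a", 0.1), Cluster("leaf-b", 0.1)]
    mid = Cluster("mid", 1.0, leaves)
    root = Cluster("root", 4.0, (mid, Cluster("mid-2", 0.8)))

    far_view, near_view = [], []
    select_cut(root, max_error=2.0, out=far_view)   # coarse clusters suffice
    select_cut(root, max_error=0.5, out=near_view)  # need the finest detail
    print(far_view, near_view)  # -> ['mid', 'mid-2'] ['leaf-a', 'leaf-b', 'mid-2']
    ```

    A cut like this is what makes the "cooked" format streamable: only the clusters on the current cut, not the whole tree, need to be resident.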
     
    jlippo, PSman1700, milk and 2 others like this.
  14. goonergaz

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    3,554
    Likes Received:
    1,016
To me it seems this tech is a way to get close-to-hardware-quality GI, shadows and audio via software instead of using the h/w RT, thereby freeing up resources. That should free up the h/w RT to work on reflections... but maybe I got the wrong end of the stick?

I know, the misinformation that PS5 lacks certain features...

I personally think PS5 potentially has more to gain, in the sense that this will help close the gap opened by the extra GPU grunt of XSX.
     
  15. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,494
    Likes Received:
    7,746
    Location:
    London, UK
You ignore my attempts to educate those who don't understand how Windows works, about how Windows works. :-(

I vaguely recall that in a previous job we had a 'deep bit packing' method whereby data of increasing detail levels (it was mapping data) was interleaved in a single stream, e.g. 2x bytes highest level, 4x bytes intermediate level, 8x bytes lowest level, repeated ad infinitum. This relies on your CPU/bus/RAM architecture not penalising this type of data access.
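    A toy reconstruction of that interleaving: fixed records of (2 bytes coarse, 4 bytes medium, 8 bytes fine) repeated through one stream, so a reader wanting a single detail level can hop in fixed strides. The 2/4/8 split comes from the post; the code and level names are illustrative:

    ```python
    # Interleaved multi-detail stream: each 14-byte record carries all three
    # levels, so any one level can be extracted with strided reads.
    RECORD = [("coarse", 2), ("medium", 4), ("fine", 8)]
    STRIDE = sum(size for _, size in RECORD)  # 14 bytes per record

    def extract_level(stream, level):
        """Pull out just one detail level without touching the others."""
        offset = 0
        for name, size in RECORD:
            if name == level:
                break
            offset += size
        out = bytearray()
        for base in range(0, len(stream), STRIDE):
            out += stream[base + offset : base + offset + size]
        return bytes(out)

    # Two records' worth of data, levels tagged by letter for readability.
    stream = b"cc" + b"mmmm" + b"ffffffff" + b"CC" + b"MMMM" + b"FFFFFFFF"
    print(extract_level(stream, "coarse"))  # -> b'ccCC'
    print(extract_level(stream, "medium"))  # -> b'mmmmMMMM'
    ```

    The layout trades a little wasted read bandwidth (you drag all levels through the caches) for seamless level switching, matching the behaviour described in the next posts.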
     
    BRiT likes this.
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,525
    Likes Received:
    15,981
    Location:
    Under my bridge
The problem here is: how do you trace against triangles when half the triangles of your mesh aren't present in RAM? How do you harmonise Nanite loading just the front half of an enemy with raytracing a reflection of their back half in a mirror? Nanite may be an optional feature for some games that don't use reflections...

Was that in RAM or in storage? I guess we've had interleaved data like PNGs for decades, but it's always with a view to a complete serial load AFAIK, and doesn't work with partial access.
     
    zupallinere and BRiT like this.
  17. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,494
    Likes Received:
    7,746
    Location:
    London, UK
The mapping data was tens of terabytes in size and was read off a RAID of HDDs, streaming into RAM in chunks, but we wanted to keep memory usage to a minimum, hence the interleaving. Because you're effectively reading all three detail levels from HDD through various caches, switching between detail levels was seamless for disk I/O, and RAM only came under pressure when the highest detail levels were used. Mostly it balances out: you see either more lower-detail mapping or less higher-detail mapping, and disk I/O is constant and therefore predictable, which is always nice!
     
    jlippo and BRiT like this.
  18. goonergaz

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    3,554
    Likes Received:
    1,016
Can't you 'just' draw the bits that need reflecting? The algorithms/rays are already there for the light path, so add a path that says 'if this is reflective, work out what it will reflect and draw that bit', kind of thing?
     
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,525
    Likes Received:
    15,981
    Location:
    Under my bridge
I don't think so, because the reflected space isn't contained, unlike the camera space. The camera views a cone in front of it. If you had a reflective puddle on the floor and two windows either side of the screen, you'd need geometry data for three different viewpoints. The rate of change in these viewpoints could also be dramatic, far more than the rate of change of the player camera. You'd also need this triangle data in a BVH for the hardware to trace against, and that data would be constantly changing.

I think you'd be better off with simplified proxy geometry. You don't need 1:1 fidelity for reflections at a distance. Have traceable geometry for the level (which could be whole objects streamed in and out) with simpler shaders, and trace this for reflections and maybe even lighting, combining the results with the detailed geometry of Nanite.
     
  20. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,414
    Likes Received:
    3,273
Yeah, that's what I have been thinking. Build and update a lower-poly proxy of the scene, still using Nanite, for the RT rays. No culling for this one. Maybe some less aggressive LODing based solely on distance to the camera's centerpoint may still be a win, but this can be relaxed in the name of updating the proxy less often.
     