Unreal Engine 5 Tech Demo, Release Target Late 2021

Discussion in 'Console Technology' started by mpg1, May 13, 2020.

  1. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,440
    Likes Received:
    565
    Location:
    Finland
    Texture resolution is independent of screen resolution.

The smallest pebble or arrowhead can use an 8K texture (or the whole landscape can).


Virtual texturing like the one used in the demo allows loading only the areas of a texture that are visible, and only at the detail level needed.
    So a pebble with an 8K texture that never covers more than 50 pixels on screen only has small parts of it loaded.

It is also good to remember that loading can be distributed across multiple frames.
    So for a 1440p image, the amount of data could be around a single 4K image's worth, if the scenery is not changing.

Another advantage is that the amount of memory used can be kept constant, independent of the number of textures or their resolutions.
    http://silverspaceship.com/src/svt/
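A minimal sketch of the tile-selection idea (hypothetical names only, not Epic's or Barrett's actual code): a feedback pass records which (mip, tile) pairs visible pixels actually touch, and only those tiles ever get streamed in.

```cpp
// Hypothetical sketch of virtual-texture tile selection; illustrative
// names only. A "page" is one fixed-size tile (here 128x128 texels)
// of one mip level of one texture.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <set>
#include <tuple>

struct PageID {
    uint8_t  mip;
    uint16_t tileX, tileY;
    bool operator<(const PageID& o) const {
        return std::tie(mip, tileX, tileY) < std::tie(o.mip, o.tileX, o.tileY);
    }
};

constexpr int kPageSize = 128; // texels per tile side

// Called per visible sample (in practice via a GPU feedback buffer).
// u,v are texture coordinates; texelsPerPixel is the screen-space
// footprint of the base mip at this pixel.
void requestPage(std::set<PageID>& requests, float u, float v,
                 float texelsPerPixel, int baseTextureSize /* e.g. 8192 */) {
    // Pick the mip whose texel density matches the on-screen footprint,
    // so a pebble covering 50 pixels only ever touches tiny low-mip tiles.
    int mip     = std::max(0, (int)std::floor(std::log2(std::max(texelsPerPixel, 1.0f))));
    int mipSize = std::max(kPageSize, baseTextureSize >> mip);
    uint16_t tx = (uint16_t)((int)(u * mipSize) / kPageSize);
    uint16_t ty = (uint16_t)((int)(v * mipSize) / kPageSize);
    requests.insert({(uint8_t)mip, tx, ty}); // streamer loads only these tiles
}
```

Because requests are driven by screen pixels, the resident set is bounded by pixel count, not by how many 8K textures the scene contains; that is the constant-memory property mentioned above.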
     
  2. j^aws

    Veteran

    Joined:
    Jun 1, 2004
    Messages:
    1,939
    Likes Received:
    42
The mind-blowing aspect is the geometry detail in Nanite. For Lumen, the baked light maps used in the past can give similar-looking lighting. It's the dynamic nature of it, lighting this immense geometry in a way you don't get with light maps, that is mind blowing.
     
    egoless and BRiT like this.
  3. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,482
    Likes Received:
    15,934
    Location:
    Under my bridge
UE5 uses virtual textures. Each object has 8K source textures on disk, but only the parts of those textures that are visible and drawn are required. In a perfect VT engine you need one texel per pixel, so 1440p's 3.7 million pixels need 3.7 million texels. I can't remember what maps Epic said they were using, so for illustration let's say 10 bits per pixel for textures. That'd be 37 megabits, or roughly 5 MB per frame. At 60 fps, if you were refreshing every single texel, that's about 300 MB/s tops. However, texels are largely shared across frames, so the reality is far less than that. You also store tiles containing multiple used texels, so RAM requirements are higher than just 5 MB, but streaming requirements are much lower.
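Back-of-the-envelope version of that arithmetic (the 10 bits/texel is the illustrative guess from above, not a known UE5 format):

```cpp
// Worst-case texel streaming budget for a perfect one-texel-per-pixel
// virtual texture setup at 1440p/60.
#include <cstdio>

int main() {
    const double pixels       = 2560.0 * 1440.0;  // ~3.7 million at 1440p
    const double bitsPerTexel = 10.0;             // illustrative guess
    const double mbPerFrame   = pixels * bitsPerTexel / 8.0 / 1e6; // ~4.6 MB
    const double mbPerSecond  = mbPerFrame * 60.0;                 // ~277 MB/s
    std::printf("per frame: %.1f MB, 60 fps worst case: %.0f MB/s\n",
                mbPerFrame, mbPerSecond);
}
```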

The game Trials HD used VT at 720p. The calculations for that are on this forum, and Sebbbi stated, IIRC, 7 MB/s. 1440p is 4x the pixels, so about 28 MB/s doing the same thing Trials was.

This is the joy of virtualisation. The old way of rendering graphics was keeping it all in RAM and only using a tiny part of that dataset per frame. Virtualisation allows you to keep in RAM just the parts necessary for what you see (and are about to see). It introduces a huge efficiency in data requirements by reducing the working set. The upside is a far, far smaller RAM footprint, allowing more variety etc. The cost is higher streaming requirements, but that dataflow isn't particularly bandwidth-intensive for the amount of geometry you get to see.

Conceptually, it's like foveated rendering. There's no point drawing the bits of the screen in high fidelity if your eye can't resolve them, so render just the small portion your eye is looking at in high quality and render the rest in low. The result is exactly the same, but you reduce rendering requirements to a tiny fraction. Here, don't bother storing all the vertices (or texels) if you aren't using that data anytime soon. The future of rendering is moving towards efficiency, which is one of the key arguments some of us have raised over the raw TF comparisons of next-gen consoles with current-gen. Better ways of using the flops means doing more work with the same raw resources. We're working smarter, not harder, and that evolution is happening all around the computing space.
     
    Remij, goonergaz, chris1515 and 2 others like this.
  4. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,482
    Likes Received:
    15,934
    Location:
    Under my bridge
I felt the environmental lighting was pretty good, although the character isn't well lit or connected. I think one of the issues is the reuse of the same assets; basically, it's a whole lot of stone and rock. Not terribly exciting. Not even banners or flags or stuff. But the underlying principle is 'infinite geometry', solving a key problem with artwork creation. If Epic had infinite storage and didn't care about artistic integrity, they could have thrown thousands of unique items into the mix. As a tech, Nanite deserves the interest.

And the lighting I think is very effective and 'next-gen', even if it's leaning heavily on current-gen techniques (SDF shadow casters, for example). It gives a very coherent lighting model in the environments. It's not ground-breaking insomuch as we've seen that level of lighting in games this gen, such as Uncharted 4's home environments, but those are very static. Or The Tomorrow Children, where the environments were super simple. That level of lighting in realtime in complex environments is a clear step up.
     
  5. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,440
    Likes Received:
    565
    Location:
    Finland
I just had the realization that people may have missed those wonderful years of excitement surrounding progressive meshes and other continuous LOD schemes back in the day.
    http://hhoppe.com/proj/pm/

Hugues Hoppe had (has) amazing research, which seems to be coming back again in multiple ways.
    http://hhoppe.com/
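For anyone who missed that era, the core trick: store a coarse base mesh plus an ordered list of vertex splits, then replay or unwind them to dial triangle count continuously. A toy skeleton in the spirit of Hoppe's PM papers (structure and names are mine, heavily simplified):

```cpp
// Toy progressive-mesh skeleton; illustrative only, not Hoppe's code.
#include <cstddef>
#include <cstdint>
#include <vector>

struct VertexSplit {
    uint32_t parentVertex; // vertex that splits in two
    float    newPos[3];    // position of the spawned vertex
};

struct ProgressiveMesh {
    std::vector<float>       positions; // xyz per vertex
    std::vector<VertexSplit> splits;    // ordered coarse-to-fine refinements
    size_t applied = 0;                 // number of currently active splits

    // Continuous LOD dial: replay splits to refine, unwind them to coarsen.
    void setDetail(size_t target) {
        while (applied < target && applied < splits.size()) apply(splits[applied++]);
        while (applied > target) { undo(); --applied; }
    }

private:
    void apply(const VertexSplit& s) {
        positions.insert(positions.end(), s.newPos, s.newPos + 3);
        // A real PM also rewires the faces around s.parentVertex here.
    }
    void undo() {
        positions.resize(positions.size() - 3); // edge collapse: exact inverse
    }
};
```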
     
    turkey and j^aws like this.
  6. Mskx

    Newcomer

    Joined:
    Apr 20, 2019
    Messages:
    166
    Likes Received:
    163
Yeah, but 500 of the exact same statue, and using lots of triangles and high-res textures to make jagged brown rocks even more jaggedy, browner and rockier, is super redundant after a point, not very interesting to look at, and also not a great showcase for what actual games could/will be like using the engine.

    Yeah, the lighting being dynamic is impressive but it just doesn't look *THAT* good? Maybe if it was 60fps...
     
    John Norum likes this.
  7. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,660
    Likes Received:
    3,554
    Location:
    Barcelona Spain
    60 fps is the target, early tech, early devkit, early library...
     
    egoless likes this.
  8. Mskx

    Newcomer

    Joined:
    Apr 20, 2019
    Messages:
    166
    Likes Received:
    163
    Nice.
    I hope that is true for actual games and not just tech demos.
     
    John Norum likes this.
  9. BRiT

    BRiT Verified (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    15,531
    Likes Received:
    14,070
    Location:
    Cleveland
Exactly. It's likely at least 9 months before Early Preview builds and 17 months before Final Release; nearly an eternity in terms of being able to refine and optimize.
     
  10. j^aws

    Veteran

    Joined:
    Jun 1, 2004
    Messages:
    1,939
    Likes Received:
    42
Well, the scene content that you're not impressed by is more art direction than technology. Although I do think the scene suits the technology, showing it at its best.

Seeing 500 copies of that same statue isn't that impressive. But dynamic GI with bounced lighting for that scene's geometry density really is impressive. If I had camera control and I was playing, say, a Tomb Raider game, I would zoom in on a close-up of that statue and marvel at its geometry detail and lighting, which the engine should scale to with a 1:1 triangle-to-pixel ratio, REYES-style.

Or the last flyby scene at extreme speeds, where I can imagine a Spider-Man, Mirror's Edge, or Gravity Rush game flying around an immensely detailed cityscape.
     
    egoless likes this.
  11. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    965
    Likes Received:
    1,092
Exactly, but we have to admit that we've done the same forever with texture reuse. Geometry is just the new texture, it seems. Artists will deal with repetition, but the issue is there.
     
    megre and John Norum like this.
  12. goonergaz

    Veteran

    Joined:
    Jun 3, 2005
    Messages:
    3,551
    Likes Received:
    1,016
    Cheers, I can’t believe my understanding of how this is working was pretty spot on lol.

I suppose I was trying to work out how this tech would work on an HDD, but I'm still a bit perplexed by that. But at least it's slowly sinking in. I'm guessing this might be where the PS5's cache scrubbers might help out with all the swapping out of textures. Also, I recall people were worried about SSDs overheating, and surely this will mean they are constantly chugging away.
     
    megre likes this.
  13. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,660
    Likes Received:
    3,554
    Location:
    Barcelona Spain
From a technological point of view, I think next-generation will probably be more interesting than current-gen outside of titles like Dreams, Claybook, No Man's Sky, or Inside. Very fast storage, but there is a size problem, and innovation will come from that. I am happy to see a REYES-like renderer, but I want to know what the solution is for keeping game sizes reasonable.

The second point is triangle RT or GI approximation; that will be interesting too.
     
    megre, John Norum, milk and 1 other person like this.
  14. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,270
    Likes Received:
    2,599
    Location:
    Wrong thread
    Cheers for the explanation!

So really, the tremendous flexibility, legacy-mindedness, and citadel-style multi-layer security of the PC can create a kind of "death by a thousand cuts" in terms of some I/O operations.

Welp, let's hope Unreal 5 makes pooling of assets in RAM easy!

    Makes me appreciate how much I'm unable to properly appreciate. :runaway:

    I wish I were qualified to say!

Looking at his use of "unbuffered IO", and looking at some of the other pages on the MS Hardware Dev Center site DSoup linked to, I think Carmack might be talking about different driver I/O modes. How he's able to select between these types of access I dunno. Maybe he's tinkering with drivers, or maybe the drivers expose certain parts of their functionality that he's able to tap into.

    The MS page for "Using Direct I/O" (as opposed to buffered) lays out some of the advantages and disadvantages for direct:

    https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/using-direct-i-o

    "Drivers for devices that can transfer large amounts of data at a time should use direct I/O for those transfers. Using direct I/O for large transfers improves a driver's performance, both by reducing its interrupt overhead and by eliminating the memory allocation and copying operations inherent in buffered I/O.

    Generally, mass-storage device drivers request direct I/O for transfer requests, including lowest-level drivers that use direct memory access (DMA) or programmed I/O (PIO), as well as any intermediate drivers chained above them."

    ...

    "Drivers must take steps to maintain cache coherency during DMA and PIO transfers."

    Someone else is going to have to say whether I'm barking up the right tree or not!
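For what it's worth, the closest user-mode knob I'm aware of is FILE_FLAG_NO_BUFFERING, which bypasses the Windows file cache; it's adjacent to, but not the same as, the driver-level direct I/O transfer mode that page describes. A minimal sketch ("asset.pak" is a made-up file name):

```cpp
// Minimal sketch of cache-bypassing reads from user mode via
// FILE_FLAG_NO_BUFFERING. This skips the Windows file cache; the
// driver-level direct I/O in the MS docs is a related but separate
// mechanism further down the stack.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD kChunk = 4096 * 256; // 1 MB; must be sector-size aligned
    HANDLE h = CreateFileA("asset.pak", GENERIC_READ, FILE_SHARE_READ,
                           nullptr, OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::printf("open failed\n"); return 1; }

    // With NO_BUFFERING the buffer address, file offset and read size must
    // all be sector-aligned; VirtualAlloc returns page-aligned memory.
    void* buf = VirtualAlloc(nullptr, kChunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    DWORD got = 0;
    if (buf && ReadFile(h, buf, kChunk, &got, nullptr))
        std::printf("read %lu bytes with no file-cache copy\n", got);

    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
}
```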
     
    BRiT and DSoup like this.
  15. Mskx

    Newcomer

    Joined:
    Apr 20, 2019
    Messages:
    166
    Likes Received:
    163
Will we have that kind of density without it being 500 of the exact same statue? There isn't some sort of discounted duplication/reuse thing going on?
    And the same goes for the environment in the speed section: will that amount of density at high travelling speeds be possible when most assets are not just some variation of jagged brown rock?

That's probably being unfairly suspicious, I guess, though I do wish the demo dispelled some of those thoughts.
     
    milk likes this.
  16. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    2,458
    Likes Received:
    175
Don't know if it was just me, but that final scene where she flies certainly didn't look like 30 fps vsync. It looked like it was 50+ fps and very smooth. Like a movie. :)
     
    #956 Unknown Soldier, May 20, 2020
    Last edited by a moderator: May 20, 2020
    Picao84 likes this.
  17. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    2,505
    Likes Received:
    776
Looks like I won't have to upgrade for next gen (2080 Ti/3950X/NVMe) to play next-gen-console-quality game ports; I guess most will be UE5.
     
  18. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,472
    Likes Received:
    7,719
    Location:
    London, UK
Unbuffered I/O. There are a few different options for using unbuffered I/O in Windows, and I don't read Carmack as suggesting it as a viable option; he said "quibble" as in a technicality, i.e. you can drive most cars with just 3 wheels on, but you wouldn't want to.

The issue with unbuffered I/O is that if you don't respond to I/O reads/writes fast enough, then data is lost, which can be catastrophic for the storage system. It's not all about reads; there are writes too. Unless he says more about how he would do it and mitigate data loss, we're left guessing.
     
    Inuhanyou, Love_In_Rio, megre and 4 others like this.
  19. John Norum

    Newcomer

    Joined:
    Mar 23, 2020
    Messages:
    61
    Likes Received:
    68
    So Bright Memory is using Quixel models

Pretty good difference between the normal and high-detail models; just look at the self-shadowing.

(It will be released at 4K@60fps on XSX, and it actually runs at 4K@110fps on an RTX 2080 Ti.)
     
    #959 John Norum, May 20, 2020
    Last edited: May 20, 2020
    Dictator, jlippo, AzBat and 4 others like this.
  20. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,412
    Likes Received:
    3,270
Incidentally, the detail added by better self-shadowing could conceptually be achieved without micro-geometry. It is possible to trace a ray, or do some cheap approximation of that, in texture space against a heightmap of the model and integrate that with the shadow-mapping results. The inner surface bumps can also be approximated with POM or its cousin algorithms. It's really the silhouettes that are harder to replace actual geometry with (although even that is not impossible).
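A toy CPU-side version of that texture-space heightfield trace (names are illustrative; a real implementation would live in a shader):

```cpp
// Toy texture-space self-shadow trace against a heightmap, the kind of
// cheap approximation described above. March from the shaded texel toward
// the light; if the heightfield rises above the ray, the texel is shadowed.
#include <algorithm>
#include <vector>

struct HeightMap {
    int w, h;
    std::vector<float> height; // normalized [0,1], row-major
    float at(int x, int y) const {
        return height[std::clamp(y, 0, h - 1) * w + std::clamp(x, 0, w - 1)];
    }
};

// du,dv: light direction projected into texel space, per step;
// lightSlope: how fast the light ray rises, in height units per step.
float heightfieldShadow(const HeightMap& hm, int x, int y,
                        float du, float dv, float lightSlope, int steps = 16) {
    float rayH = hm.at(x, y);
    float fx = (float)x, fy = (float)y;
    for (int i = 0; i < steps; ++i) {
        fx += du; fy += dv; rayH += lightSlope;
        if (rayH >= 1.0f) break;                          // ray exits heightfield
        if (hm.at((int)fx, (int)fy) > rayH) return 0.0f;  // texel is occluded
    }
    return 1.0f; // lit; a softer version would accumulate partial occlusion
}
```

Modulating the shadow-map result by this term adds the fine self-shadowing without any extra geometry.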
     