Unreal Engine 5 Tech Demo, Release Target Late 2021

Discussion in 'Console Technology' started by mpg1, May 13, 2020.

  1. Jov

    Jov
    Regular

    Joined:
    Dec 16, 2002
    Messages:
    506
    Likes Received:
    3
    Yes, but they both have the extra custom compression hardware to help the I/O and the data flow. Compare the PC setup, with SSD --> RAM <--> VRAM over the same shared bus, to the PS5/XSX data flow diagrams.
    Obviously this will be mitigated if there's a bucket load of system RAM, 64/128 GB vs 16/32 GB.
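    As a rough illustration of what that compression buys (a quick sketch; the ratios are approximate vendor claims and the PC figure is an assumption):

        # Effective streaming throughput = raw drive bandwidth x typical compression ratio.
        # Raw figures are the published console specs; ratios are approximate vendor claims.
        drives = {
            "PS5 (Kraken)": {"raw_gb_s": 5.5, "ratio": 1.6},                  # ~8-9 GB/s typical claim
            "XSX (BCPack/zlib)": {"raw_gb_s": 2.4, "ratio": 2.0},             # ~4.8 GB/s typical claim
            "PC Gen3 NVMe, CPU decompress": {"raw_gb_s": 3.5, "ratio": 1.0},  # assumed, no HW decompressor
        }
        for name, d in drives.items():
            print(f"{name}: ~{d['raw_gb_s'] * d['ratio']:.1f} GB/s effective")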
     
  2. ThePissartist

    Veteran Regular

    Joined:
    Jul 15, 2013
    Messages:
    1,589
    Likes Received:
    537
    Seriously, just marketing? Seems to me that SSD was Sony's compromise, extra compute was Microsoft's. Only time will tell which was a wiser decision.

    Just because you haven't conceptualised peak SSD usage at 5.5 GB/s (or indeed somewhere between that and 9 GB/s) doesn't mean it doesn't exist, or that it's just "marketing".

    Maybe this PS5 demo does, or does not, show peak usage - we don't know. What we *can* assume is that there *will* be games (likely in this engine too) that *do* make use of the SSD at PS5's peak usage.

    Sony have literally banked on it.

    I'm not even convinced that a PC with huge RAM makes up the deficit; even one with 100+ GB only holds the equivalent of 10-15 seconds of SSD usage before it hits a bottleneck.

    (This of course assumes that not all of the data in that 100 GB is unique, because no storage solution could keep up with that.)

    Imagine trying to load 100+ GB at the beginning of your game in order to make up for the lack of a fast SSD. That's some insane inefficiency.

    PC gamers would be better off waiting until 6 GB/s+ SSDs are released.
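    A quick back-of-envelope on those figures (the drive speeds below are round-number assumptions):

        # Time to move 100 GB from storage into RAM at various assumed speeds.
        data_gb = 100
        drives_gb_s = {
            "SATA SSD": 0.55,
            "Gen3 NVMe": 3.5,
            "PS5 raw": 5.5,
            "PS5 compressed (typical)": 9.0,
        }
        for name, bw in drives_gb_s.items():
            print(f"{name}: ~{data_gb / bw:.0f} s to move {data_gb} GB")

    At 5.5-9 GB/s that 100 GB is only 11-18 seconds of peak throughput, which is the point above.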
     
    egoless, ultragpu, Karamazov and 2 others like this.
  3. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,738
    Likes Received:
    8,124
    Location:
    London, UK
    There are many options for solving this problem. One is to use larger buffers, although if next-gen assets are ramping up in size, this could create problems in itself. Another is to reduce asset complexity across the board so you are loading less. Another is to just live with some pop-in, as we have for years.

    The only data you want in memory is the data you need now, or imminently, that you can't load on the fly - and next-gen "on the fly" is orders of magnitude better, measured in microseconds rather than seconds. In the final demo sequence you see the character whizzing along at speed; now take that speed and scene complexity and apply it to Spider-Man 2 on PS5. You're no longer loading New York at 20 MB/sec, so you can build your world with insane levels of detail (accepting that the CPU/GPU needs to render it) and vastly more object variety, because it doesn't all need to be in memory many seconds before it's rendered.

    How much I/O bandwidth do you need? Nobody knows yet and the answer will likely change, ramping up over the course of the next generation. Is it better to have more and not need it, or need it and not have it?
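    To put rough numbers on "now or imminently" (purely illustrative assumptions, not figures from the demo):

        # Sketch: how much world must be resident ahead of the camera, and what
        # sustained stream rate a fast traversal implies. All numbers are assumed.
        camera_speed_m_s = 40.0        # assumed flight speed in the final sequence
        io_latency_s = 0.005           # assumed end-to-end request latency (~5 ms)
        asset_density_mb_per_m = 50.0  # assumed unique asset data per metre travelled

        lookahead_m = camera_speed_m_s * io_latency_s
        stream_rate_gb_s = camera_speed_m_s * asset_density_mb_per_m / 1000
        print(f"World that must already be resident ahead of the camera: ~{lookahead_m:.2f} m")
        print(f"Sustained stream rate implied: ~{stream_rate_gb_s:.1f} GB/s")

    The latency term is what collapses with fast NVMe storage; the sustained rate is where the raw bandwidth figure matters.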
     
  4. SlmDnk

    Regular

    Joined:
    Feb 9, 2002
    Messages:
    596
    Likes Received:
    222
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,028
    Location:
    Under my bridge
    Yep. Sony went all in to provide a next-gen IO system. If you are trying to create a next-gen streaming engine, it makes sense to use the most next-gen platform available in that regard. Once you've got the tech actually working, you can then look at what you need to optimise to get it onto other systems.

    I'm unconvinced by the marketing argument. I think this is a genuine technological partnership - not even official, but just two companies whose work is synergised at this point. Years ago, Sony went to development partners saying, "we think storage is going to be super important so we're focussing on a really fast IO system," and Epic said, "hey, that fits in with our ideas for virtualised geometry. We should totally get in on this." Or even, Sony went around major players asking what they thought the future was, and Epic (among others?) said, "virtual geometry, if the storage is fast enough," and a dialogue opened up, with Epic sharing their research and Sony looking at whether they could accommodate it or not. MS were also aware of the industry ideas and also went with fast storage, but Sony's emphasis on storage was perhaps a little more to Epic's liking for their R&D efforts.
     
    #505 Shifty Geezer, May 16, 2020
    Last edited: May 16, 2020
  6. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,741
    Location:
    Barcelona Spain


    Interesting that they're working to reach 1440p 60 fps on PS5 at the same quality with optimization.
     
  7. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,829
    Likes Received:
    1,142
    Location:
    Guess...
    True, but the PC is theoretically at least capable of higher throughput than the XSX, even when the XSX is using compression. And next-gen drives (available this year) won't be massively far from the PS5's compressed throughput either. Granted, the PS5 has other enhancements in place as well.

    Not that I think tons of expensive RAM is a good solution, but surely with 128 GB of RAM (costing around £650!) you could load the entirety of the game data into it. If game installs are going to be bigger than 128 GB next gen, then that PS5 SSD won't go very far at all. But even if they were larger, having 80% of all your game data (the most commonly used data at that) stored in RAM at all times would reduce the amount of data you need to stream from the SSD, and thus the bandwidth requirements.

    That's only 30-ish seconds on a Gen3 NVMe drive. And presumably you wouldn't need to load everything in order to start the game, just enough to get things up and running, then stream the rest in in the background. Isn't that how things work at the moment anyway, just on a smaller scale?
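    Rough numbers on both points (the install size, hit rate and demand figures are assumptions for illustration):

        # Sketch: a big RAM cache acting as a filter in front of the SSD.
        install_gb = 100          # assumed install size
        ram_cache_gb = 80         # ~80% of the install resident in system RAM
        demand_gb_s = 5.5         # assumed worst-case streaming demand
        gen3_nvme_gb_s = 3.5

        hit_rate = ram_cache_gb / install_gb
        ssd_needed_gb_s = demand_gb_s * (1 - hit_rate)
        print(f"SSD only has to cover ~{ssd_needed_gb_s:.1f} GB/s of a {demand_gb_s} GB/s demand")
        print(f"Filling RAM with the whole install: ~{install_gb / gen3_nvme_gb_s:.0f} s on Gen3 NVMe")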

    Yes this is the better solution IMO too. That should happen this year.
     
  8. John Norum

    Newcomer

    Joined:
    Mar 23, 2020
    Messages:
    61
    Likes Received:
    68
    I agree. How many games can you store on an 800+ GB SSD if geometry data exceeds 128 GB for a single game?
     
  9. fehu

    Veteran Regular

    Joined:
    Nov 15, 2006
    Messages:
    1,754
    Likes Received:
    746
    Location:
    Somewhere over the ocean
    Maybe you are referencing a different tweet than the one you quoted.
     
  10. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,741
    Location:
    Barcelona Spain
    Mr Daniel Wright from Epic said they are working to reach 60 fps at the same quality, and that today they can reach it by lowering the quality.

     
    #511 chris1515, May 16, 2020
    Last edited: May 16, 2020
    DSoup likes this.
  11. cheapchips

    Veteran Newcomer

    Joined:
    Feb 23, 2013
    Messages:
    1,170
    Likes Received:
    964
    I want to see what "lower quality" looks like. It's presumably more than just lower resolution. Also, I really want to see what the current-gen/mobile pipeline creates.
     
    BRiT likes this.
  12. fehu

    Veteran Regular

    Joined:
    Nov 15, 2006
    Messages:
    1,754
    Likes Received:
    746
    Location:
    Somewhere over the ocean
    Mr Daniel Wright, who is from Epic and not Sony, said they are working to reach 60 fps without specifying the platform, and the context was 1080p, not 1440p.
     
    PSman1700 likes this.
  13. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,741
    Location:
    Barcelona Spain
    I said PS5 because we know the demo runs at 1440p 30 fps on it, but maybe they are working to reach 1080p 60 fps on the next-generation console.

    We have no idea how the demo runs on Xbox Series X or PC.
     
    #514 chris1515, May 16, 2020
    Last edited: May 16, 2020
  14. Recop

    Veteran Newcomer

    Joined:
    Aug 28, 2015
    Messages:
    1,290
    Likes Received:
    625
    The demo looks really good. If Next-Gen really looks like that, it's really impressive.

    It looks way better than anything else on the market, but I'll wait for actual games before getting totally hyped.

    The motion blur looks bad though. Same for their "fluid simulation".

    Shouldn't it run better on the Series X?

    From what I understood, it is more powerful than the PS5 by a decent margin, but I haven't really followed this subject.
     
    #515 Recop, May 16, 2020
    Last edited: May 16, 2020
  15. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,741
    Location:
    Barcelona Spain
    It should run with a resolution around 15% better than the PS5's at 30 fps, if there is nothing fancy being done with the SSD.
     
  16. Nesh

    Nesh Double Agent
    Legend

    Joined:
    Oct 2, 2005
    Messages:
    12,455
    Likes Received:
    2,757
    I am pretty sure it has the CPU, memory bandwidth and GPU performance to handle such detail. What I wonder is whether this specific demo requires some other optimizations inherent to the PS5, like the faster SSD throughput or the higher clock speeds, to fully replicate it - the part where she flies so fast through such super-detailed scenery with zero pop-in, for example.
    It's possible that this demo was specifically tailored to the PS5's differentiating hardware choices to showcase what they allow. Or maybe not, and it is nothing more than a marketing deal with Sony to reveal Unreal Engine 5 first and exclusively alongside the PS5 brand.
    Epic barely makes a reference to the Xbox Series X, and we know PCs have the processing power but not a super-fast SSD solution as standard yet.
     
  17. j^aws

    Veteran

    Joined:
    Jun 1, 2004
    Messages:
    1,939
    Likes Received:
    42
    https://www.eurogamer.net/articles/...eal-engine-5-playstation-5-tech-demo-analysis

    "The vast majority of triangles are software rasterised using hyper-optimised compute shaders specifically designed for the advantages we can exploit," explains Brian Karis. "As a result, we've been able to leave hardware rasterisers in the dust at this specific task. Software rasterisation is a core component of Nanite that allows it to achieve what it does. We can't beat hardware rasterisers in all cases though so we'll use hardware when we've determined it's the faster path. On PlayStation 5 we use primitive shaders for that path which is considerably faster than using the old pipeline we had before with vertex shaders."

    Regarding software rasterising using CUs, I caught up on the Eurogamer article, and for PS5 at least, Epic are using its primitive/geometry shaders for this area instead of CUs. So that presumably leaves the CUs for lighting and shading.

    Also, RDNA2 has 4 primitive/geometry units?

    If they take a billion triangles in a frame, cull them down to 20 million losslessly, and then render at 1440p/30, those 4 primitive/geometry units are spitting out roughly:

    1440p ~ 3.7 million pixels per frame

    PS5 GPU ~ 2230 MHz / 30 fps = 74.3 million cycles per frame

    1 pixel rasterised every ~20 clock cycles (74.3 / 3.7) using the 4 primitive/geometry units

    So their REYES-style algorithm can process a pixel in about 20 cycles - is that quite cheap? The CUs are then free to shade and light for the dynamic GI.
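    Spelling the same arithmetic out (resolution and clock taken as exact; everything else as quoted above):

        # Per-pixel cycle budget for the rasterised frame, as estimated above.
        pixels_per_frame = 2560 * 1440        # ~3.7 million at 1440p
        gpu_clock_hz = 2230e6                 # PS5 GPU boost clock
        fps = 30

        cycles_per_frame = gpu_clock_hz / fps
        cycles_per_pixel = cycles_per_frame / pixels_per_frame
        print(f"~{cycles_per_frame / 1e6:.1f} million cycles per frame")
        print(f"~{cycles_per_pixel:.0f} GPU cycles available per rasterised pixel")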
     
  18. Recop

    Veteran Newcomer

    Joined:
    Aug 28, 2015
    Messages:
    1,290
    Likes Received:
    625
    OK, I think we will know soon. What's funny, though, is that this demo doesn't use hardware ray tracing. Some new technologies, while nice, aren't the game changer promoted by some PR.

    The GI is good enough for me.
     
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,028
    Location:
    Under my bridge
    I finally understand your position completely. You're saying that this unlimited detail was impossible before the compute-based rasterisation, and that that's the primary enabler.

    In that I agree, this is impossible without the change in rendering, and that part needed to be solved before the IO, making it the 'primary bottleneck'. This also isn't at odds with what I am saying about the whole demo and how content may need to scale by platform resources beyond GPU power; you're just looking at the problem differently, and asking a somewhat different question.

    As the textures are virtualised, they shouldn't require a huge amount of bandwidth by any stretch, since you only need bandwidth proportional to the number of pixels on screen. 1440p30 will be twice the workload of Trials' 720p60 texture streaming in terms of data flow. I don't know what the IO processing overheads are, though; they may be significant with many small, random-access block reads.
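    The "twice the workload" figure checks out if texel traffic scales with pixels rendered per second:

        # Virtual texturing traffic is roughly proportional to pixels rendered per second.
        def pixels_per_second(width, height, fps):
            return width * height * fps

        ue5_demo = pixels_per_second(2560, 1440, 30)   # 1440p30
        trials = pixels_per_second(1280, 720, 60)      # Trials-style 720p60 streaming
        print(f"1440p30 vs 720p60: {ue5_demo / trials:.1f}x the pixel (and texel) throughput")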
     