Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

... in which case Sony would have made a really canny move by signing this multiplatform engine demo (not even a game demo!) as a Sony promotional exclusive. Bravo! *tips fedora*

You're not alone in discussing this. But it probably is the reason for the UE5 demo; they kind of have to. Besides, it is really an interactive tech demo. If the HB2 demo is any indication of what to expect in-game, that to me is more impressive: it's doing more in the way of character animation/modelling, seemingly ray-traced reflections, fire, day/night cycles etc., at 4K30. I mean, if MS implies it's actually being rendered on XSX hardware and is representative of in-game graphics, I see it as worth just as much, just without all the hype commentary like "we have this many triangles per object" etc. Unless they're straight-out lying, of course.
But we will probably get to see more in the (near) future.

Or in the case of that demo it might be more important to have double the SSD speed than 20% more GPU power.
Who knows.

The demo seems to be more GPU-reliant, and yes, double the SSD speed, but not double the compression. I highly doubt this won't be even more impressive on the XSX, with more GPU and CPU power to play with and, more importantly, less bandwidth restriction, with BCPack mitigating the lower peak transfer rate. I think we're in for a treat there as well.
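To put rough numbers on the raw-vs-compressed point, here is a back-of-envelope sketch. The raw rates and "typical" compressed rates are the publicly stated ballpark figures from Sony's and Microsoft's presentations, not measurements; treat them as assumptions.

```python
# Rough effective-bandwidth comparison under compression.
# Figures below are the publicly quoted ballpark numbers, used as assumptions.

def effective_rate(raw_gbps: float, compression_ratio: float) -> float:
    """Effective read rate when data is stored compressed on the drive."""
    return raw_gbps * compression_ratio

# PS5: 5.5 GB/s raw, ~8 GB/s typical with Kraken (per Sony's Road to PS5 talk)
ps5 = effective_rate(5.5, 8.0 / 5.5)
# XSX: 2.4 GB/s raw, ~4.8 GB/s typical with BCPack/zlib (per MS's specs)
xsx = effective_rate(2.4, 4.8 / 2.4)

print(f"PS5 effective: ~{ps5:.1f} GB/s")
print(f"XSX effective: ~{xsx:.1f} GB/s")
print(f"ratio: {ps5 / xsx:.2f}x")
```

So with typical compression the gap narrows from 2.3x raw to roughly 1.7x effective, which is the mitigation being described.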
 
Probably because of the different scope of “boats”. He’s thinking about the entire system end-to-end. A higher-clocked GPU will raise, and also demand, higher performance system-wide, in this case all the way from the SSD to the GPU internals. You don’t have to keep all the units fully utilized to reap the maximum benefit.

That is not the only thing he has to be concerned with, though. There is also efficiency, heat, bottlenecks, etc. They may have taken measurements about utilization too (e.g. what percentage of “tflops“ and parts are actually used in games on average). It’s probably out of context to take only one sound bite from the entire presentation.

We’ll know how well his approach works when the games are out.
 
Are geometry engines in RDNA similar to ROPs? Could both consoles have the same number of geometry engines/primitive shaders?
 
The geometry engine is likely equivalent to mesh shaders. So it's a compute-based front end that does its work over the CUs.
Fixed-function units are largely ignored.
 
I think the best thing here is to wait until further details of the demo emerge. I really doubt this demo, at this level of detail, is somehow exclusive to the exact performance of the PS5 SSD spec. This is Unreal we are talking about here, one of the most widely used engines in game development. It doesn't serve them to make an engine or demo that serves just one particular client. Sony has had a UE demo since the PS3, so this is nothing new. It is a savvy marketing move to align the first view of this demo/engine with the PS5, especially given that the one metric in which the PS5 exceeds the XSX is one of the important components in enabling this engine. I doubt this engine needs exactly 5.5 GB/s to do what it does.
 
It would be stupid to think Epic wouldn't have implemented some kind of LOD to allow lower-quality assets to be fetched on slower hardware. Otherwise it would just be a massive pop-in fest. Epic probably thinks about the range of hardware that needs to be supported and scales accordingly. Reasonable would be something like a 1.5 GB/s read floor for lower-end NVMe SSDs and a high watermark somewhere around 7 GB/s. That's roughly only a 4x difference in speed. Perhaps always full-detail geometry, but half the texture resolution. If geometry is already tied to display resolution, then there has to be a way to load lower-detail geometry too, and that would save additional bandwidth.

Another approach could be to just have less variety. In something like Grand Theft Auto, faster hardware might allow for more variety in pedestrians and cars, and less pop-in when bringing in new assets.
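A policy like the one speculated above could be sketched as follows. The tier cutoffs (1.5 and 7 GB/s) come from the post itself; the function name and the returned fields are made up for illustration, not anything Epic has described.

```python
# Hypothetical quality-tier selection driven by measured drive throughput.
# Cutoffs (1.5 / 7.0 GB/s) are the ballpark range from the post; everything
# else here is an assumption for illustration.

def pick_asset_tier(read_gbps: float) -> dict:
    if read_gbps >= 7.0:
        # High watermark: full-resolution textures and full geometry.
        return {"texture_scale": 1.0, "geometry": "full"}
    elif read_gbps >= 1.5:
        # Scale texture resolution linearly with available bandwidth,
        # from half-res at the floor up to full-res at the watermark.
        frac = (read_gbps - 1.5) / (7.0 - 1.5)
        return {"texture_scale": 0.5 + 0.5 * frac, "geometry": "full"}
    else:
        # SATA/HDD fallback: half-res textures and reduced geometry detail.
        return {"texture_scale": 0.5, "geometry": "reduced"}

print(pick_asset_tier(5.5))   # a PS5-class drive lands mid-range on textures
print(pick_asset_tier(0.5))   # a SATA SSD falls back to reduced geometry
```

The "less variety" approach from the second paragraph would slot in the same way: the streaming budget picks how many unique pedestrian/car assets are resident at once.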
 
I think the best thing here is to wait until further details of the demo emerge. I really doubt this demo, at this level of detail, is somehow exclusive to the exact performance of the PS5 SSD spec. This is Unreal we are talking about here. One of the most widely used engines in game development. It doesn't serve them to make an engine or demo that serves just a particular client. Sony has had a UE demo since the PS3 so this is nothing new. It is a savvy marketing move to align the first view of this demo/engine with the PS5, especially given the one metric in which the PS5 exceeds the XSX is one of the important components in enabling this engine. I doubt this engine needs exactly 5.5gb/s to do what it does.

It doesn’t look like it’s just an event, a marketing move.

It has more to do with their nextgen vision and how they follow through. They spent a lot of time working together, focusing on and optimizing little pieces and parts to achieve what they have today, instead of talking about tflops yet again. It’s about the entire system and a holistic approach.

I doubt this is the only way to realize nextgen. People get anxious and hung up because they are impatient. I am pretty sure some other games and tech will showcase their nextgen-ness too. If you look at the PS3 era, despite all the launch fireworks, the most impressive games came later, and some of what made them impressive wasn't even tech related (e.g., the Souls series).
 
But if you can't load the data for those triangles fast enough you might have some problems
But you go from 20-million-triangle meshes to 3.7 million output in the end. That’s about 18% of the triangles being rendered from the pre-culled 20 million available from the source. If we put this SSD in a PS4, will it output more detail at 720p than the XSX? If you follow that line of logic, I/O here far surpasses its compute capability. A 1440p resolution is nowhere near enough to save the triangles from being subpixel-culled and discarded.
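The ratio being argued about checks out arithmetically, and the 3.7 million figure is no accident; a minimal back-of-envelope, using the per-frame numbers from the post:

```python
# Worked numbers from the post: ~20M candidate triangles per frame,
# ~3.7M actually drawn after culling.
source_tris = 20_000_000
drawn_tris = 3_700_000

print(f"{drawn_tris / source_tris:.1%} of fetched triangles survive culling")

# At 1440p there are ~3.7M pixels, so the drawn count approaches one
# triangle per pixel -- anything finer gets subpixel-culled, which is
# why resolution caps useful geometric detail.
pixels_1440p = 2560 * 1440
print(f"1440p pixel count: {pixels_1440p:,}")
```

So the 18% figure is just 3.7M / 20M, and the drawn-triangle count is essentially pinned to the pixel count of the output resolution.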
 
It is not always about big numbers. Efficiency is important too.

In software, if you do things differently and save resources, you can do more.
But you go from 20-million-triangle meshes to 3.7 million output in the end. That’s about 18% of the triangles being rendered from the pre-culled 20 million available from the source. If we put this SSD in a PS4, will it output more detail at 720p than the XSX? If you follow that line of logic, I/O here far surpasses its compute capability. A 1440p resolution is nowhere near enough to save the triangles from being subpixel-culled and discarded.

If you put that SSD subsystem into a PS4 (along with its dependencies), devs would be able to make games differently. It would be a different and better game altogether. They might have more time to do things that surprise you. Personally I wouldn’t be looking only at the resolution. That is what the presentation is trying to tell people.

“Developers, developers, developers”
 
What a demo!

I was personally very pleased, as geometric detail and dynamic lighting/global illumination have always been my main interests in graphics, so it's good to see this huge leap in both areas.

None of them came out of nowhere either. Epic did try to push dynamic GI for PS4, but they couldn't get the performance/quality to where it needed to be... They thought dynamic lighting was gonna be their big "next gen" feature, but then they discovered PBR could grant them an equally awe-inspiring leap, and that one actually was within their reach. That was the difference between the Elemental demo and the Infiltrator one. Elemental had 100% dynamic lights, but their materials were not PBR yet. Infiltrator went with PBR everything, but a lot of light baking. Oh well.

Equally with geometry, Epic had foreshadowed this tech many times in the past if you paid attention. Sweeney, for example, gave a pre-PS4 presentation where he discussed his big-picture vision for the future of real-time rendering technology. I was actually trying to find that again last week to discuss with you guys. In a slide about the geometry pipeline there was something like: "PS3 pipeline: author a high-poly asset and a low-poly proxy for in-game use, and bake normal maps by comparing the two. If the game needs LODs, author those by hand. PS4 pipeline: author a high-poly asset and a low-poly proxy (more detailed than a PS3 one, though) and the engine will build a full LOD chain for you with one click. PS5: hopefully by then, the engine will render the full high-poly asset as is." I think the slide said something like "Just render the fucking ZBrush model IN GAME!"

The fucker was right!
 

https://disruptiveludens.files.wordpress.com/2018/02/timhpg2009.pdf

It was from 2009. I found the PDF again.
 
But you go from 20-million-triangle meshes to 3.7 million output in the end. That’s about 18% of the triangles being rendered from the pre-culled 20 million available from the source.
How many triangles are being fetched from storage and then culled to what's rendered on screen?
 
I haven't looked at how large the assets are. But it seems like there aren't that many unique ones to begin with in the outdoor environment level. How much RAM would be used if you simply brute-forced it and loaded all of them into RAM?

You know, to see what it takes for PCs that can't get full NVMe performance because of software/API issues.
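A rough estimate is possible from public numbers. Epic reportedly quoted ~33 million source triangles for the statue asset; the per-triangle byte cost below assumes naive unindexed, uncompressed vertex data and is purely an assumption, as is the unique-asset count.

```python
# Back-of-envelope RAM cost of brute-force loading source assets.
# Per-triangle storage and the asset count are assumptions; the 33M
# triangle statue figure is the number Epic reportedly quoted.

BYTES_PER_TRI = 36            # 3 verts x 3 floats x 4 bytes, no index reuse
tris_per_statue = 33_000_000  # reported source triangle count for the statue

one_statue_gb = tris_per_statue * BYTES_PER_TRI / 1e9
print(f"one uncompressed statue: ~{one_statue_gb:.2f} GB")

# If the level reused, say, 500 unique source assets of similar size:
scene_gb = 500 * one_statue_gb
print(f"500 unique assets resident: ~{scene_gb:.0f} GB")
```

Even one asset is over a gigabyte uncompressed, so brute-forcing a whole level into RAM quickly runs into hundreds of gigabytes, which is presumably why streaming plus heavy reuse is the whole point.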
 