Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

I don't. That part I can't calculate. You're unlikely to render at that level of detail on the XBO, so you'll need to supply assets with lower polygon counts and correspondingly lower-quality textures. But how the engine interacts with those assets should be the same, just scaled down.

I think this is wrong. Xbox One does not support any form of mesh shaders. It's a totally different front-end design. Xbox One and PS4 are stuck with vertex shaders. PS5 and Series X have general compute front-ends which are way more flexible and far faster because they can take advantage of general compute power. Without primitive/mesh shaders, we don't even know whether vertex shaders would be suitable for accessing and processing whatever data structure they've created for geometry. It's no longer just a vertex buffer to read in and process.
 
That's running on a Warp 560 turbo card. I went looking for YT examples, but they are all on emulators and accelerators AFAICS. I remember playing Doom clones like Alien Breed 3D in tiny windows with chunky pixels to get some sort of playable framerate.

Doom ran dog slow because it was designed around the "chunky" pixel format used on PC, as opposed to the planar format used on the Amiga. The former was much better suited to rendering these kinds of games (hence even AB3D being fairly slow, despite being designed for the Amiga specifically). If you wanted to run Doom itself, you had to do a chunky-to-planar conversion in software, and this slowed things way down. Commodore's ill-fated CD32 console actually had dedicated circuitry in one of its custom chips (Akiko) to do this conversion in hardware in order to make these games run better.
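For anyone who never touched planar graphics: below is a deliberately naive sketch of what a software chunky-to-planar pass has to do. Real Amiga implementations were hand-tuned 68k assembly (and the CD32's Akiko did it in hardware); the function and layout here are just for illustration.

```cpp
#include <cstddef>
#include <cstdint>

// Naive chunky-to-planar conversion: each byte of `chunky` is one pixel's
// palette index; Amiga bitplanes instead pack bit 0 of eight consecutive
// pixels into one byte of plane 0, bit 1 into plane 1, and so on.
// Assumes width is a multiple of 8.
void chunky_to_planar(const uint8_t* chunky, uint8_t* planes,
                      size_t width, size_t height, int plane_count)
{
    const size_t plane_stride = (width / 8) * height;  // bytes per bitplane
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; x += 8) {
            for (int p = 0; p < plane_count; ++p) {
                uint8_t packed = 0;
                for (int bit = 0; bit < 8; ++bit) {
                    const uint8_t pixel = chunky[y * width + x + bit];
                    packed |= uint8_t(((pixel >> p) & 1u) << (7 - bit));
                }
                planes[p * plane_stride + (y * width + x) / 8] = packed;
            }
        }
    }
}
```

Doing that shuffle every frame on a 68k-class CPU is exactly the overhead the Akiko chip was meant to remove.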
 
I think this is wrong. Xbox One does not support any form of mesh shaders. It's a totally different front-end design. Xbox One and PS4 are stuck with vertex shaders. PS5 and Series X have general compute front-ends which are way more flexible and far faster because they can take advantage of general compute power. Without primitive/mesh shaders, we don't even know whether vertex shaders would be suitable for accessing and processing whatever data structure they've created for geometry. It's no longer just a vertex buffer to read in and process.
They stated it's done through compute, and only used hardware if the result was faster.
Meanwhile, Microsoft has developed DirectX 12 Ultimate, which also includes a radical revamp of how storage is handled on PC, but apparently the firm isn't leaning heavily on any one system's strength. However, subsequent to our interview, Epic did confirm that the next-gen primitive shader systems are in use in UE5 - but only when the hardware acceleration provides faster results than what the firm describes as its 'hyper-optimised compute shaders'.
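As a rough illustration of what "only when the hardware acceleration provides faster results" might mean in practice, here is a hypothetical per-cluster path selection. The names, the triangle-size heuristic, and the threshold are invented for this sketch; Epic hasn't published its actual criteria.

```cpp
// Hypothetical sketch: pick the software (compute) rasterizer unless the
// hardware primitive/mesh-shader path is available and likely to win.
enum class RasterPath { ComputeShader, HardwarePrimitive };

struct ClusterStats {
    float avgTrianglePixelArea;           // how large the triangles land on screen
    bool  hardwarePrimitivePathSupported; // e.g. PS5 / Series X class hardware
};

RasterPath chooseRasterPath(const ClusterStats& s)
{
    // Pixel-sized triangles rasterize poorly on fixed-function hardware, so a
    // compute rasterizer tends to win there; bigger triangles are cheaper to
    // hand to the hardware pipeline. The 32-pixel cutoff is a made-up number.
    if (s.hardwarePrimitivePathSupported && s.avgTrianglePixelArea > 32.0f)
        return RasterPath::HardwarePrimitive;
    return RasterPath::ComputeShader;
}
```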


I'm not sure where the gap is with respect to I/O and available hardware features in terms of the level of texture and geometry detail supported. They will likely have recommended quality levels here.
"To maintain compatibility with the older generation platforms, we have this next generation content pipeline where you build your assets or import them at the highest level of quality, the film level of quality that you'll run directly on next generation consoles," continues Tim Sweeney. "The engine provides and will provide more scalability points to down-resolution your content to run on everything, all the way down to iOS and Android devices from several years ago. So you build the content once and you can deploy it everywhere and you can build the same game for all these systems, but you just get a different level of graphical fidelity."

I couldn't find more than bold and underline. If I could, I would. Seriously, people need to ask Tim these questions; I'm only going by what he wrote.

The goal is to get as many people onto UE5 as fast as possible once it releases, and that really means making sure no projects currently in development on UE4.xx are lost along the way.
 
Extremely impressive.

One small issue: temporal reconstruction artifacts are very noticeable on a 4K monitor. Hopefully they will find a way to reduce them or developers will lower some other setting to achieve that.
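For context on where those artifacts come from, here is a generic TAA-style reconstruction step for a single pixel (not Epic's implementation): reproject last frame's result, clamp it against the current frame's local neighbourhood, and blend. Ghosting and break-up appear exactly where the reprojection misses (disocclusion, fast motion) or the clamp rejects the history, so the pixel falls back to the low-resolution current sample.

```cpp
#include <algorithm>

struct Color { float r, g, b; };

static Color lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Temporal resolve for one pixel: clamp the reprojected history to the
// current neighbourhood's min/max, then blend heavily toward the history.
Color resolveTemporal(const Color& currentSample,
                      const Color& reprojectedHistory,
                      const Color& neighbourhoodMin,
                      const Color& neighbourhoodMax,
                      float historyWeight /* e.g. 0.9 */)
{
    const Color history = {
        std::clamp(reprojectedHistory.r, neighbourhoodMin.r, neighbourhoodMax.r),
        std::clamp(reprojectedHistory.g, neighbourhoodMin.g, neighbourhoodMax.g),
        std::clamp(reprojectedHistory.b, neighbourhoodMin.b, neighbourhoodMax.b),
    };
    return lerp(currentSample, history, historyWeight);
}
```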


I am surprised that it doesn't use RT functionality. Do they mean that it's not showing any direct RT effects (such as RT reflections), or that RT is not used at all (i.e. Lumen doesn't need it)?
 
Hyperbole. People have to realize that there are limitations to all of this (for one, only static meshes with Nanite, from the looks of it), etc. It's the same cycle of hype and PR every time something new is shown.
It took more than 2 years for the holy grail of holy grails of technologies (RT) to become production ready (GDC 2018 -> UE4.25 last week), just as I said here when it was announced... and it isn't even used in the new lighting engine, Lumen!
Tech demos are nothing more than marketing vectors to sell products.
Remember megatextures? Voxels? There are so many things out there that turned out to be a dead end, or that we backed away from for a while.
 
It seems Lumen also uses temporal information.
Who knows, maybe this leverages an evolved version of the ID buffer.
 
So I suppose UE5 will use RT units for sound effects, right?
They claimed UE5 will support ray tracing, but it's not being used in this demo.
Though the demo shows just how much the old rasterization approach still has left in it.

Though considering how Epic is seemingly maxing out SSD and shader throughput for geometry and textures, I wonder if we'll see this level of geometry detail paired up with raytracing at all.
 
Re-watching the interview with Kojima's best friend, they do mention loading in polygons as the camera moves. I was highly skeptical that the bandwidth of the SSD would be fast enough to load and unload things as the view frustum changed. I'm less skeptical now that it seems they've totally changed how models are loaded and processed. It is likely they can load only the necessary pieces of a full model, instead of the whole thing. It's what they said they're doing, so it's hard to argue against it at this point.
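A hypothetical sketch of "load only the pieces of a model you can see": each piece (call it a meshlet or geometry page) carries a small bounding sphere, and each frame the streamer requests from the SSD only the pieces whose bounds pass the frustum test, evicting the rest. Every name here is invented for illustration.

```cpp
#include <cstdint>
#include <vector>

struct Sphere  { float x, y, z, radius; };
struct Plane   { float nx, ny, nz, d; };   // nx*x + ny*y + nz*z + d >= 0 means "inside"
struct Meshlet { Sphere bounds; uint64_t diskOffset; uint32_t diskSize; bool resident; };

// Conservative sphere-vs-frustum test: reject only if fully outside one plane.
static bool sphereInFrustum(const Sphere& s, const Plane* planes, int planeCount)
{
    for (int i = 0; i < planeCount; ++i) {
        const Plane& p = planes[i];
        if (p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d < -s.radius)
            return false;
    }
    return true;
}

// Per frame: build the request list for the I/O system and the eviction list.
void updateResidency(std::vector<Meshlet>& meshlets,
                     const Plane frustum[6],
                     std::vector<Meshlet*>& toLoad,
                     std::vector<Meshlet*>& toEvict)
{
    for (Meshlet& m : meshlets) {
        const bool wanted = sphereInFrustum(m.bounds, frustum, 6);
        if (wanted && !m.resident)  toLoad.push_back(&m);   // issue async SSD read
        if (!wanted && m.resident)  toEvict.push_back(&m);  // free its memory
    }
}
```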
 
I'm really curious to see how it's implemented on PC. Right now the vendors essentially handle the BVH for you in a black-box system. If they have a virtual geometry cache/hash of some kind, I'm not sure how you'd store that info in a black-box BVH for intersection tests. When you're at one polygon per pixel, that's A LOT of geometry to transform into whatever format the API expects, especially when the cost of refitting the BVH is high. Not sure how you'd keep bringing in so much geometry and keep it up to date.
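To make that tension concrete, here is an illustrative-only decision an engine might make per region of streamed geometry: refit the acceleration structure when little changed, rebuild when a lot did. The types and thresholds are invented; on PC the actual work happens inside the API's black-box acceleration-structure build, and you only get to ask for an update versus a full rebuild.

```cpp
#include <cstdint>

struct GeometryRegion {
    uint32_t triangleCount;    // triangles currently backing this region
    uint32_t trianglesChanged; // swapped out by streaming/LOD this frame
    bool     topologyChanged;  // clusters added or removed entirely
};

enum class AsUpdate { None, Refit, Rebuild };

// Hypothetical heuristic: a refit only re-expands node bounds, so if the
// topology changed or most of the region is new geometry, refitting would
// leave a badly overlapping tree and trace performance collapses - rebuild.
AsUpdate chooseUpdate(const GeometryRegion& r)
{
    if (r.trianglesChanged == 0 || r.triangleCount == 0)
        return AsUpdate::None;
    const float changedRatio =
        float(r.trianglesChanged) / float(r.triangleCount);
    if (r.topologyChanged || changedRatio > 0.5f)  // 0.5 is a made-up cutoff
        return AsUpdate::Rebuild;
    return AsUpdate::Refit;
}
```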
 
With a polygon per pixel at 1440p, not even the next GPUs at 5 nm could do ray-traced lighting and shadowing as shown in this demo. They are probably using a variation of SVOGI.
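For reference, this is roughly what the core of an SVOGI-style approach looks like: a cone marched through a pre-filtered voxel mip chain, with wider cones sampling blurrier mips and the step size growing with the cone. Purely a sketch of the speculated technique with a stubbed-out voxel sampler, not Lumen's actual code.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3   { float x, y, z; };
struct Sample { float r, g, b, a; };   // pre-filtered radiance + opacity

// Placeholder: a real implementation samples a 3D mip chain of the voxelized scene.
static Sample sampleVoxels(const Vec3& /*worldPos*/, float /*mipLevel*/)
{
    return { 0.1f, 0.1f, 0.1f, 0.05f };
}

// March one cone: sample increasingly coarse voxel mips as the cone widens,
// compositing front to back until the cone is effectively occluded.
Sample traceCone(Vec3 origin, Vec3 dir, float coneHalfAngleTan,
                 float maxDistance, float voxelSize)
{
    Sample accum = { 0, 0, 0, 0 };
    float dist = voxelSize;  // start one voxel out to avoid self-lighting
    while (dist < maxDistance && accum.a < 0.99f) {
        const float coneRadius = std::max(voxelSize, coneHalfAngleTan * dist);
        const float mip = std::log2(coneRadius / voxelSize);  // wider cone -> blurrier mip
        const Vec3 p = { origin.x + dir.x * dist,
                         origin.y + dir.y * dist,
                         origin.z + dir.z * dist };
        const Sample s = sampleVoxels(p, mip);
        const float w = (1.0f - accum.a) * s.a;  // front-to-back compositing
        accum.r += w * s.r;  accum.g += w * s.g;  accum.b += w * s.b;
        accum.a += w;
        dist += coneRadius;  // step grows with the cone
    }
    return accum;
}
```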
 
A few thoughts ...

Virtual geometry. If this is analogous to virtual texturing, you have a geometry cache/hash that stores the geometry you're going to need to render. It'll need 3 dimensions, because it's not flat like a texture. Anyway, with virtual texturing you keep a table of texture pages you're using in memory, so you figure out which pages you need to load from disk into memory and you unload the pages you no longer need. With virtual geometry you'd be doing the same, I assume. Instead of loading an entire 33 million polygon model, you load the pieces of the mesh (meshlets?) that map to what's in your view. So maybe you don't load the back-face of the model and only stream in the front-face from disk into memory. But how do you do that? If your scene is empty and you position your camera, how do you stream in only the meshlets you'll see instead of streaming in the whole model and then culling the rest? Or will they load in the whole model, cull the rest and then store the remainder in the virtual geometry cache? What benefit would that give you?
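Continuing the virtual-texturing analogy, here is a minimal sketch of what such a "virtual geometry cache" page table could look like: a fixed pool of geometry pages resident in memory, a map from virtual page id (a piece of some huge source mesh) to pool slot, and least-recently-used eviction when the pool is full. All names are hypothetical, and the actual SSD read is left as a comment.

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

class GeometryPageCache {
public:
    explicit GeometryPageCache(uint32_t poolSlots) {
        for (uint32_t i = 0; i < poolSlots; ++i) freeList_.push_back(i);
    }

    // Returns the pool slot holding the requested page, (re)loading it if needed.
    uint32_t touch(uint64_t virtualPageId) {
        auto it = table_.find(virtualPageId);
        if (it != table_.end()) {                 // already resident: refresh LRU
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);
            return it->second.slot;
        }
        uint32_t slot;
        if (!freeList_.empty()) {                 // room left in the physical pool
            slot = freeList_.back();
            freeList_.pop_back();
        } else {                                  // evict the least recently used page
            const uint64_t victim = lru_.back();
            lru_.pop_back();
            slot = table_[victim].slot;
            table_.erase(victim);
        }
        // This is where an asynchronous read of the page from SSD would be issued.
        lru_.push_front(virtualPageId);
        table_[virtualPageId] = { slot, lru_.begin() };
        return slot;
    }

private:
    struct Entry { uint32_t slot; std::list<uint64_t>::iterator lruPos; };
    std::unordered_map<uint64_t, Entry> table_;   // the "page table"
    std::list<uint64_t> lru_;                     // most recently used at the front
    std::vector<uint32_t> freeList_;              // unassigned pool slots
};
```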

Also, if you have 1 polygon per pixel and you can map each polygon to a texel, doesn't that mean that aliasing is essentially gone? Do you even need to filter textures anymore?
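Rough answer to the filtering question: the mip level a sampler picks is driven by how many texels the pixel's footprint covers, so at roughly one texel per pixel the computed LOD sits near 0 and mipping barely kicks in. But the moment a surface recedes or tilts, the footprint grows again, so filtering never really becomes optional. A tiny worked example of the relationship:

```cpp
#include <algorithm>
#include <cmath>

// Mip level from the screen-space footprint (texels covered per pixel),
// the same log2-of-footprint rule hardware samplers use.
float mipLevelForFootprint(float texelsPerPixelX, float texelsPerPixelY)
{
    const float footprint = std::max(texelsPerPixelX, texelsPerPixelY);
    return std::max(0.0f, std::log2(footprint));
}

// mipLevelForFootprint(1.0f, 1.0f) == 0  -> base level, minimal blurring
// mipLevelForFootprint(4.0f, 4.0f) == 2  -> pixel covers 4x4 texels, sample mip 2
```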
 
Remember mega textures ? Voxels ? There are so many things out there that turned out to be a dead end or we reversed from for awhile.
Doesn't seem like a particular dead end IMO (in the case of Nanite), but it will obviously be restricted to specific usage, as all features are. This is still nearly 2 years away from being production ready, and everybody's wetting their pants over a tech demo (see RT 2 years ago... and where we are today: Lumen doesn't even leverage it, and its primary usage in UE4.25 is somewhat relegated to offline rendering).
 
The one-polygon-per-pixel idea above sounds a lot like Reyes rendering and its micropolygons: https://en.wikipedia.org/wiki/Reyes_rendering
 