Unreal Engine 4

GDC 2019 - March 20, 2019
The first video, seen above, is titled Rebirth and showcases just how photorealistic scenes built with the engine can be. The demo, created by the studio Quixel, highlights how far the lighting technology inside Unreal Engine 4 has come.
 
That's just incredible; it's hard to believe graphics like that are even possible. What kind of hardware is that demo running on?
 
Contrary to the PR claim of a "real-time cinematic", this demo is "rendered in-engine" (i.e. pre-rendered) at 4K 24 fps. Also contrary to the claim that it was made by only 3 artists, more than 40 people worked on it. And contrary to the claim that no outside plugins were used, Houdini was used to generate the volumetric clouds, which were then imported into UE4 using SideFX's Houdini-to-UE4 plugin. It can probably run in real time in the editor at a much lower resolution, though.

On the other hand, the Troll demo (by Goodbye Kansas & Nvidia) was really running in real time in UE4.22 with RT on. Same for Unity's Heretic demo, which was also running in real time.
 
The Troll demo talked about rendering 50 million polygons in that scene, which is roughly 10 times more than what a current-gen AAA title can render. Even without ray tracing, just having that poly budget would be a dream for asset and level creation.
 
There's a difference between 50M polys of assets per frame and rendering 50M polys per second.
50M polys of fully shaded, textured assets per frame is currently impossible at 24 fps+ (especially in UE4). Not a single commercial GPU can handle it; in fact, barely any DCC software can unless you decimate the mesh. Case in point: I'm currently working on a single 35M-poly model that weighs more than 4 GB with only vertex colors.
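For scale, a quick back-of-envelope check of those numbers (a rough Python sketch; the inputs are just the figures quoted in this post, not measurements):

```python
# Back-of-envelope check of the figures above; inputs are the post's numbers, not measurements.
polys_per_frame = 50_000_000          # claimed scene polygon count
fps = 24
print(f"{polys_per_frame * fps / 1e9:.1f} B triangles/s at {fps} fps")   # -> 1.2 B/s

model_polys = 35_000_000              # the 35M-poly work-in-progress model
model_bytes = 4 * 1024**3             # "more than 4 GB", vertex colors only
print(f"~{model_bytes / model_polys:.0f} bytes per polygon on disk")     # -> ~123 bytes
```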
 
So you're saying a huge portion of that Troll demo is unshaded or untextured? Or are they counting the post-tessellation triangle count too?
 
Or not even drawn. 50 million polygons in the scene, only a fraction of which are drawn after culling. And Ike's data point of reference is that 35M polygons take up 4 GB. If your scene geometry were so dense that you had 50 million triangles actually being drawn (over 5 triangles per pixel at 4K), you'd need many, many millions more in the scene, which would exceed VRAM.
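A quick check of the triangles-per-pixel figure (Python sketch, assuming a 3840x2160 frame):

```python
# Rough check of the triangles-per-pixel point; a sketch, not profiler data.
pixels_4k = 3840 * 2160          # ~8.3 M pixels
drawn_tris = 50_000_000          # if all 50 M triangles survived culling
print(f"{drawn_tris / pixels_4k:.1f} triangles per 4K pixel")  # -> ~6.0, i.e. mostly sub-pixel triangles
```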
 
There's something wrong here: 4 GB for 35M polys equates to over 100 bytes per poly! Is this model being stored as plain text or something?
 
There's literally nothing "wrong" here. This is the norm for a model in .obj format (including vertex normals and colors). A standard "high poly" model composed of 2M triangles in .obj is around 220 MB... now do the math for denser models.
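For anyone curious where that kind of figure comes from, here's a rough sketch of .obj text overhead (Python; the example line lengths and the one-vertex-per-two-triangles ratio are assumptions, not taken from the actual model):

```python
# Rough sketch of why a text-based .obj gets so big; line lengths below are assumed
# examples, and a typical mesh has roughly one vertex per two triangles.
v_line  = len("v 1.234567 2.345678 3.456789 0.500000 0.500000 0.500000\n")  # position + vertex color
vn_line = len("vn 0.577350 0.577350 0.577350\n")
f_line  = len("f 1234567//1234567 1234568//1234568 1234569//1234569\n")

bytes_per_tri = 0.5 * (v_line + vn_line) + f_line   # ~100 bytes of ASCII per triangle
tris = 2_000_000
print(f"~{tris * bytes_per_tri / 1e6:.0f} MB for a 2M-triangle .obj")  # same ballpark as the ~220 MB above
```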
 
And what exactly would be the problem with a demo having 10+ GB worth of mesh data on disk anyway? Not that their claim necessarily means that, but still: what would be the problem? The entire demo is confined to a single scene, and most of it is viewed from the same angle, IIRC.

.obj is a text format, and I'm very positive that's not the data format used by the engine at runtime. The runtime data would most likely be an order of magnitude smaller.

Besides, 50M polys in a frame/scene doesn't translate to 50M polys' worth of .obj files. Far from it, in fact. There's surely duplication (instancing) of meshes, probably lots of procedurally generated detail, and probably tessellation involved as well (rough numbers in the sketch below).

As for 50M polys per second, that definitely isn't it, since games have pushed much more than that for over a decade.
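To illustrate the duplication point (hypothetical numbers, chosen only to show how instancing and tessellation inflate the per-frame count relative to the data on disk):

```python
# Illustration: per-frame triangle counts vs unique mesh data on disk.
# All numbers below are made up for the sake of the example.
unique_rock_tris = 1_000_000       # one high-poly rock asset
instances_in_view = 30             # the same mesh placed 30 times in the scene
tessellation_factor = 1.5          # extra triangles generated on the GPU

rendered = unique_rock_tris * instances_in_view * tessellation_factor
print(f"{rendered / 1e6:.0f} M triangles rendered from {unique_rock_tris / 1e6:.0f} M stored")  # 45 M from 1 M
```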
 
50M per frame is 1.5B/3B triangles per second at 30/60 fps. But it's all moot. There are about 2 million pixels at 1080p. Anything more than 2M x 60 = 120 million triangles per second is more triangles than there are pixels. And you get inferior shading performance at that, so you actually want at most one triangle per 4 pixels.

Perhaps people should start being specific about what exactly they're counting (and why)?
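Spelled out (a Python sketch of the arithmetic above; nothing here is measured, and the post rounds 2.07M pixels down to 2M):

```python
# The arithmetic from this post, spelled out; 1080p and 60 fps as in the post.
pixels_1080p = 1920 * 1080                       # ~2.07 M pixels
fps = 60
one_tri_per_pixel = pixels_1080p * fps
print(f"~{one_tri_per_pixel / 1e6:.0f} M triangles/s = one triangle per pixel")   # -> ~124 M/s

# With ~4 pixels per triangle as a practical floor for decent shading efficiency:
print(f"~{one_tri_per_pixel / 4 / 1e6:.0f} M visible triangles/s is a more realistic ceiling")  # -> ~31 M/s
```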
 
Since Epic and Nvidia are close, could the demo maybe have been using Nvidia's mesh shaders, which were introduced with Turing?

 
My guess is that the 50M figure is an estimate (vs. a measurement) of the final amount of per-frame triangles sent to the GPU by the engine, pre-culling, after tessellation. Kind of: average triangle count sent to the GPU x average tessellation factor.
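Something like this (both inputs are hypothetical and chosen only so the product lands on the quoted figure):

```python
# One way a "50 M" figure could be arrived at under the guess above.
avg_tris_submitted = 20_000_000        # triangles submitted per frame, pre-culling (assumed)
avg_tessellation_factor = 2.5          # average GPU tessellation multiplier (assumed)
print(f"~{avg_tris_submitted * avg_tessellation_factor / 1e6:.0f} M triangles per frame")  # -> 50 M
```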
 