Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Maybe they cone trace the size of a pixel and that's how they select vertices. If the model is stored as a hierarchy, you stop at the level where the polygon vertices are slightly larger than a pixel.
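To make that concrete, here's a toy sketch of the idea (pure speculation on my part, with a made-up per-mesh table of edge lengths): walk the hierarchy from coarse to fine and stop at the first level whose edges project to a pixel or less.

```python
import math

def select_lod(distance, fov_y, screen_height, lod_edge_lengths):
    """Pick the coarsest LOD level whose triangle edges project to
    roughly one pixel or less at the given view distance.

    lod_edge_lengths: world-space edge length per level, ordered
    coarse -> fine (hypothetical per-mesh data)."""
    # World-space size that one pixel covers at this distance.
    pixel_world_size = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    for level, edge in enumerate(lod_edge_lengths):
        if edge <= pixel_world_size:
            return level          # edges now at/below a pixel: stop here
    return len(lod_edge_lengths) - 1  # finest level available

# Example: edge lengths halve per level; 60 degree FOV, 1080p, 10 m away.
levels = [1.0 / (2 ** i) for i in range(12)]
print(select_lod(10.0, math.radians(60), 1080, levels))
```

Move the camera closer and the function naturally walks down to finer levels, which is the whole appeal: detail scales with projected size, not with authored LOD switches.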
 
Nanite is using REYES; its general pipeline (from Wikipedia), for reference:
  1) Bound. Calculate the bounding volume of each geometric primitive.
  2) Split. Split large primitives into smaller, diceable primitives.
  3) Dice. Convert the primitive into a grid of micropolygons, each approximately the size of a pixel.
  4) Shade. Calculate lighting and shading at each vertex of the micropolygon grid.
  5) Bust. Break the grid into individual micropolygons, each of which is bounded and checked for visibility.
  6) Hide. Sample the micropolygons, producing the final 2D image.
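The six steps above can be sketched as a toy loop. Purely illustrative Python, with flat-colored screen-space quads standing in for real parametric surfaces (nothing to do with how any production renderer actually implements this):

```python
from dataclasses import dataclass

@dataclass
class Quad:
    """Toy 'primitive': an axis-aligned screen-space rectangle at a depth."""
    x: int
    y: int
    w: int
    h: int
    z: float
    color: str

    def split(self):
        # 2) Split: halve along the longer axis.
        if self.w >= self.h:
            half = self.w // 2
            return [Quad(self.x, self.y, half, self.h, self.z, self.color),
                    Quad(self.x + half, self.y, self.w - half, self.h, self.z, self.color)]
        half = self.h // 2
        return [Quad(self.x, self.y, self.w, half, self.z, self.color),
                Quad(self.x, self.y + half, self.w, self.h - half, self.z, self.color)]

def reyes_render(prims, width=8, height=8, max_size=4):
    framebuffer = {}
    work = list(prims)
    while work:
        p = work.pop()
        # 1) Bound: cull primitives entirely off-screen.
        if p.x >= width or p.y >= height or p.x + p.w <= 0 or p.y + p.h <= 0:
            continue
        # 2) Split anything too large to dice in one go.
        if max(p.w, p.h) > max_size:
            work.extend(p.split())
            continue
        # 3) Dice into 1x1 'micropolygons'; 4) Shade each one (flat color
        # here); 5) Bust the grid into individual micropolygons;
        # 6) Hide: keep the nearest depth per pixel.
        for my in range(p.y, p.y + p.h):
            for mx in range(p.x, p.x + p.w):
                if 0 <= mx < width and 0 <= my < height:
                    if (mx, my) not in framebuffer or p.z < framebuffer[(mx, my)][0]:
                        framebuffer[(mx, my)] = (p.z, p.color)
    return framebuffer

fb = reyes_render([Quad(0, 0, 8, 8, 2.0, 'back'),
                   Quad(2, 2, 3, 3, 1.0, 'front')])
print(fb[(3, 3)][1])  # nearer quad wins the depth test
```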
Epic have essentially turned the GPU into a glorified software renderer. About time! I recall posting a bunch of speculative threads on these forums around 15 years ago about the Cell architecture doing REYES on PS3. Obviously that didn't come to fruition, but it's great to see the status quo being shaken up in the industry. I can see Sony and Epic collaborating on a similar vision for how realtime rendering progresses.
 
I recall posting a bunch of speculative threads on these forums around 15 years ago about the Cell architecture doing REYES on PS3.


I remember this happening:

Please... i don't want to see the R word... Ever. Again.


Whoever you are, you know who you are, you know what i'm talking about.


:devilish:


With "the R word" London-boy meant REYES, of course. And then:

[Image: Arsenal_Reyes_L.jpg]




Hmmm...

HOS and HDR on RSX to get a 'football' looking like a real football similar to that image.

Perhaps some CELL post processing to get motion blur like on the feet of that footballer too...

Also, maybe physics-aware tessellation to get some nice football/cloth simulations with CELL + RSX. Would be neat... maybe they'll include a geometry shader? :devilish:


That was funny as hell.
 
How will this affect a game's total asset size?
The upcoming consoles have quite limited storage space, and multi-TB expansion is not going to come cheap for the foreseeable future.
 
They'll maybe want to be doing instancing a fair bit to make the most (re)use of assets. For example, breaking the statues down into smaller components would allow for some variation.
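A sketch of the idea, with a hypothetical statue asset broken into component meshes (all names invented for illustration): per-instance data is just a transform, tiny next to the mesh itself, so reused components cost almost nothing extra on disk.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    mesh_id: str       # shared asset, stored once on disk
    position: tuple    # per-instance data is just a few floats...
    rotation_deg: float
    scale: float       # ...versus megabytes for the mesh itself

# A hypothetical statue assembled from a handful of reused component meshes:
statue = [
    Instance("plinth", (0.0, 0.0, 0.0), 0.0, 1.0),
    Instance("torso",  (0.0, 1.0, 0.0), 0.0, 1.0),
    Instance("head",   (0.0, 2.1, 0.0), 15.0, 1.0),   # turned head = variation
    Instance("arm",    (-0.4, 1.6, 0.0), 0.0, 1.0),
    Instance("arm",    (0.4, 1.6, 0.0), 180.0, 1.0),  # same mesh, mirrored pose
]

unique_assets = {i.mesh_id for i in statue}
print(f"{len(statue)} instances, {len(unique_assets)} unique meshes on disk")
```

Vary the transforms (and maybe materials) per placement and you get many distinct-looking statues from the same few source meshes.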
 
So Lumen is essentially what Sebbi did in Claybook (ray tracing SDFs via compute, among other things like VT etc). Wonder how he feels about it now (as Epic & Brian Karis desperately wanted to hire him).
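For anyone unfamiliar, the core trick in ray tracing an SDF is sphere tracing: step along the ray by the distance value, which by definition can't overshoot the nearest surface. A minimal Python sketch, with a single analytic sphere standing in for a real distance field (this is the general technique, not Claybook's or Lumen's actual code):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=100.0):
    """March along the ray, stepping by the SDF value each time: the
    distance field guarantees the step can't pass through the surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t        # hit: within eps of the surface
        t += d              # safe step: nearest surface is d away
        if t > t_max:
            break
    return None             # miss

hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
print(round(hit, 3))  # front face of the sphere, 4 units down the ray
```

Run one of these marches per pixel in a compute shader and you have the skeleton of an SDF ray tracer; the hard part is building and updating the distance field for real scene geometry.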
 
So Lumen is essentially what Sebbi did in Claybook (ray tracing SDFs via compute, among other things like VT etc). Wonder how he feels about it now (as Epic & Brian Karis desperately wanted to hire him).
Vindicated, as he adds it to Unity?

Actually, I feel a little vindicated. We talked about alternative GI solutions in the RTRT discussion, and it clearly is a workable solution for next-gen.
 
The idea of going heavy on software rasterization makes me consider Sony and MS's respective approaches this time round.

In the Road to PS5 presentation, Mark Cerny talked about how "fast and narrow" was "the tide that raises all boats". But with compute based rasterization, wouldn't that leave fewer boats to raise? The hardware rasteriser, RBs, tessellation unit and to some extent the L1 (unlike the L0 and L2 it's described repeatedly as a "Graphics L1" in the RDNA whitepaper) would surely be of relatively lesser importance for this approach....?

We don't know how PS5 and XSX stack up in terms of ACEs and Command Processor sauce whatnots, but we do know XSX has both more compute and more bandwidth / Flop to feed it (terrible metric but it seems to get used a lot!) ... so perhaps, ironically, this demo would have been most at home on XSX.

... in which case Sony would have made a really canny move by signing this multiplatform engine demo (not even a game demo!) as a Sony promotional exclusive. Bravo! *tips fedora*
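For reference, "software rasterization" here just means doing the triangle coverage test in ordinary compute code instead of the fixed-function rasterizer. A toy single-triangle version in Python using the classic edge-function test (nothing like Nanite's actual implementation, just the textbook approach compute-based rasterizers build on):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area of the parallelogram (b-a) x (p-a): the sign tells
    which side of edge a->b the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Software rasterizer: test every pixel center against the three
    edge functions, the job the hardware rasterizer normally does."""
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.append((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (8, 0), (0, 8), 8, 8)
print(len(pixels))
```

The point of the whole discussion: every line of this runs on plain ALUs, so a GPU's rasterizer, RBs and tessellation hardware sit idle while the CUs do all the work.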
 
The idea of going heavy on software rasterization makes me consider Sony and MS's respective approaches this time round.

In the Road to PS5 presentation, Mark Cerny talked about how "fast and narrow" was "the tide that raises all boats". But with compute based rasterization, wouldn't that leave fewer boats to raise? The hardware rasteriser, RBs, tessellation unit and to some extent the L1 (unlike the L0 and L2 it's described repeatedly as a "Graphics L1" in the RDNA whitepaper) would surely be of relatively lesser importance for this approach....?

We don't know how PS5 and XSX stack up in terms of ACEs and Command Processor sauce whatnots, but we do know XSX has both more compute and more bandwidth / Flop to feed it (terrible metric but it seems to get used a lot!) ... so perhaps, ironically, this demo would have been most at home on XSX.

... in which case Sony would have made a really canny move by signing this multiplatform engine demo (not even a game demo!) as a Sony promotional exclusive. Bravo! *tips fedora*

Depends on how much of it pushes the SSD. If that demo uses the SSD at near peak-IO, then it's feasible that it could *only* be done on PS5.
 