Modern and Future Geometry Rasterizer layout? *spawn*

This is the reason Epic uses a compute rasterizer for Nanite, and only larger triangles go through the hardware rasterizer.
If you can achieve such quality, and the trade-off is really low for a normal compute shader, why haven't AMD and Nvidia integrated a rasterizer for triangles smaller than a pixel?
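For intuition, the core of a compute-shader rasterizer for tiny triangles is just an edge-function coverage test at each pixel center. This is a toy Python sketch of that idea, not Epic's actual implementation (Nanite does this in a compute shader with many extra optimizations):

```python
def edge(a, b, p):
    # Signed doubled area of triangle (a, b, p); >= 0 if p is on the
    # left of the directed edge a -> b (counter-clockwise winding).
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2):
    # Test every pixel center inside the triangle's bounding box.
    xs = (v0[0], v1[0], v2[0])
    ys = (v0[1], v1[1], v2[1])
    covered = []
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0 and edge(v0, v1, p) >= 0:
                covered.append((x, y))
    return covered
```

Note the sub-pixel case: a triangle that misses every pixel center produces no coverage at all, which is exactly why dedicated small-triangle handling matters.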
 
For determining the mip level alone, it wouldn't be required. If your UV mapping made certain guarantees about uniform texture resolution, and you know the projection, then normal, tangent and fragment depth are sufficient to decide on the correct mip level, as well as the level of anisotropic filtering required. And the APIs already allow you to provide the derivatives if you can calculate them yourself.
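For illustration, this is roughly the conventional mip selection rule given UV derivatives: scale the per-pixel UV deltas to texel space and take log2 of the larger footprint. A hedged sketch of the concept (real hardware follows the API spec's exact LOD formula, which differs in details):

```python
import math

def mip_level(duv_dx, duv_dy, tex_w, tex_h):
    # Texel-space footprint of one pixel step in screen x and y.
    rho_x = math.hypot(duv_dx[0] * tex_w, duv_dx[1] * tex_h)
    rho_y = math.hypot(duv_dy[0] * tex_w, duv_dy[1] * tex_h)
    # Isotropic LOD: log2 of the larger footprint, clamped at the base level.
    return max(0.0, math.log2(max(rho_x, rho_y)))
```

With a 1:1 texel-to-pixel mapping on a 256x256 texture this yields mip 0; doubling the UV step (texture minified 2x) yields mip 1.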

All UV transforms are missing in this case: a UV position doesn't convey the UV space's scale, rotation, shearing and so on. I guess only the scale is essential. The scale is affected by the triangle's size on screen and the UV values at the vertices. Calculating those factors isn't all that convenient. You would likely use up many more interpolator slots, and that pool isn't really large, not even today.
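To make the "triangle size on screen plus vertex UVs" point concrete: over a single triangle the screen-to-UV map is affine, so the UV derivatives are constant and can be recovered per triangle by solving a 2x2 system. A sketch of that calculation (my own illustration, not any particular API):

```python
def uv_derivatives(p0, p1, p2, uv0, uv1, uv2):
    # Screen-space edge vectors and the corresponding UV deltas.
    dx1, dy1 = p1[0] - p0[0], p1[1] - p0[1]
    dx2, dy2 = p2[0] - p0[0], p2[1] - p0[1]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    det = dx1 * dy2 - dx2 * dy1  # twice the triangle's screen-space area
    # Solve du = dudx*dx + dudy*dy (and likewise for v) by Cramer's rule.
    dudx = (du1 * dy2 - du2 * dy1) / det
    dudy = (du2 * dx1 - du1 * dx2) / det
    dvdx = (dv1 * dy2 - dv2 * dy1) / det
    dvdy = (dv2 * dx1 - dv1 * dx2) / det
    return (dudx, dvdx), (dudy, dvdy)  # d(uv)/dx, d(uv)/dy
```

Note this ignores perspective: with a perspective projection the map is only affine after the perspective divide, so real derivatives vary across the triangle.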
 
Looking at the Nvidia launch: we now know the leaks were right, and we have a die shot (whether it's genuine, I don't know). Nvidia will have 7 GPCs and 7 rasterizers this year to feed 10,000 CUDA cores. What do you think about that?
 
I did some testing. Did Nvidia change from tile-based back to a normal/immediate-mode rasterizer?

Here you can see how it looks on Pascal and Maxwell.
https://www.realworldtech.com/tile-b...n-nvidia-gpus/

This picture is from Turing:
[attached image]
 
This is what an IMR looks like. Turing displaying the turquoise pixels while at the same time having some pixels completely untouched shows it's doing conceptually the same thing as Maxwell and Pascal. Different buffer sizes or other defaults may lead to small differences, though.

edit: I can see a maximum span over a 9-tile region at once.
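For anyone unfamiliar with what a tiled rasterizer is doing in these captures, here is a toy sketch of the triangle-to-tile binning step (hypothetical tile size and grid; real hardware tests conservatively against tile edges and batches work per tile, not just bounding boxes):

```python
def bin_triangles(tris, tile_size, grid_w, grid_h):
    # Conservative binning: a triangle index is appended to every tile
    # that its screen-space bounding box overlaps.
    bins = {(tx, ty): [] for ty in range(grid_h) for tx in range(grid_w)}
    for i, tri in enumerate(tris):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        tx0 = max(0, int(min(xs)) // tile_size)
        tx1 = min(grid_w - 1, int(max(xs)) // tile_size)
        ty0 = max(0, int(min(ys)) // tile_size)
        ty1 = min(grid_h - 1, int(max(ys)) // tile_size)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins[(tx, ty)].append(i)
    return bins
```

The GPU then rasterizes one bin (tile) at a time, which is why the test pattern fills in tile-shaped chunks rather than strictly in triangle submission order.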
 

Attachment: Triangle_Bin_HD6670.PNG
When I look at the Nanite engine, I think the real bottleneck is micropolygons. I watched these analyses of the Nanite engine and also looked closer. Even when there is no good lighting in the scene, the polygon structure makes it unique. I don't understand why we invest so much in shaders when micropolygons give you so much picture quality:

 
I think, like they said in the video, they now follow polygons because the film industry needs it :D If you want to make a realistic face and realistic structures, you really need more geometry rather than light effects.

Also, Ghost of Tsushima is an example of geometry ruling over lighting: all the high-poly vegetation makes it look so realistic, and the game doesn't use ray tracing.

 
Nanite is not REYES nor is Lumen ray tracing based on everything I've read. Nanite is rendering small triangles, but that's where the similarity with REYES seems to end.

Also, you absolutely need good lighting to complement dense geometry and vice versa. Just because a game doesn't use ray tracing doesn't mean it has poor lighting.
 
It's funny how realtime graphics follow past movie industry trends. Movies went REYES and nowadays use ray tracing.

I find this tragic rather than funny. Isn't there some cartoon software that could help make games look interesting again?
 
Also, Ghost of Tsushima is an example of geometry ruling over lighting: all the high-poly vegetation makes it look so realistic, and the game doesn't use ray tracing.
[my bold]
I think that's a bit of a misconception. You would need to properly light the micropolygons as well, and in order to know where to apply which amount of each color component, you need some kind of formula. Preferably (but depending on your art style), global illumination based on physical surface properties.
 
@CarstenS
Exactly. But for a good shadow you need a good contour of the geometry. The geometry feeds the formula with information.

My argument: if you use a complex mesh with a simple lighting formula, you will get better results than with a complex lighting formula on a simple mesh, because the main information for the lighting comes from the mesh. The mesh supplies a big part of the variables inside a lighting formula. If the mesh isn't detailed enough, you can have the best ray tracing and it will still look ugly.
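To illustrate that point: even in the simplest lighting formula, Lambertian diffuse, the surface normal, which comes straight from the mesh detail, is the dominant input. A minimal sketch (my own toy example, unit vectors assumed):

```python
def lambert(normal, light_dir):
    # Lambertian diffuse: brightness is the clamped dot product of the
    # unit surface normal and the unit direction toward the light.
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l)

# Same light, two normals: a flat wall vs. one tilted by micro-detail.
flat = lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))    # fully lit
tilted = lambert((0.6, 0.0, 0.8), (0.0, 0.0, 1.0))  # dimmer
```

The formula never changed; only the geometry-supplied normal did, and that is what produces the micro-shadowing look.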

That's why I think we will get a bigger graphics advantage if we invest more time and resources in polygon creation with simple lighting formulas than if we use a simple polygon mesh with a complex lighting formula.

That is exactly what Nanite is doing. They don't do any ray tracing. They use complex geometry with a simplified lighting system with only one light bounce, and the result is impressive.

Here is a good video which explains it:

If you look here, the small geometric height details make the wall look so realistic. Without the height detail, you don't get the information for the micro-shadows.
 