Squeak said: Okay, maybe I shouldn’t have mentioned voxels, as I’m not completely sure how they work, but what I meant was this: when you can generate multiple 3D points (that’s what vertices are, right?) per pixel, why bother sending all that extra information to draw triangles when you could just colour the 3D points and be done with it? In short, what is the advantage of subpixel triangles?
I think one good reason is that calculating lighting accurately requires a normal, which means you need a surface (which a triangle has). I don't know how you would calculate accurate, directional lighting for a surface made of particles. For a bumpy surface, you'd have to give each particle a size (radius) and cast a shadow from that point, which would be even more complex for point or spot lights.
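To make that point concrete, here is a minimal sketch (in Python, with made-up values) of why a triangle gives you directional lighting essentially for free: its three vertices define a face normal via the cross product, which is exactly what a Lambertian (diffuse) term needs. An isolated point carries no such orientation, so you'd have to store or invent a normal per particle.

```python
# Minimal sketch: Lambertian shading from a triangle's face normal.
# All values are hypothetical; this is just the textbook diffuse term.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def lambert(tri, light_dir):
    """Diffuse intensity for a triangle lit by a directional light.

    The surface orientation comes from the triangle itself: the cross
    product of two edge vectors is the face normal.
    """
    n = normalize(cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])))
    return max(0.0, dot(n, normalize(light_dir)))

# Triangle in the z=0 plane, lit head-on from +z: full intensity.
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(lambert(tri, (0, 0, 1)))  # 1.0
# Same triangle, grazing light along +x: no diffuse contribution.
print(lambert(tri, (1, 0, 0)))  # 0.0
```

A bare point has no equivalent of that cross product, which is the crux of the problem above.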
Though I imagine self-shadowing at the micro-polygon level would be quite expensive anyway.
Maybe I'm missing something pretty simple ... ?
I think that the only realistic way to use such a rendering approach would be to do almost all models as higher-order surfaces (HOS).
In that case, it would "just" be a case of sampling the normal the required number of times across the surface?
To V3: Thank you for the link; that was a really good read. Weird that I haven’t come across this paper until now.