I know I'm disturbing an old issue here, but I see that the OpenGL spec (2009) still forces implementations to choose a single face of a cubemap according to the incoming vector (rx, ry, rz), and then to continue sampling as if the texture were a regular 2D texture. Well, I don't need to explain; you guys know about this.
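As I understand it, the rule is roughly this (a minimal sketch of the spec's major-axis face selection; the function name and face constants are my own, with the faces numbered in the GL_TEXTURE_CUBE_MAP_POSITIVE_X + n order):

```c
#include <math.h>

/* Faces in the order GL_TEXTURE_CUBE_MAP_POSITIVE_X + n. */
enum { FACE_POS_X, FACE_NEG_X, FACE_POS_Y, FACE_NEG_Y, FACE_POS_Z, FACE_NEG_Z };

/* The component of (rx, ry, rz) with the largest magnitude selects
   the axis; its sign selects the positive or negative face.
   The spec then derives (s, t) from the other two components,
   with no blending across the chosen face's edges. */
int select_cube_face(float rx, float ry, float rz)
{
    float ax = fabsf(rx), ay = fabsf(ry), az = fabsf(rz);
    if (ax >= ay && ax >= az)
        return rx > 0.0f ? FACE_POS_X : FACE_NEG_X;
    if (ay >= az)
        return ry > 0.0f ? FACE_POS_Y : FACE_NEG_Y;
    return rz > 0.0f ? FACE_POS_Z : FACE_NEG_Z;
}
```

It's this hard per-fragment switch between faces that I'm asking about.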
But do implementations really do this? Does nVidia adhere to the spec here? Don't hardware implementations blend across neighboring faces now and then? That would seem more correct.
ty, Nir