Yup, that sounded a lot like what we did in old demos. By using a software renderer they could specify texture coordinates for the vertices. This meant they could effectively "slide" a large environment texture around the polygons, depending on where the vertex or surface normal was pointing (I'd guess a lookup table mapping normal values to texture coordinates was used for speed).
UV comes directly from the normal multiplied by the distance from the center to the edge of the texture (32 for a 64x64 texture). If a more mirror-like appearance is needed, add a small offset from the screen coordinates.
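Here's a minimal C sketch of that normal-to-UV mapping, assuming a 64x64 texture and normals with components in [-1, 1]; the struct and function names are hypothetical, not from any particular demo:

```c
typedef struct { float nx, ny, nz; int sx, sy; } vertex_t; /* normal + screen pos */

#define TEX_SIZE 64
#define TEX_HALF (TEX_SIZE / 2)  /* 32: distance from center to edge */

/* Map the x/y components of the normal straight to texture coords:
   u = nx * 32 + 32, v = ny * 32 + 32, so both land in [0, 64]. */
static void env_uv_from_normal(const vertex_t *v, int *u, int *tv)
{
    *u  = (int)(v->nx * TEX_HALF) + TEX_HALF;
    *tv = (int)(v->ny * TEX_HALF) + TEX_HALF;

    /* Optional: mix in a small fraction of the screen position so the
       reflection "slides" as the object moves, for a more mirror-like look.
       The masking also keeps the coords inside the 64x64 texture. */
    *u  = (*u  + (v->sx >> 4)) & (TEX_SIZE - 1);
    *tv = (*tv + (v->sy >> 4)) & (TEX_SIZE - 1);
}
```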
When you know the object is small and sits in the center of the screen, there's no need for perfect texturing, and you can bypass a lot of code such as clipping.