Historical rendering dead-ends

Concerto

So on Twitter I came across a thread posted by Richard Mitton (@grumpygiant) asking for examples of rendering dead-ends, one of his examples being Ecstatica's ellipsoid technology. I found this interesting and wondered whether anybody on this forum has experience with that sort of thing.

Some highlights that stuck out to me from the comments were Microsoft's Talisman and its features like impostor textures, forward texture mapping and quadratics (NVIDIA NV1 / Sega Saturn), voxels, as well as some 8-bit computer trickery.
 
First thing that comes to mind is the old arcades that rendered their graphics as the signal was being sent to the monitor's raster, to avoid wasting costly memory on a frame buffer: racing the beam.
Going even further back, we can point to older arcades such as Asteroids that didn't even use raster displays, but rather oscilloscope-like monitors where they controlled the beam directly.
 
For modern 3D graphics, the Doom engine and its contemporaries also employed vertical-line raycasting, which, although limited, allowed some form of pseudo-3D rendering at real-time speeds on '90s IBM PCs.
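Roughly, the per-column trick looks like this. A minimal C++ sketch of a Wolfenstein-style grid raycaster (the map, resolution and camera values are made up for illustration, and Doom proper walked a BSP rather than marching a grid, but the vertical-strip drawing is the same idea):

#include <cmath>

// Made-up map for illustration: 1 = wall, 0 = empty.
const int MAP_W = 8, MAP_H = 8;
const int worldMap[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,0,1,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

const int SCREEN_W = 320, SCREEN_H = 200;

// For each screen column, cast one ray with a DDA grid walk and record the
// height of the wall strip to draw in that column.
void castColumns(double posX, double posY, double dirX, double dirY,
                 double planeX, double planeY, int stripHeight[SCREEN_W])
{
    for (int x = 0; x < SCREEN_W; ++x) {
        double camX = 2.0 * x / SCREEN_W - 1.0;            // -1..1 across the screen
        double rayX = dirX + planeX * camX;
        double rayY = dirY + planeY * camX;

        int mapX = (int)posX, mapY = (int)posY;
        double deltaX = (rayX == 0) ? 1e30 : std::fabs(1.0 / rayX);
        double deltaY = (rayY == 0) ? 1e30 : std::fabs(1.0 / rayY);
        int stepX = rayX < 0 ? -1 : 1, stepY = rayY < 0 ? -1 : 1;
        double sideX = (rayX < 0 ? (posX - mapX) : (mapX + 1.0 - posX)) * deltaX;
        double sideY = (rayY < 0 ? (posY - mapY) : (mapY + 1.0 - posY)) * deltaY;

        int side = 0;
        while (worldMap[mapY][mapX] == 0) {                // walk the grid until a wall
            if (sideX < sideY) { sideX += deltaX; mapX += stepX; side = 0; }
            else               { sideY += deltaY; mapY += stepY; side = 1; }
        }
        // Perpendicular distance avoids the classic fisheye distortion.
        double dist = (side == 0) ? sideX - deltaX : sideY - deltaY;
        stripHeight[x] = (int)(SCREEN_H / dist);           // taller strip = closer wall
    }
}

Each column gets one ray, one grid walk, and one vertical strip whose height falls off with distance; there is no per-pixel 3D math anywhere.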
 
Heightmap raycasting, especially the ray-surfing variants which restricted camera rotation.
Also voxel sprites, from games like Blade Runner.
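That heightmap trick is compact enough to sketch. Here's roughly the Comanche-style column march in C++ (the array names, resolution and camera values are placeholder assumptions): walk each screen column front to back across the heightmap, project each sample's height, and only draw where it pokes above what's already drawn in that column (the y-buffer).

#include <cmath>

// Placeholder data: in a real renderer these come from the loaded map.
const int MAP_SIZE = 1024;                       // power of two, so & can wrap coordinates
static unsigned char heightMap[MAP_SIZE][MAP_SIZE];
static unsigned char colorMap[MAP_SIZE][MAP_SIZE];
const int SCREEN_W = 320, SCREEN_H = 200;

static void drawVerticalLine(int /*x*/, int /*yTop*/, int /*yBottom*/, unsigned char /*color*/)
{
    // Stub: a real implementation clamps and writes a run of pixels to the framebuffer.
}

// Comanche-style "voxel space" terrain: one march per screen column,
// front to back, keeping a per-column horizon (the y-buffer).
void renderTerrain(float camX, float camY, float camHeight,
                   float angle, float horizon, float maxDist)
{
    float sinA = std::sin(angle), cosA = std::cos(angle);
    for (int x = 0; x < SCREEN_W; ++x) {
        float yBuffer = SCREEN_H;                        // lowest drawn pixel so far
        // Direction of this column's ray across the map (simple planar fan).
        float lerp = 2.0f * x / SCREEN_W - 1.0f;
        float dirX = cosA - sinA * lerp;
        float dirY = sinA + cosA * lerp;
        for (float z = 1.0f; z < maxDist; z += 1.0f) {
            int mx = ((int)(camX + dirX * z)) & (MAP_SIZE - 1);
            int my = ((int)(camY + dirY * z)) & (MAP_SIZE - 1);
            float h = heightMap[my][mx];
            // Project terrain height to a screen row; 240 is an arbitrary projection scale.
            float screenY = (camHeight - h) / z * 240.0f + horizon;
            if (screenY < yBuffer) {
                drawVerticalLine(x, (int)screenY, (int)yBuffer, colorMap[my][mx]);
                yBuffer = screenY;                       // raise the horizon for this column
            }
        }
    }
}

The restricted camera rotation comes straight from this structure: yaw is free and pitch can be faked by sliding the horizon, but rolling the view would break the cheap per-column march.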

Loved the idea of image-based rendering when it was introduced as well, but it has pretty much transformed into what we now see in advanced impostors rather than being applied to actual geometry.
Sadly a lot of the research and demos have vanished from the web, but there were things like proper object rotation running on ancient GPUs.
 
Continuous level of detail meshes. It's not worth any amount of PCI bandwidth to edit an index buffer on the GPU. Forsyth's sliding window technique is the only one I've seen that doesn't seem absurd on modern hardware.
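For anyone who hasn't read the sliding-window write-up: as I understand it, the index buffer is sorted offline so that every LOD is a contiguous range of it, and switching LOD just means changing the draw call's offset and count. A rough C++ sketch under that assumption (the struct and function names here are hypothetical):

#include <cstdint>
#include <vector>

// Hypothetical per-LOD window into one pre-sorted index buffer. The offline
// tool orders triangles so each coarser LOD is a contiguous sub-range:
// collapsed triangles fall off one end, replacement triangles sit at the other.
struct LodWindow {
    uint32_t firstIndex;   // offset into the shared index buffer
    uint32_t indexCount;   // number of indices to draw at this LOD
};

struct SlidingWindowMesh {
    // One static, never-edited index buffer (CPU-side copy shown for illustration).
    std::vector<uint32_t> indices;
    std::vector<LodWindow> lods;   // lods[0] = finest, lods.back() = coarsest; assumed non-empty
};

// Pick a window by distance and return the draw range: no buffer edits,
// no bus traffic, just different draw parameters each frame.
LodWindow selectLod(const SlidingWindowMesh& mesh, float distance, float lodStep)
{
    if (distance < 0.0f) distance = 0.0f;
    size_t level = (size_t)(distance / lodStep);
    if (level >= mesh.lods.size()) level = mesh.lods.size() - 1;
    return mesh.lods[level];
}

The buffer itself never changes after upload, which is what makes it palatable compared to schemes that edit indices every frame.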

Related to ellipsoid rendering, point-based rendering is forever "the future". There's a 4x4-vertex patch-based approach which seems like a not-as-bad compromise. Meanwhile, Assassin's Creed is rendering in GPU-culled clumps of 256(?) vertices.
 
For modern 3D graphics, the Doom engine and its contemporaries also employed vertical-line raycasting, which, although limited, allowed some form of pseudo-3D rendering at real-time speeds on '90s IBM PCs.
I believe Voxatron employs ray casting techniques to allow destructible terrain. Plus it runs entirely on the CPU.

 
I believe Voxatron employs ray casting techniques to allow destructible terrain. Plus it runs entirely on the CPU.

OK, that's 3D raycasting; the cool thing about Doom, Build, Outlaws and company is that it was basically a 2.5D process.
 
I had to read up on image-based rendering and PowerVR's Infinite Planes techniques. The former is like a cube map on steroids or something? At least that's the idea I got when reading about it.

The Infinite Plane tech is interesting, but I am left wondering what the real-world application of the technique would be. One video that I did see from Scali's OpenBlog showed some limitations with the original hardware, and it seems to work best only with simple surfaces and shadow volumes.
 
Variations of parallax mapping are still used in a huge number of games.
So while the tech has a lot of limitations, it is still useful.

Relief mapping is, but it's fairly different, being a ray cast through a heightfield to find the appropriate texel. As far as I know, the term "parallax mapping" is used exclusively for the algorithm that takes a single height sample and uses it to guess at the proper intersection. Elder Scrolls: Oblivion used parallax mapping (and I can't actually think of anything else that does at all).
In any case, I expect relief mapping to be gradually replaced by displacement mapping with tessellation. Already, games use half and half for different objects in the scene.
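For anyone who hasn't implemented them, the difference is easy to see side by side. A minimal sketch in plain C++ rather than shader code (the vector types and the depth-map fetch are stand-ins, and the step count is arbitrary): parallax mapping takes one depth sample and shifts the UV once; relief/steep parallax mapping marches the view ray through the heightfield until it falls below the surface.

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Stand-in for a depth-map fetch (0 = at the surface, 1 = deepest);
// a real version samples the relief texture.
static float sampleDepth(Vec2 uv)
{
    return 0.25f * (std::sin(uv.x * 40.0f) + std::sin(uv.y * 40.0f)) + 0.5f;
}

// Classic parallax mapping: one depth sample, one guess at the offset.
// viewTS is the tangent-space view vector, pointing from surface to eye.
Vec2 parallaxOffset(Vec2 uv, Vec3 viewTS, float scale)
{
    float depth = sampleDepth(uv);
    return { uv.x - viewTS.x / viewTS.z * depth * scale,
             uv.y - viewTS.y / viewTS.z * depth * scale };
}

// Relief / steep parallax mapping: march the ray through the heightfield
// layer by layer until it drops below the sampled surface.
Vec2 reliefOffset(Vec2 uv, Vec3 viewTS, float scale, int steps)
{
    Vec2 delta = { -viewTS.x / viewTS.z * scale / steps,
                   -viewTS.y / viewTS.z * scale / steps };
    float layerStep = 1.0f / steps;

    float rayDepth = 0.0f;
    float surfaceDepth = sampleDepth(uv);
    while (rayDepth < surfaceDepth) {          // still above the heightfield
        uv.x += delta.x;
        uv.y += delta.y;
        rayDepth += layerStep;
        surfaceDepth = sampleDepth(uv);
    }
    // A fuller implementation refines between the last two samples
    // (binary search / interpolation) for a cleaner intersection.
    return uv;
}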
 
Concerning voxels, I'd consider them alive and well. For instance, Nvidia has the GVDB framework which, while intended mostly for rendering professional stuff (visualizations of simulations and rendering fluids in various circumstances), is nevertheless currently used for "rendering".

The problem with the term "voxels" is that the line between "voxels" and volumetric rendering in general is muddy, to say the least. The most basic meaning, a rasterized image in 3 dimensions, is straightforward enough, but what about something like distance fields, where you want to create an isosurface from volumetric data? If you call a normal map or a height map a "texture", an SDF can be called "voxels", right? SDFs (combined with marching cubes or surface nets to make triangles or quads) are pretty popular these days, with games like No Man's Sky and (formerly) Subnautica using them for procedural/modifiable terrain. Then of course there are games like Minecraft, which use voxels as a terrain representation, but generate triangles for actual rendering.

So you can argue what constitutes "true" voxel rendering. Does meshing them into triangles count, or are you expected to directly render them through ray tracing? Most games and such do the former, while GVDB does the latter. What about using task and mesh shaders to generate geometry and render directly from the voxel image?
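To make the SDF point concrete, here's a toy C++ sketch (the grid size and the sphere are arbitrary): the same 3D raster can store occupancy or signed distance, and the heart of marching cubes / surface nets is just finding the cells whose corner samples change sign.

#include <cmath>
#include <vector>

const int N = 32;                                   // arbitrary grid resolution

// A "voxel image" in the strict sense: one value per cell of a 3D raster.
// Storing occupancy gives Minecraft-style voxels; storing signed distance
// gives an SDF that an isosurface extractor can turn into triangles.
std::vector<float> makeSphereSdf(float radius)
{
    std::vector<float> grid(N * N * N);
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                float dx = x - N / 2.0f, dy = y - N / 2.0f, dz = z - N / 2.0f;
                // Signed distance to a sphere: negative inside, positive outside.
                grid[(z * N + y) * N + x] = std::sqrt(dx*dx + dy*dy + dz*dz) - radius;
            }
    return grid;
}

// The core test marching cubes / surface nets perform: a cell (indices up to N-2)
// contains surface if its eight corner samples do not all share the same sign.
bool cellCrossesSurface(const std::vector<float>& sdf, int x, int y, int z)
{
    bool anyInside = false, anyOutside = false;
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
                float v = sdf[((z + dz) * N + (y + dy)) * N + (x + dx)];
                (v < 0.0f ? anyInside : anyOutside) = true;
            }
    return anyInside && anyOutside;
}

Everything after that, connecting those cells into triangles or quads versus ray tracing the volume directly, is where the argument about "true" voxel rendering starts.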
 
Relief mapping is, but it's fairly different, being a ray cast through a heightfield to find the appropriate texel. As far as I know, the term "parallax mapping" is used exclusively for the algorithm that takes a single height sample and uses it to guess at the proper intersection. Elder Scrolls: Oblivion used parallax mapping (and I can't actually think of anything else that does at all).

It's commonly called Offset Bump Mapping (OBM); it derives the parallax from the xy of the normal map. In the original parallax mapping paper the derivatives are calculated from a depth map. As the normal map is the derivative of some heightfield, and because it's already available for free, that's adopted instead of the (more expensive) depth-sample version.
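Read literally, that shortcut is tiny. A sketch in the same plain C++ style as the earlier parallax example (the vector types are stand-ins, and real variants usually also fold in the tangent-space view vector rather than using the normal alone):

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Offset bump mapping, as described above: derive the parallax shift from the
// normal map's xy instead of a separate height/depth sample. The tangent-space
// normal encodes the heightfield's slope and is already fetched for lighting,
// so the offset comes for free.
Vec2 offsetBumpMapping(Vec2 uv, Vec3 normalTS, float scale)
{
    // normalTS is the decoded tangent-space normal (z points out of the surface).
    return { uv.x + normalTS.x * scale,
             uv.y + normalTS.y * scale };
}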
 