Polygons and curved surfaces are both analytic representations that have
serious problems of scalability. The "number of visible polygons
on screen" problem comes up if you build your world out of polys. Curved surfaces seem to help with that, but not for long... soon you run
into problems of "number of visible curves on screen" and you're back to
square one.
John's hunch is that eventually 3D hardware will be based on some kind
of multiresolution representation, where the world is kept in a data structure
that fundamentally supports rendering it in near-constant time from any
viewpoint.
Voxels are one example of such a structure. John mentioned that he
actually wrote a voxel renderer and converted an entire Quake 2 level to
use it. It wound up being about 3 gigabytes of data! But he said that's
not actually that far off from today's hardware capacity, if you use
intelligent streaming techniques to move the relevant portion of the
voxel set into and out of memory. And he said it was really nice having
only one data structure for the entire world--no more points versus
faces versus BSPs... just one octree node (and subnodes of the same
type) representing everything.
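To make the "one data structure" idea concrete, here's a minimal sketch of a sparse voxel octree in Python. All the names and details are my own invention for illustration, not anything John described; the point is just that the root of the world, an interior region, and a single voxel are all the same node type.

```python
class OctreeNode:
    """One node type for everything: world root, region, or voxel."""
    def __init__(self, color=None):
        self.color = color            # leaf payload (None if empty/internal)
        self.children = [None] * 8    # one child per octant

def _octant(x, y, z, half):
    # Pack the three "which half?" bits into an octant index 0..7.
    return (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)

def insert(node, x, y, z, size, color):
    """Insert a voxel at (x, y, z) inside a cube of side `size` (a power of 2)."""
    if size == 1:
        node.color = color
        return
    half = size // 2
    o = _octant(x, y, z, half)
    if node.children[o] is None:
        node.children[o] = OctreeNode()
    insert(node.children[o], x % half, y % half, z % half, half, color)

def lookup(node, x, y, z, size):
    """Return the color at (x, y, z), or None if that cell is empty."""
    if node is None:
        return None
    if size == 1:
        return node.color
    half = size // 2
    o = _octant(x, y, z, half)
    return lookup(node.children[o], x % half, y % half, z % half, half)

root = OctreeNode()
insert(root, 5, 2, 7, 8, "red")
print(lookup(root, 5, 2, 7, 8))  # -> red
```

The multiresolution part would come from a renderer that stops descending once a node projects to less than a pixel, which is roughly where the "near-constant time from any viewpoint" property would come from.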
Other possibilities include frequency-space representations (i.e., wavelet
compression schemes). (Think something like fractal compression,
and fractal-compression-like progressive rendering, applied to an entire
3D space. He didn't mention fractal compression at all, though; he
just talked about "frequency space" and nodded when I said "wavelets".)
He mentioned that there is one multiresolution graphics text that includes
some techniques for frequency-space 3D modeling and rendering, but didn't
say which one; one possibility is Tony DeRose et al.'s Wavelets for
Computer Graphics.
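To ground the "frequency space" idea a little: here's a one-level Haar wavelet transform in 1D, the simplest wavelet scheme (my choice of example, not anything from the talk). The averages are a half-resolution version of the signal, and the details let you reconstruct it exactly; dropping small details is the basis of lossy compression. The same decomposition extends to 2D images and, in principle, 3D volumes.

```python
def haar_step(signal):
    """One level of the Haar transform: (half-res averages, detail coefficients)."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

def haar_inverse(averages, details):
    """Exact reconstruction of the original signal from one transform level."""
    signal = []
    for avg, det in zip(averages, details):
        signal += [avg + det, avg - det]
    return signal

data = [9, 7, 3, 5]
avgs, dets = haar_step(data)     # avgs = [8.0, 4.0], dets = [1.0, -1.0]
print(haar_inverse(avgs, dets))  # -> [9.0, 7.0, 3.0, 5.0]
```

Applying the step recursively to the averages gives the full multiresolution pyramid, which is what makes progressive, level-of-detail rendering from such a representation plausible.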