Tim Sweeney Interview at Ars

Didn't AMD/ATI use Voxels on their 48x0 demos?

Yes:
http://raytracey.blogspot.com/2008/08/otoy-transformers-and-ray-tracing.html
There is also a video of the full AMD Cinema 2.0 event in which Urbach talks a bit about ray tracing on GPUs (from 41:00 to 47:00) and goes a bit more in-depth during the Q&A session (from 72:00 to 88:00):



- Urbach has been talking to game publishers to start integrating the relighting part of Otoy in existing game engines

- Otoy can do full raytracing, but also supports hybrid rendering. It can convert any polygonal mesh to voxels

- The Ruby demo does not use any polygons, only voxels

- For games, Urbach thinks hybrid rendering will be the way to go "for a very long time"

- With this technology, game developers will need a different way of working. Basically they're saying that you can make a photorealistic game, but the workload on the artist side will be astronomical

- In 2005, Urbach started out writing approximations to Renderman code during the making of Cars. At the time, he used cheats for ray tracing and reflections. In three years, GPUs have evolved so quickly that the latest hardware makes possible realtime ray tracing that is “99% accurate”

- Voxel data sets are huge, but with voxel based rendering you can load only subsets of the voxel space, which is not possible with polygons. You can also choose which texture layers to load

- Compression and decompression of the voxel data is CPU bound. What takes 3 seconds to decompress on a CPU, can be done at a “thousand frames per second” on a GPU.

- What's interesting according to Urbach is that in 2005 he started out writing approximations to ray tracing, but the latest generation of hardware allows him to do ray tracing that gets really close to the 100% point
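The point about loading only subsets of the voxel space can be sketched in a few lines. This is purely illustrative: the brick size and layout below are my assumptions, not Otoy's actual format, but it shows why a regular voxel grid streams so naturally compared to an arbitrary polygon soup.

```python
# Hypothetical sketch: stream only the voxel "bricks" near the viewer
# instead of the whole data set. BRICK and the radius test are assumed,
# not anything Otoy has described.

BRICK = 8  # voxels per brick edge (assumption)

def bricks_in_radius(camera, radius):
    """Yield integer brick coordinates whose centers lie within `radius`
    of the camera position. Only these bricks would be paged in; the
    rest of the voxel space can stay compressed on disk."""
    cx, cy, cz = (int(c // BRICK) for c in camera)
    r = int(radius // BRICK) + 1
    for bx in range(cx - r, cx + r + 1):
        for by in range(cy - r, cy + r + 1):
            for bz in range(cz - r, cz + r + 1):
                center = tuple((b + 0.5) * BRICK for b in (bx, by, bz))
                dist2 = sum((c - p) ** 2 for c, p in zip(center, camera))
                if dist2 <= radius ** 2:
                    yield (bx, by, bz)

needed = set(bricks_in_radius((100.0, 20.0, 100.0), 32.0))
```

With polygons the equivalent operation is much harder, because a single triangle can span an arbitrary region of space; a brick of voxels has fixed, known bounds by construction.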


A lot of stuff: http://raytracey.blogspot.com/
 
Reading that just sends shivers down my spine on the possibilities available for graphical eye candy going forward if it is used...

Yes...I'm a geek...

Regards,
SB
 
infinite detail

Thanks to the voxel rendering "the level of detail becomes infinite"

Speaking as a person who toyed with totally amateurish raytracing of quad-tree heightfield, fractal-generated landscapes, developed in assembler on his marvelous 486SX2 50MHz, under protected-mode extended MSDOS, and all that just because he found Comanche graphics so awesome (who didn't?)... Can someone please enlighten me about one thing:

How the heck do you handle magnification of voxels???
 
No idea what magnification of voxels means, but my main problem with them was that they were too big; they needed to be at most pixel size.
 
How the heck do you handle magnification of voxels???
He obviously doesn't actually mean "infinite" (it's kind of silly how people throw that word around, but alas)... all they mean is that you can store geometry detail down to an absurd level of detail (subpixel size for any reasonable viewing point) and not worry about it if you never traverse down that far in the acceleration structure, as it can happily sit on disk/dvd/bluray and never be touched.

All the recent hype about voxels and similar representations can be summed up by "it's really easy to do great LOD with a volumetric, topologically free representation". Cool! Alas it's hard to animate topologically free representations ;)
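The "never traverse down that far" idea above can be sketched as a simple octree walk that stops once a node would project to less than a pixel. All names and the projection formula here are illustrative assumptions, not any particular engine's implementation:

```python
# Minimal sketch of the LOD scheme described above: stop descending the
# octree once a node's projected size falls below one pixel, so deeper
# (on-disk) levels of detail are simply never touched.

def projected_size(node_size, distance, focal=500.0):
    """Approximate on-screen size (in pixels) of a node edge seen at
    `distance` with an assumed focal length."""
    return node_size * focal / max(distance, 1e-6)

def traverse(node, distance, pixel_threshold=1.0):
    """Return the nodes actually visited; children are skipped whenever
    the parent already projects to under `pixel_threshold` pixels."""
    visited = [node]
    if projected_size(node["size"], distance) < pixel_threshold:
        return visited  # subpixel: treat as a leaf, never load children
    for child in node.get("children", []):
        visited += traverse(child, distance, pixel_threshold)
    return visited

# Toy tree: one root with eight half-size children.
root = {"size": 1.0,
        "children": [{"size": 0.5, "children": []} for _ in range(8)]}
```

From far away only the root is touched; move closer and the traversal naturally descends, which is the whole "infinite detail for free" trick.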
 
Has anyone seen anything detailed - direct feed images, HD video - from that Ruby demo that's supposedly voxels only? I've got some serious doubts if they're talking about that one with the city street, cars and giant robot...
 
Has anyone seen anything detailed - direct feed images, HD video - from that Ruby demo that's supposedly voxels only? I've got some serious doubts if they're talking about that one with the city street, cars and giant robot...

That's exactly the one. I think the 'secret' is that most of it is static, which is supposedly quite easy, fast and cheap to do with voxels
 
He obviously doesn't actually mean "infinite" (it's kind of silly how people throw that word around, but alas)... all they mean is that you can store geometry detail down to an absurd level of detail (subpixel size for any reasonable viewing point) and not worry about it if you never traverse down that far in the acceleration structure, as it can happily sit on disk/dvd/bluray and never be touched.
More or less; how much data is an absurdly high LOD?

I tried to estimate it using Q3 lightmaps (because they are mapped uniquely to the world geometry). In an average level, all lightmaps can be packed into a single 512x512 texture (16 lightmaps on average, 128x128 each). When playing, a single lightmap texel can be seen stretched to as much as 2/3 of the screen height. This is screen-resolution dependent, so let's be generous and estimate that as a 512x512 area.

So, in order to have each texel unique in the map AND never see them magnified, you need 262144x262144 texels. That amounts to 256 gigabytes, for geometry as simple as an average Q3 level.
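The arithmetic above checks out, assuming 4 bytes per texel (RGBA):

```python
# Verifying the back-of-envelope estimate above (4 bytes/texel assumed).

atlas = 512          # texels per side of the packed Q3 lightmap atlas
magnification = 512  # worst-case on-screen size of one texel, in pixels

side = atlas * magnification     # texels per side needed for no magnification
total_texels = side * side
bytes_total = total_texels * 4   # RGBA, 4 bytes per texel

print(side)                  # 262144
print(bytes_total // 2**30)  # 256 (gigabytes)
```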

And that was a generous estimate, because we didn't take other factors into account, like antialiasing. When voxelised, every non-axis-aligned plane becomes a staircase. Also, our data is stripped of any spatial information, because we assumed we store just flat color data per voxel.

But of course, it might be possible that, even as we speak, somewhere in the world scientists are working on new Pink-Ray discs able to store cubic shitloads of data...
 
As Jon Olick pointed out in his Siggraph presentation, it is also possible to have recursively defined spatial subdivision structures that always map to the same hierarchy of voxels, hence guaranteeing 'infinite' (and repeated) detail for a fixed amount of memory.
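The recursive trick can be shown with a toy node whose children point back to itself: traversal can descend forever, yet memory use is constant. This is only an illustration of the idea, not Olick's actual data layout:

```python
# Sketch of the recursive-subdivision idea: a node that lists itself as
# its own children provides unbounded (but repeating) detail from a
# fixed amount of memory. Toy structure, not a real SVO layout.

node = {"children": None}
node["children"] = [node] * 8   # every child IS the node itself

def depth_available(start, levels):
    """Follow child links `levels` deep; with the self-reference this
    never runs out, though the geometry repeats at every level."""
    n = start
    for _ in range(levels):
        n = n["children"][0]
    return n
```

Descending a thousand levels lands you back on the very same node, which is exactly why the detail is 'infinite' and repeated at once.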
 
Has anyone seen anything detailed - direct feed images, HD video - from that Ruby demo that's supposedly voxels only?
Not the visuals you're asking for, but at least some info, here:
  • The voxel data is grouped into the rough equivalent of ‘triangle batches’ (which can be indexed into per-object or per-material groups as well). This allows us to work with subsets of the voxel data in much the same way we do with traditional polygonal meshes.
  • We showed a deformation applied in real time to the Ruby street scene.
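The 'triangle batch' analogy above might look something like the following. The grouping scheme is entirely an assumption on my part; the quote doesn't describe the real data layout:

```python
# Hypothetical sketch of per-material voxel "batches": index voxels by
# material so subsets can be addressed like polygonal triangle batches.
from collections import defaultdict

def build_batches(voxels):
    """Group (position, material) voxel records into per-material
    batches, each addressable independently."""
    batches = defaultdict(list)
    for pos, material in voxels:
        batches[material].append(pos)
    return dict(batches)

# Toy scene: a deformation or relighting pass could now touch only
# batches["ruby"] and leave the static street untouched.
scene = [((0, 0, 0), "asphalt"), ((1, 0, 0), "asphalt"), ((0, 1, 0), "ruby")]
batches = build_batches(scene)
```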
 
You could write a software rasterizer that uses the traditional SGI rendering approach; you could write a software renderer that generates a scene using a tiled rendering approach. Take for instance Unreal Engine 1, which was one of the industry's last really great software renderers. Back in 1996, it was doing real-time colored lighting, volumetric fog, and filtered texture mapping, all in real-time on a 90MHz Pentium. The prospect now is that we can do that quality of rendering on a multi-teraflop computing device, and whether that device calls a CPU or GPU its ancestor is really quite irrelevant.

I know I read an article on Unreal way back in 1996 in Next Gen magazine, but the game was released in 1998 and eventually supported 3dfx graphics cards.

So what gives? Wikipedia says that a 166 MHz Pentium was not realistic for playing the game.
 
from the unreal readme

-CPU Speed-

Unreal is also very sensitive to CPU speed, memory bandwidth, and cache performance. Thus, it runs far better on leading-edge processors such as Pentium II's than it does on older ones such as non-MMX Pentiums.

How Unreal will perform on different classes of machines:

Non-MMX P166 class machines: Slow rendering; large frame rate variations. We recommend playing in 320x200 resolution if available. We recommend setting the sound playback to 11025 Hz.

P200 MMX: Good rendering speed; some frame rate variations. We recommend running low resolutions like 320x240 or 400x300. We recommend keeping the sound playback at 22050 Hz.

Pentium II; K6-2 with 3DNow!: Very nice rendering speed; consistent frame rate. Software rendering runs smooth in 512x384, 32-bit color resolution. You might try 44 kHz audio for best sound quality.


------------
Requirements
------------

Minimum system requirement:
* 166 MHz Pentium class computer.
* 16 megabytes of RAM.
* 2 megabyte video card.

Typical system:
* 233 MHz Pentium MMX or Pentium II.
* 32 or 64 megabytes of RAM.
* 3dfx Voodoo class 3d accelerator.

Awesome system:
* Pentium II 266 or faster.
* 64 or 128 megabytes of RAM.
* 3dfx Voodoo or Voodoo2 class 3D accelerator.


ps: it supported glide out of the box
 
ps: it supported glide out of the box

Perhaps, but not all Glide cards were made equal. I had a Banshee (still have it, actually) and it took a couple of patches, and me finding out about their existence, before Unreal would even get to the Flyby without crashing.
But when it did, I was so blown away, oh man.... :oops:

PS: Software rendering was WAY too slow on my 133 w/o MMX to enjoy, though
 
Banshee came way later; Unreal was 2-3 years old by then. That problem was due to the Banshee being different from the older chips; it wasn't 100% compatible.
 
Obviously efficient compression plays a HUUUUGE part of this :)
It might be inferred from the article that his compression scheme is... good old polygonal mesh. That would mean the voxel tree's role is reduced to being only an acceleration structure. 'Voxelize()' replaces 'Rasterize()' in the pipeline?
 
Banshee came way later, Unreal was 2-3 years old by then. That problem was due to Banshee being different than the older chips, it wasn't 100% compatible.

More like half a year. I bought mine around the time NaPali was released in early 1999.
 