Quick tech questions concerning next gen...

Nite_Hawk said:
The disadvantage for raytracing is that for every single pixel you need to iterate over every object and find out which one is closest to the screen. As you can imagine, this can be pretty slow. There are ways you can speed this up however.
Yes of course. Obvious now you mention it. In my mind's eye I was seeing all the objects in 3D space and just sending a ray out there, without thinking about how the computer has to sort through the data to find what surface it hits. Things seem a lot easier in the 'real world'. You know, they ought to invent a device that can fabricate 3D worlds like physical 'holograms'. Sort of...synthesizing reality as it were. Dunno what such a chip would be called though...
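Back on the raytracing part for a second: what Nite_Hawk describes really is just a per-pixel loop that tests the ray against every object and keeps the nearest hit. Here's a rough C++ sketch of that naive inner loop, with made-up sphere/scene types, not taken from any real renderer:

Code:
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

// Made-up minimal types, purely for illustration.
struct Vec3   { float x, y, z; };
struct Ray    { Vec3 origin, dir; };        // dir assumed normalized
struct Sphere { Vec3 center; float radius; };

// Distance along the ray to the sphere, or a negative value on a miss.
float intersect(const Ray& ray, const Sphere& s) {
    Vec3 oc = { ray.origin.x - s.center.x,
                ray.origin.y - s.center.y,
                ray.origin.z - s.center.z };
    float b = oc.x * ray.dir.x + oc.y * ray.dir.y + oc.z * ray.dir.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;          // ray misses this sphere
    return -b - std::sqrt(disc);            // nearest intersection distance
}

// The naive part: every pixel's ray gets tested against every object,
// and only the closest hit survives. No acceleration structure at all.
int closestHit(const Ray& ray, const std::vector<Sphere>& scene, float& tOut) {
    int   best  = -1;
    float tBest = std::numeric_limits<float>::max();
    for (size_t i = 0; i < scene.size(); ++i) {
        float t = intersect(ray, scene[i]);
        if (t > 0.0f && t < tBest) { tBest = t; best = static_cast<int>(i); }
    }
    tOut = tBest;
    return best;                             // index of nearest object, or -1
}

int main() {
    std::vector<Sphere> scene = { {{0, 0, 5}, 1.0f}, {{0, 0, 10}, 2.0f} };
    Ray ray = { {0, 0, 0}, {0, 0, 1} };      // one ray straight down +z
    float t;
    int hit = closestHit(ray, scene, t);
    std::printf("hit object %d at t=%.2f\n", hit, t);   // expect object 0 at t=4.00
    return 0;
}

The "ways you can speed this up" are basically spatial structures like grids, kd-trees or bounding volume hierarchies that let you skip most of that inner loop.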
 
Yes of course. Obvious now you mention it. In my mind's eye I was seeing all the objects in 3D space and just sending a ray out there, without thinking about how the computer has to sort through the data to find what surface it hits.
Yeah, the big challenge with raytracing would have to be buffering the scene data in local VRAM or something. Well, if the ART VPS chips are any indication, it doesn't take a whole lot to at least show a real improvement over software raytracers. Realtime at HD resolution with, say, 500 shadow rays per pixel is still a further problem, though.
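To put a rough number on that last part (assuming "HD" means 720p and a 30fps target; the 500 shadow rays per pixel figure is from above):

Code:
#include <cstdio>

// Back-of-envelope shadow-ray budget. Resolution and framerate are my
// own assumptions; the 500 rays/pixel figure comes from the post above.
int main() {
    const double pixels       = 1280.0 * 720.0;  // "HD" taken as 720p
    const double raysPerPixel = 500.0;           // shadow rays only
    const double framesPerSec = 30.0;

    const double raysPerSecond = pixels * raysPerPixel * framesPerSec;
    std::printf("~%.1e shadow rays per second\n", raysPerSecond);  // ~1.4e10
    return 0;
}

That's on the order of ten billion shadow rays per second before a single primary or bounce ray is counted, which is why it's still a further problem.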

These guys -- http://www.piqsoftware.com/projects/freon27/ -- also have some interesting work going on, but AFAIK there hasn't been any real progress on it, and there's no physical hardware or FPGA layout, just a software simulator.

You know, they ought to invent a device that can fabricate 3D worlds like physical 'holograms'. Sort of...synthesizing reality as it were. Dunno what such a chip would be called though...
Hmmm... a reality... synthesizer... where have I heard that before? ;)
 