How can rasterization be replaced by raytracing?
You still have to draw the initial textures and geometry.
I don't follow your logic here. You don't have to *draw* anything before tracing rays. You still have to create the actual textures and models, but that has nothing to do with having to draw something. A lot of realtime raytracers do use the GPU to dump a Z-buffer to accelerate the first hit, but that's not a requirement by any means.
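To make the Z-buffer trick concrete, here's a hedged sketch of the idea: if a rasterizer has already produced a depth buffer, the primary-ray hit point can be read back instead of traced, and only secondary rays (shadows, reflections) need to traverse the scene. The function names and the linear-depth-along-the-ray convention are illustrative assumptions, not any particular engine's API.

```python
def first_hit_from_zbuffer(origin, direction, depth):
    """Reconstruct the primary hit point from a depth-buffer readback.
    direction must be unit length; depth is the distance along the ray
    (an assumed linear-depth convention for this sketch)."""
    return tuple(o + depth * d for o, d in zip(origin, direction))

def shade(origin, direction, depth, trace_secondary):
    # The primary hit comes "for free" from the rasterized Z-buffer;
    # only rays after the first bounce traverse the scene structure.
    hit = first_hit_from_zbuffer(origin, direction, depth)
    return trace_secondary(hit)
```

The point is just that the rasterizer is an optional accelerator for the first bounce, not a prerequisite for tracing rays at all.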
I thought you were trying to come up with reasons why raytracing would replace rasterization, though? :smile:
I could still come up with plenty of those. For the most part, on the hardware side, it means needing a lot of memory ports, a lot of cache, and a lot of TLP -- the SaarCOR and Freon 2/7 samples do some pretty interesting stuff for their level, but because they're not nVidia or ATI, it's probably not going to go anywhere. If SaarCOR's team could scale their designs up to 300 million transistors running at 500 MHz, you'd basically be seeing some rather interesting results... Will it actually happen? I doubt it.
Certainly, there are ultimately things you can't do without at least raycasting -- your example of sub-pixel geometry, plus non-linear and time-dependent cameras and other similar things -- aside from the fact that you actually get true per-pixel perspective. The fact that anything per-pixel is inherently straightforward in a raytracer (pixels in the outer loop), as opposed to per-vertex in a rasterizer, is actually more powerful than you let on. And of course, being by nature the quintessential "embarrassingly parallel" problem doesn't hurt from a hardware design standpoint.
My big problem is that you can't achieve anything effectively by mutating and stripping down theory to suit hardware in an obfuscated and otherwise unsuitable way. Making something needlessly complex and limited in scope is pretty much the ticket for anything that involves world-level sampling on rasterizers. Rasterizers are designed with certain limitations in mind, and achieving higher-order rendering techniques with them means finding highly impractical ways to circumvent those limits. None of the crazy dreams you see out there will probably ever make it into a real product. Real products always tend to stay well within the hardware's designed limits -- try to obscure that, and you are invariably asking for trouble. With raytracing, it's all simply a matter of power (on all fronts, that is). Simple stuff works, complex stuff always breaks down, and raytracing variants are really simple.
Certainly, the first viable place in my mind to introduce raytracing hardware would be a game console or something similar that isn't weighed down by legacy nonsense. On a PC, you have to worry about the history of games, and the fact that the user basically doesn't understand or give a damn about how things get the job done -- just that it always works. The only problem is that the people who could afford to do this are the same ones who would sooner shoot themselves than admit that raytracing is innately superior.