raytracing questions

1. Is it basically just going back to the way things were before hw-accelerated 3D?

2. Couldn't it be harder or take longer to program, since nothing can just be turned on/off? Wouldn't it kind of be like the Saturn (like raytracing, where colored lighting and transparency had to be done through software) compared to the PS1 (like rasterization, which had hardware transparency, Gouraud shading, and light sourcing)?

In #2 it was just an analogy; it was the quickest thing I could think of.
 
1. Is it basically just going back to the way things were before hw-accelerated 3D?
Nope. If you think raytracing is slow now, imagine doing it on a machine that may not even have a floating-point unit, has 1/1000th of the RAM (or less), and the list goes on. The only people raytracing back then were pretty much the people who still use raytracing now.

2. Couldn't it be harder or take longer to program, since nothing can just be turned on/off? Wouldn't it kind of be like the Saturn (like raytracing, where colored lighting and transparency had to be done through software) compared to the PS1 (like rasterization, which had hardware transparency, Gouraud shading, and light sourcing)?
There was no raytracing on the Saturn; I think you're confusing it with tiling -- and I'm not sure how, since tiling and raytracing have incredibly little in common.
 
OK. Thanks=]

I had thought that raytracing referred to all software (CPU/general-purpose shader) based rendering. Before 3D accelerators, PC games were CPU-rendered, and I thought that raytracing was going back to that.

You read my analogy wrong, but that's OK. What I was saying was that programming for the Saturn was like programming for shader-based hardware, since it had to do some things (e.g., transparency and light sourcing) in software that the PS1 had as fixed functions.
 

Nope, it's simply an algorithm, and at that it's not terribly dissimilar to rasterization...

Code:
// Raytracing
for all fragments
    for all triangles
        for all lights
            // shade a fragment

Code:
// Rasterization
for all triangles
    for all fragments
        for all lights
            // shade a fragment
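
To make the difference concrete, here's a minimal, self-contained C++ sketch of the first of those two loops (the raytracing one). It isn't anyone's production renderer, just an illustration -- the one-triangle scene, the pinhole camera at the origin, the Möller-Trumbore intersection routine and the ASCII output are all assumptions made up for the example.

Code:
// Minimal raytracer sketch: for all fragments -> for all triangles -> for all lights
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) { return a * (1.0f / std::sqrt(dot(a, a))); }

struct Triangle { Vec3 v0, v1, v2; };
struct Light    { Vec3 pos; float intensity; };

// Möller-Trumbore ray/triangle test: returns the hit distance, or -1 on a miss.
static float intersect(Vec3 orig, Vec3 dir, const Triangle& tri) {
    Vec3 e1 = tri.v1 - tri.v0, e2 = tri.v2 - tri.v0;
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return -1.0f;           // ray parallel to the triangle plane
    float inv = 1.0f / det;
    Vec3 tv = orig - tri.v0;
    float u = dot(tv, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(tv, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = dot(e2, q) * inv;
    return t > 1e-4f ? t : -1.0f;
}

int main() {
    const int W = 48, H = 24;
    std::vector<Triangle> tris   = { {{-1, -1, 3}, {1, -1, 3}, {0, 1, 3}} };
    std::vector<Light>    lights = { {{2, 2, 0}, 1.0f} };
    Vec3 eye = {0, 0, 0};

    for (int y = 0; y < H; ++y) {                        // for all fragments...
        for (int x = 0; x < W; ++x) {
            // ray from the eye through this pixel's centre on a z = 1 image plane
            Vec3 dir = normalize({(x + 0.5f) / W * 2 - 1, 1 - (y + 0.5f) / H * 2, 1});
            float nearest = 1e30f; int hit = -1;
            for (int i = 0; i < (int)tris.size(); ++i) { // ...for all triangles...
                float t = intersect(eye, dir, tris[i]);
                if (t > 0 && t < nearest) { nearest = t; hit = i; }
            }
            float shade = 0.0f;
            if (hit >= 0) {
                Vec3 pos = eye + dir * nearest;
                Vec3 n = normalize(cross(tris[hit].v1 - tris[hit].v0, tris[hit].v2 - tris[hit].v0));
                if (dot(n, dir) > 0) n = n * -1.0f;      // two-sided: face the normal toward the viewer
                for (const Light& l : lights)            // ...for all lights
                    shade += l.intensity * std::max(0.0f, dot(n, normalize(l.pos - pos)));
            }
            std::putchar(hit < 0 ? '.' : (shade > 0.5f ? '#' : '+'));
        }
        std::putchar('\n');
    }
    return 0;
}

Each pixel asks every triangle whether its ray hits it, keeps the nearest hit, and only then shades it; the rasterization loop turns that inside out.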
 

As for that ray tracing algorithm, the simplest version looks something like this:
Code:
for all fragments
{
     nearestCollision = infinity;
     for all triangles
     {
          collision = findCollision();      // ray-triangle intersection distance
          if(collision < nearestCollision)
          {
               nearestCollision = collision;
          }
     }
     if(nearestCollision < infinity)        // shade only if the ray actually hit something
     {
          for all lights
          {
               shadePixel(nearestCollision);
          }
     }
}
So it's a little simpler than what you wrote (especially if you do highly complex shading in a scene with very high depth complexity). Finding the collision between a ray and a triangle is very slow, though, so you need to do as few of those tests as possible. There are three usual ways to deal with that.
The first is optimising the intersection algorithm as much as possible -- so you use a very cheap intersection test.
The second is using SIMD to process 2x2 ray packets at the same time.
The third (and most important) is using some scene hierarchy; the fastest is a SAH KD-tree for static scenes and probably a BIH for dynamic ones.

With correct use of these optimisations (the third is the most important; the first two don't make as big a difference) you can render ~half a million triangles interactively on two quad-core CPUs, with correct hard shadows (approximate soft shadows too; correct soft shadows using area lights are possible as well, but not interactively for such complex scenes), reflections and refractions.
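
To give a concrete idea of the kind of cheap test such a hierarchy leans on, here is a small sketch of the classic "slab" ray/AABB intersection -- the box test a KD-tree or BVH node can use to reject whole groups of triangles before any ray-triangle test runs. The structs and the little demo in main() are just assumptions for illustration.

Code:
#include <cstdio>
#include <algorithm>
#include <utility>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 lo, hi; };       // axis-aligned bounding box around a node's triangles

// Slab test: true if origin + t*dir enters the box for some t in [tMin, tMax].
// invDir holds 1/dir per axis, precomputed once per ray, so the test is only multiplies and compares.
bool hitAABB(Vec3 origin, Vec3 invDir, AABB box, float tMin, float tMax) {
    const float o[3]  = {origin.x, origin.y, origin.z};
    const float id[3] = {invDir.x, invDir.y, invDir.z};
    const float lo[3] = {box.lo.x, box.lo.y, box.lo.z};
    const float hi[3] = {box.hi.x, box.hi.y, box.hi.z};

    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (lo[axis] - o[axis]) * id[axis];   // entry/exit distances for this pair of planes
        float t1 = (hi[axis] - o[axis]) * id[axis];
        if (t0 > t1) std::swap(t0, t1);               // the ray may travel in the -axis direction
        tMin = std::max(tMin, t0);                    // shrink the interval the ray spends inside the box
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;                // interval became empty: the box is missed
    }
    return true;
}

int main() {
    AABB box = {{-1, -1, 2}, {1, 1, 4}};
    Vec3 o = {0, 0, 0}, d = {0, 0, 1};
    Vec3 inv = {1.0f / d.x, 1.0f / d.y, 1.0f / d.z};  // division by zero gives inf, which this demo tolerates
    std::printf("hit: %d\n", hitAABB(o, inv, box, 0.0f, 1e30f));   // prints "hit: 1"
    return 0;
}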

So the final algorithm looks something like this (of course there are several other approaches to KD-tree traversal, and traversal isn't as important as the KD-tree build -- the difference between a SAH KD-tree and another kind of KD-tree can be as much as 2x):
Code:
// KD tree traversal pseudo-code (simplified: the ray only ever descends into one child)
for all fragments
{
     int i = 0;    // start at the root node

     while(node[i].hasChildren)
     {
          i = FindIntersection of ray with left or right child; // let's say it returns the child's index
     }

     for all triangles in node[i]    // only this leaf's triangles are tested against the ray
     {
          for all lights
          {
               shadePixel;
          }
     }
}

The main problem, of course, is that there is no hardware acceleration.
Anyway, there is one more difference between rasterization and raytracing: rasterizing a triangle is just a bunch of projection equations, but ray tracing means finding ray-triangle collisions (and that's not as simple as a projection equation).
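
For comparison, here's a sketch of what those projection equations amount to for a single triangle: project the three vertices through a pinhole camera, then test pixel centres against three 2D edge functions. The camera at the origin looking down +z, the image-plane mapping and the triangle are assumed purely for illustration.

Code:
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Perspective projection of one vertex into pixel coordinates
// (pinhole camera at the origin looking down +z, image plane at z = 1).
static Vec2 project(Vec3 v, int W, int H) {
    float px = v.x / v.z, py = v.y / v.z;             // the "projection equation"
    return {(px + 1) * 0.5f * W, (1 - py) * 0.5f * H};
}

// Edge function: which side of the directed edge AB the point P lies on.
static float edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    const int W = 48, H = 24;
    Vec3 tri[3] = {{-1, -1, 3}, {1, -1, 3}, {0, 1, 3}};

    // "for all triangles": project the vertices once...
    Vec2 a = project(tri[0], W, H), b = project(tri[1], W, H), c = project(tri[2], W, H);

    // ...then "for all fragments": evaluate the three edge functions at each pixel centre.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p = {x + 0.5f, y + 0.5f};
            float e0 = edge(a, b, p), e1 = edge(b, c, p), e2 = edge(c, a, p);
            bool inside = (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                          (e0 <= 0 && e1 <= 0 && e2 <= 0);   // accept either winding
            std::putchar(inside ? '#' : '.');                // "for all lights / shade" would go here
        }
        std::putchar('\n');
    }
    return 0;
}

For this particular pinhole setup the edge functions end up marking the same pixels as shooting a ray through each pixel centre and testing the triangle, up to precision at the edges.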
 
The optimizations that you mention are applicable to both ray tracing and rasterization. The "simple" version above is apt because it more clearly highlights the differences in the algorithms; once you optimize them both, they start to look a lot like one another...

Anyway, there is one more difference between rasterization and raytracing: rasterizing a triangle is just a bunch of projection equations, but ray tracing means finding ray-triangle collisions (and that's not as simple as a projection equation).
They are fundamentally the same math: they do the same edge tests, and they produce the exact same results for single-ray-origin (i.e. camera, point light, etc.) queries on a regular grid.
 
Well, I wouldn't quite say the "exact same" results simply because projection in rasterization is done at a geometry level rather than a per-sample level. You'd only have the same result if the largest quadrilateral unit of geometry occupied a 2x2 quad of samples (i.e. - each sample on your grid at least gets one unique vertex).
 
Uhh, I'm pretty sure you'll generate the same samples given "reasonable" rasterization rules. What is fundamentally different about evaluating edge equations of rays that go through the centers of pixels in 3D space compared to projecting the geometry onto the image plane and evaluating 2D edge equations at those pixel centers? You should get the same results modulo precision errors, which are well-handled at least on the rasterization side... maybe I'm missing something.
 
Simply the problem that linearity of a geometric figure in 3 dimensions does not necessarily equal linearity of that same figure in its 2D projection. Because those 2D edge equations mistakenly assume an edge stays linear in 2D, you will get a different result than if every point on that edge were projected for every sample it crosses. The problem only becomes less significant when the geometry elements individually occupy less on-screen real estate or the view FOV angle is very narrow (both cases where the potential error is generally smaller), and no non-linear warps like you might use on a shadow map are applied. When you do an explicit projection at every sample on your grid (which is essentially what that first-hit raycast is), you're effectively not making the assumption that "linear in 3D" == "linear in 2D".
 
Not necessarily, just generally.

Straight lines will generally be projected to straight lines ... people like it that way, since our eyes work like that. Renders with mismatched FOVs are not the norm.
 
But our eyes work like projecting onto a curved surface, not a flat plane... I want to see renderers take that into account :p.
 
Well of course rasterization has to be a linear/affine projection onto a flat plane... but assuming you're shooting those same rays in the raytracer (i.e. all have the same origin and intersect the centers of the projection plane pixels as usual), I'm still pretty sure you'll get exactly the same result. Yes you can do more in the raytracing case (fish-eye projections, etc. etc.) but in the case that's equivalent to the rasterization case - and indeed the most common one by far for eye rays - I don't see how the math would be any different.
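
As a toy illustration, here's the one place the two cases differ in a raytracer -- the ray generation. The pinhole version is the projection a rasterizer can reproduce; swap in, say, an equidistant fisheye mapping and nothing downstream of ray generation has to change. The FOV value and the particular mapping are just illustrative assumptions.

Code:
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / l, a.y / l, a.z / l};
}

// Pinhole camera: rays from one origin through pixel centres on a flat plane at z = 1.
// This is the linear projection a rasterizer can reproduce exactly.
Vec3 pinholeRay(int x, int y, int W, int H) {
    float nx = (x + 0.5f) / W * 2 - 1;
    float ny = 1 - (y + 0.5f) / H * 2;
    return normalize({nx, ny, 1.0f});
}

// Equidistant fisheye: distance from the image centre maps linearly to the angle
// away from the view axis, i.e. a projection onto a curved "retina". A rasterizer
// can't express this with a single linear projection.
Vec3 fisheyeRay(int x, int y, int W, int H, float fovRadians) {
    float nx = (x + 0.5f) / W * 2 - 1;
    float ny = 1 - (y + 0.5f) / H * 2;
    float r = std::sqrt(nx * nx + ny * ny);    // 0 at the image centre, ~1 at the edges
    float theta = r * fovRadians * 0.5f;       // angle away from the +z view axis
    float phi = std::atan2(ny, nx);            // direction around the axis
    return {std::sin(theta) * std::cos(phi),
            std::sin(theta) * std::sin(phi),
            std::cos(theta)};
}

int main() {
    Vec3 a = pinholeRay(0, 0, 64, 64);
    Vec3 b = fisheyeRay(0, 0, 64, 64, 3.14159f);   // ~180 degree fisheye
    std::printf("pinhole (%.2f %.2f %.2f)  fisheye (%.2f %.2f %.2f)\n",
                a.x, a.y, a.z, b.x, b.y, b.z);
    return 0;
}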
 
But our eyes work like projecting onto a curved surface, not a flat plane... I want to see renderers take that into account :p.

But renderers project images onto a flat screen, not onto your eye, so they don't need to take into account what the eye does. In fact they shouldn't, since then you would get the effect applied twice: the image still has to travel through your eye.

If the renderer's FOV is identical to the FOV your screen occupies in your vision, then a flat projection is a perfect match. It probably won't match, though, unless you sit extremely close to your screen. In that case other forms of projection will probably look better for wide FOVs, I admit, even though they're still not a correct projection.
 