Intel .pdf on Ray Tracing implementation of Quake Wars

I didn't realise they converted Quake Wars too. Makes for interesting reading, but it goes to show how computationally hungry the ray-traced renderer is at the moment.
 
If we ever get raytracing hardware, it'll mean wavetraced audio will be easy to do.

I have a couple of noob questions:
1. Do you shoot rays from every light source to the camera?
2. If yes, how many?
 

I think it goes camera -> object -> light source.
 
If you shot from light sources, only a minuscule portion of rays would actually contribute anything to the scene, so you shoot from the camera.
 
Ahh, how many rays?
Hang on, how do you know what angle to shoot the rays at?
And if you shoot out a ray, how do you know it will eventually end up at the light source?
 
AFAIK angle doesn't matter, you just trace a ray from the object to the light.



Angle matters for reflections/refraction, obviously.
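
To make the camera -> object -> light flow concrete, here is a minimal, self-contained toy ray tracer (one sphere, one point light) in C++. It is only an illustration of the flow discussed above, nothing from the Quake Wars renderer: primary rays are aimed through each pixel, and the shadow ray is aimed straight at the light, so there is no guessing about whether a ray will "end up" there.

```cpp
// Minimal, self-contained toy ray tracer: camera -> object -> light.
// (Illustration only; nothing here comes from the Quake Wars renderer.)
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Ray/sphere intersection: distance along the (unit-length) ray, or -1 on a miss.
static float hit_sphere(Vec3 center, float radius, Vec3 origin, Vec3 dir) {
    Vec3 oc = origin - center;
    float b = dot(oc, dir);
    float disc = b * b - (dot(oc, oc) - radius * radius);
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return t > 1e-3f ? t : -1.0f;          // small epsilon avoids self-hits
}

int main() {
    const Vec3  sphere_center = {0, 0, -3};
    const float sphere_radius = 1.0f;
    const Vec3  light_pos     = {5, 5, 0};
    const int   W = 48, H = 24;            // tiny ASCII "framebuffer"

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // 1. Primary ray: from the camera (at the origin) through this pixel.
            //    One per pixel here; more if you supersample for anti-aliasing.
            Vec3 dir = normalize({(x - W / 2.0f) / H, -(y - H / 2.0f) / H, -1.0f});
            float t = hit_sphere(sphere_center, sphere_radius, {0, 0, 0}, dir);
            if (t < 0.0f) { std::putchar('.'); continue; }   // missed: background

            // 2. Shadow ray: from the hit point aimed *straight at* the light,
            //    so there is no hoping that a random ray finds the light source.
            Vec3 p = dir * t;
            Vec3 n = normalize(p - sphere_center);
            Vec3 to_light = normalize(light_pos - p);
            bool lit = hit_sphere(sphere_center, sphere_radius, p, to_light) < 0.0f;

            // 3. Simple diffuse shading; reflection/refraction rays (where the
            //    angle does matter) would recurse from here in a full tracer.
            float shade = lit ? std::fmax(0.0f, dot(n, to_light)) : 0.0f;
            std::putchar(" .:-=+*#"[(int)(shade * 7.99f)]);
        }
        std::putchar('\n');
    }
    return 0;
}
```

Running it prints a shaded ASCII sphere; the point is just that every ray's destination is chosen deliberately (a pixel or a light), which is why tracing from the camera is tractable while shooting from the lights mostly wastes rays.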
 
Nice, but they still use "normal" res textures, and there seems to be none of ray tracing's biggest benefit for the lighting. However, the reflections look really nice.
I did test the IRT real-time ray tracer on one PS3. The car (a Ferrari) was 500k polys, had transparent windows, and had reflections on both the car windows and the rear-view mirror. With 4x AA @ 720p the frame rate was about 1-2 FPS; turning off AA brought it up to 5-10 FPS, and turning off reflections as well got me to around 20 FPS. The strange thing is that, according to IRT, if you use 3 PS3s you will get 20+ FPS even with reflections and 4x AA on. Really strange?
But still, in-game real-time ray tracing is many moons away, I think.
 
Daniel Pohl in his paper said:
Creating correct shadows from partially transparent quads is not an easy task for a rasterizer. The most commonly used algorithm for calculating shadows in rasterization (called “shadow mapping”—see http://en.wikipedia.org/wiki/Shadow_mapping) does not deliver additional information that might help in the case of shadows from transparencies.

This brings a question to mind: how do modern (rasterized) games pull off correct shadows from transparent textures?
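
For reference, and only as a sketch of the standard approach the quote describes (not code from the paper; names and the bias value are made up), an ordinary shadow-map lookup boils down to one depth comparison per shaded point. The map stores only the nearest occluder's depth, so there is simply no channel left to say "that occluder was 50% transparent":

```cpp
// Plain shadow-map test: the map holds one depth per texel (nearest occluder
// as seen from the light), so the answer is binary lit/shadowed and any
// transparency of the occluder is lost.
float shadow_map_visibility(const float* shadow_map, int map_w, int map_h,
                            float u, float v,      // receiver in light space, 0..1
                            float receiver_depth,  // receiver depth from the light
                            float bias = 0.002f)
{
    int x = (int)(u * (map_w - 1));
    int y = (int)(v * (map_h - 1));
    float nearest_occluder_depth = shadow_map[y * map_w + x];

    // Either something sits closer to the light (0.0 = fully shadowed) or it
    // doesn't (1.0 = fully lit); there is no in-between for transparent casters.
    return (receiver_depth - bias > nearest_occluder_depth) ? 0.0f : 1.0f;
}
```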
 
If any of you missed it the first time, you can check the movie here.

This report was disappointingly light on implementation details though. Oh well.
 
Thanks! I'm about halfway through and so far it's been a great read. :smile:
 
They don't; shadows cast from transparent objects are tricky to get right and can be done only under a specific set of constraints. Blizzard's implementation of transparent shadow-casters is an interesting example:

http://ati.amd.com/developer/SIGGRAPH08/Chapter05-Filion-StarCraftII.pdf

Nice read, they really highlight a lot of hardware and API limitations. Looks like they're maxing out DirectX9. Maybe in the next 10 years they can max out DirectX 15 or something with SC3.
 

That was my impression as well. It seems like they were really making sacrifices for the lowest common denominator. How important is it for a mid-late 2009 game to support <SM3.0 cards?
 
I guess it depends on what kind of market you are targeting. Blizzard has always developed games that scale very well on lower-end hardware, and that must have had a very positive effect on their bottom line (you can run WoW on pretty much anything, albeit with the detail lowered a lot). The PC gaming market is shrinking and we're in the middle of the worst economic downturn most of us can remember, which will inevitably slow down the upgrade cycle. Under these circumstances, supporting PS2.0 cards will probably yield a nice ROI. Besides, it could also serve as a fallback for people with PS3.0+ cards but without the necessary performance to run the PS3.0+ code paths.
 
That was my impression as well. It seems like they were really making sacrifices for the lowest common denominator. How important is it for a mid-late 2009 game to support <SM3.0 cards?

Perhaps that's one of the differences that allow Blizzard to keep a four-year-old game at the top of the PC charts while nearly every other developer cries about piracy? I still have an SM2.0(b) card in one of my 'puters and my gf still has an SM2.0 one.
 

I am inclined to think that as the game gets older, supporting GPUs that were old at its time of release becomes less important. :???:
 
They don't; shadows cast from transparent objects are tricky to get right and can be done only under a specific set of constraints. Blizzard's implementation of transparent shadow-casters is an interesting example:

http://ati.amd.com/developer/SIGGRAPH08/Chapter05-Filion-StarCraftII.pdf

We are using basically the same technique as Blizzard uses for their transparent shadows. We also use it on soft-shadow impostor geometry in addition to transparent surfaces. Unlike Blizzard, we store both the depth and the transparent filter color (greyscale in our case) in the same shadow map texture. This way the shadow map sampling requires only one tfetch and the shadow map needs less memory (and bandwidth). A G16R16 texture has been enough for our (linear) shadow depth value and the transparency filter value. It's a good and fast format for bilinearly filtered texture fetching (we also need the depth value to be bilinearly filtered, as we use a modified ESM for our shadow depth comparison).

This is a good and fast transparent shadowing technique, and it combines very well with depth-buffer-based shadow mapping (as long as proper care is taken when both are bilinearly filtered together: use sub instead of mul to blend the two maps, or you get subpixel gaps at depth edges). The only real constraint of this technique is that transparent objects cannot cast transparent shadows onto other transparent objects. However, standard opaque shadows can still be cast on transparent objects (and particles if needed).
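
A rough CPU-side sketch of how I read that, with placeholder constants (the ESM exponent and the exact blend expression are not spelled out in the post, so treat them as assumptions): one bilinearly filtered G16R16 fetch yields both the linear depth and the greyscale transparency filter, the depth goes through an ESM-style comparison, and the two terms are combined with a subtract rather than a multiply:

```cpp
#include <algorithm>
#include <cmath>

// One shadow-map texel after bilinear filtering: both channels of the G16R16
// texture, fetched together in a single tfetch.
struct ShadowTexel {
    float depth;   // linear depth from the light, normalized 0..1
    float filter;  // greyscale transparency filter: 1 = no darkening, 0 = opaque
};

// Combined shadow term for one shaded point.
float shadow_term(ShadowTexel s, float receiver_depth)
{
    // ESM-style soft comparison on the bilinearly filtered linear depth.
    // The exponent is a placeholder, not a value from the post.
    const float k = 80.0f;
    float opaque_vis = std::clamp(std::exp(k * (s.depth - receiver_depth)), 0.0f, 1.0f);

    // Blend with the transparency filter using a subtract instead of a multiply,
    // which (per the post) avoids subpixel gaps where bilinear filtering straddles
    // a depth edge. This is one plausible reading of "sub instead of mul"; the
    // exact expression isn't given.
    return std::clamp(opaque_vis - (1.0f - s.filter), 0.0f, 1.0f);
}
```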
 