Okay, here's my attempt at bringing back *some* of the posts I could find floating about on the internet. I'll update each post as I go along i.e. 1 post = 1 page of the original thread I could find.
I did find all of page 2... it may very well be the longest single post in B3D history.
---------------------------------------------------------------------------------------------------------------------------------Rys said:With Intel making noise about acceleration of real-time ray tracing in recent months, especially where it concerns future hardware they're working on, DeanoC found himself a little puzzled as to why.
Pitting real-time Whitted RT vs traditional real-time rasterisation, Deano's article compares the two approaches to see where the pros and cons of each approach lie. Should we be hailing our future shiny ball overlords, or does recent RT chatter need a bit more to back it up?
Have a read and find out!
---------------------------------------------------------------------------------------------------------------------------------Gubbi said:Fool's errand IMO.
Yeah, rasterization is a gross simplification, but so is ray tracing.
Rys said:With Intel making noise about acceleration of real-time ray tracing in recent months, especially where it concerns future hardware they're working on, DeanoC found himself a little puzzled as to why.
Intel are emphasizing ray tracing because it is something their CPUs do better than GPUs. When ray tracing gets interesting (lots of reflections, lots of refractions, lots of objects), branch coherence and memory locality tank, killing GPU performance and making Intel's CPUs look great.
Cheers
---------------------------------------------------------------------------------------------------------------------------------Arwin said:Two things (no, three):
- very well written and accessible article!
- "A hybrid approach will probably offer the best of both worlds." were exactly my thoughts at seeing the topic's title, but then that was obvious enough.
- regarding the indirect lighting thing, wasn't there a company that had some kind of solution to this? (company that worked together with Guerrilla among others?) Or am I confusing things ... (global illumination maybe).
EDIT: Yes, it was indirect lighting:
http://secondintention.com/portfolio/worldlight/
from the page:
Seems to be relevant to the article, and maybe worthy of a B3D interview/investigation?
Key features
- Efficient PRT mesh format using 12 bytes per vertex - only as much data as a floating-point normal!
- CPCA compression allowing arbitrary order of PRT simulation with the same per-vertex size and runtime cost. Higher orders cost more CPU time and CPCA basis table storage, but the GPU cost remains constant.
- Multiresolution shadow maps giving crisp shadows both close-up on small details and in the distance on large buildings. Shadow map selection avoids use of the stencil buffer completely.
- Depth guided screen-space shadow filtering for soft shadows without depth artifacts.
- Local cubemaps for both specular lighting and PRT Spherical Harmonic diffuse lightsource generation.
- Real-time implementation of Geigel and Musgrave tonemapping. This simulates the nonlinear behaviour of different negative and print film emulsions, allowing the user to input film characteristic curve data and see the real-time world rendered as though photographed with the film they have selected.
---------------------------------------------------------------------------------------------------------------------------------hoho said:The main difference between ray tracing and rasterizing is how well they scale:
1) Rasterizing scales linearly with the number of triangles and logarithmically with the number of pixels.
2) Ray tracing scales linearly with the number of pixels and logarithmically with the number of triangles (see the sketch below).
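A toy cost model of the scaling argument as it is usually stated (my own sketch, not anything from the original post): rasterization work grows with the triangle count, while ray-tracing work grows with the pixel count times the roughly logarithmic depth of the acceleration structure.

```cpp
#include <cstddef>
#include <vector>

struct Triangle { /* vertices would go here */ };

// Rough cost model: a rasterizer touches every triangle, regardless of how
// few pixels each one ends up covering.
std::size_t rasterWork(const std::vector<Triangle>& triangles) {
    return triangles.size();                    // ~O(triangles)
}

// A ray tracer shoots one ray per pixel, and each ray walks an acceleration
// structure whose depth grows roughly with log2 of the triangle count.
std::size_t traceWork(std::size_t pixels, std::size_t triangleCount) {
    std::size_t depth = 0;
    for (std::size_t n = triangleCount; n > 1; n >>= 1) ++depth;
    return pixels * (depth + 1);                // ~O(pixels * log(triangles))
}
```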
Another difference is that rasterizing starts with some very basic approximations and adds all sorts of tricks to make it look like real life.
Ray tracing starts from a physically correct model, and to make it fast you can start adding all kinds of tricks (dropping global illumination helps a lot). E.g. you can use screen-space AO just as well with ray tracing as you would with rasterizing. You could even use lightmaps with ray tracing if you really wanted to :smile:
Also, one thing worth noting is that currently the most expensive part of ray tracing is usually not traversing the tree or intersecting triangles but shading. Doing things like bilinearly filtered bump-mapping isn't exactly cheap on a regular CPU, you know
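To give a feel for that shading cost, here is a minimal hand-rolled bilinear texture fetch in C++ (a sketch with a hypothetical single-channel Texture type; a GPU's texture unit does all of this in fixed-function hardware per sample):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Texture {
    int width = 0, height = 0;
    std::vector<float> texels;               // single channel, for brevity
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Bilinear filtering done "by hand": four texel fetches plus three lerps,
// plus the address arithmetic, for every single sample a shader takes.
float sampleBilinear(const Texture& tex, float u, float v) {
    float x = u * tex.width  - 0.5f;
    float y = v * tex.height - 0.5f;
    int   x0 = static_cast<int>(std::floor(x));
    int   y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;
    float a = tex.at(x0,     y0),     b = tex.at(x0 + 1, y0);
    float c = tex.at(x0,     y0 + 1), d = tex.at(x0 + 1, y0 + 1);
    return (a * (1 - fx) + b * fx) * (1 - fy)
         + (c * (1 - fx) + d * fx) * fy;
}
```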
As for some of the points in the article:
Aliasing
Yes, it is a "bit" more difficult with ray tracing, although there has been some research done on the subject. Current implementations don't generally use LOD to reduce geometry aliasing because they try to show how fast they can trace lots of triangles. Adaptive anti-aliasing is another topic that hasn't had too much attention lately but could be quite good at reducing aliasing. Also, CPUs are not fast enough to do anything but bilinear texture filtering in a reasonable amount of time. Intel's Larrabee should have HW units for texture filtering, so this point should be close to being fixed.
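A sketch of the adaptive anti-aliasing idea mentioned above, assuming a hypothetical trace callback that shoots a primary ray through a sub-pixel position: smooth regions keep a single sample, and extra rays are only spent where neighbouring pixels disagree.

```cpp
#include <cmath>
#include <initializer_list>

struct Color { float r = 0, g = 0, b = 0; };

static float colorDiff(const Color& a, const Color& b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// Adaptive AA sketch: keep the single centre sample where the image is
// smooth, and only pay for a 2x2 supersample where neighbours disagree.
// `trace` stands in for whatever shoots a primary ray through (x, y).
Color shadePixelAdaptive(float x, float y,
                         const Color& leftNeighbour, const Color& topNeighbour,
                         Color (*trace)(float x, float y),
                         float threshold = 0.1f) {
    Color centre = trace(x + 0.5f, y + 0.5f);
    if (colorDiff(centre, leftNeighbour) < threshold &&
        colorDiff(centre, topNeighbour)  < threshold)
        return centre;                            // smooth region: 1 ray

    Color sum;                                    // edge region: 4 rays
    for (float dy : {0.25f, 0.75f})
        for (float dx : {0.25f, 0.75f}) {
            Color s = trace(x + dx, y + dy);
            sum.r += s.r; sum.g += s.g; sum.b += s.b;
        }
    return { sum.r / 4, sum.g / 4, sum.b / 4 };
}
```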
Static or moving scenes?
I'd say this is a half-solved problem. Building a bounding interval hierarchy over tens of thousands of triangles takes a few milliseconds on a newer CPU core, and it is possible to parallelize it. It is also possible to build it lazily. Skinning takes as much time as it would with rasterizing.
The only real problem with ray tracing is instancing. If you have ten enemies running towards you, each rendered at a different animation frame, you will have to keep ten copies of those meshes in memory.
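For illustration, a minimal median-split build over triangle centroids, the kind of cheap per-frame hierarchy rebuild described above (the node layout, leaf size, and widest-axis split are assumptions of this sketch, not any particular implementation):

```cpp
#include <algorithm>
#include <array>
#include <memory>
#include <vector>

struct AABB {
    float min[3] = { 1e30f,  1e30f,  1e30f};
    float max[3] = {-1e30f, -1e30f, -1e30f};
    void grow(const float p[3]) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], p[i]);
            max[i] = std::max(max[i], p[i]);
        }
    }
};

struct Node {
    AABB box;
    std::unique_ptr<Node> left, right;
    std::vector<int> tris;                        // leaf if non-empty
};

// Median-split build over triangle centroids: roughly O(n log n), and cheap
// enough to rerun every frame for modest triangle counts, which is the point
// about rebuilding the hierarchy for animated geometry.
std::unique_ptr<Node> build(std::vector<int>& ids,
                            const std::vector<std::array<float, 3>>& centroids,
                            int leafSize = 4) {
    auto node = std::make_unique<Node>();
    for (int id : ids) node->box.grow(centroids[id].data());
    if (static_cast<int>(ids.size()) <= leafSize) { node->tris = ids; return node; }

    int axis = 0;                                 // split along the widest axis
    float best = 0;
    for (int i = 0; i < 3; ++i) {
        float extent = node->box.max[i] - node->box.min[i];
        if (extent > best) { best = extent; axis = i; }
    }
    auto mid = ids.begin() + ids.size() / 2;
    std::nth_element(ids.begin(), mid, ids.end(),
        [&](int a, int b) { return centroids[a][axis] < centroids[b][axis]; });

    std::vector<int> leftIds(ids.begin(), mid), rightIds(mid, ids.end());
    node->left  = build(leftIds,  centroids, leafSize);
    node->right = build(rightIds, centroids, leafSize);
    return node;
}
```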
Global Illumination
It is much more expensive than direct ray tracing, but doable. It will just need about an order of magnitude more performance. Until we have HW that powerful, we can use all the tricks already used with rasterizing.
Is it because these effects are not exactly useful, or because they aren't that easily implementable on current GPUs?
"Luckily for rasterisation-based renderers, real-world scenes made of shiny transparent balls are rare."
I wouldn't mind seeing more translucent stuff in games (liquids, glass), but things like that aren't generally used all that much. I still remember bringing my PC to a halt by putting around 20 glass objects behind each other in the HL1 map editor and trying to look through all of them :smile:
A completely dynamic scene would kill a rasterizer too, as it takes a lot of time to update the geometry and send it to the GPU; it won't be much worse with ray tracing, although BVH building will surely add a bit to it.
"At the same time, though, a completely dynamic scene with lots of overlapping large triangles would kill a real-time ray tracer."
Tessellation helps with those large overlapping triangles, though I doubt many games have such big overlapping triangles anyway.
Tracing primary rays is rather cheap, actually. If we already have good enough HW for tracing secondary rays, then tracing primaries won't be much slower than rasterizing them. It would also make things like overlapping translucent surfaces a lot simpler to render.
"A hybrid approach will probably offer the best of both worlds. Many ray tracers already replace primary rays with rasterisation and only compute secondary rays through traditional ray tracing."
Of course, in the immediate future it would lessen the load on the CPU, allowing it to trace more secondary rays, but I highly doubt it would be viable in the long term.
Yes, but that would require the GPU to have all the information about the entire scene in its memory at all times, including the acceleration structure. You couldn't just trace rays before you've sent all the geometry to be rendered. No matter how you look at it, this kind of GPU rasterizer won't look anything like what we have today.
"The ability to spawn rays in the GPU's shader core would solve many hard problems that rasterisation engines have to face."
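A rough sketch of that hybrid split, assuming the rasterizer has already filled a G-buffer with per-pixel position and normal, and a hypothetical traceRay callback handles the secondary (here, mirror-reflection) rays:

```cpp
#include <cstddef>
#include <vector>

struct Vec3    { float x = 0, y = 0, z = 0; };
struct Color   { float r = 0, g = 0, b = 0; };
struct GSample { Vec3 position, normal; bool covered = false; };

// Hybrid sketch: primary visibility was resolved by rasterization into the
// G-buffer, so the ray tracer only pays for secondary (reflection) rays.
// `traceRay` is a hypothetical stand-in for a real engine's tracer.
std::vector<Color> shadeReflections(
    const std::vector<GSample>& gbuffer, const Vec3& eye,
    Color (*traceRay)(const Vec3& origin, const Vec3& direction)) {

    std::vector<Color> out(gbuffer.size());
    for (std::size_t i = 0; i < gbuffer.size(); ++i) {
        const GSample& g = gbuffer[i];
        if (!g.covered) continue;                 // sky / background pixel

        // View direction and its mirror reflection about the surface normal
        // (normal assumed unit length).
        Vec3 v { g.position.x - eye.x, g.position.y - eye.y, g.position.z - eye.z };
        float d = 2.0f * (v.x * g.normal.x + v.y * g.normal.y + v.z * g.normal.z);
        Vec3 r { v.x - d * g.normal.x, v.y - d * g.normal.y, v.z - d * g.normal.z };

        out[i] = traceRay(g.position, r);         // secondary ray only
    }
    return out;
}
```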
Also, I'm quite sure that ray tracing games on general CPUs will not be done any time soon, as they lack a whole lot of useful HW units and will always lag behind special HW like GPUs. Of course there have been some real-time demos, but I'm talking about game quality here. On the other hand, if we had decent HW with lots of FP power and some special units, it could very well be doable. Larrabee, with its 24+ 4-way SMT cores and 512-bit SIMD, looks like something that could allow it.
Larrabee might not be the only contender for the ray-tracing chip title; there has been other research on the subject. One of the latest is Hardware Ray Tracing using Fixed Point Arithmetic; a bit older is Estimating Performance of a Ray-Tracing ASIC Design (more stuff here). Both offer special-purpose solutions and are likely much more efficient than a general CPU or GPU doing the same thing.
I bet most people have already heard about Intel's demo of ray-traced Quake4. It was first shown in December last year running on a QX6700 at 256x256 and achieving around 17 FPS. This year they made "little" changes to their algorithm (replaced OpenRT with their internal tracer, probably an MLRTA-based one) and showed it on a 2P quad-core Penryn at 1024x1024 running at around 90 FPS. For a low-quality video, see news -> 28th September 2007 on their page.
In short, I wouldn't say it is a fool's errand. It will surely take years* for ray tracing to become a viable alternative for game developers, and there are still problems left to be solved. Of course, simple direct ray tracing won't be the one trying to compete with rasterizing; it will be something like the instant global illumination that was demonstrated a few years ago.
*) my guess 1.5-3 years for first good looking tech-demos, 4-6 for some serious games sold for gamers.
---------------------------------------------------------------------------------------------------------------------------------Lux_ said:First, thank you for the excellent post! It's the quoted part I don't exactly agree with.
hoho said:In short I wouldn't say it is fools errand. It will surely take years* for ray tracing to become a viable alternative for game developers
*) my guess 1.5-3 years for first good looking tech-demos, 4-6 for some serious games sold for gamers.
1) It takes about 4 years to build a game USING the foundation of existing production workflows and supporting toolsets. Introducing something significantly different would introduce delays: the larger the change, the more bugs there are and the more relearning/training is needed. Look at Vista, for example. Nobody wants their product to be delayed for 1.5 years.
2) The hardware for "very good looking" rasterization games is not exactly here yet, as can be seen in the latest batch of DX10 benchmarks. Ray tracing provides better quality, not better speed, and the consumer space is not in a position to choose better quality at the cost of a double-digit drop in framerate just yet.
3) There are "known" ways to significantly speed up rasterization-based rendering: I remember one paper from this year's SIGGRAPH where the author accumulated shader results across frames (can't find the link right now). For example, when a running character was shaded there were big speedups, because large parts remained unchanged across frames.
I don't know how easy it is to introduce this technique using current frameworks, but I imagine it is orders of magnitude easier than similar techniques for ray tracing. Maybe I'm mistaken.
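For what it's worth, a much-simplified sketch of that cross-frame reuse idea (hypothetical per-pixel cache and shadeFull callback; a real reprojection cache also has to handle camera and object motion, which this ignores):

```cpp
#include <cstddef>
#include <vector>

struct Vec3  { float x = 0, y = 0, z = 0; };
struct Color { float r = 0, g = 0, b = 0; };

struct CacheEntry { Vec3 worldPos; Color shaded; bool valid = false; };

// One cache entry per pixel, holding last frame's shading result. If the
// surface visible through the pixel is (nearly) the same world-space point
// as last frame, reuse the stored colour; otherwise shade from scratch.
Color shadeWithReuse(std::size_t pixel, const Vec3& worldPos,
                     std::vector<CacheEntry>& cache,
                     Color (*shadeFull)(const Vec3& worldPos),
                     float tolerance = 1e-3f) {
    CacheEntry& e = cache[pixel];
    float dx = worldPos.x - e.worldPos.x;
    float dy = worldPos.y - e.worldPos.y;
    float dz = worldPos.z - e.worldPos.z;
    if (e.valid && dx * dx + dy * dy + dz * dz < tolerance * tolerance)
        return e.shaded;                       // unchanged surface: reuse

    e.worldPos = worldPos;                     // changed: reshade and refresh
    e.shaded   = shadeFull(worldPos);
    e.valid    = true;
    return e.shaded;
}
```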
---------------------------------------------------------------------------------------------------------------------------------hoho said:Yes, I agree. I was actually thinking of Intel showing something when they release Larrabee around late next year/early 2009; others won't have anything for years to come :smile:
Lux_ said:1) it takes about 4 years to build a game USING foundation of existing production workflows and supporting toolset.
About point 3, what makes it impossible to use similar techniques with ray tracing? As I've said one can use pretty much every rasterizing trick in ray tracing too.
---------------------------------------------------------------------------------------------------------------------------------crystall said:Reading between the lines, it seems to me that they're talking about navigating a level, i.e. a completely static scene on which you can spend as much time as you want to build an optimal tree. By the time you can really handle a ray-traced Quake IV-like game at a decent resolution, with decent AA & AF levels, on general purpose CPUs, GPUs will probably be doing much, much more than that.
hoho said:I bet most people already have heard about Intel demo of ray traced Quake4. It was first shown at December last year running on QX6700 at 256x256 and achieving around 17FPS. This year they made "little" changes to their algorithm (replaced OpenRT with their internal tracer, probably MLRTA based one) and showed it on 2P quadcore Penryn at 1024x1024 running at around 90FPS. For low-quality video see news -> 28th September 2007 on their page.
---------------------------------------------------------------------------------------------------------------------------------hoho said:Seems like someone forgot to watch the demo video
crystall said:Reading between the lines it seems to me that they're talking about navigating a level, that's a completely static scene on which you can spend as much time as you want to build an optimal tree.
http://www.idfun.de/temp/q4rt/videos...elPohl_cut.wmv
hint: ~2:30
---------------------------------------------------------------------------------------------------------------------------------Gubbi said:Ultimately, secondary rays are the entire reason ray tracing exists as a concept. That basically makes ray tracing an exercise in pointer-chasing (heck, there was even a ray tracer in the integer SPEC2000 suite for exactly this reason). That means performance is ultimately limited by latency.
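To make the pointer-chasing point concrete, here is a minimal traversal sketch (hypothetical node layout and a rayHitsBox callback): the indices of a node's children are only known after that node has been fetched, so the walk is a chain of dependent loads bounded by memory latency rather than bandwidth.

```cpp
#include <vector>

struct BVHNode {
    float bounds[6];          // axis-aligned box: min xyz, max xyz
    int   left  = -1;         // child indices into the node array; -1 = none
    int   right = -1;
};

// Count the nodes a ray visits. Each iteration must finish loading the
// current node before the addresses of its children are even known, so the
// loop is dominated by load latency. `rayHitsBox` stands in for the
// ray/AABB intersection test.
int countVisitedNodes(const std::vector<BVHNode>& nodes,
                      bool (*rayHitsBox)(const float bounds[6])) {
    if (nodes.empty()) return 0;
    int visited = 0;
    std::vector<int> stack{0};                // start at the root
    while (!stack.empty()) {
        int idx = stack.back();
        stack.pop_back();
        const BVHNode& n = nodes[idx];        // dependent load: idx came from
        ++visited;                            // a node fetched earlier
        if (!rayHitsBox(n.bounds)) continue;  // prune this subtree
        if (n.left  >= 0) stack.push_back(n.left);
        if (n.right >= 0) stack.push_back(n.right);
    }
    return visited;
}
```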
Rasterizers exploit the great locality rasterization implies (pixels in screen space, textures, etc.). That makes them more limited by bandwidth than latency.
That ultimately favours rasterizers, as bandwidth is a lot easier to improve than latency. The gap between ray tracers and rasterizers has widened over the past many years, and I see no reason for that to change.
Given the same amount of silicon real estate, I can't see ray tracers displacing rasterizers.
Cheers