Larrabee to also be presented at Hot Chips

Beyond3D News
Since the rest of the internet is still at the stage where they're all excited about Larrabee being presented at Siggraph (hint: you guys are ten days late), we thought we'd let you know it will also be presented at Hot Chips, presumably with more of a hardware perspective.

Read the full news item
 
It's amusing how NVIDIA seems to think Intel takes ray tracing more seriously than it actually does, while Intel thinks the same of NVIDIA and rasterisation.

Na, it's 100% marketing FUD on both sides.
Intel: rasterisation (and NV) are dead, ray tracing is the way of the future for awesome cool new effects
NV: Intel is ray-tracing only, do you really want to rewrite your game from scratch for it?

Intel's trying to pull an Itanium (destroy the market with a paper product way before any actual launch), while NV is capitalizing on the lack of real info about LRB.
 
Na, it's 100% marketing FUD on both sides.
I don't think so, NVIDIA is investing a lot of money in RT.
I wouldn't be surprised if NVIDIA is actually investing more in RT than Intel.
 
I don't think so, NVIDIA is investing a lot of money in RT.
I wouldn't be surprised if NVIDIA is actually investing more in RT than Intel.

Yeah, but Nvidia is currently using the investment for Gelato, not gaming. RT for games just doesn't seem very relevant at the mo.

Global illumination, real-time AO (ambient occlusion) and area-lit soft shadows seem like the next generation of effects coming through. All of which are computable without RT, true? And all will probably take another 3-4 years before they are full-scene/full-screen effects in most/all games. By which time RT may just be starting to emerge as an interesting tech for hybrid rendering (if hybrid is a possibility?)
 
Yeah, but Nvidia is currently using the investment for Gelato, not gaming. RT for games just doesn't seem very relevant at the mo.

As far as I know, Nvidia's "Gelato" (even the Pro version) is now 100% free, as its development is being phased out in favor of their new (and far more widespread in the industry) acquisition, Mental Images' "Mental Ray".
 
Yeah, but Nvidia is currently using the investment for Gelato, not gaming. RT for games just doesn't seem very relevant at the mo.
I was not talking about Gelato, Nvidia is investing a lot of money in RT research; they also acquired Peter Shirley's company.
 
Talking about Peter Shirley ... from his blog :
As for NVIDIA, the main question I asked myself to help make the decision was expressed in my last blog post; are hybrid methods a natural architecture for interactive graphics? After much contemplation, I have concluded the answer is yes. For complex geometry rasterization and hi-res screens has many advantages over ray tracing for viewing “rays”, and the solution already works. It is yet to be shown that ray tracing can be similarly efficient for such sampling rates, and I am skeptical that a magic bullet will be found to change that.
 
My bet is that ray tracing will be much faster on NV hardware, and that NV is targeting the movie/commercial/in-game-video ray tracing market.

As for real-time ray tracing, about all that is needed on current raster-based hardware is a fast hardware path for writing depth, conditional on the written depth being farther from the eye than the triangle primitive's depth. Then HiZ (AMD) / Z-Cull (NV) could still work without much of a hardware change. Simply raycast into the triangle, using triangles as bounding geometry to accelerate the raycast. Writing depth is required to solve the overlap problem, else you get billboards... Solve secondary rays with traditional image-space maps (cubemaps, etc). Hybrid methods will be practical very soon (next console generation).
 
Solve secondary rays with traditional image space maps (cubemaps, etc).
With one map per triangle (except for shadow rays) I don't see that as particularly tractable. You can't get away from directly accelerating ray traversal of the bounding volume hierarchy on the GPU ... once you have that everything else is too trivial to worry about.
 
My bet is that ray tracing will be much faster on NV hardware, and that NV is targeting the movie/commercial/in-game-video ray tracing market.
Do you know something we don't?

As for real-time ray tracing, about all that is needed on current raster-based hardware is a fast hardware path for writing depth, conditional on the written depth being farther from the eye than the triangle primitive's depth. Then HiZ (AMD) / Z-Cull (NV) could still work without much of a hardware change. Simply raycast into the triangle, using triangles as bounding geometry to accelerate the raycast. Writing depth is required to solve the overlap problem, else you get billboards... Solve secondary rays with traditional image-space maps (cubemaps, etc). Hybrid methods will be practical very soon (next console generation).
Umh..dunno, triangles don't sound too exciting as an acceleration structure.
 
Not one map per triangle (obviously). Think of maps as simply a way to pool secondary rays by data locality. Don't forget that you can raycast into a map also.

Do you know something we don't?

Just speculation for now...

Umh..dunno, triangles don't sound too exciting as an acceleration structure.

Why not? Triangles can solve rough visibility without searching, then leave fine searching for a fragment shader. Stop thinking about triangles as actual surfaces, and begin thinking of them just as bounding volumes in combination with a good level of detail and hierarchical occlusion system.

How would a traditional ray tracer have any hope of accelerating a really dynamic system with a lot of moving objects? Rebuilding the acceleration structure, which is necessary to get good ray search performance, would be many times more expensive than simply rendering the bounding volumes of the movers (ie leaves of a tree, etc) to start the rays from. Somewhat similar to the reason we radix sort instead of quick/heap/merge sort...
 
Not one map per triangle (obviously). Think of maps as simply a way to pool secondary rays by data locality.
Secondary rays and data locality? That's a good one ;)

Doing a hybrid between object order and image order rendering will only work for the primary and shadow rays ... if rays don't originate from a single point the only way to know if there is an intersection is to do the test per ray.
 
Why not? Triangles can solve rough visibility without searching, then leave fine searching for a fragment shader. Stop thinking about triangles as actual surfaces, and begin thinking of them just as bounding volumes in combination with a good level of detail and hierarchical occlusion system.
I think you answered your own question :) Triangles are not exactly the best acceleration structure, and you want to use them to perform part of the ray traversal; as you say, you'd still need a structure to speed up the rest of your traversal.

BTW..I know it's semantics but I wouldn't say that triangles solve visibility without searching: you are still searching, just in a different way.
Andy put this very well in a long post he wrote on another forum, too bad I can't find it now :)

How would a traditional ray tracer have any hope of accelerating a really dynamic system with a lot of moving objects? Rebuilding the acceleration structure, which is necessary to get good ray search performance, would be many times more expensive than simply rendering the bounding volumes of the movers (ie leaves of a tree, etc) to start the rays from. Somewhat similar to the reason we radix sort instead of quick/heap/merge sort...
You'd still need to (re)build the 'rest' of your acceleration structure, unless your complete structure (triangles + something else) naturally handles rigid transforms.
 
BTW..I know it's semantics but I wouldn't say that triangles solve visibility without searching: you are still searching, just in a different way.
Andy put this very well in a long post he wrote on another forum, too bad I can't find it now :)

Yeah, in terms of hierarchical occlusion culling/lod you are limiting your "search" with temporal locality and log cost pre-Z drawing pass.

You'd still need to (re)build the 'rest' of your acceleration structure, unless your complete structure (triangles + something else) naturally handles rigid transforms.

Yes the "something else" and rest of the traversal, I'm purposely leaving it out here. Without giving out all the details: hint not triangles, doesn't need to get rebuilt, and yes does work with rigid transforms.
 
Yeah, in terms of hierarchical occlusion culling/lod you are limiting your "search" with temporal locality and log cost pre-Z drawing pass.
I see, but let's say I want to compute some dynamic ambient occlusion terms (no screen-space hacks please..)? ;)
Yes the "something else" and rest of the traversal, I'm purposely leaving it out here. Without giving out all the details: hint not triangles, doesn't need to get rebuilt, and yes does work with rigid transforms.
You are a good teaser :)
 
Hmm, I'm wrong; apparently on at least one current platform you can force coarse-tile raster Z culling with depth writes on, if the written depth is farther than the triangle's plane-equation Z.

Anyway, as for secondary rays using maps, http://www.fsz.bme.hu/~szirmay/ibl3.pdf: it's not as if this hasn't been done before (just that GPU performance hasn't been there to do it yet).
 
Anyway, as for secondary rays using maps, http://www.fsz.bme.hu/~szirmay/ibl3.pdf: it's not as if this hasn't been done before (just that GPU performance hasn't been there to do it yet).
If this is raytracing then so is parallax mapping. I'd sooner call this kind of environment map hack rasterization myself (also Franklin Seung-Hoon Cho described this method half a decade before they did, see "Towards Interactive Ray Tracing in Two- and Three-Dimensions").
 
If this is raytracing then so is parallax mapping. I'd sooner call this kind of environment map hack rasterization myself (also Franklin Seung-Hoon Cho described this method half a decade before they did, see "Towards Interactive Ray Tracing in Two- and Three-Dimensions").

Sure, from parallax mapping to cone tracing, it's all fragment-shader ray tracing with different quality/cost trade-offs in the ray intersection test and different acceleration structures. Personally I don't see the point of doing triangle intersection tests at the pixel level: it takes too long, costs too much memory to store the triangles and what is required to shade them, and doesn't work well in terms of LOD or antialiasing. Time to think outside the triangle.
 
Raycasting into rasterized environment maps will not be what people expect when you say you are raytracing on the GPU. Most of the heavy lifting is still done by rasterization and you have none of the elegance or generality of what people will expect from raytracing.

Also if you want to do it right you simply can't leave it at that, because of differences in occlusion from the original point of view and that of the actual ray you will get artefacts ... for refraction and lighting effects it's not so bad because we have a higher tolerance for errors there, but reflections will be noticeably fucked up on occasion. You can make a good guess where the artefacts are and fill them in with real raytracing, but then you actually have to be able to do real raytracing.
 