Larrabee's Rasterisation Focus Confirmed

B3D News

For many months, researchers and marketing fanatics at Intel have been heralding the upcoming 'raytracing revolution', claiming rasterisation has run out of steam. So it is refreshing to hear someone actually working on Larrabee flatly deny that raytracing will be the chip's main focus.

 
That's a nice quote on the "how did this strange thing happen?" question, but if I were to put a date and place on where the messaging went off the rails, it would be 8/30/2006 and right here: http://www.mercextra.com/blogs/aei/2006/08/30/the_coming_comb/

But, finally, can we talk now about how raytracing gets folded into DX, over what time frames, and following what model?

Will MS stamp their authority on a solution sooner rather than later? Or will they let some proprietary API solutions compete for a while before picking "best of breed" as the basis for inclusion in DX? Will middleware from 3rd parties play a big role in moving ISVs up the learning curve?
 
Yay, I was hoping someone here would notice Tom's post :D

Now we can get back to the interesting conversations, like the questions that Geo brings up.
 
But, finally, can we talk now about how raytracing gets folded into DX, over what time frames, and following what model?
Never ... raytracing, when it's done, will be handled by general parallel programming, not OS-level APIs.
 
That's good, because I remember a recent interview in which Carmack said raytracing stinks, basically. He did allow that, with Intel's process advantage, it might have three times the clock speed of the competition to overwhelm the fact that raytracing might be three times as slow, i.e. a 3 GHz GPU that performs like the competition's fastest 1 GHz chips.

And I thought that raised the question: why not just go with a traditional architecture and be three times faster than the other guys instead? Sounds like the new Intel is heading in that direction.
 
Why would D3D try to integrate ray tracing? Seems like a bad idea to me because that would be trying to strictly lock down data structures, ruining whatever flexibility ray tracing would afford.

Jawed
 
Well, no surprise there.
Also the thing about it not having much in terms of fixed hardware... Depends on how you look at it I suppose. With that many cores/threads, you can simply dedicate one or more of them to tasks that were previously hardwired. The end result is the same: input data here, get output data there, in parallel with whatever other rendering tasks the chip is performing.

As I already pointed out elsewhere, Intel already does a bit of that: http://softwarecommunity.intel.com/articles/eng/1487.htm
As you can see, it will spawn threads on the execution units for clipping and coefficient/interpolation setup. So part of the rasterizer on Intel's IGP has been moved back into 'software' (not sure if any other GPU does this as well?).
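For illustration, here's a toy sketch of what 'moving setup into software' means (Python standing in for driver code, with made-up triangle data): the per-triangle edge-coefficient computation that a fixed-function setup unit would otherwise do is simply farmed out to worker threads.

```python
from concurrent.futures import ThreadPoolExecutor

def edge_coefficients(tri):
    """Per-edge coefficients (A, B, C) for the edge function
    E(x, y) = A*x + B*y + C used by a tile rasteriser."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return [
        (y0 - y1, x1 - x0, x0 * y1 - x1 * y0),
        (y1 - y2, x2 - x1, x1 * y2 - x2 * y1),
        (y2 - y0, x0 - x2, x2 * y0 - x0 * y2),
    ]

triangles = [
    [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
    [(1.0, 1.0), (5.0, 1.0), (1.0, 5.0)],
]

# Farm triangle setup out to worker threads, the way a many-core chip
# could dedicate cores to a task that used to be hardwired.
with ThreadPoolExecutor(max_workers=2) as pool:
    setups = list(pool.map(edge_coefficients, triangles))
```

The point is only that the input/output contract is unchanged; whether the coefficients come out of dedicated silicon or a thread is invisible to the rest of the pipeline.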
 
Never ... raytracing, when it's done, will be handled by general parallel programming, not OS-level APIs.

If the same hardware resources are doing both rasterisation and raytracing, don't you need to be able to schedule them somehow rationally, and wouldn't that require (or if not require, be made much easier by) "talking" to the resource through a common interface?
 
Why would D3D try to integrate ray tracing? Seems like a bad idea to me because that would be trying to strictly lock down data structures, ruining whatever flexibility ray tracing would afford.

Jawed


Standard APIs always trade flexibility for predictability and portability. I wouldn't be surprised if there were proprietary lower-level access that co-existed as well, but I'd think ISVs would demand some level of standardisation and portability between solutions, for all the same reasons we have DX in the first place.

Edit: Or said another way, Jen-Hsun will give up raytracing to the x86 crowd when they pry it from his cold dead fingers.
 
Why would D3D try to integrate ray tracing? Seems like a bad idea to me because that would be trying to strictly lock down data structures, ruining whatever flexibility ray tracing would afford.

Same reason that Pixar and RenderMan have added a lot of support for raytracing within shaders...

For some things, it not only gives better results, but it's faster as well.

If people want to read up on how rays can be used within the raster graphics domain, I'd highly suggest going to graphics.pixar.com and reading some of the papers.

Ray Tracing for the Movie 'Cars' is a good primer.

Aaron Spink
speaking for myself inc.
 
Standard APIs always trade flexibility for predictability and portability. I wouldn't be surprised if there were proprietary lower-level access that co-existed as well, but I'd think ISVs would demand some level of standardisation and portability between solutions, for all the same reasons we have DX in the first place.
I dare say ISVs who want to ray trace but don't want to build it from scratch will happily use a library like Havok (which might gain direct support for "ray tracing intrinsics" if it doesn't have them already :?: ) or have access to ray tracing through some game engine, like Unreal.

I think treating ray tracing as just another general computation problem that can run (at least partially) on the GPU is the way forward. The more advanced developers are champing at the bit to be able to do general computation freely on the GPU, so ray tracing is just some code+data.
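In that spirit, here's a deliberately naive sketch of the "just some code+data" view: a ray-sphere intersection routine with nothing API-specific about it, the kind of kernel that could run alongside any other general computation. (Plain Python for readability; the numbers and names are illustrative only.)

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None on a miss.
    Solves |o + t*d - c|^2 = r^2 as a quadratic in t."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None         # ignore hits behind the origin
```

Nothing here needs a dedicated OS-level API; it's arithmetic over whatever scene data structure the developer chooses.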

The problem of pipelining "non-graphics" computations, consuming/producing irregular data structures and allowing developers to dynamically construct networks of kernels lies at the heart of getting general computation "right". Larrabee does this right, by default - it's the incumbent IHVs who've got to adapt.

Jawed
 
Same reason that Pixar and RenderMan have added a lot of support for raytracing within shaders...
Didn't stop these guys from implementing a hybrid rasterisation + ray tracing renderer:

http://graphics.stanford.edu/papers/i3dkdtree/

Their big problems are SIMD divergence penalty and limited per-ray register allocation. I don't see how a ray-tracing-specific API is going to solve those problems. They note that D3D10 partially eases some of the restrictions of DX9, but it doesn't go far enough. Any substantial change to the API/GPU in this direction will be driven by general computation.
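The divergence penalty is easy to model: if a packet of rays executes in SIMD lockstep, the whole packet runs as long as its deepest ray, and finished lanes just sit masked off. A toy model (hypothetical traversal depths, 4-wide packets):

```python
def packet_utilization(ray_depths, width=4):
    """Fraction of SIMD lanes doing useful work when rays run in lockstep:
    every lane in a packet is issued until the *deepest* ray finishes."""
    total_useful = total_issued = 0
    for i in range(0, len(ray_depths), width):
        packet = ray_depths[i:i + width]
        steps = max(packet)            # lockstep: the longest ray sets the pace
        total_useful += sum(packet)    # steps each ray actually needed
        total_issued += steps * len(packet)
    return total_useful / total_issued

# Coherent packet: all rays reach the same depth, so no lanes idle.
coherent = packet_utilization([8, 8, 8, 8])
# Divergent packet: one deep ray keeps three finished neighbours masked off.
divergent = packet_utilization([8, 2, 2, 2])
```

Incoherent secondary rays push utilisation toward the divergent case, and no API wording fixes that; it's a property of the hardware's SIMD width versus the rays' behaviour.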

Jawed
 
If the same hardware resources are doing both rasterisation and raytracing, don't you need to be able to schedule them somehow rationally, and wouldn't that require (or if not require, be made much easier by) "talking" to the resource through a common interface?
Support for virtualizing resources so you can run GPGPU and rendering at the same time will come, but that's hardly raytracing-specific support.
 
I dare say ISVs who want to ray trace but don't want to build it from scratch will happily use a library like Havok
Heed my words: any hardware company which decides to base its strategy around the competence and efficiency of third party software teams that they do not pay directly is asking for trouble. A LOT of trouble.
Plus, I'm still pretty convinced raytracing is performance-sensitive enough that it makes more sense to have a hardware-specific implementation exposed through a common API.
And in general, Tom Forsyth mentions they'll sport a bunch of other features that GPUs don't/won't have. If that's not in DX11... how do they expose it?
 
And in general, Tom Forsyth mentions they'll sport a bunch of other features that GPUs don't/won't have. If that's not in DX11... how do they expose it?

Well, the way I read it, they will expose the 'bare metal' to developers who like to get down-and-dirty, but also have standard D3D/OpenGL drivers for compatibility with existing software and developers who don't want to 'roll their own'.
So perhaps these features are mainly exposed by their GPGPU/software rendering programming interface?
 
Well, the way I read it, they will expose the 'bare metal' to developers who like to get down-and-dirty, but also have standard D3D/OpenGL drivers for compatibility with existing software and developers who don't want to 'roll their own'.
So perhaps these features are mainly exposed by their GPGPU/software rendering programming interface?
Unless the two can work at the same time for the same application, that's a losing proposition though. But certainly an extension of the way CUDA interacts with OpenGL/Direct3D might be interesting if done right.
 
Heed my words: any hardware company which decides to base its strategy around the competence and efficiency of third party software teams that they do not pay directly is asking for trouble. A LOT of trouble.
:???: How does this relate to Intel and Havok? :???: Or, are you saying Intel's onto a winner? :???:

Plus, I'm still pretty convinced raytracing is performance-sensitive enough that it makes more sense to have a hardware-specific implementation exposed through a common API.
Suggestions for these hardware-specific aspects of ray tracing to be included in D3D welcome :D

And in general, Tom Forsyth mentions they'll sport a bunch of other features that GPUs don't/won't have. If that's not in DX11... how do they expose it?
How can D3D11 support features that "GPUs won't have"? Or are you suggesting that D3D11 will be built solely for Larrabee?

If Larrabee supports multiple concurrently executing kernels and virtualises the resources used by the D3D pipeline, I expect it to be running a software D3D pipeline and various non-D3D general compute kernels in parallel. e.g. Havok, which should run concurrently with D3D rendering.

Since ray casting is a part of a physics engine, how far is it from Havok to a ray tracing library sold by Intel? NVidia is surely doing the same with PhysX/CUDA.

Jawed
 
I'm still hoping for something like DirectPhysics and/or DirectRT from Microsoft. I don't want to have to write code for 3 different GPUs because all 3 IHVs use their own formats and interfaces.

Some people hate Microsoft's monopoly, but the fact that they can actively force a standard on everyone is invaluable. I'd love to be able to just make calls to something like "DirectPhysics", and it'll automatically go to whatever physics engine or drivers or hardware acceleration is available.
 
:???: How does this relate to Intel and Havok? :???: Or, are you saying Intel's onto a winner? :???:
Oh, you said 'like Havok'. I think I wasn't very clear at all, sorry; what I meant is that either Havok supports raytracing on all hardware that is capable of it in an efficient way, or it won't pick up. And when it comes to 'standard' third-party solutions that would work for Intel/NVIDIA/AMD, I am very skeptical that would be a good strategy.

Certainly Intel can use Havok and NVIDIA can use Ageia as 'backdoors' for graphics raytracing if they wanted to; however, unless they were nearly optimal for the other guy's hardware too (and AMD's ideally), I don't see how that can gain traction.

Suggestions for these hardware-specific aspects of ray tracing to be included in D3D welcome :D
Well, some would be in favour of a GPGPU aspect à la CUDA for future D3Ds, but as I said, I'm skeptical that is the right direction for raytracing specifically. For some other things, of course, it is a very good idea.

How can D3D11 support features that "GPUs won't have"? Or are you suggesting that D3D11 will be built solely for Larrabee?
I'm not suggesting anything; just read all of Tom Forsyth's blog post and I think you'll see what I'm talking about. And indeed I'd be very curious about Tom's thoughts (or that of anyone else working on Larrabee - Matt?) on API issues and standardization. As I said, this problem isn't exclusively related to raytracing.

P.S.: I wrote a noticeable chunk of the original news piece, but the final version was written by consensus - so do not assume the views presented in the news piece are 100% exactly those of anyone specific on staff, although all of us agree with the vast vast majority of what's in the piece (and parts where that didn't happen were just removed, obviously).
 
And indeed I'd be very curious about Tom's thoughts (or that of anyone else working on Larrabee - Matt?) on API issues and standardization.
What's wrong with x86 as a standard? Let the middleware worry about APIs. To sell Larrabee they first and foremost need it to rasterize, and that's apparently Tom's job. It doesn't imply that other extensions will be offered through DirectX 11. That's just one of the many libraries it will be able to run (in my expectation). Not happy with the DX11 geometry shaders? Write your own in x86, stream out, stream in...
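As a toy illustration of that 'roll your own' idea (a hypothetical shader, with plain Python standing in for x86 code): stream primitives in, run an arbitrary per-primitive function that can cull or amplify, stream the results out.

```python
def software_geometry_stage(primitives, shader):
    """Stream primitives through a user-written 'geometry shader' that may
    emit zero or more primitives for each input, collecting the out-stream."""
    out_stream = []
    for prim in primitives:
        out_stream.extend(shader(prim))
    return out_stream

# A toy shader: cull lines whose x runs backwards, duplicate the rest.
def cull_or_duplicate(line):
    (x0, _y0), (x1, _y1) = line
    if x1 < x0:
        return []            # cull: emit nothing for this primitive
    return [line, line]      # amplify: emit two copies

stream_out = software_geometry_stage(
    [((0, 0), (1, 0)), ((2, 0), (1, 0))], cull_or_duplicate)
```

The shader is just a function over the in-stream, unconstrained by whatever D3D11 happens to specify, which is the whole appeal of a fully programmable part.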
 