Lighting Logic Map + Raytracer + HD textures vs unified shader/stream/VPU based GPUs

Flux

What if you made a GPU that featured a dedicated ray-tracing unit, a dedicated triangle-drawing unit (an ASIC), a TMU cluster block (for HD textures), and a lighting-behavior block (an FMAC/gate cluster) that calculates lighting behavior from the ray trace for the HD textures, based on the material's real-life optical constants (refraction/reflection)?

What advantages would it have over the normal shader/triangle based cards?
 
You'd have nice light simulation but unpredictable memory access patterns and generally unpredictable performance. And of course disgruntled programmers with 15 years of rasterizing experience under their belts.

You might want to try and dig up some of the old SaarCOR presentations. That was an attempt to do the hardware raytracing accelerator approach for realzies, but didn't gain much traction anyway. Actually it seems most of the information about it has been buried and wiped away.
 

Why would it have random memory access patterns?
 
Because rays could bounce every which way in a particular scene, depending on its contents and the position of the lights? Or at least that's what I understand from people more knowledgeable about such things than I am. :)
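To make the "rays bounce every which way" point concrete, here's a minimal sketch (not from the thread; the function name is made up) showing how a bounced ray's direction, and therefore which triangles and acceleration-structure nodes get fetched next, depends entirely on the surface it happens to hit:

```python
import math

def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

# The same downward ray bounces straight up off a flat floor (normal = +y)...
up = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))

# ...but shoots off sideways from a 45-degree slope, so the geometry it
# touches next lives in a completely different part of memory.
s = 1.0 / math.sqrt(2.0)
side = reflect((0.0, -1.0, 0.0), (s, s, 0.0))
```

The addresses a secondary ray will touch can't be known until the primary hit is resolved, which is exactly why prefetching and coherent batching get hard.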
 
Would it matter if you had a cluster of ray-tracing nodes (logic gates/FMACs), each with a decent amount of fast, low-latency on-die 1T-SRAM-Q (1 GB total, 45 nm bit cell, ~40 mm²) to use per RT node?

I am not saying this IS the GPU. These are just features of a GPU.
 
It certainly couldn't hurt. Then again, probably every GPU designer in the world would drool like a little girl if you told them they could add 1 GB of fast, low-latency on-die memory for "free", and the capabilities of today's GPUs would jump significantly.
It still wouldn't change the fact that you would have to deal with huge numbers of random 32-bit addresses. And what would those "RT nodes" do, exactly?
 

RT nodes are small ray-trace logic blocks, each with its own memory block (out of a total of 1000 × 1 MB of 1T-SRAM-Q).

Memory allocation is handled by other logic ICs in the RT node.
 
Yes, but what part of ray tracing does it handle? Ray-triangle intersection? Spawning new rays? Building/maintaining the acceleration structure?
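For reference, the ray-triangle intersection test mentioned here is usually something like the Möller–Trumbore algorithm; a rough Python sketch (function names are my own, not anyone's hardware spec):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return distance t along the ray, or None on a miss."""
    e1 = tuple(b - a for a, b in zip(v0, v1))   # triangle edge v0 -> v1
    e2 = tuple(b - a for a, b in zip(v0, v2))   # triangle edge v0 -> v2
    pvec = cross(dirn, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = tuple(b - a for a, b in zip(v0, orig))
    u = dot(tvec, pvec) * inv         # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(dirn, qvec) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None

# A ray fired straight down at the unit triangle hits it one unit away:
t_hit = ray_triangle((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                     (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

This is only one of the three jobs listed above; spawning rays and maintaining the acceleration structure are separate (and arguably harder) problems.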
 

The RT processor just casts as many rays as possible (at 24-30 fps, within a tile/scaled sight region, similar to what the PowerVR SGX does).
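A back-of-the-envelope sketch of what "as many rays as possible at 24-30 fps, per tile" could mean. The throughput figure and tile size below are assumptions for illustration, not numbers from the thread; only the arithmetic is the point:

```python
# Hypothetical numbers -- only the budgeting arithmetic is the point here.
RAYS_PER_SECOND = 300_000_000        # assumed aggregate RT-node throughput
FPS = 30
TILE_W = TILE_H = 32                 # PowerVR-style screen tiles (assumed size)
SCREEN_W, SCREEN_H = 1280, 720

tiles_x = -(-SCREEN_W // TILE_W)     # ceiling division
tiles_y = -(-SCREEN_H // TILE_H)
tiles = tiles_x * tiles_y

rays_per_frame = RAYS_PER_SECOND // FPS
rays_per_tile = rays_per_frame // tiles
rays_per_pixel = rays_per_tile / (TILE_W * TILE_H)
```

Under these made-up numbers you get on the order of ten rays per pixel per frame, before a single bounce is traced; secondary rays eat into the same budget.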

The lighting unit (maybe a lighting-physics-only TEV-like unit) just lights the textures mapped to the triangles according to real-world lighting physics, say refraction, diffusion, reflection, and opacity, and compares the ray trace's properties with the lighting texture's properties on the triangle/geometry. The RT'er doesn't perform the lighting; it only determines the intensity/direction of the light source.
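If the lighting block really were driven by a material's real-world optical constants, the reflection/refraction split would come from something like the Fresnel equations. A common cheap approximation (Schlick's), driven purely by the refractive indices, might look like this (a sketch, not anyone's actual hardware):

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of Fresnel reflectance.

    n1, n2 are the refractive indices on either side of the surface
    (defaults: air -> glass); cos_theta is the cosine of the angle
    between the incoming ray and the surface normal.
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2    # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Glass reflects ~4% of light head-on, but nearly 100% at grazing angles:
head_on = schlick_reflectance(1.0)
grazing = schlick_reflectance(0.0)
```

The catch, as the replies point out, is that the reflected fraction has to go *somewhere*: without tracing secondary rays, knowing the reflectance doesn't tell you what the surface reflects.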

The triangle unit is an ASIC designed exclusively for drawing JUST triangles. The triangles are then mapped by the TMU array unit with a high-resolution (tiled) texture for the polygonal mesh. The RT/lighting unit then lights them with accurate, realistic lighting.
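The proposed division of labor, as described, could be sketched as a straight-line pipeline. All names and data shapes here are hypothetical; this just restates the post as runnable pseudocode:

```python
def triangle_unit(mesh):
    """ASIC stage: turn the mesh into drawn screen-space triangles."""
    return [("triangle", tri) for tri in mesh]

def tmu_array(triangles, texture):
    """TMU stage: map a high-resolution (tiled) texture onto each triangle."""
    return [(kind, tri, texture) for kind, tri in triangles]

def rt_light_unit(textured, light_intensity, light_dir):
    """RT/lighting stage: shade using intensity/direction from the RT'er."""
    return [(tri, tex, light_intensity, light_dir)
            for _, tri, tex in textured]

frame = rt_light_unit(
    tmu_array(triangle_unit(["tri0", "tri1"]), "brick_hd"),
    light_intensity=0.8,
    light_dir=(0.0, -1.0, 0.0),
)
```

Note how strictly feed-forward this is: nothing ever flows back from the lighting stage to the geometry stage, which is the root of the "where do reflections come from?" objection below.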
 
So basically you're trying to solve a DirectX 7-class problem with some other type of hardware, and you picked ray tracing because it's supposed to be photorealistic by default.

You make lighting entirely fixed-function (although based on "real" physics). You talk about refraction, reflections, and so on, yet your description of an "RT processor" looks just like a rasterizer that's unable to trace secondary rays, and thus unable to provide reflections.
And you've even added a "triangle unit", which I guess is the normal rasterizer. Why?

How much of a grasp do you have on what has to be done mathematically to get 3D graphics on screen? Forget implementation for now.
 
Shaders + TMUs are the way of the future. Once manufacturing processes get small enough, we'll see less fixed-function hardware.

Larrabee was cancelled because of its performance per watt.
 