Unlimited Detail, octree traversals

There have to be "LOD" transitions, or you get ugly aliasing in motion; e.g. a complex object will have "voxel" information that is used when its projected scale is smaller than 1 pixel, and child nodes when it isn't.
Now there is the problem of the voxel having the same color from all perspectives, and absolutely no surface information.
You can store color and other information for each main direction, like Epic did in their SVOGI.
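To make that concrete, here is a minimal sketch of the traversal rule, assuming a node stores one pre-filtered colour per major axis direction (SVOGI-style). All names are illustrative, and the per-child distance update is skipped for brevity:

    // Sketch: descend until a node's projected size drops below ~1 pixel,
    // then splat a direction-dependent colour instead of its children.
    #include <cmath>
    #include <cstdint>

    struct OctreeNode {
        uint32_t dirColor[6];   // filtered colour for +X,-X,+Y,-Y,+Z,-Z
        OctreeNode* child[8];   // all null for a leaf
    };

    static float ProjectedSizePixels(float nodeSize, float distance,
                                     float focalLengthPixels)
    {
        return focalLengthPixels * nodeSize / distance;
    }

    static uint32_t SampleDirectionalColor(const OctreeNode& n,
                                           float vx, float vy, float vz)
    {
        // Pick the stored colour whose face is most aligned with the view ray,
        // so a distant voxel does not look identical from every side.
        float ax = std::fabs(vx), ay = std::fabs(vy), az = std::fabs(vz);
        if (ax >= ay && ax >= az) return n.dirColor[vx < 0.f ? 0 : 1];
        if (ay >= az)             return n.dirColor[vy < 0.f ? 2 : 3];
        return                           n.dirColor[vz < 0.f ? 4 : 5];
    }

    void Traverse(const OctreeNode& n, float nodeSize, float distance,
                  float focalLengthPixels, float vx, float vy, float vz)
    {
        bool isLeaf = (n.child[0] == nullptr);
        if (isLeaf ||
            ProjectedSizePixels(nodeSize, distance, focalLengthPixels) < 1.f) {
            uint32_t c = SampleDirectionalColor(n, vx, vy, vz);
            (void)c;  // splat c into the framebuffer here
            return;   // LOD cut: sub-pixel children never shimmer into view
        }
        for (int i = 0; i < 8; ++i)
            if (n.child[i])   // distance kept constant here for brevity;
                Traverse(*n.child[i], nodeSize * 0.5f, distance,
                         focalLengthPixels, vx, vy, vz);
    }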
 
In order for IHVs to dump rasterizing they'd have to face a method/technology whose advantages outweigh its disadvantages compared to rasterizing; again, I honestly haven't ever heard or read of anything that came close to a paradigm like that. If there ever was, I'd gladly read about it out of pure technical curiosity.

While I fully understand your notion above, I'm afraid that without even a single paradigm that would have had the potential, or nearly the potential, to replace rasterizing, it's somewhat of a moot point in the end.
Larrabee is the closest thing to a paradigm change since Nvidia's first chip attempted quads instead of triangles. It didn't become a commercial success as a graphics accelerator, but it definitely got everyone's attention. Nvidia's first attempt was before my time in this industry though so I don't recall the particulars.
 
Larrabee is the closest thing to a paradigm change since Nvidia's first chip attempted quads instead of triangles. It didn't become a commercial success as a graphics accelerator, but it definitely got everyone's attention. Nvidia's first attempt was before my time in this industry though so I don't recall the particulars.

Was or is there anything to suggest that Larrabee wouldn't have operated as a rasterizing GPU? Just because Larrabee as a GPU didn't contain any fixed-function rasterizing hw, it doesn't mean that it wouldn't have rasterized in the end.

Intel canned the desktop GPU mostly because it wouldn't have been competitive in terms of perf/W, yet the architecture is very much alive for professional markets; more like a "too programmable for its time" beast than anything that was meant to replace rasterizing in any way.

NV's NV1 was from a time when 3D was still in its infancy and a couple of talented blokes could set up a successful company in a garage. More about it here: http://en.wikipedia.org/wiki/NV1

....if the content is accurate and using quadratic texture maps was as complicated as described, then the wiki author(s) might have a point. Which raises the question of whether an API tailored exclusively for it would theoretically have made things a lot easier.
 
In case it is unclear, spamming these forums won't get you editing rights, but it might get you a vacation. The offending posts have been merged.
 
Larrabee is the closest thing to a paradigm change since Nvidia's first chip attempted quads instead of triangles.

I'm pretty sure that it was a rasterizer up until they decided to gut it, because even when you're Intel you need to obey the status quo and fight it gradually, otherwise you're stillborn. It's just that it was a pretty sucky rasterizer compared to the competition across relevant tasks... but it was supposed to be awesome for future tasks tailored to its strengths. A bit like Rampage, Pyramid3D and other mythical dragons :p
 
I agree Larrabee was going to support rasterization but it was mostly a software renderer which is a big change from current GPUs. Obviously too big of a change for the time period it was targeting.
 
It will be interesting, come the next-next gen of consoles, to see if Intel pushes it again. It should be more performant and perhaps more up to par with AMD by then. If Intel wants in on consoles, to use as leverage to build the toolchain, they could push it then for a nice price.
 
I agree Larrabee was going to support rasterization but it was mostly a software renderer which is a big change from current GPUs. Obviously too big of a change for the time period it was targeting.

Does it matter how you achieve N task(s) in any given case? Ok, Intel had the dumb idea to skip ff rasterizing hw like it would have cost an arm and a leg to include at least one raster unit; but as long as LRB would have rasterized in the end, even through sw, it didn't introduce any new or alternative method to rasterizing, just a quite inefficient solution for its time.

It will be interesting, come the next-next gen of consoles, to see if Intel pushes it again. It should be more performant and perhaps more up to par with AMD by then. If Intel wants in on consoles, to use as leverage to build the toolchain, they could push it then for a nice price.

In the meantime, care to elaborate why Intel isn't insisting on any of the LRB ideas in their GenX line of GPUs? Who's to guarantee that exotic HPC hw like Knights-whatever has more chances to land in a next-next generation console than a higher-end SoC with a GenX GPU like Intel is selling today? I don't see anything else BUT SoCs in both Microsoft's and Sony's upcoming consoles, and I see Intel investing in higher SoC bandwidth via eDRAM instead of any disproportionate programmability in their GPU architectures.

Finally, before and above all, you'd have to convince me why Intel could persuade any of the console manufacturers to buy anything from them. Apart from that, the actual question should be whether there will be any consoles beyond the coming ones as we know them. I can see Intel already investing in smart-TV technology, which must be another awkward coincidence.
 
Does it matter how you achieve N task(s) in any given case? Ok, Intel had the dumb idea to skip ff rasterizing hw like it would have cost an arm and a leg to include at least one raster unit; but as long as LRB would have rasterized in the end, even through sw, it didn't introduce any new or alternative method to rasterizing, just a quite inefficient solution for its time.
Intel's goal was obviously to allow for alternative methods to rasterization, and Larrabee likely would have been good at some other methods, but business conditions never allowed them to get that far. So in my mind they attempted a paradigm change; they just weren't able to make it work in that time frame.
 
Intel's goal was obviously to allow for alternative methods to rasterization, and Larrabee likely would have been good at some other methods, but business conditions never allowed them to get that far. So in my mind they attempted a paradigm change; they just weren't able to make it work in that time frame.

The alternative method to rasterizing for Larrabee would have been what exactly? All I can see from my humble perspective is that programmability went out of proportion; there's either too much of everything or too little. You have to keep the right proportions to reach a good balance for each and every timeframe's conditions. It's also my understanding that Intel went for that much programmability probably with higher programming flexibility for professional markets in mind; I can see it in today's MIC architecture, and it clearly has its advantages for HPC. How you could turn that into something that at the same time has clear advantages in 3D against its competition is another chapter.

Even for HPC, and while it has its own advantages, I don't see it being essentially a Kepler/Tesla "killer" of any sort either. In other words, you can either look at Larrabee as something that failed due to conditions not being in Intel's favor, or you can see it as an unbalanced design with quite a few wrong design decisions for something that was meant to excel in both the HPC and 3D markets.
 
Ray tracing is the alternative rendering method everyone thinks of and Intel was investing in that from a research level at least. I think their main goal was to have a more CPU like product that could service the graphics and HPC markets. Your perspective is correct as that didn't work out for them.

I'm not saying Larrabee had "clear advantages under 3D against its competition" if that's what you think I'm saying. Intel obviously thought it would be good enough when they started the project though. GPUs have evolved to be better at compute than they were when Larrabee started.

Seeing the thread we're in, the next question is what architecture is best suited for new ideas, or new spins on old ideas like voxels?
 
Ok, Intel had the dumb idea to skip ff rasterizing hw like it would have cost an arm and a leg to include at least one raster unit;

This is (was) seriously not the problem. It is hardly a matter of adding a small bit of FF logic. It is about how the rendering pipeline as embodied in modern (and not so modern) APIs basically pushes you into a particular data-flow.
 
Ray tracing is the alternative rendering method everyone thinks of and Intel was investing in that from a research level at least. I think their main goal was to have a more CPU like product that could service the graphics and HPC markets. Your perspective is correct as that didn't work out for them.

I don't think any of Intel's engineers were ever THAT naive, or that any of them were unaware of the downsides of ray tracing.

I'm not saying Larrabee had "clear advantages under 3D against its competition" if that's what you think I'm saying. Intel obviously thought it would be good enough when they started the project though. GPUs have evolved to be better at compute than they were when Larrabee started.

I didn't think you were either; my only other point is that Intel couldn't find the right balance for the architecture to be competitive in 3D and HPC at the same time; it ended up being suited only for the latter.

Seeing the thread we're in, the next question is what architecture is best suited for new ideas, or new spins on old ideas like voxels?

As a layman I was thinking that IHVs and Microsoft might, in the long run, consider some sort of micropolygon-oriented architecture, but unless I'm missing something it doesn't seem like we'll ever see any radical changes after all.
 
This is (was) seriously not the problem. It is hardly a matter of adding a small bit of FF logic. It is about how the rendering pipeline as embodied in modern (and not so modern) APIs basically pushes you into a particular data-flow.

I didn't say that it was the primary problem either; as I said, IMHO they just concentrated too much on programmability and lost a good balance between HPC and 3D.

3dcgi mentioned sw rasterizing; in the end LRB was so programmable that you could probably have turned it at will into an IMR, a TBR or a deferred renderer, all through sw. In the given case, however, ability != efficiency.
 
Tried to post in another thread, didn't get past moderation. Perhaps here: goo.gl/eK1SYG
This already occurs elsewhere but, as far as I know, no one has done a voxel renderer purged of any ray, cone, division, multiplication or float filth.

In fact, it is a hitherto unfamiliar method of relating 3-D to 2-D, with perspective and incrementally, without /. For example, you can do perspective-correct MIP texture mapping without a single / occurring. More interesting than the / every n pixels. Here the picture elements are less constrained than the usual [x, x + 1) * [y, y + 1) (yes, pixel = rectangle), i.e., replace the 1s with deltas.
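For anyone wondering how 3-D can be related to 2-D incrementally without any /: a generic sketch (not necessarily what VAR.C does, and it still needs a * during setup) is to cross-multiply the perspective comparison and then carry it forward with additions only:

    // Hedged sketch of division-free perspective tests; a generic technique,
    // not a reconstruction of the poster's VAR.C code.
    #include <cstdint>

    // "Does x/z project left of the bound b?" with z > 0 is equivalent to
    // x < b*z, which needs no division at all.
    inline bool LeftOfBound(int64_t x, int64_t z, int64_t b)
    {
        return x < b * z;
    }

    // Incremental form: with fixed steps dx, dz per pixel (or per octree
    // level), the quantity e = x - b*z changes by the constant
    // de = dx - b*dz, so the inner loop is additions and sign tests only.
    struct EdgeAccumulator {
        int64_t e, de;
        void step()         { e += de; }
        bool inside() const { return e < 0; }
    };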

Bottomlessness is possible (and implemented; there is such a variant in VAR.C).

As for UD, here it is: https://www.google.com/patents/WO2014043735A1
Compare with the '80s work of D. J. Meagher: http://goo.gl/sdjXVG

Let dumbing down, menial parallelism die.

Early UD test (no parallelism involved, whether many-core or SIMD): http://goo.gl/r7p7e9 (code later).

Often updated code & discussion: goo.gl/txUlSl
 
The alternative method to rasterizing for Larrabee would have been what exactly?
I'm late to the party, but didn't see this thread earlier. In case you're really interested: there are several alternatives, and Intel has pushed some research. (Keep in mind Larrabee started somewhere in DX9 times, way before there was any access to compute on GPUs, or anything close to as flexible.)

One alternative is irregular z-buffering, e.g. http://www.eweek.com/c/a/IT-Infrastructure/Inside-Intel-Larrabee/10/
Instead of the usual problem of mapping 'some' depth pixel from the light view to the eye-view depth, you take the actual depth samples from the eye view and use them for rasterization of the shadow map. It's not much more than custom sample offsets for the rasterization and can be quite effective, especially if you consider the workarounds you usually need to get decent shadows (e.g. higher resolution, multiple cascades, filtering, etc.).
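A toy sketch of that idea, with the occluders reduced to plain triangles and all names made up here: eye-view samples keep their exact light-space positions, and the occluder "rasterization" just tests them directly instead of resampling a regular depth map.

    // Toy irregular shadow map: eye-view samples are stored at their exact
    // light-space positions; occluder triangles are rasterized over a coarse
    // grid and tested against those exact samples (no resampling).
    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct Sample { float x, y, depth; bool shadowed = false; };  // light space
    struct Tri    { float x[3], y[3], z[3]; };                    // light space

    struct IrregularZBuffer {
        int w, h; float cell;
        std::vector<std::vector<int>> grid;   // sample indices per cell
        std::vector<Sample> samples;

        IrregularZBuffer(int w_, int h_, float cell_)
            : w(w_), h(h_), cell(cell_), grid(w_ * h_) {}

        void insert(Sample s) {
            int cx = std::clamp(int(s.x / cell), 0, w - 1);
            int cy = std::clamp(int(s.y / cell), 0, h - 1);
            grid[cy * w + cx].push_back((int)samples.size());
            samples.push_back(s);
        }

        // Conservatively visit cells under the triangle's bounding box and
        // test each stored sample against the triangle exactly.
        void rasterize(const Tri& t) {
            float minx = std::min({t.x[0], t.x[1], t.x[2]});
            float maxx = std::max({t.x[0], t.x[1], t.x[2]});
            float miny = std::min({t.y[0], t.y[1], t.y[2]});
            float maxy = std::max({t.y[0], t.y[1], t.y[2]});
            int cx0 = std::clamp(int(minx / cell), 0, w - 1);
            int cx1 = std::clamp(int(maxx / cell), 0, w - 1);
            int cy0 = std::clamp(int(miny / cell), 0, h - 1);
            int cy1 = std::clamp(int(maxy / cell), 0, h - 1);
            for (int cy = cy0; cy <= cy1; ++cy)
                for (int cx = cx0; cx <= cx1; ++cx)
                    for (int i : grid[cy * w + cx]) {
                        Sample& s = samples[i];
                        float z;
                        if (inside(t, s.x, s.y, z) && z < s.depth)
                            s.shadowed = true;   // occluder in front of sample
                    }
        }

        // Barycentric point-in-triangle test; also interpolates occluder depth.
        static bool inside(const Tri& t, float px, float py, float& z) {
            float d = (t.y[1] - t.y[2]) * (t.x[0] - t.x[2]) +
                      (t.x[2] - t.x[1]) * (t.y[0] - t.y[2]);
            if (std::fabs(d) < 1e-12f) return false;
            float a = ((t.y[1] - t.y[2]) * (px - t.x[2]) +
                       (t.x[2] - t.x[1]) * (py - t.y[2])) / d;
            float b = ((t.y[2] - t.y[0]) * (px - t.x[2]) +
                       (t.x[0] - t.x[2]) * (py - t.y[2])) / d;
            float c = 1.f - a - b;
            if (a < 0.f || b < 0.f || c < 0.f) return false;
            z = a * t.z[0] + b * t.z[1] + c * t.z[2];
            return true;
        }
    };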

Another alternative is point rendering; with a good reconstruction filter you could use it to get some kind of proper transparency, motion blur, depth of field etc. It's of course not without flaws, but our current way of rendering also has insane flaws that we've just got way too used to.
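A minimal, purely illustrative sketch of the reconstruction side: splat each point with a Gaussian footprint and normalise by the accumulated weight; driving the footprint per point (by defocus or motion length) is what buys the extra effects.

    // Minimal point splatting with a Gaussian reconstruction filter:
    // accumulate weighted colour and weight per pixel, then normalise.
    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct SplatBuffer {
        int w, h;
        std::vector<float> r, g, b, wsum;
        SplatBuffer(int w_, int h_) : w(w_), h(h_),
            r(w_*h_), g(w_*h_), b(w_*h_), wsum(w_*h_) {}

        // radius can be driven by distance, defocus (depth of field) or
        // motion length; the filter hides the point nature of the data.
        void splat(float px, float py, float radius,
                   float cr, float cg, float cb) {
            int x0 = (int)std::floor(px - radius), x1 = (int)std::ceil(px + radius);
            int y0 = (int)std::floor(py - radius), y1 = (int)std::ceil(py + radius);
            float sigma2 = std::max(radius * radius * 0.25f, 1e-6f);
            for (int y = std::max(y0, 0); y <= std::min(y1, h - 1); ++y)
                for (int x = std::max(x0, 0); x <= std::min(x1, w - 1); ++x) {
                    float dx = x + 0.5f - px, dy = y + 0.5f - py;
                    float wgt = std::exp(-(dx*dx + dy*dy) / (2.f * sigma2));
                    int i = y * w + x;
                    r[i] += wgt * cr; g[i] += wgt * cg; b[i] += wgt * cb;
                    wsum[i] += wgt;
                }
        }

        void resolveInto(std::vector<float>& rgb) const {  // 3 floats per pixel
            rgb.resize(3 * w * h);
            for (int i = 0; i < w * h; ++i) {
                float inv = wsum[i] > 0.f ? 1.f / wsum[i] : 0.f;
                rgb[3*i+0] = r[i]*inv; rgb[3*i+1] = g[i]*inv; rgb[3*i+2] = b[i]*inv;
            }
        }
    };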

Adaptive sampling: in movies it's common to render parts of the screen with increasing sampling resolution until you are under some noise threshold, controlled either by an artist, by statistics or by error metrics. In DX11 you have a way to output a coverage mask for your custom multisampling in a shader, but you cannot really do e.g. deferred shading based on the noise frequency of the g-buffer; you have to do it explicitly in the shader. And even then it's tricky, as you don't have control over the sampling positions (not in a way that would spawn threads). If the rasterizer spawned the needed amount of shading threads, you'd get for shading what you get for geometry anti-aliasing with MSAA.
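Roughly what such a loop looks like per pixel (purely a sketch; the threshold/budget names are made up):

    // Sketch of adaptive sampling: keep taking samples for a pixel until the
    // estimated error falls below a threshold or a budget is exhausted.
    #include <algorithm>
    #include <cmath>
    #include <functional>

    struct AdaptiveResult { float mean; int samplesUsed; };

    AdaptiveResult AdaptivePixel(const std::function<float()>& takeSample,
                                 float noiseThreshold,
                                 int minSamples, int maxSamples)
    {
        float sum = 0.f, sumSq = 0.f;
        int n = 0;
        while (n < maxSamples) {
            float s = takeSample();              // one shading/path sample
            sum += s; sumSq += s * s; ++n;
            if (n >= minSamples) {
                float mean = sum / n;
                float var  = sumSq / n - mean * mean;     // biased estimate
                float err  = std::sqrt(std::max(var, 0.f) / n);
                if (err < noiseThreshold) break;          // converged enough
            }
        }
        return { sum / n, n };
    }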
 
Shade

That a reciprocal of front-to-back octree splatting can be used to endow the world with very convincing real-time light & shadow seems to have escaped notice. Namely, substitute a light source for the (destination) observer, and a lightmap (the higher the resolution/bit depth, the more intricate the light) to be extruded for the occlusion mask. Then shade what is seen from the standpoint of the source.
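A rough structural sketch of that reciprocal splat (the projection and the actual mask extrusion are left abstract; all names are made up here): run the same front-to-back splat, but from the light, into the occlusion mask; whatever the light reaches first is lit, and anything landing on already-covered texels is in shadow.

    // Rough sketch: reuse a front-to-back octree splatter, traversing from
    // the light instead of the camera. Cells that land on an uncovered part
    // of the light's occlusion mask are lit; later (farther) cells that fall
    // on covered parts are in shadow. Footprints are reduced to one texel.
    #include <vector>

    struct Node { Node* child[8]; bool solid; };

    struct LightMask {
        int w, h;
        std::vector<bool> covered;               // 1 bit per lightmap texel
        LightMask(int w_, int h_) : w(w_), h(h_), covered(w_ * h_, false) {}
        bool test(int x, int y) const { return covered[y * w + x]; }
        void set(int x, int y)        { covered[y * w + x] = true; }
    };

    // childOrder must enumerate the 8 children front-to-back as seen from the
    // light; project() maps a cell to an in-range lightmap texel (or returns
    // false if the cell is outside the light's frustum). Both are abstract.
    template <class Project>
    void SplatFromLight(const Node& n, LightMask& mask,
                        const int childOrder[8], Project project,
                        std::vector<const Node*>& litCells)
    {
        int x, y;
        if (!project(n, x, y)) return;           // not seen by the light
        if (mask.test(x, y)) return;             // already occluded: in shadow
        bool leaf = (n.child[0] == nullptr);
        if (leaf) {
            if (n.solid) { litCells.push_back(&n); mask.set(x, y); }
            return;
        }
        for (int i = 0; i < 8; ++i) {
            const Node* c = n.child[childOrder[i]];
            if (c) SplatFromLight(*c, mask, childOrder, project, litCells);
        }
    }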

I am interested in obtaining the early UD demos.
 