Realtime Ray-tracing Processors.

Real-time Raytracing processors the next big thing?


DiGuru said:
Jaws said:
DiGuru said:
ERP said:
I remember the first time I saw that Pixar demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or what not, yet to this day I've yet to see an interactive game that matches its 'realism'. Or maybe I still have my rose-tinted goggles on.

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

Renderman in a lot of ways is just a collection of hacks, and as such it's a really good example of how far the hacks can get you.

You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.

Well, you could argue that anything simulated on a computer is a series of hacks! ;) I suppose I was referring to the term 'hack' in a relative sense, indirectly using Renderman as a reference point, where mainstream 3D aims to match its rendering quality through a series of its own 'hacks'. I assume Pixar is selective about what it ray traces due to cost rather than quality? If it's cost, then there's a bigger incentive for faster ray tracing hardware and algorithms, no?

DiGuru, I'm not sure what you're asking me to choose here? 3D can be 'artistically' pretty or 'realistically' pretty... I'm referring to realistically pretty here as it's less subjective. ;)

Yea, but realistically is split between the realism of Hollywood (that looks much better) and "real" realism (more like Half-life 2, very gritty).

To produce brilliant art, you need brilliant artists. But you can create very nice art by giving regular artists great tools. So, we can have a few brilliant pieces, created by a team of outstanding people, or we can have those same pieces AND a lot of very nice ones.

Those teams produce only one or two masterpieces, and it doesn't happen every year that one is produced. Especially with computers, we can create hardware and software that does the hacks and other hard stuff, freeing the artists from all those constraints. For computer graphics as I see it, that would require clever GPUs that use ray-tracing.

On the other hand, the method that uses the brute force approach is known to most people. It is not very artist-friendly, has lots of dead-ends and requires a major in graphics programming, but it is how it has to be done, right?

IMO, a few brilliant titles created by a few outstanding individuals is a scenario that will always be present, whatever tech is being used, as true talent is always sparse and is a necessity for the industry to inspire others. The Doom, Half-Life and Unreal guys would be an example. They have written cutting edge 3D engines to inspire others and have even been financially astute enough to license these engines to 'lesser' mortals, thereby raising industry standards. IMO, some new kid on the block needs to do the same and create a 'ray tracing' game engine that inspires others.

Yeah, 3D originated from geeks in R&D and not artists. As the tools matured over the years, artists have grasped the tech but still need a degree in programming like you say. Due to the competitive nature of this 'graphics whore' industry, people will always try to tinker with the low-level metal to extract every ounce of performance. ;)

IMO, I think the true culprit is DirectX. The movers and shakers of the mainstream PC graphics industry will try to cater for whatever the 'next' DirectX dictates, and to a lesser extent OpenGL. The question is, if the next iterations of these APIs promote a 'ray tracing' API, will the industry suddenly clamour for their cards to support it a la SM3 etc. :?:
 
Cryect said:
Maybe PowerVR will wake up and smell the coffee from us geeks wanting TBDR.

• 9+M triangles, 16K line images
BTW am I the only one bothered by this low poly count?

There's a rumour going around the Console forum that Dreamcast was the only console to do TBDR. ;)

I was bothered by this, but considering it's 'old' silicon (220nm, 110 mm2 die and IIRC 100MHz), and from the patent it seems that most of the silicon is there for raytracing and not shifting geometry, it's reasonable, no :?:
 
Like I said they got bought by Apple and that was back in November 1999. I think their web site has been gone since then. I would love for somebody to explain their technology and what happened to it. I remember it was really exciting at the time.

Sorting algorithms... Basically another company along the lines of GigaPixel, Stellar Graphics, etc... The big deal (supposedly) with their technology was that their sorting algorithms were derived from data-mining expertise and supposed to be more efficient than any of their competitors (make what you will of that)... They burned up all their venture capital though and never finished (they had 3 chips planned). Apple's acquisition was pretty much an IP/talent grab...

And just for an interesting tidbit, the initial architectural director of the GeForce6, FX (and a whole slew of involvement on the GF4, GF3, GF2, GeForce, and TNT) was also one of the architects for Raycer's scanline rendering system...

IMO, the offline ray traced 'family' of rendered images is the truest representation of the physical world in 3D, and most people would agree that it would be great if we could do that 'realtime' and make it 'interactive' in a games environment. IMO, two of the biggest factors, well done lighting and animation in an interactive scene, can significantly enhance the realism.

Actually, if you're trying to obtain a particular look, ray-tracing can really suck. It's useful for REALLY nice, accurate reflections and REALLY nice shadows, and that's about it...

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

PRMan didn't even get ray-tracing 'till really recently... Antz used BMRT as a ray-server for the few scenes that had any raytracing...

You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.

Well, ray-tracing by itself is just as much a hack as anything else. There is no "true" solution. It presents as many problems as it solves...

I suppose I was referring to the term 'hack' in a relative sense, indirectly using Renderman as a reference point, where mainstream 3D aims to match its rendering quality through a series of its own 'hacks'. I assume Pixar is selective about what it ray traces due to cost rather than quality? If it's cost, then there's a bigger incentive for faster ray tracing hardware and algorithms, no?

Ray tracing involves its own costs (depending on the renderer). However, in films (let's say Pixar), simply ray tracing a scene or object can present its own problems. For one, it's a lot harder to control unpleasant/undesirable output.

So, unless we can convince enough technical people that they are willing to stick their necks out to support ray-tracing, we will be using more of the same brute force chips.

The more important question is: do we really need ray-tracing that badly? The only really useful thing that ray-tracing offers is shadows IMO...
 
archie4oz said:
The more important question is: do we really need ray-tracing that badly? The only really useful thing that ray-tracing offers is shadows IMO...

Isn't it ironic - at first people spent tons of inventiveness adding yet another bit of shinier light into their synthetic images, and now they're spending twice the effort to remove that light from those same images..
 
Jaws said:
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names),
"Luxo Jr" was the name of the animation and, as others have pointed out, they don't really use ray tracing all that much.
 
Jaws said:
There's a rumour going around the Console forum that Dreamcast was the only console to do TBDR. ;)

I was bothered by this, but considering it's 'old' silicon (220nm, 110 mm2 die and IIRC 100MHz), and from the patent it seems that most of the silicon is there for raytracing and not shifting geometry, it's reasonable, no :?:

Hmmm, Dreamcast is the only console to do TBDR. The Kyro and Neon 250 are similar to what the Dreamcast had in it. Anyways TBDR has nothing to do with raytracing.
 
Cryect said:
Jaws said:
There's a rumour going around the Console forum that Dreamcast was the only console to do TBDR. ;)

I was bothered by this, but considering it's 'old' silicon (220nm, 110 mm2 die and IIRC 100MHz), and from the patent it seems that most of the silicon is there for raytracing and not shifting geometry, it's reasonable, no :?:

Hmmm, Dreamcast is the only console to do TBDR. The Kyro and Neon 250 are similar to what the Dreamcast had in it. Anyways TBDR has nothing to do with raytracing.
He may have been confused because the early PVR tech referred to "ray casting"
 
My gut is there will be 4 to 5 big 'next things' before real-time ray tracing is with us at the consumer level.

So definitely within 6 - 8 years, and I guess algorithms can get a lot better by then; otherwise the data and bandwidth requirements would swamp you. You need better algorithms for dealing with all those light sources and illumination types.
 
Ok. So, has the current brute force hardware succeeded in doing all that can be done with ray-tracing faster? Even if it has, what could a ray-tracing chip on the same scale accomplish?

But is that the point? No. It is all about making money versus creating great tools. We all know the x86 architecture, MS-DOS, Windows and D3D aren't the best solutions. They're just the most popular, whatever the technical merit. So we are stuck with them.

At this moment, the current graphics hardware has to change radically. Do we leave that to the management and CEOs, or do we, the technical people, want a say in it?
 
DiGuru said:
At this moment, the current graphics hardware has to change radically. Do we leave that to the management and CEOs, or do we, the technical people, want a say in it?

IMO you should leave it to the artists. The technical people will tend to lean towards solutions which are technically interesting, rather than necessarily the best solution to get what the punter wants onto the punter's desk.

RT is interesting to folks because it appeals to the physicist/mathematician in every computer geek as being the "right" way to model the interaction of light with an environment. That doesn't necessarily make it the best solution for an industry that is founded on the willing suspension of disbelief.
 
archie4oz said:
AzBat said:
Like I said they got bought by Apple and that was back in November 1999. I think their web site has been gone since then. I would love for somebody to explain their technology and what happened to it. I remember it was really exciting at the time.

Sorting algorithms... Basically another company along the lines of GigaPixel, Stellar Graphics, etc... The big deal (supposedly) with their technology was that their sorting algorithms were derived from data-mining expertise and supposed to be more efficient than any of their competitors (make what you will of that)... They burned up all their venture capital though and never finished (they had 3 chips planned). Apple's acquisition was pretty much an IP/talent grab...

And just for an interesting tidbit, the initial architectural director of the GeForce6, FX (and a whole slew of involvement on the GF4, GF3, GF2, GeForce, and TNT) was also one of the architects for Raycer's scanline rendering system...

Archie, cool. Thanks for the info! I always wondered what happened with them. One of the guys I used to know at Rendition went to work with them. Evidently some other Rendition guys were there too.

Tommy McClain
 
archie4oz said:
IMO, the offline ray traced 'family' of rendered images is the truest representation of the physical world in 3D, and most people would agree that it would be great if we could do that 'realtime' and make it 'interactive' in a games environment. IMO, two of the biggest factors, well done lighting and animation in an interactive scene, can significantly enhance the realism.

Actually, if you're trying to obtain a particular look, ray-tracing can really suck. It's useful for REALLY nice, accurate reflections and REALLY nice shadows, and that's about it...

What kind of look are you referring to that would really suck?

archie4oz said:
You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.

Well, ray-tracing by itself is just as much a hack as anything else. There is no "true" solution. It presents as many problems as it solves...

I'm curious as to what these problems are that can't be solved?

archie4oz said:
The more important question is: do we really need ray-tracing that badly?

As I mentioned earlier, it seems that the mainstream realtime graphics industry is stuck on rails, with a significant barrier to entry for alternative technologies and hence a lack of demand for them. I suppose if someone demonstrated a realtime "ray traced" game engine that showcased the hardware/technology, then we might see that demand, but hey, this is the classic chicken and egg situation... :(
 
Simon F said:
Cryect said:
Jaws said:
There's a rumour going around the Console forum that Dreamcast was the only console to do TBDR. ;)

I was bothered by this, but considering it's 'old' silicon (220nm, 110 mm2 die and IIRC 100MHz), and from the patent it seems that most of the silicon is there for raytracing and not shifting geometry, it's reasonable, no :?:

Hmmm, Dreamcast is the only console to do TBDR. The Kyro and Neon 250 are similar to what the Dreamcast had in it. Anyways TBDR has nothing to do with raytracing.
He may have been confused because the early PVR tech referred to "ray casting"

Off topic: Nah... a misunderstanding guys, I know it has nothing to do with ray tracing. :D I was just referring to a 'silly' thread on the console forum about TBDR and Dreamcast! ;)

Back to topic,

Simon F said:
Jaws said:
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names),
"Luxo Jr" was the name of the animation and, as others have pointed out, they don't really use ray tracing all that much.

Thanks... was the 'Luxo Jr' demo fully raytraced, unlike Pixar movies?

archie4oz said:
The only really useful thing that ray-tracing offers is shadows IMO...

I take it you mean that 'all' other effects, except shadows, can be achieved by 'cheaper' alternatives at the same quality? What about GI, reflections, refractions etc?

If we only take shadows into consideration, their importance to the realism of a scene can be taken for granted and underestimated. Realistic 'soft' shadows can take a flat-looking 3D scene and significantly bring it to life.
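Since the thread keeps coming back to shadows, the basic mechanism is worth sketching. This is a minimal, hypothetical example (hard shadows only, spheres as the only occluders, all names made up for illustration): a shadow ray is cast from the surface point toward the light, and any hit closer than the light means the point is in shadow.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.
    `direction` is assumed to be normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def in_shadow(point, light_pos, occluders):
    """Cast a shadow ray from `point` toward the light; any occluder
    hit closer than the light means the point is shadowed."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    d = [v / dist for v in to_light]
    # offset the origin slightly to avoid self-intersection ("shadow acne")
    origin = [p + 1e-4 * v for p, v in zip(point, d)]
    for center, radius in occluders:
        t = ray_sphere(origin, d, center, radius)
        if t is not None and t < dist:
            return True
    return False

# A unit sphere at the origin blocks the light for a point beneath it
occluders = [((0.0, 0.0, 0.0), 1.0)]
in_shadow((0.0, -2.0, 0.0), (0.0, 5.0, 0.0), occluders)   # → True (shadowed)
in_shadow((5.0, -2.0, 0.0), (0.0, 5.0, 0.0), occluders)   # → False (lit)
```

Soft shadows, as in the images below, are the expensive part: instead of one shadow ray per point, you cast many rays towards an area light and average the results.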

[Image: landing_1.jpg]


This is a very simple looking image of Luxo. As I mentioned earlier, two of the biggest factors, good lighting and animation, can bring the image to life without the need for 50-layer textures and millions of polygons... I'm still amazed by this nearly 20 year old video of Luxo >>> link...



[Image: 559.jpg]


But look at the complex soft shadows in this image. The intricate shadowing and lighting is so realistic. Will we be able to do something like this in realtime with 'brute force' hardware, or realtime GI/raytracing hardware, in the future?
 
Ok, we need "brute force" hardware to do GI. GI is a great lighting technique. However, we need to find a different GI algorithm than ray tracing (if you consider ray tracing to be GI). I hope a full-GI algorithm that's fast enough to be used in games pops out of nowhere and handles every light situation accurately while getting rid of artifacts. I doubt infinite texture layers (which of course is impossible) will be able to achieve what GI can do. However, GI still needs to be combined with high geometry detail.
 
g__day said:
My gut is there will be 4 to 5 big 'next things' before real-time ray tracing is with us at the consumer level.

So definitely within 6 - 8 years, and I guess algorithms can get a lot better by then; otherwise the data and bandwidth requirements would swamp you. You need better algorithms for dealing with all those light sources and illumination types.

What are the 4/5 next big things you refer to?

DiGuru said:
Ok. So, has the current brute force hardware succeeded in doing all that can be done with ray-tracing faster? Even if it has, what could a ray-tracing chip on the same scale accomplish?

But is that the point? No. It is all about making money versus creating great tools. We all know the x86 architecture, MS-DOS, Windows and D3D aren't the best solutions. They're just the most popular, whatever the technical merit. So we are stuck with them.

At this moment, the current graphics hardware has to change radically. Do we leave that to the management and CEOs, or do we, the technical people, want a say in it?

Protecting your interests will come easy if you're in a monopoly or cartel! ;) Like Intel and AMD have the PC chip market sewn up, so do ATI and Nvidia with D3D! ;) These companies have their R&D investments and IPs to protect and will go in the direction of the D3D and OpenGL APIs. So I presume future hardware directions will go where these APIs go. So if you could influence whoever controls these API directions, then you would influence the hardware directions, no? How about a push for OpenRT? :cool:

I'm curious as to the differences between the mainstream 3D games market and the niche, high-end realtime visualization market that dictate the 3D hardware market. What is the '3D food chain'? Are significantly different technologies being used in these markets?

nutball said:
IMO you should leave it to the artists. The technical people will tend to lean towards solutions which are technically interesting, rather than necessarily the best solution to get what the punter wants onto the punter's desk.

RT is interesting to folks because it appeals to the physicist/mathematician in every computer geek as being the "right" way to model the interaction of light with an environment. That doesn't necessarily make it the best solution for an industry that is founded on the willing suspension of disbelief.

What does the punter want? I thought all punters were 'closet graphics whores'! ;)

It may be, as DiGuru mentions, that RT isn't the best solution because the right tools don't exist yet for the artists?
 
pat777 said:
Ok, we need "brute force" hardware to do GI. GI is a great lighting technique. However, we need to find a different GI algorithm than ray tracing (if you consider ray tracing to be GI). I hope a full-GI algorithm that's fast enough to be used in games pops out of nowhere and handles every light situation accurately while getting rid of artifacts. I doubt infinite texture layers (which of course is impossible) will be able to achieve what GI can do. However, GI still needs to be combined with high geometry detail.

I'm wondering if the next gen of unified programmable shaders from ATI, Nvidia or Sony's CELL tech can run some of these GI algorithms at a decent speed? Maybe the poll question, 'are realtime raytracers the next big thing', should read 'is realtime GI hardware the next big thing?'

Metropolis Light Transport

Reference: Eric Veach and Leonidas J. Guibas, SIGGRAPH 97 Proceedings (August 1997), Addison-Wesley, pp. 65-76.

Abstract
We present a new Monte Carlo method for solving the light transport problem, inspired by the Metropolis sampling method in computational physics. To render an image, we generate a sequence of light transport paths by randomly mutating a single current path (e.g. adding a new vertex to the path). Each mutation is accepted or rejected with a carefully chosen probability, to ensure that paths are sampled according to the contribution they make to the ideal image. We then estimate this image by sampling many paths, and recording their locations on the image plane.

Our algorithm is unbiased, handles general geometric and scattering models, uses little storage, and can be orders of magnitude more efficient than previous unbiased approaches. It performs especially well on problems that are usually considered difficult, e.g. those involving bright indirect light, small geometric holes, or glossy surfaces. Furthermore, it is competitive with previous unbiased algorithms even for relatively simple scenes.

The key advantage of the Metropolis approach is that the path space is explored locally, by favoring mutations that make small changes to the current path. This has several consequences. First, the average cost per sample is small (typically only one or two rays). Second, once an important path is found, the nearby paths are explored as well, thus amortizing the expense of finding such paths over many samples. Third, the mutation set is easily extended. By constructing mutations that preserve certain properties of the path (e.g. which light source is used) while changing others, we can exploit various kinds of coherence in the scene. It is often possible to handle difficult lighting problems efficiently by designing a specialized mutation in this way.

How about the 'Metropolis Light Transport' GI algorithm for efficiency in next gen realtime GI GPUs?

Source

[Image: fig5.jpg]
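The mutation/acceptance scheme the abstract describes is the Metropolis algorithm applied to path space. Stripped of all rendering detail, here's a toy 1D sketch; the `contribution` and `mutate` functions below are made-up stand-ins for a path's contribution to the image and a small path mutation, not anything from the paper:

```python
import random

random.seed(42)  # deterministic for the example

def metropolis_sample(contribution, mutate, x0, n_samples):
    """Toy Metropolis sampler: visits states in proportion to
    `contribution`, the way MLT distributes light paths according
    to their contribution to the ideal image."""
    x, fx = x0, contribution(x0)
    samples = []
    for _ in range(n_samples):
        y = mutate(x)                       # propose a small change to the current "path"
        fy = contribution(y)
        accept = min(1.0, fy / fx) if fx > 0 else 1.0
        if random.random() < accept:        # accept/reject with carefully chosen probability
            x, fx = y, fy
        samples.append(x)                   # a rejection re-records the current state
    return samples

# Stand-in "path space": points in [0,1], contribution peaked at 0.5
contribution = lambda x: max(0.0, 1.0 - 4.0 * abs(x - 0.5))
mutate = lambda x: min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))

samples = metropolis_sample(contribution, mutate, 0.5, 20000)
mean = sum(samples) / len(samples)          # samples cluster around the peak at 0.5
```

The "explored locally" advantage from the abstract is visible here: each proposal is a small perturbation of the current state, so once a high-contribution region is found, many cheap nearby samples are taken from it.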
 
Metropolis light transport might be efficient in an asymptotic sense, but in practice it's hella slow. Very physically correct and elegant, but totally impractical for anything where rendering time matters.
 
GameCat said:
Metropolis light transport might be efficient in an asymptotic sense, but in practice it's hella slow. Very physically correct and elegant, but totally impractical for anything where rendering time matters.

Caustics in a pool of water, viewed indirectly through the ripples on the surface. The image below was rendered with path tracing, 210 samples per pixel:

[Image: fig7a.jpg]


The same caustics in a pool of water, viewed indirectly through the ripples on the surface. The image below was rendered with Metropolis, 100 mutations per pixel, in the same computation time as above:

[Image: fig7b.jpg]


Source, page 11

Looking at the above images, Metropolis light transport produces much higher image quality than path tracing for the same computation time, and with fewer samples.
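The reason sample counts matter so much here is the usual Monte Carlo story: an unbiased estimator's error shrinks only as 1/sqrt(N), so equal-time comparisons come down to how much each sample 'buys'. A tiny illustration, integrating x^2 over [0,1] instead of a light transport integral:

```python
import random

def mc_estimate(n, seed=0):
    """Plain Monte Carlo estimate of the integral of x^2 over [0,1],
    whose true value is 1/3, using n uniform random samples."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

# On average, 100x the samples buys only ~10x less error (1/sqrt(N) scaling),
# which is why smarter sampling (like MLT's mutations) beats raw sample count.
err_small = abs(mc_estimate(100) - 1/3)
err_large = abs(mc_estimate(10_000) - 1/3)
```

MLT doesn't change this scaling; it reduces the constant, by spending its samples where they contribute most to the image.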

uses little storage, and can be orders of magnitude more efficient than previous unbiased approaches.

Maybe they're fibbing! :) What would be great is matching these algorithms to dedicated graphics processors that could do it in realtime, hence the topic! 8)
 
Triple Product Wavelet Integrals for All-Frequency Relighting

To appear at SIGGRAPH 2004

Abstract

This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of pre-computed light transport. We use factored forms, separately pre-computing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating non-linear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.

Source

Here's another algorithm that DeanoC mentions in the other GI thread. They imply they could render these frames every few seconds! :oops: (no mention of computing power though)... just need to go that one step further with new graphics processor designs to do it in realtime! :D
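For reference, the Haar wavelets the paper builds its triple product integrals on are about the simplest basis there is. A 1D decomposition, its inverse, and the "keep only the largest coefficients" step behind non-linear wavelet approximation; this is just an illustrative sketch, not code from the paper:

```python
def haar_1d(signal):
    """Full 1D Haar decomposition (signal length must be a power of two).
    Repeatedly replaces pairs with their average and half-difference."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diff = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avg + diff
        n = half
    return out

def inv_haar_1d(coeffs):
    """Inverse of haar_1d: reconstructs the original signal."""
    out = list(coeffs)
    n = 1
    while n < len(out):
        avg, diff = out[:n], out[n:2 * n]
        out[:2 * n] = [v for i in range(n)
                       for v in (avg[i] + diff[i], avg[i] - diff[i])]
        n *= 2
    return out

def keep_top(coeffs, k):
    """Non-linear approximation: zero all but the k largest-magnitude
    coefficients (how lighting/BRDFs are compressed in such schemes)."""
    idx = set(sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k])
    return [c if i in idx else 0.0 for i, c in enumerate(coeffs)]

coeffs = haar_1d([1.0, 2.0, 3.0, 4.0])    # → [2.5, -1.0, -0.5, -0.5]
inv_haar_1d(coeffs)                       # → [1.0, 2.0, 3.0, 4.0]
```

The relighting-speed claim rests on sparsity: after `keep_top`, most coefficients are zero, so the triple product sums at each vertex touch only a handful of terms.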
 