Realtime Ray-tracing Processors.

Real-time raytracing processors: the next big thing?


Total voters: 233

j^aws

Veteran
The Ray Tracing Graphics Processor
The AR350 is the company's second-generation ray tracing processor. The AR350 features a memory manager to access local DRAM, an on-chip data cache to reduce the bandwidth required of the host bus and two 3D rendering cores.

Each rendering core performs both the geometry and the shading operations of the ray tracing algorithm. The geometry co-processor of each core is capable of performing a ray / triangle intersection calculation every processor cycle. The shading co-processor is completely end-user programmable through the RenderMan Shading Language.

A simple interface between AR350s produces a scalable architecture with good processor linearity. The AR350 uses a 0.22-micron line size and is delivered in an SFBGA package for integration into the heart of ART VPS rendering devices. The AR350 delivers four times the rendering speed of the AR250.

The first ray tracing chip - the AR250 - represented a new class of graphics processor - the photorealistic rendering chip. Unlike other graphics chips that use simple 'painter' algorithms to generate images, ART's processors use the physically-based ray tracing algorithm to generate images of stunning quality. The AR250 was the first processor to use ART's dedicated ray tracing architecture, giving unrivaled rendering performance.

Source

Is this the next big thing or will it always be a niche market like real-time volume processing :?:
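
For context, the ray/triangle intersection the article says the geometry co-processor performs once per cycle looks something like the standard Moller-Trumbore test below. This is just a software sketch in C for illustration; the source doesn't say which algorithm the chip actually implements internally.

Code:
/* Minimal Moller-Trumbore ray/triangle intersection in C.
 * A software sketch of the kind of test the AR350's geometry
 * co-processor is said to perform every cycle; the chip's actual
 * internal method is not described in the source. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static vec3  sub(vec3 a, vec3 b)   { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3  cross(vec3 a, vec3 b) { return (vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(vec3 a, vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 and writes the hit distance to *t if the ray (orig, dir)
 * hits triangle (v0, v1, v2); returns 0 otherwise. */
static int ray_triangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2, float *t)
{
    const float EPS = 1e-7f;
    vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return 0;            /* ray parallel to triangle plane */
    float inv = 1.0f / det;
    vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;
    vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;
    *t = dot(e2, q) * inv;
    return *t > EPS;                           /* hit must be in front of the origin */
}

int main(void)
{
    vec3 v0 = {0, 0, 5}, v1 = {1, 0, 5}, v2 = {0, 1, 5};
    float t;
    if (ray_triangle((vec3){0.2f, 0.2f, 0}, (vec3){0, 0, 1}, v0, v1, v2, &t))
        printf("hit at t = %f\n", t);          /* prints t = 5.0 */
    return 0;
}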
 
Most raytracers have rasterizers internally to optimize some tasks ... the tasks most relevant to us. This is a rather old discussion though.
 
IIRC the AR350 actually uses the exact same core technology as the AR250, but they have just put multiples on the same chip (I forget if it's 2 or 4 cores on a die). This technique is not realtime, just accelerated - it can also get faster, as they do build servers with multiple renderers in the system. Unfortunately the company is fairly small and doesn't have the capacity, or the capability, for new chip designs, layouts and tapeouts, so they are limited to these 220nm cores. It would be kind of interesting to see what they could do on 130nm or 90nm, though.
 
DaveBaumann said:
IIRC the AR350 actually uses the exact same core technology as the AR250, but they have just put multiples on the same chip (I forget if it's 2 or 4 cores on a die). This technique is not realtime, just accelerated - it can also get faster, as they do build servers with multiple renderers in the system. Unfortunately the company is fairly small and doesn't have the capacity, or the capability, for new chip designs, layouts and tapeouts, so they are limited to these 220nm cores. It would be kind of interesting to see what they could do on 130nm or 90nm, though.
I hope they can extend their ray-tracing to path-tracing. Path-tracing is still too slow, though.
 
pat777 said:
DaveBaumann said:
IIRC the AR350 actually uses the exact same core technology as the AR250, but they have just put multiples on the same chip (I forget if it's 2 or 4 cores on a die). This technique is not realtime, just accelerated - it can also get faster, as they do build servers with multiple renderers in the system. Unfortunately the company is fairly small and doesn't have the capacity, or the capability, for new chip designs, layouts and tapeouts, so they are limited to these 220nm cores. It would be kind of interesting to see what they could do on 130nm or 90nm, though.
I hope they can extend their ray-tracing to path-tracing. Path-tracing is still too slow, though.

AR250,

Advanced Rendering Technology
AR250 Statistics
• 0.35um drawn, LSI Logic G10 silicon process
• 650K gates, 106mm2 die
• Custom RISC processor core
• 32 single-stage, 32-bit IEEE-compatible floating-point units
• Multi-dimensional noise, square root and trig. functions
• 50 MHz operation

AR250 PPT presentation Source

AR350,

Advanced Rendering Technology
AR350 Statistics
• 0.22um drawn, Texas Instruments silicon process
• 1.8M gates, 110mm2 die
• Custom RISC processor core
• 64 single-stage, 32-bit IEEE-compatible floating-point units
• Multi-dimensional noise, square root and trig. functions

AR350 ppt presentation Source

Some features of the AR350,

Advanced Rendering Technology
Rendering Core
• Parallel Ray Tracer
• Regular Algorithms
• Physically Accurate
• Floating Point throughout
• Large geometry and image handling
• 9+M triangles, 16K line images
• Robust!
Rendering Core Features
• High quality lighting & anti-aliasing
• True Camera and Object Motion Blur
• True Depth of Field
• Diffuse Reflection & Refraction
• Area Lights (true soft shadows)
• Programmable Shading (RenderMan S. L.)
• Displacement Shading
• Volumes
• Camera shaders & Lens effects
RenderMan Compliant Interface
• Support RenderMan Shading Language
– C like programming language
– Total flexibility
– Surfaces, Volumes, Lights & Cameras
• Directly support all standard geometry types
– Polygons, Patches, NURBS, Primitives



The AR250 was at 350nm @ 50 MHz. The AR350 at 220nm likely uses two cores at 100 MHz to give a four-fold performance increase. The die sizes are approx. 110 mm2. It's not exactly cutting-edge silicon! This company could make some serious noise if they went for cutting-edge tech a la STI Cell.

Using simplistic scaling, if they used say 65nm in 2006, they could pack 64 of these AR250 cores into a die space of approx. 243 mm2. At only 100 MHz, this thing would be 128 times faster than the AR250! :oops: At 1 GHz, it would be 1280 times faster! :oops: If it's only accelerating raytracing at present, it surely must be capable of realtime at those speeds in the near future, including path tracing and global illumination! :D And remember, this would only be one chip and you can build arrays of these. :cool:
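
To make that back-of-the-envelope arithmetic explicit (a rough sketch only; the 64-core count, the clocks and the assumption of perfectly linear scaling are mine, not anything ART has announced):

Code:
/* Back-of-the-envelope scaling estimate for a hypothetical 65nm part.
 * Assumes perfectly linear scaling with core count and clock speed,
 * which real chips never quite achieve (memory bandwidth, etc.). */
#include <stdio.h>

int main(void)
{
    const double ar250_clock_mhz = 50.0;   /* AR250: 1 core @ 50 MHz */
    const double cores = 64.0;             /* assumed core count at 65nm */

    double speedup_100mhz = cores * (100.0  / ar250_clock_mhz);  /* 128x  */
    double speedup_1ghz   = cores * (1000.0 / ar250_clock_mhz);  /* 1280x */

    printf("64 cores @ 100 MHz: %.0fx the AR250\n", speedup_100mhz);
    printf("64 cores @ 1 GHz:   %.0fx the AR250\n", speedup_1ghz);
    return 0;
}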

Pete said:
A related thread to get you started.
Thanks for the link. ;) The general gist I got from that thread was that realtime raytracing could be achieved by the SaarCor chip and possibly next-gen programmable GPUs from Nvidia and ATI! 8) There's hope yet...though there were concerns about scene complexity and animation of objects. :(
 
The AR250 was at 350nm @ 50 MHz. The AR350 at 220nm likely uses two cores at 100 MHz to give a four-fold performance increase. The die sizes are approx. 110 mm2. It's not exactly cutting-edge silicon! This company could make some serious noise if they went for cutting-edge tech a la STI Cell.

Using simplistic scaling, if they used say 65nm in 2006, they could pack 64 of these AR250 cores into a die space of approx. 243 mm2. At only 100 MHz, this thing would be 128 times faster than the AR250! At 1 GHz, it would be 1280 times faster! If it's only accelerating raytracing at present, it surely must be capable of realtime at those speeds in the near future, including path tracing and global illumination! And remember, this would only be one chip and you can build arrays of these.

pretty much my thoughts exactly, Jaws, except you put it into text probably better than I could have.

the potential is amazing 8) :oops:
 
//hello community, first post here;)

...raytracing hardware would be nice, but I think that most of the realtime GI developers don't even notice that there is not really a need for "real realtime GI calculation" for better GI quality in 3D games.. it's like "yeah, let's use a particle-based physics system to create wakes for our next powerboat racing game... maybe we can wait for the right hardware in 2010?"

And this is the wrong way! I think there is a whole different technique needed. A new technique to "visualize"(!) precomputed GI maps or GI lightmaps instead of using textures or vertex colors on current rasterizers.. I don't know much about game & shader programming, but for dynamic objects there should be a different method than computing the GI occlusion of the whole scene. The Unreal 3 demo shows that there are many techniques to combine the advantages of baked lightmaps for static objects and blended cubemap shadows for dynamic area lights.. even using vertex lights with negative values for smooth shadows is a rarely used technique...

..so I hope that there is some "space" between needed raytracing hardware and bad baked lightmaps like in this picture:

gta-vice-city-41.jpg


??

;)
 
Somebody refresh my memory, but wasn't the Raycer Graphics technology able to do real-time ray tracing? I remember visiting their offices once and getting a demo, but it's been so long I can't remember their method. Too bad Apple bought them. I don't think anything came of the technology.

Tommy McClain
 
Like I said, they got bought by Apple, and that was back in November 1999. I think their web site has been gone since then. I would love for somebody to explain their technology and what happened to it. I remember it was really exciting at the time.

Tommy McClain
 
mentalraygun said:
//hello community, first post here;)

...raytracing hardware would be nice, but I think that most of the realtime GI developers don't even notice that there is not really a need for "real realtime GI calculation" for better GI quality in 3D games.. it's like "yeah, let's use a particle-based physics system to create wakes for our next powerboat racing game... maybe we can wait for the right hardware in 2010?"

And this is the wrong way! I think there is a whole different technique needed. A new technique to "visualize"(!) precomputed GI maps or GI lightmaps instead of using textures or vertex colors on current rasterizers.. I don't know much about game & shader programming, but for dynamic objects there should be a different method than computing the GI occlusion of the whole scene. The Unreal 3 demo shows that there are many techniques to combine the advantages of baked lightmaps for static objects and blended cubemap shadows for dynamic area lights.. even using vertex lights with negative values for smooth shadows is a rarely used technique...

..so I hope that there is some "space" between needed raytracing hardware and bad baked lightmaps like in this picture:

gta-vice-city-41.jpg


??

;)

IMO, the offline ray-traced 'family' of rendered images is the truest representation of the physical world in 3D, and most people would agree that it would be great if we could do that 'realtime' and make it 'interactive' in a game environment. IMO, two of the biggest factors, well-done lighting and animation, can significantly enhance the realism of an interactive scene.

I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or whatnot, yet to this day I've yet to see an interactive game that matches its 'realism'... or maybe I still have my rose-tinted goggles on. :LOL:

You indirectly mention the significance of 'lighting' in your posts by referring to several 'lighting' hacks to get that realism, but why not use the real thing if the tech can be developed? The realtime graphics industry seems to be driven by the PC graphics market and, like the PC market, I think it's stuck on rails, and developers will follow suit. No one seems to be creating the demand for real-time ray tracing as, like you say, it always seems to be 5 years away, so the status quo remains. It will take a huge paradigm shift for the industry and developers to move to that when they can get away with 'cheaper' hacks. A chicken-and-egg situation? :(
 
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or whatnot, yet to this day I've yet to see an interactive game that matches its 'realism'... or maybe I still have my rose-tinted goggles on.

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

Renderman in a lot of ways is just a collection of hacks, and as such it's a really good example of how far the hacks can get you.
 
ERP said:
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or whatnot, yet to this day I've yet to see an interactive game that matches its 'realism'... or maybe I still have my rose-tinted goggles on.

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

Renderman in a lot of ways is just a collection of hacks, and as such it's a really good example of how far the hacks can get you.

You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.
 
AzBat said:
Like I said they got bought by Apple and that was back in November 1999. I think their web site has been gone since then. I would love for somebody to explain their technology and what happened to it. I remember it was really exciting at the time.

Tommy McClain

I don't have any of those details but I have the next best thing. Here's a link to the above AR250 ray tracing chip Patent which describes how it works...

Link to AR250 Ray Tracing Chip Patent. 8)
 
Anyway, is there another direction to go for graphics hardware? We have seen what the brute force approach can do. And while it can produce astonishing graphics when done right, it is at the end of its road. We need a more clever approach. Let the really clever and creative hardware and driver designers worry about the clever hacks: that way, all the other people can just produce very nice things.
 
DiGuru said:
Anyway, is there another direction to go for graphics hardware? We have seen what the brute force approach can do. And while it can produce astonishing graphics when done right, it is at the end of its road. We need a more clever approach. Let the really clever and creative hardware and driver designers worry about the clever hacks: that way, all the other people can just produce very nice things.

Heh who says brute force is at the end :p (few more years prolly at least and plenty of stuff possible to add)

Maybe PowerVR will wake up and smell the coffee from us geeks wanting TBDR.

• 9+M triangles, 16K line images
BTW am I the only one bothered by this low poly count?
 
DiGuru said:
ERP said:
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or whatnot, yet to this day I've yet to see an interactive game that matches its 'realism'... or maybe I still have my rose-tinted goggles on.

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

Renderman in a lot of ways is just a collection of hacks, and as such it's a really good example of how far the hacks can get you.

You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.

Well, you could argue that anything simulated on a computer is a series of hacks! ;) I suppose I was referring to the term 'hack' in a relative sense, indirectly using RenderMan as a reference point, where mainstream 3D aims to match its rendering quality through a series of 'its own' hacks. I assume PiXAR is selective of what it ray traces due to cost rather than quality? If it's cost, then there's a bigger incentive for faster ray tracing hardware and algorithms, no?

DiGuru, I'm not sure what you're asking me to choose here? 3D can be 'artistically' pretty or 'realistically' pretty...I'm referring to realistically pretty here as it's less subjective. ;)
 
Cryect said:
DiGuru said:
Anyway, is there another direction to go for graphics hardware? We have seen what the brute force approach can do. And while it can produce astonishing graphics when done right, it is at the end of its road. We need a more clever approach. Let the really clever and creative hardware and driver designers worry about the clever hacks: that way, all the other people can just produce very nice things.

Heh who says brute force is at the end :p (few more years prolly at least and plenty of stuff possible to add)

Maybe PowerVR will wake up and smell the coffee from us geeks wanting TBDR.

• 9+M triangles, 16K line images
BTW am I the only one bothered by this low poly count?

I think the big question is: to what extent is the current graphics crowd so used to the way it works right now that they think the small improvements that can still be made outweigh having to learn to create and use a different way to produce graphics?

Even worse, the decision makers (management) do not like large changes. And it is doubtful they know enough about the technology to be able to see the impact in forms other than sales figures.

So, unless we can convince enough technical people that they are willing to stick out their necks to support ray-tracing, we will be using more of the same brute-force chips, just with more pipelines, for a long while, as even the changes proposed at the moment (unified shaders and flow control) are technically huge problems.

It happens.
 
Jaws said:
DiGuru said:
ERP said:
I remember the first time I saw that PiXAR demo with the jumping desk lamps in the 80s (can't remember their names). It had fantastic animation and lighting, but it didn't have millions of polygons, 50 texture layers or whatnot, yet to this day I've yet to see an interactive game that matches its 'realism'... or maybe I still have my rose-tinted goggles on.

And I guess it's my turn to point out that very few (Pixar) Renderman renders use significant amounts of ray tracing.

Renderman in a lot of ways is just a collection of hacks, and as such it's a really good example of how far the hacks can get you.

You need very clever and creative people to create brilliant things with a series of good hacks. With real-time ray-tracing hardware and good tools, most artists can create really nice things. Choose.

Well, you could argue that anything simulated on a computer is a series of hacks! ;) I suppose I was referring to the term 'hack' in a relative sense, indirectly using RenderMan as a reference point, where mainstream 3D aims to match its rendering quality through a series of 'its own' hacks. I assume PiXAR is selective of what it ray traces due to cost rather than quality? If it's cost, then there's a bigger incentive for faster ray tracing hardware and algorithms, no?

DiGuru, I'm not sure what you're asking me to choose here? 3D can be 'artistically' pretty or 'realistically' pretty...I'm referring to realistically pretty here as it's less subjective. ;)

Yeah, but 'realistic' is split between the realism of Hollywood (which looks much better) and "real" realism (more like Half-Life 2, very gritty).

To produce brilliant art, you need brilliant artists. But you can create very nice art by giving regular artists great tools. So, we can have a few brilliant pieces, created by a team of outstanding people, or we can have those same pieces AND a lot of very nice ones.

Those teams produce only one or two masterpieces, and it doesn't even happen every year. Especially with computers, we can create hardware and software that does the hacks and other hard stuff, freeing the artists from all those constraints. For computer graphics as I see it, that would require clever GPUs that use ray-tracing.

On the other hand, the method that uses the brute force approach is known to most people. It is not very artist-friendly, has lots of dead-ends and requires a major in graphics programming, but it is how it has to be done, right?
 