New graphics algorithm promises film-quality games

Graphics developers at Williams College and Nvidia have created a new algorithm that they say will drastically improve graphics quality.

Because video games must render each frame in a fraction of a second, while film frames can be computed offline over minutes or hours, video game developers have found it hard to match film image quality.

But Morgan McGuire, assistant professor of computer science at Williams College, and Nvidia's Dr David Luebke have developed a new method for computing lighting and light sources that they say will allow video game graphics to approach film quality.

Producing light effects involves essentially pushing light into the 3D world and pulling it back to the pixels of the final image. The new method actually reverses the process, so that light is pulled onto the world and pushed into the image - a much faster technique.
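
In rendering terms, that "push/pull" contrast is essentially the difference between a gather (each pixel loops over every light or photon that might reach it) and a scatter (each light or photon deposits its energy only onto the few pixels it can actually affect). Here is a minimal 1D toy in C++ to make the distinction concrete; the falloff kernel and the fixed footprint radius are illustrative assumptions, not taken from the paper:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Light { float pos; float power; };

// Gather: every pixel asks "which lights reach me?" -- O(pixels * lights).
void gather(std::vector<float>& image, const std::vector<Light>& lights) {
    for (size_t p = 0; p < image.size(); ++p)
        for (const Light& l : lights) {
            float d = float(p) - l.pos;
            image[p] += l.power / (1.0f + d * d);   // illustrative falloff
        }
}

// Scatter: every light pushes energy onto the pixels inside its footprint.
// Cheap when each light's footprint is small relative to the whole image.
void scatter(std::vector<float>& image, const std::vector<Light>& lights,
             int radius) {
    for (const Light& l : lights) {
        int lo = std::max(0, int(l.pos) - radius);
        int hi = std::min(int(image.size()) - 1, int(l.pos) + radius);
        for (int p = lo; p <= hi; ++p) {
            float d = float(p) - l.pos;
            image[p] += l.power / (1.0f + d * d);
        }
    }
}

int main() {
    std::vector<Light> lights = {{10.0f, 5.0f}, {50.0f, 3.0f}};
    std::vector<float> a(64, 0.0f), b(64, 0.0f);
    gather(a, lights);
    scatter(b, lights, 16);
    std::printf("pixel 12: gather=%.3f scatter=%.3f\n", a[12], b[12]);
}
```

The scatter version does the same arithmetic but skips every pixel/light pair whose contribution would be negligible, which is where the speedup comes from.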

As video games continue to become more interactive, graphics processors are expected to become 500 times faster than they are today.

McGuire and Luebke's algorithm is well suited to these increased processing speeds, they say, and is expected to appear in video games within the next two years.

News Source: http://www.tgdaily.com/games-and-en...raphics-algorithm-promises-film-quality-games
 
"Producing light effects involves essentially pushing light into the 3D world and pulling it back to the pixels of the final image. The new method actually reverses the process, so that light is pulled onto the world and pushed into the image - a much faster technique."

Is there a simple diagram that shows the difference?
 
They replace the first-bounce photon emission from the light source with a rasterization step from the light's point of view (cheap), do the rest of the bounces on the CPU, and at the end scatter all the photons inside the view frustum across "nearby" visible pixels from the eye's point of view, also with rasterization (not so cheap, but the normal gathering step this replaces ain't cheap either). They also upsample the radiance map in screen space, taking the geometry into account, to reduce the amount of work needed for that final step.
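
That last point, the geometry-aware upsampling, is essentially a joint-bilateral-style filter: compute the radiance map at low resolution, then upsample it guided by the full-resolution depth buffer so indirect light doesn't bleed across silhouette edges. A rough CPU sketch of the idea; the Gaussian depth weight and the 3x3 neighbourhood are my assumptions, the paper's actual weights may well differ:

```cpp
#include <cmath>
#include <vector>

// Upsample a low-res radiance map (Wl x Hl) to full resolution (W x H),
// guided by the full-res depth buffer so radiance doesn't bleed across
// geometry edges. Joint-bilateral-style sketch; weights are assumptions.
std::vector<float> upsampleRadiance(const std::vector<float>& lowRad,
                                    const std::vector<float>& lowDepth,
                                    int Wl, int Hl,
                                    const std::vector<float>& hiDepth,
                                    int W, int H, float depthSigma) {
    std::vector<float> out(W * H, 0.0f);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            int lx = x * Wl / W, ly = y * Hl / H;   // nearest low-res sample
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = lx + dx, sy = ly + dy;
                    if (sx < 0 || sx >= Wl || sy < 0 || sy >= Hl) continue;
                    // Samples on the same surface get a high weight;
                    // samples across a depth discontinuity get ~zero.
                    float dz = hiDepth[y * W + x] - lowDepth[sy * Wl + sx];
                    float w = std::exp(-(dz * dz) /
                                       (2.0f * depthSigma * depthSigma));
                    sum  += w * lowRad[sy * Wl + sx];
                    wsum += w;
                }
            }
            out[y * W + x] = (wsum > 0.0f) ? sum / wsum : 0.0f;
        }
    }
    return out;
}
```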

PS: to determine the nearby visible pixels they render an icosahedron, and presumably use the stencil buffer for a point-in-volume test. I don't quite understand why ... is that really going to be cheaper than rendering a tight bounding box in screen space and doing a 3D "point in ellipsoid" test for each pixel? (Which is going to give you pretty much the same quality of scattering.)
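
For what it's worth, here is roughly what that proposed alternative would look like per photon: rasterize a tight screen-space bounding rectangle, look up each covered pixel's surface position from a G-buffer, and keep only the pixels that pass an analytic point-in-ellipsoid test. Everything here (the axis-aligned ellipsoid, the per-pixel world-position buffer, the flat power accumulation) is an illustrative assumption, not from the paper:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// True if world-space point p lies inside an axis-aligned ellipsoid
// centred at c with radii r: ((p-c)/r) dot ((p-c)/r) <= 1.
bool insideEllipsoid(Vec3 p, Vec3 c, Vec3 r) {
    float dx = (p.x - c.x) / r.x;
    float dy = (p.y - c.y) / r.y;
    float dz = (p.z - c.z) / r.z;
    return dx * dx + dy * dy + dz * dz <= 1.0f;
}

// For one photon: walk its screen-space bounding box, look up each pixel's
// surface position from a world-position G-buffer, and accumulate power
// only where the surface falls inside the photon's ellipsoid of influence.
void splatOnePhoton(std::vector<float>& accum, int W,
                    const std::vector<Vec3>& worldPos,  // per-pixel G-buffer
                    int x0, int y0, int x1, int y1,     // screen-space bounds
                    Vec3 center, Vec3 radii, float power) {
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (insideEllipsoid(worldPos[y * W + x], center, radii))
                accum[y * W + x] += power;   // kernel falloff omitted
}
```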
 
I love this kind of story.

They serve to remind us of the competence and trustworthiness of the media. Remember, they know just as much about the economy, social affairs, wars and foreign relations as they do about graphics technology - use this to recalibrate your input filter on mainstream media.
 
Oh c'mon, to be fair they can't just fill the entire page with ads; they do need at least a paragraph or two of drivel to pose as a story. :D
 