Until now, developers have used another technique, called rasterisation.
It first appeared in the mid-1990s and is extremely quick. It represents 3D shapes as collections of triangles and other polygons; for each pixel on the screen, the triangle nearest the viewer determines its colour.
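That nearest-wins rule is conventionally implemented with a depth buffer. The following is a minimal sketch, not drawn from any real engine: the 4x4 "screen" and both triangles are invented for illustration.

```cpp
// A minimal sketch of the "nearest wins" rule: a depth buffer records, for
// each pixel, the distance of the closest triangle fragment seen so far.
#include <array>
#include <cstdio>
#include <limits>

constexpr int W = 4, H = 4;              // a tiny 4x4 screen
std::array<float, W * H> depth;          // closest depth seen per pixel
std::array<char,  W * H> colour;         // which triangle won the pixel

// Called once for every pixel a triangle covers (a "fragment").
void shade_fragment(int x, int y, float z, char triangle_id) {
    int i = y * W + x;
    if (z < depth[i]) {          // nearer than anything drawn so far?
        depth[i]  = z;           // record the new closest depth
        colour[i] = triangle_id; // this triangle now determines the pixel
    }
}

int main() {
    depth.fill(std::numeric_limits<float>::infinity());
    colour.fill('.');

    // Triangle A covers the whole screen at depth 5...
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) shade_fragment(x, y, 5.0f, 'A');
    // ...and triangle B covers the top-left corner but is nearer (depth 2),
    // so it overwrites A there: each pixel shows whichever is closest.
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x) shade_fragment(x, y, 2.0f, 'B');

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::putchar(colour[y * W + x]);
        std::putchar('\n');
    }
}
```

Run it and the printed grid shows the nearer triangle B overwriting A in the corner it covers. Note that nothing in this process says anything about light.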
Programmers then have to employ tricks to simulate what lighting looks like. These include lightmaps, which calculate the brightness of surfaces ahead of time, says Mr Ronald.
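A rough sketch of the lightmap idea: brightness is computed once, offline, for every patch of a surface, then merely looked up while the game runs. The fake point light and falloff formula below are invented; real bakes solve a far harder lighting problem.

```cpp
// A sketch of the lightmap trick: surface brightness is baked once, offline,
// then cheaply looked up at run time.
#include <cstdio>
#include <vector>

struct Lightmap {
    int w, h;
    std::vector<float> texels;              // precomputed brightness, 0..1
    float sample(float u, float v) const {  // cheap run-time lookup
        int x = static_cast<int>(u * (w - 1));
        int y = static_cast<int>(v * (h - 1));
        return texels[y * w + x];
    }
};

// The expensive part, done ahead of time: compute brightness per patch.
Lightmap bake_lightmap(int w, int h) {
    Lightmap lm{w, h, std::vector<float>(w * h)};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float du = x / float(w - 1) - 0.5f;
            float dv = y / float(h - 1) - 0.5f;
            // Brightness falls off with distance from a static light at the
            // centre of the surface (an invented stand-in for a real bake).
            lm.texels[y * w + x] = 1.0f / (1.0f + 25.0f * (du * du + dv * dv));
        }
    return lm;
}

int main() {
    Lightmap lm = bake_lightmap(64, 64);
    // The renderer only ever reads stored values: fast, but static. If the
    // light or the surface moves, the baked answer is simply wrong.
    std::printf("centre %.2f, corner %.2f\n",
                lm.sample(0.5f, 0.5f), lm.sample(0.0f, 0.0f));
}
```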
But these hacks have limitations. They are static, so they fall apart when you move around: zoom in on a mirror, for example, and you might find that your reflection has disappeared.
...
But with these workarounds, "pretty quickly you lose that realism in a scene," observes Kasia Swica, Minecraft's senior program manager, based in Seattle.
One "fiendish problem" for ray tracing has involved how shaders can call on other shaders if two rays interact, says Andrew Goossen, a technical fellow at Microsoft who works on the Xbox Series X.
GPUs work on problems such as tracing rays in parallel, and making parallel processes talk to each other is complex.
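On a CPU the difficulty is invisible, because one shader calling another is just an ordinary recursive function call, as in the sketch below (the scene, types and values are all invented). The fiendish part is mapping that open-ended chain of calls onto thousands of GPU threads running in lockstep.

```cpp
// A CPU-side sketch of shaders calling shaders: shading a reflective surface
// means tracing a second ray, whose hit runs another shader, and so on.
#include <cstdio>

struct Hit { bool reflective; float base_brightness; };

// Stand-in for "cast a ray into the scene": a toy scene indexed by depth.
Hit trace(int depth) { return {depth < 2, 0.5f}; }

float shade(int depth) {
    if (depth > 4) return 0.0f;                  // cap the bounces
    Hit h = trace(depth);
    float c = h.base_brightness;
    if (h.reflective)                            // one shader invoking another
        c = 0.5f * c + 0.5f * shade(depth + 1);  // via a freshly spawned ray
    return c;
}

int main() { std::printf("pixel brightness: %.3f\n", shade(0)); }
```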
Working out technical problems like these will be the main task "in the next five to seven years of computer graphics, at least," says Mr Ronald.
In the meantime games companies will use other techniques to make games look slicker.
Earlier this month Epic Games, the maker of Fortnite, released its latest game engine, Unreal Engine 5.
It uses a combination of techniques, including a library of objects that can be imported into games as meshes of hundreds of millions of polygons, and a hierarchy of detail that treats large and small objects differently to limit the demands on processing power.
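The hierarchy-of-detail idea can be sketched generically: when an object covers only a few pixels on screen, a coarser version of its mesh will do. Unreal Engine 5's actual scheme is far more elaborate; the thresholds and names below are invented.

```cpp
// A generic level-of-detail sketch: pick a coarser mesh when an object
// covers few pixels, so distant or tiny objects cost far fewer polygons.
#include <cstdio>

struct MeshLevel { const char* name; long triangles; };

// Precomputed simplifications of one source asset, finest first.
constexpr MeshLevel levels[] = {
    {"full",   100'000'000},   // the imported, film-quality mesh
    {"medium",   1'000'000},
    {"coarse",      10'000},
};

// Choose a level from the object's projected size on screen: roughly one
// triangle per pixel is plenty, so smaller objects get coarser meshes.
const MeshLevel& select_lod(long pixels_covered) {
    for (const auto& lvl : levels)
        if (lvl.triangles <= pixels_covered) return lvl;
    return levels[2];          // tiny object: use the coarsest mesh
}

int main() {
    for (long px : {2'000'000L, 500'000L, 40L})
        std::printf("%8ld px -> %s (%ld tris)\n",
                    px, select_lod(px).name, select_lod(px).triangles);
}
```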
For most game makers such "hacks and tricks" will be good enough, says Mr Walton.