"True, realtime global illumination. Unified lighting provides the greatest visual cues to make a scene look solid and believable. All lighting to date has been hacks upon hacks to make the lighting work, with baked lightmaps and SSAO and shadow maps. It's a collection of awkward kludges that should be replaced with an elegant, effective solution when possible. This would make games more visually appealing, to the point that it's the ultimate objective (once we have realtime photorealism, we're done!), and devs' lives easier."

If you look at your own phrasing, it's easy to put your finger on the weakness. "True, realtime global illumination." Raytracing doesn't offer true global illumination; it is another approach to it, the quality of which depends on implementation and computational effort. "True" vs. "hacks" is a meaningless distinction - it's all just pixels on a screen, and the only valid measure of success is the extent to which we accept the result. "Kludges that should be replaced" - why should they be replaced? Intellectual purity? That's the domain of philosophers, not game developers. And the other half contains the caveat: "...when possible." Well, is it? The answer, at this point in time, is simple: no, it isn't.
Raytraced illumination on top of rasterised models may well be the best compromise of performance and quality.
Will it be possible in the future? Well, the jury is out on that one.
My contribution to this discussion is that what will ultimately decide it is efficiency.
Lithographic advances won't solve that, partly because they apply to all approaches equally, partly because that well is running dry. If it can't be done efficiently today, well, don't hold your breath for hardware to solve it for you. It may never, in gaming, amount to more than something cool that helps tech nerds such as me justify their expensive PC gaming rigs. It will help nVidia in other markets, however.
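To put a rough number on that efficiency gap, here's a back-of-envelope sketch in Python. The resolution and frame rate are standard 4K/60 figures; the rays-per-pixel and bounce counts are my own illustrative assumptions, not benchmarks of any particular engine.

```python
# Back-of-envelope: ray budget for unified raytraced GI at 4K/60 fps.
# Sample and bounce counts below are assumptions for illustration.
width, height, fps = 3840, 2160, 60
pixels_per_second = width * height * fps      # ~498 million pixels/s

rays_per_pixel = 4   # assumed samples for a tolerably noisy image
bounces = 2          # assumed bounces; indirect light needs at least this

# Primary ray plus each bounce traced per sample:
rays_per_second = pixels_per_second * rays_per_pixel * (bounces + 1)
print(f"{rays_per_second / 1e9:.1f} billion rays/second")
```

Even with these charitable numbers, you land in the billions of rays per second before denoising, which is why hybrid approaches ration rays so carefully.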