Old offline CG usually wasn't raytraced; that technique has only become viable in the last five years or so. Pixar, for example, only made the transition gradually, from Cars through to Monsters University; ILM dropped PRMan as their primary renderer with Pacific Rim, and Weta is still on PRMan as far as I know (several of my pals work as lighters there).
The first generation of pre-rendered intro movies began with Microprose F1, as far as I can recall; Strike Commander sticks out as one with more than a few frames of CGI, and maybe the first Syndicate. Today's real-time engines put all of that stuff to shame; even the hour-long, multi-million-dollar CGI cinematics of Wing Commander 3 and 4 look unforgivably rough nowadays. Offline CGI has come a long, long way since the '90s, too.
There are two reasons: first, the technology has been advancing at an incredible rate, and second, even movie-level VFX has transitioned to desktop PC hardware nowadays. The workflows and software used in video game and cinematic production pipelines are converging incredibly fast. VFX uses displacement mapping and raytracing to reach maximum quality, while games use normal maps and various approximations to speed things up; but the asset detail levels are very close, and the general techniques (HDR rendering, physically correct shading and so on) are very similar. Offline rendering can also afford to take enough samples per pixel to completely eliminate any kind of aliasing, whereas games have to balance sample counts against everything else in the frame budget.
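That "enough samples" point is easy to see in miniature. Here's a hedged sketch (my own toy example, not any real renderer's code): a 1D "scene" with detail finer than the pixel grid, rendered once at 1 sample per pixel and once with many jittered samples per pixel. At 1 spp each pixel hits pure black or pure white at random (aliasing); at high spp the pixels converge to the correct grey average, which is exactly the trade offline renderers buy with raw compute time.

```python
import random

def scene(x):
    # A high-frequency 1D "scene": a fine checker pattern,
    # far finer than our 8-pixel output below can resolve.
    return 1.0 if int(x * 64) % 2 == 0 else 0.0

def render(width, spp, seed=0):
    # Render a row of pixels with `spp` jittered samples per pixel
    # (seeded so the result is reproducible).
    rng = random.Random(seed)
    row = []
    for px in range(width):
        total = 0.0
        for _ in range(spp):
            u = (px + rng.random()) / width  # jittered sample position in [0, 1)
            total += scene(u)
        row.append(total / spp)
    return row

one  = render(8, 1)    # 1 spp: every pixel is pure 0.0 or 1.0 (aliased)
many = render(8, 256)  # 256 spp: pixels converge toward the true 0.5 average
```

A game has to spend its per-frame budget on geometry, shading and everything else, so it approximates this with cheaper filtering; an offline renderer can simply crank `spp` until the noise is below visible thresholds.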
The convergence won't stop, and eventually we'll reach a level of computing capacity where offline rendering won't offer any advantage the general audience could notice. But that will take at least another 20 years IMHO, because there are roughly two orders of magnitude of difference in compute between the two, and the scale has to increase a lot before that difference becomes negligible.