To me downscaled is *essentially" the same models if you're using the high detail models as a source target. You're downscaling an existing high detail model, you are not building a low detail model from the ground up.
The end result is the same, no matter how you want to call it.
Where did you get 15 million from?
...
ILM modelers/animators use models with millions of polys for their CGI VFX all the time.
Not exactly true.
Digitized/scanned data is easily that dense; XYZRGB can go below millimeter scale, so you can get 20-40 million polygons in such a mesh. But it's unusable in that form: you really can't UV map it, skin it or do anything like that.
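Just to show where numbers in that range come from, here's a back-of-envelope sketch; the surface area and sample spacing are made-up figures, not from any actual scan. A regular sampling grid gives roughly two triangles per sample cell.

```cpp
#include <cstdio>

// Back-of-envelope sketch (hypothetical numbers, not from any real scan):
// estimate how many triangles a raw scan produces at a given sample spacing.
// A regular grid of samples yields roughly two triangles per sample cell.
int main() {
    const double surfaceAreaMM2  = 5.0e6;  // assumed prop: ~5 m^2 of surface
    const double sampleSpacingMM = 0.5;    // sub-millimeter scan resolution

    const double cellAreaMM2 = sampleSpacingMM * sampleSpacingMM;
    const double samples     = surfaceAreaMM2 / cellAreaMM2;
    const double triangles   = samples * 2.0;

    std::printf("~%.0f million triangles\n", triangles / 1.0e6);  // ~40 million
    return 0;
}
```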
The preferred workflow in movie VFX is to build a lower resolution model that's manageable; it can range from a thousand polygons to 100K, or whatever the production requires. Then software tools compare the low res model to the high res model and store the differences in a displacement texture map (or rather a set of maps, for a model this detailed). At render time, the low res model is subdivided into smaller polygons and the resulting millions of vertices are then displaced using the texture. All skinning, dynamics and other tasks are performed on the low res model, before subdivision.
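To make that render-time step concrete, here's a minimal sketch of the displacement pass, assuming the subdivision has already happened and every new vertex carries an interpolated position, normal and UV. The struct and function names are hypothetical, not any particular renderer's API.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct Vertex {
    Vec3 position;  // from the subdivided low-res cage
    Vec3 normal;    // interpolated smooth normal
    Vec2 uv;        // UVs authored on the low-res model
};

// Scalar displacement map: one (usually signed) height offset per texel.
struct DisplacementMap {
    int width = 0, height = 0;
    std::vector<float> texels;

    float sample(Vec2 uv) const {
        // Nearest-neighbour lookup for brevity; a real renderer would filter.
        int x = static_cast<int>(uv.u * (width  - 1));
        int y = static_cast<int>(uv.v * (height - 1));
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Push every subdivided vertex along its normal by the stored difference
// between the low-res and high-res surfaces.
void applyDisplacement(std::vector<Vertex>& verts,
                       const DisplacementMap& map,
                       float scale)
{
    for (Vertex& v : verts) {
        float d = map.sample(v.uv) * scale;
        v.position.x += v.normal.x * d;
        v.position.y += v.normal.y * d;
        v.position.z += v.normal.z * d;
    }
}
```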
For games, you can do the same but create a normal map instead of a displacement map. This introduces significant differences, like the lack of silhouette detail: the edges will still be low-poly, and the shading will be a lot rougher and generally lower quality, especially in close-ups.
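Here's the normal-map counterpart, just to show why the silhouette doesn't improve: the baked normal only replaces the shading normal per pixel, while the low-poly vertices never move. Again, the types and names are hypothetical, not engine code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Tangent-space basis interpolated from the low-poly vertices.
struct TangentBasis { Vec3 tangent, bitangent, normal; };

// Decode an RGB texel in [0,1] into a tangent-space normal in [-1,1].
static Vec3 decodeNormal(Vec3 texel) {
    return normalize({ texel.x * 2.0f - 1.0f,
                       texel.y * 2.0f - 1.0f,
                       texel.z * 2.0f - 1.0f });
}

// Lambert term using the baked high-res normal. The vertex positions are
// never touched, which is exactly why the edges stay low-poly.
float shade(Vec3 texel, const TangentBasis& tbn, Vec3 lightDirWS) {
    Vec3 n = decodeNormal(texel);
    Vec3 worldNormal = normalize({
        n.x * tbn.tangent.x + n.y * tbn.bitangent.x + n.z * tbn.normal.x,
        n.x * tbn.tangent.y + n.y * tbn.bitangent.y + n.z * tbn.normal.y,
        n.x * tbn.tangent.z + n.y * tbn.bitangent.z + n.z * tbn.normal.z });
    float ndotl = dot(worldNormal, normalize(lightDirWS));
    return ndotl > 0.0f ? ndotl : 0.0f;
}
```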
As far as I know, Factor 5 has some dragon maquettes from Phil Tippett (ex-ILM, creator of Draco in Dragonheart) which they're using for the normal maps. Most of their models are created entirely in the computer, though.