Back when I had a 17" laptop with a 2GHz Core 2 Duo and a 500MHz G71 (practically the same GPU as RSX?) with 512MB of GDDR3, I used to play most of the earlier PS360 multiplatform titles just fine. CoD 3, Dead Space, F.E.A.R., Oblivion, Quake 4 and many others ran at over 40 FPS with everything maxed out @ 1280*800 or higher (it had a 1920*1200 screen).
I don't know how that system would fare in later titles because I eventually sold the laptop, but I wonder whether it would have kept up with most multiplatform games released since then, and at what difference in IQ compared to the current generation of consoles.
Generation after generation, I get the feeling that the "code-to-metal" advantages are overrated in practice. Maybe they only show in flagship AAA exclusives?
Sure, I believe there are advantages to coding to the metal, but are they really meaningful?
Take that "chunks of geometry" limitation on PC mentioned in the bit-tech article: even if it's true, does it actually make a difference? Or are both systems bottlenecked by something else well before they reach that number of "chunks of geometry"?
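My best guess is that "chunks of geometry" just means draw calls/batches. If so, the back-of-envelope math is easy to sketch; the per-call costs below are numbers I made up purely for illustration, not measurements:

```cpp
// Back-of-envelope: how many "chunks of geometry" (draw calls) fit in a frame
// if the CPU-side cost per call differs between a thick PC API and a thin
// console one? All per-call costs here are made-up illustrative numbers.
#include <cstdio>

int main() {
    const double frame_us = 33333.0;         // one 30fps frame, in microseconds
    const double pc_us_per_call = 30.0;      // assumed D3D9-era driver/runtime overhead
    const double console_us_per_call = 3.0;  // assumed near-to-the-metal overhead

    std::printf("PC:      ~%.0f draw calls/frame\n", frame_us / pc_us_per_call);
    std::printf("Console: ~%.0f draw calls/frame\n", frame_us / console_us_per_call);

    // This assumes the whole frame is spent issuing draws, which it never is;
    // and batching/instancing collapses many "chunks" into one call anyway,
    // which may be exactly why the limit rarely bites in multiplatform titles.
    return 0;
}
```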
It doesn't look meaningful to me at all, because I see current mid-range PCs playing multiplatform titles with much higher-resolution textures, higher-polygon models, 4x the rendering resolution, higher MSAA levels and better pixel shading effects, all while delivering a much higher framerate (I'm thinking of Skyrim with mods here).
As far as I can see, having a PC with a graphics card that is 8x faster than Xenos simply lets me have 8x more stuff going on in the same game, at the same framerate.
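That's just linear scaling, of course, and it only holds in the idealized case where the frame is purely GPU-bound and everything scales perfectly:

```latex
% Idealized GPU-bound frame: t = W / T (work over throughput).
% With an 8x faster GPU (T' = 8T) you either keep W and get t' = t/8
% (8x the framerate), or keep t and push W' = 8W (8x the "stuff").
t = \frac{W}{T}, \qquad t' = \frac{W}{8T} = \frac{t}{8}
\quad\text{or}\quad W' = 8W \;\Rightarrow\; t' = \frac{8W}{8T} = t
```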
Whatever that "chunks of geometry" figure really measures, a 10x console advantage in it doesn't seem to be proving all that helpful.
And honestly, IMO Rage is such an isolated case that I blame its performance problems 99.99% on id Software alone. Not on drivers, not on inefficient APIs, not on AMD/ATI, but on id Software.
Besides, the game doesn't even look all that good, and from what I played it feels more like an awkward attempt at shoving a (totally clichéd) story/setting into Quake 3 than anything else. It felt as flawed as any other game that spent too long in development hell.
Furthermore, Carmack also said the PS Vita would be able to output graphics as good as smartphones with 2x its theoretical performance, thanks to low-level optimizations alone, but I don't see that either.
I don't even see current or upcoming Vita games having much better graphics than NOVA 3, Shadowgun: Deadzone, Real Racing 2/3, etc., all of which my smartphone with a supposedly weaker Adreno 225 seems to play flawlessly @ 1280*720, whereas most Vita titles don't even render at the system's native 960*544 resolution.
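Just to put numbers on that resolution gap, in straight pixel counts:

```latex
% The phone is pushing ~1.76x the pixels of even a full-native Vita
% title, and more still versus sub-native rendering.
1280 \times 720 = 921\,600 \quad\text{vs.}\quad 960 \times 544 = 522\,240,
\qquad \frac{921\,600}{522\,240} \approx 1.76
```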
That said, here's what I think:
The way I see it, coding to the metal vs. going through the DirectX/OpenGL APIs on the PC doesn't bring "generational leap" advantages, far from it.
If the rumours about the next-gen consoles are true, I think anyone with a ~3GHz Core i3 or a ~4GHz 3-module Piledriver CPU, plus an HD 7850 or a GTX 660, should be able to play anything that comes to Durango at the same IQ settings.
For Orbis, I'd say the same applies with those CPUs and an HD 7870 or a GTX 660 Ti.
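For reference, assuming the widely circulated rumour figures for the consoles hold (a big assumption at this point), the raw throughput numbers line up like this:

```latex
% Console figures are rumours, not confirmed specs; PC figures are the
% cards' theoretical peaks.
\text{Durango} \approx 1.2\ \text{TFLOPS} \;\le\; \text{HD 7850} \approx 1.76\ \text{TFLOPS}
\qquad
\text{Orbis} \approx 1.8\ \text{TFLOPS} \;\le\; \text{HD 7870} \approx 2.56\ \text{TFLOPS}
```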
I think this holds unless there are some surprises in gameplay physics, enabled by HSA, that cannot be reproduced by our current PCs.