ET said:
I can't understand why people keep arguing with me about this.
Because it is an unrealistic expectation perhaps?
Did I say that GPUs won't be faster? Did I say that there won't be a use for them? Quite the contrary. All I'm saying is that I can envision a time when CPU-based graphics will be enough for undemanding users.
Except it is very likely these undemanding users will be exactly those sitting with systems using Intel "extreme" integrated graphics... (Intel should be given an award or something for best misuse of a word in the English language. They're also a candidate for "hyperthreading", by the way.)
Those who want to play games would almost certainly find themselves unsatisfied with a CPU's performance.
Drop the AA and ultra-high resolutions, and what is that ultra-high bandwidth for?
Oh, sorry, I thought we were talking about the FUTURE?
You think people are going to want to run games in the FUTURE without AA? Even today the visual difference is rather staggering; I can't believe it will be LESS so in the FUTURE.
(Then again, it was Arthur C. Clarke who said, "The future isn't what it used to be.")
He doesn't claim that GPUs will be gone altogether, just that they won't be a standard part of a system.
And I claim he is smokin' some heavy stuff. What does he mean by "high-end" anyway, only those who buy $400+ video cards today? To me, his comment just doesn't seem very well thought through. Unless a radical shift in CPU architecture comes along, I don't foresee CPUs getting the horsepower to overtake even today's best GPUs for an extremely long time. Heck, even today the fastest CPUs we've got struggle to compete with quite pedestrian chips like the TNT series.
You've also got one fact wrong. CPUs aren't serial.
Geez, you don't think I know CPUs can issue multiple instructions per clock cycle? However, if you look at statistics on the average number of instructions retired per clock, you'll find it isn't a particularly high number. Further parallelizing the hardware won't buy a whole lot compared to the number of transistors one has to throw at the problem to solve it. Seen as a whole, a CPU still behaves in a very much serial fashion, especially compared to a GPU.
Without that radical shift in CPU architecture, we're not going to be able to have transforms running in parallel with poly setup running in parallel with texture lookups running in parallel with texture filtering running in parallel with pixel shaders running...
A CPU with multiple cores/hyperthreading could have separate threads to run these tasks and perhaps rely on message passing, but they wouldn't be particularly well synchronized, nothing like a dedicated piece of hardware would be. Top-end GPUs today do four vertices and eight pixels at a time (peak), and an incredible amount of work on each vertex/pixel simultaneously (or rather, on a number of verts/pixels in a pipelined fashion); an imaginary, very complex future CPU could do SOME work on ONE pixel/vert in parallel.
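Just to put rough numbers on it, here's a quick back-of-envelope sketch (the clock rates, pipe count, ops-per-pipe and IPC figures below are my own assumptions, not anybody's spec):

```c
/* Rough per-second work comparison, using assumed but era-plausible figures:
 * a GPU with 8 pixel pipelines doing several useful ops per pipe per clock,
 * versus a CPU retiring a couple of instructions per clock on average. */
#include <stdio.h>

int main(void)
{
    /* assumed GPU: 300 MHz, 8 pixel pipes, ~10 useful ops per pipe per clock */
    double gpu_clock = 300e6, gpu_pipes = 8, gpu_ops_per_pipe = 10;
    /* assumed CPU: 3 GHz, ~1.5 instructions retired per clock on average */
    double cpu_clock = 3e9, cpu_ipc = 1.5;

    double gpu_ops = gpu_clock * gpu_pipes * gpu_ops_per_pipe;  /* pixel work only */
    double cpu_ops = cpu_clock * cpu_ipc;                       /* everything, total */

    printf("GPU pixel work: ~%.0f billion ops/s\n", gpu_ops / 1e9);
    printf("CPU work:       ~%.1f billion ops/s\n", cpu_ops / 1e9);
    return 0;
}
```

And that GPU figure doesn't even count the vertex pipelines, while the CPU figure has to cover the whole game, not just rendering.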
They also run at clock speeds over 6 times faster than the fastest GPU, in case you've forgotten, which can even out the extra parallelism of GPUs.
Sorry, but... that's pretty much all I have to say about that. Try making a P4, even the fastest you can find running on liquid-nitrogen cooling, beat even a standard, unoverclocked GF256. You have your work cut out for you, that I can assure you.

You'll have, what? Ten, fifteen clock cycles tops to render an entire pixel. Think a P4 can do that?
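For anyone wondering where a budget like that comes from, here's the back-of-envelope version (resolution, frame rate, overdraw and clock speed are all assumed figures, and the budget has to cover transform, setup, texturing, filtering AND shading for each pixel):

```c
/* Where a "ten or fifteen cycles per pixel" budget comes from.
 * Resolution, frame rate, overdraw and CPU clock are assumptions. */
#include <stdio.h>

int main(void)
{
    double width = 1280, height = 1024;  /* assumed resolution        */
    double fps = 60;                     /* assumed frame rate        */
    double overdraw = 3;                 /* assumed average overdraw  */
    double cpu_clock = 3e9;              /* assumed ~3 GHz CPU        */

    double pixels_per_sec = width * height * fps * overdraw;
    double cycles_per_pixel = cpu_clock / pixels_per_sec;

    printf("Pixels to shade per second: ~%.0f million\n", pixels_per_sec / 1e6);
    printf("Cycle budget per pixel:     ~%.0f cycles\n", cycles_per_pixel);
    return 0;
}
```

Plug in your own numbers if you don't like mine; the conclusion doesn't move much.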
No, this is a pipe-dream, I say. And like I mentioned in my previous post, it's not the first time Tim's been huffing away on the hallucinogenics either. He seems to be a somewhat okay coder type (certainly no god), but maybe hardware isn't his strong suit.
*G*