Tim's thoughts

Bouncing Zabaglione Bros. said:
The fact is that these people *want* to play games at the optimum level.

Perhaps. I'm sure most people won't mind getting high end cards for free (hey, I sure don't). But a lot of people can make do with a lot less than the "optimum" level, and save that extra expense. When I play UT2003 on a GeForce2 Ti, I enjoy it, not because I haven't seen it on better cards, but because once you get into the game, the actual graphics quality is much less meaningful. Who has time to check out shadows or glows in a fast paced game? And if the game is slow paced enough that you have time to watch the scenery, then you typically don't need high frame rates -- so it balances out in the end.
 
ET said:
Perhaps. I'm sure most people won't mind getting high end cards for free (hey, I sure don't). But a lot of people can make do with a lot less than the "optimum" level, and save that extra expense.

Who said anything about free? Are you trying to imply that people only choose good products if they don't have to pay for them? A lot of people are willing to pay to get quality products that fulfil their needs. A lot of people consider spending money on things that do not fulfil their needs to be completely wasteful.

ET said:
When I play UT2003 on a GeForce2 Ti, I enjoy it, not because I haven't seen it on better cards, but because once you get into the game, the actual graphics quality is much less meaningful. Who has time to check out shadows or glows in a fast paced game? And if the game is slow paced enough that you have time to watch the scenery, then you typically don't need high frame rates -- so it balances out in the end.

While I won't argue that *you* enjoy playing UT2K on a GF2, there is no doubt *at all* that the image quality, framerate and resolution when playing on an R3x0-level card are much, much better. This isn't a question of opinion; it has been empirically proven. You get more features, more frames, and more pixels.

Personally I like high IQ, and I do notice the likes of AA/AF. You are veering dangerously towards the "Frames are everything, you don't notice any IQ at high frames, so let's remove all details, models, smoke, fog, etc 'cos I can't see any difference" argument.

Graphical niceties like foliage, fog, smoke, shadows, etc. are all part of the gaming environment, and can have a significant impact on gameplay. To remove them, or to try to justify their absence because you don't have a decent card and can't "check out" the graphics, is to say that such things are pointless. This is patently untrue. They all add to the gameplay as well as the visuals.

You may be willing to accept lower IQ because of your budget, or you may be trying to justify it to yourself in the face of someone telling you that your graphics card is no longer up to scratch, but this is not the case for a lot of people who play games, and it is not how the developers meant their games to be seen at their best.

As I said before, I could watch a big spectacular movie on a small screen with mono sound and enjoy it, but I know I'd enjoy it more on a big screen with Dolby. That doesn't mean I have to go out and spend megabucks on a 42-inch plasma. I can buy a midrange product like a good quality 32-inch CRT that gives me an extremely good experience at a much lower price. This is analogous to buying a 9600 instead of a 5950U, but still gaining a much, much better experience than a GF2 or a V5500.
 
I personally think a lot of it still comes down to the differentiation between distributed and centralised processing within the PC platform. If you look at a high end PC, graphics, sound, and networking are often handled by their own core logic, leaving the CPU free to handle other tasks. A low end PC uses the one main CPU for most of these tasks.

When I hear that dedicated graphics hardware is on the way out, the question that leaps to my mind is whether he means dedicated hardware or specialized hardware. Given sufficient advances in technology, it may be possible to use a multiple-core CPU array, much like the Cell architecture, where a system's processing resources are divided up among a number of identical processors. In a case like that it may be more effective, or at least more cost-effective, to simply dedicate some of the cores to graphics while others handle the AI, networking, sound, and other features. This sort of shift could kill dedicated, specialized hardware, such as graphics cards, while still avoiding the centralization trap and CPU overload.
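To make that concrete: a minimal sketch of the idea, using present-day threading primitives and made-up loop names purely for illustration (this assumes nothing about what a Cell-like chip's actual API would look like):

```cpp
// Hypothetical sketch: dividing a pool of identical cores among subsystems.
// On such an architecture, "dedicating a core to graphics" is a scheduling
// decision rather than a different piece of silicon.
#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> running{true};

void render_loop()  { while (running) { /* rasterise and shade a frame */ } }
void ai_loop()      { while (running) { /* update agents */ } }
void audio_loop()   { while (running) { /* mix sound buffers */ } }
void network_loop() { while (running) { /* pump packets */ } }

int main() {
    std::vector<std::thread> cores;
    cores.emplace_back(render_loop);
    cores.emplace_back(render_loop);   // give graphics a second core if it needs one
    cores.emplace_back(ai_loop);
    cores.emplace_back(audio_loop);
    cores.emplace_back(network_loop);

    running = false;                   // stop immediately; this is only a toy
    for (auto& t : cores) t.join();
}
```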

It would mean moving away from dedicated graphics hardware without losing the ability to dedicate hardware to graphics as necessary.

How well it would work, I don't know, but it's one way to look at the issue.
 
JF_Aidan_Pryde said:
"You can't just send DMA packets everywhere"

This is untrue.

Nearly every supercomputer does it; the best you can do is hide it from the developer. But you cannot get away from distributed memory with parallel processing -- even caches need to send packets everywhere to keep memory coherent, so even a unified memory model cannot avoid doing this at the hardware level.

Personally I think hiding the basic reality from developers doesn't help. Either they will keep it in the back of their mind and structure their code to keep remote memory access to a minimum, in which case they might as well show the intention explicitly in the code by using message passing, or they won't and performance will suck.
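To show what "explicitly in the code" means, here's a minimal sketch using the standard MPI C API (the buffer size, tag and ranks are arbitrary placeholders, not anything specific to graphics):

```cpp
// Minimal sketch: the remote memory transfer is visible as a send/receive
// pair, so the programmer structures the algorithm around it instead of
// pretending all memory is equally close.
// Build with an MPI wrapper (e.g. mpic++) and run with: mpirun -np 2 ./a.out
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double block[256] = {0};
    const int tag = 0;

    if (rank == 0) {
        // The cost of moving this block to another node is explicit here.
        MPI_Send(block, 256, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(block, 256, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received the block explicitly\n");
    }

    MPI_Finalize();
    return 0;
}
```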
 
Rugor said:
It would mean moving away from dedicated graphics hardware without losing the ability to dedicate hardware to graphics as necessary.

That's one way to go about it, I guess. I imagine that CPU makers could actually dedicate some chip real estate to graphics hardware. So a CPU might have a texture read unit to speed that up, and use an SSE-style vector processor for doing the pixel calculations.
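As a rough illustration of the SSE-style idea (my own sketch, not any real CPU's graphics path): the kind of per-pixel math such a vector unit would chew through, here a simple texel-times-vertex-colour modulate on RGBA floats:

```cpp
// Sketch: one __m128 holds one RGBA pixel in float; the multiply handles all
// four channels at once. A hypothetical texture read unit would sit in
// front of this, fetching and filtering the texels.
#include <xmmintrin.h>  // SSE intrinsics

void modulate_pixels(const float* texels, const float* colours,
                     float* out, int pixel_count) {
    for (int i = 0; i < pixel_count; ++i) {
        __m128 t = _mm_loadu_ps(texels  + 4 * i);
        __m128 c = _mm_loadu_ps(colours + 4 * i);
        _mm_storeu_ps(out + 4 * i, _mm_mul_ps(t, c));
    }
}
```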
 
Bouncing Zabaglione Bros. said:
You may be willing to accept lower IQ because of your budget, or you may be trying to justify it to yourself in the face of someone telling you that your graphics card is no longer up to scratch, but this is not the case for a lot of people who play games, and it is not how the developers meant their games to be seen at their best.
I simply said it as I see it. I know enough people who play games, some of them quite a few hours per week (although on and off), and have pretty old hardware. Not because they can't afford better, but because they can't justify the expense to themselves.

From a developer's POV, of course I'd love to have these users upgrade to better hardware (in fact, I don't even support those who still use a Voodoo3 -- and these people exist). It would make my life easier if the baseline were better. From a user's POV, it's easy for me to understand the people who don't upgrade. There are always people who will feel that buying better hardware is warranted. My point is that there are also people (and I mean people who play games quite a bit) who feel that spending the extra isn't necessary.

When inexpensive PCs come with DX9 graphics, which should happen in less than a year as integrated DX9 chipsets arrive, it will be even easier to choose them over paying for a separate graphics card. At some point Sweeney may be right, and even the CPU will be able to create graphics that are good enough for most people.

Bouncing Zabaglione Bros. said:
You are veering dangerously towards the "Frames are everything, you don't notice any IQ at high frames, so let's remove all details, models, smoke, fog, etc 'cos I can't see any difference" argument.
Quite the contrary. I think that what really matters in a game is neither frame rates nor IQ but gameplay. Frame rates do matter in a competitive FPS environment, but even for shooters a game can be playable at 15FPS. Not optimal, but enjoyable nonetheless. In fact I'd generally go for 15FPS (as long as it's relatively consistent) with high image quality over faster frame rates with lower image quality. But I still think you don't notice the image quality much during a fast game (and I don't mean a fast frame rate game, but a game where you have to keep your attention on what's happening, and doesn't have dull moments for just looking around).
 
ET said:
But I still think you don't notice the image quality much during a fast game (and I don't mean a fast frame rate game, but a game where you have to keep your attention on what's happening, and doesn't have dull moments for just looking around).
Except every game has its slow moments.

Anyway, it's just a question of whether or not you care. Me, I care a lot.
 
ET said:
Rugor said:
It would mean moving away from dedicated graphics hardware without losing the ability to dedicate hardware to graphics as necessary.

That's one way to go about it, I guess. I imagine that CPU makers could actually dedicate some chip real estate to graphics hardware. So a CPU might have a texture read unit to speed that up, and use an SSE-style vector processor for doing the pixel calculations.

The counter-argument, though, is that such a powerful CPU of the future combined with a powerful GPU of the future is still going to deliver far higher efficiency and performance. That's the point where I fail to see how dedicated hardware will become redundant.

When I play UT2003 on a GeForce2 Ti, I enjoy it, not because I haven't seen it on better cards, but because once you get into the game, the actual graphics quality is much less meaningful. Who has time to check out shadows or glows in a fast paced game? And if the game is slow paced enough that you have time to watch the scenery, then you typically don't need high frame rates -- so it balances out in the end.

I wish it were all about IQ-improving features. The geometry throughput is more than just a bit higher on more recent cards. UT2k3 is a hardware T&L optimized game, and you need both a powerful CPU and a strong T&L- or VS-equipped card; as an example, you effectively start avoiding single-digit framerates with at least a 2000+ PR-rated CPU and a GF4 Ti class card.

A GF2 Ti will play fine in the usual standard indoor maps; try heavy outdoor maps, add Epic's bonus pack or a couple of third-party maps, and the going gets tough pretty quickly, especially as the in-game action picks up. Even if you turn the settings down to the absolute minimum, the extremely high poly count is going to show.
 
My own thought is that while a powerful future CPU and a powerful future GPU would definitely deliver higher performance and efficiency, the combination would also come with a higher price tag.

Unless the consumer can be convinced that these improvements are worth the additional cost, they will become very much the province of the minority. If the default CPU-based graphics play the games "well enough" in the opinion of a majority of users who have never experienced anything better, it will be very hard to sell them on an expensive upgrade that they don't feel they need.

It's much more a matter of economics than performance and efficiency. At least that's the way I see it.
 
The myth that hardware keeps getting more expensive over time, in relative terms, is probably as old as the predicted return to software rendering.

It's simple; just look back at what the original Pentium or the Voodoo1 cost back then. On top of that you may also compare the die sizes and transistor counts of those chips to today's standards, and the prices, and then we'll see how obscenely expensive a future CPU or GPU might be.

Manufacturers aren't dumb enough to build equipment costing $1000 or more...
 
I gather I wasn't completely clear when I mentioned a higher price tag. I was referring to the fact that a CPU + GPU combination would cost more than a system with a CPU alone. In general, systems with integrated graphics are cheaper than what appear to be comparable systems with a discrete GPU. Just take a look at a Dell catalog to see what I mean.

My point is that once integrated graphics gets "good enough", it will be harder to get people to pay more for the technically better systems with discrete GPUs. The absolute amount doesn't matter so much as the proportionate difference. If the system with discrete graphics costs 20% more, then people have to feel they are getting at least 20% more computer for their money. If they are satisfied with the CPU-based graphics they are getting, it will be harder to get them to shell out that extra 20% on the sticker price.
 
Ailuros said:
As far as any sort of anti-aliasing goes, there will come a time when vendors will stop optimizing for aliased graphics -- or, to put it better, when there won't be any point in not having AA enabled.

And on the other side of the fence we have many new (poorly coded?) games supposedly sporting new graphics technology that refuse to work with FSAA at all :? And they keep coming out. So much for progress.


I just wonder how many people with LCD monitors use low-end graphics cards. Running at anything other than the (usually high) native resolution doesn't exactly yield great results.
 
Crazyace said:
In some way I agree with Tim. As graphics cards move towards being general-purpose pixel processors, and devs move away from straightforward texture layering, it should be possible for general CPUs to start to close the gap again.

It may take a while... but imagine a 4-way hyperthreaded P4 handling a graphics process: laying down the iterators for a triangle one by one into streams, then checking visibility and storing the results in a parameter buffer, and finally running pixel shaders stage by stage from textures held in cache as long as possible.

The efficiency may not match the top-of-the-range graphics cards, but brute force may just be enough (just like everyone takes the brute-force Z buffer for granted today).
I imagine it might, if it's lucky, achieve a 20th of the speed of today's graphics chips.

What you have to realise is that the graphics chip is, in effect, a multi-multi-threaded device running a vast number of instructions in parallel.
 
ET said:
I can't understand why people keep arguing with me about this.

Because it is an unrealistic expectation perhaps?

Did I say that GPUs won't be faster? Did I say that there won't be a use for them? Quite the contrary. All I'm saying is that I can envision a time when CPU based graphics will be enough for undemanding users.

Except, it is very likely these undemanding users will be exactly those sitting with systems using Intel "extreme" integrated graphics... (Intel should be given an award or something for best misuse of a word in the English language... :LOL: They're also a candidate for "hyperthreading" too, by the way.)

Those who want to play games would almost certainly find themselves unsatisfied by a CPU's performance.

Drop the AA and ultra-high resolutions, and what is that ultra-high bandwidth for?

Oh, sorry, I thought we were talking about the FUTURE?

You think people are going to want to run games in the FUTURE without AA? Even today the visual difference is rather staggering, I can't believe it to be LESS so in the FUTURE.

(Then again, it was Arthur C. Clarke who said, "The future isn't what it used to be." ;))

He doesn't claim that GPUs will be gone altogether, just that they won't be a standard part of a system.

And I claim he is smokin' some heavy stuff. What does he mean by "high-end" anyway -- only those who buy $400+ video cards today? To me, his comment just doesn't seem very well thought through. Unless a radical shift in CPU architecture comes along, I don't foresee CPUs getting the horsepower to overtake even today's best GPUs for an extremely long time; heck, even today the fastest CPUs we have struggle to compete with quite pedestrian graphics chips like the TNT series, etc.

You've also got one fact wrong. CPUs aren't serial.

Geez, you don't think I know CPUs can issue multiple instructions per clock cycle? However, if you look at statistics on the average number of instructions retired per clock, you'll find it isn't a particularly high number. Further parallelization of the hardware isn't going to give a whole lot compared to the number of transistors one has to throw at the problem. Seen as a whole, a CPU still behaves in a very much serial fashion, especially compared to a GPU.

Without that radical shift in CPU architecture, we're not going to be able to have transforms running in parallel with poly setup running in parallel with texture lookups running in parallel with texture filtering running in parallel with pixel shaders running...

A CPU with multiple cores/hyperthreading could have separate threads to run these tasks and perhaps rely on message passing, but they wouldn't be particularly well synchronized -- nothing like a dedicated piece of hardware would be. Top-end GPUs today do four vertices and eight pixels at a time (peak), and an incredible amount of work on each vertex/pixel simultaneously (or rather, on a number of verts/pixels in a pipelined fashion), whereas an imaginary, very complex future CPU could do SOME work on ONE pixel/vert in parallel.

They also run at clock speeds over six times faster than the fastest GPU, in case you've forgotten, which can even out the extra parallelism of GPUs.

Sorry, but, :LOL:... That's pretty much all I have to say about that. Try making a P4, even the fastest you can find running on liquid nitrogen cooling, beat even a standard, unoverclocked GF256. You have your work cut out for you, that I can assure you. ;) You'll have, what? Ten, fifteen clock cycles tops to render an entire pixel. Think a P4 can do that?

No, this is a pipe-dream I say. And like I mentioned in my previous post, it's not the first time Tim's been huffing away on the hallucinogenics either. He seems to be a somewhat okay coder type (certainly no god), but maybe hardware isn't his strong side. ;)


*G*
 
ET said:
Ailuros said:
Nope you didn't. It's actually Sweeney's rather extreme predictions that caused that pro/contra argument and not your points.

I later edited my post to quote Sweeney (probably after you quoted it). The way I read what he said, he's not saying that there won't be graphics cards, just that they'd only be used at the high end.
And yet, to contradict this, we are now seeing dedicated graphics hardware going into very low-end systems, e.g. mobile phones <shrug>
 
Gosh, Grall sure sounds convincing... I just asked Tim how often he smokes whatever it is Grall says Tim smokes....
 
I imagine it might, if it's lucky, achieve a 20th of the speed of today's graphics chips.

What you have to realise is that the graphics chip is, in effect, a multi-multi-threaded device running a vast number of instructions in parallel.

Hi Simon,

I believe you have also described a high end cpu...

A global speed comparison would be quite futile - I could imagine situations where it would be faster than today's cpus... but the point I want to raise is that silicon support could be added to a cpu quite easily. Don't be naive and try to compare like for like - the underlying algorithms for efficient software rasterisation will not be like the reference implementations or validation models of hardware devices.

In terms of 'efficiency' the customised HW will be better for the focused application, but if the R&D advantages held in pure performance terms by the cpu manufacturers are leveraged well the customer's perception will differ.

Please note that I would expect a high end system to still contain a 'graphics unit' - just that this would be another identical CPU... ( Low end systems would be cpu only... )
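For anyone who hasn't written one, here is a toy flat-colour triangle fill just to show the shape of the inner loop involved (only a sketch; an efficient software rasteriser would bin triangles into tiles and defer shading rather than mirror a hardware reference model):

```cpp
// Toy rasteriser using the half-space (edge function) test over the
// triangle's bounding box. Real software renderers restructure this
// heavily around caches and SIMD.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void fill_triangle(std::vector<uint32_t>& fb, int width, int height,
                   Vec2 v0, Vec2 v1, Vec2 v2, uint32_t colour) {
    int minx = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int maxx = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int miny = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int maxy = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    for (int y = miny; y <= maxy; ++y) {
        for (int x = minx; x <= maxx; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)   // inside test, assumes CCW winding
                fb[(size_t)y * width + x] = colour;
        }
    }
}
```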
 
Crazyace said:
I imagine it might, if it's lucky, achieve a 20th of the speed of today's graphics chips.

What you have to realise is that the graphics chip is, in effect, a multi-multi-threaded device running a vast number of instructions in parallel.

Hi Simon,

I believe you have also described a high end cpu...
One that executes 100s of instructions in parallel? That would be a remarkably "high end" cpu.
A global speed comparison would be quite futile - I could imagine situations where it would be faster than today's cpus...
Sorry, you've lost me. If you mean graphics (or, for that matter, some stream processing applications) then GPUs are already faster and cheaper than CPUs.

but the point I want to raise is that silicon support could be added to a cpu quite easily. Don't be naive and try to compare like for like - the underlying algorithms for efficient software rasterisation will not be like the reference implementations or validation models of hardware devices.
I don't believe I am being naive. FWIW, the next generation of mobile devices with graphics is unlikely to be done with extensions to the CPU. That approach simply doesn't offer the required performance (especially when power is taken into consideration). They are likely to be SoC solutions, but that simply means the juxtaposition of GPU and CPU on the one chip.

In terms of 'efficiency' the customised HW will be better for the focused application, but if the R&D advantages held in pure performance terms by the cpu manufacturers are leveraged well the customer's perception will differ.
You've lost me again. To what does "customised HW" refer? A full 3D accelerator? An additional processing unit in the main CPU?

Please note that I would expect a high end system to still contain a 'graphics unit' - just that this would be another identical CPU... ( Low end systems would be cpu only... )
We've seen these sorts of cycles in the past, where dedicated HW was replaced by HW made from commodity (programmable) chips, which again was replaced by custom HW **. At the moment, powerful CPUs are relatively expensive and I can't see that changing any time in the near future.


** SGI is a very good example.
 
I don't believe that gpus will be replaced by cpus for any graphical application where speed is of the essence. Or vice versa. This is due to the simple fact that the target applications differ so much in their inherent parallelism.

Graphical processing lends itself quite easily to, and benefits greatly from, a highly parallel processor. General processing as done on the cpu, on the other hand, whilst it may benefit from a couple of extra processing units to take advantage of a small amount of instruction-level parallelism, is largely serial and would not benefit from a highly parallel architecture. It would be a waste of transistors to implement the necessary logic to deal with such graphical functions, even for the low/mid end.

Sure, with a massively multithreaded rendering engine and enough cpus, you could probably do away with the gpu (Earth Simulator, anyone?).

It's not like the rate of development is slowing down that much either. Both cpu and gpu are being pushed to their limits by upcoming games. If the cpu were becoming more and more underused, rather than being utilised for better AI and physics etc., and graphics development stagnated, then after a while I could see this happening, but I doubt that will ever occur.

The architectures for gpus and cpus are too disparate. They are pretty useless for doing each other's work.
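To illustrate the disparity with a toy example of my own (not from the thread): the per-pixel loop below has no dependence between iterations, so it maps naturally onto many parallel pipelines or SIMD lanes, while the list walk underneath is a chain of dependent, branchy steps that extra parallel units can't do much for:

```cpp
// Data-parallel: every iteration is independent -- the same small program
// per pixel, which is exactly what wide graphics hardware exploits.
void shade_all_pixels(float* out, const float* in, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * 0.5f + 0.25f;
}

// Serial, data-dependent: each step needs the previous one's result and
// branches on data, so throwing parallel units at it gains little.
int walk_list(const int* next, const int* value, int start, int limit) {
    int sum = 0;
    for (int i = start; i != -1 && sum < limit; i = next[i])
        sum += value[i];
    return sum;
}
```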

Jab
 
Plus, as has been pointed out, CPU+GPU would always trump CPU alone, just because of the extra, dedicated transistors. It's like having a second, superfast processor for anything graphical.

... and a gpu would be trumped by a cpu+gpu combo where both the cpu and gpu had nice b/w and dedicated h/w... especially if this combo is well designed, has a higher transistor budget, and no better process is available for two or more years...
 