hyperthreading and games

Please don't use the term "hyperthreading" when you actually mean "multithreading". They are very distinct terms.

Entropy
 
Entropy said:
Please don't use the term "hyperthreading" when you actually mean "multithreading". They are very distinct terms.
Ah yes, sorry for that. What I actually meant is that a developer might consider writing the game multithreaded because there will be quite a few HT-enabled processors out there.
 
mczak said:
You are of course correct that the installed systems today don't have HT. However, if you're talking about developing new games (not almost-finished ones like DoomIII or HL2) then you are also talking about 2 years (?). So, if you develop a new game now, then HT is something you might at least consider (though apparently, Gabe Newell doesn't think that the potential gains are worth the increased development effort).

The problem is that in two years time, the benefits that you get from supporting multithreading are probably going to be far, far outweighed by the benefits that you get from faster CPU's (including the adoption of 64bit CPUs) and faster VPUs.
 
mczak said:
Entropy said:
Please don't use the term "hyperthreading" when you actually mean "multithreading". They are very distinct terms.
Ah yes, sorry for that. What I actually meant is that a developer might consider writing the game multithreaded because there will be quite a few HT-enabled processors out there.

A multithreaded application will not make use of HT by default. It is the responsibility of the OS to run different threads on different CPUs/cores. But multithreading has its own problems, and not every application, or every module of an application, can benefit from it.
I think RTS games can possibly benefit more from multithreading, although I am not sure about it as I have never written an RTS ;) .
 
Bouncing Zabaglione Bros. said:
The problem is that in two years time, the benefits that you get from supporting multithreading are probably going to be far, far outweighed by the benefits that you get from faster CPU's (including the adoption of 64bit CPUs) and faster VPUs.
What is the basis for your claim?

Video processing requirements will continue to increase, and it will obviously be necessary to upgrade that side of the hardware equation, no matter what happens in the CPU arena.

CPU speeds will continue to increase, but there IS a reason that every processor manufacturer has multi-core CPUs on their roadmaps. Obtaining raw increases in clock speed is becoming more difficult. Parallelism will become an increasingly important element in performance enhancement, and anyone who isn't at least starting to figure out how to thread their code will be left behind before too long.

Unfortunately, 64-bit gaming will probably be a niche issue until Intel enters the game. And that isn't likely to happen in your 2 year time frame.
 
Oompa Loompa said:
Bouncing Zabaglione Bros. said:
The problem is that in two years time, the benefits that you get from supporting multithreading are probably going to be far, far outweighed by the benefits that you get from faster CPU's (including the adoption of 64bit CPUs) and faster VPUs.
What is the basis for your claim?

I've worked with lots of developers on large multithreaded systems. The gains made from increasing hardware resources outweigh the gains made from parallelism, and this is on software such as extremely large (terabyte-sized) databases with up to hundreds of users. Typically you find the limiting factors are disk access and the fact that many of the required tasks do not benefit from parallelism for a single person. For the most part, you are simply running the same thing multiple times for different people, but their individual tasks are not themselves very parallel.

When you look at a single player game, servicing one person who is running (for the most part) tasks which do not inherently benefit from multithreading, I can't see you getting anywhere near the benefit from parallelism on an HT-enabled CPU (i.e. a fake dual processor) that you will from a 5 GHz Athlon FX and an R500 in a couple of years' time. It's not the HT hardware that's the problem; it's just that a lot of things you have to do will not translate to a parallel design that gives you significant gains.

It's like trying to make a cup of tea before boiling the water. Sure, once you put the kettle on and start it boiling, you can get the cup out and put some milk in it, but then you are sitting there waiting for the water to boil. There's nothing you can do about that, because you need the hot water before you can make your cup of tea.

And I don't agree with your take on 64-bit gaming. MS has a 64-bit Windows due at the beginning of 2004, and Epic already have a 64-bit version of UT2K3 and have stated that the next-generation engine they are developing now will *require* 64-bit if you want to use the content creation tools. Valve have already said it is an easy and cheap way to get significant speed boosts for their engine with very little work. With AMD being a favourite for gamers and pushing the Athlon 64, I expect 64-bit gaming to get quite popular sooner rather than later within that two-year timespan. Note that Valve has also said that optimising for HT gives them the least benefit for the most work, sometimes actually slowing their engine down.
 
Developers are not going to spend 50 percent (or whatever) more time doing a real multithreaded design if it only gets them 5 percent more speed for the small percentage of the market with a 3 GHz hyperthreading P4.
The speedup of course will depend on the type of application. However, looking at the future direction of many CPU designers, it is clear that more parallelism is the future. Game developers had better start designing their games from the ground up for full multithreading, or else they will be the last on the block.

Not to mention that a 5% - 20% performance speedup from Hyperthreading is something that I would very much like to see from any game.
 
here is something I stumbled upon today that is relevant here:

http://halflife2.homelan.com/forums...1443a73&threadid=1298&perpage=15&pagenumber=2
question
From Me:
I was wondering,

Will Half-Life 2 support Hyper-Threaded CPU's?
In theory, shouldn't it take a lot of load off the system?

Example: the first processor does all the physics, while the second does other tasks, like AI, 3D sound, etc.

Any Thoughts?

answer
Hyperthreaded CPUs attempt to extract thread-level parallelism, as opposed to traditional pipelined architectures which attempt to take advantage of instruction level parallelism. Hyperthreading can be somewhat unpredictable in terms of the performance impact, as you can, in some cases, run slower.

Implementing and maintaining a "deeply" multi-threaded version of Source would be a pain (i.e. multi-threading the renderer). Implementing a hacky version (e.g. having a discrete physics thread or running the client and server in different threads) is something we may do depending upon how much bandwidth we have before we ship. Right now we don't get nearly as much bang for the buck working on hyperthreading as we do on other optimizations. That may change as we run out of things to optimize.

64-bits, in contrast, is a one-time cost and is fairly simple to take advantage of. It's a huge win for tools as it not only gets more work done per instruction, but it also gets us past the current memory limitations, which are a problem for us today on tools.

Distributed computing is harder than hyperthreading but it has the potential to increase performance by a huge amount (8X on our tools) as opposed to hyperthreading (30%). All of our tools are going to a distributed approach.

So the taxonomy looks like this:

- general algorithmic optimization (general good thing to do)
- DX9 optimization (big gains, long term direction)
- 64-bits (not that hard, solves memory problem as well as performance gains)
- hyperthreading (hard initial cost, ongoing code maintenance cost, limited unpredictable performance gains, benefits in multiprocessor environments as well)
- distributed computing (hardest to do, biggest potential gains, great for tools, may be great for servers, not sure how it works with clients)

Hope this makes sense. Jay, any corrections?

Gabe
 
Bouncing Zabaglione Bros. said:
When you look at a single player game, servicing one person who is running (for the most part) tasks which do not inherently benefit from multithreading, I can't see you getting anywhere near the benefit from parallelism on an HT-enabled CPU (i.e. a fake dual processor) that you will from a 5 GHz Athlon FX and an R500 in a couple of years' time.
I'm forced to ask what your point is. Are you arguing that the appropriate strategy for a developer is to do nothing, and wait for hardware manufacturers to optimize their performance? The issue at hand is what optimizations a developer can make that will increase their ability to exploit existing and near-future hardware. Investing effort in threading offers only a modest payoff in the short term, but will become increasingly important.

I think you'll find that there is a great deal of potential parallelism in a modern game. The types of applications we're describing are essentially life simulators, with simulated individuals making independent decisions, and being independently acted on by physical rules. The renderer is a different matter, but then, that's a problem for the video processor.

HT is a pretty limited tool for exploiting this parallelizability, but it has the huge advantage of being mainstream. From a developer's perspective, it is a good enough reason to start writing threaded engines, despite the fact that the big payoff won't come until multi-core CPUs go mainstream.

I understand that you are a "64 bit advocate" of some sort (i.e. an AMD fan), but I think you're kidding yourself. There are significant performance advantages to making AMD64 versions of commercial software, but developers are businessmen; until the installed base of 64-bit CPUs is large, it would be financially unwise to work on dedicated 64-bit code. The trivial amount of announced support for 64-bit in the gaming industry reflects this. Hopefully, this will slowly change - but probably not in the next two years.
 
Oompa Loompa said:
HT is a pretty limited tool for exploiting this parallelizability, but it has the huge advantage of being mainstream. From a developer's perspective, it is a good enough reason to start writing threaded engines, despite the fact that the big payoff won't come until multi-core CPUs go mainstream.

Not really. The main issue here is how much benefit the developer will get from multithreading, and whether it is worth spending the extra effort.
Only the developer will know the answer, and the benefits will vary for different games. So if one developer gets a benefit, that does not mean it is beneficial for everyone.


Oompa Loompa said:
I understand that you are a "64 bit advocate" of some sort (ie. an AMD fan), but I think you're kidding yourself.

By the way, "64 bit advocate" does not imply "AMD fan".
Many developers, especially Epic, have emphasized their need for a 64-bit platform. I guess they are all AMD fans, going by your definition, and Intel too is an AMD fan for having 64-bit CPUs.
 
crystalcube said:
Oompa Loompa said:
I understand that you are a "64 bit advocate" of some sort (ie. an AMD fan), but I think you're kidding yourself.

btw "64 bit advocate" does not imply "AMD Fan".
I think many developers specially epic have emphasized their need for 64 bit platform , I guess they are all AMD fan , going by your definition and Intel too is AMD fan for having 64 bit CPUs.
Don't be silly. Epic is not being emotional - they have a vested financial interest in endorsing 64-bit processing. They feel ready to exploit it with their future products. I'm pretty sure that "Bouncing Zabaglione Bros." is not in the same position.
 
Oompa Loompa said:
I'm forced to ask what your point is. Are you arguing that the appropriate strategy for a developer is to do nothing, and wait for hardware manufacturers to optimize their performance? The issue at hand is what optimizations a developer can make that will increase their ability to exploit existing and near-future hardware. Investing effort in threading offers only a modest payoff in the short term, but will become increasingly important.

Are you suggesting that developers should spend significant amounts of time and money, increasing development time and costs in order to give a very small gain for a very small part of the market?

The point is that the investment in HT means a lot of work for the developer with very little payoff. Why would a developer put that investment into a game whose life will be over by the time multithreading makes any difference - especially as by that point there will be more payback from newer CPUs/VPUs? It's just not very economical.

Oompa Loompa said:
I think you'll find that there is a great deal of potential parallelism in a modern game. The types of applications we're describing are essentially life simulators, with simulated individuals making independent decisions, and being independently acted on by physical rules. The renderer is a different matter, but then, that's a problem for the video processor.

Let's see *your* evidence for this assertion, especially given what Valve recently said, as quoted above.

You will gain very little by parallelising things (such as your music code) if they only take a very small amount of your game cycle. The place you will make gains is in the things your engine spends the most time doing, i.e. rendering. Saving 10 percent of what your engine spends 90 percent of its time doing is worthwhile. Saving 10 percent of what your engine spends 2 percent of its time doing is not. You've just glossed over "the renderer" and passed off its performance to the graphics card, thus agreeing that it's the hardware that will make the difference to 90 percent of the game's performance - not multithreading. :rolleyes:

Oompa Loompa said:
HT is a pretty limited tool for exploiting this parallelizability, but it has the huge advantage of being mainstream. From a developer's perspective, it is a good enough reason to start writing threaded engines, despite the fact that the big payoff won't come until multi-core CPUs go mainstream.

You're suggesting that it's economical for a developer to spend time and effort designing, coding, and maintaining a *second*, multithreaded version of their code, for HT processors that have only just arrived at the very high end of the CPU market and don't even exist on the AMD processors that are a significant part of the gaming market? Even Valve doesn't think that, and they have the biggest game/engine of the year and spent significant amounts of time optimising for NV30 and several different versions of DirectX. :rolleyes:

Oompa Loompa said:
I understand that you are a "64 bit advocate" of some sort (ie. an AMD fan), but I think you're kidding yourself. There are significant performance advantages to making AMD64-versions of commercial software, but developers are businessmen; until the installed base of 64 bit CPUs is large, it would be a financially unwise to work on dedicated 64 bit code. The trivial amount of announced support for 64 bit in the gaming industry reflects this. Hopefully, this will slowly change - but probably not in the next two years.

What you don't seem to understand is that supporting 64-bit is relatively easy - for the most part you just need to recompile your code. So unless you've written something really badly, or very close to particular hardware, it's relatively little work, for a payback of 30-50 percent depending on the app. It's near transparent if your code is portable.

To write a proper multithreaded app, your tasks need to be parallelisable - and not all can be. For instance, if you parallelise your renderer, threads may spend a lot of time sitting idle and gaining you nothing, because they are waiting to see where the player looks and moves before being *able* to process anything in parallel. And yet your renderer is what your game spends 90 percent of its time running, and where you need to make the most gains - here faster CPUs and graphics cards will have a massive effect, whereas multithreading will not.

You can't just hack in a bit of HT code here and there and expect much gain. It's not like MMX; it's not discrete code - multithreading is a whole planning and design philosophy for your software if you want it to make any difference.

As for your attempt to belittle me as some kind of AMD fanboy: are you some kind of Intel fanboy because you think HT is the only way forward? Would you be disturbed to hear that at the recent Athlon 64 launch, AMD also said they intend to move forward with HT-type pseudo-multiple-processors-on-a-single-die on their future Athlon 64s?

I'm not saying HT or multithreading is a bad thing per se, but for *games* it has a smaller benefit than for many other apps (even Valve agrees with this), and at the same time CPU speed and VPU speed have a much, much larger effect on the performance of a game. Obviously in this situation it would not make sense for a developer to put in a large amount of work for a very small gain, when they can have performance increased far more dramatically, and with far less outlay on their part, simply by telling people to buy faster graphics cards.
 
Bouncing Zabaglione Bros. said:
And yet your renderer is what your game spends 90 percent of its time running,

While this is an oversimplification, it's not that far off from reality.
It's certainly true for fast CPU and videocard combinations.

The faster your CPU, the smaller the share of frame time the game/sound/AI code will need, because its cost is roughly constant per unit of time.
With a fast video card the framerate "wants" to go up, which raises the CPU cost (if the headroom is available), since the rendering CPU cost is per frame.

And it's not unlikely that 50% of that time is spent in the driver and the runtime.
The driver is single-threaded - there's nothing the game programmer can do about that!
The runtime is single-threaded by default - you can set it to multithreaded, but that will increase overhead...
Rendering calls cannot be intermixed anyway - that would produce unexpected rendering results.
Parallelizing the engine code and the driver/runtime code would result in a serious memory overhead and latency hit.
 
Hyp-X said:
Bouncing Zabaglione Bros. said:
And yet your renderer is what your game spends 90 percent of its time running,

While this is an oversimplification, it's not that far off from reality.
It's certainly true for fast CPU and videocard combinations.

It was a simplification I made for Oompa Loompa's benefit. ;) IIRC, a couple of years back John Carmack wrote a .plan file or Slashdot posting on optimisation and multiple processors, and he pointed out what a waste of time it would be to optimise aspects of his engine that were only a small part of the overall game cycle.

He took the same view as above. There is no point in expending large effort hand-optimising in assembler things that are only going to give very small overall benefits, because those routines only take a very small amount of the game cycle. Far and away, Carmack considered rendering to be the thing that needed to be optimised as much as possible, because this is what his engine spent 90 percent of its time doing. Small gains there had a significant influence on performance. Large gains on other routines that take one percent of the game cycle make an imperceptible difference.

Now that we have uber-graphics hardware, the difference you gain from optimising for niche areas like HT is tiny compared to the difference you get from upgrading to a true DX9-class graphics card like R360. Of course this is because 3D games spend so much time doing 3D that the strength of the graphics card, and the ability of the CPU to feed it, is what really counts, rather than being able to spin the game functions out to different threads.
 
Bouncing Zabaglione Bros. said:
Are you suggesting that developers should spend significant amounts of time and money, increasing development time and costs in order to give a very small gain for a very small part of the market?
I think you're underestimating the numbers of HT-enabled chips Intel ships/ed over the past 2 years and their percentage in the gaming market.

Bouncing Zabaglione Bros. said:
What you don't seem to understand is that supporting 64 bit is relatively easy - for the most part you just need to recompile your code, so unless you've written something really badly, or very close to particular hardware, its realtively little work, for a payback of 30-50 percent depending on the app.
Where do 64-bit integer ops or the ability to address more than 4 GB of RAM benefit game(r)s?

Code doesn't only have to be written but also maintained and supported; a separate 64-bit version means higher costs for questionable gains. Do you agree?

Where do your 30-50% improvements come from?

cu

incurable
 
incurable said:
I think you're underestimating the numbers of HT-enabled chips Intel ships/ed over the past 2 years and their percentage in the gaming market.

Then I guess that puts me in the same boat as Epic, ID, Valve and just about every other developer out there. I think you are overestimating the gain that you can get with HT in a game for the amount of work you have to put into it.

If HT is so good and so significant in the gaming market, why haven't we seen any games trumpeting HT support, or talking about the massive gains made by using it? If there are loads of HT chips out there in gaming rigs, it's been out for two years, and it gives such a boost for so little work, why aren't the game developers using it?

incurable said:
Code doesn't only have to be written but maintained and supported, a seperate 64-bit version means higher costs for questionable gains. Do you agree?

The gains are not questionable - they result in a speedup, not a slowdown as in many cases with HT (see the Valve quote above again). Support is much more transparent, with much of the work being done by the compiler. A separate 64-bit version is more expensive than not doing a 64-bit version at all, but is still way, way cheaper than doing an HT-enabled, multithreaded design.

64-bit is much cheaper to build and maintain than a true multithreaded application, which needs a ground-up multithreading design implemented for everything in order to be really multithreaded.

64 bit is more than just large file support, the processor pipe is effectively "wider" and thus faster, as the Athlon 64 reviews have shown.

incurable said:
Where do your 30-50% improvements come from?

See the (p)reviews around the net that have focussed on gaming, particularly the ones that have used the advanced version of 64bit Windows.

Epic reported a 30 percent increase in UT2K3 just from compiling to 64bit last summer. I would expect to see more as better compilers arrive.

Show me where Intel's HT does better with less work from the developer.
 
Bouncing Zabaglione Bros. said:
incurable said:
I think you're underestimating the numbers of HT-enabled chips Intel ships/ed over the past 2 years and their percentage in the gaming market.
Then I guess that puts me in the same boat as Epic, ID, Valve and just about every other developer out there. I think you are overestimating the gain that you can get with HT in a game for the amount of work you have to put into it.

If HT is so good and so significant in the gaming market, why haven't we seen any games trumpeting HT support, or talking about the massive gains made by using it? If there are loads of HT chips out there in gaming rigs, it's been out for two years, and it gives such a boost for so little work, why aren't the game developers using it?
I say something about the number of CPUs shipped with HT enabled (likely double-digit millions to this day) and you answer commenting on their performance. *hmmm* Ok.

Bouncing Zabaglione Bros. said:
incurable said:
Code doesn't only have to be written but also maintained and supported; a separate 64-bit version means higher costs for questionable gains. Do you agree?

The gains are not questionable - they result in a speedup, not a slowdown as in many cases with HT (see the Valve quote above again). Support is much more transparent, with much of the work being done by the compiler. A separate 64-bit version is more expensive than not doing a 64-bit version at all, but is still way, way cheaper than doing an HT-enabled, multithreaded design.

64-bit is much cheaper to build and maintain than a true multithreaded application, which needs a ground-up multithreading design implemented for everything in order to be really multithreaded.
I'll take that for a 'yes'. (cause actually, turning a 32-bit app into a 64-bit one can slow it down just as well as multithreading can)

Bouncing Zabaglione Bros. said:
64 bit is more than just large file support, the processor pipe is effectively "wider" and thus faster, as the Athlon 64 reviews have shown.
"large file support"? Ahem, 64-bit in this context is about the width of the integer pipeline(s) and support for a larger address space.

The Athlon64 reviews all over the web were mostly showing that it's a damn good 32-bit/x86 processor, precious few did even try a 64-bit OS, much less benchmark (not-provided-by-AMD) 64-bit applications.

Bouncing Zabaglione Bros. said:
incurable said:
Where do your 30-50% improvements come from?
See the (p)reviews around the net that have focussed on gaming, particularly the ones that have used the advanced version of 64bit Windows.
What game is available in a x86-64 ....errrr... AMD64 version? Who ran it? What were the results?

Bouncing Zabaglione Bros. said:
Epic reported a 30 percent increase in UT2K3 just from compiling to 64bit last summer. I would expect to see more as better compilers arrive.
Wait, let's see what Mr. Sweeney himself said on the topic according to firingsquad.com:
In an interview with firingsquad.com: "I think you'll probably see a clock-for-clock improvement over Athlon XP of around 30% in applications like Unreal that do a significant amount of varied computational work over a large dataset."
Aha, he's talking Athlon XP vs. Athlon 64, not Athlon 64 32-bit vs. Athlon 64 64-bit. There are some peculiar differences between the K7 and K8 architectures and their implementations, wouldn't you agree? Heck, even the latter comparison couldn't be used as a pure example of the benefits of 64-bit vs. 32-bit computing, because with -err- AMD64 you gain twice the number of GPRs and twice the number of SSE2 registers in addition to the wider integer pipe and larger address space.

cu

incurable
 
incurable said:
I say something about the number of CPUs shipped with HT enabled (likely double-digit millions to this day) and you answer commenting on their performance. *hmmm* Ok.

How many HT units you ship is irrelevant if no one is programming for HT, don't you think? In the context of games, is anyone doing anything significant with HT?

incurable said:
I'll take that for a 'yes'. (cause actually, turning a 32-bit app into a 64-bit one can slow it down just as well as multithreading can)

Sure, and with a lot more effort you can cause the same slowdown with HT if you make a crappy port. The point I'm making is that you have to expend a lot more effort to get less result with HT than you will with 64-bit. I expect to see 64-bit apps arrive quite quickly next year when MS ships their 64-bit Windows. One of the reasons is that converting to 64-bit is relatively painless for the developer compared to writing a multithreaded game. That's why, after two years and "millions" of HT processors shipped, we're still not seeing multithreaded games.

What's your opinion on why we haven't got any significant multithreaded games for HT if it's so good? Why does Valve publicly state it's a lot of effort for little return, and comes one step up from distributed grid computing as far as they are concerned, for one of the biggest games of the last few years?
 
When I got my dual Xeon machine last December, hyperthreading was only available in the Xeons, not yet in the P4s.

I'm not saying HT is beneficial, but the argument that HT processors are available and thus we would see them taken advantage of in games doesn't make sense to me. Especially since AMD has a piece of the gaming pie.

...or am I remembering wrong? What CPU introduced HT to the gaming market, and when?
 