In both cases the work done by the application is almost the same, but the way CPU usage is reported makes it appear that less CPU is being utilized under HT (for example, a single busy thread on a CPU that exposes two logical processors shows up as roughly 50% total utilization even though the physical core is saturated).
mczak said:You are of course correct that the installed systems today don't have HT. However, if you're talking about developing new games (not almost-finished ones like Doom III or HL2), then you are also talking about two years (?). So, if you develop a new game now, HT is something you might at least consider (though apparently, Gabe Newell doesn't think that the potential gains are worth the increased development effort).
Entropy said:Please don't use the term "hyperthreading" when you actually mean "multithreading". They are very distinct terms.
Ah yes, sorry for that. What I actually meant is that a developer might consider writing the game multithreaded because there will be quite a few HT-enabled processors out there.
Bouncing Zabaglione Bros. said:The problem is that in two years' time, the benefits that you get from supporting multithreading are probably going to be far, far outweighed by the benefits that you get from faster CPUs (including the adoption of 64-bit CPUs) and faster VPUs.
What is the basis for your claim?
The speed of course will depend on the type of application. However, looking at the future direction of many CPU designers, it is clear that more parallelism is the future. Game developers had better start designing their games from the ground up for full multithreading, or else they will be the last on the block.
Developers are not going to spend 50 percent (or whatever) more time doing a real multithreaded design if it only gets them 5 percent more speed for the small percentage of the market with a 3 GHz hyperthreading P4.
From Me:
I was wondering,
Will Half-Life 2 support Hyper-Threaded CPUs?
In theory, shouldn't it take a lot of load off the system?
For example, the first processor does all the physics, while the second does some other tasks, like AI, 3D sound, etc.
Any Thoughts?
From Gabe:
Hyperthreaded CPUs attempt to extract thread-level parallelism, as opposed to traditional pipelined architectures, which attempt to take advantage of instruction-level parallelism. Hyperthreading can be somewhat unpredictable in terms of the performance impact, as you can, in some cases, run slower.
Implementing and maintaining a "deeply" multi-threaded version of Source would be a pain (i.e. multi-threading the renderer). Implementing a hacky version (e.g. having a discrete physics thread or running the client and server in different threads) is something we may do depending upon how much bandwidth we have before we ship. Right now we don't get nearly as much bang for the buck working on hyperthreading as we do on other optimizations. That may change as we run out of things to optimize.
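To make the "hacky version" idea concrete, here is a minimal sketch of a discrete physics thread: a worker stepping a tiny simulation at a fixed timestep while the main thread only reads published snapshots. This is a generic illustration of the pattern, not Source engine code; the type and timestep are invented for the example.
```cpp
// Generic illustration of a dedicated physics thread; not Valve's code.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

struct PhysicsState { double y = 100.0, vy = 0.0; };  // a falling object

int main() {
    PhysicsState shared;            // last completed physics step
    std::mutex stateMutex;          // guards 'shared'
    std::atomic<bool> quit{false};

    // Physics thread: fixed 10 ms timestep, independent of the frame rate.
    std::thread physics([&] {
        PhysicsState local;
        while (!quit.load()) {
            local.vy -= 9.81 * 0.01;      // integrate gravity over 10 ms
            local.y  += local.vy * 0.01;
            {
                std::lock_guard<std::mutex> lock(stateMutex);
                shared = local;           // publish a consistent snapshot
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });

    // "Main" thread: rendering / AI / sound would live here; it only reads snapshots.
    for (int frame = 0; frame < 30; ++frame) {
        PhysicsState snap;
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            snap = shared;
        }
        std::printf("frame %2d: y = %.2f\n", frame, snap.y);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    quit = true;
    physics.join();
    return 0;
}
```
On an HT CPU the two threads share one physical core, so the gain is bounded; the same code scales further on a real dual-processor or multi-core machine, which is the "benefits in multiprocessor environments as well" point in the taxonomy below.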
64-bits, in contrast, is a one-time cost and is fairly simple to take advantage of. It's a huge win for tools as it not only gets more work done per instruction, but it also gets us past the current memory limitations, which are a problem for us today on tools.
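As an illustration of the memory-limit point (not Valve's tool code, and the 6 GiB figure is invented): a 32-bit process cannot even address a buffer this large, while a 64-bit build can map it as an ordinary allocation if the machine has the RAM.
```cpp
// Illustration of the 64-bit address-space point; figures are made up.
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::printf("pointer size: %zu bits\n", sizeof(void*) * 8);

    // 6 GiB scratch buffer (think lighting data for a big map).
    // A 32-bit size_t cannot even represent this number, let alone map it.
    const std::size_t want = static_cast<std::size_t>(6) << 30;
    try {
        std::vector<unsigned char> scratch(want);
        std::printf("got %zu bytes in one contiguous block\n", scratch.size());
    } catch (const std::bad_alloc&) {
        std::printf("allocation of %zu bytes failed\n", want);
    }
    return 0;
}
```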
Distributed computing is harder than hyperthreading but it has the potential to increase performance by a huge amount (8X on our tools) as opposed to hyperthreading (30%). All of our tools are going to a distributed approach.
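A rough sketch of what "going distributed" can look like for a tool at its simplest: static partitioning of independent work units across build machines. The host names, face counts, and the dispatchToHost() stub are invented stand-ins, not Valve's pipeline; a real system would ship each chunk to a remote host and gather the results back.
```cpp
// Sketch of distributing tool work (e.g. lighting) across machines; all names invented.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct WorkUnit { int firstFace, lastFace; };  // half-open range of map faces

// Stand-in for "send this chunk to a build machine and wait for its results".
static void dispatchToHost(const std::string& host, const WorkUnit& unit) {
    std::printf("%s: light faces [%d, %d)\n", host.c_str(), unit.firstFace, unit.lastFace);
}

int main() {
    const int totalFaces = 100000;
    const std::vector<std::string> hosts = {"build01", "build02", "build03", "build04",
                                            "build05", "build06", "build07", "build08"};

    // With 8 machines and mostly independent faces, the best case is close to
    // the 8x figure quoted above.
    const int perHost = (totalFaces + static_cast<int>(hosts.size()) - 1)
                        / static_cast<int>(hosts.size());
    for (std::size_t i = 0; i < hosts.size(); ++i) {
        WorkUnit unit{static_cast<int>(i) * perHost,
                      std::min(totalFaces, static_cast<int>(i + 1) * perHost)};
        dispatchToHost(hosts[i], unit);
    }
    return 0;
}
```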
So the taxonomy looks like this:
- general algorithmic optimization (general good thing to do)
- DX9 optimization (big gains, long term direction)
- 64-bits (not that hard, solves memory problem as well as performance gains)
- hyperthreading (hard initial cost, ongoing code maintenance cost, limited unpredictable performance gains, benefits in multiprocessor environments as well)
- distributed computing (hardest to do, biggest potential gains, great for tools, may be great for servers, not sure how it works with clients)
Hope this makes sense. Jay, any corrections?
Gabe
Bouncing Zabaglione Bros. said:When you look at a single-player game, servicing one person that is running (for the most part) tasks which do not inherently benefit from multithreading, I can't see you getting anywhere near the benefit from parallelism on an HT-enabled (i.e. fake dual-processor) CPU that you will from a 5 GHz Athlon FX and an R500 in a couple of years' time.
I'm forced to ask what your point is. Are you arguing that the appropriate strategy for a developer is to do nothing, and wait for hardware manufacturers to optimize their performance? The issue at hand is what optimizations a developer can make that will increase their ability to exploit existing and near-future hardware. Investing effort in threading offers only a modest payoff in the short term, but will become increasingly important.
Oompa Loompa said:I understand that you are a "64 bit advocate" of some sort (i.e. an AMD fan), but I think you're kidding yourself. There are significant performance advantages to making AMD64 versions of commercial software, but developers are businessmen; until the installed base of 64-bit CPUs is large, it would be financially unwise to work on dedicated 64-bit code. The trivial amount of announced support for 64-bit in the gaming industry reflects this. Hopefully, this will slowly change - but probably not in the next two years.
crystalcube said:btw "64 bit advocate" does not imply "AMD Fan". I think many developers, especially Epic, have emphasized their need for a 64-bit platform; I guess they are all AMD fans going by your definition, and Intel too is an AMD fan for having 64-bit CPUs.
Don't be silly. Epic is not being emotional - they have a vested financial interest in endorsing 64-bit processing. They feel ready to exploit it with their future products. I'm pretty sure that "Bouncing Zabaglione Bros." is not in the same position.
Oompa Loompa said:I think you'll find that there is a great deal of potential parallelism in a modern game. The types of applications we're describing are essentially life simulators, with simulated individuals making independent decisions, and being independently acted on by physical rules. The renderer is a different matter, but then, that's a problem for the video processor.
Oompa Loompa said:HT is a pretty limited tool for exploiting this parallelizability, but it has the huge advantage of being mainstream. From a developer's perspective, it is a good enough reason to start writing threaded engines, despite the fact that the big payoff won't come until multi-core CPUs go mainstream.
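For what a minimal "threaded engine" loop of this kind might look like, here is a generic illustration (invented Entity type, not from any shipping engine): independent per-entity updates split across however many hardware threads the machine reports, so the same code sees a small win on an HT CPU today and a larger one on real multi-core parts later.
```cpp
// Generic sketch of spreading independent entity updates over hardware threads.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Entity { float x = 0.0f, vx = 1.0f; };

static void updateRange(std::vector<Entity>& ents, std::size_t begin,
                        std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        ents[i].x += ents[i].vx * dt;   // each entity's update is independent
}

int main() {
    std::vector<Entity> entities(100000);
    const float dt = 0.016f;

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;      // e.g. an HT P4 reports 2 logical CPUs

    std::vector<std::thread> pool;
    const std::size_t chunk = (entities.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = std::min(entities.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(updateRange, std::ref(entities), begin, end, dt);
    }
    for (auto& t : pool) t.join();

    std::printf("updated %zu entities on %u threads\n", entities.size(), workers);
    return 0;
}
```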
Bouncing Zabaglione Bros. said:And yet your renderer is what your game spends 90 percent of its time running,
While this is an oversimplification, it's not that far off from reality.
It's certainly true for fast CPU and videocard combinations.
Bouncing Zabaglione Bros. said:Are you suggesting that developers should spend significant amounts of time and money, increasing development time and costs, in order to give a very small gain for a very small part of the market?
I think you're underestimating the number of HT-enabled chips Intel has shipped over the past two years and their percentage in the gaming market.
Bouncing Zabaglione Bros. said:What you don't seem to understand is that supporting 64 bit is relatively easy - for the most part you just need to recompile your code, so unless you've written something really badly, or very close to particular hardware, it's relatively little work, for a payback of 30-50 percent depending on the app.
Where do 64-bit integer ops or the ability to address more than 4 GB of RAM benefit game(r)s?
Bouncing Zabaglione Bros. said:Then I guess that puts me in the same boat as Epic, ID, Valve and just about every other developer out there. I think you are overestimating the gain that you can get with HT in a game for the amount of work you have to put into it. If HT is so good and so significant in the gaming market, why haven't we seen any games trumpeting HT support, or talking about the massive gains made by using it? If there are loads of HT chips out there in gaming rigs, it's been out for two years, and it gives such a boost for so little work, why aren't the game developers using it?
I say something about the number of CPUs shipped with HT enabled (likely double-digit millions to this day) and you answer by commenting on their performance. *hmmm* Ok.
incurable said:Code doesn't only have to be written but maintained and supported; a separate 64-bit version means higher costs for questionable gains. Do you agree?
Bouncing Zabaglione Bros. said:The gains are not questionable - they result in a speedup, not a slowdown as in many cases with HT (see the Valve quote above again). Support is much more transparent, with much of the work being done by the compiler. A separate 64-bit version is more expensive than not doing a 64-bit version at all, but is still way, way cheaper than doing an HT-enabled, multithreaded design. 64-bit is much cheaper to build and maintain than a true multithreaded application, which needs a ground-up multithreading design implemented for everything in order to make it really multithreaded.
I'll take that for a 'yes'. (Because actually, turning a 32-bit app into a 64-bit one can slow it down just as well as multithreading can.)
Bouncing Zabaglione Bros. said:64 bit is more than just large file support, the processor pipe is effectively "wider" and thus faster, as the Athlon 64 reviews have shown.
"Large file support"? Ahem, 64-bit in this context is about the width of the integer pipeline(s) and support for a larger address space.
incurable said:Where do your 30-50% improvements come from?
Bouncing Zabaglione Bros. said:See the (p)reviews around the net that have focussed on gaming, particularly the ones that have used the advanced version of 64-bit Windows.
What game is available in an x86-64 ...errrr... AMD64 version? Who ran it? What were the results?
Bouncing Zabaglione Bros. said:Epic reported a 30 percent increase in UT2K3 just from compiling to 64bit last summer. I would expect to see more as better compilers arrive.
Wait, let's see what Mr. Sweeney himself said on the topic, according to his interview with firingsquad.com:
Tim Sweeney said:I think you'll probably see a clock-for-clock improvement over Athlon XP of around 30% in applications like Unreal that do a significant amount of varied computational work over a large dataset.
Aha, he's talking Athlon XP vs. Athlon 64, not Athlon 64 32-bit vs. Athlon 64 64-bit. There are some peculiar differences between the K7 and K8 architectures and their implementations, wouldn't you agree? Heck, even the latter couldn't be used as a pure example of the benefits of 64-bit vs. 32-bit computing, because with -err- AMD64 you gain twice the number of GPRs and twice the number of SSE2 registers in addition to the wider integer pipe and larger address space.
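To illustrate the "wider integer pipe" point with a toy example (invented, not from any game): 64-bit integer arithmetic like the multiply below maps to single native instructions on an x86-64 build, but has to be synthesized from 32-bit halves on plain x86.
```cpp
// Toy 64-bit integer workload (FNV-1a style mixing); illustration only.
#include <cstdint>
#include <cstdio>

int main() {
    std::uint64_t h = 1469598103934665603ull;   // 64-bit FNV-1a offset basis
    for (std::uint64_t i = 0; i < 1000000; ++i) {
        h ^= i;
        h *= 1099511628211ull;                  // 64-bit multiply each iteration
    }
    std::printf("hash = %llu\n", static_cast<unsigned long long>(h));
    return 0;
}
```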