Intel to buy Havok

This argument, while common, is a complete fallacy. You always want to do the computation in the most efficient place possible (perf/$, perf/W, etc.). That may well be a GPU for physics nowadays, perhaps even a low-to-mid-range one.


In theory, that's true, but in practice that's not what's happening. Game devs are not doing their physics on GPUs; they are doing it on CPUs, and even that they are not doing particularly well. It seems they are having trouble multithreading their code on CPUs as it is, let alone moving it to GPUs.
 
That's true in the short term certainly (conservatism always rules in the short term! :)), but not in the mid-to-long term, depending on the needs of the particular game. In those time frames I fully expect that people will get used to parallelism as the norm rather than the exception, and that languages, libraries and software platforms will take care of the details of targeting the various architectures.
 

I can point to how long it's taking devs to take advantage of DX9 (let alone DX10), Hyper-Threading, multi-core CPUs, stream processing on GPUs, etc., and categorically state it will take far longer than either of us would like.

I wouldn't be at all surprised to find in a couple of years' time that we're still trying to count the few games that do decent physics on the fourth CPU core, let alone the even smaller subset doing it on the GPU.
 
I tend to agree with you, and while we can certainly discuss the time frame ad nauseam, I think we can agree that it will happen eventually. Personally, when I do development I'm most interested in being as forward-looking as is reasonably possible, although I certainly understand that game developers do not have the same luxury.

The point that I was trying to make is that GPU/SPU/Larrabee/Torrenza/Terascale/whatever massively-parallel physics is certainly desirable moving forward, and there's no real need or desire to have full x86 cores with shared, synchronized caches for these sorts of data-parallel applications. Thus IMHO "massively parallel" physics is a more compelling direction for future R&D than getting physics up and running on our current multi-core CPU architectures.
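To make the data-parallel point concrete, here's a minimal sketch (my own illustration with made-up names, not anyone's actual engine code) of the kind of inner loop meant here. Each particle reads and writes only its own slot, so there's no shared mutable state to keep coherent, and the loop maps one-to-one onto GPU threads, SPU batches, or Larrabee-style cores with no cache synchronization at all:

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* Explicit-Euler velocity/position update. Every iteration is independent
   of every other one, which is exactly what makes this stream-friendly. */
void integrate(Vec3 *pos, Vec3 *vel, const Vec3 *accel,
               size_t count, float dt)
{
    for (size_t i = 0; i < count; ++i) {
        vel[i].x += accel[i].x * dt;
        vel[i].y += accel[i].y * dt;
        vel[i].z += accel[i].z * dt;
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}
```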
 
When I read that, I think Gates will try even harder to convince Intel to be part of the next Xbox.
I don't think Intel will let MS own hardware IP, but they could join forces to push a gaming/home-server device.
Back to my crystal ball... lol
 
How useful is SSE4 for physics calculations?

SSE4? Quite useful, I'd say. In particular, it brings the PC's instruction set closer to the specialized instruction sets of the current consoles, so it makes porting optimized console code easier, e.g. parsing compressed collision data structures with all the integer handling in xmm regs. Great for cross-platform development.
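As a hedged sketch of what that integer handling can look like (the QuantizedAabb layout is invented for illustration, not actual console or Havok code), SSE4.1 adds instructions like pmovsxwd and ptest that make walking quantized collision data much more direct in xmm registers than on pre-SSE4 PCs:

```c
#include <smmintrin.h>  /* SSE4.1 intrinsics */
#include <stdint.h>

/* Hypothetical compressed BVH node: bounds quantized to 16-bit integers. */
typedef struct {
    int16_t min_xyz[4];  /* x, y, z, padding */
    int16_t max_xyz[4];
} QuantizedAabb;

/* Returns nonzero if the node's box overlaps the (already quantized,
   already sign-extended) query box. */
int aabb_overlaps(const QuantizedAabb *node,
                  __m128i query_min, __m128i query_max)
{
    /* SSE4.1 pmovsxwd: sign-extend packed 16-bit bounds to 32-bit lanes. */
    __m128i nmin = _mm_cvtepi16_epi32(_mm_loadl_epi64((const __m128i *)node->min_xyz));
    __m128i nmax = _mm_cvtepi16_epi32(_mm_loadl_epi64((const __m128i *)node->max_xyz));

    /* A lane becomes nonzero wherever there is a separating axis. */
    __m128i sep = _mm_or_si128(_mm_cmpgt_epi32(nmin, query_max),
                               _mm_cmpgt_epi32(query_min, nmax));

    /* Mask off the padding lane, then SSE4.1 ptest checks all-zero in one op. */
    sep = _mm_and_si128(sep, _mm_set_epi32(0, -1, -1, -1));
    return _mm_testz_si128(sep, sep);  /* 1 => no separating axis => overlap */
}
```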

I agree; I'd rather see physics on beefed-up CPUs than on GPUs. GPU physics is a marketing spoiler tactic to squash Ageia.

EDIT: (the comment also partly applies to SSE3/SSSE3, from the viewpoint of a port not being able to assume those as minimum spec)
 
Now why do you think GPU physics is any less desirable than CPU physics? There's a lot more theoretical power in modern GPUs than in CPUs, and that power can be used fairly efficiently to solve physics problems (which are parallel-friendly). So regardless of the marketing involved, why would you prefer to run physics on the CPU?
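As a hedged illustration of "parallel-friendly" (a toy of mine, not a claim about any particular engine): constraint solvers can be structured Jacobi-style, where each contact reads only the previous iteration's velocities and writes only its own impulse, so every contact in a pass can run on its own GPU thread, and only the short gather that applies the impulses needs coordination:

```c
#include <stddef.h>

/* Toy 1-D contact between two bodies; names are invented for illustration. */
typedef struct { int body_a, body_b; } Contact;

/* One Jacobi pass: reads last iteration's velocities, writes per-contact
   impulses. No contact touches another's output, so the loop parallelizes. */
void jacobi_pass(const Contact *c, size_t num_contacts,
                 const float *vel_in, float *impulse_out, float stiffness)
{
    for (size_t i = 0; i < num_contacts; ++i) {
        float rel = vel_in[c[i].body_b] - vel_in[c[i].body_a];
        impulse_out[i] = (rel < 0.0f) ? -stiffness * rel : 0.0f;
    }
    /* A separate pass then scatters impulse_out back onto body velocities. */
}
```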
 
I am surprised MS hasn't incorporated a physics API into DirectX.

That could give the industry a more focused and steady direction to work with.
 
Has anyone noticed that the Intel Ice Fighters benchmark (designed to show off quad cores) installs the Ageia drivers?
 

I believe MS has been investigating (with some cooperation from Havok and the graphics IHVs) ways to include DirectPhysics in DirectX.

I was under the assumption that they would eventually license something from Havok for use in DirectX.

Personally, now that I have a couple of X1k GPUs gathering dust in a drawer here, I'd absolutely LOVE some meaningful support for physics on the GPU. Since I have no plans to ever go back to SLI/Crossfire unless they come up with a more elegant solution to multi-GPU rendering, I'm already set up for graphics-based physics processing.

I'd have my HD 2900 XT render the graphics while an old X1800 XT handles the physics. It would be an ideal solution for me, rather than using CPU cycles that I could better spend on other tasks.

Regards,
SB
 