Intel to buy Havok

I guess it will if Havok gets integrated into Intel chipsets in the future.
 
I'm willing to speculate it means more trouble for AMD than for PhysX, since the latter has big-title UE3 games to lean back on while Havok runs on CPUs.

And with Intel owning the company, you know whose CPUs they'll optimize for...
Peace.
 
What does this mean for Havok FX's future (physics on the GPU)?
It means it'll be renamed "Larrabee FX", most likely! ;)
PR said:
It will add to Intel's visual computing and graphics efforts, while continuing to develop products for all computing platforms, the company said.
And despite all the 'finally, this will kill Ageia' comments this will generate, I think otherwise. I think it positions Ageia splendidly to be acquired by Intel's competitors. At least, that's what they should do if they know what's best for them...
 
Instead of Havok's software being tailored towards Intel's hardware, how about Intel tailoring a mini-core towards the middleware :?:
 
It means it'll be renamed "Larrabee FX", most likely! ;)
And despite all the 'finally, this will kill Ageia' comments this will generate, I think otherwise. I think it positions Ageia splendidly to be acquired by Intel's competitors. At least, that's what they should do if they know what's best for them...

Yep. Next PR = "Nvidia buys Ageia".
 
Game over for non-proprietary GPU physics. For NV & AMD. Fuck you Intel.

I would not call Havok FX "non-proprietary". If you really want something independent you should hope for "DirectPhysics" or "OpenPL". But I am not sure that is the right direction. I am more hoping for a new "vector unit API". It should provide independent access to SSE, AltiVec, GPUs or whatever else can do vector-based data processing. That would help in multiple places.
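To make the idea concrete, here is a rough sketch of what such a backend-neutral "vector unit API" could look like - every name and interface here is invented for illustration, not anything Havok or Intel actually ship:

Code:
// Hypothetical "vector unit API": one interface, many backends (SSE, AltiVec, GPU, ...).
// All names are made up for illustration.
#include <cstddef>

struct VectorBackend {
    virtual ~VectorBackend() {}
    // c[i] = a[i] + b[i] for i in [0, n)
    virtual void add(const float* a, const float* b, float* c, std::size_t n) = 0;
    // dot product - the bread-and-butter operation of physics and graphics alike
    virtual float dot(const float* a, const float* b, std::size_t n) = 0;
};

// Plain scalar fallback; an SSE, AltiVec or GPU implementation would expose the same interface.
struct ScalarBackend : VectorBackend {
    void add(const float* a, const float* b, float* c, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }
    float dot(const float* a, const float* b, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }
};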
 
Good.

Why would you want your GPU to waste cycles on physics instead of drawing graphics?

Games in the future aren't going to become LESS graphically intensive, you know...


Nooo, don't fuck them. Thank them instead! :cool:
Peace.

So the 100+ FPS current GPUs can deliver with insane detail levels isn't enough for you? I'd rather sacrifice a bit of FPS (while maintaining playability) and have more realistic physics/physics effects.
 
The obvious first step is that Intel helps Havok really optimize its products for Penryn, or more specifically SSE4. How useful is SSE4 for physics calculations?

Per
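On the SSE4 question above: the most physics-relevant addition in SSE4.1 is the DPPS/DPPD packed dot-product instruction, which collapses the multiply-and-sum at the heart of vec3/vec4 math into a single instruction. A minimal sketch (the dot4 helper name is just for illustration; compile with SSE4.1 enabled):

Code:
// SSE4.1's DPPS does a 4-wide multiply-and-sum in one instruction.
#include <smmintrin.h>   // SSE4.1 intrinsics

// Dot product of two 4-component vectors (zero the w lanes for vec3 use).
static inline float dot4(__m128 a, __m128 b)
{
    // 0xF1: multiply all four lanes, sum them, write the result to lane 0.
    return _mm_cvtss_f32(_mm_dp_ps(a, b, 0xF1));
}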
 
So the 100+ FPS current GPUs can deliver with insane detail levels isn't enough for you? I'd rather sacrifice a bit of FPS (while maintaining playability) and have more realistic physics/physics effects.

The problem is you're not going to see 100 fps with every bit of eyecandy enabled from the likes of Far Cry 2, Crysis, Alan Wake, UT3, etc. There is never a surplus of graphics power available with new games, because the developers use it all up with better graphics, and players use it up with higher resolutions. Even now, you can see that people can push Crossfire/SLI setups towards the edge of playability.

Sure, you can elect to have worse graphics and do some physics on the GPU instead, but it's far more likely that there will be a free core in a quad core processor that can be dedicated to physics than there will be "spare" capacity on the GPU.
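As a rough sketch of that "spare core" model - a fixed-timestep physics loop running on its own thread while the render loop keeps drawing; the stepSimulation call is a placeholder, not an actual Havok API:

Code:
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running(true);

void physics_loop()
{
    const std::chrono::milliseconds step(16);                // ~60 Hz fixed timestep
    auto next = std::chrono::steady_clock::now();
    while (running.load()) {
        // stepSimulation(1.0f / 60.0f);                     // placeholder for the real physics call
        next += step;
        std::this_thread::sleep_until(next);                 // the OS schedules this thread on a free core
    }
}

int main()
{
    std::thread physics(physics_loop);
    std::this_thread::sleep_for(std::chrono::seconds(1));    // stand-in for the game/render loop
    running = false;
    physics.join();
}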
 
The problem is you're not going to see 100 fps with every bit of eyecandy enabled from the likes of Far Cry 2, Crysis, Alan Wake, UT3, etc.

Every video I've seen of the aforementioned games is very playable on current hardware with plenty of graphical detail. Granted, nobody outside the studios knows the exact FPS, the eye candy settings, or the hardware involved, but it's fair to assume 40+ fps with max details at a relatively high-res widescreen resolution on current hardware.

There is never a surplus of graphics power available with new games, because the developers use it all up with better graphics, and players use it up with higher resolutions. Even now, you can see that people can push Crossfire/SLI setups towards the edge of playability.

Not unless you're running 8xAA or higher on a 30" screen :LOL:

Sure, you can elect to have worse graphics and do some physics on the GPU instead, but it's far more likely that there will be a free core in a quad core processor that can be dedicated to physics than there will be "spare" capacity on the GPU.

GPU physics would most likely require the use of an extra GPU (at least for best results), thus the push for 3 or even 4 PEG slots in enthusiast mobos.
 
The problem is you're not going to see 100 fps with every bit of eyecandy enabled from the likes of Far Cry 2, Crysis, Alan Wake, UT3, etc. There is never a surplus of graphics power available with new games, because the developers use it all up with better graphics, and players use it up with higher resolutions.
This argument, while common, is a complete fallacy. You always want to do the computation in the most efficient place possible (perf/$, perf/W, etc.). That may well be a GPU for physics nowadays, perhaps even a low-to-mid range one.

Thus the argument is simple, as it always has been: if you want more power, add a more powerful GPU (or more GPUs). The *only* case in which this doesn't work is if you already have 3 8800 Ultras (or whatever the maximum a motherboard can take nowadays), and I severely doubt you're going to sacrifice much graphical quality - if any - on such a setup to do the minimal computation that physics requires. A single 8800 can already happily simulate tens of thousands of objects AND render them at 60+ fps.
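For a sense of why those object counts are cheap on a GPU: the per-object update is completely independent, so it maps one-to-one onto GPU threads (or onto SIMD lanes on a CPU core). A toy Euler integrator in plain C++, purely to illustrate the data-parallel shape - not anything out of Havok FX:

Code:
#include <cstddef>
#include <vector>

struct Particles {
    std::vector<float> px, py, pz;   // positions, structure-of-arrays layout
    std::vector<float> vx, vy, vz;   // velocities
};

void integrate(Particles& p, float dt)
{
    const float gy = -9.81f;                 // gravity along y
    for (std::size_t i = 0; i < p.px.size(); ++i) {
        p.vy[i] += gy * dt;                  // accumulate gravity
        p.px[i] += p.vx[i] * dt;             // explicit Euler position update
        p.py[i] += p.vy[i] * dt;
        p.pz[i] += p.vz[i] * dt;
    }
}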

[...] but it's far more likely that there will be a free core in a quad core processor that can be dedicated to physics than there will be "spare" capacity on the GPU.
Even though these sorts of imbalances may exist right now, that's no reason for them to be desirable or expected. If there are "spare cores" available on the CPU, then you spent too much on your CPU and should have got a better GPU :)
 
Even though these sorts of imbalances may exist right now, that's no reason for them to be desirable or expected. If there are "spare cores" available on the CPU, then you spent too much on your CPU and should have got a better GPU :)


But can we trust the majority market to go with the faster GPU, nevermind adding more? That is, wouldn't it be more likely to find a multi-CPU setup versus SLI :?:
 
But can we trust the majority market to go with the faster GPU, nevermind adding more? That is, wouldn't it be more likely to find a multi-CPU setup versus SLI :?:
Right now, maybe, but that's largely ignorance (and the immaturity of SLI/Crossfire). I fully expect that to change, particularly with AMD and Intel both looking towards a more formal add-on "accelerator" model for their CPUs.
 