You mean compared with other PhysX (TWIMTBP) titles?
No, they just wanted to make Batman fancier, with more PhysX effects than what exists in the shipping game. Time just ran out.
Chris
You mean compared with other PhysX (TWIMTBP) titles?
When did AMD drop Havok? Last I heard, that was still going quite strong.
-Charlie
Got a good explanation for why Physx doesn't even max a single CPU?
-Charlie
PhysX = proprietary NVIDIA
Havok = proprietary Intel (which is why AMD dropped it in the end... after much PR, again, and nothing to show)
Then we have Bullet Physics, which from what I gather has only been used in "indie games"... and is now the hope of AMD... (until they pick something else).
Except that as far as we know, ATI is still assisting Havok with porting to OpenCL. So they haven't exactly dropped it.
[Edit] Replying to your later post: it's business. Businesses usually aren't so petty as to snub their competition if their competition has something they can work on together. MS and Sony still work on projects together even though they are direct competitors. MS still works with Apple. And AMD is still working with Intel (especially with regard to Havok).
And to the one above this one: yup, voting with the wallet. Not a single person I know in RL has purchased Batman: AA due to the vendor lock-in. And many have stated to me that they will be skipping at least one (or more) generations of Nvidia hardware, even if it completely blows ATI away, in hopes that Nvidia gets the message that they would prefer open standards/stuff that works cross-vendor.
It's a bit of a sea change around here, as many of these same people have never owned anything other than Nvidia cards.
Regards,
SB
I fail to see the relevance to this topic (which is AMD PR, including making false claims).
I should have put that into a new post; it was a reply to a post above yours, where the person urged people to vote with their wallets rather than complain if a company is doing things some people view as harmful to the industry.
As to Havok: it is a wholly owned subsidiary of Intel as far as I'm aware, which means it makes its own profits regardless of what Intel does. Whether Larrabee is ever released or not, Havok as a company still makes a profit, and for Intel to abandon it would be to throw away their investment as well as a continuing revenue stream.
If Intel truly felt Havok was no longer of use to them, they'd be better off spinning it off, finding a new buyer, or allowing Havok to buy itself out.
Regards,
SB
Since Intel acquired Havok (and just left AMD hanging in the wind... they are not friends, you know)... and since AMD announced this last year:
http://www.amd.com/us/press-releases/Pages/amd-announces-new-levels-of-realism-2009sept30.aspx
But it's like we are still in 2006... all words... and nothing to show.
Which is why Huddy really shouldn't have bent the facts like that.
Even Nvidia and AMD work together sometimes:
"In a normal class certification hearing, the plaintiffs revealed an email between Nvidia senior VP of marketing, Dan Vivoli, and ATI’s president and COO, Dave Orton, which points to inflated prices and collusion. It reads: “I really think we should work harder together on the marketing front. As you and I have talked about, even though we are competitors, we have the common goal of making our category a well positioned, respected playing field. $5 and $8 stocks are a result of no respect.”"
They make money because they generally follow good business practices.
Havok is providing a positive revenue stream.
It's not only Larrabee related. They want Havok to maintain dominance in the physics market, because then they can use the CPU implementation to their advantage over AMD's CPUs.

Intel's only (real) interest in Havok is "Larrabee" related.

Funny how all your arguments are pre-"larragate".
Yeah, unfortunately ATI is epic at making promises and never fulfilling them... but hey, they haven't joined the likes of DNF/BB... yet.

This is good stuff. nVidia just keeps stepping in it.
http://www.xbitlabs.com/news/multim...ling_Multi_Core_CPU_Support_in_PhysX_API.html
But in nVidia's defense... at least they have something. The following video is from '06... running on an X1600. The 58xx series should be a monster at physics.

Epic FAIL at the end of the video with the "end of this year" promise. We don't even have a flipping demo yet...
(The bold part) What I don't get: isn't it obvious? Would it be possible to preserve throughput and improve serial performance in a chip of the same size as Larrabee? Is it a bit of free bashing, or is he suggesting that, since Larrabee was never going to compete with today's GPUs and Intel should have known it, it might have made sense to sacrifice some throughput to offer a more balanced chip (which would, by the way, make the plain x86 choice for the ISA more relevant)?

R.Huddy said:
bit-tech: Given Intel's approach to using Intel Architecture (IA) in Larrabee, and as an x86 company yourself, do you think it's because Intel are using IA specifically that it's the problem?
RH: They really have a whole host of problems: some of which I wouldn't want to describe in too much detail because it points them in the right direction and they've got their own engineers to see that. The x86 instruction set is absolutely not ideal for graphics and by insisting on supporting the whole of that instruction set they have silicon which is sitting around doing nothing for a significant amount of time.
By building an in-order CPU core - which is what they are doing on Larrabee - they are going to get pretty poor serial performance. When the code is not running completely parallel their serial performance will be pretty dismal compared to modern CPUs. As far as I can tell they haven't done a fantastic job of latency hiding either - it's hyperthreaded like mad and has a huge, totally conventional CPU cache.

Well it shouldn't come as a big surprise that it's simply not what we do. Our GPU caches simply don't work like CPU caches and they are for flushing data out of the pipeline at one end and preparing it to be inserted at the other - a read and write cache to keep all the GPU cores filled. One large cache and lots of local caches for the cores is not a great design. On top of which it doesn't actually have enough fixed function hardware to take on the problem as it's set out at the moment, so it needs to be rearchitected if Intel is to have a decent chance of competing.
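Huddy's point about poor serial performance can be made concrete with Amdahl's law: even a small serial fraction caps the speedup of a many-core design, and slow in-order cores make that serial fraction hurt even more. A minimal sketch (the core count and serial fractions here are illustrative, not Larrabee's actual figures):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With 32 cores, a workload that is 10% serial speeds up by at most ~7.8x,
# not 32x -- which is why dismal single-thread performance matters.
for serial in (0.01, 0.10, 0.25):
    print(f"serial={serial:.0%}: max speedup on 32 cores = "
          f"{amdahl_speedup(serial, 32):.1f}x")
```

The takeaway matches the argument in the quote: the less parallel the code, the more the chip's overall performance is dominated by how fast one core runs serial work.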
It's amazing that you continue to completely ignore the fact that Havok on its own is a profitable venture. If it were costing Intel money, I could see them shuttering it or, far more likely, trying to find a buyer for it.
But Intel is a business. They make money because they generally follow good business practices.
Havok is providing a positive revenue stream. Intel won't be shuttering it or otherwise changing any aspect of Havok, especially since Havok is relatively independent of Intel.
No one will argue that Intel was most likely hoping to use Havok to help push Larrabee. But to think that Intel will suddenly attempt to sabotage Havok in order to spite the rest of the industry? That's just so far out in left field, I don't even know how to comment on it.
In fact, if you were going to use emotions to guide business practices, as you seem to be advocating, Intel would be far more likely to partner with AMD on Havok in order to spite Nvidia, who have been directly provoking Intel. Fortunately, Intel is a business, and things like that won't factor into it.
Havok makes money. Havok continues to make money. Thus business as usual.
Regards,
SB