AGEIA bought!

This is an area where I would like to see Microsoft step in. They're pretty much one of the only organisations that can force a standard. With Havok going to Intel and Ageia going to Nvidia, it'd be nice to see some sort of "DirectPhysics" to provide a standard API that everyone can use.
 
Will they have some influence on future GPU design too, or can we expect a mere software API port to CUDA?
 
The design of the next generation is likely closed already, and the architecture of the generation after that is likely going to be even more programmable and flexible anyway (with physics, GPGPU and maybe even raytracing in mind). So I don't think this acquisition changes anything, it was planned all along.
 
I found this quote here:
http://www.bootdaily.com/index.php?option=com_content&task=view&id=1006&Itemid=59

"Earlier this week, NVIDIA announced it had entered into an agreement to purchase Ageia for an undisclosed amount but it would be hosting a conference call later this month to go over details. We've learned from sources close to the deal that all shareholders of Common Stock within Ageia (which includes many former employees and current ones) that the company's stock has been nulled through this deal."

Can Nvidia just take away people's shares?
 

This has nothing to do with Nvidia and everything to do with the VCs: they set the rules for what happens in the case of a takeover, going public, etc., and they pretty much have full control to change those rules as you go.

In this particular case, it's likely that there simply was nothing left to distribute: VCs pumped $55M into the company, and they have priority in getting that money back. If employees indeed didn't receive anything, then that's a good indication that Nvidia paid less than that. This shouldn't be too surprising: Intel paid $110M for Havok, which has a much wider and more successful product portfolio that actually makes some money. Ageia bet on HW, lost, and ran out of money. Not a great position to be in...

You shouldn't be too upset about the fate of its employees. When you join a startup, this is what you sign up for: a low(er) salary, insane working hours, gobs of stock options, the uncertainty of losing your job at any time, a 5% chance that those options will one day be worth something, and a 1% chance that they'll be worth A LOT. It's exciting, and, in 95% of the cases, not as profitable as working a 'regular' job, but where's the fun in that?
 
Seems as though Silent Guy's predictions are going to be on the money.

I wonder how quickly they will be able to port it to CUDA? My guess is very fast. Perhaps on DX10 GPUs acceleration won't be as robust as with DX11, but I can't imagine they won't have something fairly soon.
I'm amazed; it seems as though everyone here is reading something different than I am into NVIDIA/Ageia's public comments. At least the spin from xbitlabs on their own interview is much more compatible with what I'm saying: http://www.xbitlabs.com/news/multim..._with_Dedicated_Physics_Processors_Ageia.html

I think the fundamental difference in points of view emerges from many people's assumption that PhysX could be ported fully or mostly to CUDA. Errrrr, no. Havok FX can only do one thing: approximate effects physics. There's a very good reason why it doesn't handle more than that: because it couldn't. And that's with plenty of help from the CPU already!

For those interested in the technical details, there's this pretty nice presentation about it: http://www.gpgpu.org/s2007/slides/15-GPGPU-physics.pdf - the limitations are pretty much as follows (a rough code sketch of this split is included after the list):
- Integration on the GPU.
- Broad phase (detecting potential collisions) on the CPU.
- Narrow phase (verifying potential collisions) on the GPU for rigid objects but approximated using a "proprietary texture-based shape representation".
- GPU->CPU feedback is kept really low to improve scalability.

Particles are likely a slightly optimized path where at least one of the two rigid objects is a sphere (->easier), and fluids/cloth/etc. are likely complete separate code paths.
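
To make that split a bit more concrete, here is a minimal CUDA-style sketch of the division of labour described in that list. All of the names and structures are hypothetical and the narrow phase is reduced to spheres; it's only meant to show the shape of the pipeline, not Havok FX's actual code.

```cuda
// Hypothetical sketch of the CPU/GPU split described above (not Havok FX's
// real code): the CPU runs the broad phase and uploads candidate pairs, the
// GPU verifies them, and only a tiny result buffer is read back each frame.
#include <cuda_runtime.h>

struct Body { float3 pos, vel; float radius; };
struct Pair { int a, b; };   // candidate collision found by the CPU broad phase

// Narrow phase on the GPU: verify the CPU's candidates (spheres only, to keep it short).
__global__ void narrowPhase(const Body* bodies, const Pair* pairs, int numPairs,
                            int* hit)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPairs) return;

    float3 pa = bodies[pairs[i].a].pos;
    float3 pb = bodies[pairs[i].b].pos;
    float dx = pa.x - pb.x, dy = pa.y - pb.y, dz = pa.z - pb.z;
    float r  = bodies[pairs[i].a].radius + bodies[pairs[i].b].radius;
    hit[i] = (dx * dx + dy * dy + dz * dz < r * r) ? 1 : 0;
}

// Per frame, on the host:
//   1. CPU broad phase (e.g. sweep-and-prune) builds the Pair list.
//   2. Upload the pairs, launch narrowPhase plus an integration kernel.
//   3. Read back only the small 'hit' buffer -- keeping GPU->CPU feedback low,
//      exactly as in the limitation list above.
```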

So I'm sorry, but that's so far from being a full physics API it's not even funny. And no, you couldn't port a full physics API to CUDA anyway; the problem is that there is no MIMD control flow logic anywhere on the chip and the PCI Express latency is too high. The PhysX chip is quite different there: many units are basically MIMD (although Vec4) iirc, and there are control cores and a full MIPS core. Larrabee will also handle that problem via on-core MIMD ALUs (since it's a CPU with vector extensions and some extra logic, not a traditional GPU).
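
As a rough illustration of the control-flow point (my example, not anything from the PhysX or Havok code): a general narrow phase has to branch on the shape-pair type, and inside a SIMD warp every divergent branch gets serialised, whereas MIMD cores can simply run the different paths independently.

```cuda
// Illustrative only: why a general narrow phase maps badly onto SIMD hardware.
// Neighbouring threads handle different shape pairs, take different branches,
// and the warp ends up executing every taken path one after the other.
#include <cuda_runtime.h>

enum ShapeType { SPHERE = 0, BOX = 1, CONVEX = 2 };

__global__ void generalNarrowPhase(const int* typeA, const int* typeB,
                                   int numPairs, int* result)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPairs) return;

    if (typeA[i] == SPHERE && typeB[i] == SPHERE) {
        result[i] = 0;   // cheap closed-form test
    } else if (typeA[i] == BOX && typeB[i] == BOX) {
        result[i] = 1;   // SAT-style test, small fixed loop
    } else {
        result[i] = 2;   // GJK/EPA-style iterative test, data-dependent loop count
    }
    // On a MIMD core each pair just runs its own test; in a warp, mixing pair
    // types serialises the branches and throughput drops accordingly.
}
```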

The reason why Havok FX is so fast also isn't just the GPU's performance, but also that proprietary texture-based shape representation. I like the idea (a lot), but it's not very precise (by definition) and couldn't be used for anything else. I'm also not sure who owns that IP (NV? Havok?) - either way, I do feel that this is something that's missing from the PhysX API, and a new special-purpose path for it would pretty much be necessary.
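
The presentation doesn't say what that representation actually is, but to give a flavour of the kind of thing a "texture-based shape representation" can mean, here's a purely speculative sketch using a signed-distance field baked into a 3D texture (my guess at the general idea, not their IP):

```cuda
// Purely speculative sketch: a rigid body's surface baked into a 3D
// signed-distance texture, sampled by the narrow phase. Very fast, but only
// as precise as the texture resolution -- hence "approximate" by definition.
#include <cuda_runtime.h>

// Points are assumed to already be in the shape's local space, scaled so the
// shape fits inside [0,1]^3 (the texture setup/binding code is omitted here).
__global__ void testPointsAgainstShape(cudaTextureObject_t shapeSDF,
                                       const float3* pts, int n, int* inside)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 p = pts[i];
    float d = tex3D<float>(shapeSDF, p.x, p.y, p.z);  // signed distance to surface
    inside[i] = (d < 0.0f) ? 1 : 0;                   // negative = penetrating
}
```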

Ideally, in the short term, NVIDIA would give away a free CUDA-based effects-physics mini-engine (with an optimized CPU path eventually?) that developers could easily integrate. However, one possibility is that they'll just extend the PhysX API and add that, forcing everyone to use PhysX if they want to benefit from it. There are both advantages and disadvantages to that approach.

The goal in the longer term would be to implement a more correct narrow phase and do the broad phase on the GPU too, along with more advanced functionality than just basic rigid bodies. This would require on-chip MIMD control ALUs/cores. What I've been trying to imply is that I suspect that's NV's plan in the DX11 timeframe; otherwise, their benefit from the PhysX API would be near-zero. You just can't accelerate much, if any, of what's currently in PhysX on modern DX10 GPUs.

It is also from that perspective that I suggest it would be a significant disadvantage *not* to release another PPU, because then the PhysX API will likely have become even more irrelevant by 2H09/1H10, when you might be able to accelerate it directly on the GPU. And that'd make the acquisition effectively useless - not a very wise move...
 
PhysX SDK isn't tied to the PPU you know :)
 
Interesting breakdown. Thanks.

I wonder how many of these disadvantages will still exist with DirectX 11-era graphics cards though.
 
I think the fundamental difference in points of view emerges from many people's assumption that PhysX could be ported fully or mostly to CUDA. Errrrr, no. Havok FX can only do one thing: approximate effects physics. There's a very good reason why it doesn't handle more than that: because it couldn't. And that's with plenty of help from the CPU already!
Jen-Hsun seems to believe otherwise. From the financials call:
Jen-Hsun said:
Our strategy is to take the AGEIA physics engine, which has been integrated into tools and games all over the world, and we’re going to port the AGEIA physics engine onto CUDA.
 
Shush, with all due respect, Jen-Hsun also seems to think Tri-SLI wasn't released yet! :p Anyway, as per the discussion in the Q407 results thread (Semiconductor Financials forum), I think he's simply confused in believing they can port the full API to GPUs and truly benefit from the existing software install base.

I can believe they could accelerate part of it on the GPU though and perhaps create a new 'effects' path, but I'll simply say that certainly is NOT the strategy I would be adopting here. The physics acceleration scene has already been so underwhelming in recent years that the last thing the industry needs is yet another overhyped but pointless solution, IMO. I still think it'd be much wiser to wait this out until you can actually make it really compelling.

Oh well, it's not like Jen-Hsun or NV's GPU business in general cared about my advice either way, so I'll stop this right here!
 
I don't think it's entirely impossible. Remember, NVIDIA is trying to push CUDA into the research area, and many HPC projects are actively looking into using CUDA. Therefore, it's entirely possible that future versions of CUDA will be quite capable of doing more complex physics simulations.

Of course, a fundamental problem is that games are not going to be bound to specific hardware. This is true for the PPU, and is also true for CUDA-capable GPUs. IMHO, the earliest CUDA-enhanced effects will be cosmetic only. This is not actually that bad, though. An explosion with 10x the particles will look much better. Water waves can also be simulated much better, even with current CUDA GPUs.

Game-mechanics-related physics simulation will still be done on the CPU, until someday there's a standard interface for GPUs to do these things. But as Arun pointed out, the bandwidth/latency limit of PCI Express between the GPU and main memory is still a major obstacle.
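
For the kind of cosmetic effect mentioned above (10x the particles in an explosion, nicer water), the GPU-side work really is embarrassingly parallel and needs no feedback to gameplay code, so nothing ever has to come back over PCI Express. A minimal, hypothetical CUDA sketch of the per-frame particle update:

```cuda
// Minimal hypothetical sketch of "cosmetic" effects physics on the GPU:
// one thread per particle, results consumed directly by the renderer,
// no readback to the CPU required.
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; float life; };

__global__ void updateParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y -= 9.81f * dt;            // gravity
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f) {             // crude ground bounce with damping
        p[i].pos.y = 0.0f;
        p[i].vel.y *= -0.5f;
    }
    p[i].life -= dt;
}

// Launch example: updateParticles<<<(n + 255) / 256, 256>>>(d_particles, n, dt);
```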
 
I think that CUDA will serve as somewhat of a version 0.5 of a standard GPGPU interface, which will eventually be included in DX and OGL :)
Much like Cg was for HLSL/GLSL.
 

It would be best if there were a standard stream-processing language for everything, i.e. NVIDIA's GPUs, ATI/AMD's GPUs, and Larrabee. Actually, I don't think CUDA is that different from Brook+. However, even with a common language, you'd still want to optimize your program for specific GPUs because they tend to have quite different performance characteristics.

That's why I think game developers (and even application developers) will eventually be more likely to use higher-level libraries such as Ageia's. For example, if you want to decode JPEGs with GPU acceleration, you don't write your own GPU-accelerated JPEG decoder; you buy one.
 
quote from the top dude at nvidia:

We're working toward the physics-engine-to-CUDA port as we speak. And we intend to throw a lot of resources at it. You know, I wouldn't be surprised if it helps our GPU sales even in advance of [the port's completion]. The reason is, [it's] just gonna be a software download. Every single GPU that is CUDA-enabled will be able to run the physics engine when it comes. . . . Every one of our GeForce 8-series GPUs runs CUDA.

Our expectation is that this is gonna encourage people to buy even better GPUs. It might—and probably will—encourage people to buy a second GPU for their SLI slot. And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics.
 
Nvidia GPU physics engine up and running, almost
While Intel's Nehalem demo had 50,000-60,000 particles and ran at 15-20 fps (without a GPU), the same particle demo on a GeForce 9800 card ran at 300 fps. In the very likely event that Nvidia's next-gen parts (G100: GT100/200) double their shader units, this number could top 600 fps, meaning that Nehalem at 2.53 GHz is lagging 20-40x behind 2006/2007/2008 high-end GPU hardware. However, you can't ignore the fact that Nehalem can in fact run physics.

There was also a demonstration of cloth: a quad-core Intel Core 2 Extreme processor managed 12 fps, while a GeForce 8800 GTS board came in at 200 fps. Former Ageia employees did not compare it to Ageia's own PhysX card, but if we remember correctly, that demo ran at 150-180 fps on an Ageia card.

Actually, there is no word that they (NV) have "ported" Ageia's PhysX to CUDA to run these tests.
 