AGEIA bought!

Discussion in 'Graphics and Semiconductor Industry' started by INKster, Jan 22, 2008.

  1. Sc4freak

    Newcomer

    Joined:
    Dec 28, 2004
    Messages:
    233
    Likes Received:
    2
    Location:
    Melbourne, Australia
    This is an area I would like to see Microsoft step into. They're pretty much one of the only organisations that can force a standard. With Havok going to Intel and Ageia going to Nvidia, it'd be nice to see some sort of "DirectPhysics" provide a standard API that everyone can use.
     
  2. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
  3. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Will they have some influence on future GPU design too, or can we expect a mere software API port to CUDA?
     
  4. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    The design of the next generation is likely closed already, and the architecture of the generation after that is likely going to be even more programmable and flexible anyway (with physics, GPGPU and maybe even raytracing in mind). So I don't think this acquisition changes anything; it was planned all along.
     
  5. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    15,068
    Likes Received:
    2,397
    I found this quote here:
    http://www.bootdaily.com/index.php?option=com_content&task=view&id=1006&Itemid=59

    "Earlier this week, NVIDIA announced it had entered into an agreement to purchase Ageia for an undisclosed amount but it would be hosting a conference call later this month to go over details. We've learned from sources close to the deal that all shareholders of Common Stock within Ageia (which includes many former employees and current ones) that the company's stock has been nulled through this deal."

    Can Nvidia just take away people's shares?
     
  6. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    If they are buying it outright, then yes, pretty much: they are buying all the shares.
     
  7. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    This has nothing to do with Nvidia and everything to do with the VCs: they determine the rules about what happens in case of a takeover, going public, etc., and they pretty much have full control to change those rules as they go.

    In this particular case, it's likely that there simply was nothing left to distribute: the VCs pumped $55M into the company, and they have first priority in getting that money back. If employees indeed didn't receive anything, then that's a pretty good indication that Nvidia paid less than that. This shouldn't be too surprising: Intel paid $110M for Havok, which has a much wider and more successful product portfolio that actually makes some money. Ageia bet on HW, lost, and ran out of money. Not a great position to be in...

    You shouldn't be too upset about the fate of its employees. When you join a startup, this is what you sign up for: a low(er) salary, insane working hours, gobs of stock options, the uncertainty of losing your job at any time, a 5% chance that those options will one day be worth something, and a 1% chance that they'll be worth A LOT. It's exciting, and, in 95% of cases, not as profitable as working a 'regular' job, but where's the fun in that?
     
  8. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    I'm amazed; it seems as though everyone here is reading something different than I am into NVIDIA/Ageia's public comments. At least the spin from xbitlabs on their own interview is much more compatible with what I'm saying: http://www.xbitlabs.com/news/multim..._with_Dedicated_Physics_Processors_Ageia.html

    I think the fundamental difference in points of view emerges from many people's assumption that PhysX could be ported fully or mostly to CUDA. Errrrr, no. Havok FX can only do one thing: approximate effects physics. There's a very good reason why it doesn't handle more than that: because it couldn't. And that's with plenty of help from the CPU already!

    For those interested in the technical details, there's this pretty nice presentation about it: http://www.gpgpu.org/s2007/slides/15-GPGPU-physics.pdf - the limitations are pretty much as follows:
    - Integration on the GPU.
    - Broad phase (detecting potential collisions) on the CPU.
    - Narrow phase (verifying potential collisions) on the GPU for rigid objects, but approximated using a "proprietary texture-based shape representation".
    - GPU->CPU feedback kept really low to improve scalability.

    Particles are likely a slightly optimized path where at least one of the two rigid objects is a sphere (->easier), and fluids/cloth/etc. are likely completely separate code paths.
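
    To make that split concrete, here's a toy sketch of my own (NOT Havok FX or PhysX code; every name in it is made up for illustration) in CUDA: broad phase on the CPU, narrow phase on the GPU, and per-frame readback kept down to a single counter.

    Code:
    // Toy sketch of a CPU broad phase + GPU narrow phase split with minimal readback.
    #include <cuda_runtime.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Body { float3 pos; float radius; };
    struct Pair { int a, b; };

    // Narrow phase on the GPU: verify each candidate pair (sphere-sphere here
    // for simplicity). The only data sent back to the CPU is a contact counter.
    __global__ void narrowPhase(const Body* bodies, const Pair* pairs,
                                int numPairs, int* contactCount)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numPairs) return;
        Body a = bodies[pairs[i].a];
        Body b = bodies[pairs[i].b];
        float dx = b.pos.x - a.pos.x;
        float dy = b.pos.y - a.pos.y;
        float dz = b.pos.z - a.pos.z;
        float r = a.radius + b.radius;
        if (dx * dx + dy * dy + dz * dz < r * r)
            atomicAdd(contactCount, 1);   // minimal GPU->CPU feedback
    }

    int main()
    {
        // Broad phase on the CPU: brute-force overlap test along one axis to
        // build a small candidate-pair list (a real engine would use something
        // like sweep-and-prune here).
        std::vector<Body> bodies(256);
        for (int i = 0; i < 256; ++i)
            bodies[i] = { make_float3(i * 0.5f, 0.0f, 0.0f), 0.3f };

        std::vector<Pair> pairs;
        for (int i = 0; i < 256; ++i)
            for (int j = i + 1; j < 256; ++j)
                if (fabsf(bodies[j].pos.x - bodies[i].pos.x) < bodies[i].radius + bodies[j].radius)
                    pairs.push_back({ i, j });

        // Upload bodies and candidate pairs, run the narrow phase, read back one int.
        Body* dBodies; Pair* dPairs; int* dContacts;
        cudaMalloc(&dBodies, bodies.size() * sizeof(Body));
        cudaMalloc(&dPairs, pairs.size() * sizeof(Pair));
        cudaMalloc(&dContacts, sizeof(int));
        cudaMemcpy(dBodies, bodies.data(), bodies.size() * sizeof(Body), cudaMemcpyHostToDevice);
        cudaMemcpy(dPairs, pairs.data(), pairs.size() * sizeof(Pair), cudaMemcpyHostToDevice);
        cudaMemset(dContacts, 0, sizeof(int));

        int threads = 128;
        int blocks = ((int)pairs.size() + threads - 1) / threads;
        narrowPhase<<<blocks, threads>>>(dBodies, dPairs, (int)pairs.size(), dContacts);

        int contacts = 0;
        cudaMemcpy(&contacts, dContacts, sizeof(int), cudaMemcpyDeviceToHost);
        printf("candidate pairs: %d, actual contacts: %d\n", (int)pairs.size(), contacts);

        cudaFree(dBodies); cudaFree(dPairs); cudaFree(dContacts);
        return 0;
    }

    Even in a toy like this, the scheme only works because the readback is tiny; feeding full contact data back to gameplay logic every frame is exactly where PCI Express latency starts to hurt.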

    So I'm sorry, but that's so far from being a full physics API it's not even funny. And no, you couldn't port a full physics API to CUDA anyway; the problem is that there is no MIMD control flow logic anywhere on the chip and the PCI Express latency is too high. The PhysX chip is quite different there: many units are basically MIMD (although Vec4) iirc, and there are control cores and a full MIPS core. Larrabee will also handle that problem via on-core MIMD ALUs (since it's a CPU with vector extensions and some extra logic, not a traditional GPU).

    The reason why Havok FX is so fast also isn't just the GPU's performance, but also that proprietary texture-based shape representation. I like the idea (a lot), but it's not very precise (by definition) and couldn't be used for anything else. I'm also not sure who owns that IP (NV? Havok?) - either way, I do feel this is something that's missing from the PhysX API, and a new special-purpose path for it would be all but necessary.

    Ideally, in the short term, NVIDIA would give away a CUDA-based effects physics mini-engine (with an optimized CPU path eventually?) that developers could use for free and integrate easily. However, one possibility is that they'll just extend the PhysX API and add that, forcing everyone to use PhysX if they want to benefit from it. There are both advantages and disadvantages to that approach.

    The goal in the longer term would be to implement a more correct narrow phase and do the broad phase on the GPU too, along with more advanced functionality than just basic rigid bodies. This would require on-chip MIMD control ALUs/cores. What I've been trying to imply is that I suspect that's NV's plan in the DX11 timeframe; otherwise, their benefit from the PhysX API would be near-zero. You just can't accelerate much, if any, of what's currently in PhysX on modern DX10 GPUs.

    It is also from that perspective that I suggest it would be a significant disadvantage *not* to release another PPU, because then the PhysX API would likely have become even more irrelevant by 2H09/1H10, when you might be able to accelerate it directly on the GPU. And that'd make the acquisition effectively useless - not a very wise move...
     
  9. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,340
    Likes Received:
    66
    Location:
    msk.ru/spb.ru
    The PhysX SDK isn't tied to the PPU, you know :)
     
  10. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    Interesting breakdown. Thanks.

    I wonder how many of these disadvantages will still exist with DirectX 11 era graphics cards, though.
     
  11. Rufus

    Newcomer

    Joined:
    Oct 25, 2006
    Messages:
    246
    Likes Received:
    60
    Jen-Hsun would believe otherwise. From the financials call:
     
  12. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Shush, with all due respect, Jen-Hsun also seems to think Tri-SLI hasn't been released yet! :p Anyway, as per the discussion in the Q407 results thread (Semiconductor Financials forum), I think he's simply confused in believing they can port the full API to GPUs and truly benefit from the existing software install base.

    I can believe they could accelerate part of it on the GPU though and perhaps create a new 'effects' path, but I'll simply say that certainly is NOT the strategy I would be adopting here. The physics acceleration scene has already been so underwhelming in recent years that the last thing the industry needs is yet another overhyped but pointless solution, IMO. I still think it'd be much wiser to wait this out until you can actually make it really compelling.

    Oh well, it's not like Jen-Hsun or NV's GPU business in general cared about my advice either way, so I'll stop this right here!
     
  13. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,774
    Likes Received:
    157
    Location:
    Taiwan
    I don't think it's entirely impossible. Remember, NVIDIA is trying to push CUDA into the research space, and many HPC projects are actively looking into using CUDA. Therefore, it's entirely possible that future versions of CUDA will be quite capable of doing more complex physics simulations.

    Of course, a fundamental problem is that games are not going to be bound to specific hardware. This is true for the PPU, and it is also true for CUDA-capable GPUs. IMHO, the earliest CUDA-enhanced effects will be cosmetic only. This is not actually that bad, though. An explosion with 10x the particles will look much better. Water waves can also be simulated much better, even with current CUDA GPUs.

    Game-mechanics-related physics simulation will still be done on the CPU until, someday, there's a standard interface for GPUs to do these things. But as Arun pointed out, the bandwidth/latency limit of the PCI Express link between the GPU and main memory is still a major obstacle.
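
    To illustrate the kind of cosmetic-only effect I mean, here's a rough sketch (made up for illustration, not from any shipping engine) of stepping a large particle burst entirely on the GPU, with nothing gameplay-critical ever read back:

    Code:
    // Toy sketch: integrate a large GPU-resident particle burst, one thread per particle.
    #include <cuda_runtime.h>
    #include <cstdio>

    struct Particle { float3 pos; float3 vel; };

    // Apply gravity and integrate position for one particle per thread.
    __global__ void stepParticles(Particle* p, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        p[i].vel.y -= 9.81f * dt;          // gravity
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }

    int main()
    {
        const int n = 1 << 20;             // ~1M particles for the "10x" explosion
        Particle* d;
        cudaMalloc(&d, n * sizeof(Particle));
        cudaMemset(d, 0, n * sizeof(Particle));   // everything starts at the origin, at rest

        // Run a few frames; in a game the buffer would be rendered directly
        // (e.g. via graphics interop) instead of ever being copied back.
        for (int frame = 0; frame < 100; ++frame)
            stepParticles<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 60.0f);

        cudaDeviceSynchronize();
        printf("simulated %d particles for 100 frames\n", n);
        cudaFree(d);
        return 0;
    }

    Nothing here ever needs to come back to the CPU, which is exactly why this kind of effect maps well to today's GPUs while gameplay physics does not.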
     
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,340
    Likes Received:
    66
    Location:
    msk.ru/spb.ru
    I think that CUDA will serve as something like a version 0.5 of a standard GPGPU interface which will eventually be included in DX and OGL :)
    Much like Cg was for HLSL/GLSL.
     
  15. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    Also, don't forget that "porting" can be very flexible in its meaning.
     
  16. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,774
    Likes Received:
    157
    Location:
    Taiwan
    It would be best if there were a standard stream processing language for everything, i.e. NVIDIA's GPUs, ATI/AMD's GPUs, and Larrabee. Actually, I don't think CUDA is that different from Brook+. However, even with a common language, you'll still want to optimize your program for specific GPUs because they tend to have quite different performance characteristics.

    That's why I think game developers (and even application developers) will eventually be more likely to use higher-level libraries such as Ageia's. For example, suppose you want to decode JPEGs with GPU acceleration: you don't write your own GPU-accelerated JPEG decoder, you buy one.
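
    As a trivial sketch of what that per-GPU tuning can look like in CUDA (the block-size heuristic here is made up purely for illustration):

    Code:
    // Query the device and pick launch parameters accordingly.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   // properties of device 0

        // Pick a block size based on what the hardware reports; a real tuner
        // would also consider shared memory and register usage per kernel.
        int blockSize = (prop.maxThreadsPerBlock >= 512) ? 256 : 128;

        printf("device: %s (SM %d.%d), %d multiprocessors\n",
               prop.name, prop.major, prop.minor, prop.multiProcessorCount);
        printf("chosen block size: %d\n", blockSize);
        return 0;
    }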
     
  17. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    15,068
    Likes Received:
    2,397
    Quote from the top dude at NVIDIA:

    We're working toward the physics-engine-to-CUDA port as we speak. And we intend to throw a lot of resources at it. You know, I wouldn't be surprised if it helps our GPU sales even in advance of [the port's completion]. The reason is, [it's] just gonna be a software download. Every single GPU that is CUDA-enabled will be able to run the physics engine when it comes. . . . Every one of our GeForce 8-series GPUs runs CUDA.

    Our expectation is that this is gonna encourage people to buy even better GPUs. It might—and probably will—encourage people to buy a second GPU for their SLI slot. And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics.
     
  18. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    LMAO, is crack that easy to get nowadays? :lol:
     
  19. Arnold Beckenbauer

    Veteran

    Joined:
    Oct 11, 2006
    Messages:
    1,415
    Likes Received:
    348
    Location:
    Germany
    Nvidia GPU physics engine up and running, almost
    Actually, there is no word that they (NV) have "ported" Ageia's PhysX to CUDA to run these tests.
     
  20. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    What do you mean?
     