AGEIA bought!

INKster

Veteran
I'm usually reluctant to post anything from Fudzilla due to their shady track record so far, but Fuad is reporting that AGEIA has been acquired by an unnamed suitor (he only says that it's not AMD):
Unknown bidder


An industry source has confirmed that Ageia has been acquired. We don’t know who got the company but we can confirm that it is not AMD.

We suspect Intel or Nvidia, but we still don't know; maybe it's a third player after all.

We will try to get more details.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=5260&Itemid=1
 
My bet is NVIDIA is most likely, followed by MS, then Intel and finally Sony. Why NV? SW & Engineering talent + if they want to add stuff to their DX11 GPU especially for physics (rather than just marketing buzzwords), now is the time to get started. And they don't know what to do with their money anyway...
Q3 CC said:
On the balance sheet, cash and marketable securities ended the quarter at $1.85 billion, which is an increase of $281 million from the second quarter, despite repurchasing $125 million of stock during the quarter.

Operating cash flow hit $400 million in the quarter for the first time. Accounts receivable were $552 million, an increase of only $44 million from the second quarter as the even profile of shipments during the quarter helped collections.
 
My bet is NVIDIA is most likely, followed by MS, then Intel and finally Sony. Why NV? SW & Engineering talent + if they want to add stuff to their DX11 GPU especially for physics (rather than just marketing buzzwords), now is the time to get started. And they don't know what to do with their money anyway...

If it's NV, that could cause bad things for AMD in the future in terms of GPU physics, because then Intel has Havok, NV has Ageia, and AMD has nothing.
 
“We have no information about [the acquisition of Ageia]. There have been no changes [in the structure of investors]. As you know this kind of rumors pops up from time to time…” said Dan Forster, a spokesperson for Ageia Technologies.

From X-bit labs
 
Hasn't nV already bought some physics middleware company from Norway or such? Was it Meqon or so? I can't remember exactly.

As for Ageia, whoever thought buying them is a good idea should be forced to clean the toilets for the rest of his career.
 
Hasn't nV already bought some physics middleware company from Norway or such? Was it Meqon or so? I can't remember exactly.

As for Ageia, whoever thought buying them is a good idea should be forced to clean the toilets for the rest of his career.

That's a little harsh. Their card may not have amounted to much, but they've got decent physics middleware, and theirs is the only product left on the open market that could support GPUs. I think an acquisition would make Ageia significantly more interesting, as they'd have the muscle to push their software, and would perhaps be forced to abandon their cards in favor of the GPU approach.
 
Hasn't nV already bought some physics middleware company from Norway or such? Was it Meqon or so? I can't remember exactly.
I'm not aware of that. And regarding Meqon, you seem horribly confused: http://www.meqon.com/ (Ageia acquired both Novodex and Meqon...)

As for Ageia, whoever thought buying them is a good idea should be forced to clean the toilets for the rest of his career.
I think you're not really thinking that through. It depends a lot on the price of course, but good SW, good SW engineers, and probably rather good HW engineers with "what went wrong?" experience never hurt.
 
“We have no information about [the acquisition of Ageia]"
What on earth type of statement is that from a PR guy? Either your company has been bought or it hasn't; what do you mean, you have no information!?
 
Hasn't nV already bought some physics middleware company from Norway or such? Was it Meqon or so? I can't remember exactly.

Ageia bought Meqon in 2005. Meqon was Swedish.
 
Ok, I confused Ageia buying Meqon with nV buying it.

As for the comments above, physics on GPU is just plain dumb and nothing substantial for the future. We'll have 8-core and bigger CPUs in a year or two, so why would anyone abuse the precious GFX flops for a bit of physics? That's a complete no-go long term.

And also what is so special about their API to begin with?
 
As for the comments above, physics on GPU is just plain dumb and nothing substantial for the future. We'll have 8-core and bigger CPUs in a year or two, so why would anyone abuse the precious GFX flops for a bit of physics? That's a complete no-go long term.

And also what is so special about their API to begin with?

Oh boy, here we go again.

GPU physics is not "dumb".

Think about this: what is the average PC gamer more likely to have?
1) a 4-8 core CPU
2) a discrete GPU

Clearly the answer is a discrete GPU. GPU physics has already been demonstrated as a viable technology, simply lacking software support (API, game dev inclusion).
 
Think about this: what is the average PC gamer more likely to have?
1) a 4-8 core CPU
2) a discrete GPU
The average gamer is most likely going to have a dual-core CPU that isn't fully utilized and a GPU that would have a hard time doing physics work without noticeably dropping frame rates.

Single-core CPUs are already extinct, while IGPs and low-end cards are actually gaining popularity as a viable option for people who don't game every day. So for game developers it's much more interesting to do their physics on the CPU instead of on the GPU, which can vary widely in performance. Quad-cores are also already getting affordable, and it won't take many years till you can't buy a dual-core any more, while weaker GPUs will keep existing next to the awesomely powerful cards.

Of course, for low-interaction 'eye candy' physics the GPU is an excellent candidate. For less embarrassingly parallel work that requires sending data back and forth, it's better to just stick to using the CPU.
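To make that distinction a bit more concrete, here's a rough sketch of the sort of 'eye candy' simulation that maps well onto a GPU: one thread per particle, no interaction between particles, and the results stay in video memory for the renderer instead of travelling back over the bus. This is just illustrative CUDA with made-up names, not Ageia's API or anybody's actual implementation.

// One thread per independent particle; pure fire-and-forget eye candy.
__global__ void integrateParticles(float4* pos, float4* vel, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    float4 v = vel[i];
    v.y -= 9.8f * dt;                 // gravity is the only force applied

    float4 p = pos[i];
    p.x += v.x * dt;                  // simple explicit Euler step
    p.y += v.y * dt;
    p.z += v.z * dt;

    if (p.y < 0.0f) {                 // bounce off a ground plane
        p.y = 0.0f;
        v.y *= -0.5f;
    }

    pos[i] = p;                       // results stay in video memory for rendering,
    vel[i] = v;                       // so nothing has to cross the bus
}

// Host side, once per frame, e.g.:
//   integrateParticles<<<(count + 255) / 256, 256>>>(d_pos, d_vel, count, dt);

The moment those particles have to push back on gameplay (or collide with each other), you're into broad-phase work and readbacks to the CPU, which is exactly the 'sending data back and forth' case where sticking to the CPU is the saner choice on today's hardware.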
 
The average gamer is most likely going to have a dual-core CPU that isn't fully utilized and a GPU that would have a hard time doing physics work without noticeably dropping frame rates.
I definitely don't disagree with the first statement in the current market situation (but I'd point out this kind of strategy would be long-term); however, 'dropping frame rates' makes absolutely zero sense unless you're thinking of raw performance requirements.

Single-core CPUs are already extinct, while IGPs and low-end cards are actually gaining popularity as a viable option for people who don't game every day.
That latter information is completely inaccurate. Double-attach rates are at all-time records, ffs! (i.e. discrete GPUs sold into systems that also have an IGP, so one machine likely gets counted both as one unit for Intel and as one for NV or AMD in market share reports, artificially inflating Intel's share). And let's be honest with ourselves: $100 GPUs are completely inadequate for any AAA game released in the last 6 months.

So for game developers it's much more interesting to do their physics on the CPU instead of on the GPU, which can vary widely in performance.
That's completely true today, yes. In the long term, though, I think that's a flawed assumption, for perf/mm² and perf/watt reasons as well as market segmentation. The fact that Intel wants to run Havok on Larrabee should make it obvious they think that way too (although you could argue it's even more desirable on Larrabee than on classic SIMD GPUs!)

Quad-cores are also already getting affordable, and it won't take many years till you can't buy a dual-core any more, while weaker GPUs will keep existing next to the awesomely powerful cards.
We must not be looking at the same roadmaps. Dual-cores will remain important throughout the 32nm era, which means them truly exiting the market is a good 5 years away! Of course, by then it won't be gamers buying those chips, but you get the point.

Of course, for low-interaction 'eye candy' physics the GPU is an excellent candidate. For less embarrassingly parallel work that requires sending data back and forth, it's better to just stick to using the CPU.
Mostly agreed, yeah. That's certainly true on current GPUs, I'm skeptical it'll remain the case forever though... :)
 
Oh boy, here we go again.

GPU physics is not "dumb".

Think about this: what is the average PC gamer more likely to have?
1) a 4-8 core CPU
2) a discrete GPU

Clearly the answer is a discrete GPU. GPU physics has already been demonstrated as a viable technology, simply lacking software support (API, game dev inclusion).

Oh boy, and imagine what that GPU will be doing, especially if the game is gfx-intensive. Then you force it to do two entirely different things and steal the valuable gfx flops for physics, halving your fps in the process. All that while at least one CPU core (or a few) sits idle.

Uhm, yeah, very clever.
 
Oh boy, and imagine what that GPU will be doing, especially if the game is gfx-intensive. Then you force it to do two entirely different things and steal the valuable gfx flops for physics, halving your fps in the process. Uhm, yeah, very clever.
You're NOT looking at it the right way. It's a long-term battle to make the IPC-optimized CPU a commodity (even in the gaming market) and steal its ASP to your own advantage. So obviously a 'balanced' system there would have a cheaper CPU and a slightly more expensive GPU.

Whether that will work, whether it could even work, is a complex debate that also requires knowledge of DX11/DX12 GPU architectures, adoption rates, and marketing budgets. Personally, I think that it could happen if NV or AMD started really executing in the right direction there, but we'll see if they do.
 
That latter information is completely inaccurate. Double-attach rates are at all-time records, ffs! (i.e. discrete GPUs sold into systems that also have an IGP, so one machine likely gets counted both as one unit for Intel and as one for NV or AMD in market share reports, artificially inflating Intel's share). And let's be honest with ourselves: $100 GPUs are completely inadequate for any AAA game released in the last 6 months.


I'm curious, where are these statistics? It would be interesting to find out the "market" size for certain levels of graphics performance.
 
I don't have access to recent price segment statistics sadly (they're only in paid reports). The only ones you *might* be able to find are channel-specific and won't consider OEMs, which are obviously a major part of the market. And you'd expect OEMs to be lower-end on average, too, which makes the numbers rather worthless unless you know by how much (which I don't).

Regarding double attach rates, NVIDIA's internal estimates (which were given at a non-quarterly analyst conference in November) are a mind-blowing 55%. Yes, that means less than half of the IGPs being sold are actually being used, according to them! Which would put Intel very significantly behind NVIDIA in practice too. I'd expect NV's claims to be on the high side, but still...
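To spell out the arithmetic behind that claim (taking NV's 55% figure at face value, which is obviously their own estimate rather than an audited number):

  100 IGP chipsets shipped, 55% double-attach
  -> ~55 of them end up in machines that also get a discrete card (counted once for Intel and once for NV/AMD in the unit reports)
  -> ~45 of them are actually the GPU being used, i.e. less than half

Strip the double-attached systems out and Intel's share of GPUs actually in use shrinks accordingly.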
 