AGEIA bought!

Arun, I would like to see your thoughts on this: Will the future NV/AGEIA Physics SDK (a) be designed to work on other GPUs and (b) will it work on CPUs as well (and which ones--will it be an x86, Cell, PPC, etc. product?).

Or will NV make a bold push for, "Only on Nvidia"?

I ask because this could go all sorts of directions. How they handle this will be very interesting.
 
Anyhow, since I won my bet wrt the Ageia acquisition (now I'm just waiting to see if I won my bet wrt how much NV would be willing to pay too), I'll take a new one: NVIDIA will announce they will acquire VIA before May 2008.

Will you eat your hat, if you're wrong?

Hybrid SLI with an IGP as a "PPU" for effects would be nice.
http://www.tgdaily.com/content/view/26667/128/
When I recently spoke to ATI, they told me that they're also looking at unlocking their onboard graphic chips, which currently sit idle on the motherboard when graphics cards are present, to do the physics work. Everything is gravy, apparently.
 
I was just going to post that Arun wins. :D Congrats.

As for eating hats, he could also just buy some VIA stock. That solves both reward and punishment, more or less. ;)
 
Arun, I would like to see your thoughts on this: Will the future NV/AGEIA Physics SDK (a) be designed to work on other GPUs and (b) will it work on CPUs as well (and which ones--will it be an x86, Cell, PPC, etc. product?).
My expectation would be initial support for CPU/PPU, and DX11 NV-only GPU support coming later. If the PPU is ~$99 and they promise it'll be integrated into next-gen GPUs, I really don't think adoption will be too much of a problem. Also, I'm not sure why you seem to be expecting an all-new SDK?

Regarding CELL/PPC, well, what this means for the PS4 will be interesting. PhysX already works on CELL, and presumably CELL2 will be similar enough to port it easily. Who knows though, and that's more of a political question than a technical one anyway.
Or will NV make a bold push for, "Only on Nvidia"?
Let's put it this way: What are the chances they make it run on Larrabee's vector units, or on Fusion's GPU? :p
Arnold said:
Will you eat your hat, if you're wrong?
I'll eat my hat if it doesn't happen in 2008. However, I certainly won't eat it if nothing happens before May 2008 - I'm well past the point of eating hats because companies don't know what's best for them or are a bit slow to 'get' it.
Arnold said:
Hybrid SLI with an IGP as a "PPU" for effects would be nice.
One day, one day, I'll have the head of whoever thought it'd be funny to imply physics and graphics couldn't easily run on the same chip at the same time.
Arwin said:
I was just going to post that Arun wins. :D Congrats.
As for eating hats, he could also just buy some VIA stock. That solves both reward and punishment, more or less.
Thanks! As for VIA, I'm dead scared of Asian stocks, so I'll pass... :) (and I wouldn't be surprised if their valuation went down a little bit more first)
EDIT: Ohh, VIA is down a fair bit since I last looked; was at ~$800M, and it's ~$650M now.
 
Arun, I would like to see your thoughts on this: Will the future NV/AGEIA Physics SDK (a) be designed to work on other GPUs and (b) will it work on CPUs as well (and which ones--will it be an x86, Cell, PPC, etc. product?).

Or will NV make a bold push for, "Only on Nvidia"?

I ask because this could go all sorts of directions. How they handle this will be very interesting.


I think it's becoming obvious why Intel is so pissed off regarding Nvidia.
For quite some time (at least a year), Intel has internally viewed Nvidia as the main threat, not AMD.

Time will tell, but when they acquire VIA too, they will definitely be positioning themselves against both AMD and Intel.
 
Compared to the CPU, an IGP is fast at graphics thanks to its texture samplers. But for physics you just need GFLOPS, and I don't see much advantage over using an IGP instead of using CPU cores.
Maybe the fact it's there, idle, and completely free? PhysX has traditionally been able to work on either the PPU or the CPU.

I'd certainly expect CPU support to remain; so whether it'd run on the CPU, discrete GPU or IGP could even be determined at load time, depending on what should give the best performance. As I pointed out though, I don't believe NV will support PhysX on GPUs before DX11. We'll see what happens.

EDIT: And as I said above, I'm also not convinced running physics on an IGP makes sense, for a variety of reasons (including memory bandwidth being stolen from the CPU, latency, and overall performance). I'm not willing to exclude the possibility either, though.
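To make the load-time idea concrete, here's a minimal sketch of what such device selection might look like (all names and numbers here are hypothetical, nothing from the actual SDK): pick the device with the best raw throughput, discounted by how costly it is to shuttle results back to the CPU.

```cpp
#include <vector>
#include <iostream>

// Purely hypothetical sketch of load-time device selection; none of these
// names come from the real PhysX SDK.
enum PhysicsDevice { DEV_CPU, DEV_IGP, DEV_DISCRETE_GPU, DEV_PPU };

struct DeviceInfo {
    PhysicsDevice type;
    float gflops;           // estimated peak MADD throughput
    float transfer_penalty; // crude scale factor for CPU<->device round trips
};

// Pick whichever device maximizes estimated effective throughput, i.e.
// raw FLOPS discounted by the cost of moving results back to the CPU.
PhysicsDevice pick_physics_device(const std::vector<DeviceInfo>& devices) {
    PhysicsDevice best = DEV_CPU;
    float best_score = 0.0f;
    for (size_t i = 0; i < devices.size(); ++i) {
        float score = devices[i].gflops / devices[i].transfer_penalty;
        if (score > best_score) { best_score = score; best = devices[i].type; }
    }
    return best;
}

int main() {
    // Made-up numbers in the spirit of the thread: a quad-core CPU versus an
    // HD3200-class IGP that also steals system memory bandwidth.
    std::vector<DeviceInfo> found;
    DeviceInfo cpu = { DEV_CPU, 50.0f, 1.0f };
    DeviceInfo igp = { DEV_IGP, 40.0f, 2.0f };
    found.push_back(cpu);
    found.push_back(igp);
    std::cout << "chosen device: " << pick_physics_device(found) << "\n";
    return 0;
}
```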
 
Compared to the CPU, an IGP is fast at graphics thanks to its texture samplers. But for physics you just need GFLOPS, and I don't see much advantage over using an IGP instead of using CPU cores.

There is a funny old presentation from ATi:
[slide from the ATI presentation]

And here is an article in Russian: http://www.3dnews.ru/video/ati_physics2/ (September 2006). The Russian guys were able to run some tests of their own with the ATi/Havok FX demos. The "physics hardware" was an X1900, an X1600 and an Intel Core 2 Duo X6800.
http://www.3dnews.ru/video/ati_physics2/index2.htm
http://www.3dnews.ru/video/ati_physics2/index3.htm
http://www.3dnews.ru/video/ati_physics2/index4.htm
And you can see the X1600 was faster than the X6800. Most current IGPs are as powerful as an X1600; some of them are more powerful.
 
Anyhow, since I won my bet wrt the Ageia acquisition (now I'm just waiting to see if I won my bet wrt how much NV would be willing to pay too), I'll take a new one: NVIDIA will announce they will acquire VIA before May 2008.


Anybody got Transmeta on the radar?
There is a public offer to buy out Transmeta, and AMD also holds about 7%?
Is anybody expecting Transmeta to be bought by AMD or Nvidia, or even Intel?
 
I never understood what purpose the PPU was for. I don't understand what it has that is supposed to make it excel over a graphics core, considering GPUs are already math monsters as they are and are much better equipped with a higher-bandwidth infrastructure. I thought (and still do) that the PPU concept was nothing but a gimmick.
 
And you can see the X1600 was faster than the X6800. Most current IGPs are as powerful as an X1600; some of them are more powerful.
As Sound_Card already pointed out, current IGPs are surely not as powerful as an X1600.

Also, what software was used for the "CPU" benchmark? Is it Havok code optimized by Intel with SSE, or a poor attempt at translating a shader to C code? I did some profiling of AGEIA code a while ago and it was full of horrible things: changing the FPU mode to get the correct rounding, full-precision square roots and divisions for vector normalization, it was single-threaded, etc.
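To make that concrete, here's a minimal sketch (my own code, not actual AGEIA source) of the difference between naive full-precision normalization and the SSE reciprocal-square-root estimate plus one Newton-Raphson step that optimized physics code would typically use:

```cpp
#include <xmmintrin.h> // SSE intrinsics
#include <cmath>

// The "horrible" way: a full-precision square root plus three divisions.
void normalize_naive(float v[3]) {
    float len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

// _mm_rsqrt_ss returns a ~12-bit estimate of 1/sqrt(x) in a few cycles;
// one Newton-Raphson iteration refines it to near single precision.
static float fast_rsqrt(float x) {
    float r = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    return r * (1.5f - 0.5f * x * r * r);
}

// The fast way: no square root, no division, just multiplies.
void normalize_fast(float v[3]) {
    float inv = fast_rsqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    v[0] *= inv; v[1] *= inv; v[2] *= inv;
}
```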

Furthermore, people running games that require high-performance physics processing are likely already considering an upgrade to quad-core. And Nehalem octa-cores might be presented by the end of the year. That's 205 GFLOPS at 3.2 GHz (8 cores × 4-wide SSE × 2 FLOPs per cycle × 3.2 GHz = 204.8 GFLOPS). Nothing to sneeze at...

And lastly, benchmarks with a massive number of repetitive physics elements without user interaction will do well on a GPU. Running the physics game developers and artists actually want in their games is a different story. As soon as you need lots of communication back and forth between CPU and GPU it becomes a lot more interesting to just keep everything on the CPU.
 
As Sound_Card already pointed out, current IGPs are surely not as powerful as an X1600.
But nearly:

X1600XT: 57.6GFLOPs MADD
HD3200 IGP: 40GFLOPs MADD

;)

And since NV's MCP78 has scalar ALUs, it should be seriously faster in some situations.
 
I think it is most telling that NVIDIA is pushing Hybrid SLI out to the high-end crowd with the 7x0a series of chipsets. Their MCP78 will be integrated on a LOT of boards, ranging from the low end to the high end. The installed base will be pretty significant, and if games come out using PhysX with extra goodies for anyone with some sort of hardware acceleration, the game will likely tell you: "hey, you can turn some of this cool crap on, because you may not be using your IGP, but I certainly can!"

It would be in NVIDIA's own best interest to open up PhysX to AMD, as that would sell a lot more developers on PhysX. But of course, since NV controls the software, they will obviously make it run far better on their SPs than AMD could.
 
See Beyond3D's news post. My personal interpretation of that is they'll finish their current next-gen PPU and sell it as a discrete part, then stop PPU development.
Have a look at the press release: it mentions PhysX HW just as a footnote. There's really no point in releasing one more product and then killing it. Ageia's chip designers shouldn't be surprised to find a DXx spec on their desk this morning. ;)
 
VIA's "Isaiah" would be very tasty in Nvidia's PoV.
Here's a industry standards modern x86 chip that looks very promising in terms of Performance/Watt/mm^2, and that would give them direct and instant competing platform solutions (together with GoForce and/or PortalPlayer's ARM CPU's) on the booming UMPC and Smartphone markets.
"Isaiah" looks more than capable to fight against Intel "Silverthorne".

Yet, it would be the third (and, by far, the most expensive) acquisition that Nvidia would make in less than two quarters.
 
VIA's "Isaiah" would be very tasty in Nvidia's PoV.
Here's a industry standards modern x86 chip that looks very promising in terms of Performance/Watt/mm^2, and that would give them direct and instant competing platform solutions (together with GoForce and/or PortalPlayer's ARM CPU's) on the booming UMPC and Smartphone markets.
"Isaiah" looks more than capable to fight against Intel "Silverthorne".

Isaiah had too high a power profile. The ULV version is somewhere around 5 W. It needs to be around 200-300 mW TDP, with 20-50 mW during idle execution and <5 mW in sleep. Basically you are limited to a <5 Wh battery in these types of devices and want to shoot for a battery life of around 20-40 hours (5 Wh over 20-40 hours leaves only 125-250 mW average for the entire device).

Even the rumored low-power SKU, at 1/5-1/10 the power of the CN ULV SKUs, is being called too power hungry.

aaron spink
speaking for myself inc.
 
Question, can we still expect MS to develop Physics for DX11?
Some sites talk about 'compute shaders'. These would be targeted at GPGPU in general, not just physics, so it's likely that Microsoft didn't change any plans for DirectX 11. The specification might already be finalized anyway.
 