Industry applications for AGEIA?

Honestly, these days AGEIA sounds kind of desperate.
If you are out there talking to AMD, NVIDIA (both of whom have their own physics solutions in the works, and far greater resources to push them) and Intel, you aren't looking for someone to collaborate with you, you're looking for someone to buy you...
 
Honestly, these days AGEIA sounds kind of desperate.
If you are out there talking to AMD, NVIDIA (both of whom have their own physics solutions in the works, and far greater resources to push them) and Intel, you aren't looking for someone to collaborate with you, you're looking for someone to buy you...

Tell us something new; we all knew that when we first heard about AGEIA.
 
Honestly, these days AGEIA sounds kind of desperate.
If you are out there talking to AMD, NVIDIA (both of whom have their own physics solutions in the works, and far greater resources to push them) and Intel, you aren't looking for someone to collaborate with you, you're looking for someone to buy you...

I have heard the same old chime since the beginning.
Fact is, they are the only company with games out using hardware-accelerated (not CPU) physics.
They got a BIG sum of cash when Sony licensed the PhysX API for all PS3 games.
It's even out on the Xbox 360 too, in "Gears of War"; PhysX is the physics API there.
Developers seem to be adopting their API just fine?

How long before the "doom 'n gloom" comes true? :)
I'd like a timeline, please.
 
Developers seem to be adopting their API just fine?

That's all well and good, but the API is free for developers to use; that's why adoption has been good.
They need the hardware sales, and right now there is no real compelling reason to buy a card. They need a killer app like 3DFX had with Tomb Raider or GLQuake and the like, something which gives consumers a tangible benefit from having spent £200 on their card.
 
That's all well and good, but the API is free for developers to use; that's why adoption has been good.
They need the hardware sales, and right now there is no real compelling reason to buy a card. They need a killer app like 3DFX had with Tomb Raider or GLQuake and the like, something which gives consumers a tangible benefit from having spent £200 on their card.

Right now they don't need anything; it's in the hands of the developers. The new CF engine (not the same as the one used in the "CF - Training" demo) will be nice.
UE3 will also support this, and you're telling me that people who spend >$500 to get more eye candy won't spend ~$140 to get more eye candy + better physics?

Then it's not a problem on AGEIA's side, but a bias at the consumer level...
 
And you're trying to compare the architecture of the PPU to a GPU?
If so, have fun with that...
It's a big vector processor with onboard memory, much like a GPU. If you want to deal with large data sets, the PCI bus is going to stab you in the face. This isn't some crazy "hey, a PPU is exactly like a GPU" statement. It's a simple statement of fact regarding what a PPU actually is and what your average load would entail.
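To put a rough number on that bottleneck, here is a back-of-envelope sketch. The PCI peak figure (133 MB/s for a classic 32-bit/33 MHz bus) is the standard theoretical maximum; the 64 bytes per rigid body is purely an assumed illustration (position, orientation, linear and angular velocity), not anything AGEIA has published:

```python
# Rough estimate: how much physics state can cross a classic
# 32-bit / 33 MHz PCI bus per frame at 60 fps, at theoretical peak.
PCI_PEAK_MB_S = 133.0    # 32-bit @ 33 MHz PCI, theoretical maximum
FPS = 60
BYTES_PER_BODY = 64      # assumed: position, orientation, velocities

budget_bytes = PCI_PEAK_MB_S * 1e6 / FPS          # bytes per frame
bodies_per_frame = int(budget_bytes / BYTES_PER_BODY)

print(f"Per-frame transfer budget: {budget_bytes / 1e6:.2f} MB")
print(f"Rigid-body states per frame at peak: {bodies_per_frame}")
```

In practice sustained PCI throughput is well below the 133 MB/s peak, and that budget has to cover both uploads and readbacks, so the real ceiling is lower still.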
 
Well, it depends... the cheapest PhysX accelerator here in Finland is ~$270, and there are no PhysX-powered games out there worth buying it for. So although I'm very excited about PhysX, I'm still not going to buy it just for cool SDK tech demos etc.

Project Offset will be using PhysX (hardware support), but that game is still far from release.

EDIT: Argh... that was way too off-topic. Forgive me :)
 
It's a big vector processor with onboard memory, much like a GPU. If you want to deal with large data sets, the PCI bus is going to stab you in the face. This isn't some crazy "hey, a PPU is exactly like a GPU" statement. It's a simple statement of fact regarding what a PPU actually is and what your average load would entail.

And disregarding the architectural and API differences ;)
 
And disregarding the architectural and API differences ;)
API differences have nothing to do with it. The architecture has nothing to do with it. You have a bottleneck. If you want to do what they propose, you will always be limited by this bottleneck in a really significant way.
 
If they claim it's twice as fast as a Cell (which is twice as big), you might wonder what three quarters of Cell's transistors are used for. They're used for local memory and fast communication.

So, it might be twice as fast, on very small datasets. And that's definitely a problem when you need to go through slow PCI to get your data, as the others said.
 
API differences have nothing to do with it. The architecture has nothing to do with it. You have a bottleneck. If you want to do what they propose, you will always be limited by this bottleneck in a really significant way.

By that reasoning it should be too "slow" for physics too, right?
 
If they claim it's twice as fast as a Cell (which is twice as big), you might wonder what three quarters of Cell's transistors are used for. They're used for local memory and fast communication.

So, it might be twice as fast, on very small datasets. And that's definitely a problem when you need to go through slow PCI to get your data, as the others said.

It's more specialized than the Cell, is my impression.
What interests me is the lower power consumption, combined with its performance.
And what bugs me is that so little is known about the actual hardware and its inner workings.
 
It's more specialized than the Cell, is my impression.
What interests me is the lower power consumption, combined with its performance.
And what bugs me is that so little is known about the actual hardware and its inner workings.

Yep, totally different architectures. The PPU, like the Clearspeeds, has no problem running on a PCI bus. If the Clearspeeds can do real scientific problem acceleration using the PCI bus with great efficiency, I don't see why a PPU couldn't do the same with the right software tools.
 
If the Clearspeeds can do real scientific problem acceleration using the PCI bus with great efficiency, I don't see why a PPU couldn't do the same with the right software tools.

Therein lies the rub, IMO. If Ageia are to survive in the broader non-PC-gaming market, they're going to have to provide the tools to program their hardware directly, rather than through some niche game-physics API, which is almost completely useless for technical and scientific applications.

Do they have the resources to do this? CUDA and CTM are already in the wild, likewise the development kits for Cell. You can bet that when Fusion and Intel's equivalent come to market they'll be well supported in this respect. [Clearspeed lags here too, from what I've seen of their documentation; they could be roadkill too if they're not careful.]
 
If you want to deal with large data sets, the PCI bus is going to stab you in the face.
Isn't there a PCIe version of the Ageia card out yet?

I thought that was demonstrated ages ago. Even 1x PCIe is like 3-4x faster than regular PCI or somesuch, isn't it?

Peace.
 
Isn't there a PCIe version of the Ageia card out yet?

I thought that was demonstrated ages ago. Even 1x PCIe is like 3-4x faster than regular PCI or somesuch, isn't it?

Peace.
They've been saying a PCIe PhysX card would be coming soon since PhysX was announced. Even with PCIe 1x, you're still quite a bit behind the PCIe 16x link that GPUs have.
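For reference, a small sketch of the theoretical peak figures in play here, using per-direction numbers for first-generation PCIe (the generation current at the time):

```python
# Theoretical peak bandwidth per direction (MB/s); real sustained
# throughput is lower, especially on plain PCI.
buses = {
    "PCI (32-bit, 33 MHz)": 133,
    "PCIe 1.0 x1": 250,
    "PCIe 1.0 x16": 4000,
}

baseline = buses["PCI (32-bit, 33 MHz)"]
for name, mb_s in buses.items():
    print(f"{name:22s} {mb_s:5d} MB/s  ({mb_s / baseline:.1f}x plain PCI)")
```

On these peak numbers a PCIe x1 link is closer to 2x plain PCI than 3-4x, though PCIe's full-duplex links and better sustained efficiency widen the practical gap; either way, an x16 GPU slot has 16x the link bandwidth of x1.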
 
What is the chance they develop a version of PhysX for the AMD Torrenza platform? By doing that, people buying future 4x4 motherboards would get the option to populate them with either 2x CPU or 1x CPU + 1x PhysX.

Maybe they could even get some official support from AMD by doing this? I don't expect AMD to be able to match Intel on CPUs, so being able to add a PPU directly on the motherboard could create some really powerful AMD boxes.

Per
 