Nvidia Ageia acquisition - console edition

http://www.tomshardware.com/2008/02/05/analysis_nvidia_s_ageia_purchase_a_brilliant_move_/index.html


The actual success of Ageia is the PhysX SDK. PhysX is reportedly used in more than 140 game titles developed for Microsoft's Xbox 360, Nintendo's Wii, Sony's PlayStation 3 consoles and PC platforms. At the time of this writing, there were more than 10,000 active users of the PhysX SDK. Game developers who requested to remain anonymous told us that "PhysX is the best thing that can be utilized on a pathetically under-performing [Xbox 360] PowerPC processor".

I frowned when I read that comment about the "pathetically" under-performing Xbox 360 PowerPC processor, but never mind that.


Why Ageia matters

In order to run physics effects on your PC today, you typically have to use the CPU, regardless of the platform you rely on. However, none of these CPUs - IBM's Xenon and Broadway, Sony's Cell, Intel's Core or AMD's Phenom - has yet shown that it can be a capable physics driver, so, in our opinion, specialized physics accelerators will be the solution for the future.

Even at Intel there is Larrabee, which is designed to become an all-purpose accelerator chip used for graphics as well as ray tracing and physics, according to sources close to the company. The second part of the equation is the development of next-generation game engines, which are going to drive the implementation of real-world physics on next-generation consoles and PCs.

Let's look at the public statements made regarding the Nvidia-Ageia deal:

Nvidia released the following statement from Jen-Hsun Huang, co-founder and CEO:

"The AGEIA team is world class, and is passionate about the same thing we are - creating the most amazing and captivating game experiences. By combining the teams that created the world's most pervasive GPU and physics engine brands, we can now bring GeForce-accelerated PhysX to hundreds of millions of gamers around the world."

Manju Hegde, co-founder and CEO of Ageia, released the following statement:

"Nvidia is the perfect fit for us. They have the world's best parallel computing technology and are the thought leaders in GPUs and gaming."

True or not, the two statements refer to the present situation. But this deal was all about the future and controlling (or at least balancing) the world of physics computing, which is set to march beyond the domain of games. Based on these statements, you might think that all currently shipping GeForce products support PhysX, while the truth is that PhysX will be implemented in future chips, destined to ship in the hundreds of millions. Suddenly there is a pretty good reason for developers and publishers to jump on PhysX immediately.

Following the acquisition yesterday, we had the chance to talk to Tim Sweeney, founder of Epic Games and the brain behind the Unreal engine. Sweeney said that "we've had a great relationship with the Ageia team for years, and bundle their PhysX library with Unreal Engine 3 as its standard physics solution." He added that he was "happy to see Nvidia jump in and throw its massive weight behind physics."

Sweeney mentioned that he is planning to use Ageia physics features with "future Unreal Engine 3 games on all platforms."

The "all platforms" note is particularly interesting. Hidden away from the eyes of the public, engineers are creating next-generation Xbox, next-gen PlayStation and next-gen Wii titles. We managed to find out that the creative spirits behind these projects are now hidden away in caves, working hard on defining the new silicon for future parts. You can expect the new wave of consoles to come to market in the 2010/11 timeframe, even though conservative estimates are talking about 2012 at this point.

[Image: wheredoesphysxfit425.jpg - diagram of where PhysX fits]


But, clearly, Nvidia's mention of "hundreds of millions of gamers" was a signal for the IT industry as a whole: physics acceleration will be pushed into all major graphics application markets. When it comes to the PC itself, Nvidia has several plans, illustrated with this author's 2nd-grade MSPaint skills in the picture above. The future is in physics being rendered on Nvidia's integrated chipsets and graphics cards.

The key to this strategy is not to think just about Intel or AMD processors, but a bit wider than that. If we are listening to the "rumors that could be true" department, we should pay attention to the next-generation Sony console, which may integrate physics capability directly into an Nvidia GPU - one that reportedly will not be the last-minute patchwork Nvidia had to deliver with the PS3's RSX GPU.

What makes this deal a sensible solution is the fact that Ageia has the engineers to take advantage of Nvidia's future hardware. You can bet the farm on future GPUs having substantial input from Ageia's staff, because current GPUs have a deadly flaw in GPGPU terms: there are substantial performance penalties when branching is used.

On the other hand, CPUs and PPUs excel at branching, because there is enough cache to hold "what-if" instructions and correctly predict what could happen. Intel knows that and is building Larrabee with a massive cache in the middle, while Nehalem, Westmere and Sandy Bridge will continue to increase the overall amount of cache while re-introducing Hyper-Threading, enabling up to 16 threads on a single socket.
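The branching penalty is easy to picture with a toy model of SIMD execution, in which a "warp" of lanes shares a single program counter; when lanes disagree on a branch, the hardware runs both sides serially, masking off inactive lanes. The lane count and cycle costs below are made up purely for illustration:

```python
def warp_branch_cost(conditions, then_cost, else_cost):
    """Cycles a warp spends executing an if/else.

    conditions: per-lane branch outcomes (True -> then-side taken).
    If every lane agrees, only one side runs; if lanes diverge,
    the warp pays for both sides back to back.
    """
    takes_then = any(conditions)        # at least one lane runs the then-side
    takes_else = not all(conditions)    # at least one lane runs the else-side
    return then_cost * takes_then + else_cost * takes_else

# Uniform branch: all 32 lanes take the same path -> one side's cost.
uniform = warp_branch_cost([True] * 32, then_cost=10, else_cost=10)

# Divergent branch: lanes split -> both sides execute, cost doubles.
divergent = warp_branch_cost([i % 2 == 0 for i in range(32)], 10, 10)

print(uniform, divergent)  # 10 vs 20: divergence serializes the two paths
```

A branch-predicting CPU pays (on a correct prediction) only for the path it actually takes, which is why branchy "what-if" code favored CPUs and the PPU at the time.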

It is too early to say what will be the first GPU influenced by Ageia's engineers, but we expect that some influence might already be seen in the high-end graphics chip coming in 2009.

Conclusion: Nvidia claims a key spot in next-generation console and PCs


I am drooling at the thought of a PS4 with

*next-generation CELL2 with 32-64 SPEs, providing TeraFlop or multi-TeraFlop compute performance.

*Rambus main system memory in the 1 TeraByte/sec range (XDR2 will be too old for PS4)

*a much more custom Nvidia GeForce 11 or GF12-derived GPU
(that means 1, 1.5 or 2 architectural generations beyond the next-gen Nvidia architecture. By that next-gen architecture, GF10, I do not mean the expected high-end refresh of G80 that we thought was G92, which is known as D9E or what I still call 'NV55'. The G80 is 'NV50', the next high-end GPU is 'NV55' (2008?), and the next totally new architecture is 'NV60' (2009). The PS4 GPU should be anything from a custom NV70 (2011?) to NV75 (2012?) or a custom NV80 (beyond 2012). A lot could happen in Nvidia's roadmap between now and PS4, but I think we could at least agree the PS4 GPU will be beyond GF10 / NV60.)
*features equivalent to DX12 / D3D12, integrated next-gen PhysX tech, and EDRAM providing the absolutely immense bandwidth that even Rambus' 1 TB/sec main system memory would not be able to provide: something like the equivalent leap in bandwidth that Xenos' EDRAM provided in 2005.
*the ability to not only render CGI-like graphics but simulate all the physics interactions. That might leave CELL2 free to help with ray tracing, global illumination or merely geometry and lighting calculations so RSX2 can do everything else.

PS4 should be a machine that lasts for the rest of the next decade after it's released. That means at the very most 9 years (if released in 2011), but more like 7-8 years (if released in 2012-2013).

It would be sad if PS4 were a modestly more powerful machine that starts to look outdated by the middle of the next decade, just as PS3 looks outdated now in terms of graphics, much less than 2 years after release.

I think Sony needs to push PS4 as the one-console future as much as they can. I am NOT saying I think we should have one universal console; there WILL be competition. There has to be. However, from Sony's perspective, if they can get most of the industry on board with PS4, like they did with Blu-ray (but failed to do with PS3), they can get the support of other manufacturers like Matsushita, Apple, Pioneer, Philips, Samsung, Sharp etc. Sony has to go big or go home. Xbox 3 is coming, a Super Wii is coming. Sony cannot afford to F around. It's not about having the winning media format (Blu-ray); it's about having a killer chipset IMO.
 
If MS is about to invest a lot of its silicon budget in a GPU that would do graphics and physics, investing heavily in the CPU starts to make a lot less sense IMO.

Anyway, it makes sense that Nvidia and Sony announced they have long-term plans.
 
But it has to handle other things, like graphics-related stuff, AI, animations etc.
Graphics-related stuff: no need if the GPU is good enough.
Animations could be handled by the GPU too.

So this leaves AI; Larrabee could supposedly do well there, but where the GPU will be in this regard by 2011/12... I have no clue :LOL:

Anyway, Sony's decision would depend on what is more convenient for devs:

having a pretty weak CPU in raw power (but efficient and easy to use) and using a huge GPU (or GPUs) as an accelerator where the workload can be balanced, or

more balanced hardware in regard to where the power is, but maybe a tougher time balancing the workload.
 
Hmm, CPU handling GFX stuff and GPU being used for physics? How clever, what a reversal of roles ;)

Next thing we'll use the network adapter to drive the display?
 
Anyway, from what I read, in the PC version the CPU would always have to deal with some calculations.
And GPUs will become more than GPUs: more general-purpose accelerators. Flexibility is more and more needed in the graphics pipeline, and it looks like manufacturers are drooling at the idea of capturing HPC market share.
As for the cleverness, both Intel and Nvidia are well-known dumb companies ;)
 
I'd be surprised if physics ops on the PS4 were migrated from the Cell to the GPU, just because it would be, well... a seeming counter-intuitive allocation of resources on a mm^2 basis. I do believe that there is a strong niche for NVidia to carve out with this, but I wonder if it would make sense for Sony. It would almost seem to me to be a way for NVidia to bring other platforms up to that level of physics rather than target the Cell platform itself. BUT, anything's possible of course, and I certainly believe that NVidia will be the company onboard for the GPU in PS4. Still, I just hope that the present hats at Sony are up on their technical vision enough to not let bloat/redundant features get incorporated into the console just because NVidia pushes it on them.
 
Sony has to go big or go home. Xbox 3 is coming, a Super Wii is coming. Sony cannot afford to F around. It's not about having the winning media format (Blu-ray); it's about having a killer chipset IMO.
Well, they are in another format war now... the new format is the Cell B.E. programming model and SPE ISA. More powerful chipsets will eventually appear, but they won't wait too long as long as they carry this abstract format.
 
I agree, CarlB. If there's a derivative of Cell in the PS4, Sony would be dumb to invest more transistors than needed in order for the GPU to deliver the kind of graphical fidelity they want to achieve with their next system.

You said something interesting in the other Ageia thread: you said that the PPU includes a MIPS core. Do you have more information, or ideally a link, about the PPU architecture? It's pretty much non-public stuff, ain't it?

Do you think Nvidia could include a CPU in their GPUs in order to achieve the kind of flexibility they're aiming at?

EDIT :oops: my bad, Carl, really, I mixed you up with Arun... Sorry
 
PhysX in PS4 is just plain dumb ;). You have a new-improved performance monster in Cell2, and presumably a much better designed GPU chosen earlier in development with greater balance and unified shaders and graphics niceties, and then add a monster physics processor. Well, what would the Cell do? If graphics and physics are being taken care of, that's most of your heavy lifting done there! Better still to use Cell to drive physics using existing engines scaled up to improved hardware - a simple design for devs and a cheaper solution, and it actually makes use of Cell's design rather than adding it in as a truck-load of redundant processing power.
 
Well, on physics, IMHO the best is to have the CPU and the GPU working together…
Physics that has an impact on gameplay (like environment deformation) probably needs a CPU implementation, and physics that is purely visual (particles, explosions) can be done on the GPU alone…
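That split can be sketched as a simple routing rule: anything whose result feeds back into game logic stays on the CPU, while fire-and-forget visual effects go to the GPU. The task list and field names below are purely illustrative:

```python
# Hypothetical scheduler for the CPU/GPU physics split described above.
# "Gameplay" physics must be read back by game logic every frame, so it
# runs on the CPU; "visual" physics is rendered and never read back, so
# it can be offloaded to the GPU.

GAMEPLAY = "cpu"   # game logic depends on the result
VISUAL = "gpu"     # fire-and-forget, consumed only by the renderer

def schedule(task):
    """Route a physics task by whether gameplay depends on its outcome."""
    return GAMEPLAY if task["affects_gameplay"] else VISUAL

tasks = [
    {"name": "terrain deformation",          "affects_gameplay": True},
    {"name": "ragdoll blocking a doorway",   "affects_gameplay": True},
    {"name": "particle explosion",           "affects_gameplay": False},
    {"name": "cloth flutter",                "affects_gameplay": False},
]

for t in tasks:
    print(t["name"], "->", schedule(t))
```

The appeal of the rule is that it avoids round-trips: readback from the GPU was expensive on hardware of this era, so only results nobody reads back are worth sending there.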
 
Well, on physics, IMHO the best is to have the CPU and the GPU working together…
Physics that has an impact on gameplay (like environment deformation) probably needs a CPU implementation, and physics that is purely visual (particles, explosions) can be done on the GPU alone…
If you have a classic CPU that may be true, but with a Cell that is about to include, say, 20 SPUs, AI and physics shouldn't be a concern (even the part that would map efficiently onto an actual GPU), and the GPU would be better off spending all its resources on graphics rendering.

It's all about how you balance your silicon budget, I guess. Let's speak about MS (Sony is likely to use a Cell-like device, so physics and AI are not a concern; the CPU will have monster muscles).
Imagine if MS used six redone Xenon cores (with no register issues, better cache, more custom logic => tinier, etc.) at 32 nm: the chip would be really tiny, they would be left with a lot more silicon budget to spend on the GPU, and they may want to use the GPU as a general-purpose accelerator (if GPUs prove to be efficient at this game).
They could choose an oversized GPU to be sure enough power is left to handle the graphics workload.
 
That's far more plausible, except for this being an nVidia acquisition while everyone is expecting ATi to land any GPU deal. That's not to say MS wouldn't go with a separate PhysX-esque processor. TBH, though, I think they'll be waiting at the moment to see what options arise for the 'bulk-vector-processing-engine', whether on CPU, GPGPU, a separate vector processor, or a unified processor. I can't see this announcement having any immediate correlations with the consoles, other than perhaps better widespread PhysX adoption and integration into DX, resulting in improved PC development with feedback into the console space for algorithms, APIs and hardware alike.
 
Why not integrate physics into the GPU pipeline by hardware design? Physics-based shading.
BTW, the physics chip is already very close to the Cell (or vice versa).
 
If MS is about to invest a lot of its silicon budget in a GPU that would do graphics and physics, investing heavily in the CPU starts to make a lot less sense IMO.

I think strategically this is probably MS's best option.
 
Regardless of the AGEIA acquisition, GPUs were always going to get more 'general', and vendors like nVidia were always going to explore shifting more types of processing task onto the GPU. AGEIA just helps nVidia in that effort.

So I think a scenario where you have two chips in a console, one of which is a GPU that does rendering and also can be practically leveraged for a wider range of other tasks, was always going to happen. I think the question over next gen is to what degree this will be true.

Will we have distinct chips, with a programming model that supports more general processing on the GPU?

Will we have the same set up, but automatic load balancing between the two chips?

Will we have a set up where a programmer can "write once, run anywhere" between the two chips?

Will we be at a point where we have two chips of the same type, that can both be used for either kind of task, without a compromise on rendering performance vs 'dedicated' GPUs?

The idea that a developer can choose what his hardware focuses on (up to two chips working on graphics, or two working on non-graphics tasks, or anything in between, with both chips equally competent at anything) is fairly appealing, if such a chip can compete reasonably with dedicated hardware for rendering.
 
Hmm, CPU handling GFX stuff and GPU being used for physics? How clever, what a reversal of roles ;)

Next thing we'll use the network adapter to drive the display?

Lucky I saved all those network adapters... and to think we were going to throw them out :p
 