Shifty Geezer said:
Um, a bit off track there. Anyway, yeah they've got backing - that's no indicator in itself AFAICS. Given its apparent huge memory requirements I can't see it making its way into a console, though the possibility is there and might make up for some of that Teraflop Xenon performance.
What memory requirements? The PC version will have 128MB of memory on card.
But it would be totally silly to compare the PC version to the console version. This is like saying an acceptable gaming PC requires at LEAST 512MB of memory and 128MB of video memory MINIMUM--and then applying that to consoles.
It does not work that way. We are looking at next gen consoles having unified memory architectures where the GPUs and CPUs share a pool of common memory. There is absolutely nothing stating that a PPU could not be introduced into such a setup if such a chip were included in the initial design plan.
On a tangent, we are not currently aware of WHY the chip needs its own memory. You would think that it could possibly share the system's memory over PCIe. But when you consider that GDDR3 memory has a bandwidth of 20-35GB/s while system memory is in the 3.2-6.4GB/s range--and shared among the whole system--that alone makes a good case for WHY it would need its own memory pool. It looks like it is meant to offload all the physics, which, with 30,000-50,000 rigid body objects interacting in realtime, would require a lot of memory bandwidth. Fighting for system memory, which is already limited, would bottleneck the performance. It is like comparing two identical chips (e.g. NV40), one on PCI using system memory and the other on 8x AGP with its own GDDR3 memory.
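As a rough sanity check, here is a back-of-envelope estimate of just the rigid body state traffic (a quick Python sketch; the per-body size, memory touches per step, and simulation rate are all my own guesses for illustration, not anything Ageia has published):

bodies = 50_000          # upper end of the rigid body count above
state_bytes = 128        # assumed per-body state: position, orientation,
                         # linear/angular velocity, mass properties
touches_per_step = 8     # assumed reads+writes per body per step
                         # (broad phase, narrow phase, solver iterations)
steps_per_second = 60    # assumed simulation rate

bytes_per_second = bodies * state_bytes * touches_per_step * steps_per_second
print(f"{bytes_per_second / 1e9:.1f} GB/s")  # prints 3.1 GB/s

Even with those guesses, the body state alone would eat essentially all of a 3.2GB/s shared system bus before collision geometry, contact data, the CPU and everything else get a byte. That is exactly the kind of fight a dedicated memory pool avoids.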
So just as memory hungry/bandwidth intensive GPUs can share a common pool and play nicely on consoles in a way they cannot on PCs, a similar situation may exist for the PPU on consoles. We do not know--which is the very reason stating that the PC memory setup precludes its inclusion in a console is premature.
I do see two reasons why I find it difficult to believe this chip is in.
1) A 125M transistor chip requiring 25W of power would add cost and heat.
2) With all the rumblings about how flexible the GPU is, it does give you pause to consider whether the GPU can do this type of work itself. If the answer is "Yes" then I would wonder why not just add another GPU and allow developers to choose what they do with it. I find this a less likely option (especially with a 350M transistor chip), but so far we have based all our understanding of the system on a supposedly accurate OLD leak, and ATi/nVidia are moving to multiple chips, so who knows.
In the end I would like to see the Ageia chip in a console because it would mean an increased chance of it catching on in the PC sector, but I think we won't know much more until May.