Nagorak said:
I think the major selling point of PowerVR (breaking the bandwidth barrier) has basically been destroyed thanks to DDR memory, and now faster and faster DDR memory. They developed a system for defeating a bottleneck that never materialized, thanks to faster memory along with some HSR aspects incorporated into IMRs. The deferred rendering approach might actually be "better", and maybe the one to choose if one were able to choose which path 3D graphics initially followed. It is a smarter approach...
You are not the only one making the statement that memory bandwidth is less of an issue now and in the immediate future than it has been. I'd still question that assertion. From the Voodoo days to the R9700 of today, effective bandwidth and real-life performance have been very closely connected. The only example of a next-generation 3D engine on the horizon (depending on where you stand, of course) is DOOM3, and it makes much higher demands on fillrate than earlier games. So there is no real-life support for asserting that memory bandwidth isn't critically important. Add to that the general trends towards better filtering and anti-aliasing, higher-complexity environments, and greater amounts of mobile/interactive geometry.
I can't really see how one can make a general statement about memory bandwidth not continuing to be a critical enabling/gating parameter for the foreseeable future.
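To put some rough numbers on that, here's a back-of-envelope sketch in Python (the resolution, overdraw and sample counts are purely illustrative assumptions, not measurements) of how raw framebuffer traffic on a conventional IMR scales with overdraw and multisampling:

# Back-of-envelope framebuffer traffic for an immediate-mode renderer.
# Every figure here is an illustrative assumption, not a measurement,
# and texture reads are ignored entirely.

def imr_bandwidth_gb_s(width, height, fps, overdraw, aa_samples,
                       color_bytes=4, z_bytes=4):
    # Each rendered fragment costs roughly a Z read, a Z write and a
    # color write to external memory, per AA sample.
    fragments = width * height * overdraw * aa_samples
    bytes_per_fragment = 2 * z_bytes + color_bytes
    return fragments * bytes_per_fragment * fps / 1e9

print(imr_bandwidth_gb_s(1024, 768, 60, overdraw=3, aa_samples=1))  # ~1.7 GB/s
print(imr_bandwidth_gb_s(1024, 768, 60, overdraw=3, aa_samples=4))  # ~6.8 GB/s

Even at a modest 1024x768, turning on 4x AA quadruples the framebuffer traffic. That is exactly the kind of pressure the filtering and AA trends add on top of raw fillrate.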
Nagorak said:
But now that IMRs are the standard and faster memory has made memory bandwidth less of a concern, there's just not enough going for PowerVR anymore. I'll give them this: they have one more shot at breaking into the market. If this next chip does not compete favorably with ATi's and Nvidia's chips, then that's it: game over.
In that case Nvidia and ATi will just keep pumping out IMRs until memory speed seriously becomes a problem, and then promptly switch over to deferred rendering, long after PowerVR is out of the picture.
Again, is memory bandwidth really less of a concern today than it ever has been? Gfx design has always been a case of balancing cost vs. performance.
The case for deferred renderers has never been about avoiding any absolute limits. The point has been that they might offer a cheaper way to achieve comparable performance (or higher performance at the same price point).
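As a crude illustration of that cost argument (the same assumed scene as the sketch above, again not a real measurement), a tile-based deferred renderer does its hidden surface removal and AA resolve in on-chip tile memory, so external framebuffer traffic shrinks to roughly one color write per visible pixel:

# Same assumed 1024x768 @ 60 fps scene on a tile-based deferred
# renderer (illustrative sketch only). Overdrawn fragments and AA
# samples stay in on-chip tile memory; only resolved pixels are
# written out to external memory.

def tbdr_bandwidth_gb_s(width, height, fps, color_bytes=4):
    visible_pixels = width * height
    return visible_pixels * color_bytes * fps / 1e9

print(tbdr_bandwidth_gb_s(1024, 768, 60))  # ~0.19 GB/s

# Caveat: this leaves out the scene/parameter buffer a TBDR has to
# write and read back each frame, and texture traffic for both
# architectures, so the real gap is smaller -- but the direction of
# the saving is the whole point: comparable output from much cheaper
# memory.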
Deferred renderers have the technological/political disadvantage of not being entrenched, which means that applications will be designed to run as well as possible on IMRs. Deferred renderers will have to beat IMRs at their own game, so to speak. Furthermore, you need both high production volumes and preferably multiple board manufacturers/suppliers to achieve low prices, which will be difficult for any newcomer trying to break into the market. Add to that how critical time-to-market is in the gfx business, and things look rough. That hasn't stopped either SiS or Trident from trying, though.
Since IMG doesn't operate in the same fashion as ATI or nVidia, I wouldn't be so quick to dismiss them. I agree that the total effort required to design and support a new 3D-gfx architecture is probably growing, and thus the financial stakes involved may become prohibitive. However, 3D graphics is one of very few areas of current-day PCs that aren't completely commoditized yet, and where manufacturers can hope for decent margins for some time to come. I'm not saying anyone will make money there, but at least it looks better than, say, going into hard drive or mainboard chipset manufacturing.
Entropy