As I understand it, deferred renderers are considerably more complex in their operation than standard IMRs.
IIRC, that's not what the proponents like to say... they usually talk of much simpler operation and lower transistor counts.
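To make concrete what the two camps are actually comparing, here's a toy software model of the two approaches. This is my own simplification for the sake of argument, not a description of any shipping chip: the IMR shades fragments in submission order and leans on the depth buffer, while the deferred path captures the scene, resolves visibility per pixel, and only then shades. Whether that counts as "more complex" or "simpler" is exactly what's being argued.

// Toy model (my own simplification) of how an IMR and a deferred renderer
// spend shading work on the same scene.
#include <cstdio>
#include <vector>
#include <limits>

struct Frag { int x, y; float z; };          // one covered pixel of a "triangle"
using Triangle = std::vector<Frag>;

constexpr int W = 4, H = 4;

// Immediate mode: shade each fragment as it is submitted; the depth test only
// rejects fragments already known to be behind something drawn earlier.
int shade_count_imr(const std::vector<Triangle>& scene) {
    std::vector<float> depth(W * H, std::numeric_limits<float>::max());
    int shaded = 0;
    for (const Triangle& tri : scene)
        for (const Frag& f : tri)
            if (f.z < depth[f.y * W + f.x]) {   // passes the depth test now...
                depth[f.y * W + f.x] = f.z;
                ++shaded;                        // ...so it gets shaded, even if a
            }                                    // later triangle ends up covering it
    return shaded;
}

// Deferred: capture the whole scene first, resolve visibility per pixel,
// then shade exactly one fragment per visible pixel.
int shade_count_deferred(const std::vector<Triangle>& scene) {
    std::vector<float> depth(W * H, std::numeric_limits<float>::max());
    std::vector<bool> covered(W * H, false);
    for (const Triangle& tri : scene)            // visibility pass, no shading yet
        for (const Frag& f : tri)
            if (f.z < depth[f.y * W + f.x]) {
                depth[f.y * W + f.x] = f.z;
                covered[f.y * W + f.x] = true;
            }
    int shaded = 0;
    for (bool c : covered) shaded += c ? 1 : 0;  // shading pass: once per visible pixel
    return shaded;
}

int main() {
    // Back-to-front submission: worst case for the IMR, irrelevant to the deferred pass.
    Triangle far_quad, near_quad;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            far_quad.push_back({x, y, 0.9f});
            near_quad.push_back({x, y, 0.1f});
        }
    std::vector<Triangle> scene = {far_quad, near_quad};
    std::printf("IMR shaded %d fragments, deferred shaded %d\n",
                shade_count_imr(scene), shade_count_deferred(scene));
    return 0;
}

On this back-to-front case the IMR shades every pixel twice while the deferred model shades each visible pixel once; the price is the extra capture/bin step and the storage it implies, which is where the "more complex in operation" argument comes from.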
Another possibility is that Videologic must now hold considerable amounts of IP relating to deferred rendering - could other companies easily create a deferred renderer which didn't infringe on this IP?
Possible, but I would imagine VideoLogic's "IP" on deferred rendering isn't much different from every other company's "IP" related to IMRs. In short, I doubt that's a show-stopper.
A third argument is that deferred rendering technology must be worth something -
Or worth nothing, depending on how you look at it...
Microsoft was originally in talks with Gigapixel about supplying the Xbox chip.
And yet, they went with an IMR....
3Dfx then paid a lot of money for Gigapixel
And yet....nothing came of it....
and NVidia happily bought all their IP when 3Dfx collapsed...
And yet, still no deferred renderer, or any known plans to make one in the foreseeable future.
So what does it say when people appear to be "interested" in deferred rendering, yet when presented with the choice they either end up going with an IMR or prove incapable of producing a deferred renderer? That's pretty much exactly my point.
The forthcoming MBX chips from ImgTch show that the technology is excellent for small devices.
Not proven until the product is on the market and successful.
Still, I have in fact always been optimistic about deferred renderers in closed devices: ones that don't have legacy apps designed with the "limitations" of IMRs in mind.
I want to make it clear that my "pessimism" about deferred rendering is pretty much limited to the PC space.
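For a concrete example of the kind of legacy behaviour I mean, consider a frame written like the hypothetical sketch below. The OpenGL calls are standard, but the frame structure itself is invented purely for illustration: a mid-frame readback is a mere pipeline stall on an IMR, but forces a deferred renderer to give up its deferral.

#include <GL/gl.h>

// Hypothetical frame written the way many older PC apps were: it assumes the
// pixels exist the moment the draw calls return.
void draw_frame_legacy_style() {
    // First chunk of the scene, old fixed-function immediate-mode style.
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();

    // Mid-frame readback. On an IMR the triangle above has already been
    // rasterized, so this is "only" a stall. A deferred renderer has merely
    // binned it, so it must flush: resolve and shade everything submitted so
    // far, then restart scene capture for the rest of the frame.
    unsigned char pixel[4];
    glReadPixels(320, 240, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    // ...rest of the frame would follow here.
}

A console or an MBX-class device never has to carry that kind of baggage, which is why I'm happier about deferred rendering there than on the PC.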
Why should it not be possible for a high-end deferred renderer to be produced using similar technology?
Well, that's what we're debating, isn't it?
I suggest you ask IMG, ATI, and nVidia those questions.