Why would they spend hundreds of millions of dollars to change something on which they've already spent billions to ensure it never changes?
You think that if PC CPU clocks hadn't hit a scaling wall, we would still have seen the demise of NetBurst? nVidia and ATI are the 800-ton gorillas here, so it should go without saying that they will never "jump ship" until something happens that makes it plain the status quo is failing.
It depends how much competition we're truly seeing in the market. If ATI and nVidia are both just content to one-up each other by 10% every couple of months, then maybe there is no incentive. But if there are truly gains to be had, I think you'd have seen at least one company pursuing the technology to differentiate its product. (Unless we really are looking at a duopoly, but judging by ATI's loss of market share and margins over the past few years, you could argue against that.)

Shame some form of TBDR didn't make it into the Wii; that would have been a good application of the technology, I think. (Anyone want to argue the merits of Flipper versus a theoretical Series 3-, 4-, or 5-based chip? Then again, we do have Naomi 2 with its Elan chip, and I think GameCube clearly demonstrated an advantage over that. And even in embedded markets there's serious competition from nVidia's and ATI's offerings, as well as the PSP's graphics chip.)
Problem is, that last part will never be known unless either of them tries it or some third party comes in and actually shows it to be viable. Not one example so far has involved hardware that came within a hundred miles of nVidia or ATI spec-wise or cost-wise. They've all been low-end, cheap parts, which only really shows that it's a win in constrained situations where performance would otherwise be far below abysmal, and we'd prefer to keep it a little better than total suckage. There's not even an isolated example of how it works out with high-end hardware.
Kyro 2 was probably the closest example, and sometimes it went toe to toe with a GeForce 2 GTS (or maybe even an Ultra), but there were other times it couldn't even keep up with a GeForce 2 MX. It was very game dependent, favoring the narrow corridor shooters of the time (many of which had software rendering modes and were poly-constrained anyhow) over the newer emerging games with broad, wide-open expanses.
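That game dependence lines up with where a deferred renderer actually saves its bandwidth: the corridor shooters piled on overdraw, which a tiler resolves in on-chip memory, while wide-open scenes shift the cost toward geometry that still has to be binned. Quick back-of-envelope (every number here is a made-up assumption for the sake of argument, not a Kyro or GeForce measurement):

    # Crude illustration of why overdraw-heavy scenes favor a deferred renderer.
    # All figures are assumptions for illustration, not measurements.
    PIXELS = 1024 * 768

    def framebuffer_traffic_mb(depth_complexity, deferred):
        if deferred:
            # TBDR: Z stays in on-chip tile memory; roughly one color write per
            # pixel goes to external RAM no matter how deep the overdraw is.
            bytes_per_frame = PIXELS * 4
        else:
            # Immediate-mode: every covered fragment hits external memory for
            # Z and color; call it roughly 12 bytes per fragment.
            bytes_per_frame = PIXELS * depth_complexity * 12
        return bytes_per_frame / 1e6

    for scene, depth in [("corridor shooter", 4.0), ("open landscape", 1.5)]:
        print("%s: IMR ~%.0f MB/frame vs TBDR ~%.0f MB/frame" % (
            scene,
            framebuffer_traffic_mb(depth, deferred=False),
            framebuffer_traffic_mb(depth, deferred=True)))

The gap shrinks as overdraw drops, and the binning cost that grows with open, geometry-heavy scenes isn't modeled here at all, which is presumably part of why the Kyro slipped in those games.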
It also had rendering issues, even under Direct3D, and I don't know whether those were just driver bugs or unfixable aspects of the underlying tech short of developing the software specifically for the chip.
You could point to the CLX in the Dreamcast as a high-end example, but it performed worse and looked worse in every game that appeared on both Dreamcast and PC. Not only that, but I wouldn't say any Dreamcast exclusive really blew away anything on PC either. Shenmue had some horrible image quality problems, along with a bad framerate and some serious draw-in, and was more interesting for its scene and texture variety than for any isolated graphical element. IMO, Shenmue showed what a big budget could do for a game, not what Dreamcast technology could enable over the PC hardware of the time. (Sure, there are quoted specs for games that supposedly blow away any PC game of the era and would even have trouble running on PCs now. Didn't Test Drive Le Mans claim something like 20 million polygons per second, at least on the PS2 version, along with 4x anisotropic filtering and such? The game looked good, but it's no GT4.)
That said, there is something sexy about TBDR. In general, I tend to favor solutions that put more onto the main chip to reduce the load on external memory. Detaching from the external memory subsystem as much as possible just seems appealing: you aren't reliant on fast, expensive RAM for optimal performance, and it seems like it would let the same chip be used across a broader variety of markets without castrating its performance. Why has ImgTec stayed out of the PC, console, and arcade markets? Are they really making that much money off of cell phones?
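For anyone who hasn't looked at how the trick works, here's a toy sketch of the tile-based deferred idea. It's purely illustrative; the names and structure are made up and have nothing to do with any actual PowerVR/CLX design. The two points it tries to show: visibility gets resolved in small per-tile buffers that can live on-chip (so no external Z traffic), and shading only runs for the fragment that actually wins each pixel.

    # Toy tile-based deferred renderer (illustrative sketch, not a real design).
    # "Fragments" stand in for rasterized triangle coverage: (x, y, depth, material).
    TILE = 32

    def shade(material):
        # placeholder for the expensive texturing/blending work
        return material.upper()

    def render(fragments):
        # Pass 1: bin fragments by screen tile. This binned scene data is the
        # part that still costs a tiler bandwidth as geometry gets heavier.
        bins = {}
        for x, y, z, material in fragments:
            bins.setdefault((x // TILE, y // TILE), []).append((x % TILE, y % TILE, z, material))

        framebuffer, shade_calls = {}, 0   # framebuffer stands in for external RAM
        for (tx, ty), frags in bins.items():
            # Per-tile depth and winner buffers: small enough to sit on-chip.
            depth = [[float("inf")] * TILE for _ in range(TILE)]
            winner = [[None] * TILE for _ in range(TILE)]
            for lx, ly, z, material in frags:
                if z < depth[ly][lx]:
                    depth[ly][lx] = z
                    winner[ly][lx] = material
            # Shade only the visible fragment and write it out once; external
            # memory never sees the overdraw.
            for ly in range(TILE):
                for lx in range(TILE):
                    if winner[ly][lx] is not None:
                        shade_calls += 1
                        framebuffer[(tx * TILE + lx, ty * TILE + ly)] = shade(winner[ly][lx])
        return framebuffer, shade_calls

    # Three fragments stacked on one pixel -> one shade call, one external write.
    print(render([(10, 10, 0.5, "wall"), (10, 10, 0.2, "crate"), (10, 10, 0.9, "floor")]))

The other side of the coin is that the Pass 1 scene data grows with triangle count, and scaling that part up is exactly what nobody has demonstrated at nVidia/ATI-class specs yet.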