And you are certain the Tesla architecture does not support it at all?! - 100% sure; OK, I will let it rest for good.
Before G80 was launched, I heard some rumours about game devs emulating DX10 on R580 shaders. I suppose it's possible since the ALUs are highly versatile, but the performance sucks, so it can't be used for real. Current nVidia chips can also do almost anything through CUDA, but not everything would really be usable. Besides, DX10.1 isn't about new technologies but about speeding up the current ones, so emulation wouldn't make any sense. nVidia says they don't care about DX10.1 because the game devs have enough problems already, but in my opinion that's just a smoke-and-mirrors maneuver to cover for G80 being built primarily for DX9 rather than for all those new DX10.1 features. The R600 was clearly designed with them in mind, though the architectural flexibility cost ATi more transistors, forcing them to use the 80nm process... and you know the rest.
And I just find it difficult to believe that nVidia will just ignore DX10.1 - for two more whole years!
Two years? No, I don't think so. GT200 is a modified G80 - whether slightly or heavily - but its principles won't last another two years. Three years on the market is long enough for even the best architecture to grow obsolete. R300 was launched in August 2002; R520 came in September 2005 (with a three-month delay or so), and by then it was just about time the old architecture was replaced. So, just as Megadrive1988 says, Q4'09 could be the right time to release a new, DX11-based product.
However, there are those who say Fusion is a pipe dream and that the real reason AMD acquired ATi was for ATi's engineers to teach AMD how to transition from its own fabs to commodity fabs.
That is, forgive me, total bullshit. Every manufacturing process, even within the same company, has its specifics, and chips must be designed from scratch in order to use it. In the past, AMD transitioned from a traditional bulk process to SOI with no trouble, and nVidia fabbed some of its chips (notably some of the GeForce FX line) at IBM before settling at TSMC for good.
And, yes, we DO know that AMD wants X4! That is the purpose of CrossFireX!
Something tells me you don't mean "knowing" as in Arun's (was it Arun?) definition. Four GPUs on one card - if we're talking about RV670 or RV770, that is - is, sorry, also total bullshit. The purpose of CrossFireX is to allow X2 cards to work together, either with each other or with a single card. Just as you can't put two chips of G80/R600/GT200 calibre on one card, you can't put four RV670/RV770s on one.
By the way, Quad CrossFire scaling sucks so badly that ATi would only be shooting itself in the foot by marketing it as a usable graphics solution.