Rosaline: Roughly a G80 die shrink is my best guess right now too, with some minor architectural changes, but probably less than NV40->G71... As I think I said before, I'm definitely expecting the ALUs to be slightly modified to increase real-world utilization rates. Maybe the same for the ROPs, since you'd expect there to be fewer of them. Oh, and welcome to the forum, Rosaline!
Now, one thing I'm very curious about is whether NV will want to use GDDR4 at all. If they don't, then a 256-bit bus clearly restricts them to roughly 8800GTS levels of bandwidth, and therefore performance. If they're willing to go with 1.4GHz GDDR4, then they could match an 8800GTX in bandwidth and beat an Ultra in overall performance if the core is clocked much higher...
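To make that bandwidth comparison concrete, here's a quick back-of-envelope calculation. The 256-bit / 1.4GHz GDDR4 combination for G92 is pure speculation on my part; the retail-card figures are from memory, so double-check them:

```python
def bandwidth_gbs(bus_width_bits, mem_clock_ghz, data_rate=2):
    """Peak memory bandwidth in GB/s for a DDR-style memory clock."""
    return (bus_width_bits / 8) * mem_clock_ghz * data_rate

print(bandwidth_gbs(320, 0.80))  # 8800GTS:   320-bit, 800MHz GDDR3  -> 64.0 GB/s
print(bandwidth_gbs(384, 0.90))  # 8800GTX:   384-bit, 900MHz GDDR3  -> 86.4 GB/s
print(bandwidth_gbs(384, 1.08))  # 8800Ultra: 384-bit, 1.08GHz GDDR3 -> ~103.7 GB/s
print(bandwidth_gbs(256, 1.40))  # hypothetical G92: 256-bit, 1.4GHz GDDR4 -> 89.6 GB/s
```

So 1.4GHz GDDR4 on 256-bit would land just above a GTX, but still well short of an Ultra.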
I don't think the question is whether G92 is a mainstream chip or not. It obviously is: NVIDIA was ready to sell a 480mm2 (+50mm2 NVIO) chip for the ~$279 market segment. You'd be crazy to think they wouldn't be willing to sell a <=300mm2 chip with a cheaper PCB for <$249. So the real question in my mind is this: is it *also* a high-end chip?
My guess is that it is: it's very easy to see how it could beat an 8800Ultra on average if it were basically an 800MHz G80 (albeit with fewer ROPs) with 1.4GHz GDDR4 on a 256-bit memory bus. It would be bandwidth limited much of the time, but that's not the point: if you can keep your PCB/memory costs constant and improve your performance, you can just increase your chip's price and make more money. The engineering balance and the financial balance are not the same thing.
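Just to show why I think the balance could still work out even with fewer ROPs, here's a rough fill-rate comparison. The G92 line is entirely my assumption: 16 ROPs, the same 64 texture filtering units as G80, 800MHz core:

```python
def fill_rates(core_mhz, rops, texture_filter_units):
    """Theoretical pixel and texel fill rates in Gpixels/s and Gtexels/s."""
    pixel_fill = rops * core_mhz / 1000.0
    texel_fill = texture_filter_units * core_mhz / 1000.0
    return pixel_fill, texel_fill

print(fill_rates(612, 24, 64))  # 8800Ultra:        ~14.7 Gpix/s, ~39.2 Gtex/s
print(fill_rates(800, 16, 64))  # hypothetical G92: ~12.8 Gpix/s, ~51.2 Gtex/s
```

Pixel fill would actually drop a little, but texel fill (and presumably shader throughput, if the shader clock scales up with the core) would go up by ~30%, with roughly GTX-level bandwidth feeding it.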
But if G92 didn't use GDDR4 at all, then we could only presume that it couldn't beat an 8800GTX in many cases. It's not impossible that NVIDIA is waiting for GDDR5... But this would be rather strange, because there were (afaict) very reliable rumours that G81 was a higher-clocked G80 on 80nm with GDDR4 before it was canned. Unless G81 was canned precisely *because* NVIDIA decided to completely boycott GDDR4, why wouldn't they be willing to use it now if (according to the rumour mill, at least) they were willing to back then?