Xmas said:Just as a G70@5xxMHz might not be possible as a single slot design.

Ailuros said:But only because NV40@400MHz (130nm) wasn't possible as a single slot design back then.
Jawed said:Why did the 6800U need a huge cooler when so many 6800GTs clocked to 6800U speeds (and beyond) on their single slot cooler?
Ailuros said:Well, they did both single and dual slot versions of NV40, didn't they?
But only because NV40@400MHz (130nm) wasn't possible as a single slot design back then.
I'd be very, very surprised if there wasn't a dual slot version of G70 (even if it's given another name) that launches to compete with R520.
G70 has 10.32 GTexels/sec of theoretical fill-rate. In order to reach that fill-rate, a 16-pipe R520 would have to be clocked at 645MHz. Now let's assume ATI managed to get even higher than that: it'll still never reach the 30% fill-rate advantage the X800XT PE had over the NV40.
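As a quick sanity check on those figures, here is a small Python sketch of the arithmetic. The clocks and texture unit counts (430MHz x 24 units for G70, 16 pipes each for R520, X800XT PE and NV40) are the ones implied by the post, filled in here as assumptions rather than anything confirmed in the thread.

def gtexels_per_sec(clock_mhz, texture_units):
    # Theoretical texel fill rate: clock (MHz) times texture units, in GTexels/s.
    return clock_mhz * texture_units / 1000.0

g70 = gtexels_per_sec(430, 24)           # ~10.32 GTexels/s
r520_needed_mhz = 10.32 / 16 * 1000      # clock a 16-pipe R520 needs to match it: 645MHz
x800xtpe = gtexels_per_sec(520, 16)      # 8.32 GTexels/s
nv40 = gtexels_per_sec(400, 16)          # 6.40 GTexels/s
advantage = (x800xtpe / nv40 - 1) * 100  # ~30% X800XT PE advantage over NV40

print(g70, r520_needed_mhz, advantage)

By the same arithmetic, a 30% lead over 10.32 GTexels/s on 16 pipes would need roughly 840MHz, which is why that kind of advantage looks out of reach.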
G70 was launched to compete against what exactly? I'm not excluding the possibility, I just don't see a good reason for a higher clocked dual slot version thus far. Even more so: if the ultra high end R520 turns up with a dual slot cooling system, NVIDIA will have a marketing advantage against it.
How much more effective are they, or is there as much "show" as "go" in using a dual slot cooler?
If they were there just for decorative reasons, I think the IHVs would spare themselves the extra expense and "bulkiness".
Many people are making calculations regarding how high R520 would need to be clocked, if it has 16 pipes, to match or beat G70 in fill rate and so on. That's really irrelevant, since G70 is basically bandwidth limited except when it's shader or HDR limited. So given that ATI will have access to pretty much the same memory as NV, it's awfully likely that R520 will have fairly similar performance to G70 in games like Doom 3 and Far Cry where G70 is bandwidth limited (without HDR in Far Cry, of course - oh, and I am assuming that the rumoured new memory interface/controller, whatever it is, on R520 only provides a marginal advantage, otherwise all bets are off).
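The bandwidth half of that argument is easy to put numbers on. A minimal sketch, assuming a 256-bit bus and the ~1.2GHz effective GDDR3 the 7800 GTX ships with, and assuming (purely as a placeholder) that R520 uses the same memory:

def bandwidth_gb_per_s(bus_width_bits, effective_clock_mhz):
    # Peak memory bandwidth: bytes per transfer times effective transfer rate.
    return (bus_width_bits / 8) * effective_clock_mhz / 1000.0

g70_bw = bandwidth_gb_per_s(256, 1200)   # 7800 GTX: 256-bit at 1.2GHz effective -> 38.4 GB/s
r520_bw = bandwidth_gb_per_s(256, 1200)  # assumed R520 on the same memory -> also 38.4 GB/s

print(g70_bw, r520_bw)

With identical peak bandwidth, a bandwidth-limited title should land both parts in roughly the same place, which is the point being made here.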
Where things will get interesting is in games that are shader limited on G70, which Battlefield 2, for instance, appears to be.
Finally, there's the HDR issue, both in terms of pure performance and AA compatibility. With all that in mind, I think it's pretty easy to imagine an R520 with only 16 pipes, clocked at well under 700MHz, that delivers very similar performance to G70. I fancy it will beat G70 (in GTX trim) on balance, but that an Ultra G70 will give back the lead to NV pretty much immediately. Perhaps R580 vs whatever NV have going at that stage will be the closest fight of all.
caboosemoose said:Hmmm, I can't get my head around the idea that there's going to be a 90nm G70 this year (for sale). That feels too soon for me. I would expect the first 90nm G70 chip to be more of a mainstream performance part in the mould of NV42 (which was the first large NV4x chip based on 110nm) followed next spring by a high end 90nm chip. But I also feel that there will be surprises from both players.
Well, from an architecture standpoint we’re just still at the beginning of shader model 3.0. And we need to give the programmers out there some time to continue to really learn about that architecture. So in the spring refresh what you’ll see is a little bit faster versions...
I think you’ll see the industry move up a little bit in performance. But I don’t think you’ll see any radical changes in architecture. I doubt you’ll see any radical changes in architecture even in the fall. When we came out with GeForce 6, we tend to create a revolutionary architecture about every actually two years. And then we derive from it for the following time. So even the devices that we announced this fall, that will be I think a lot more powerful than the ones we actually had a year ago. Architecturally we’re still in the shader model three type era.
If you look at when we go to 90, my guess will be is we’ll have one or two products this year going from 90 in the second half.
Ailuros said:Float HDR is important to me at least; I sure hope that R520 will be able to combine 64bpp HDR with MSAA. If it's only FP10 that's combinable with MSAA, I'm not so sure I'll be that interested.
geo said:I'm not much into blind faith, but I do believe in past as prologue. When thinking about FP10 MSAA, so far as how good the results will look in scenarios available today and the near future (say the next two years or so), do you take into consideration what ATI did with FP24 when the conventional wisdom was all about FP32? I look back at the famous debate with Sireric and Reverend, and I see an ATI that clearly spent a great deal of skull sweat working thru the theory in advance as to how much "eyeball impact" the reduced precision was going to have for the lifetime of the card at the top end.
So I apply that to the FP10 MSAA discussion and am comforted. Am I kidding myself?
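One rough way to put numbers on the "eyeball impact" question is to compare the relative quantization step near 1.0 for each format, which is on the order of 2^-mantissa_bits. The FP32, FP24 (R3xx) and FP16 mantissa widths below are the standard ones; the FP10 layout (3-bit exponent, 7-bit mantissa per channel in a 10:10:10:2 buffer) is an assumption, since the thread doesn't pin it down.

formats = {
    "FP32 (s1e8m23)": 23,
    "FP24 (s1e7m16)": 16,
    "FP16 (s1e5m10)": 10,
    "FP10 (e3m7, assumed)": 7,
}

for name, mantissa_bits in formats.items():
    step = 2.0 ** -mantissa_bits
    print(f"{name}: relative step ~ {step:.2e} ({step:.5%} of the value)")

# For reference, an 8-bit-per-channel framebuffer steps in increments of
# 1/255 ~ 0.39%, so a ~0.8% step for FP10 colour data is in the same ballpark,
# while the FP24 vs FP32 gap sits far below anything a display can show.

That comparison is only about storage precision, not range or error accumulation over long shaders, so treat it as a first-order argument rather than proof.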
caboosemoose said:Ailuros said:Well, they did both single and dual slot versions of NV40, didn't they?
I'd be very, very surprised if there wasn't a dual slot version of G70 (even if it's given another name) that launches to compete with R520.
Bottom line is this - they didn't use the Ultra name.
geo said:If you look at the history, it seems to me that they only pull out an "Ultra" when they are in a tuff position competitively. For instance, if I'm recalling correctly, GF3 & GF4 had no "Ultras".