And the generation after Fermi might do the very same to AMD's generation at the time... do you notice a trend by any chance? Why would I as a consumer care what each IHV's roadmap looks like?
Evergreen is a fully compliant DX11 GPU family, and yes, there's a very good chance that future DX11 architectures from AMD will be far more efficient. But then again, those aren't coming that soon either...
Apart from funky marketing material, show me where I can buy a GF100 so I can judge for myself how it compares to X or Y. Until I see a number of independent tests/comparisons of final, available products, I won't dare to jump to any conclusions.
And no, I won't care directly as a consumer if IHV A has higher manufacturing costs than IHV B. What I'll rate personally is the price/performance ratio in strict combination with image quality, and if the balance of all those factors is worth it, I might even close an eye on power consumption, assuming it's not over Hemlock's power envelope. As for final MSRPs: since I haven't seen anything yet for GF100, you obviously know something definite we don't.
Well, I wouldn't go so far as to say that we can't conclude anything about GF100. There are a couple of immediate benefits to image quality (with coverage AA and otherwise improved anti-aliasing), and there's obviously a strong focus on geometry and PhysX performance with this part. As far as I can tell, anisotropic filtering is still an unknown, but we can expect it to be no worse than nVidia's current parts, which are quite good.

Hmm. Well I was cautiously optimistic when the initial Fermi information came out a couple of days ago, but with a bit more thought, I am less so. Really all we've had is a couple of canned benchmarks and PR material. Without all the other information such as independent gaming benchmarks, clocks, power, noise, heat, price, etc., there isn't really enough information to form an opinion on the product as a whole. Fermi could be ten times faster than the competition, but if it sounds like a jet engine and costs ten times the price, it won't be a viable mainstream product, even for the gaming high end. I agree wholeheartedly with Rhys - we still don't know enough, and what we do know is just from Nvidia marketing with their expected bias and spin.
Fermi is still several months away, and the ATI refresh will be straight on top of it, no doubt with price cuts on their current cards. This will change the context of Fermi as a product by the time it finally arrives in the market. When Fermi finally arrives in that new competitive landscape, and we actually know what it is beyond the current constrained PR spin, we can actually decide if it's any good, and whether it's more or less desirable than competitors' offerings.
I suspect that even if Fermi wins the battle at the high end, ATI will win the war with their better yields, smaller dies, more latitude for price cuts, better power/heat envelopes, and a full top-to-bottom DX11 range. I think Nvidia may not make much money if it has to cut prices, and may not sell many units if its cards cost so much more than an ATI product that offers nearly all the same gaming performance for significantly less money.
Also, more interesting will be what happens at the end of the year, with a Fermi re-spin to give us a full 512 SP product at better clock speeds, but it may well be facing the R9xx - possibly on a 32nm Global Foundries process? I think the next 12 months will be very interesting, with frantic competition and some very interesting new products at great prices for gamers.
And that's not really the case. *ARGL*
Of course and YES!!!
But the number of VRM phases and the power routing for a dual-GPU solution should be a bit more difficult than for a single-GPU board.
Are you saying you don't expect Fermi to even be competitive, i.e. less than ~30% over Cypress?

Due to the fact that Fermi is hardly able to beat an evolutionary ATI chip, I have little confidence in the future of that line. Especially if you consider the production problems and the TDP, which mean one Fermi GPU is more expensive than two RV870s. More expensive for NV and more expensive for the user, while lacking Eyefinity.
So you blame DX11 for NV's delay? Have the performance gains of the past (again, time-wise) in architectures that didn't increment the DX version been smaller than otherwise, or has it only/mostly been a timing problem?

Yep, an entirely reasonable way to look at things.
It's worth noting, though, that major DX version inflections have created serious problems in time to market: DX9 gave NVidia grief, D3D10 gave ATI grief, and now D3D11...
Looking at the previous generation, unless I'm mistaken, ATI only gained ~10% or so with their refresh. The time between them was around 10 months. Do you expect them to deliver better results faster this time around?
I would tend to expect that power regulation on two chips would actually be a bit easier, as if you have two chips rendering different things, the variations in power draw between the two of them are likely to be relatively uncorrelated, leading to an overall smoothing out of the total power that needs to be supplied. The only thing that would obviously make it more difficult, in my mind, is the larger total power consumption.

And that's not really the case.
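To put some very rough numbers on that "uncorrelated draws smooth each other out" argument - purely my own toy sketch, with invented wattage figures, nothing from either IHV:

# Toy illustration (invented numbers): if two GPUs' load variations are
# uncorrelated, the *relative* ripple of the combined draw shrinks by
# roughly 1/sqrt(2) compared with a single chip.
import numpy as np

rng = np.random.default_rng(0)
mean_w, std_w, samples = 150.0, 30.0, 100_000   # hypothetical per-GPU draw

gpu_a = rng.normal(mean_w, std_w, samples)
gpu_b = rng.normal(mean_w, std_w, samples)      # independent of gpu_a
board = gpu_a + gpu_b

print("single GPU: %.1f%% relative ripple" % (100 * gpu_a.std() / gpu_a.mean()))
print("dual board: %.1f%% relative ripple" % (100 * board.std() / board.mean()))
# ~20% vs ~14% - smoother in relative terms, but the absolute wattage
# (and hence the regulation hardware needed) is of course larger.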
If you say so, then I'll rest my case in this regard.

And that's not really the case.
The difference aligns with turning off 2 SMs, either both in the same GPC or one in each of two different GPCs. But then again, we all know how *in*accurate some of nvidia's descriptions and diagrams have been in the past...
For all we know, the raster and setup units are right next to each other and output to a LL queue that the SMs read from, and the grouping they are showing doesn't really exist. Hopefully people still remember them trying to pass off an x8 SIMD as 8 separate cores, right?
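For what it's worth, here's the back-of-envelope arithmetic behind the "two disabled SMs" reading, going by the layout nVidia's own GF100 diagrams show (4 GPCs of 4 SMs, 32 SPs per SM):

# SP-count check for the "two disabled SMs" reading, assuming the
# published GF100 layout: 4 GPCs x 4 SMs x 32 SPs = 512 SPs total.
GPCS, SMS_PER_GPC, SPS_PER_SM = 4, 4, 32

full_sps = GPCS * SMS_PER_GPC * SPS_PER_SM   # 512
cut_sps = full_sps - 2 * SPS_PER_SM          # 448, whichever GPCs lose an SM
print(full_sps, cut_sps)
# The SP count alone can't tell us whether both disabled SMs sit in one GPC
# (leaving 3 SMs behind that GPC's raster) or one SM in each of two GPCs.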
Well, to me this is all down to a bit of electrical engineering. When you design a dual-GPU board, you have a choice to make: should I build one single, large power regulation node and distribute that to all components requiring power? Or should I divide the power distribution between the different components?

And I would have assumed that each GPU gets fed independently by its own circuitry, which I guessed would have made sense for 2D, GUI and video from a power consumption perspective. But as it seems, Dave has a different opinion there, and since he's the one sitting closer to the IHV… hope he isn't biased wrt dual-GPUs though.
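As a crude way to look at that trade-off between one big regulation node and per-GPU circuitry, here's a sizing sketch with completely made-up numbers (per-GPU wattage, rail voltage and per-phase current are all just placeholders):

# Crude sizing sketch (all numbers invented): one shared regulator feeding
# both GPUs vs. a separate regulator per GPU.
import math

GPU_PEAK_W = 150.0   # hypothetical per-GPU peak draw
VCORE = 1.0          # volts on the GPU rail
PHASE_AMPS = 30.0    # hypothetical current one VRM phase can supply

def phases_needed(watts):
    return math.ceil(watts / VCORE / PHASE_AMPS)

shared = phases_needed(2 * GPU_PEAK_W)    # one big node for both GPUs
per_gpu = 2 * phases_needed(GPU_PEAK_W)   # independent circuitry for each
print("shared node:", shared, "phases; per-GPU:", per_gpu, "phases")
# The raw phase count comes out much the same either way; the real
# differences are board routing, where the phases physically sit, and how
# easily each GPU can be throttled for 2D/video - the point argued above.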
Yes, it is an endless cycle, just this time around ATI seems to be in the leading position. They have a product in the market now, but have not revealed their "real" DX11 design.
It's also very strange, given that ATI has traditionally not made very significant improvements on the parts they've released between major architectural changes.

Ah, so all the praise to AMD for being the first to market with DX11 now turns into "the real DX11 design is coming", after leaks suggest that Fermi is better in DX11 features than Cypress... Now that is funny.
Are you sure of that? I think the hardware indeed very likely is no worse than previous parts. But things like brilinear filtering are adjustable by the driver. I don't doubt the texture units have improved in efficiency, but you're still looking at 4 TMUs per 32 SPs, instead of 8 TMUs per 16 SPs (g9x) or 8 TMUs per 24 SPs (gt2xx). Compared to g92, that's only 1/4 the number of TMUs per SP (ok, a bit more since the texture clock has improved a bit relative to the shader clock), and even considering efficiency that will be a lot less texturing throughput per flop. Hence the incentive to cheat a bit is probably much larger, should texturing turn out to be a bit limiting in some apps... Not saying it has to be, but I wouldn't be totally surprised if it suddenly showed similar artifacts from undersampling as AMD's parts do...

As far as I can tell, anisotropic filtering is still an unknown, but we can expect it to be no worse than nVidia's current parts, which are quite good.
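Just to make the ratio point explicit, using the unit counts quoted above and ignoring the clock-domain caveat:

# TMU-per-SP ratios from the counts given in the post (clocks ignored).
configs = {
    "g92 (8 TMU / 16 SP)": 8 / 16,
    "gt200 (8 TMU / 24 SP)": 8 / 24,
    "GF100 SM (4 TMU / 32 SP)": 4 / 32,
}
for name, tmu_per_sp in configs.items():
    print(f"{name}: {tmu_per_sp:.3f} TMU per SP")
# 0.500 vs 0.333 vs 0.125 - per SP, a quarter of g92's TMUs, so per-flop
# texture throughput leans heavily on TMU efficiency gains and on the
# texture units running at a higher clock relative to the hot clock.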
Well, it will be interesting to see, but I strongly suspect that as games become more shader-heavy, they are requiring many more flops per texture access than they did previously. So the only games that will really need a large number of texture units are old games that will run insanely fast anyway.
That'll make sense, though I wonder how the situation really is with the games currently used by reviewers.
and there's obviously a strong focus on geometry
I remain to be convinced by a small part of a single benchmark supplied by Nvidia PR. I'll wait to see it running in a game and compared to competing hardware. After all, wasn't it Nvidia telling us just a few months back how DX11 tessellation wasn't that important?