lanek said: Nvidia will replace them with Maxwell in less than 9 months.

I would guess it will be closer to 1 year, but yeah, if you already have a current-gen card I would definitely wait.
I think the biggest issue that will hold back next-gen releases is the finalization and mass production of GDDR6 memory. Performance of today's cards is more or less at the bandwidth limit, especially with Kepler. Unless either camp plans on moving to ridiculously large bus sizes (which inefficiently take up more die space for memory controllers and can be problematic), both Nvidia and AMD will be slaves to GDDR6 being viable.
If GDDR6 isn't in mass production by the end of this year or early 2014, then I think we are more likely to see die shrinks before new architectures.
How so?
I wonder how the situation compares to 2006-2008 and GDDR3. NVIDIA went to 512-bit with GT200, so even if it's difficult, maybe they'd go 512-bit on, say, "big" Maxwell if they were backed into a corner on memory bandwidth. (Knights Corner already has a 512-bit bus with 5.5 Gbps GDDR5.)
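For reference, the bandwidth side of this is just bus width times per-pin data rate. A quick back-of-envelope sketch in Python (the 512-bit @ 6 Gbps row is hypothetical; the others are the configs mentioned in the thread):

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
def peak_bandwidth_gbs(bus_bits, data_rate_gbps):
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gbps

configs = [
    ("GTX 680 (256-bit, 6.0 Gbps)",            256, 6.0),
    ("7970 GHz Ed. / 780 (384-bit, 6.0 Gbps)", 384, 6.0),
    ("Knights Corner (512-bit, 5.5 Gbps)",     512, 5.5),
    ("hypothetical 512-bit @ 6.0 Gbps",        512, 6.0),
]

for name, bus, rate in configs:
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
# -> 192, 288, 352 and 384 GB/s respectively
```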
What would 20 nm chips cost compared to 28 nm? IIRC AMD moved the in-development 32 nm Northern Islands chips, except for the high-end part, to 40 nm even before 32 nm was cancelled, since the chips would be cheaper on 40 nm. Also, if a 28 nm chip were shrunk to 20 nm it could have similar or greater FLOPS but not necessarily (much) more bandwidth (at least for parts already at or near 6 Gbps). It might even have less bandwidth if the shrink means fewer memory interfaces fit on the die. So shrinks may not give much more performance (although that may not be the goal of a particular shrink), but they should be a lot more power efficient.
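To illustrate the FLOPS-vs-bandwidth point: a shrink can roughly double the shader count in the same die area while the memory interface stays put. A rough sketch; the GTX 680 baseline is real, the "shrunk" configuration is purely hypothetical:

```python
# How a shrink can raise compute without raising bandwidth.
# GTX 680 baseline is real; the "shrunk" configuration is hypothetical.
def sp_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0   # 2 FLOPs per shader per clock (FMA)

def flops_per_byte(tflops, bandwidth_gbs):
    return tflops * 1000.0 / bandwidth_gbs

bandwidth = 192.0                              # 256-bit @ 6 Gbps, unchanged by the shrink
cases = [("GTX 680 (28nm)", sp_tflops(1536, 1.006)),
         ("hypothetical 20nm shrink, 2x shaders", sp_tflops(3072, 1.006))]

for name, tf in cases:
    print(f"{name}: {tf:.1f} TFLOPS, {flops_per_byte(tf, bandwidth):.0f} FLOPS per byte")
# The compute-to-bandwidth ratio doubles if the memory interface stays the same.
```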
More memory, far better compute, and judging from Titan, smoother, more consistent frame rates. I know that last one may not necessarily scale down, but comparing the 680 to Titan when they're at their closest, I think it would still be a factor.
You shouldn't forget the 50% higher ROP count, the 50% higher TMU count, and the higher geometry throughput either. Anyway, it's ironic how the specs of the 780 are so close to the 7970's, apart from a few more shader cores.
7970 vs 780:
- 384-bit vs 384-bit
- 3GB vs 3GB
- 6 Gbps vs 6 Gbps memory
- 925 MHz (1050 MHz on the GHz Edition) vs 900 MHz, but that's the minimum boost clock (the card should go as high as 950-985 MHz; some reviewers tell me just under 1 GHz)
- ~4.0 TFLOPS SP vs ~4.0 TFLOPS SP
- (For DP I've got contradictory numbers right now)
- 2048 vs 2304 SP
12.5% more shaders, which in software should translate to something like 15-20% more performance (rough numbers below).
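Rough numbers behind that list (peak SP rate = shaders x 2 FLOPs x clock; the 900 MHz for the 780 is the minimum boost clock quoted above, so treat it as a lower bound):

```python
# Peak single-precision rate = shaders * 2 FLOPs * clock.
def sp_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(f"HD 7970: {sp_tflops(2048, 925):.2f} TFLOPS SP (GHz Ed. at 1050 MHz: {sp_tflops(2048, 1050):.2f})")
print(f"GTX 780: {sp_tflops(2304, 900):.2f} TFLOPS SP at the minimum boost clock")
print(f"Shader count difference: {2304 / 2048 - 1:.1%}")   # 12.5%
print(f"Bandwidth, both cards: {384 / 8 * 6:.0f} GB/s")    # 384-bit @ 6 Gbps
```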
According to the extremetech article, Nvidia believes that it'll be Q1 2015 before 20nm becomes worth it in cost terms.
http://www.extremetech.com/computin...y-with-tsmc-claims-22nm-essentially-worthless
Even if TSMC has pulled 20nm in by 6 months, you'd still be looking at Q3 2014. If AMD releases on 20nm this year, surely the cost of the cards is going to be astronomical?
As far as I'm concerned, AMD won't release a new series on 20nm at the end of this year. They already went through this with TSMC once before when Northern Islands was cancelled for 32nm. We know they'll be releasing *something*, though, so the smart money would be on a similar situation to Northern Islands, that is, the new series on the same node.
If by some miracle TSMC actually manage a 2-year cadence this time around, it's feasible that Nvidia will be ready with Maxwell on 20nm before the middle of 2014. For me it's a lot more likely that Nvidia learned from AMD's mistake and Maxwell will be seen on 28nm first as well.
How can a hypothetical GTX 880 be faster than Titan, or at least the GTX 780, if the chip is on 28nm?
It doesn't have to be faster than Titan; that's why Titan got the name. Also, Nvidia don't have to release the GTX 880 first - there's no reason why they wouldn't follow their original plan this time around and release the 860 Ti or 870 first.
No reason until AMD releases its 20nm-based cards - but what if AMD releases those cards before Nvidia, as happened with the 7970...?
20nm will be worth it from the start for Nvidia, especially for the high-end parts where the price is sky-high. Even from the charts shown, price parity with 28nm will occur in Q3 2013, and with TSMC pulling in 20nm production by 3-6 months, the chart should slide forward 3-6 months.
I expect the Kepler-style split between GK110 and GK104 to happen again with Maxwell, so the higher initial cost of 20nm would be somewhat muted.
Also, if any of the rumored 28nm Maxwells come to be, they would sit at the low end of the stack and be replaced with 20nm parts when price parity occurs.
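For what it's worth, here's a very rough sketch of the cost-per-good-die trade-off behind that parity argument. The wafer prices, die sizes, and defect densities are made-up placeholders, not real TSMC numbers - the point is just the shape of the crossover:

```python
import math

# Very rough cost-per-good-die model for the 28nm vs 20nm parity argument.
# All inputs below are made-up placeholders, NOT real TSMC figures.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for gross dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return math.pi * r**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)

def yield_rate(die_area_mm2, defects_per_cm2):
    """Simple Poisson yield model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(wafer_cost, die_area_mm2, defects_per_cm2):
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2, defects_per_cm2))

# Same hypothetical chip: ~350 mm^2 on 28nm vs ~210 mm^2 after a 20nm shrink.
print(f"28nm: ${cost_per_good_die(5000, 350, 0.2):.0f} per good die")   # mature node, decent yield
print(f"20nm: ${cost_per_good_die(8000, 210, 0.5):.0f} per good die")   # pricier wafers, worse yield
# Parity is the point where the 20nm line crosses below the 28nm one
# as wafer prices and defect densities come down over time.
```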