> Hey look everyone, pcper has a special camera that was able to see through a black curtain back in early January:

How about we agree that you scored on the black curtain point, and that I scored on being right about AMD not only failing miserably at execution, but also doing so very publicly by prancing around with it, crowing about how far ahead they were, and then not delivering.
> No, the 7970 was high end, with a die close to the max feasible, as was its 250W TDP.

Using this kind of argument penalizes those with the best engineering: you're able to make a 300mm2 die with better perf/W, perf/mm2 and better absolute perf than a 360mm2 die (as was the case at the GTX 680 launch)?
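(A quick back-of-the-envelope sketch of that point, in Python; the die areas are the post's round figures, and the relative performance number is purely an illustrative assumption, not a benchmark result.)

```python
# Rough perf/mm2 comparison for the GTX 680 launch scenario described above.
# Die areas are the post's round figures; rel_perf is an illustrative
# assumption (smaller die slightly faster), not a measured result.
die_small_mm2 = 300   # GTX 680-class die
die_large_mm2 = 360   # HD 7970-class die
rel_perf = 1.05       # assumed: smaller die ~5% faster in absolute terms

perf_per_mm2_ratio = (rel_perf / die_small_mm2) * die_large_mm2
print(f"perf/mm2 advantage of the smaller die: {perf_per_mm2_ratio:.2f}x")
# ~1.26x: even rough parity in absolute perf translates into a clear
# perf/mm2 win for the smaller die, which is the engineering point here.
```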
> I don't have any hard evidence of whether there was a max die size for all the revisions of the 28nm process during the last 4 years, so feel free to enlighten me.

The maximum die size doesn't change over the lifetime of a process. It's simply defined by the stepper size. It should be somewhere in the 700+mm2 range.
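(For context, a minimal sketch of where a 700+mm2 ceiling comes from; the 26mm x 33mm figure is the commonly cited maximum reticle field for modern steppers, used here as an assumption since the post doesn't name a tool.)

```python
# The maximum die size is bounded by the stepper's reticle (exposure) field.
# 26mm x 33mm is the commonly cited full-field size for modern steppers;
# practical dies stay below it to leave room for scribe lines and margins.
reticle_w_mm = 26
reticle_h_mm = 33
print(f"full reticle field: {reticle_w_mm * reticle_h_mm} mm^2")  # 858 mm^2
# A practical ceiling "somewhere in the 700+mm2" range is consistent with
# this hard limit minus design margins.
```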
> For those who like LEDs...

So if I get these two, I'd be set for Battlefield, eh?
Sorry, buddy, you're mid-end, while your less able competitor is allowed to call himself high-end.
If 10nm is a year late, we could see a big GPU, since nvidia obviously already has GP100, which they could eventually release as a consumer card once the process ramps to really good yields and demand for the card dies down a bit.
> I am very sceptical of that chip ever becoming a Geforce. A 450-480mm2 GP102 with the same SM count as GP100, dropped FP64 baggage and 384-bit GDDR5X should be a far better Geforce than GP100 would ever be.

Indeed, that would make perfect sense.
> I am very sceptical of that chip ever becoming a Geforce. A 450-480mm2 GP102 with the same SM count as GP100, dropped FP64 baggage and 384-bit GDDR5X should be a far better Geforce than GP100 would ever be.

+1000
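(A minimal bandwidth sketch for that hypothetical GP102; the 384-bit bus is the post's figure, while the 10GT/s GDDR5X data rate is an assumption borrowed from the GTX 1080's launch spec.)

```python
# Peak memory bandwidth in GB/s = (bus width in bytes) * (data rate in GT/s).
def peak_bw_gbs(bus_bits: int, data_rate_gts: float) -> float:
    return bus_bits / 8 * data_rate_gts

# Hypothetical GP102: 384-bit GDDR5X (post's figure) at an assumed 10 GT/s.
print(peak_bw_gbs(384, 10.0))  # 480.0 GB/s
# For comparison, a 980 Ti's 384-bit GDDR5 at 7 GT/s:
print(peak_bw_gbs(384, 7.0))   # 336.0 GB/s
```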
> I thought the GP100 was reportedly lacking in ROP count and other consumer stuff, in order to make room for more computational resources.

No-one knows, as they have only reported the co-processor/accelerator aspects of the Pascal architecture in the context of Tesla.
> I thought the GP100 was reportedly lacking in ROP count and other consumer stuff, in order to make room for more computational resources.

According to what reports?
Now you are turning it into a comparison of competitors.
Fact remains that the 1080, while being a great achievement, doesn't stretch the design parameters to the max.
There is clearly headroom for a higher-end card that will appeal more to people who currently have a 980 Ti, Titan X or Fury X.
> To be honest, if nvidia manages to put a GP104 running at 2GHz base / ~2.4GHz boost clocks, coupled with 12GT/s GDDR5X while keeping power consumption under 275W, then they don't really need to make a GP102.
> Get an AiO liquid cooler to make it silent and more premium, sell it for $750 and there's your 980 Ti.

I'm sure the AIB partners are already sketching up their custom boards with this in mind.
To be honest, if nvidia manages to put a GP104 running at 2GHz base / ~2.4GHz boost clocks, coupled with 12GT/s GDDR5X while keeping power consumption under 275W, then they don't really need to make a GP102.
Get an AiO liquid cooler to make it silent and more premium, sell it for $750 and there's your 980 Ti.
Back in 2008, nvidia got through an entire year using only G92 from the upper-mid to top-end ranges, and they were very successful at it.
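(Putting rough numbers on that scenario: the stock figures below are the reference GTX 1080's, i.e. 1607MHz base and 10GT/s GDDR5X on a 256-bit bus; the 2GHz base and 12GT/s numbers are the post's hypothetical.)

```python
# How much headroom the hypothetical 2GHz / 12GT/s GP104 would buy over
# the reference GTX 1080 (1607MHz base, 10 GT/s GDDR5X, 256-bit bus).
stock_base_mhz, oc_base_mhz = 1607, 2000   # reference vs post's hypothetical
stock_mem_gts, oc_mem_gts = 10.0, 12.0     # GDDR5X data rates
bus_bytes = 256 // 8                       # 256-bit bus width in bytes

print(f"compute uplift:  {oc_base_mhz / stock_base_mhz:.2f}x")   # ~1.24x
print(f"bandwidth:       {bus_bytes * oc_mem_gts:.0f} GB/s")     # 384 GB/s
print(f"stock 1080:      {bus_bytes * stock_mem_gts:.0f} GB/s")  # 320 GB/s
# 384 GB/s would also edge out the 980 Ti's 336 GB/s.
```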
> However GP104 is not and will not be the Pascal Titan, and typically the Titan chip gets a cut SKU as well. Maybe Titan is 2017 or Christmas 2016, but it's coming and it won't be GP104.

Why not? Besides memory amount and price, Titan as a brand is a moving goalpost.