Nvidia Pascal Announcement

Am I the only one who thinks GP106 will have a 128-bit bus? It doesn't make sense for it to have a 192-bit bus if it only has 1280 cores, does it?
On the other hand, there's surely going to be a 192-bit-bus 1060 Ti based on a cut-down GP104, to make use of all those chips that didn't make it into 1070s.
 
If we take this at face value, we have the 1060 at 15% faster than a 480, with 25% less bandwidth and (taking the upper estimate of 200 mm^2) a die that's 15% smaller, on a process that's supposed to have 10% less density.

That's a perf/mm^2 difference of 35% from die size alone, or 50% if you adjust for process. I think Nvidia's margins will do just fine.
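
Spelled out as a quick back-of-envelope sketch (the inputs are just the rumored estimates above, not confirmed specs):

```python
# Rough perf-per-area comparison; every input is an estimate from the rumors.
perf_ratio = 1.15   # rumored 1060 performance vs RX 480
die_ratio  = 0.85   # upper-estimate 200 mm^2 die, ~15% smaller than Polaris 10
density    = 0.90   # assumed density penalty of TSMC 16FF vs GF 14nm

perf_per_mm2 = perf_ratio / die_ratio   # ~1.35 -> "35% in die size alone"
adjusted     = perf_per_mm2 / density   # ~1.50 -> "50% adjusted for process"
print(f"{perf_per_mm2:.2f}x raw, {adjusted:.2f}x process-adjusted")
```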

Too bad that Nvidia doesn't have anything in the sub-$500 range.
NVidia's performance numbers are obviously bonkers.
If the 1700 MHz boost clock is true, then it will be 7-8% under the 980. So around 5% faster than the RX 480 if you average lots of games biased toward both vendors. Still a smaller die, but maybe 15% better perf/mm^2 with GDDR5 and no process adjustment. And if AMD gets GDDR5X, the perf/mm^2 gap should be close to zero with no adjustment.

But even "only" 15% better perf/mm^2 still wins on power and should be cheaper to build thanks to the smaller bus.
 
Does it make sense to have 2304 cores with a 256-bit bus?

Yes, for both cases: the MCs can trivially be decoupled from the SMs/CUs. It could be any ratio, really.
I think I read that a site did a 10% memory OC with a 3% shader OC and got 10% performance scaling. So a 256-bit bus doesn't make any sense with GDDR5. Bottleneck.
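
A rough way to read that kind of OC result (the numbers below are the figures as I recall them, not a verified test):

```python
# If a 10% memory OC (with only a 3% shader OC) yields ~10% more performance,
# scaling is near-linear in bandwidth, the classic signature of a bandwidth
# bottleneck.
mem_oc, shader_oc, perf_gain = 0.10, 0.03, 0.10

scaling = perf_gain / mem_oc   # 1.0 -> fully bandwidth-bound
print(f"perf gained per unit of extra bandwidth: {scaling:.2f}")
# Close to 1.0 suggests the card is starved for bandwidth;
# well under 0.5 would suggest the bus is already adequate.
```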
 
NVidia's performance numbers are obviously bonkers.
I agree, they're crazy if true!

If the 1700 MHz boost clock is true, then it will be 7-8% under the 980. So around 5% faster than the RX 480 if you average lots of games biased toward both vendors. Still a smaller die, but maybe 15% better perf/mm^2 with GDDR5 and no process adjustment.
The 1060 has 1280 cores vs. the 1080's 2560. That's a 2:1 ratio. But with a 192-bit bus, bandwidth is a 10:6 ratio (GDDR5X vs. GDDR5), and ROPs are a 4:3 ratio.
So worst case, the 1060 will have 50% of the 1080's performance in purely shader-limited workloads, but other workloads will nudge it higher.
And wider designs are inherently less efficient, so that should help the 1060 as well.
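
For reference, those ratios worked through, assuming the announced 1080 config and the rumored 1060 one (1280 cores, 192-bit GDDR5 at an assumed 8 GT/s):

```python
# GTX 1080: 2560 cores, 256-bit GDDR5X @ 10 GT/s, 64 ROPs (announced)
# GTX 1060: 1280 cores, 192-bit GDDR5  @  8 GT/s, 48 ROPs (rumored/assumed)
cores_1080, cores_1060 = 2560, 1280
bw_1080 = 256 // 8 * 10    # 320 GB/s (bus width in bytes x transfer rate)
bw_1060 = 192 // 8 * 8     # 192 GB/s
rops_1080, rops_1060 = 64, 48

print(cores_1060 / cores_1080)   # 0.50 -> 2:1 cores
print(bw_1060 / bw_1080)         # 0.60 -> 10:6 bandwidth
print(rops_1060 / rops_1080)     # 0.75 -> 4:3 ROPs
```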

And if AMD gets GDDR5X, the perf/mm^2 gap should be close to zero with no adjustment.
I'm assuming GDDR5 for both the 1060 and 480, so apples to apples. Who knows if Polaris (or the 1060) even supports GDDR5X.
 
NVidia's performance numbers are obviously bonkers.
If the 1700 MHz boost clock is true, then it will be 7-8% under the 980. So around 5% faster than the RX 480 if you average lots of games biased toward both vendors. Still a smaller die, but maybe 15% better perf/mm^2 with GDDR5 and no process adjustment. And if AMD gets GDDR5X, the perf/mm^2 gap should be close to zero with no adjustment.

But even "only" 15% better perf/mm^2 still wins on power and should be cheaper to build thanks to the smaller bus.

That's a lot of ifs; let's cut them down. We know the 1060 can match the 480, most likely edge it out, and use less power. It's a smaller chip, so margins will be better.
 
The 1060 has 1280 cores vs. the 1080's 2560. That's a 2:1 ratio. But with a 192-bit bus, bandwidth is a 10:6 ratio (GDDR5X vs. GDDR5), and ROPs are a 4:3 ratio.
So worst case, the 1060 will have 50% of the 1080's performance in purely shader-limited workloads, but other workloads will nudge it higher.
And wider designs are inherently less efficient, so that should help the 1060 as well.
Use the 980 instead of the 1080. It matches up in effective bandwidth and FLOPs, and the rest of its specs are probably very similar to the rumored 1060's so far.


I'm assuming GDDR5 for both the 1060 and 480, so apples to apples. Who knows if Polaris (or the 1060) even supports GDDR5X.
I say this because the RX 480 seems memory bottlenecked and the 980 isn't, so I assume the 1060 won't be either. So going to GDDR5X would barely benefit the 1060 while greatly benefiting the RX 480. It would make the RX 480 even more expensive relative to the 1060, though.
 
Use the 980 instead of the 1080. It matches up in effective bandwidth and FLOPs, and the rest of its specs are probably very similar to the rumored 1060's so far.
It has a different number of SMs per GPC, different cache ratios, different core-to-memory clock ratios, and different compression efficiency. Why make it more complex than it needs to be?

I say this because the RX 480 seems memory bottlenecked and the 980 isn't, so I assume the 1060 won't be either. So going to GDDR5X would barely benefit the 1060 while greatly benefiting the RX 480. It would make the RX 480 even more expensive relative to the 1060, though.
There is no such thing as a universal bottleneck; it shifts all the time. All the more reason to compare against GPUs that are as similar as possible.
 
But at $299 they can probably make bank and tell people what a good deal they are getting :nope: and most will believe it.
Well, with the RX 480 "powergate" fiasco, they have no reason to be more aggressive than necessary, and that's sad for all of us poor consumers...
 
I say this because the RX 480 seems memory bottlenecked and the 980 isn't, so I assume the 1060 won't be either. So going to GDDR5X would barely benefit the 1060 while greatly benefiting the RX 480.
We can test that assertion by looking at this excellent review comparing the RX 480 4GB at 7GT/sec to the same 4GB at 8GT/sec and finally to 8GB at 8GT/sec.
Increasing memory bandwidth by 14% from 7GT/sec to 8GT/sec gave a consistent but minor 4% boost to FPS. This shows extra bandwidth is useful, but the significantly sublinear FPS boost implies it's not the critical bottleneck. Boosting further with GDDR5X would be wasted.
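
Put as numbers, with the same heuristic as the earlier OC discussion (figures taken from the review above):

```python
# A 14% bandwidth bump (7 -> 8 GT/s) buying only ~4% FPS is well under the
# ~1.0 ratio you'd expect if bandwidth were the hard limit.
bw_gain, fps_gain = 8 / 7 - 1, 0.04   # ~0.143 and 0.04

scaling = fps_gain / bw_gain          # ~0.28
print(f"fps gained per unit of extra bandwidth: {scaling:.2f}")
# ~0.28 is clearly sublinear: bandwidth helps, but it isn't the dominant
# bottleneck, so GDDR5X would mostly be wasted here.
```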

The other conclusion from the comparison is that 4GB vs 8GB is irrelevant. There's no real advantage of 8GB in either game at any resolution including 4K. There may be other texture-heavy games where this would matter, of course.

This memory size datapoint might apply to the GTX 1060 too... 3GB may indeed be enough for a mid-level GPU. But as ninelven pointed out, 3GB just feels dirty.
 
I wonder if 8 gigs would matter in Skyrim: Now 64 Bit Edition and the texture mods that will come out for it after it releases... The RX 480's not gonna have a super hard time running it.
 
There are cases where more than 4GB definitely makes a difference so 3GB, dirty or not, is something I would definitely not want.

[Image: CmIBi-vUoAABSDl.jpg]
 
^ If that is the "ultra" texture setting in RotTR, it's pretty much a mess and will strangle even an 8 GB VRAM GPU given enough time. I played around 40 hours of that game and have to say that's not a good way to measure VRAM requirements (usage goes up to 10-11 GB on the Titan X at 1080p...). Although I still agree that 3-4 GB of VRAM doesn't quite cut it anymore; I wish both AMD and Nvidia would just skip those versions.
 
Increasing memory bandwidth by 14% from 7GT/sec to 8GT/sec gave a consistent but minor 4% boost to FPS. This shows extra bandwidth is useful, but the significantly sublinear FPS boost implies it's not the critical bottleneck. Boosting further with GDDR5X would be wasted.
What's most interesting about this exercise is that it once again confirms AnandTech's findings from back in the GTX 980 days: there's less than a 50% ratio between memory bandwidth increases and the corresponding performance gains.

When GDDR5X eventually reaches 14 Gbps, 40% more than today, that should be more than sufficient to sustain GPU performance that's 80% higher than today's, which should exceed the expected performance improvements of next-generation processes.
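
A quick check of that claim, using the <50% scaling ratio observed above (the 10 Gbps baseline and the 0.5 ratio are assumptions):

```python
# How much extra GPU performance a 14 Gbps GDDR5X bus could sustain,
# given that performance needs less than half a matching bandwidth increase.
bw_increase   = 14 / 10 - 1   # 40% over today's assumed 10 Gbps GDDR5X
scaling_ratio = 0.5           # observed: <50% perf gain per bandwidth gain

sustainable_perf = bw_increase / scaling_ratio   # 0.80 -> ~80% faster GPU
print(f"bandwidth headroom supports ~{sustainable_perf:.0%} more performance")
```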
 
Increasing memory bandwidth by 14% from 7GT/sec to 8GT/sec gave a consistent but minor 4% boost to FPS. This shows extra bandwidth is useful, but the significantly sublinear FPS boost implies it's not the critical bottleneck. Boosting further with GDDR5X would be wasted.
From a pure performance point of view, overall probably yes. But let's not forget that GDDR5X runs at significantly lower voltage than regular 8 Gbps GDDR5 (1.35 V vs. 1.5 V). I don't know whether that would offset the higher data rate in the PHYs, but it might still give Polaris 10 a slight nudge: a few extra watts to burn on the ASIC instead of the memory.
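
As a very rough first-order C·V^2·f estimate (this ignores PHY, termination, and prefetch differences, and the data rates are only illustrative):

```python
# Relative dynamic switching power of GDDR5X vs GDDR5, modeled as V^2 * f.
# Treat this as a sketch only; real memory power has many more terms.
v_g5,  f_g5  = 1.50, 8     # GDDR5:  1.5 V at an assumed 8 Gbps
v_g5x, f_g5x = 1.35, 10    # GDDR5X: 1.35 V at an assumed 10 Gbps

rel_power = (v_g5x**2 * f_g5x) / (v_g5**2 * f_g5)
print(f"GDDR5X dynamic power vs GDDR5: {rel_power:.2f}x")   # ~1.01
# Roughly a wash on switching power for ~25% more bandwidth, i.e. the lower
# voltage approximately offsets the higher data rate in this simple model.
```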

Which makes me wonder whether Polaris 10 was planned for GDDR5X all along and AMD had to drop it because of the targeted price point.
 
From a pure performance point of view, overall probably yes. But let's not forget that GDDR5X runs at significantly lower voltage than regular 8 Gbps GDDR5 (1.35 V vs. 1.5 V). I don't know whether that would offset the higher data rate in the PHYs, but it might still give Polaris 10 a slight nudge: a few extra watts to burn on the ASIC instead of the memory.

Which makes me wonder whether Polaris 10 was planned for GDDR5X all along and AMD had to drop it because of the targeted price point.
Raja said in an interview that the Polaris targets were chosen about two years ago.
 