http://www.techpowerup.com/216203/nvidia-gp100-silicon-moves-to-testing-phase.html
GP100 silicon being tested
"Isn't the news that NVidia will have HBM2 up and running before the end of this year?"
How does "Samsung starts mass producing early 2016" turn into "up and running before the end of this year"? Of course there will be samples earlier, hell, they have to have them already to do any tests on the chips, but that's a far cry from anything relevant.
We already knew that Samsung was planning to make HBM2:
https://forum.beyond3d.com/posts/1867707/
People seem to be assuming that GP100 comes first, but I'm dubious that the largest Pascal with HBM2 will come first.
Why?
The past two generations both debuted with a smaller part, with the Big chip coming 10-12 months later.
The supercomputing crowd will wait for as long as it takes to get GP100; they aren't going to buy anything else. So if it's autumn 2016 or spring 2017 it's kinda immaterial - NVidia will demonstrate it at SC2016.
That sounds like a reason to ship GP100 as early as possible, because it is also delaying their own revenue and giving Intel a chance with Xeon Phi. Not many people are going to commit to a large-scale Kepler installation if they believe Pascal is around the corner.
I'm still betting that this is the generation where NVIDIA stops making "HPC+GPU" ultra-high-end chips and makes truly dedicated HPC chips without a rasteriser - GK210 is already a big step in that direction and it makes sense to go all the way (IMO). I'm half-expecting something closer to Fermi, where the big GPU tapes out first, although given the longer certification times the smaller desktop ones might still be publicly available first.
"The past two generations both debuted with a smaller part, with the Big chip coming 10-12 months later."
Except Intel is eating small, and now also mid-sized, GPUs' lunch with their recent integrated graphics, especially the Iris editions with the eDRAM chip. Profits from these GPUs thus ought to be shrinking, while big chips drive excitement and are good PR (unless you're AMD and fuck up).
No one cares about Xeon Phi that I know of, and bigger chips take longer to make, so they come out later. Not to mention they need a mature process unless you want to waste a lot of money on chips with a lot of defects.
So unless Nvidia is planning on eating its own profits in order to gain market share (possible, but probably not), GP100 or Big Pascal or whatever will all but certainly take some time to actually be released. They could always do a paper launch and then ramp up to actual production, but that doesn't seem particularly better.
"No one cares about Xeon Phi that I know of, and bigger chips take longer to make so they come out later."
I don't know where the 'longer to make' comes from, but I see no reason why they couldn't do a big die Pascal first.
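To put rough numbers on the "waste a lot of money on chips with a lot of defects" point: a common back-of-envelope approximation is the Poisson yield model, yield = exp(-die area * defect density). The sketch below (plain C, which also compiles as CUDA host code) uses assumed figures throughout - the wafer cost, defect density, and the 300/600 mm^2 die sizes are illustrative, not actual TSMC 16FF numbers - and it ignores edge losses and harvesting of partly defective dies. Still, it shows why doubling die area on a young process far more than doubles the cost per good die, and also why Tesla-class selling prices can make that extra cost look tolerable.

#include <math.h>
#include <stdio.h>

int main(void) {
    const double wafer_cost     = 6000.0;   // assumed price of one 300 mm wafer, USD
    const double wafer_area     = 70686.0;  // area of a 300 mm wafer in mm^2 (pi * 150^2)
    const double defect_density = 0.2;      // assumed defects per cm^2 on a young process

    const double die_mm2[] = { 300.0, 600.0 };  // assumed sizes: a mid-size part vs. a hypothetical big GP100
    for (int i = 0; i < 2; ++i) {
        double area_cm2       = die_mm2[i] / 100.0;               // mm^2 -> cm^2
        double yield          = exp(-area_cm2 * defect_density);  // Poisson yield model
        double dies_per_wafer = wafer_area / die_mm2[i];          // ignores edge losses
        double good_dies      = dies_per_wafer * yield;
        printf("%4.0f mm^2 die: yield %2.0f%%, ~%3.0f good dies/wafer, ~$%3.0f per good die\n",
               die_mm2[i], 100.0 * yield, good_dies, wafer_cost / good_dies);
    }
    return 0;
}

With these assumed inputs the 600 mm^2 die yields roughly 30% against roughly 55% for the 300 mm^2 die, so the cost per good big die comes out between three and four times higher, not merely double.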
I agree with silent_guy that the price of Teslas makes the low yield argument against starting with a big GPU less compelling. But aren't validation/testing requirements for these HPC-targeted chips much more stringent than for consumer GPUs? I guess they can get around some of that by leading with single-chip products like Intel does with Xeons, so that NVLink and multi-socket cache coherency (if that's supported) doesn't need to work right off the bat. There were also rumors about the Pascal shader architecture being the same as Maxwell, so that presumably helps.
I don't know where these rumors come from (like Pascal being a 16nm version of Maxwell), but from the information we have, I see Pascal as a big jump: a nearly two-node jump, a different memory interface, a different ALU with optimized 16/32/64 FP mixed-precision support, NVLink, and the necessary secret features that will be disclosed at product introduction. It doesn't sound like a walk in the park...
Mixed precision support can't be done with the current shader array structure on Maxwell. It's not something that is simple to just tack on either. The ALU structure might be similar, but everything feeding the array and storing the data necessary for the array to function optimally (cache) will have to be different.
One of the great gifts of hardware.fr is this page, where all the recent SMs with their internal architecture are placed next to each other and you can immediately see the changes as you mouse over the different links. Compare a GK110 against a GK104 and it's obvious that, in broad strokes, there isn't much different in architecture. There is no reason to believe that something similar can't be done for the Maxwell SM. In other words, I do think that it's mostly a matter of tacking on more FP64 units.
Strange, looks like I've missed the news that Nvidia had doubled the texture cache in GM200.
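On the mixed-precision point: for anyone wondering what "optimized 16/32/64 FP mixed precision" means in practice, here is a minimal device-side CUDA sketch. It assumes cuda_fp16.h and a GPU with native FP16 arithmetic (compute capability 5.3+, i.e. Tegra X1 at the time of this thread); the kernel name and the AXPY-style example are made up for illustration, not anything NVIDIA has confirmed about Pascal. FP16 values are packed two to a 32-bit register and issued as paired operations, FP32 takes the ordinary CUDA-core path, and FP64 goes to the separate double-precision units - which is why "just tack on more FP64 units" and "everything feeding the array has to change" are both defensible readings of the same block diagram.

#include <cuda_fp16.h>

__global__ void mixed_precision_axpy(const __half2 *x16, __half2 *y16,
                                     const float *x32, float *y32,
                                     const double *x64, double *y64,
                                     float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // FP16: one instruction performs two fused multiply-adds on a packed half2 register.
    __half2 a16 = __float2half2_rn(a);
    y16[i] = __hfma2(a16, x16[i], y16[i]);

    // FP32: the ordinary single-precision path through the CUDA cores.
    y32[i] = fmaf(a, x32[i], y32[i]);

    // FP64: runs on the dedicated double-precision units; how many of those sit in each SM
    // is exactly the GK110-vs-GK104 "tack on more FP64 units" question discussed above.
    y64[i] = fma((double)a, x64[i], y64[i]);
}

A launch would look like mixed_precision_axpy<<<blocks, threads>>>(...), with the half2 arrays holding n packed pairs alongside n floats and n doubles.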