Nvidia Pascal Speculation Thread

I don't get it; it has to have HBM2. It almost can't be the card quoted with just GDDR5, because otherwise performance would be capped at the same level as the 980 (non-Ti), which is already stretching 256-bit/7Gbps of bandwidth as it is. It's almost certainly not going to have a 512-bit bus, because between GDDR5X and HBM2, who would waste the die space on that? And it can't have GDDR5X because that's not even in production yet.

I can see them waving a card with sample GDDR5X in your face and then launching some other, mid-tier card in June that hits 980-like performance with GDDR5, but not anything worthy of 8GB of RAM and/or the high end of their numbering scheme.
 
I understand the attraction of HBM, but there is very little evidence that memory is THE biggest performance limiter.

And a 256-bit bus at 11Gbps GDDR5X would have 57% more BW than 7Gbps GDDR5.

That should be plenty for a gm204 successor.
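
To put numbers on that 57% claim, here's a quick back-of-the-envelope sketch (peak bandwidth is just bus width times per-pin data rate; the figures are the ones quoted above):

```python
# Peak GDDR bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def peak_bw_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

gddr5 = peak_bw_gbs(256, 7)    # 224 GB/s (GTX 980 class)
gddr5x = peak_bw_gbs(256, 11)  # 352 GB/s
print(f"{gddr5x:.0f} vs {gddr5:.0f} GB/s: +{(gddr5x / gddr5 - 1) * 100:.0f}%")  # prints +57%
```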
 
I understand the attraction of HBM, but there is very little evidence that memory is THE biggest performance limiter.

And a 256-bit bus at 11Gbps GDDR5X would have 57% more BW than 7Gbps GDDR5.
I think the question is really whether GDDR5X will be here soon enough.
That said, those Maxwell chips are very efficient with memory bandwidth. I'm still quite amazed at what your typical GM108 can do with 64-bit DDR3... That GM108 has a bandwidth/ALU ratio which is not even half that of your typical (full-blown) GM204 implementation (rough numbers in the sketch below). Now, I certainly wouldn't recommend that ratio for a high-end chip (those GM108s might be amazing considering it's just 64-bit DDR3, but they ARE limited quite a bit by bandwidth after all). But give that "GP204" 8Gbps GDDR5, double the L2 cache (or quadruple it; GM108 probably already has 4 times as much cache per 64-bit partition to help cope with the low bandwidth), add 50% more SMMs, and that might just be enough, together with some minor architectural improvements and higher clocks thanks to the newer manufacturing node, to beat the GTX 980 Ti without anything fancy on the memory side at all.
Using GDDR5X would be preferable, though, if the timeframes match up.
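
For a rough feel of that bandwidth/ALU gap, here's a sketch using approximate public specs (the DDR3 data rate on GM108 boards varies; 2.0 Gbps per pin is assumed here):

```python
# Rough bandwidth-per-ALU comparison; specs are approximate public figures.
chips = {
    # name: (bus width in bits, per-pin data rate in Gbps, ALU count)
    "GM108 (e.g. 840M)": (64, 2.0, 384),
    "GM204 (GTX 980)":   (256, 7.0, 2048),
}

for name, (bus, rate, alus) in chips.items():
    bw = bus * rate / 8  # GB/s
    print(f"{name}: {bw:.0f} GB/s total, {bw / alus * 1000:.0f} MB/s per ALU")

# GM108 lands around 42 MB/s per ALU vs ~109 for GM204 -- well under half,
# which is the ratio the post refers to.
```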
 
Is GDDR5X actually going to be as good as they claim? There is next to no info on products that might use it besides speculation. In contrast, both AMD and Nvidia have confirmed HBM, at least on high-end cards. It also seems like no manufacturer other than Micron, who designed the spec, is even jumping onto GDDR5X production. You'd think there would be more industry buzz, but the only talk about GDDR5X seems to be media outlets with rumors that don't line up.

If it is as easy to use and as good as it's claimed to be, it would be a no-brainer for a lot of manufacturers to jump on board, given how expensive HBM is and will be, and how many graphics cards would make use of GDDR5X instead if they could.
 
AMD seems to be ready with Polaris 11; using GDDR5X on GP104 would create a big gap between launches (3 months). So they probably kept GDDR5X for a 2016/17 refresh (GTX 1085/1075).

It may end up that both companies will offer 980 Ti/Fury performance + 8GiB + new architectures + high perf/W at ~$499, which would be great for customers and, with ~250mm² dies + GDDR5, profitable for both.
 
I suspect AMD is aiming for lower performance for the summer release: constrained by whatever performance a 256-bit memory bus can provide. NVidia is much better at performance per unit bandwidth, so AMD has got to reach that level first.
 
I do wonder if fans of either company will be disappointed until Q3 and later; the information released so far from both companies only seems to suggest low-end GPUs initially coming out on 14/16nm.
It makes me wonder if both companies are having problems (whether technical or something else) getting these to work at larger die sizes and higher power targets, which would suck.

Cheers
 
I suspect AMD is aiming for lower performance for the summer release: constrained by whatever performance a 256-bit memory bus can provide. NVidia is much better at performance per unit bandwidth, so AMD has got to reach that level first.


We should know more about AMD's side this Monday, it seems.
 
Is GDDR5X actually going to be as good as they claim? There is next to no info on products that might use it besides speculation.
There aren't many factors that determine the goodness of a DRAM. It's intended to be an incremental improvement of what came before, so that's what we'll get.

In contrast, both AMD and Nvidia have confirmed HBM, at least on high-end cards.
I almost (but not really) wish that AMD hadn't talked as much about Polaris. Disclosing new features is suddenly seen by some as the new normal, and not talking is considered suspicious, even though the latter was the norm not so long ago... except for HBM and some vague peak floating-point numbers.

I don't see a reason for either company to talk up GDDR5X as something to look forward to. After AMD's HBM-is-the-future show last year, the best reaction you can hope for is 'Meh', which is exactly what you're seeing in many forums.

Actual performance benchmarks will have to convince people that HBM isn't needed for a product of that category.

It also seems like no manufacturer other than Micron, who designed the spec, is even jumping onto GDDR5X production. You'd think there would be more industry buzz, but the only talk about GDDR5X seems to be media outlets with rumors that don't line up.
That's true, but ...

If it is as easy to use and as good as it's claimed to be, it would be a no-brainer for a lot of manufacturers to jump on board, given how expensive HBM is and will be, and how many graphics cards would make use of GDDR5X instead if they could.
It could be that expected volumes will be low, like HBM, but without its high margins. I don't think you'll see GDDR5X in the low end for the foreseeable future, and the highest end has been taken by HBM. GDDR5 has quite a bit of volume outside of GPUs as well. So it's a pretty narrow market.
 
I do wonder if fans of either company will be disappointed until Q3 and later; the information released so far from both companies only seems to suggest low-end GPUs initially coming out on 14/16nm.
The information released so far from Nvidia suggests only high-end GPUs to me: the super high-end big compute Pascal and, if you can consider yesterday's leak a release of information, GP104, which I don't consider low end at all. AFAIK, we haven't heard a thing about anything low end.

It makes me wonder if both companies are having problems (whether technical or something else) getting these to work at larger die sizes and higher power targets, which would suck.
As a rule of thumb, it's best to take the position that there are no problems unless there are very strong indications that there are. Both companies now seem to be launching at roughly the same time, which suggests that process availability was the limiting factor, just like it was for 28nm. And there haven't been any rumors about 16nm being a problem.
 
They also, once again, mix GDDR5X into the rumour, even though the only GDDR5X manufacturer has said that mass production will start "during summer" (and once they specified August).
Mass production starting later doesn't mean no production now, as far as I know. SK Hynix didn't announce commencement of volume production of HBM until a week before the launch of the Fury X. Given the lead time required for GPU+HBM+interposer packaging, test, and verification, initial supplies must have been going on for some months. It is already known that Micron was sampling GDDR5X well in advance of JEDEC specification ratification, so it doesn't seem beyond the realms of possibility that Nvidia (and probably AMD) are able to receive chips in advance of volume production.

Of course, whether the volumes required for a presumably larger run of GP104 cards compared to the Fury series present a challenge, I wouldn't care to guess. By the sounds of it, eight relatively simple chips with a little less assembly tolerance would need to be compared with the four more complex HBM stacks and their TSV/microbump yields for each Fiji package. I suppose the other variable would be how long discussions have been taking place between Micron and the IHVs and how complicated revised memory controller logic is.
I still doubt May would be a hard launch unless TSMC are doing something miraculous right now. Seems both IHVs are involved in some hearts-and-minds PPS campaign of future-product one-upmanship.
 
I suppose the other variable would be how long discussions have been taking place between Micron and the IHVs and how complicated revised memory controller logic is.
How would that be a factor? The memory controller is either part of the current silicon or it's not. And the difference between GDDR5 and GDDR5X is very limited.

I still doubt May would be a hard launch unless TSMC are doing something miraculous right now.
Why?
 
How would that be a factor? The memory controller is either part of the current silicon or it's not. And the difference between GDDR5 and GDDR5X is very limited.
It still requires revision from a standard GDDR5 IMC, does it not? So when the discussions regarding incorporation of GDDR5X tech took place would affect the development timeline. I doubt development and logic layout are an overnight job.
If GP104 is due for a hard launch in May, doesn't that require TSMC to be fabbing silicon now, having started even earlier once you take into account GPU packaging, board assembly, and shipping? Last time I checked, a complex IC takes ~2 months to fab. Add in 2-3 weeks to assemble and package the boards and organize shipping (assuming the BIOS, QA, etc. have already passed muster), and TSMC must have begun fabbing when their capacity was 40K wafers a month (assuming no loss of capacity at Fab 14(B) from the earthquake), of which I am assuming the lion's share went to Apple. The Chinese SoC customers also seem to have sizable orders, and Xilinx is already shipping Zynq UltraScale+ using the process.

Does this not sound reasonable even if you discount, as Grall noted, that there have been no leaks of production/pre-production QA boards doing the rounds?
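
To put that back-calculation in one place, here's a rough sketch (the launch date and durations are just the figures from the post above, not real data):

```python
# Working backward from a hypothetical hard launch at the end of May.
from datetime import date, timedelta

launch = date(2016, 5, 31)
assembly_and_shipping = timedelta(weeks=3)  # boards: ~2-3 weeks
wafer_fab = timedelta(weeks=9)              # complex IC: ~2 months in the fab

first_wafer_starts = launch - assembly_and_shipping - wafer_fab
print(f"Wafers would need to start by ~{first_wafer_starts}")  # ~2016-03-08
```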
 
The information released so far from Nvidia suggests only high-end GPUs to me: the super high-end big compute Pascal and, if you can consider yesterday's leak a release of information, GP104, which I don't consider low end at all. AFAIK, we haven't heard a thing about anything low end.


As a rule of thumb, it's best to take the position that there are no problems unless there are very strong indications that there are. Both companies now seem to be launching at roughly the same time, which suggests that process availability was the limiting factor, just like it was for 28nm. And there haven't been any rumors about 16nm being a problem.
Fair enough. The only solid reports I remember reading (still rumours, though, and each of us takes our own perspective on these) are that NVIDIA is releasing the mobile Pascals first and AMD is releasing a smaller, power-efficient GPU first.
The way I see this making sense: if AMD released a larger-die, higher-power 14nm GPU, there would be no point in releasing a dual-GPU Fury; it would become obsolete very quickly.
For NVIDIA, the news about the 1080 seems to include some unusual specs for a high-end card, such as GDDR5X (which seems like it will still take a little while to be implemented) and a display output spec lower than the 980's.
Personally, I think the only solid leads to date are the mobile GPUs from NVIDIA and that smaller, power-efficient GPU from AMD; neither is ideal for the enthusiast.
Cheers
 
Sorry, just thought of this about the memory spec of that 1080...
So the new rumoured 1080 GPU has 8GB on a 256-bit memory bus, and yet the only previous Maxwell-generation cards with greater than 4GB of VRAM used a 384-bit bus with 6GB or 12GB.
And a 256-bit bus is crippling IMO unless it's utilising the full capacity of GDDR5X.
Surely they would still go with the 384-bit bus for memory greater than 4GB?
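
For what it's worth, capacity is set by the density of the memory chips rather than the bus width, so 8GB on 256-bit would just need 8Gb dies (a quick sketch; one 32-bit chip per channel, no clamshell mode, is assumed):

```python
# Capacity follows chip density, not bus width: a GDDR5(X) bus is built from
# 32-bit channels with one chip each.
def capacity_gb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32           # one 32-bit chip per channel
    return chips * chip_density_gbit / 8   # Gbit -> GB

print(capacity_gb(256, 4))  # 4.0 -> GTX 980: 8 x 4Gb chips
print(capacity_gb(384, 4))  # 6.0 -> GTX 980 Ti: 12 x 4Gb chips
print(capacity_gb(256, 8))  # 8.0 -> 8Gb chips put 8GB on a 256-bit bus
```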

Cheers
 
It still requires revision from a standard GDDR5 IMC, does it not? So when the discussions regarding incorporation of GDDR5X tech took place would affect the development timeline. I doubt development and logic layout are an overnight job.
It's not an overnight job. But whatever work had to be done was done a long time ago. If a GDDR5 GP104 is introduced soon, a GDDR5X GP104 could be introduced at the same time as long as there's availability. So the additional complexity of the MC is not something that would impact the introduction of GP104 by itself.

Does this not sound reasonable even if you discount, as Grall noted, that there have been no leaks of production/pre-production QA boards doing the rounds?
For all we know, the GP104 silicon could already be making its way through the fab right now. We're still 2 1/2 months away from the end of "sometime in May". If we haven't seen any real hints about production QA boards a bit over a month from now, I'll agree with you. But if first production silicon comes out of the fab sometime in mid-April, and you have your just-in-time ducks in a row, a month should be sufficient to get first batches on the shelves.
 
I remember reading are that NVIDIA is releasing the mobile Pascals first, ...
I may have missed that...

The way I see this making sense: if AMD released a larger-die, higher-power 14nm GPU, there would be no point in releasing a dual-GPU Fury; it would become obsolete very quickly.
It's another AMD mystery: why is dual Fiji so late? It could be completely obsolete very soon, only marginally faster than something like a single die big Pascal, without enough memory.

For NVIDIA, the news about the 1080 seems to include some unusual specs for a high-end card, such as GDDR5X (which seems like it will still take a little while to be implemented) and a display output spec lower than the 980's.
As someone who's not interested in buying any of this, I'm hoping to see a GDDR5 version first, just to see how much performance can still be squeezed out of plain vanilla GDDR5. We know that increasing the clock speeds on a GM204 doesn't have huge performance benefits, so I still expect decent improvements from shader performance increases alone.

I don't buy the rumor of 2 DP ports only... That doesn't make any sense at all.
 
I may have missed that...
Rumours have been persistent for some weeks.
It's another AMD mystery: why is dual Fiji so late? It could be completely obsolete very soon, only marginally faster than something like a single die big Pascal, without enough memory.
Probably different markets. The scuttlebutt for some time has touted Gemini/Fury X2 as an OEM/VR dev tool rather than a consumer part; the latest rumours have it as a prosumer-type product. Regardless of when it releases, I doubt it would be considered highly marketable to high-end consumer graphics buyers. If the card is 12 TFLOPS of single precision (which equates to a 732MHz clock speed), it wouldn't seem to stack up well against two Furys or Fury Xs in CrossFire, and I doubt overclocking inside thermal limits would make up the deficit.
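
For reference, that 732MHz figure checks out (a quick sanity check; Fiji's shader count is public, and each ALU does 2 FLOPs per clock via FMA):

```python
# Dual Fiji: 2 x 4096 ALUs, each doing 2 FLOPs (one FMA) per clock.
alus = 2 * 4096
target_flops = 12e12  # 12 TFLOPS single precision
clock_mhz = target_flops / (alus * 2) / 1e6
print(f"{clock_mhz:.0f} MHz")  # ~732 MHz
```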
 