Nvidia Pascal Speculation Thread

Hi guys, first post here. Not really the kind of post I'd have liked to be my first, but I couldn't help it any more after reading some of the conclusions being drawn here.

So to the point: I'd say it's a 480M. Clearly. I mean it's almost component-for-component identical after all...

http://images.dailytech.com/nimage/14752_large_GeForce_GTX_480M_MXM_3qtr.jpg

I hope you see where I'm really going with this. IMO it's quite stupid to draw the conclusions that are being drawn here based solely on the resemblance of the modules.
 
I still fail to see where the real trouble is, or what justifies this kind of certainty.

Going by the specs of the PX2, it is beyond obvious that the Pascal chips being used are neither GP104 nor GP100. 8 TFLOPS combined for 2 Tegras and 2 Pascal chips means that each of those GPUs must have 3.5 TFLOPS or less, assuming the same performance from those Tegras as the Tegra X1. It's obviously one of the "smaller" chips, unless people are expecting a severe regression. Is it completely unthinkable that GP106, or whatever, is built on 28nm and uses GDDR5(X), because that happens to be more economical ATM?
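To make that arithmetic explicit, here's a back-of-the-envelope sketch; the ~0.5 FP32 TFLOPS figure for Tegra X1 is my assumption, not a published PX2 number:

```python
# Back-of-the-envelope: FP32 throughput left for each discrete Pascal
# chip on Drive PX2, assuming the two Tegras perform like Tegra X1
# (~0.5 TFLOPS FP32 -- an assumption, not a published PX2 figure).
TOTAL_TFLOPS = 8.0           # Nvidia's combined number for the PX2
TEGRA_TFLOPS = 0.5           # assumed per-SoC FP32 throughput
NUM_TEGRAS, NUM_GPUS = 2, 2

per_gpu = (TOTAL_TFLOPS - NUM_TEGRAS * TEGRA_TFLOPS) / NUM_GPUS
print(f"each discrete GPU: <= {per_gpu:.1f} TFLOPS")   # -> 3.5
```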
 
So to the point: I'd say it's a 480M. Clearly. I mean it's almost component-for-component identical after all...

http://images.dailytech.com/nimage/14752_large_GeForce_GTX_480M_MXM_3qtr.jpg

It's completely different. The 980M MXM has 6 voltage regulators on top; that one has 4. The 980M has the top voltage regulators right next to the RAM chips; the 480M has a space with solder points in between.
While the chip and VRAM placement is similar - probably because that's part of the MXM III/B similarities - the smaller ICs are obviously different.
 
Sorry, I can't edit apparently.

I'm a complete noob, so excuse my ignorance if this is also completely unthinkable, but is it unthinkable for Nvidia to create the prototypes on a readily available node (again, 28nm or 20nm, or even vanilla 16FF, though size vs. performance wouldn't fit there either) so they can conduct QA and certification on the boards/platform earlier, which as far as I've heard is a very lengthy process for this kind of market? As long as everything is pin-compatible and has the same power consumption, wouldn't it work? Of course the performance of these hypothetical 28nm chips would be lower, but does that matter at all?

I am not saying that they couldn't just be 980Ms, but as I said, I fail to see any real and definitive proof that they are in fact 980Ms, and I definitely fail to see how the current existence or non-existence of a clearly-neither-GP100-nor-GP104 Pascal chip at this point in time spells trouble for GP100 or GP104, which afaik taped out a long time ago. At the same point in the timeframe when Maxwell 2.0 launched, GM206 wouldn't have been ready either.
 
It's completely different. The 980M MXM has 6 voltage regulators on top; that one has 4. The 980M has the top voltage regulators right next to the RAM chips; the 480M has a space with solder points in between.
While the chip and VRAM placement is similar - probably because that's part of the MXM III/B similarities - the smaller ICs are obviously different.

You missed the point. The 480M illustrates that the design has been practically identical going as far back as the 480M. Probably earlier; it's not like I've been looking at every single card out there.

You can look up any card in between and you'll see almost identical results. The 880M, for example, has 6 VRMs.

And why would a Pascal module with almost exactly the same TDP as the 980M have a different power delivery system? If it has the same 256-bit interface to boot, why would the board be any different at all?
 
I'm a complete noob, so excuse my ignorance if this is also completely unthinkable, but is it unthinkable for Nvidia to create the prototypes on a readily available node (again, 28nm or 20nm, or even vanilla 16FF, though size vs. performance wouldn't fit there either) so they can conduct QA and certification on the boards/platform earlier, which as far as I've heard is a very lengthy process for this kind of market?
If you're talking about nVidia producing a 28nm Pascal in order to test it beforehand, then yes, it's unthinkable. The chip's design is inherently tied to its process, and even an "optical shrink," available across half-nodes (not the case between 28nm and FinFET), is a costly process.

You can look up any card in between and you'll see almost identical results. The 880M, for example, has 6 VRMs.
The 880M MXM is even more different, since it has 3 inductors whereas the fake Pascal 980M MXM has 4.
MXM modules for different chips will look different, period.

And why would a Pascal module with almost exactly the same TDP as the 980M have a different power delivery system? If it has the same 256-bit interface to boot, why would the board be any different at all?
Because the TDP by itself doesn't determine the several voltages and current levels required to feed the chip, which are bound to be different. Also, making both chips pin-compatible would create a huge hassle, because nVidia's engineers would have to design the new chip with dimension restrictions that would surely get in the way of making it area- and power-efficient.
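As a toy illustration of that point, with purely hypothetical numbers: the same TDP at two different core voltages demands noticeably different current, and current is what the VRM has to be sized for.

```python
# Toy illustration: same TDP, different core voltage -> different
# current, and current is what sizes the VRM. Numbers hypothetical.
tdp_watts = 100.0

for vcore in (1.05, 0.85):       # e.g. two different process/voltage points
    amps = tdp_watts / vcore     # I = P / V
    print(f"{vcore:.2f} V rail -> {amps:.0f} A")
# 1.05 V -> ~95 A, 0.85 V -> ~118 A: ~24% more current at the same TDP.
```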
 
If you're talking about nVidia producing a 28nm Pascal in order to test it beforehand, then yes, it's unthinkable. The chip's design is inherently tied to its process, and even an "optical shrink," available across half-nodes (not the case between 28nm and FinFET), is a costly process.

I clearly talked about producing a 28nm Pascal in order to test the platform, to provide system integrators something to work with even if performance is not final. Please explain how that would be unthinkable.

The 880M MXM is even more different, since it has 3 inductors whereas the fake Pascal 980M MXM has 4.

Still missing the point.

MXM modules for different chips will look different, period.

And they do look different. They are not identical.

Because the TDP by itself doesn't determine the several voltages and current levels required to feed the chip, which are bound to be different. Also, making both chips pin-compatible would create a huge hassle, because nVidia's engineers would have to design the new chip with dimension restrictions that would surely get in the way of making it area- and power-efficient.

The size of the package looks nearly identical to me whether it's a GF100, GF114, GK104 or GM204 die. Please elaborate. And VRMs are quite flexible as far as I can tell from OCing...
 
No, nVidia can't make a Pascal using 28nm, because the sheer amount of time and money needed across all the development stages required to make such a chip would render the final design completely non-competitive.

The size of the package looks nearly identical to me whether it's a GF100, GF114, GK104 or GM204 die. Please elaborate.
All I can further elaborate is you need an appointment with an ophthalmologist.

And VRMs are quite flexible as far as I can tell from OCing...

Lol.. not sure if trolling..
 
No, nVidia can't make a Pascal using 28nm, because the sheer amount of time and money needed across all the development stages required to make such a chip would render the final design completely non-competitive.

I've heard this before, and it's a very weak argument going by what I've read several times on these forums about how much the physical implementation really costs (i.e. it's always blown out of proportion). It clearly depends on how much there is to gain in a given market, and this one is big, and surely being first is essential. As I said, I'm a noob, but even I know that Drive PX 1 is only now starting to be used, because QA and certification take that long, whereas Tegra 3 is still what's being used in many places. Surely having your platform (especially the software side) certified/implemented a year earlier than you would otherwise be able to would pay dividends later.

And of course, there's still the possibility that the GP106/107 the PX2 is gonna use (no, GP100/GP104 are not going to offer less FP throughput than GM204...) could be made on 28nm, while the smaller GP107/108 takes most of the 16FF+ volume alongside GP104 and GP100. In fact, who's to say they are not going to surprise us with a "750 Ti 2.0" this time around? A Pascal 1.0 sort of thing before the actual Pascal 2.0, just like they did with Maxwell. Most people on B3D and elsewhere agree that anything below GP104 is surely going to use GDDR5(X) anyway. Why not 28nm then? A "disproportionately" big 28nm chip to fill a low-end/midrange spot, if we can call $250 midrange: i.e. a 256-bit, 400mm2 chip that offers a bit less performance than GM204 does in order to lower the TDP slightly. Which is exactly what Drive PX 2's specs suggest.

All I can further elaborate is you need an appointment with an ophthalmologist.

No, I don't. I messed up saying GF114, but the rest are nearly identical in size. I'm talking about the package that's soldered to the board, not the die, nor the MXM module. GK104 and GM204 are definitely identical despite the GM204 die being larger. What the connections inside the package need to do to route the die to the ball grid (or whatever) has no relation to the connections on the bottom of the package that are soldered to the PCB. A 28nm 400mm2 Pascal and a 16FF+ 220-240mm2 Pascal with identical specs would/could have exactly the same connections to the PCB, regardless of how the pins are arranged. Obviously it would be the 16FF+ chip that determines the pin count and layout, not the other way around. No one said the chip would be pin-compatible with GM204 or something crazy, if that's what you understood.

Lol.. not sure if trolling..

Maybe you are. Every card that I've owned has been able to sustain a very wide range of voltages and currents. Not to mention what GPU Boost itself does, etc. I still require an explanation of how 2 chips with potentially identical TDPs, and operating voltages surely not very different from the ones seen in current cards, would require very different power delivery systems. To the point that they would look entirely different in a photo, that is.
 
Presumably a 2 GPU board has a particular mix of bandwidth and compute that's more suitable than a single, much larger GPU that uses HBM2?

Or, the much larger GPU with HBM2 isn't going to happen for a very long time?
 
First off, this board is supposed to have both Parker SoCs and Pascal GPUs. I don't think Parker is even close to being ready, so it's most likely a mock-up... And they should all be on the same package if using HBM.

We don't know what type of Pascal GPUs are on this thing either... at least I haven't read or heard much about it, but then again I haven't been paying much attention to Drive PX2.

If the SoCs and GPUs aren't on the same package, then HBM will only be for the GPUs, and they will need separate RAM for the SoCs. That kinda defeats the purpose of the die-size savings from the memory bus, and the power savings. Also, bandwidth will be a major concern when inter-chip communication is necessary.
 
Hi guys, first post here. Not really the kind of post I'd have liked to be my first, but I couldn't help it any more after reading some of the conclusions being drawn here.

So to the point: I'd say it's a 480M. Clearly. I mean it's almost component-for-component identical after all...

http://images.dailytech.com/nimage/14752_large_GeForce_GTX_480M_MXM_3qtr.jpg

No, it isn't almost identical to the one shown at CES. The 980M MXM, OTOH, is.

And I'm not missing the point, you don't have a bloody point.

I hope you see where I'm really going with this.

Oh, surely. It's certainly not unthinkable that you're shilling for nvidia?

IMO it's quite stupid to draw the conclusions that are being drawn here based solely on the resemblance of the modules.

What's quite stupid is hearing a complete noob spinning yarns about how a Pascal chip supposedly ended up on 28nm and has been in production for almost a year.

Pascal is 16nm, those are not Pascal chips.

NVIDIA is not disclosing anything about the discrete Pascal GPUs at this time beyond the architecture and that, like the new Tegra, they’re built on TSMC’s 16nm FinFET process.

http://www.anandtech.com/show/9903/nvidia-announces-drive-px-2-pascal-power-for-selfdriving-cars

There's speculation and then there's la-la land.
 
Also, making both chips pin-compatible would create a huge hassle, because nVidia's engineers would have to design the new chip with dimension restrictions that would surely get in the way of making it area- and power-efficient.
Making chips pin-compatible is not that difficult, and the payoff of being able to reuse most of an already existing PCB design is considerable.

In terms of the location of the pins, you have 2 levels of indirection: the RDL layer, on which the silicon is mounted to form the final die, and the substrate, on which the die is mounted to form the package. Given that a GP104 with GDDR5X would have the same IO pins as GM204, give or take a few, making those pin-compatible wouldn't be very hard.
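As a rough sketch of those two indirection levels, with all pad/ball names invented for illustration:

```python
# Rough sketch of the two indirection levels: die pad -> RDL bump
# -> substrate ball. All names are invented for illustration.
gm204_rdl = {"MEM_A0": "bump_17", "PEX_TX0": "bump_42"}   # level 1 (per die)
gp10x_rdl = {"MEM_A0": "bump_03", "PEX_TX0": "bump_55"}

gm204_sub = {"bump_17": "ball_C4", "bump_42": "ball_J9"}  # level 2 (per package)
gp10x_sub = {"bump_03": "ball_C4", "bump_55": "ball_J9"}

def ball_for(signal, rdl, sub):
    """Follow a signal through RDL and substrate to its BGA ball."""
    return sub[rdl[signal]]

# Two different dies end up with an identical ball-out, so the same
# PCB footprint works for both:
assert ball_for("MEM_A0", gm204_rdl, gm204_sub) == ball_for("MEM_A0", gp10x_rdl, gp10x_sub)
```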

The substrate is also what would adjust for the differences in die size. The size of the substrate is determined in large part by the ball pitch: a larger pitch allows for cheaper PCBs.

And in terms of voltage supplies, it wouldn't be very complicated either. Even if the actual voltages are different, it'd just be a matter of swapping regulators or even just reprogramming them to a different voltage. After all, that's already what happens in DVFS control loops.
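For the reprogramming part, the idea would be something like this sketch, assuming a PMBus-compliant regulator reachable over SMBus; the bus number, device address and target voltage here are made up:

```python
# Sketch: reprogramming a PMBus-compliant regulator to a new output
# voltage over SMBus. Bus number, device address and target voltage
# are hypothetical; real boards also involve limits and sequencing.
from smbus2 import SMBus

VOUT_MODE, VOUT_COMMAND = 0x20, 0x21   # standard PMBus command codes
ADDR = 0x40                            # hypothetical regulator address

with SMBus(1) as bus:
    exp = bus.read_byte_data(ADDR, VOUT_MODE) & 0x1F
    if exp > 15:                       # 5-bit two's-complement exponent
        exp -= 32
    target_v = 0.95                    # hypothetical new core voltage
    # LINEAR16 encoding: voltage = mantissa * 2**exponent
    bus.write_word_data(ADDR, VOUT_COMMAND, round(target_v / 2**exp))
```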
 
No, it isn't almost identical to the one shown at CES. The 980M MXM, OTOH, is.

And I'm not missing the point, you don't have a bloody point.

No. You're clearly missing the point. It's weird to pretend that 2 cards with exactly the same TDP and bus width necessarily need to look different, when a completely different chip from 5+ years ago, with twice the bus width and a much higher TDP, had literally only 2 differences. Any 2 modules from the last few years have mostly 1-2 differences, and hardly ever the same one: everything is always in the same spot, and the difference is one or 2 components missing that later appear on the next generation, where maybe a different component is missing instead. So far you've (all) failed to tell me why 2 cards that could well have exactly the same requirements necessarily need to look different. If the hypothetical Pascal needs the same VRMs, same capacitors, inductors, etc., why would they arrange them differently? And why wouldn't it have exactly the same requirements? There aren't that many combinations, actually.

I already said they're likely 980Ms, but contrary to what some people here pretend, including yourself, the resemblance, or the number of VRMs, or what have you, is hardly proof of anything. Just like "oh well, the dies appear to be roughly the same size" isn't proof, especially when closer inspection reveals the die appears to be 5% smaller. That's at least something that has me thinking.

Oh, surely. It's certainly not unthinkable that you're shilling for nvidia?

No, why? Are you shilling for someone else yourself and don't want competition? And what would Nvidia gain by sending a complete noob to a place like this to be ripped apart? One saying that maybe GP106 is 28nm, or any of the other things I've said, which you think are stupid? No. I decided to post because it frustrates me how fast people are jumping to conclusions while giving absolutely no reasons other than "it's impossible". And really, Charlie is spot on? Give me a break. He couldn't even get the TFLOPS of the PX2 right.

What's quite stupid is hearing a complete noob spinning yarns about how a Pascal chip supposedly ended up on 28nm and has been in production for almost a year.

I said a lot of things that fall within the realm of what's possible, regardless of how crazy or unlikely they are. I never made any claim regarding probability, and certainly never claimed it had been in production for almost a year. It doesn't even have to be 28nm for it to have been manufactured in the 3rd week. It could be 20nm or even vanilla 16FF; that would have been possible one year ago, and it would be far more useful in regard to the lessons learned about the process.

If the issue is, for example (and I just thought about this for the first time), that only chips in full production and meant for release get that piece of code from TSMC, and prototypes/test samples don't, it would have been far easier for you guys to just tell me that, instead of resorting to the equivalent of "shut up" with sarcasm and name-calling. I already mentioned I am a noob, so I would appreciate being given info instead of having everything I said dismissed without any explanation while being called a complete noob. That's not cool, and people outside the industry don't know these things.


Probably not Pascal, I already said that. That doesn't make any of the so-called evidence any more definitive tho, and that's what I've been calling out.

As for the quote, they are talking about the final product, which I already said would be 16FF+. But again, that doesn't mean they couldn't make 28/20nm prototypes if they really felt it would help them get to market earlier with Drive PX, since I truly believe they'd potentially have more to gain than they'd lose from making a 28nm Pascal chip for testing purposes. They had a lot less to gain from making the Tegra 4i or the Shield family of products, and they still went with those.
 
Making chips pin-compatible is not that difficult, and the payoff of being able to reuse most of an already existing PCB design is considerable.

In terms of the location of the pins, you have 2 levels of indirection: the RDL layer, on which the silicon is mounted to form the final die, and the substrate, on which the die is mounted to form the package. Given that a GP104 with GDDR5X would have the same IO pins as GM204, give or take a few, making those pin-compatible wouldn't be very hard.

The substrate is also what would adjust for the differences in die size. The size of the substrate is determined in large part by the ball pitch: a larger pitch allows for cheaper PCBs.

And in terms of voltage supplies, it wouldn't be very complicated either. Even if the actual voltages are different, it'd just be a matter of swapping regulators or even just reprogramming them to a different voltage. After all, that's already what happens in DVFS control loops.

That's exactly what I meant. I knew I had the concept right, but, being a total noob, I just couldn't even begin to explain it.

Now please tell me how wrong I am about everything else. If you can actually explain why, that would be cool, instead of these guys just telling me "it's impossible, noob" when I "knew" that wasn't always the case. Thanks in advance, even if you don't reply.
 
If the dates on the chips are to be believed, it can't really be 16FF+. That node only went into risk production in mid-November 2014, and here NVIDIA are with an A1 (so a respin down the road) in early January 2015, on a chip that shares an awful lot of similarities with another they were already making on a different node? Err.
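A rough timeline check, assuming the marking is a standard ISO year/week date code (an assumption on my part, not confirmed):

```python
# Timeline sanity check, assuming the chip marking is an ISO
# year/week date code (a common convention, not confirmed here).
from datetime import date

risk_start = date(2014, 11, 15)               # mid-November 2014
chip_date = date.fromisocalendar(2015, 3, 1)  # Monday of 2015, week 3

print(chip_date, (chip_date - risk_start).days, "days after risk start")
# -> 2015-01-12, 58 days: barely two months from risk production to a
#    dated A1 part, which is the implausibility being pointed out.
```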
 