Nvidia Pascal Speculation Thread

Ran out of time to add this earlier, but from a quick search you can see even Intel using it: https://software.intel.com/en-us/ar...mory-systems-based-on-intel-xeon-processor-e5
But going back to my previous post, here is an NVIDIA blog post that gives a bit more of a summary: http://devblogs.nvidia.com/parallel...net-large-scale-visual-recognition-challenge/
Look down to the section starting with "GPUs: a Winning Trend"; it talks about how AlexNet won the 2012 ImageNet Large Scale Visual Recognition Challenge and the design used - easier to read than the paper itself.
Anyway, NVIDIA's recent context was just about images processed per second, measured the way they have been doing for years with AlexNet/ImageNet.
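For the curious, "images per second" in these comparisons is just throughput over a timed run of the network's forward pass. A minimal sketch of the measurement below, using a toy matrix multiply as a stand-in for the real AlexNet (this is not NVIDIA's actual benchmark harness):

```python
import time
import numpy as np

# Toy stand-in for a convnet forward pass: one big matrix multiply per batch.
# Real AlexNet/ImageNet numbers come from a framework like Caffe, not this.
weights = np.random.rand(4096, 4096).astype(np.float32)

def forward(batch):
    return batch @ weights

batch_size = 128
batch = np.random.rand(batch_size, 4096).astype(np.float32)

n_batches = 50
start = time.perf_counter()
for _ in range(n_batches):
    forward(batch)
elapsed = time.perf_counter() - start

print(f"{n_batches * batch_size / elapsed:.0f} images/second (toy workload)")
```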

Cheers
 
Here is the full slide presentation at CES 2016 relating to Drive PX 2: http://www.slideshare.net/NVIDIA/nvidia-ces-2016-press-conference
TBH, for now I do not see a reason for anyone to get hung up on either TFLOPS or DL TOPS; the closest "benchmark" to traditional image/object/environment recognition they provide is the AlexNet/ImageNet figure of how many images it can process per second.
Appreciate this is a POV thing and just my personal feeling.

Cheers
 
That die area is odd. First, the site claims that the GP100 will have a die size almost the same as a Titan X, which is all but certainly false given that this is a brand new node, and last year Apple had to dual-source just to get enough ~100mm^2 dies, let alone something nigh on six times that size. But a 350mm^2 die size would mean one of several things:

A. Nvidia has a separate GPU uncomfortably close to its top-end GPU in terms of die size (less than a third smaller). This could fit, though, if GP100 is mostly for compute (not unlikely) while GP104 is mostly for gaming.
B. Yields are somehow way better than anticipated and GP100 is indeed 525mm^2+ in size. This would lend credence to the idea that GP100 is 1:3 for double precision and that it can reach 12 teraflops SP.
C. Yields are worse than anticipated and a 350mm^2 die size is the max they feel comfortable going to. This would fit with GP100 being at 1:2 for double precision, and just being something like a highly overclocked Titan X (say 1.4GHz) with extra silicon for compute purposes.
D. The yields are bad, and Nvidia is going to eat the costs and/or charge way too much for a 500mm^2+ GP100.
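To put rough numbers on options B and C, here's the usual back-of-the-envelope arithmetic. The defect density, core count and clock below are my own assumptions, not anything Nvidia has published:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order approximation of gross (unyielded) dies per wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, d0_per_cm2):
    """Simple Poisson defect model: yield falls exponentially with die area."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

def sp_tflops(cores, clock_ghz):
    """Peak single precision: 2 FLOPs (one FMA) per core per clock."""
    return cores * 2 * clock_ghz / 1000.0

# D0 = 0.2 defects/cm^2 is a pure guess for an immature FinFET process.
for area in (350, 525, 600):
    print(f"{area} mm^2: {dies_per_wafer(area)} gross dies/wafer, "
          f"~{poisson_yield(area, 0.2):.0%} modeled yield")

# Option C: a Titan X-like core count (3072) at 1.4 GHz, with 1:2 or 1:3 DP.
sp = sp_tflops(3072, 1.4)
print(f"SP {sp:.1f} TFLOPS -> DP {sp / 2:.1f} at 1:2, {sp / 3:.1f} at 1:3")
```

The point being that die area alone swings both the per-wafer cost and the plausible FLOPS targets, which is why these options diverge so much.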
 
Oh man.. if only someone could search for a picture of a 980M MXM and compare it with the MXM cards in the PX2... Then we could prove Charlie wrong, right?

[Photos comparing a GTX 980M MXM module with the MXM modules on the Drive PX 2]



Oops..
 
Dude, it's Charlie. It might be a fake board, but everything else there I would take with a large, large heap of salt. Just keep this in mind: myself and another here at B3D KNOW nV has working silicon of Pascal.

OOPS......

Production ready, I'm not sure, but they have them. So Charlie can suck his left thumb till it's blue.
 
Dude, it's Charlie. It might be a fake board, but everything else there I would take with a large, large heap of salt. Just keep this in mind: myself and another here at B3D KNOW nV has working silicon of Pascal.

OOPS......

Production ready, I'm not sure, but they have them. So Charlie can suck his left thumb till it's blue.

..."If Nvidia actually used a GTX 980 MXM board for their mockup, it would explain why the Drive PX 2 looks as though it only uses GDDR5. While Nvidia could still be tapping that memory standard for its next-generation driving platform, this kind of specialized automotive system is going to be anything but cheap. We’ve said before that we expect GDDR5 and HBM to split the upcoming generation, but we expect that split in consumer hardware with relatively low amounts of GPU memory (2-4GB) and small memory busses. The Drive PX 2 platform sports four Denver CPU cores, eight Cortex-A57 CPUs, 8 TFLOPS worth of single-precision floating point, and a total power consumption of 250W. Nvidia has already said that they’ll be water-cooling the module in electric vehicles and offering a radiator block for conventional cars. Any way you slice it, this is no tiny embedded product serving as a digital entertainment front-end.

Then again, it is still possible that the compute-heavy workloads the Drive PX 2 will perform don’t require HBM. It seems unlikely, but it’s possible.

Wood screws 2.0?
These issues with Pascal and the Drive PX 2 echo the Fermi “wood screw” event of 2009. Back then, Jen-Hsun held up a Fermi board that was nothing but a mock-up, proclaimed the chip was in full production, and would launch before the end of the year. In reality, NV was having major problems with GF100 and the GPU only launched in late March, 2010."


http://www.extremetech.com/gaming/2...otype-allegedly-powered-by-maxwell-not-pascal

This is not Charlie, or I'm drunk (maybe both)
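As a quick back-of-the-envelope check on the quoted specs, by the way: the arithmetic below uses the stated 8 TFLOPS / 250 W figures plus the commonly cited ~3.2 TFLOPS peak SP for a single 980M (my numbers, not anything from the article):

```python
# Quoted Drive PX 2 figures: 8 TFLOPS single precision, 250 W module power.
px2_tflops, px2_watts = 8.0, 250.0
print(f"PX 2: {px2_tflops * 1000 / px2_watts:.0f} GFLOPS/W")  # ~32 GFLOPS/W

# Two GM204-based 980M modules (~3.2 TFLOPS each) land in the same ballpark
# before counting the two Tegra SoCs, which fits the suspicion that the
# demo board carried 980Ms.
print(f"2x 980M: ~{2 * 3.2:.1f} TFLOPS from the discrete GPUs alone")
```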
 
It's not Charlie, and it's not news; it's old news. It was spotted the same day they showed it, or the day after, that it's using 2x 980M or GM204 Quadro MXM modules
 
Of course... but it seems that for @Razor1 they are new enough, and from Charlie... Maybe he is from a distant planet (that would explain the delay in receiving the data) :)
 
Oh man.. if only someone could search for a picture of a 980M MXM and compare it with the MXM cards in the PX2... Then we could prove Charlie wrong, right?

[Photo: die comparison between the 980M MXM module and the PX 2 MXM module]


From what I can measure, the die size is virtually identical to the one on the 980M module.

The GPU on the PX2 MXM module reads A1 revision, manufactured in the third week of 2015.
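For anyone who wants to repeat the measurement: with a known reference length in the photo (an MXM 3.0 module is 82 mm wide), a die edge measured in pixels converts straight to millimetres. A minimal sketch; the pixel counts below are made-up placeholders, not my actual measurements:

```python
# Estimate a die's physical size from a photo via a known reference length.
MODULE_WIDTH_MM = 82.0  # MXM 3.0 spec width

def die_edge_mm(die_px, module_px):
    """Scale a pixel measurement by the module's known physical width."""
    return die_px * MODULE_WIDTH_MM / module_px

# Hypothetical pixel measurements taken off the two photos:
print(f"980M die edge: {die_edge_mm(118, 490):.1f} mm")
print(f"PX 2 die edge: {die_edge_mm(117, 487):.1f} mm")
```

An edge around 19-20 mm squares out to roughly 390 mm^2, which is right in GM204 territory (398 mm^2).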
 
That's still not enough for Razor1 because he and "another here at B3D KNOW nV has working silicon of Pascal."

He's still thinking it's all made up by Charlie. Charlie made up nVidia's presentation at CES using a computer-generated Jen-Hsun Huang and a made-up PX2 board. Charlie made up all those photos of a 980M MXM module you can find using Google and uploaded them himself. Charlie made up the comparison between GPUs. Charlie made up Anandtech, who confirmed those were GM204 cards. Charlie made up Extremetech.

In the end, everything on the Internet that points to nVidia using GM204 cards and calling them Pascal chips from January 2015 is just a very elaborate hoax by Charlie.
 
Then this guy Charlie is very skillful. I mean, the 3D model of Jen-Hsun Huang was very well rendered. Almost fooled me into thinking he was real, I'll give you that. How many polygons? How many layers? DX12 API? I bet this Charlie used an NVIDIA Pascal GPU (the Big Daddy GP100 most likely) to achieve such an amazing result...
 
He's still thinking it's all made up by Charlie. Charlie made up nVidia's presentation at CES using a computer-generated Jen-Hsun Huang and a made-up PX2 board. Charlie made up all those photos of a 980M MXM module you can find using Google and uploaded them himself. Charlie made up ...

I know we live in an era where we wouldn't want to consider an opposing opinion in the middle of a good rant, but when I read:
Dude, it's Charlie. It might be a fake board, but everything else there I would take with a large, large heap of salt

...I did not come away thinking that Charlie had concocted Extremetech, or that anyone else thought so. Perhaps a well-placed preposition would have helped, but I came away thinking that Charlie's line that nVidia clearly had Pascal chips 18 months ago (a satirical way of claiming that Jen-Hsun needs to be jailed by the SEC, without putting himself at risk of libel) was another tiresome tirade from a source known for them. That others are starting to pick up the old news ("those aren't the chips you're looking for") as evidence that nVidia is in trouble also seems a little weak. There's been an awful lot of kimono teasing from both sides, and I'm not buying any of it until I see specs, tests, prices, and available stock. YMMV, this is the speculation thread, etc. etc.
 
Why all the salt? I mean, we all know Charlie, right? Normally he's got some truth at the core of his articles, decorated with lots and lots of negativity towards companies he does not like. So, while it's pretty much certain that the person responsible for handing Jen-Hsun the wrong board* to show will be fired soon enough, or has been already, does it really lend greater credibility to the assumption that Nvidia is in trouble (again)?

*still pondering how best to display an enormously large sarcasm tag
 