Nvidia Pascal Announcement

You're quite certain of your assumption, while the word from the horse's mouth is quite different.

You can get those numbers from their financial reports. What, you think nV is another Enron, just making the numbers up to keep their investors happy? Maybe AMD should do the same, so they can "look" good.

Hmmm, yeah, we have the numbers for the past few quarters; it's pretty easy to deduce what is going on.


Good reasons in your mind, perhaps. GM204 vs. Hawaii/Grenada had people claiming doom on AMD's head because of higher PCB costs, while conveniently forgetting about the R&D for the new architecture, plus the fact that AMD swept the consoles, which in turn requires Nvidia to spend more on GameWorks to keep up.

R&D, as I stated, is a recurring fixed cost, as is GameWorks. AMD, even though it has a smaller budget, has R&D too, and those costs are not factored into the end margins. If you want to see how the accounting is done, I suggest this book:

Corporate Controller's Handbook of Financial Management

Now, even with nV spending more on both of those, they are still making money hand over fist compared to AMD. So those costs don't change the end result; they just make AMD's position even more tenuous.

Let's look at this deeper, shall we? What about nV's heavy push into neural nets? With the hundreds of millions they have spent on that, I wouldn't be surprised if that budget alone is close to AMD's entire R&D budget...

Now, if we learned anything from the past, nV's neural-net program is going to be an extension of their CUDA program: they will get the majority of the colleges involved and then close the gate on the rest of the industry. I can't see why they would spend hundreds of millions unless they are doing this, and it's not a far stretch for them, since their CUDA program already has a strong foothold at the academic level.

Also, AMD has been trying to leverage their console wins the same way (well, not at an academic level, but you get the picture), and this is why they have pushed up the budget for their marketing and game-dev programs.
 
You're quite certain of your assumption, while the word from the horse's mouth is quite different.
Well, the easiest way to reduce the problem is to look at profits. Or, in the case of AMD, at the persistent lack thereof.
No need for assumptions, it's all there.

Good reasons in your mind, perhaps. GM204 vs. Hawaii/Grenada had people claiming doom on AMD's head because of higher PCB costs, while conveniently forgetting about the R&D for the new architecture, plus the fact that AMD swept the consoles, which in turn requires Nvidia to spend more on GameWorks to keep up.
Nobody forgets about R&D. It's plainly stated in the financial reports.

The disparity in R&D between AMD and Nvidia is fairly recent, due to heavy cuts on the AMD side. Back when GM204 was released, Nvidia's quarterly R&D spending was $337M and AMD's was $278M. Not a whole lot less. But it's interesting to see what that R&D has created. Or, better, in the case of AMD, what it has not. What has AMD actually been doing with all that R&D spending? Because they have almost nothing to show for it.

Since September 2014, they released Tonga, Fiji, and Polaris. They're all flawed products.

Tonga is a perf/mm² disaster. Fiji couldn't exploit the benefits of kickass new expensive technology. And Polaris is just as flawed in power and perf/mm² as before. That's a problem if the only way to make people buy your stuff is by screaming "we are cheaper": a cornerstone of their Polaris campaign.

In the same timeframe, Nvidia has released GM204, GM206, GM200, GP100, GP104, and GP106. And except for GP100 (which has no competition), they all kick AMD's butt in terms of perf/cost, market share/volume, and the price they command.

Yes, developing Maxwell and Pascal must have cost a pretty penny. It also resulted in a historic peak in terms of market share, revenue, and profits. And you're calling them out for doing that? What are you going to claim next? That losing money is great because you don't have to pay taxes?
 
According to his figures, the wafer price for the 16-nm process used in Pascal rose 70 percent from 2015 to 2016, from $4,557.25 to $7,779.22. This enormous increase is not offset by the fact that the GP106, at 200 mm², is slightly smaller than the GM206 at 228 mm². The Pascal chip is still more expensive than Maxwell, relatively speaking: $33.30 per die for Pascal compared to $19.90 for Maxwell. According to the analyst, the price increase of the base material is the main reason for the high prices.
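The per-die arithmetic behind those figures can be sanity-checked with the standard dies-per-wafer estimate. A rough sketch only: the 300 mm wafer size is my assumption, and yield, edge exclusion, and packaging are ignored, which is why the raw numbers land below the analyst's per-chip figures while showing the same relative jump.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic estimate: usable wafer area divided by die area,
    minus a correction term for partial dies lost at the wafer rim."""
    r = wafer_diameter_mm / 2
    return math.floor(
        math.pi * r**2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

def cost_per_die(wafer_price, die_area_mm2, wafer_diameter_mm=300):
    # Gross cost at 100% yield; real per-chip cost is higher
    return wafer_price / dies_per_wafer(wafer_diameter_mm, die_area_mm2)

# Wafer prices are the analyst's estimates quoted in the article
maxwell = cost_per_die(4557.25, 228)  # GM206, 28 nm
pascal = cost_per_die(7779.22, 200)   # GP106, 16 nm
print(f"GM206 gross die cost: ${maxwell:.2f}")  # ~$17
print(f"GP106 gross die cost: ${pascal:.2f}")   # ~$25
```

The smaller Pascal die still comes out roughly 50% more expensive per die before yield, consistent with the analyst's $19.90 vs $33.30 after yield and packaging are added in.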
....
The question is ultimately whether the analyst is right, and how AMD will solve the problem, because it becomes increasingly difficult to offer attractively priced GPU packages, especially given that AMD rarely sees black ink and lacks the money to invest in research. Nvidia, on the other hand, has announced to its investors efforts to further improve margins.
Quite possibly why we see FEs setting an artificial price ceiling for the partners.
http://www.pcgameshardware.de/GTX-10606G-OC-Grafikkarte-264696/News/Preis-teurer-Amalyst-1202092/
 
Quite possibly why we see FEs setting an artificial price ceiling for the partners.
http://www.pcgameshardware.de/GTX-10606G-OC-Grafikkarte-264696/News/Preis-teurer-Amalyst-1202092/
Those are pretty scary numbers if true. To achieve price parity, the chip should be 130-140 mm², which pretty much allows for a redo of the GM206 along with a 128-bit bus (NB: I get that the architectural improvements as well as higher clock speeds allow for a higher selling price, yet the figure is not rosy). So I could see such a chip, backed up by GDDR5X, inflicting quite some pain on AMD's line-up. They may pray that Nvidia is not too aggressive with their use of GDDR5X, as both a GP107 and a GP108 could prove unstoppable foes in the lower segment of the market.
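That 130-140 mm² figure checks out under a simple area-ratio argument: if cost per wafer rose ~70%, the die area that costs the same shrinks by the inverse ratio. A back-of-the-envelope sketch using the analyst's wafer prices from the linked article, with edge losses and yield ignored:

```python
# Die area on 16 nm that matches the cost of a 228 mm^2 28 nm die,
# assuming per-die cost scales linearly with area
old_wafer, new_wafer = 4557.25, 7779.22  # analyst's 28 nm vs 16 nm wafer prices
parity_area = 228 * old_wafer / new_wafer
print(f"{parity_area:.0f} mm^2")  # ~134 mm^2, inside the quoted 130-140 range
```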
 
That is nothing new. It was laid out by TSMC and others that a new process will only be cheaper if your transistor count remains the same. If the count grows and the chip size in mm² remains about equal, the new process will be more expensive. The figures quoted ranged from 30-80%.
 
The specs of the new Titan X are online from Nvidia.


Also a blog post from Nvidia. Summary:
  • 11 TFLOPS FP32
  • 44 TOPS INT8 (new deep learning inferencing instruction)
  • 12B transistors
  • 3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
  • Up to 60% faster performance than previous TITAN X
  • High performance engineering for maximum overclocking
  • 12 GB of GDDR5X memory (480 GB/s)

3584 cores is the same as P100 Tesla.
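Those core counts and clocks line up with the quoted throughput numbers. A quick sanity check, assuming the usual 2 FLOPs per CUDA core per clock for an FP32 FMA, and 4× that rate for INT8 via the new dot-product instruction (a dp4a-style op retires 8 integer ops per core per clock):

```python
def fp32_tflops(cuda_cores, clock_ghz):
    # One FP32 FMA (2 FLOPs) per CUDA core per clock
    return cuda_cores * 2 * clock_ghz / 1000

new = fp32_tflops(3584, 1.53)  # new Titan X
old = fp32_tflops(3072, 1.08)  # previous Titan X (GM200)
print(f"{new:.1f} vs {old:.1f} TFLOPS, {new / old - 1:.0%} raw FP32 gain")
print(f"INT8: {new * 4:.0f} TOPS")  # 4x FP32 rate matches the quoted 44 TOPS
```

The ~65% raw gain from cores × clock alone is consistent with the "up to 60% faster" claim above.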

No clues yet whether it's architecture sm_60 like GP100 or sm_61 like GP104.

The core count is the same as P100. This could be a new GP100 variant with GDDR5X instead of HBM2.
Or it could be GP102, far earlier than expected.
 
Looks like the original Titan all over again: the core count, the VRAM (usually it would have been 24 GB, or at least 16), and the name, lel. We're probably getting another one in less than a year; that's why this is so early, and the Ti card might even be better than this one. Nvidia is deep into milking :D
 
It was announced at an AI meetup at Stanford. Here's the stream: http://www.ustream.tv/channel/fWbQyaEMfbh

Ok i see, thanks.

titanxpascal1.jpg
 
Not surprised by GP102; everything is going as planned for Nvidia.
I predict the insane INT8 perf alone will keep this card out of stock for a looooong time. Deep learning people will kill to get it...
 
I don't know what to make of this; it looks to be a preemptive strike... or they might have a good idea of what Vega is bringing to the table (if they are basing predictions on P10).

Quite out of the blue, which is unusual by itself. I just don't see a stimulus for announcing something like this, well, outside of Intel. So for now I'm going to think it's more a focus on neural nets and what Knights Landing can do, as opposed to what it does for gaming.
 
So the worst case is that NV has the whole line-up out before AMD has Polaris 11. Not good, really not good. Vega had better clearly kill this.
 