Nvidia Pascal Announcement

Are we to believe there are invisible and incorporeal chips residing there?
The matter at hand was the PCB. The presence of DRAM zones means there are tracks running to the substrate. Those tracks exist whether or not the sites are populated, and they increase the PCB's complexity.

The GTX 1060's PCB has a couple of things that make it look like the GPU was rushed to market, like the extra RAM sites and the pigtailed 6-pin connector.
 
The hollow DRAM zones are connected to tracks that lead to the substrate.
Didn't I just write that in the previous post?

What's the point of butting into an argument without reading what came before?
 
And now you're suggesting those DRAM zones are not connected to anything?
Huh? I'm not suggesting anything. My statement stands completely by itself, and it's not even a suggestion.

I've seen people bring up the argument that the price of a PCB is higher when you have more *unpopulated* component sites, and that's just not true. Not for the 1060 PCB, not for any PCB. Ask any vendor for a PCB production quote: as long as you meet the DRC rules, they don't care how much copper is left on the board in the end. They all start with a solid clad sheet anyway and etch away what's not needed. You would have known this if you'd ever etched your own PCBs in your youth.
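A minimal sketch of the point as a toy quote model (every constant here is an assumption for illustration, not any real fab's price list): area, layer count, and quantity drive the price, while copper coverage never enters it.

```python
# Toy PCB quote model; all rates and fees are invented for illustration.
def pcb_quote(area_cm2: float, layers: int, quantity: int,
              copper_coverage: float) -> float:
    """Hypothetical quote: note that copper_coverage never affects the price."""
    setup_fee = 150.0                     # one-time tooling charge (assumed)
    per_board = 0.02 * area_cm2 * layers  # assumed rate per cm^2 per layer
    # copper_coverage is deliberately ignored: fabs start from fully clad
    # laminate and etch copper away, so leftover copper costs nothing extra.
    return setup_fee + per_board * quantity

# Densely routed vs. sparsely routed board of the same size: identical quote.
print(pcb_quote(area_cm2=180, layers=6, quantity=1000, copper_coverage=0.9))
print(pcb_quote(area_cm2=180, layers=6, quantity=1000, copper_coverage=0.4))
```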

As for why those empty DRAM sites are there, I've already made suggestions (you probably missed them while digging up delusional arguments about why a 1060 should be more expensive than a 480): either the GP106 die has a 256-bit bus that Nvidia decided not to populate when it discovered that AMD is still unable to extract sufficient performance out of a given amount of BW, or the GP106 package is ball-compatible with GP104.

Both options are painful for AMD, since either one leaves Nvidia the option of shipping a faster refresh SKU with almost no engineering work, while the 480 is already stretched to the limit.
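For scale, the bandwidth math behind the 256-bit scenario is straightforward: peak GDDR5 bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch (the 192-bit, 8 Gbps numbers match the shipping GTX 1060; the 256-bit row is the hypothetical fully populated bus):

```python
def gddr5_bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gt_s

print(gddr5_bandwidth_gb_s(192, 8.0))  # GTX 1060 as shipped: 192 GB/s
print(gddr5_bandwidth_gb_s(256, 8.0))  # hypothetical refresh: 256 GB/s
```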

Yes, go on, roll those unhinged eyes baby!
 
As for why those empty DRAM sites are there, I've already made suggestions (you probably missed them while digging up delusional arguments about why a 1060 should be more expensive than a 480): either the GP106 die has a 256-bit bus that Nvidia decided not to populate when it discovered that AMD is still unable to extract sufficient performance out of a given amount of BW, or the GP106 package is ball-compatible with GP104.

You seem confused.
Never once did I suggest the 1060 is more expensive to make than the RX 480.
I just called you out on the bullshit of claiming the RX 480 is way more expensive to make than the GTX 1060 without a shred of the proof needed to back such a claim.


The big difference is that you're pulling assumptions out of your ass, whereas I'm not.
 
You seem confused.
Never once did I suggest the 1060 is more expensive to make than the RX 480.
It seemed that you were suggesting this to be the case, which must be where the confusion is coming from. I suggest we let this rest, since nobody seems to know the true BOM for either card.
 
It seemed that you were suggesting this to be the case, which must be where the confusion is coming from. I suggest we let this rest, since nobody seems to know the true BOM for either card.
Yes. Other than very concrete information about the three most expensive components of the BOM, we're completely in the dark. For all we know, AMD could have scored a killer deal on 0201-size resistors.
 
Eh, yes, the 480 is clearly more expensive to make than the 1060. Since this is established but we don't know the hard numbers, I don't know what there is to debate. We could take educated guesses (personally I reckon the 480 is significantly more expensive to make), but that's meaningless really.
 
Eh, yes, the 480 is clearly more expensive to make than the 1060. Since this is established but we don't know the hard numbers, I don't know what there is to debate. We could take educated guesses (personally I reckon the 480 is significantly more expensive to make), but that's meaningless really.
Really can't read that much into it because even the BOM won't tell you development costs for Pascal in general. It should be cheaper (physically smaller), but there could be other scaling factors in play.
 
Really can't read that much into it because even the BOM won't tell you development costs for Pascal in general. It should be cheaper (physically smaller), but there could be other scaling factors in play.
Yes, so there's little reason to go into it.
 
Really can't read that much into it because even the BOM won't tell you development costs for Pascal in general. It should be cheaper (physically smaller), but there could be other scaling factors in play.
R&D costs are irrelevant in this kind of discussion. They are treated as a separate line item on the income statement and are not part of gross margin.

With Nvidia expected to release a top-to-bottom product line all using the same architecture, and with Nvidia having much higher volume, one could make the argument that its average R&D per chip is much, much lower than AMD's (and you'd be right), but if Wall Street doesn't see this kind of thinking as useful, I don't think we should either. In the end, what matters is how much money Nvidia makes per extra board it produces. R&D plays no role in that.
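To make the accounting point concrete, here's a toy income statement (every number invented for illustration): R&D sits below the gross-margin line, so it moves operating margin but not gross margin, and it never touches the profit on one extra board.

```python
# Toy income statement; all figures are invented for illustration.
revenue = 1000.0  # sales for the period
cogs    = 600.0   # cost of goods sold: dies, boards, assembly (variable)
rnd     = 250.0   # R&D, expensed as a separate operating-expense line (fixed)

gross_margin     = (revenue - cogs) / revenue        # 0.40 -- R&D absent
operating_margin = (revenue - cogs - rnd) / revenue  # 0.15 -- R&D lands here

# The profit on one more board depends only on its price and unit COGS;
# it is the same no matter how large the R&D line is.
unit_price, unit_cogs = 250.0, 150.0
profit_per_extra_board = unit_price - unit_cogs      # 100.0, independent of rnd

print(gross_margin, operating_margin, profit_per_extra_board)
```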
 
With the billions spent on Pascal's clocking (according to the great leader, so a fair bit of salt is needed), I think AMD still has a competitive edge when it comes to R&D.

The previous-gen discussion, with all its talk of die sizes and PCB complexity, totally ignored the R&D needed for the new architecture.
 
R&D costs are irrelevant in this kind of discussion. They are treated as a separate line item on the income statement and are not part of gross margin.

With Nvidia expected to release a top-to-bottom product line all using the same architecture, and with Nvidia having much higher volume, one could make the argument that its average R&D per chip is much, much lower than AMD's (and you'd be right), but if Wall Street doesn't see this kind of thinking as useful, I don't think we should either. In the end, what matters is how much money Nvidia makes per extra board it produces. R&D plays no role in that.

Yep, R&D is considered a fixed cost and is never put into margin calculations, as it can't be quantified in those terms since it's split over the entire generation of GPUs.

The actual manufacturing of the GPUs and boards, and putting them all together as a complete package, is considered a variable cost, as it is bound to the amount of product produced.
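The same split as a tiny sketch (numbers assumed for illustration): the fixed R&D gets spread thinner as volume grows, while the variable cost per unit stays flat.

```python
# Assumed, illustrative figures: fixed R&D amortizes over volume,
# variable manufacturing cost scales with it.
fixed_rnd = 500_000_000.0  # one-time development cost for the generation
variable_per_unit = 120.0  # die + board + assembly per card

for units in (1_000_000, 5_000_000, 20_000_000):
    avg_cost = (fixed_rnd + variable_per_unit * units) / units
    print(units, avg_cost)  # average cost per card falls; the 120 never moves
```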
 
With the billions spent on Pascal's clocking(according to the great leader, so a fair bit of salt needed) I think AMD still have a competitive edge when it comes to R&D.
Nvidia spent $346M on R&D in the last quarter. AMD spent $242M.

If we generously assume that only 50% of AMD's R&D is GPU-related, and pessimistically that 100% of Nvidia's is, you get $346M vs. $121M, or a ratio of roughly 3 to 1.

Nvidia's GPU revenue is most certainly more than 3x larger than AMD's GPU revenue, and Nvidia's GPU market share is about 3x that of AMD as well. (As for net profits... Well...)

There's no question that Nvidia's ROI is much higher than AMD's.

Is all of this a useful way of looking at things? I don't think so. Not when you're looking at the incremental cost of producing one more GPU and the price for which you can sell it.
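The arithmetic above, spelled out (the 50%/100% GPU-share splits are this post's own assumptions):

```python
nvidia_gpu_rnd = 346e6        # quarterly R&D, pessimistically 100% GPU-related
amd_gpu_rnd    = 242e6 * 0.5  # quarterly R&D, generously only 50% GPU-related

print(amd_gpu_rnd)                   # 121e6
print(nvidia_gpu_rnd / amd_gpu_rnd)  # ~2.86, i.e. roughly 3 to 1
```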

The previous-gen discussion, with all its talk of die sizes and PCB complexity, totally ignored the R&D needed for the new architecture.
For good reasons.
 
Nvidia spent $346M on R&D in the last quarter. AMD spent $242M.

If we generously assume that only 50% of AMD's R&D is GPU-related, and pessimistically that 100% of Nvidia's is, you get $346M vs. $121M, or a ratio of roughly 3 to 1.

Nvidia's GPU revenue is most certainly more than 3x larger than AMD's GPU revenue, and Nvidia's GPU market share is about 3x that of AMD as well. (As for net profits... Well...)

There's no question that Nvidia's ROI is much higher than AMD's.

Quite the certainty you have in your assumptions, while the word from the horse's mouth is quite different.

Is all of this a useful way of looking at things? I don't think so. Not when you're looking at the incremental cost of producing one more GPU and the price for which you can sell it.


For good reasons.

Good reasons in your mind, perhaps. GM204 vs. Hawaii/Grenada had people claiming doom on AMD's head because of higher PCB costs, while conveniently forgetting the R&D for the new architecture, plus the fact that AMD swept the consoles, which in turn forces Nvidia to spend more on GameWorks to keep up.
 