Nvidia shows signs in [2023]

Anyway, 4060 reviews are out and it's predictably another shameful, greedy attempt by Nvidia to sell us lower-tier parts at much higher prices through transparent naming exploitation. This is a straight-up budget part being sold as a midrange GPU.
9 years ago nVidia sold a GTX960 card for $199 with a 200mm^2 die, 128bit, 2GB. CPI alone puts this card at $260 in 2023.

Now you get a 156mm^2 5nm die, 128bit and 8GB for $299. There is nothing wrong with the price. It's just general plus domain-specific inflation that makes this card so expensive.
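For anyone who wants to check that CPI figure, the arithmetic is simple; a quick sketch (the index values are approximate US CPI-U annual averages, so the result is ballpark):

```python
# CPI adjustment of the GTX 960's $199 launch price to 2023 dollars.
# Index values are approximate US CPI-U annual averages, not exact figures.
CPI_2014 = 236.7
CPI_2023 = 304.7

msrp = 199.0
adjusted = msrp * CPI_2023 / CPI_2014
print(f"${msrp:.0f} then is about ${adjusted:.0f} in 2023 dollars")  # ~$256
```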
 
9 years ago nVidia sold a GTX960 card for $199 with a 200mm^2 die, 128bit, 2GB. CPI alone puts this card at $260 in 2023.

Now you get a 156mm^2 5nm die, 128bit and 8GB for $299. There is nothing wrong with the price. It's just general plus domain-specific inflation that makes this card so expensive.
Its memory bandwidth and capacity actually make it a bit below average; it's a crappy deal.
 
Oracle is spending billions on NVIDIA GPUs this year.

Oracle Corp (ORCL.N) is spending "billions" of dollars on chips from Nvidia Corp (NVDA.O) as it expands a cloud computing service targeting a new wave of artificial intelligence (AI) companies, Oracle founder and Chairman Larry Ellison said on Wednesday.

 
The AI hype is putting the crypto hype to shame.

"We will buy GPUs from Nvidia, and we're buying billions of dollars of those. We will spend three times that on CPUs from Ampere and AMD. We still spend more money on conventional compute."
 
9 years ago nVidia sold a GTX960 card for $199 with a 200mm^2 die, 128bit, 2GB. CPI alone puts this card at $260 in 2023.

Now you get a 156mm^2 5nm die, 128bit and 8GB for $299. There is nothing wrong with the price. It's just general plus domain-specific inflation that makes this card so expensive.
AD107 is the successor to GA107. NVIDIA is pretending it is the successor to GA106. They've done this across the entire product stack excepting the 4090. But I guess it does look okay if you compare it to a card from 8 years ago.

It's actually way worse when you look at something like the 4070Ti. GA106 is roughly the same size as AD104. The fully enabled GA106 (RTX3060) MSRP'd at $370 (adjusted; it was $330 in 2021). The fully enabled AD104 (RTX4070Ti) MSRP is $800. Even if the BOM for AD104 is 2x GA106's (it probably is, Samsung 8nm vs TSMC 5nm), that doesn't come close to accounting for a >2x increase in MSRP. The cost of the GPU die itself is a fraction of the BOM for the entire GPU. But the 4000 series cards are priced as if everything on the whole board has become >2x more expensive to produce.
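A sketch of that BOM argument with invented component costs (none of these numbers are real, they just encode "the die is a fraction of the BOM"):

```python
# Invented BOM breakdown to illustrate the point: doubling only the die cost
# raises the total board cost far less than 2x. All numbers are made up.
bom = {"die": 60, "memory": 40, "VRM+PCB": 45, "cooler": 30, "assembly/misc": 25}

base_total = sum(bom.values())
bom_2x_die = {**bom, "die": bom["die"] * 2}
new_total = sum(bom_2x_die.values())

print(f"base BOM:   ${base_total}")                    # $200
print(f"2x die BOM: ${new_total}")                     # $260
print(f"increase:   {new_total/base_total - 1:.0%}")   # +30%, nowhere near +100%
```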

And remember, the 4070Ti was originally going to be priced at $900 (y)
 
Or if you’re going to disagree, give detailed arguments with clear assumptions which can be proven or disproven. It doesn’t look like that’s going to happen on this topic from you so…

I’ve honestly lost track of whether we are saying only NVIDIA is asking for ridiculous prices or AMD as well but slightly less so? Because RX 7600 vs RTX 4060 is instructive…

~204mm2 6nm @ $269 for AMD vs ~146mm2 4nm @ $299 for NVIDIA (both 128-bit GDDR6).

If we assume 4nm is 50% more expensive than 6nm (it’s very likely a bit worse than that) then NVIDIA’s die would be very slightly more expensive to manufacture at TSMC. And it is very slightly more expensive at retail…
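To put rough numbers on that, a sketch using the standard dies-per-wafer approximation; the wafer prices below are placeholders chosen only to encode the assumed 1.5x cost ratio, not real TSMC quotes:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Classic approximation: gross dies minus edge loss (ignores scribe lines/defects)."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Placeholder wafer prices; only the 1.5x ratio between them matters here.
N6_WAFER, N4_WAFER = 10_000, 15_000

n33_cost   = N6_WAFER / dies_per_wafer(204)  # RX 7600, ~204mm2 on 6nm
ad107_cost = N4_WAFER / dies_per_wafer(146)  # RTX 4060, ~146mm2 on 4nm

print(f"N33 die:   ~${n33_cost:.0f}")    # ~$33
print(f"AD107 die: ~${ad107_cost:.0f}")  # ~$35
```

Under those assumptions the AD107 die really does come out only very slightly more expensive, mirroring the $30 gap at retail.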

AD106 and above clearly have higher margins for NVIDIA than AD107, but I don’t really see the problem relative to AMD for AD107.

DegustatoR’s argument that die size has nothing to do with cost/price is absurd; it has a clear *indirect* connection to cost via die cost, which must also factor in cost per wafer (and yields, but that’s mostly irrelevant for chips like these with yield recovery). The cost per wafer has more than tripled since 16nm for the same point in the process lifetime… So while NVIDIA’s margins for AD106 are clearly higher than their historical levels, and I personally agree they are being a bit greedy to the detriment of the long-term health of the PC gaming market, it doesn’t really look like a “budget” part to me either given 4nm wafer costs.
 
AD107 is the successor to GA107.
By what metric?
AD107 is 18.9B transistors. This is 1.5B Ts more than in GA104.

DegustatoR’s argument that die size has nothing to do with cost/price is absurd, it has a clear *indirect* connection to cost via die cost, which must also include cost per wafer (and yields but that’s mostly irrelevant for chips like that with yield recovery).
Die size has nothing to do with the pricing of a product which is using that die. The die itself is not the product and you're not buying the die. The price of such a product is set by the market far more than by the cost of die production.
Also, any comparison of die production costs between two completely different processes (and two different foundries to boot) based on die sizes alone is what's absurd here.

To highlight the point about die sizes being mostly irrelevant to final market prices, see the A750's current price. That die is twice the size of the 7600's N33 while using the same N6 process.
 
AD107 is the successor to GA107 (y)
That is inflation for you. It makes everything more expensive and puts it a category up.
A 5nm wafer is at least 3x more expensive than Samsung's 8nm. I bet it is even more. You also have higher R&D costs to develop a 5nm chip. And that is just the domain-specific inflation.
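To see how the wafer multiple and the extra R&D both feed into per-chip cost, a toy model (every number here is invented for illustration):

```python
# Toy per-chip cost: wafer price amortized over dies per wafer, plus one-time
# design/R&D (NRE) cost amortized over lifetime unit volume. All numbers invented.
def chip_cost(wafer_price: float, dies_per_wafer: float,
              nre: float, lifetime_units: float) -> float:
    return wafer_price / dies_per_wafer + nre / lifetime_units

old_node = chip_cost(wafer_price=5_000,  dies_per_wafer=300, nre=150e6, lifetime_units=20e6)
new_node = chip_cost(wafer_price=15_000, dies_per_wafer=400, nre=500e6, lifetime_units=20e6)

print(f"old node: ~${old_node:.2f} per chip")  # ~$24
print(f"new node: ~${new_node:.2f} per chip")  # ~$63
```

Even with more dies per wafer on the new node, a 3x wafer price and a bigger design bill more than double the per-chip cost in this sketch.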

This is one point why DLSS is so great. You get a whole generation's jump (and today even more) from using it.
 
By what metric?
AD107 is 18.9B transistors. This is 1.5B Ts more than in GA104.
Of course the new chip has more transistors. TU106 had way more transistors than GP104. In fact TU106 had nearly as many transistors as GP102. This is the nature of progress.

TU106 - 10.8B
GP104 - 7.2B
GP102 - 11.8B

All that aside, whether the prices are justified or not, they are laughable to the average gamer who can get a PS5 for $400. NVIDIA and AMD have to come up with a better way to improve performance, or whoever wins the gaming GPU market will be king of the ashes.
 
Of course the new chip has more transistors. TU106 had way more transistors than GP104. In fact TU106 had nearly as many transistors as GP102. This is the nature of progress.
That "progress" is completely different now with cost of each transistor not scaling much on new processes.
And if what was said about N3 and N2 will turn out to be correct the density improvement side of the progress will also take a huge dive.
People seem to be unable to understand that the situation with new processes is different than with the old ones.
 
That "progress" is completely different now with cost of each transistor not scaling much on new processes.
And if what was said about N3 and N2 will turn out to be correct the density improvement side of the progress will also take a huge dive.
People seem to be unable to understand that the situation with new processes is different than with the old ones.
It's okay if density progress slows down or stops for a while as long as cost/transistor goes down. With Ada the performance is there, the price is not. Gamers would be thrilled with a $400 4070. Whether this will ever happen when TSMC has a full monopoly on high performance silicon remains to be seen.
 
Cost/transistor hasn't been going down for quite some time now - at least not by as much as people seem to think.
It hasn't been going down at all from what I've read. That needs to change. But I don't know whether it's a technical reality or a consequence of no competition. Making tiny transistors is hard and getting harder, but this was always the case.
 
It hasn't been going down at all from what I've read. That needs to change.
Advanced nodes are only getting more expensive to develop, so the opposite will happen. It is quite possible that at some point cost/transistor will actually start to rise on more advanced nodes in comparison to some older one, leaving only density (maybe) and power (likely) advantages to even using them.

AD107 being more complex than GA104 is very unlikely to cost Nv as little as GA107 did. So in no way is it a successor to that chip - unless we're using the digits in chip codenames to figure that out, but in that case we can also use Moon phases or Tarot cards. The 4060 on AD107 selling for about as much as the cheapest GA104 card makes a lot of sense financially - even if it doesn't make sense to people who are "expecting" each new generation of GPUs to provide +30/+60% perf/price gains.

We are looking at expectations rooted in the good old days of good scaling in both price and density - but those days are gone now. We will be lucky if the top end can still provide such gains for another couple of generations, and then it's either going to become more expensive or there will need to be something new in place of silicon for chip production.
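The arithmetic behind cost/transistor rising is straightforward; a sketch with illustrative (not real foundry) numbers:

```python
import math

# Cost per transistor = wafer price / transistors per wafer. When wafer price
# grows faster than density, the newer node is WORSE per transistor.
WAFER_AREA_MM2 = math.pi * 150**2  # 300mm wafer, ignoring edge loss

def usd_per_billion_transistors(wafer_price: float, density_mtr_per_mm2: float) -> float:
    transistors_per_wafer = density_mtr_per_mm2 * 1e6 * WAFER_AREA_MM2
    return wafer_price / transistors_per_wafer * 1e9

# Hypothetical: the newer node is 1.6x denser but the wafer costs ~1.9x as much.
old = usd_per_billion_transistors(wafer_price=9_000,  density_mtr_per_mm2=60)
new = usd_per_billion_transistors(wafer_price=17_000, density_mtr_per_mm2=96)

print(f"old node: ${old:.2f} per billion transistors")  # ~$2.12
print(f"new node: ${new:.2f} per billion transistors")  # ~$2.51, i.e. higher
```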
 
For all the talk of more expensive transistors being the driver for higher GPU prices, it's curious that NVidia and AMD chose to spend bags of costly transistors in order to narrow the memory interfaces compared to previous generations.
Transistor cost is going up in the sense that wafer price is going up faster than transistor density. But transistors ain't transistors. Cache will be the densest feature of a device; IO the least dense. Thus a transistor that makes up part of an SRAM cell is far cheaper than a transistor that drives a PHY. Replacing a GDDR interface with a bunch of cache will certainly drastically increase the transistor count, but you could well end up with a smaller (and therefore cheaper) die at the end of the day.

Which is also something people should pay attention to when naively comparing transistor counts. X billion on one chip to Y billion on a different architecture and node is not an apples to apples comparison.

Also, obligatory David Kanter: https://www.realworldtech.com/transistor-count-flawed-metric/
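To put very rough numbers on "transistors ain't transistors" (the cell and PHY figures below are order-of-magnitude guesses, not measured values):

```python
# Order-of-magnitude sketch of why SRAM transistors are cheap in area while
# PHY transistors are expensive. All figures are rough assumptions.
BITS_PER_MIB = 2**20 * 8

# 32 MiB of last-level cache: 6 transistors per bit (6T SRAM cell); assume
# ~0.03 um2 per bit including array overhead on a 5nm-class node.
cache_bits = 32 * BITS_PER_MIB
cache_transistors = cache_bits * 6            # ~1.6 billion transistors
cache_area_mm2 = cache_bits * 0.03 / 1e6      # ~8 mm2

# A 64-bit GDDR6 PHY: assume ~10 mm2 of mostly analog/IO area containing
# only on the order of tens of millions of transistors.
phy_transistors = 50e6
phy_area_mm2 = 10.0

print(f"cache: {cache_transistors/1e9:.1f}B transistors in ~{cache_area_mm2:.0f} mm2")
print(f"PHY:   {phy_transistors/1e6:.0f}M transistors in ~{phy_area_mm2:.0f} mm2")
# Trading PHY area for cache balloons the transistor count while potentially
# shrinking the die, which is exactly the point above.
```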
 
Die size has nothing to do with the pricing of a product which is using that die. The die itself is not the product and you're not buying the die. The price of such a product is set by the market far more than by the cost of die production.
This isn't the first time you've gone quite far in arguing for this statement.
But the argument might be over semantics.

When someone says that die size impacts the price, they don't mean it is the most important aspect (although it might even be at times, but that's not under discussion). It's just one of the aspects that plays a role.


Saying that the product's price is set by the market is like saying nothing. That's always the case for (almost) anything, so it just terminates any discussion. It's a correct viewpoint, but it's just a viewpoint, which doesn't allow us to inspect any details.
We could just as well say that any product is always priced by the company that makes it. Also true, but given a price, why is it priced like that?

So, no one is saying we can simply compute prices for products based on die size.
Do further note that often, when multiple variables affect an outcome, referring to one variable's value implies "all other things being equal". Comparisons across processes, generations and vendors might violate this "all other things being equal", but if you feel that is the case, that's a different problem: the comparisons are misleading, not that die size doesn't matter.

Die size obviously is a factor, since it will plainly correlate positively with price. That's it.
If you take any GPU on a competitive process and shrink it in half or double its size, these new variants will have a different price.
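And the die size/cost relationship isn't even linear once edge loss and defects enter the picture (ignoring the yield recovery mentioned earlier); a sketch with a made-up wafer price and defect density:

```python
import math

def dies_per_wafer(area_mm2: float, diameter_mm: float = 300.0) -> float:
    r = diameter_mm / 2
    return math.pi * r**2 / area_mm2 - math.pi * diameter_mm / math.sqrt(2 * area_mm2)

def cost_per_good_die(area_mm2: float, wafer_price: float = 12_000,
                      defects_per_cm2: float = 0.08) -> float:
    """Simple Poisson yield model; wafer price and defect density are placeholders."""
    yield_rate = math.exp(-defects_per_cm2 * area_mm2 / 100)  # area converted to cm2
    return wafer_price / (dies_per_wafer(area_mm2) * yield_rate)

for area in (100, 200, 400):
    print(f"{area}mm2 die: ~${cost_per_good_die(area):.0f} per good die")
# 100 -> ~$20, 200 -> ~$46, 400 -> ~$115: doubling the area more than doubles
# the cost via fewer candidate dies, more edge waste and lower yields.
```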
 
For all the talk of more expensive transistors being the driver for higher GPU prices, it's curious that NVidia and AMD chose to spend bags of costly transistors in order to narrow the memory interfaces compared to previous generations.
What Qesa said. Memory PHYs haven't been scaling that well on new processes for some time now. So if you want a wider bus, you're either burning wafer area on dark silicon while still paying for it, or adding more processing logic, both of which put such a chip a step higher in price than you'd want.

SRAM scaling also seems to be dead btw. Which is why AMD had this strategy of putting the LLC on separate dies made on the cheaper N6, as putting it on N5 wouldn't give much benefit in area while costing more to produce. Next gen this will hopefully be offset by GDDR7, but beyond that...

When someone says that die size impacts the price, they don't mean it is the most important aspect (although it might even be at times, but that's not under discussion). It's just one of the aspects that plays a role.
It is an aspect, but you as well as others are missing the point - die size is an aspect only when we're looking at the same production process in the same time period, and likely the same architecture. So it matters when comparing AD107 to AD106, for example, but it doesn't when you try to compare AD107 to GA107.
 
It is an aspect, but you as well as others are missing the point - die size is an aspect only when we're looking at the same production process in the same time period, and likely the same architecture. So it matters when comparing AD107 to AD106, for example, but it doesn't when you try to compare AD107 to GA107.
No. It always is a variable. You just have to make the case in any given comparison that it is of lesser importance than whatever you allege to be more relevant. Because it might be, or it might not be.
 