NVidia Ada Speculation, Rumours and Discussion

But see my price increase post - https://forum.beyond3d.com/threads/nvidia-ada-speculation-rumours-and-discussion.62474/post-2268121

If prices were primarily driven by cost increases, why is the distribution not anywhere close to evenly spread? Did the costs for sub-600mm² class GPUs increase on the order of 1.5x more than for the 600mm² class GPUs? What would be the underlying reasons for that?
I never claimed that cost increases were the only factor. I was responding to a poster who thinks the 4080 12GB should be priced like a 3060 Ti!

The BOM provides a price floor, and above that floor prices are free to move with market conditions (e.g. the market being flooded with 3000 series cards).
 
You're ignoring that:

1.) 4N is substantially more expensive than 8N per mm^2 (likely more than double)
2.) GDDR6X is more expensive than GDDR6
3.) Power delivery requirements have gone up, which increases board costs

Given the rapidly increasing cost of new nodes, we are likely to see transistor count increases stall in each price bracket, with the main boosts coming from clock speed/architecture.
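To put very rough numbers on those three factors, here's a back-of-the-envelope sketch. Every input below is a placeholder assumption (actual silicon, memory and board costs aren't public); the shape of the calculation is the point, not the printed totals.

```python
# A minimal BOM sketch for the three factors above. Every input is a placeholder
# assumption (real silicon, memory and board costs are not public).

def bom_estimate(die_mm2: float, silicon_per_mm2: float,
                 mem_gb: int, mem_per_gb: float, board_cost: float) -> float:
    """Die cost + memory cost + board/power-delivery cost, in dollars."""
    return die_mm2 * silicon_per_mm2 + mem_gb * mem_per_gb + board_cost

# Hypothetical previous-gen card: big die on cheaper 8N, plain GDDR6, lighter VRM.
old_gen = bom_estimate(die_mm2=628, silicon_per_mm2=0.08,
                       mem_gb=12, mem_per_gb=8.0, board_cost=60)

# Hypothetical Ada-class card: smaller die, 4N assumed ~2.2x per mm^2
# ("more than double"), GDDR6X at a premium, beefier power delivery.
new_gen = bom_estimate(die_mm2=295, silicon_per_mm2=0.08 * 2.2,
                       mem_gb=12, mem_per_gb=12.0, board_cost=90)

print(f"previous-gen BOM guess: ${old_gen:.0f}")
print(f"Ada-class BOM guess:    ${new_gen:.0f}")
```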
This is true and would increase GPU costs by a small amount, but it does not justify the amount Nvidia is charging.
A price increase of $100-150 due to higher costs would be reasonable, but $500 is just pure greed.

A more realistic reason would be warehouses full of old cards and Nvidia's "master plan" to sell them off -> raise 40-series prices. Make the 80-tier cards as unappealing as possible so that customers either buy a 4090 or clear out the stock of older cards.

It doesn't. You get the same prices which you've been getting for the past ~10 years. 4080/12=3080/12, 4080/16=3080Ti, 4090~3090/Ti.
Why are you taking the 3080 Ti as the reference? The 3080 Ti has performance close to the 3090, while the 4080/16 is way slower than the 4090. Sure, the price is the same, but again the performance relative to the top GPU of the series is not.
 
This is true and would increase GPU costs by a small amount, but it does not justify the amount Nvidia is charging.
A price increase of $100-150 due to higher costs would be reasonable, but $500 is just pure greed.
If Nvidia lowers the price further they risk cannibalizing 3000 series sales and pissing off their partners. Once the stock has cleared, I suspect the price will fall to 3080 levels, which would be roughly in line with a 50% increase in BOM (which the youtuber MLID claims, citing partner sources).
 
It's good then that I said 3090 and not 3090 Ti in my original post.
Yes, but you said the 4080 12GB would be considerably faster than the 3090.
When the difference between the 3090 and 3090 Ti isn't big to begin with (3-8% at TPU), and even NVIDIA's own benchmarks put it below the 3090 Ti, how could it still be considerably faster than the 3090?
 
If Nvidia lowers the price further they risk cannibalizing 3000 series sales and pissing off their partners. Once the stock has cleared, I suspect the price will fall to 3080 levels, which would be roughly in line with a 50% increase in BOM (which the youtuber MLID claims, citing partner sources).
Yes, they would cannibalize their 3000 series cards. And without the false 80-tier name on both of those cards, there would be absolutely no reason to justify their price tags.

And don't forget Intel is currently selling the A770/16 for $350. Sure, they won't make any profit on those cards as they try to enter the market, but I think there are laws that prohibit selling below production cost (though maybe I'm wrong here?).
Sure, prices increased as you said in your earlier post, but:
a 400mm² die on TSMC 6N (7nm class) with 225W and 16GB of GDDR6 shouldn't be too far off in production cost from a 295mm² die on TSMC 4N (5nm class) with 285W and 12GB of GDDR6X.
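As a sanity check on that comparison, here is the standard gross-die-per-wafer approximation with guessed wafer prices (none of these are real quotes, and yield is ignored):

```python
import math

# Gross dies per 300 mm wafer, ignoring yield/defect density and die aspect ratio.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Guessed wafer prices, purely illustrative: "7nm-class" vs "5nm-class" TSMC.
wafer_6n, wafer_4n = 10_000, 17_000

cost_400mm2_6n = wafer_6n / dies_per_wafer(400)   # A770-like die
cost_295mm2_4n = wafer_4n / dies_per_wafer(295)   # AD104-like die

print(f"~400 mm^2 on 6N: ${cost_400mm2_6n:.0f} per gross die")
print(f"~295 mm^2 on 4N: ${cost_295mm2_4n:.0f} per gross die")
```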
 
Read my post.
I did. Quoting again:
You get the same prices which you've been getting for the past ~10 years. 4080/12=3080/12, 4080/16=3080Ti
But this time, after correcting you twice, I no longer call it bullshit, but a straight lie. For whatever reason.

So if you still refuse to explain any eventual misunderstanding, I'm fine with the discussion being done.
Otherwise you could at least link to whatever other post you meant, or just answer my questions directly, to prevent another derailing of the 'enthusiast level discussion'.
 
Yes, they would cannibalize their 3000 series cards. And without the false 80-tier name on both of those cards, there would be absolutely no reason to justify their price tags.

And don't forget Intel is currently selling the A770/16 for $350. Sure, they won't make any profit on those cards as they try to enter the market, but I think there are laws that prohibit selling below production cost (though maybe I'm wrong here?).
Sure, prices increased as you said in your earlier post, but:
a 400mm² die on TSMC 6N (7nm class) with 225W and 16GB of GDDR6 shouldn't be too far off in production cost from a 295mm² die on TSMC 4N (5nm class) with 285W and 12GB of GDDR6X.
You could make the same comparison with the A770 vs the 3060, considering the 3060 is smaller, on a cheaper process, and with less power consumption. Clearly Nvidia are looking for healthy product margins, not the fire-sale margins needed to offload a new GPU with buggy drivers and inconsistent performance. Again, the 4080 is called an *80 likely because that's where it's intended to sit in terms of price. Against a 3080 it's probably going to be ~25% faster, which is small by historical standards, but what we would expect to see generationally as cost per transistor keeps increasing.
 
It's again very telling of the poster when the only pricing comparison that can be made is against the ridiculously priced crypto-era cards that were only made to reap massive profits during the mining craze. Cards that were 10% faster but up to 50% more expensive at MSRP.

Also I don't see the prices falling after Ampere stocks run out. Nvidia aren't really known for dropping prices to IHVs, even if they're not selling.
 
This is true and would increase GPU costs by a small amount, but it does not justify the amount Nvidia is charging.
A price increase of $100-150 due to higher costs would be reasonable, but $500 is just pure greed.
So is the price of a current Tesla Model 3 (MSRP $46,000) justifiable when compared to the price of the same model 3 years ago (MSRP $35,000)?
 
Yes, but you said the 4080 12GB would be considerably faster than the 3090.
When the difference between the 3090 and 3090 Ti isn't big to begin with (3-8% at TPU), and even NVIDIA's own benchmarks put it below the 3090 Ti, how could it still be considerably faster than the 3090?
Easily. These are games without RT and DLSS running in 4K (which is likely to be an issue for the 192-bit AD104).

It's not that hard to see really, and the only reason why you wouldn't is your personal bias.

No, you did not. Because if you had, you wouldn't be asking such stupid questions.
 
Easily. These are games without RT and DLSS running in 4K (which is likely to be an issue for the 192-bit AD104).

It's not that hard to see really, and the only reason why you wouldn't is your personal bias.
REV uses RT even if sparingly.
So far we have nothing indicating it's bandwidth starved; it's just an assumption. And you of all people calling out others' personal biases, really? :rolleyes:
 
Easily. These are games without RT and DLSS running in 4K (which is likely to be an issue for the 192-bit AD104).

It's not that hard to see really, and the only reason why you wouldn't is your personal bias.


No, you did not. Because if you had, you wouldn't be asking such stupid questions.
This is getting tiring, Degustator. I reported you for "stupid questions" (which they are not) and for gaslighting. At present you are contributing zero to the discussion. He has a point, and all you did was compare two generations of cards while claiming 10 years. We didn't have only two generations of cards in 10 years!
 
REV uses RT even if sparingly.
It's using it in a way where it manages to run "fine" on AMD h/w, which means it is likely shading bound even on Ampere, which in turn means that whatever advantages Ada has in RT won't show up there.
So far we have nothing indicating it's bandwidth starved; it's just an assumption.
How can we assume that a 192-bit GPU will be more bandwidth starved than a 384-bit one? I mean, there is no precedent for such issues in 4K on GPUs with a large LLC, right? (sigh)
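For reference, the raw paper numbers from the published bus widths and memory speeds; this is simple arithmetic and deliberately ignores Ada's much larger L2, which is exactly what's being argued about:

```python
# Raw DRAM bandwidth from bus width and data rate. Does not model the much
# larger L2 on AD104, which is meant to compensate for the narrower bus.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "4080 12GB (AD104, 192-bit, 21 Gbps GDDR6X)": (192, 21.0),
    "3090 (GA102, 384-bit, 19.5 Gbps GDDR6X)":    (384, 19.5),
    "3090 Ti (GA102, 384-bit, 21 Gbps GDDR6X)":   (384, 21.0),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")
```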

And you of all people calling out others' personal biases, really?
Really. Because all your posts here are colored by your personal bias, to the point where an obvious lie is preferable to you - like the one where the 4080/12 is somehow slower than a 3090 in all games shown thus far.

This is getting tiring, Degustator. I reported you for "stupid questions" (which they are not) and for gaslighting. At present you are contributing zero to the discussion. He has a point, and all you did was compare two generations of cards while claiming 10 years. We didn't have only two generations of cards in 10 years!
So you didn't read my posts either and decided to report me because you didn't read my posts? Okay.
 
It's using it in a way where it manages to run "fine" on AMD h/w, which means it is likely shading bound even on Ampere, which in turn means that whatever advantages Ada has in RT won't show up there.

How can we assume that a 192-bit GPU will be more bandwidth starved than a 384-bit one? I mean, there is no precedent for such issues in 4K on GPUs with a large LLC, right? (sigh)


Really. Because all your posts here are colored by your personal bias, to the point where an obvious lie is preferable to you - like the one where the 4080/12 is somehow slower than a 3090 in all games shown thus far.


So you didn't read my posts either and decided to report me because you didn't read my posts? Okay.

More gaslighting and evasion. I read your posts and actually liked at least one of them, where you finally cut the bullshit about nothing changing and mentioned that NVIDIA can charge these prices because AMD is not competing on RT and ML. Things did change massively in the last three years; stop throwing sand in people's eyes, it doesn't work. And you know what? It doesn't even need to work - you don't need to shill so hard as long as AMD doesn't get their shit together.
 