NVidia Ada Speculation, Rumours and Discussion

It's officially a 4080; whether it performs like one, we will see, because TF doesn't mean much when considering new architectures and features.

Comparing TF isn't really that far off, because in Ada and other architectures all the new features just scale with it. RT and tensor cores are part of the SMs, so if the SM count and frequency increase, the RT, tensor and compute power (TF) increase by the same factor. At least in the past, performance scaled quite well with the TFLOPs within the same architecture, so what changed this gen?
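As a rough illustration of that scaling argument, here is a minimal sketch of how peak FP32 TFLOPs follow directly from SM count and clock, assuming Ada's 128 FP32 lanes per SM; treat the boost clock figures as approximate announced values:

```python
# Peak FP32 throughput scales linearly with SM count and clock:
#   TFLOPs = SMs * FP32 lanes per SM * 2 (FMA = 2 FLOPs) * clock
ADA_FP32_LANES_PER_SM = 128  # Ada: 128 FP32 lanes per SM

cards = {
    # name: (SMs enabled, approx. boost clock in GHz)
    "RTX 4090":     (128, 2.52),
    "RTX 4080 16G": (76,  2.51),
    "RTX 4080 12G": (60,  2.61),
}

for name, (sms, ghz) in cards.items():
    tflops = sms * ADA_FP32_LANES_PER_SM * 2 * ghz / 1000
    print(f"{name}: {tflops:.1f} TFLOPs FP32")
# -> roughly 82.6 / 48.8 / 40.1 TFLOPs. Since RT and tensor cores
#    sit inside the SMs, their throughput scales by the same
#    SMs * clock factor.
```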

Btw, the performance gap is already visible in Nvidia's launch benchmarks, and I'm not sure why it should magically get better in independent benchmarks. The only way that could happen is if those launch benchmarks were cherry-picked and the 4090 actually delivers less than 75% of the fps/TF of the other Ada cards. Sounds strange, doesn't it?

If it looks like a duck, swims like a duck, and quacks like a duck, but is called a goose then it probably is still a duck.
 
Well, time will tell if NV fucked up. If these Lovelace GPUs still sell like the previous generations, then they made the right choice. Again, it gives AMD the opportunity to come back, right?
 
But for whom did they do the correct thing? Really only the shareholders, because the margins on those cards must be gigantic. I mean, we are talking about a $570-800 price increase compared to similar Ampere cards (by die size, cut chips). For us customers it is just a punch in the face, not to call it a rip-off. If people buy this like crazy, it just tells me they are very bad at math or just plain stupid; sorry, I don't have other words for it.

Yes, let's hope AMD is not following the same trend and is using the opportunity. At least the current sell-off of the old stock sounds promising.
 
Actually, they have already shown the 4070 and 4060 Ti cards; they just put the 4080 sticker on both of them. I mean, just look at the die size and TFLOP difference to the 4090 and compare them to previous gens.
Maybe it makes more sense if you look at the 90 as higher-specced than usual, so the lower cards only appear somewhat downgraded in relation to that, but not really if we compare to the previous gen?
However, I notice that the biggest cards currently seem to have the best price/value ratio in general, also looking at how prices of Ampere/RDNA2 are coming down.
Bigger cards drop faster. In other words: little demand. It's interesting to see how desperately NV tries to change this. They even try to build up a proprietary modding community, driven by machine learning and lacking any inspiration, sigh.
let's hope AMD is not following the same trend and is using the opportunity.
Would be too good to be true.
They always use the wrong strategy at the wrong time.
AMD has offered more compute power at lower prices before, but it did not help with market share. The problem was that devs did not utilize the high compute power and focused on just rasterization. This could not saturate GCN, so benchmarks looked bad.
Now they are tired of selling at lower prices, although at the current time I'm sure it would help with market share. The problem is that devs again just focus on raster and now RT, and we'll see how AMD improves on the latter.
But maybe it's exactly this problem, plus Intel's aggressive pricing, which makes them change their mind after all.
If people buy this like crazy, it just tells me they are very bad at math or just plain stupid
They are used to the idea that NV alone gave us everything, and that all innovation and progress came from them.
Now they stand loyal to their captain, sip champagne, congratulate each other on the great achievements of RT and ML, and stand proud on the biggest ship ever made. Unsinkable, with golden ornaments and mighty horsepower below.
Of course this comes at some cost. But obviously it's worth it, they are sure.
So they pay to stay aboard, and ignore the little iceberg in front.

I know, I know. It's just that under such circumstances, technical discussion is not really possible, because even math becomes wrong if it disagrees with convictions so deeply held. ; )
 
That's exactly what I've got to do if I'm trying to understand what the best-value product I can get today is, though. Comparing back to the launch price of the 3090 might make sense if we want to understand the relative value of the products at launch, but then we have to factor in that we should expect better value at the same price point two years down the line. How much better, though, isn't something we can unambiguously measure. The 4090 definitely offers more performance/$ in its launch window than the 3090 did, but after two years that's expected.
I don't disagree that it's probably the best period to grab great deals on previous-gen cards, but that doesn't invalidate my point. It's very temporary and driven by market conditions, so it's not fair to make the comparison as you do, as if it were the norm when it's absolutely not. It's like citing an exception to a rule: it cannot be used to validate the rule itself.
That being said, I'm with the vast majority who think the 4090 is the best value for money of the first wave of Ada cards. The worst is the 4080 12GB, which carries an unfortunate name purely for marketing reasons. IMHO the best range would be:
4090 AD102, as is, at $1499
4080 AD103 16GB, as is, at $999
4070 AD104 12GB (aka the current 4080 12GB), at $799
This would be less controversial and I believe it would generate more sales. But I know Nvidia has a master plan below the 4080, and this inflated range brings a lot of options to counter AMD...
 

The 4080 might be a rip-off, but that has nothing to do with the die size. Die size isn't a feature of the card that customers care about. The $200 increase over the 3080 10GB is the main problem.
 
But for whom did they do the correct thing? Actually only the share holders, because the margins on those cards have to be gigantic. I mean we are talking about 570-800$ price increase compared to similar Ampere cards (die size, cut chips). For us customers it is just a punch in the face, not to call it rip-off. If people buy this like crazy, it just tells me they are very bad at math or just plain stupid, sorry don't have other words for it.

Yes, let's hope AMD is not following the same trend and is using the opportunity. At least the current sale of the old stock sounds promising.
You're ignoring that:

1.) 4N is substantially more expensive than 8N per mm^2 (likely more than double)
2.) GDDR6X is more expensive than GDDR6
3.) Power delivery requirements have gone up, which increases board costs

Given the rapidly increasing cost of new nodes, we are likely to see transistor increases stall in each price bracket, with the main boosts coming from clock speed/architecture.
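To put rough numbers on point 1, here is a small sketch of cost per transistor, using the die sizes and transistor counts from the spec table further down. The 2.2x wafer-cost-per-mm² figure is purely an assumption for illustration (consistent with "likely more than double"), not a known number:

```python
# Rough cost-per-transistor comparison, TSMC 4N vs Samsung 8N.
# ASSUMPTION: 4N wafer cost per mm^2 is ~2.2x that of 8N;
# the exact figure is not public.
COST_PER_MM2_RATIO = 2.2

# Density from public die specs: transistors / die area
ad102_density = 76.3e9 / 608   # AD102 (4N), transistors per mm^2
ga102_density = 28.3e9 / 628   # GA102 (8N), transistors per mm^2

density_ratio = ad102_density / ga102_density
cost_per_transistor_ratio = COST_PER_MM2_RATIO / density_ratio

print(f"density ratio: {density_ratio:.2f}x")                    # ~2.79x
print(f"cost per transistor: {cost_per_transistor_ratio:.2f}x")  # ~0.79x
# Even at >2x the cost per mm^2, cost per *transistor* can still fall
# slightly -- per-die cost rises mainly because the new dies pack
# roughly 2.6-3x the transistors.
```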
 
For the broader discussion of why pricing is the way it is, I think we have to come to terms with the fact that it is in large part (and likely primarily) due to market conditions rather than any underlying unit-cost factors. I'm using the 3080 Ti (instead of the 3090) and the 3070 Ti, since this normalizes the VRAM increase proportion (it doubles), and both prices were set in 2021 to better reflect the changed market conditions:

| Product | GPU Die | MSRP | Die Size | Transistor Count | SMs Enabled | Memory | Memory Speed |
|---|---|---|---|---|---|---|---|
| 4090 | AD102 | $1600 | 608 mm² | 76.3b | 128/144 | 24 GB | 21 Gbps |
| 3080 Ti | GA102 | $1200 | 628 mm² | 28.3b | 80/84 | 12 GB | 19 Gbps |
| 4080 16G | AD103 | $1200 | 379 mm² | 45.9b | 76/84 | 16 GB | 23 Gbps |
| 3070 Ti | GA104 | $600 | 392 mm² | 17.4b | 48/48 | 8 GB | 19 Gbps |
| 4080 12G | AD104 | $900 | 295 mm² | 35.8b | 60/60 | 12 GB | 21 Gbps |
| 3060 | GA106 | $330 | 276 mm² | 12.0b | 28/30 | 12 GB | 15 Gbps |

| Comparison | Die Size Difference | Transistor Count Increase | Price Increase |
|---|---|---|---|
| 4090 / 3080 Ti | 0.97x | 2.70x | 1.33x |
| 4080 16G / 3070 Ti | 0.97x | 2.64x | 2.00x |
| 4080 12G / 3060 | 1.07x | 2.98x | 2.72x |
Note - In fairness, the hardware gap is larger for the 4080 12G/AD104 vs 3060/GA106 pairing than for the others, and the $330 price (3060) was set a bit earlier.
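To make the arithmetic explicit, a quick sketch that reproduces the ratio table above from the spec table; it also computes the proportional 4080 16G price referenced below (all inputs come straight from the tables):

```python
# Reproduce the ratio table: (MSRP, die size mm^2, transistors) per card.
specs = {
    "4090":     (1600, 608, 76.3e9),
    "3080 Ti":  (1200, 628, 28.3e9),
    "4080 16G": (1200, 379, 45.9e9),
    "3070 Ti":  (600,  392, 17.4e9),
    "4080 12G": (900,  295, 35.8e9),
    "3060":     (330,  276, 12.0e9),
}

pairs = [("4090", "3080 Ti"), ("4080 16G", "3070 Ti"), ("4080 12G", "3060")]
for new, old in pairs:
    (p1, d1, t1), (p0, d0, t0) = specs[new], specs[old]
    print(f"{new}/{old}: die {d1/d0:.2f}x, "
          f"transistors {t1/t0:.2f}x, price {p1/p0:.2f}x")

# If the 4080 16G followed the 4090's 1.33x price scaling over its
# predecessor-class part, it would sit at:
print(600 * (1600 / 1200))  # -> 800.0, i.e. $800
```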

The easiest comparison is to just look at 4090/3080 Ti vs 4080 16G/3070 Ti. If price increases were proportional, the 4080 16G would be $800. Now, in fairness, the 4090 is cut down more relative to the 3080 Ti, and the 4080 16G does have faster memory. But can that realistically account for a cost difference large enough to require a $400 price differential? The 4080 16G is also cut down, while the 3070 Ti was not at all.

If prices being up were primarily due to actual costs, some things simply wouldn't make sense, as I'm not aware of any inherent reason why underlying production costs would increase so much more for smaller-die GPUs than for the largest-die one, to the extent illustrated.

This also shows the issue with the pricing of the current stack from the consumer perspective: effectively, prices increased more the further down the stack you go.


The 4090 being the best value is to some extent the problem with the pricing structure. We're conditioned to expect that the top-end halo part typically offers the worst value, and that value improves as you go further down the stack. The 4080 12G, the third product from the top, carrying the lowest value runs counter to those pricing expectations (and this doesn't just apply to GPUs).

The 4090's price in itself isn't even an issue. For people who were truly buying $1500 GPUs last generation, it actually serves as the biggest value jump for the high end since Maxwell (980 Ti) to Pascal (1080 Ti); you can even argue it's better, as Ada has a larger feature-set advantage over Ampere than Pascal had over Maxwell. But using your scale, there would still be issues with the pricing lower down. AD104 would need to be $750 just to have linear scaling, and would really need to be $600 to keep with past pricing expectations of offering better value relative to where it sits in the stack.
 
Die sizes, marketing names, memory bus widths, transistor counts - these are things which have absolutely nothing to do with a product's market positioning.

Performance and features relative to the competition (including your own previously released products) - this is what determines the price of a product.
 
Don't forget inflation. Nobody is picking up GPUs directly from the factory, so there are increased shipping and storage costs, too. A retailer may have 10% higher costs than last year...
 

I feel it's a fair discussion, as Nvidia's marketing message in part pushes the idea that the prices are due to higher underlying manufacturing costs, and those factors are what constitute that underlying cost. We also see people bringing up the underlying-cost issue in these discussions.

If we want to just look at the end-user functional performance difference, a large problem is still that the way the stack is priced flies contrary to all past precedent/expectations in terms of the value scale.

Using TechPowerUp's numbers (since they have easy averages), the 3070 Ti offers 1.5x the fps/$ value of the 3080 Ti, and the 3060 offers 1.6x. For this to hold true for Ada, the 4080 12G would need 0.92x the performance of the 4090 to hit 1.6x the value. The 4080 16G would mathematically need to be 1.13x faster o_O than the 4090 to deliver 1.5x the value.
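The arithmetic behind those figures, as a small sketch. It uses round launch MSRPs, so the 12G case lands at 0.90x rather than the post's 0.92x, which presumably comes from TechPowerUp's exact averages:

```python
# value = perf / price. For a cheaper card to offer value_mult times
# the 4090's fps/$, its required relative performance is:
#   perf_rel = value_mult * (price / price_4090)
PRICE_4090 = 1600

def required_perf(price: int, value_mult: float) -> float:
    return value_mult * price / PRICE_4090

print(required_perf(900, 1.6))   # 4080 12G at 1.6x value -> 0.90x perf
print(required_perf(1200, 1.5))  # 4080 16G at 1.5x value -> 1.125x perf
# i.e. the 4080 16G would have to be ~1.13x FASTER than the 4090
# for Ampere's value scaling to hold.
```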
 

But see my price increase post - https://forum.beyond3d.com/threads/nvidia-ada-speculation-rumours-and-discussion.62474/post-2268121

If prices were primarily driven by cost-factor increases, why is the distribution nowhere close to evenly spread? Did costs for sub-600mm²-class GPUs increase on the order of 1.5x more than for 600mm²-class GPUs? What would be the underlying reason for that?
 
If we want to just look at the end-user functional performance difference, a large problem is still that the way the stack is priced flies contrary to all past precedent/expectations in terms of the value scale.
It doesn't. You get the same prices which you've been getting for the past ~10 years: 4080 12GB = 3080 12GB, 4080 16GB = 3080 Ti, 4090 ~ 3090/Ti.

The rest is people inventing reasons out of thin air for why some product should cost a different sum of money (usually less) when there are no such reasons and never were - the only reason a product costs what it does is the competitive landscape. If AMD or Intel launch cards with the same perf/features at a significantly lower price, then Nv will have to react and lower prices. If not, then their current proposition is the best anyone can do in this market.
 
How many cards are sold under and over $1000? Storage and shipping costs, for example, are identical per unit, but retailers still have to make their money on the cheaper products, so those prices get increased first.
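A tiny sketch of that effect: a flat per-unit logistics cost (the $25 figure here is purely hypothetical) eats a much bigger share of a cheap card's price, so it forces a larger percentage increase at the low end:

```python
# ASSUMPTION: a hypothetical flat $25 per-unit shipping/storage cost.
FLAT_COST = 25

for price in (330, 600, 1600):
    share = FLAT_COST / price * 100
    print(f"${price} card: flat cost is {share:.1f}% of the price")
# -> 7.6% of a $330 card, 4.2% of a $600 card, 1.6% of a $1600 card:
#    an identical absolute cost weighs far more on cheaper SKUs.
```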
 
It doesn't. You get the same prices which you've been getting for the past ~10 years: 4080 12GB = 3080 12GB, 4080 16GB = 3080 Ti
Again this BS?

Here is the price increase of the x80 again, for the past 10+ years (generational steps computed below):
GTX 480 (2010): $500
GTX 680 (2012): $500 <- 10 years ago
GTX 980 (2014): $550
GTX 1080 (2016): $600
RTX 2080 (2018): $700
RTX 3080 (2020): $700
RTX 4080 (2022): $1200 <- we are here
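For reference, the generational steps from that list as a quick calculation (prices taken straight from the list above):

```python
# x80-class launch MSRPs from the list above.
history = [("GTX 480", 2010, 500), ("GTX 680", 2012, 500),
           ("GTX 980", 2014, 550), ("GTX 1080", 2016, 600),
           ("RTX 2080", 2018, 700), ("RTX 3080", 2020, 700),
           ("RTX 4080", 2022, 1200)]

# Percent change of each card over its predecessor.
for (_, _, prev), (name, year, price) in zip(history, history[1:]):
    print(f"{name} ({year}): {(price / prev - 1) * 100:+.0f}%")
# Every step from 2010-2020 was between +0% and +17%;
# the 2022 step is +71%, i.e. 4080 != 3080 pricing.
```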

I have removed my sarcastic comments, so maybe this time you can read more than two lines before you have to close your eyes to the truth you refuse to see.
It shows: 4080 != 3080.

So what do you mean by this claim of 'same prices'?
 