Nvidia GeForce RTX 40x0 rumors and speculation

AD104 has 35.8 billion transistors (assuming TPU's numbers are correct). GA102 had 28.3 billion.

Cost per transistor is increasing (or flat at best), and it’s made worse by the fact that Nvidia went from a cheap Samsung process to a very expensive TSMC process. So it’s very likely that it costs Nvidia *more* to make that AD104 chip, which is completely the opposite of historical trends. We’re going to feel that pressure on the consumer side whether we like it or not.
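As a rough sanity check on that, here's a back-of-the-envelope die-cost comparison. The die sizes are TPU's published figures; the wafer prices are rumored ballpark numbers for Samsung 8nm and TSMC 4N, so treat all of this as an illustration rather than real BOM data:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross-dies-per-wafer approximation (ignores defects/yield)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die sizes are TPU's published figures; wafer prices are rumored ballpark
# numbers for Samsung 8nm and TSMC 4N, not confirmed.
chips = {
    "GA102 (Samsung 8nm)": {"area": 628.4, "wafer_cost": 6_000},
    "AD104 (TSMC 4N)":     {"area": 294.5, "wafer_cost": 16_000},
}

for name, c in chips.items():
    n = dies_per_wafer(c["area"])
    # GA102: ~85 dies, ~$71/die; AD104: ~201 dies, ~$80/die (before yield)
    print(f"{name}: ~{n} gross dies/wafer, ~${c['wafer_cost'] / n:.0f} per die")
```

Even ignoring yield (which favors the smaller die), the much smaller AD104 comes out more expensive per die than GA102 under these assumed prices, which is the "opposite of historical trends" point.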

I don't fully understand the economics behind it all, but as a relative layman it seems Nvidia may have been better off snipping Ada off at AD103 and continuing to make Ampere below that, taking advantage of gradually improving yields and cost-reduced board designs to provide cheaper GA10x products, perhaps with a rebadge. It's been done before. I suspect the reason it's not happening is that foundry contracts and other decisions were made years in advance and all that capacity was pre-sold.
If the chips are so expensive that the BOM for the 4070 Ti is higher than the 3080 Ti's, this is not a trend that can continue. They might as well just stop making gaming GPUs.
 
It's already obvious that it gets harder and harder to release high-end GPUs that perform better without significantly increasing power consumption. Nvidia got a huge improvement from the 3090 to the 4090 because they switched to a cutting-edge node, but next time around they won't have that luxury. We're basically at a point where it'll be very hard to release cost-effective, lower-power GPUs at a low price. There's no real secret why DLSS, FSR and frame generation are the way forward: GPUs have to get smarter about using their power, because raw rendering power won't scale. A theoretical 5060 might not have vastly more rendering power than a 4060, but it'll probably have even better frame generation, upscaling, etc.
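To make that "smarter about using their power" point concrete, here's a toy calculation; every factor in it is an assumption for illustration, not a measured number:

```python
# Toy sketch: how upscaling + frame generation multiply presented frame rate
# without more raw rendering power. All factors below are assumptions.
native_fps = 60            # hypothetical native-4K render rate
upscale_gain = 1.7         # assumed speedup from rendering at 1440p and upscaling
frame_gen_gain = 1.8       # assumed presented-frame multiplier from frame generation

presented_fps = native_fps * upscale_gain * frame_gen_gain
print(f"{native_fps} rendered fps -> ~{presented_fps:.0f} presented fps")  # ~184
```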
 
If that's true, I really hope they standardize this stuff. Otherwise the future might be multi-GPU after all: need an NVIDIA card to play this, an AMD card to play that, etc. I'm just old enough to remember how shitty that is.
 
So they literally are just changing the name of the 4080 12GB to the 4070 Ti? Same price and everything?

Insanity.
 
Not sure if I understood that correctly, but Nvidia's "relationship" with TSMC has never ended since it began in the 90s. GA100 is an N7 GPU.

Which is not current. TSMC is notorious for trying to lock companies into their chip fabs, giving big "discounts" to continued high-volume customers. Nvidia went away to get a better deal with Samsung, so going back to TSMC will cost them. They'll be paying a good deal more per wafer than AMD does.

That's me out then.

Of course they are. No competition, no price change. Anyone wanting a "new" (arch) GPU should probably wait a while.
 
What is not "current"? Nvidia has been producing chips at TSMC without any gaps: Volta/Turing, then Ampere (GA100), then Hopper/Lovelace. They've used Samsung for "gaming Ampere", but that doesn't mean they "went away" from TSMC.
^^This ^^
And if we follow Frenetic Pony's own thinking, then Nvidia should get better wafer prices than AMD, as it's a much older customer (AMD had its own fabs, then used GloFo when Nvidia was already 100% at TSMC).
Besides, below is the 2021 customer market share at TSMC on advanced nodes (7nm and 5nm):
[Image: TSMC-Apple-Chart.jpg]
Contrary to what many people think, Nvidia is very close to AMD in TSMC market share... It would be very stupid for TSMC not to treat these two customers with similar pricing...
 
I mean, it is highly likely that both Nvidia and AMD get some "special" pricing from TSMC because of volumes and their long-running commitments to each other. This, however, means that whatever wafer pricing estimations we have seen disclosed are likely to be considerably off the mark for either of them, not that one of them is in a different position pricing-wise.
 
Which is not current. TSMC is notorious for trying to lock companies into their chip fabs, giving big "discounts" to continued high-volume customers. Nvidia went away to get a better deal with Samsung, so going back to TSMC will cost them. They'll be paying a good deal more per wafer than AMD does.

I don't think you have the right picture of how TSMC or other big contract manufacturers work. All that matters to them is money, and differences in discounts are based on wafer numbers, not on some romantic loyalty.

The second important metric is the timeframe, depending on the utilization of the node. Based on the wafer numbers and prices you pay, you get more or less wafer allocation. AMD probably paid less on 7nm because they bought more wafers. With Nvidia being back at TSMC on 5nm, there will only be a real difference if AMD is willing to wait longer for 5nm wafer allocation, which might be the case as they're using 6nm a lot.

The Qualcomm/Apple story (at 20nm?) should give you a better picture of TSMC's treatment of long-term customers. QC was TSMC's long-term No. 1 customer; Apple, then fabbing at Samsung, asked for higher volumes, and from one moment to the next QC got nearly no wafer allocation for the first six months despite being the loyal No. 1 customer. That's the reason QC went to Samsung in the following years.

Where AMD's TSMC exclusivity matters more is the payment terms. Nvidia might need to prepay more to get the same wafer allocation and price as AMD.
 
TSMC doesn't run a charity. Why should they give AMD a discount when they know that AMD desperately needs them more? On the other hand, Nvidia has shown that they can switch manufacturers and still produce good products.
 
AD104 vs GA102 is interesting: 26% higher transistor count but with half the bandwidth, way more L2, plus Ada improvements/features, and about the same performance for half the power draw? If only the price weren't so ridiculous; the power efficiency is very good this gen, unlike Ampere. With the specs suggesting a 285W limit, I expect the 4070 Ti will consume around 200-220W in most titles, which would place it in the same ballpark as the 1070 Ti (which is good).
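Quick check on those ratios (transistor counts are TPU's figures; the bandwidth numbers below are the published 21 Gbps GDDR6X configs; the "same performance at half the power" premise is from this post, not a measurement):

```python
# Transistor ratio (TPU's figures)
ad104_transistors, ga102_transistors = 35.8e9, 28.3e9
print(f"transistor ratio: {ad104_transistors / ga102_transistors:.2f}x")  # ~1.27x

# Memory bandwidth: 192-bit @ 21 Gbps vs 384-bit @ 21 Gbps
bw_4070ti = 192 / 8 * 21   # 504 GB/s
bw_3090ti = 384 / 8 * 21   # 1008 GB/s
print(f"bandwidth ratio: {bw_4070ti / bw_3090ti:.2f}x")  # 0.50x

# Equal performance at half the power (the post's premise) would be ~2x perf/W
perf_ratio, power_ratio = 1.0, 0.5
print(f"perf/W ratio: {perf_ratio / power_ratio:.1f}x")  # 2.0x
```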
 
With the specs suggesting a 285W limit, I expect the 4070 Ti will consume around 200-220W in most titles, which would place it in the same ballpark as the 1070 Ti (which is good).
200-220W is the stock 3070's average power consumption under full load, for another reference point. Maybe 0.75-0.8x of the 4080's performance, which is about 1.3x a 3070/3070 Ti at 1440p and 1.4x at 4K. Those numbers don't factor in the efficiency gains of Ada vs Ampere, though, so it could be higher.
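Chaining those estimates together, which are ballpark figures from this thread rather than benchmark results, implies roughly this positioning:

```python
# All inputs are the ballpark estimates from the post above, not benchmarks.
lo, hi = 0.75, 0.80          # 4070 Ti as a fraction of the 4080
vs_3070_1440p = 1.3          # 4070 Ti vs 3070/3070 Ti at 1440p
vs_3070_4k = 1.4             # ... and at 4K

# Implied 4080-vs-3070 ratio at each resolution
print(f"4080 vs 3070 @1440p: {vs_3070_1440p/hi:.2f}-{vs_3070_1440p/lo:.2f}x")  # ~1.62-1.73x
print(f"4080 vs 3070 @4K:    {vs_3070_4k/hi:.2f}-{vs_3070_4k/lo:.2f}x")        # 1.75-1.87x

# ~1.3x a 3070's performance at a similar 200-220W draw would be ~1.3x perf/W.
```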
 

Some sense at last. Still very expensive for its tier, but if performance is good then I think it's the best we're going to see for a while.
 
I don't think that a $100 drop will do much for its price/perf comparison to a 3080-10 MSRP. Coupled with a lack of a reference model, this likely won't affect retail prices at all compared to where they would be if they'd just launched the thing as the 4080-12 a couple of weeks ago.

All in all, this still seems like a delay whose only purpose is to re-adjust the positioning against the 7900 XT.
 
I don't think that a $100 drop will do much for its price/perf comparison to a 3080-10 MSRP.
Eh... I could be convinced. It's exactly the same MSRP as the 3080-10 once you account for inflation. According to TPU, the 3090 Ti is 22% faster on average than the 3080-10, so if the 4070 Ti matches the 3090 Ti then for the same price (as a 3080-10) you're looking at a 22% generational improvement (possibly more in RT-heavy titles) + DLSS3 + somewhat improved power efficiency. There's a story here. Perhaps not as compelling as prior generations, but such are the times we live in. We're also comparing against a rather unusually-upscale xx102-based 3080; that's a tough incumbent.
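For what it's worth, the inflation math roughly checks out. The launch MSRPs below are the official ones; the ~15% cumulative US inflation from September 2020 to late 2022 is an approximation I'm assuming, not an exact CPI lookup:

```python
# Rough inflation check. Launch MSRPs are official; the cpi_factor is an
# assumed approximation of Sep 2020 -> late 2022 US inflation.
msrp_3080_2020 = 699
msrp_4070ti = 799
cpi_factor = 1.15  # assumption

adjusted = msrp_3080_2020 * cpi_factor
print(f"3080-10 MSRP in late-2022 dollars: ~${adjusted:.0f}")   # ~$804
print(f"4070 Ti MSRP: ${msrp_4070ti}")                          # $799

# If the 4070 Ti matches the 3090 Ti (TPU: ~1.22x a 3080-10), perf per
# inflation-adjusted dollar improves by roughly that same ~22%.
perf_gain = 1.22
print(f"perf/$ vs 3080-10: ~{perf_gain * adjusted / msrp_4070ti:.2f}x")  # ~1.23x
```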

Coupled with a lack of a reference model, this likely won't affect retail prices at all compared to where they would be if they'd just launched the thing as the 4080-12 a couple of weeks ago.
That's a bigger problem. And scalpers. But the 3080-10 had the same problem. :/
 