Dedicated gaming GPUs bottom of the priority list?


Supposedly Nvidia is delaying Blackwell gaming as well. It would seem dedicated gaming GPUs have gone all the way to the bottom of the priority list for both AMD and Nvidia. I understand that there's an AI goldrush on and that throwing all supply at it is guaranteed money. Yes, even for AMD, whose supposed lead time for MI300 is "just" 25+ weeks, less than half of Nvidia's!

Yet going by seemingly reliable leaks, RDNA4 would be functionally ready to launch in August. Patches in place, sample boards shipped out to board partners, etc. But if Strix Point, made on the same process node, is popular enough that RDNA4 just gets booted down the priority list and "launches" months later, that's an interesting indication of where the dedicated GPU market is going.

Until the chip market crashes and supply vastly outstrips demand, I do wonder how much of a future dedicated GPUs have. There are already plenty of complaints that a GTX 1060/RX 580 equivalent for the last generation, in terms of dedicated-GPU-to-console performance per dollar, just hasn't appeared. If every company whose demand outstrips supply has better things to do with that supply, such a card isn't going to appear anytime soon, especially as "Moore's Law" (already technically dead for a decade or so) slows to a crawl.
 
But I was told that wouldn't happen. Apples and oranges.

 
All of the gaming chips seem to be on 4/5nm. Haven’t seen any reports of supply issues with those nodes.

This is shaping up to be a boring refresh generation anyway. AMD is looking to bring reductions in price and power consumption with no change in absolute performance. Intel is playing catch-up. And there are no games to take advantage of whatever Nvidia comes up with, as the current consoles are maxed out.
 
RDNA4 would be functionally ready to launch in August. Patches in place, sample boards shipped out to board partners, etc. But if Strix Point, made on the same process node, is popular enough that RDNA4 just gets booted down the priority list and "launches" months later, that's an interesting indication of where the dedicated GPU market is going
That's just AMD; NVIDIA is not delaying RTX 5000. AMD, on the other hand, is suffering from being cornered in the dGPU market (low market share) and low interest in the brand (due to lagging in tech/features). Their gaming revenue dropped 50% in Q1 2024, and they are forecasting an even bigger drop throughout the rest of the year, which means very low sales, which means lots of unsold RDNA3 GPU stock. It's only logical that they delay RDNA4 until they get rid of that stock and attempt to restart interest in the brand.
 
I wonder where the notion that RDNA4 "would be functionally ready to launch in August" is coming from, tbh. I sure hope it's not the well-known local "leaker" who seems unable to get anything correct.
 
IIRC the rumors say Nvidia will release the ultra-high-end Blackwell GPU first, with no info on lower segments.
 
IIRC the rumors say Nvidia will release the ultra-high-end Blackwell GPU first, with no info on lower segments.
That's nothing new though. Nvidia has been releasing their lineups like this for about a dozen years. Lower-priced SKUs will follow in 2025.
 
Gaming/dedicated GPUs are still a reliable, profitable source of money for these companies, so they're not going to abandon them completely. Also remember they use these same GPUs for laptops as well, and the higher-performing Strix Halo is only expected to be a very high-end option. Strix Point itself might eat up some of the lower-range discrete GPU market in laptops, but remember, this is just AMD. Having an Nvidia sticker on the box will still be a desirable marketing tool for laptop manufacturers, so that'll likely remain a popular thing.

Certainly things are not in a good place for discrete GPUs, and consumers are understandably unhappy, but they're still gonna buy. Even with the huge cost-of-living issues, consumerism still seems as crazed as ever. They may moan at the prices, but they're still gonna hand over that credit card at the end of the day anyway.
 
All of the gaming chips seem to be on 4/5nm. Haven’t seen any reports of supply issues with those nodes.

This is shaping up to be a boring refresh generation anyway. AMD is looking to bring reductions in price and power consumption with no change in absolute performance. Intel is playing catch-up. And there are no games to take advantage of whatever Nvidia comes up with, as the current consoles are maxed out.

Yes, 7nm and 5nm are currently underutilized as the 3nm node ramps and some of the major customers (e.g. Apple, Qualcomm, AMD, MediaTek) shift some orders to 3nm. TSMC is also supposedly cutting prices on 7nm and 5nm as the equipment is depreciated/depreciating. There is plenty of capacity and favourable pricing, which logically makes these nodes the better option for consumer GPUs. So with no node jump, this generation is likely to be only a marginal improvement with perhaps better price/performance at certain tiers.

I don't expect consumer GPUs to move to 3nm until 2026, when 2nm starts ramping, and even 3nm is not a huge jump as density does not improve significantly. I suspect we will not see a significant increase in GPU performance until they move to 2nm/GAAFET, where we should see significant improvements in power/performance (similar to Pascal with FinFET).
 
So with no node jump, this generation is likely to be only a marginal improvement with perhaps better price/performance at certain tiers.
Or they push die sizes in order to get the desired performance improvements. Rumors so far of both RDNA4 and Blackwell don't suggest this, but it's exactly what Nvidia did for Maxwell and Turing.
 
This is very likely what we will be getting. The alternatives are: a) no performance gain, just a value improvement (not gonna happen due to competition); b) a performance gain only, with no value improvement (also not gonna happen, because of competition).
 
Hopefully we don’t get both a mediocre performance jump and a mediocre value improvement.
We will never, ever get properly good-value GPUs again. This past generation was the last opportunity to fight for this as consumers, and we gave in instead. The precedent is set, and people are now fully normalized to much higher pricing.

Our only chance would be if Radeon somehow got back to being properly competitive again, and that's clearly not on the horizon. But even then, Nvidia would probably be glad to bleed a small amount of market share to keep their sky-high margins and prices. The DIY market is simply a much smaller percentage of their income now, so there's less motivation to fight hard for it, and they would simply bank on their mindshare advantage anyway.
 
I have nothing against using a less than cutting-edge process if it brings prices down. It worked well for Ampere.
It put Nvidia at real risk of being beaten in perf/watt by the competition though, and it's unlikely that they'll feel like repeating that gambit any time soon.

Also, technically, N5 isn't a cutting-edge process anymore, so it's arguable whether going to an even less advanced node would be that beneficial even in terms of perf/price. The lowest segment, where this is key, is still being served by Ampere, suggesting that there's nothing between that and N3 which would be a better fit from that point of view.
 
It put Nvidia at real risk of being beaten in perf/watt by the competition though, and it's unlikely that they'll feel like repeating that gambit any time soon.
Maybe it would upset NVIDIA, but I don't think it matters very much to their gaming customers. They seem intent on buying NVIDIA regardless. Losing some perf/W but gaining significant perf/$ would be a great tradeoff IMO. 4070 Super-level performance for ~$450 would be a huge seller even if it pulls 250W.
 
Also, technically, N5 isn't a cutting-edge process anymore
Well, according to you literally just a month ago, it is cutting edge.

But I guess it's kind of a Schrödinger's process - cutting edge when you observe it to be useful for your argument, and not cutting edge when not useful. smh

That said, I generally agree that going to anything less advanced than TSMC's 5nm process would not be useful at this point in time. Samsung 4nm is probably the only other 'at scale' node that is remotely in the same ballpark (and available to customers), and I just don't think there's much reason to go backwards like that. Samsung 8nm wasn't exactly a fantastic node, but it was at least still an efficiency/density improvement on what came before. Samsung 4nm would not be.
 
They seem intent on buying NVIDIA regardless. Losing some perf/W but gaining significant perf/$ would be a great tradeoff IMO.
Losing perf/watt would mean losing the top performance spot, which in turn would remove their "halo" status and likely significantly affect sales of the whole lineup.
Contrary to the weirdly popular idea that people would just buy Nvidia "regardless", they certainly would not, just like the very same people aren't buying as many Intel CPUs now as they did previously.

Well, according to you literally just a month ago, it is cutting edge.
Hence the "technically" part, which you've decided to just skip for some reason.

But I guess it's kind of a Schrödinger's process - cutting edge when you observe it to be useful for your argument, and not cutting edge when not useful. smh
No, it's not a kind of "Schrödinger's process".
It is still cutting edge if we're talking about which processes are available for production worldwide in general, and it will likely remain so for several years.
But since we're not talking about that, and are instead talking specifically about which process Nvidia can put their next consumer GPU lineup on, this is obviously a different landscape: they can't put it on anything below what they are using now (the N5 family). So in this conversation, a suggestion to use a less advanced process points to their current choice already being exactly that, i.e. "not cutting edge for Blackwell production".
And you're just taking a cheap shot instead of reading and engaging with the conversation, as usual.
 
And you're just taking a cheap shot instead of reading and engaging with the conversation, as usual.
I literally responded with actual commentary about the actual subject matter, which you are conveniently pretending didn't happen.

If you really cared about keeping things on topic, you wouldn't have ignored that.

And yes, it is 'technically' cutting edge when convenient, and not cutting edge when not convenient. Same thing. You are simply arguing whatever is most useful in the moment.
 