AMD Execution Thread [2023]

Yeah, vanilla Raphael has a 230W PPT or thereabouts (it doesn't gain much there vs. a 160W PPT, but I digress).
I thought we were referring to just CPUs as opposed to APUs.


Outside of the 2 highest end AMD products, high power draw is limited to Intel for little in the way of any actual benefit.
 
Sure but photorealism is still a long ways off. The immediate problem is price / performance stagnation in all market segments so even incremental IQ gains are increasingly expensive.
Stagnation on HW is no excuse for bad performance. Instead of path tracing and Lumen, they need to develop software which runs well on the affordable HW of the time.
This does not mean enforced stagnation on IQ at all.
It's just that game devs have grown used, over decades, to basing all their progress on HW progress, which they take for granted and get for free. It's rooted so deeply that now everybody thinks SW and HW progress are tightly coupled, but that's not the case.
Gaming at 1080p medium settings is getting more expensive not less. I don’t see how APUs solve that problem.
HW is not meant to solve or even compensate failure on SW development. Devs will learn it the hard way sooner or later.

The problem APUs can solve is establishing an affordable mainstream, that's all. And devs will target this mainstream as usual.
If this meant falling back to static / inferior lighting, for example (which isn't the case), it's not the end of the world. But if devs fail to support an affordable mainstream, it is the end.
Currently the mainstream is undefined. You can get a 4060 for 300, which is OK. But everybody tells you not to buy a 4060, because it's crap. New games also often have minimum specs above that.
It's like the whole industry - IHVs, devs and press - sat down at a table to figure out how to kill their business most effectively. And the reason is inflated expectations on all ends.
A move towards APUs in the name of cost and power efficiency is an opportunity to fix this broken state. But all three parts of the industry need to promote and support this new goal, which diverges from the former reliance on Moore's Law.

I'm not saying that's super awesome, but what else could we do?
When iGPUs came up, I did not take the idea seriously, nor did I want it. The iGPU was not meant for gamers but for office PCs. For gaming, dGPUs were affordable and awesome. Everybody wanted them.
But things have changed a lot since then, so I had to change my mind too.
 
But the increasing majority will say goodbye to chasing photorealism for premium costs. They just want to play some games for fun.
That's not even remotely historically accurate. PC gaming has always thrived on state-of-the-art hardware. People quickly forget the days of the 90s and 2000s, when APIs, video accelerators and GPUs were changing at a breakneck pace and you had to change hardware every two years just to keep up. Now you want potato hardware to run the latest games and last you a decade. That's just not possible, and never has been.

If Switch2 looks better than Switch1, it makes them just as satisfied as comparing PS5 to 4.
Even consoles don't agree with your assessment. PS4 alone wasn't enough; PS4 Pro and Xbox One X were created to push things further, and now we are getting a PS5 Pro. People want more, devs want more, potato hardware is no longer enough, even in console space.

HW is not meant to solve or even compensate failure on SW development. Devs will learn it the hard way sooner or later.
If devs learned to make Starfield run on a 1060 at 4K max settings through breakthroughs in software, why would you think they will settle for this? I mean they have 4090s to play around with, they will just jack up game complexities to make use of all that power. We would be talking about gaming at 8K and 16K, or 4K120 Path Traced games. You would be back at square 1, your potato hardware becoming irrelevant image quality wise.

The problem APUs can solve is establishing an affordable mainstream, that's all. And devs will target this mainstream as usual.
I think you are forgetting a serious fact here: the PC is scalable. No one forces anyone to play at Ultra settings; if you can't afford it, play at medium settings, and if you can't afford that, buy used hardware and play at medium settings. Your PC experience is highly customizable to your budget. What exactly will APUs solve? Cheaper hardware? We already have that. Low resolution/low settings gameplay? We already have that!

Bro wtf is Strix-Halo then?
I will believe that when I see it. But even if it gets released with its rumored specs, you are looking at a sub-3060-class GPU arriving in late 2023.
 
PC gaming has always thrived on state of the art hardware, people quickly forget the days of the 90s and 2000s, where APIs, video accelerators and GPUs were changing at a breakneck pace
You talk about historical accuracy, but then your arguments are personal conviction, ideals, and perception.
I come from the same camp. I understand what you mean. But it's dead, Jim. We can no longer afford fast-paced progress, period. Groundbreaking progress on chip tech would be needed, and that is not in sight.
now you want potato hardware to run latest games and last you a decade.
Basically yes, although I hope we still see enough HW progress to justify an upgrade cycle of, say, 5 years.
And I do not expect it to run the latest games, because they are too inefficient (or 'ambitious', if I try to be nice), and they can't scale down.

Scaling is the keyword, and we have to achieve it anyway, because we want to run and make the same games on everything from Switch and Steam Deck up to a 4090.
That's a ratio of roughly 1 : 250 in teraflops, and I agree such scaling is probably not possible. The lowest and highest ends need specific treatment, which is bad and adds up costs.
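A quick back-of-the-envelope check of that ratio (spec-sheet peak FP32 numbers I picked for illustration, not measurements):

```python
# Back-of-the-envelope check of the "1 : 250" scaling claim.
# Peak FP32 figures are commonly quoted spec-sheet numbers, not measurements.
tflops = {
    "Switch (docked)":   0.39,   # ~768 MHz x 256 Maxwell cores x 2 FLOP/clk
    "Switch (handheld)": 0.16,   # ~307 MHz GPU clock
    "Steam Deck":        1.6,
    "PS5":               10.3,
    "RTX 4090":          82.6,
}

top = tflops["RTX 4090"]
for name, t in tflops.items():
    print(f"{name:18s} {t:5.2f} TFLOPS  ->  1 : {top / t:4.0f}")

# Docked Switch vs. 4090 lands around 1 : 210; handheld mode is closer to
# 1 : 500, so "roughly 1 : 250" is in the right ballpark.
```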
Thus we want more powerful handhelds, but also less powerful high end dGPUs. (If AMD indeed decides to ditch high end GPUs, that's actually good news to me.)
The trajectory is clear: we seek an affordable sweet spot (currently consoles), develop primarily for that, and try to achieve good enough scaling to support the extremes as well.
It's also clear that the current PC + big dGPU is above that sweet spot, if cost is the measure.

That's a reasonable conclusion, and any counterargument is based either on inflated expectations or on experience with inefficient software. And sadly games represent both of those issues very well, cough.

PS4 alone wasn't enough
It was never maxed out at all.
potato hardware is no longer enough, even in console space.
Game consoles are not potato HW. They are optimized for the gaming workload. And they illustrate what a PC APU could achieve, at the least, if it overcomes the BW limit imposed by dated platform standards.
In other words: We already have proof that APU is actually enough from consoles (or even better, Apple).
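To put rough numbers on that bandwidth gap (spec-sheet peaks; the desktop DDR5 configuration is just a typical example I chose, not a given):

```python
# Peak memory bandwidth in GB/s = bus width (bits) / 8 * data rate (GT/s).
# All spec-sheet peaks; the desktop DDR5 config is just a typical example.
def bw_gbs(bus_bits, gigatransfers):
    return bus_bits / 8 * gigatransfers

print("Desktop dual-channel DDR5-6000:", bw_gbs(128, 6.0), "GB/s")   # 96
print("RTX 4060 (128-bit GDDR6 @ 17): ", bw_gbs(128, 17.0), "GB/s")  # 272
print("PS5 (256-bit GDDR6 @ 14):      ", bw_gbs(256, 14.0), "GB/s")  # 448
print("Apple M2 Max (512-bit LPDDR5): ", bw_gbs(512, 6.4), "GB/s")   # ~410
```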
You would be back at square 1, your potato hardware becoming irrelevant image quality wise.
This entirely depends on the person who looks at the image.
You want 8K@120 and PT? Fine, get RTX and enjoy with max settings.
I want 1080p@60 and realtime GI on a cheaper mini PC. And I will be fine running this on its iGPU.
Which one of the two becomes irrelevant remains to be seen.
My guess is neither becomes irrelevant, honestly.
But I'm convinced you need my proposed efficient and affordable mainstream as the stable economic base for your enthusiasm, and that dependency does not go both ways.
And I don't say this to attack your ideology, or to predict that your enthusiast camp is going extinct, no.

What I'm trying to say is: the PC platform needs refinement. It needs to split up into performance segments. And we need social awareness about those segments, so people can pick their camp.
Once we have this, we won't waste time on discussions like this one, because the differing goals and desires are already clear and separated. Similar to console vs. PC gamers now.
I really think a lot of the current dissatisfaction and ranting from PC gamers comes from the fact that PC gamers are no longer a single community. We still assume this, but gaming became just too big, and we can no longer expect agreement on what games should look like, what the specs are, what the genres of choice are, etc.
So, technically, I propose a split like this:
1. Affordable APU. Mini-PC and handhelds, costs <1000, components not as easy to replace or upgrade. Same games, but no max settings.
2. High end traditional PC, costs 2000 or more. Same games at max settings.

I know a PS4 can do visuals very close to Portal RTX, because I did this 10 years ago already. I'm currently really at the edge of failure with tools development, but I know it is possible. So I know the APU potato is easily good enough.
If that's the base, extending it with PT is the natural next step, and it will just work. But only for the second camp willing to spend much more.

As an enthusiast you're on the lucky side, because scaling up is always easy.
But you need the platform, and you need your camp to remain big enough to be supported.
Thus a rise of APUs does not pose any threat to your ideology. It's the opposite - it only helps you, and you should not call the hand which feeds you a potato. :p


What exactly will APUs solve? Cheaper hardware? We already have that. Low resolution/low settings gameplay? We already have that!
But I don't want used, power-hungry, bulky crap. So no, we don't have it. We are far from it, if we look at the traditional PC.
We still talk about IBM PC with replaceable components, which was a really good feature back then.
But not anymore. Because Apple has shown they can put an entire mainboard, including CPU, GPU and RAM, on a single small package, consuming just a fraction of the power and space.
This, and only this is modern computing.
You tell me I should be happy with steam-engine-driven locomotives, but I want a slick e-car.
I don't want history, i want a future.
And yes, APUs will (and already do) solve this. The only question is if it's still an open platform with a Windows logo on it.
 
4060 is $300. They also make a bunch of older stuff like 1050 and 3050.
Doesn't change much, 1 mid-range vs 6 high-end. Even if we account for the 2022 restart of Turing production, 1630, 3050 and 3060 fit your criteria of sub-$400 MSRP, and 3060 Ti and 3080 are over that - so 4 mid-range vs 8 high-end models.

AMD's h/w has been trailing Nv's in perf/transistor since Kepler/Maxwell so they had to compensate by selling bigger dies at similar price points, not smaller.
I don't see that, AMD used similar or smaller die sizes across the entire range, and similar or smaller manufacturing nodes as well.
AMD also reused older GCN generations in low/mid range until Navi, while NVidia reused older generations only in low-end and OEM until Maxwell 2.

Die sizes in mm2:
GCN1/2: 90, 118, 123, 160, 212, 352, 438
GCN3: 359
Fiji: 596

GK1/GK2: 79, 118, 221, 294, 561
GM1/GM2: 148, 227, 398, 601


Polaris: 101,123, 232
Vega10: 486
Vega20: 331

Pascal: 74, 132, 200, 314, 471
Volta: 815


Navi1: 154, 251
Navi2: 107, 237, 335, 520

TU11/TU10: 200, 284, 445, 545, 754
Ampere: 200, 276, 393, 628


Navi3: 204, 350, 531

Ada: 159, 188, 295, 378, 609


AMD had an advantage in processing TFLOPS and memory bandwidth, but Nvidia had an advantage in fillrate until AMD released Fiji (R9 Fury) with HBM memory.

Then Nvidia improved both memory bandwidth and processing power with Pascal, and reduced die sizes with Ampere and Ada, while AMD didn't even release any sub-200 mm2 class die with Navi3, like they used to do for many years.

It is also a bit moot whether smaller dies mean lower production costs in comparison, as they tend to use different production processes.
GCN/Fiji and Kepler/Maxwell were produced on the 28 nm TSMC process node.
AMD moved to GloFo 14 nm in the next cycle with Polaris/Vega10, but returned to TSMC 7 nm with Vega20/RDNA when GloFo cancelled their 7 nm node.
Nvidia stayed on 16/14/12 nm nodes for Pascal/Turing until Ampere, which used an 8 nm node.


GPUs like GM200 certainly weren't made for DC or AI.
Also it's not like Nv is making only 600 mm^2 GPUs right now.

It's been going up much faster for high end because high end doesn't have an upper pricing ceiling.
Meaning that you can make a fully enabled 600 mm^2 GPU on N4 and sell it to gamers for some $2500.

That's the problem. Kepler/Maxwell and the GTX Titan line introduced 561 mm2 GPUs and $699 to $999 price tags for a single-GPU board, and then it never stopped - 601, 815, 754, 628, 609 mm2... So 600+ mm2 die sizes became the norm in the consumer segment.

When NVidia lists these high-end cards at $800 to $3000, they don't expect many sales (except when people submit to the cryptomining craze, AI craze, or whatever the next big-thing craze is), but it works to make the regular customer accept a gradual price hike even in lower segments with each new generation - because when they see these $1000 top-end cards, suddenly $500 for mid-range doesn't seem so ridiculous anymore.

And now the old model - where each new generation of GPUs in the "sub-$400" tier stays at the same price range but brings 1.5-2 times better performance thanks to higher frequencies and more gates on a more advanced process - is effectively broken.

people want fast GPUs to be cheaper because... they want it. That's the extent of reasoning.
They want it because that's how it used to be during the entire 27 years of the 3D graphics revolution.


Well, yeah, nobody says otherwise.
...
 
Stagnation on HW is no excuse for bad performance. Instead of path tracing and Lumen, they need to develop software which runs well on the affordable HW of the time.
This does not mean enforced stagnation on IQ at all.

Oh I completely agree with this. I think there’s a ton of untapped potential in existing hardware and it would be great if software were better optimized for it.

The problem APUs can solve is establishing an affordable mainstream, that's all. And devs will target this mainstream as usual.

I don’t understand this point. There are already cheap APUs on the market (with poor performance) and they haven’t established any sort of mainstream. Why would this change?
 
I know a PS4 can do visuals very close to Portal RTX. Because I did this 10 years ago already.
Nothing comes close to Portal RTX. Don't know why people can't accept that path tracing is the solution to every problem and not a reason to create new problems.

I can play games just fine on my Switch or a PS3. There's no reason to doubt that a "console" can play games.

/edit: Ironically my 4090 has 9x more compute performance than a PS5, yet I don't get 9x more performance in PS5 ports. Obviously our problem is not hardware, it is the inefficient and unoptimized software layer between hardware and application.
 
But I don't want used, power-hungry, bulky crap. So no, we don't have it. We are far from it, if we look at the traditional PC.
Now that's just pedantic. You can easily have efficient CPUs and GPUs on PC right now if you want them. Build a 7800X3D + 4060 and play at 1080p with DLSS3 and you will have the highest path tracing/ray tracing/rasterization performance at minimal power draw of any system on earth. DLSS3 already decreases power consumption, by the way, and Ada is more power efficient than anything else.

We can no longer afford fast-paced progress, period. Groundbreaking progress on chip tech would be needed, which is not in sight.
That's irrelevant. Consumers were willing to pay frequently back then; today it's slowed down, but it's the same principle: PC people want good hardware.

PS4 was never maxed out at all.
Such is the case for every piece of hardware.

In other words: We already have proof that APU is actually enough from consoles (or even better, Apple).
Base consoles are not enough, that's why you have higher-tier consoles: APUs get outdated fast. They are fixed and can't be upgraded to last longer.

Apple silicon is expensive to make and wouldn't fit your cost model; they are built on the latest nodes, with billions and billions of transistors (67 billion for the M2 Max) and a huge bus (512-bit). So no, that doesn't apply either.

The PC platform needs refinement. It needs to split up into performance segments. And we need social awareness about those segments, so people can pick their camp.
That split is already there, in hardware, in settings and everything.

I am all for extracting the full potential of hardware, and I am all for advances in graphics algorithms. I just don't see how APUs fit into this; you couldn't even do this for first-party console exclusives, so how would APUs on PCs force developers to do anything?
 
Nothing comes close to Portal RTX. Don't know why people can't accept that path tracing is the solution to every problem and not a reason to create new problems.
PT is not efficient because it lacks a radiance cache, so you need to recalculate all bounces every frame.
If you have a cache, you get infinite bounces for free. Examples: some pre-RTX Minecraft mod which was great but called itself 'PT'; the final Metro Exodus DXR version, which also calls itself PT but isn't; and Lumen.
For a realtime application it is just stupid to miss out on this essential optimization, and that's why classical PT is not a good realtime solution.
NV also aims to add caching; they are working on the Neural Radiance Cache.

That said, we will continue to use the PT term for games, but we will extend it and diverge from the classical definition.
Classical PT as-is is not the solution. If you think so, you're just wrong, or you're willing to spend more than needed to achieve photorealism.
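To illustrate the caching argument, here is a toy sketch of my own (not taken from Lumen, Metro, or NV's NRC; the scene and numbers are made up): radiance cached from previous frames feeds the single bounce traced this frame, so multi-bounce lighting accumulates over time.

```python
import random

random.seed(0)

NUM_PATCHES = 8
ALBEDO = 0.7
EMISSION = [1.0] + [0.0] * (NUM_PATCHES - 1)   # patch 0 is the only emitter

# The cache: per-patch outgoing radiance from the previous frame.
cache = [0.0] * NUM_PATCHES

def gather_one_bounce(patch, samples=64):
    """One bounce of gathering for `patch`.

    Classic path tracing would recurse here; instead we read the cached
    radiance of whatever we hit, which already contains all the bounces
    accumulated in earlier frames.
    """
    total = 0.0
    for _ in range(samples):
        other = random.randrange(NUM_PATCHES)
        if other != patch:
            total += cache[other]
    return ALBEDO * total / samples

for frame in range(16):
    # One bounce of work per frame, fed by last frame's cache.
    cache = [EMISSION[p] + gather_one_bounce(p) for p in range(NUM_PATCHES)]
    print(f"frame {frame:2d}  patch 1 radiance = {cache[1]:.4f}")

# The value rises for a few frames and then converges: multi-bounce lighting
# shows up even though each frame only ever traced a single bounce.
```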

Build a 7800X3D + 4060 and play at 1080p with DLSS3 and you will have the highest path tracing/ray tracing/rasterization performance at minimal power draw of any system on earth.
That's what should be said, yes. But it's not. What they tell us instead is:
* CP2077 lists minimum specs for PT above a 4060. (Although it runs quite fine on it, according to benchmarks.)
* You should not buy any 8 GB GPU at all anymore, because that's no longer enough for gaming.
So I have to pay 1000 for a gaming PC which isn't fully capable. At this point I'm frustrated, quit PC gaming, and buy a PS5. Which is indeed what people do.
If APUs become widespread, devs will support weaker HW better, and I guess they will replace the current GPU entry level, like the 4060, for good.

That's irrelevant. Consumers were willing to pay frequently back then; today it's slowed down, but it's the same principle: PC people want good hardware.
No, because the end is nigh. Upgrading was also an investment in a dream about awesome future games in our imagination. That was key to the motivation to invest.
But now we already see PT games, so the journey is almost over. And we also see the cost is so high, we're obviously beyond the point of diminishing returns.
To many, that's more frustration than excitement, so they're no longer willing to invest in visuals which do not benefit their experience enough.
We're in a phase of stagnation. That's no problem, but some things will change: fewer HW sales, more progress on game design rather than gfx, longer upgrade cycles.
And after some oscillation between the Steam Deck and the 4090 we should find an economic sweet spot and stick with it for quite some time. The HW industry will have a hard time convincing us otherwise.


Apple silicon is expensive to make and wouldn't fit your cost model, they are built on the latest nodes, with billions and billions of transistors (67 billion for M2 Max) with huge bus (512-bit). So no, that doesn't apply either.
It's more expensive than a x86 CPU, but less expensive than CPU+GPU+RAM. Costs for GPUs really went through the roof. It's no longer some cool gadget people want to have - it's a requirement they hate to have.

That split is already there, in hardware, in settings and everything.
Not everything. There is no separation between high-end tech enthusiasts and others who don't care about tech but just want to play games at a smooth enough framerate.
They both see the same sites, visit the same forums, etc., and they wonder why they can't agree on anything. Some of them get upset if they hear they need a 4090 to game, others are upset if people say RT is useless.
This could be avoided easily if both groups were more aware of each other. (Just some noisy idea of mine.)

I just don't see how APUs fit into this
It fits well into this once APUs reach console-level performance, to put it practically.
The problem it can solve is a shrinking PC user base. Currently that's really worrying. It's no longer speculation and predictions - it's real. (But I can't link the statistics I've seen.)
 
Rumor: Kepler is claiming RDNA4 is not going to have any high end options, just like RDNA1.


Until some specific details of what exactly could go wrong are leaked, I find it hard to believe that AMD would just skip Navi 41/42 altogether and effectively give up mid-range and high-end market for the entirety of 2024-2027. If true, they would have to start again from the ground up with maybe 5% total market share.


With recent reports that NVidia committed their entire booked TSMC wafer capacity to their AI chips and even reduced production of the RTX 4000 series, AMD could have decided to go the same route and cash in on money-making APUs like the Instinct MI300/MI400 accelerators, mobile Strix Point APUs with integrated RDNA3.5 graphics, and the PS5 Pro 'Trinity' update, instead of trying to win back the gaming enthusiast segment...


Either way, if the rumour is true, even the small base of remaining Radeon users would hardly be tempted to invest in a gaming GPU from AMD again, considering their plans to cut the remaining GCN dGPUs (Polaris/Fury/Vega) and APUs (Radeon Graphics / Vega 3/6/8/10 / RX Vega 11) from the latest driver releases.

https://videocardz.com/newz/amd-rumored-to-be-skipping-high-end-radeon-rx-8000-rdna4-gpu-series

Didn't RDNA4 originally consist of four chip configurations? I still remember the original rumours about RDNA3 being a chiplet arch, with the high-end solution made of two GCDs - which still meant three different die configs (N31, N32, N33).
With RDNA4 I would expect the same number of die configurations (N43, N42, and N41 with two GCDs).

There were plenty of rumours about which Navi 3x variant would be using chiplets, but in the end only two GCD designs were released:
  • top-end Navi 31 - 96 / 84 CUs (6144 / 5376 shader ALUs), RX 7900, ~300 mm2 GCD + six ~36 mm2 MCDs, total 531 mm2; 80 CUs (5120 shader ALUs) and five MCDs, RX 7900 GRE
  • mid-range Navi 32 - 60 CUs (3840 shader ALUs), RX 7800, ~200 mm2 GCD + four ~36 mm2 MCDs, total 350 mm2

Navi 33 with 32/28 CUs (RX 7600/7500) remains monolithic, and the rumours about a Navi 3X part having two GCDs (2x 64 CUs for a total of 8192 shader ALUs) did not materialize so far.


Correct me if I'm wrong. AFAIK the incentive for Navi 3x to go with chiplets was to reduce the size of individual chips by separating the graphics compute (GCD) and memory/cache dies (MCD). Partially disabled dies could then be packaged with fewer MCD chips to create a lower SKU. To connect the chiplets, they employ an organic-substrate "high performance fanout" interconnect similar to that in Ryzen CPUs, which isn't nearly as expensive as the silicon interposers used in Fiji/Vega or stacked 3D designs like Aldebaran (Instinct MI250).
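A rough sketch of that reuse argument (the areas and configs are approximate public figures, used here only for illustration):

```python
# Rough sketch of the GCD + MCD reuse argument. Areas and configs are
# approximate public figures, used only for illustration.
GCD_AREA_MM2 = 300   # ~300 mm2 Navi 31 graphics compute die (5 nm)
MCD_AREA_MM2 = 37    # ~36-37 mm2 memory/cache die (6 nm)
MCD_BUS_BITS = 64    # each MCD carries a 64-bit GDDR6 PHY
MCD_CACHE_MB = 16    # ...plus a 16 MB slice of Infinity Cache

def sku(name, active_cus, mcds):
    area = GCD_AREA_MM2 + mcds * MCD_AREA_MM2
    print(f"{name:12s} {active_cus:3d} CUs  {mcds} MCDs  "
          f"{mcds * MCD_BUS_BITS}-bit  {mcds * MCD_CACHE_MB} MB IC  "
          f"~{area} mm2 total silicon")

# The same ~300 mm2 GCD, binned down and paired with fewer MCDs per SKU:
sku("RX 7900 XTX", 96, 6)
sku("RX 7900 XT",  84, 5)
```

Only the small 6 nm MCDs get duplicated; the expensive 5 nm compute die is shared across SKUs by binning.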


RDNA4/Navi 4x would be a step further towards complex chiplet configurations, which would be impossible or too costly to make as a monolithic die according to AMD's design philosophy. Thus the entire RDNA4 family would use one single 'GCX' (Graphics Compute Die) chiplet with 48 CUs, and individual chips would be packaged by vertically stacking several GCX dies into a single GCD on the 3 nm node, then combining this GCD with several MCD dies produced on a 5/6 nm node. According to RedGamingTech, this would result in a top-end Navi 41 chip with 129 TFLOPS (a quick arithmetic check follows below), and the entire lineup consisting of
  • Navi 41 with 144 CUs (9216 shader ALUs), 129 TFLOPS,
  • Navi 42 with 96 CUs (6144 shader ALUs),
  • Navi 43 with 48 CUs (3072 shader ALUs).

Navi 44 would remain monolithic with 32 to 40 CUs (2048 to 2560 shader ALUs).
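As a quick arithmetic check of the 129 TFLOPS figure above, assuming RDNA3-style dual-issue FP32 (4 FLOPs per ALU per clock), which is my assumption about the rumoured part:

```python
# Implied clock for the rumoured Navi 41: 144 CUs = 9216 "shader ALUs".
# Assuming RDNA3-style dual-issue FMA, i.e. 2 ops x 2 issue = 4 FLOPs/ALU/clk.
alus = 9216
flops_per_alu_per_clock = 4
target_tflops = 129

clock_ghz = target_tflops * 1e12 / (alus * flops_per_alu_per_clock) / 1e9
print(f"implied clock ~= {clock_ghz:.2f} GHz")   # ~3.5 GHz

# The same formula reproduces the 7900 XTX's quoted ~61 TFLOPS at ~2.5 GHz:
print(6144 * 4 * 2.5e9 / 1e12, "TFLOPS")         # 61.44
```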

RDNA4 architecture would also have improved scalar ALUs with FP32 pipeline and faster WaveMMA V2 (the 'AI' processing blocks), faster raytracing block with hardware-accelerated traversal of bounding volume hierarchy (BVH), as well as improved geometry engine and Infinity Cache v3. And RDNA3+ (RDNA 3.5) would have improved scalar ALUs but not the faster raytracing blocks and geometry engine, and is designed for desktop APUs only.



Then in July these plans to vertically stack multiple GCX dies were scrapped, so each GCD would become a monolithic (and scaled-down) die, with just 120 CUs (7680 shader ALUs) for Navi 41 and 80 CUs (5120 shader ALUs) for Navi 42; performance targets were lowered as well. And suddenly in August rumour had it that RDNA4 suffers from some unspecified 'architectural problems' and wouldn't meet performance targets again, so the high-end Navi 41 / 42 chips with 2 and 3 GCX chiplets respectively are cancelled entirely even before tapeout, and only Navi 43 with 48 CUs (3072 shader ALUs) and Navi 44 (32 CUs) will be released. That's according to RedGamingTech.


FYI AMD already successfully packaged several graphics chiplets for CDNA2/Aldebaran (Instinct MI250/MI250X) which integrates two GCD dies for a total of 220 CUs (14080 compute ALUs) in a monstrous 1540 mm2 package that comes with 128 GB HBM2 memory - see the AMD CDNA2 Architecture White Paper https://www.amd.com/en/technologies/cdna2 (PDF direct link)

MI300 (CDNA3) goes even further by integrating several GCD dies with Zen4 CPU dies and 128 or 96 GB of HBM3 memory, and comes in several variants with 0, 152, 228 and 304 CUs - so one GCD has 76 CUs (4864 compute ALUs). And MI400 has already been mentioned during the recent earnings call.

There are no separate MCD chiplets in CDNA2 though; each GCD has 32 "slices" hosting the memory controller/cache. CDNA3 memory controller details have not been revealed yet.
 
Makes no sense to me with their expected move into multi chip modules for GPUs unless they canceled these plans.

Seems they did: https://forum.beyond3d.com/threads/amd-execution-thread-2023.63186/page-9#post-2308957

To sum up RTG's strategy:

*they don't want to build huge-die GPUs (à la Fiji)
*they don't want their GPUs to become too complex (more GCD chiplets, interposers and expensive interconnects)
*they don't need to compete directly with every Nvidia GPU
*they can build faster GPUs, but they don't want to
*they can fix RDNA3, but they won't bother
*they only target a max. $999 price as acceptable to gamers
*they don't care about dGPU market share because they have the consoles

That pretty much sums up what RedGamingTech and Moore's Law is Dead were saying, if you believe them.

Maybe RDNA4 won't have high end because it will do just that - put high end performance into mid range segment?
If Navi 43 remains at 48 CUs (3072 ALUs), then it should be around an RX 7900. Navi 23 / Navi 33 only have 32 CUs (2048 ALUs) though, and a 32 CU part would be similar to an RX 7800. Unless they also double the FP32 blocks again with 4-issue ALUs - but that's not the same as doubling the number of CUs (WGPs).


BTW there were leaks on a possible RX 7950 XT / XTX SKU using the same 'gfx1100' arch as Navi 31, with no details besides product names; and the old rumor had two GCD dies in some Navi 3x variants.

Could AMD revive this idea, package two partially disabled Navi 31 GCD dies into a 144 to 160 CU chip using the existing fan-out interposer connectivity, and tune the frequencies down to stay within a reasonable TDP? This configuration could achieve 100 TFLOPS and 1200 GT/s.

Or they could scale down existing RDNA3 GCDs to 4 nm or 3 nm for increased frequencies, maybe add more WGPs for a total of 120-128 CUs (7680 or 8192 shader ALUs), and call it Navi 31 XL / RX 7950 XTX.

Or take PS5 Pro APU with its improved raytracing hardware (BVH from RDNA4), redesign the graphics part as a separate GCD and call it Navi 41 / RX 8900.


Anyway I can hardly believe AMD would go on for 4 years without a high-end GPU, when they have plenty of time to either redesign RDNA4 or upgrade RDNA3.
 
AMD is slowly becoming irrelevant
Slowly? They went from 50% to 10% market share in just 17 years...

 
Until some specific details of what exactly could go wrong are leaked, I find it hard to believe that AMD would just skip Navi 41/42 altogether and effectively give up mid-range and high-end market for the entirety of 2024-2027.
It's possible that they weren't able to beat their own previous high end parts - without blowing up their costs at least. This is already somewhat true for the upcoming N32, and the stagnation of perf/price of lower pricing tiers was always bound to expand into the higher ones due to production realities.

If true, they would have to start from the ground up again with maybe 5% market share.
AMD's high end is so small in market share that them leaving it won't have much effect on their overall market share.
 
That's what should be said, yes. But it's not.
If you want efficiency, you should see this: a 4060 desktop GPU beating a mobile RDNA3 GPU in efficiency by a factor of two.

* You should not buy any 8 GB GPU at all anymore, because that's no longer enough for gaming.
And you think APUs will give you any more than 8GB of VRAM?

* CP2077 lists minimum specs for PT above a 4060.
Why are you even considering PT? APUs won't get you anywhere near RT performance. Especially AMD APUs. You are locked out of the entire ray tracing market, Lumen is not for you either, whether in software or hardware.

It's more expensive than a x86 CPU, but less expensive than CPU+GPU+RAM.
I don't believe it is; it's probably more expensive than all of those combined.
 
AMD's high end is so small in market share that them leaving it won't have much effect on their overall market share.
I did mean total dGPU market share, not just high-end.

Top-end gaming parts are kind of halo products that drive the sales of lower segments. The Intel Arc 700 series is quite good yet doesn't sell much with no high-end counterparts. That perception stems from the early days of 3D graphics on the PC, when each company only had a top-end GPU and maybe one reduced-performance model.

AMD has already fallen to 10% in 2022Q3 sales figures, and these are mostly sales of mainstream parts, so 5% by 2027 could even be overly optimistic if they only have low-end dGPUs on offer.

It's possible that they weren't able to beat their own previous high end parts - without blowing up their costs at least.
That's what these leaks are about. The question is why they wouldn't reach their performance/watt targets.

The gate shrinking part of the equation is quite straightforward, and so are fast wide memory buses on a silicon interposer. They've made it all before.

Vertical stacking of GCD dies is new and this could be problematic, and more so is trying to rework existing 3D stacked designs into a larger monolithic die with a different interposer design.

The leak assumes that the silicon interposer contained some display controller logic, therefore it probably also connects GCDs with 3D-stacked MCD dies, while Navi 31 uses a high-density fan-out interposer on an organic substrate.
 