AMD Execution Thread [2023]

Status
Not open for further replies.
You can't argue with them. There is no argument: whatever their beloved Nvidia says the price/tier is, there is no questioning it. That's literally their argument - Nvidia says the price is this, therefore that's the segment. Crypto cash-grab products shifted the tiers up immensely. That's how we ended up with a 3080 12GB at $1200 MSRP, and why we started the next generation with a $1200 4080.
 
WTF does "beloved Nvidia" have to do with discussing the segmentation of upcoming AMD GPUs?

What you can't do is argue with AMD fanboys apparently since anything which they don't like is because of Nvidia, somehow.
 
Yes.
Because the segment is decided by the price, not a name or performance or what you would like it to be.
It would fail, of course, because there are much better and faster GPUs in the same segment right now - which is coincidentally why there is no 750Ti at $1000 under any name.


Because I am reasonable, and getting mighty tired of your personal insults which appear every time you're incapable of arguing your point with anything else.
Incapable of arguing my point? You literally responded to the part specifically where I make my argument. So what are you even talking about?

And the fact that you'd honestly call a $1000 750Ti a 'high end GPU' really demonstrates how weak the argument is that we can only 'range' GPUs by price and nothing else. Accepting this sort of reasoning is exactly how Nvidia is exploiting consumers: by selling lower-end cards at higher prices. And AMD is probably very happy with this situation, as it lets them tag along with similar, albeit less extreme, exploitation.

It's especially annoying as the main point of the chiplet strategy for N32 and N31 was reduced cost. N32 is not an 'enthusiast-class/high-end' GPU, no matter how much AMD might try to brand it like that. Consumers can and should be allowed to see through the BS and call it out.
 
MSRPs did not change. This is the market positioning from the manufacturer.

The manufacturer's prices did change, more so for enthusiast-level cards. Nvidia hiked their prices during the 2018-2019 release cycle, mostly due to a lack of good competition from AMD and the cryptomining boom, and AMD hiked prices during the 2020-2021 release cycle (see $550-650 for R9 Fury vs. $400-500 for RX Vega vs. $700 for VII vs. $1000-1100 for RX 6900/6950 vs. $900-1000 for RX 7900), also due to the economic situation of product shortages and the continuing cryptomining boom.

Their market segment positioning did not really change, though. Both companies have used the numbers 8-9 for enthusiast / high-end, 6-7 for mainstream / mid-range, and the rest for entry-level / low-end products within the same generation since the early 2000s, and changes to MSRP or street prices do not move these products to another category - so the GeForce PCX 5950 remains a high-end card from 2004, and the Radeon X1950 XTX remains a high-end card from 2006.
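Taking midpoints of the ranges listed above (an assumption for illustration, not quoted figures), the flagship MSRP trend can be tabulated in a few lines:

```python
# Flagship Radeon MSRPs quoted above; where a range was given,
# the midpoint is assumed purely for illustration.
flagships = [
    ("R9 Fury", (550 + 650) / 2),
    ("RX Vega", (400 + 500) / 2),
    ("Radeon VII", 700),
    ("RX 6900/6950", (1000 + 1100) / 2),
    ("RX 7900", (900 + 1000) / 2),
]

prev = None
for name, price in flagships:
    # Show the generation-over-generation change in percent.
    note = "" if prev is None else f" ({100 * (price - prev) / prev:+.0f}% vs previous)"
    print(f"{name}: ${price:.0f}{note}")
    prev = price
```

Even with the RX 7900 dip at the end, the series roughly jumps from the ~$450-700 band of the Vega/VII era to ~$950-1050 for RDNA2/3.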
 

New AMD Pro cards illustrating the point above:

[Slide: AMD Radeon PRO W7600 / W7500 deck - price ranges by market segment]


Note the price ranges assigned to segments.

Incapable of arguing my point? You literally responded to the part specifically where I make my argument. So what are you even talking about?

And the fact that you'd honestly call a $1000 750Ti a 'high end GPU' really demonstrates how weak the argument is that we can only 'range' GPUs by price and nothing else. Accepting this sort of reasoning is exactly how Nvidia is exploiting consumers: by selling lower-end cards at higher prices. And AMD is probably very happy with this situation, as it lets them tag along with similar, albeit less extreme, exploitation.

It's especially annoying as the main point of the chiplet strategy for N32 and N31 was reduced cost. N32 is not an 'enthusiast-class/high-end' GPU, no matter how much AMD might try to brand it like that. Consumers can and should be allowed to see through the BS and call it out.
So your point is that a card is definitely "mid-range" because you want it to be mid-range. Am I getting it right?

Anyway, the original claim was that upcoming 7800 and 7700 aren't "enthusiast" but "mid-range".
In gaming GPUs "mid-range" segment is up to ~400 USD.
So the answer to whether they will be "mid-range" or not depends on if they'll get <$400 MSRPs from AMD.
As easy as that.
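As a sketch, the price-defines-segment rule argued here really is just a threshold lookup. Only the ~$400 mid-range ceiling comes from the post; the other boundaries are illustrative assumptions:

```python
def segment(msrp_usd: float) -> str:
    """Map a launch MSRP to a market segment.

    Only the ~$400 mid-range ceiling is taken from the post above;
    the $200 and $800 boundaries are assumed for illustration.
    """
    if msrp_usd < 200:
        return "entry-level"
    if msrp_usd < 400:
        return "mid-range"
    if msrp_usd < 800:
        return "high-end"
    return "enthusiast"

# A hypothetical $450 card would land in "high-end" under this rule.
print(segment(450))
```

By this rule a $399 launch price makes a card "mid-range" and $450 makes it "high-end" - which is the whole argument compressed into one function.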
 

New AMD Pro cards illustrating the point above:

[Slide: AMD Radeon PRO W7600 / W7500 deck - price ranges by market segment]


Note the price ranges assigned to segments.


So your point is that a card is definitely "mid-range" because you want it to be mid-range. Am I getting it right?

Anyway, the original claim was that upcoming 7800 and 7700 aren't "enthusiast" but "mid-range".
In gaming GPUs "mid-range" segment is up to ~400 USD.
So the answer to whether they will be "mid-range" or not depends on if they'll get <$400 MSRPs from AMD.
As easy as that.

Which is associated with the performance of the product. Looking at that same graph, the price for high end used to be 250-350 USD, while entry level was around 50 USD and midrange filled the gap in between. Performance for those segments can't be offered at those prices anymore, so the prices have had to be adjusted so that the category still matches the performance of those products.

If price governed the market segment, there would be no midrange products anymore; everything would be high end or ultra-high/enthusiast. If price were the governing factor, it would be absolutely laughable that 950 USD is considered mid-range. 950 USD is considered mid-range because it's mid-range performance, not mid-range pricing.

It doesn't matter what NV or AMD want to have consumers believe. Many if not most consumers are seeing mid-range performance at high-end/enthusiast level pricing. That's a major part of why dGPU sales are down so much. It's likely also contributing to PS5 having the best sales for Sony since 2006-2008.

Regards,
SB
 
Which is associated with the performance of the product.
No, it's not. Performance changes with generations, but we don't keep the ranges at Polaris 10 performance and add new ones above them.
What is "mid-range" now has completely different performance from what was "mid-range" 10 years ago, and it is likely faster than what was "high end" back then.
It doesn't matter what NV or AMD want to have consumers believe.
It doesn't matter what consumers believe here. A product is selling in a price range which is defined by its performance - hence this product is from said segment.
What you "believe" this product should be selling at doesn't matter in the slightest - unless your belief is so widespread that everyone stops buying said product and the manufacturer has no option but to adjust its pricing, thus possibly moving it from its previous segment into a new one. But that belief doesn't make a product which is currently "high end" a "mid-range" one - not until it actually drops in price to become that.
 
Red Devil RX 7800 XT has been revealed by PowerColor

The new Red Devil graphics card boasts a powerful GPU equipped with 3840 Stream Processors. It operates at an impressive clock speed of 2255 MHz for gaming and 2565 MHz for boost, surpassing AMD's reference clock speeds of 2210 MHz and 2520 MHz, respectively.

Navi32 in its full "glory" - this would be the biggest LOL so far in the RDNA3 lineup: a 60CU 7800 XT not beating the 72CU 6800 XT...
 
To me, the discussion about whether it's midrange or not seems like arguing about semantics. It's more fluid in my view - it depends on performance, price, and even those branding numbers a little.

If anything, I'd define midrange according to the buying habits of customers: between midrange and enthusiast/high-end there needs to be a noticeable drop in the number of units bought. The SKUs that are not part of that drop, I'd say, are midrange.

But to touch on us getting not-fully-enabled or lower-clocked parts for, say, $500 or even $800+, which is the point I actually care about:
I don't think that should have any say in what "class" or range of product a GPU is. It's simply a consequence of the economics. Manufacturing processes are more expensive, and maybe yields are not as high. And crucially, professional, AI, HPC et al. are (far) larger markets than they used to be.
IHVs no longer need to sell the highest-end parts to gamers - or they kind of do it at a (comparative) loss.
 
It's not that unexpected a move, because AMD's dGPU market share has been shrinking for years, on top of the high cost of GPU development, skyrocketing wafer costs, and R&D and driver issues. It was only a matter of time... With the RDNA3 generation they have screwed up big time, and there's little hope for RDNA4 now. Nvidia practically has a monopoly here. What a shame for the Radeon Group.

 
More evidence that AMD expected much higher clocks from its RDNA3 parts but could not hit them?
Hmm, I was hoping N32 would go 3 GHz. It comes last, so there might have been time to do some fixes. Could make a big difference.
But PowerColor says 2.5 GHz, same as the 7900 XTX. :(
Maybe factory-overclocked models will go higher.

Though, somehow I feel neither customers nor manufacturers care much about GPUs currently. It's really odd.
 
It's not that unexpected a move, because AMD's dGPU market share has been shrinking for years, on top of the high cost of GPU development, skyrocketing wafer costs, and R&D and driver issues. It was only a matter of time... With the RDNA3 generation they have screwed up big time, and there's little hope for RDNA4 now. Nvidia practically has a monopoly here. What a shame for the Radeon Group.
It looks different if you assume the future of PC gaming is the APU. Some points speak against that, but it's the only practical future I see.
If so, NV can't contribute to the mainstream, and their monopoly will fall with time.
All they can do is serve the enthusiast dGPU niche, and rumors about the next-gen top model having a 512-bit memory bus show that's the direction they're aiming for.
Affordable and reasonable entry-level GPUs no longer make sense to them, since APUs will cover this better - and the mid-range too.
I don't really blame NV for not offering a cheap 4060 16GB either, as everybody currently does. Why should they?
I would rather blame devs for requiring 16GB, and I would also blame the press for telling people 8GB is no longer enough, because max settings are suddenly a must-have for everybody.

Only APUs can fix those problems. But then AMD and Intel are responsible for fixing the broken PC platform. Nobody else can; nobody else has the opportunity.
Their failure, or even giving up on GPU architecture, would mean the death of PC gaming. Which means we have to be careful about calling their results a failure - this way we only support the mindset of expected disappointment even more, hurting the platform.
Why is RDNA3 a failure? Because it got only 80% of the expected clocks? Because AMD does not invest heavily into RT R&D yet, which is still not ready for the mainstream anyway?
The only failure I see is that products are too expensive, but this applies to the competition just as much. It even applies to food, power, and just about everything.

Personally, I'm mostly disappointed about the slow progress towards APUs. We have consoles, we have the M1, and the component-swapping but bulky PC platform just feels outdated.
Though software is not ready either. That's probably the real problem and bottleneck.
Currently I do not want any dGPU, but I do want the Strix Halo APU. I want it. Take my money and give it to me.
But this means half the GPU power of my current Vega 56, which is already too weak to run any UE5 stuff well.
So I'll end up being disappointed as a gamer again.

Personally, I'm 100% sure an APU is powerful enough for next-gen games, but I'm not there yet with software development. And it seems the whole industry isn't either.
I expect more years of disappointment we need to get through, and I can only hope the platform survives this dark period of PC gaming. Amen :D
 
To me the discussion about whether it's midrange or not seems like arguing about semantics.
It's about the same as arguing around names and "classes" (like the "60-class GPU" - a thing which doesn't exist anywhere but in marketing constructs) - people want fast GPUs to be cheaper because... they want it. That's the extent of the reasoning.

Maybe RDNA4 won't have a high end because it will do just that - put high-end performance into the mid-range segment? Then again, Polaris wasn't exactly "high-end performance" against the competition. RDNA1 was closer to that, but it lacked key features.
 
whether they will be "mid-range" or not depends on if they'll get <$400 MSRPs from AMD.
If $400 is the mid-range crossover, then Nvidia is only making high-end cards today.

Once RX 7700 and RX 7800 are formally announced with specs and prices, I will post a chart / table for Radeon / GeForce pricing trends over the last 10 years. It should be obvious that the main factor in consumer pricing is economics of silicon wafer manufacturing above anything else.

Navi32 in its full "glory" - this would be the biggest LOL so far in the RDNA3 lineup: a 60CU 7800 XT not beating the 72CU 6800 XT...

Preliminary specs were already leaked last year, including the 60 CUs. The RX 7800 XT should be on par with or slightly faster than the RX 6900 XT (and RX 7900 GRE), if you consider the higher turbo clocks, faster memory and on-die cache, and dual-issue shader blocks (or double the shader blocks, if you prefer).
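A rough peak-FP32 comparison shows why 60 CUs against the 6800 XT's 72 isn't an automatic loss - but only if dual-issue is counted, which it rarely is in practice. This is a back-of-the-envelope sketch: the 2.52 GHz boost is the reference figure quoted above, the 6800 XT's 2250 MHz boost is its official spec, and per-CU throughput (64 FP32 lanes, 2 FLOPs per FMA) follows the usual RDNA arithmetic:

```python
# Back-of-the-envelope peak FP32 throughput in TFLOPS.
# 64 FP32 lanes per CU and 2 FLOPs per FMA is standard RDNA math;
# RDNA3 dual-issue doubles the paper peak, but games rarely see it.
def peak_tflops(cus: int, boost_ghz: float, dual_issue: bool = False) -> float:
    lanes, flops_per_fma = 64, 2
    issue = 2 if dual_issue else 1
    return cus * lanes * flops_per_fma * issue * boost_ghz / 1000

rx6800xt = peak_tflops(72, 2.25)              # ~20.7 TFLOPS
rx7800xt_paper = peak_tflops(60, 2.52, True)  # ~38.7 TFLOPS on paper
rx7800xt_single = peak_tflops(60, 2.52)       # ~19.4 TFLOPS single-issue
print(rx6800xt, rx7800xt_paper, rx7800xt_single)
```

Without the dual-issue uplift, 60 CUs at 2.52 GHz land slightly below 72 CUs at 2.25 GHz, so real-world results hinge on memory, cache, and how often dual-issue actually fires.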

As for competition with NVidia, they can introduce as many $1200 graphics card models as they want - I'm not buying one any time soon, thank you very much. Please call back when a 300-400 mm2 high-end GPU becomes the norm again.
 
The impediments to the "future is APUs" idea remain the same - bandwidth, power, and cooling. If all of those can be solved cost-effectively for an APU, then a dGPU can likely scale those solutions to higher performance levels as well.

It harkens back to a quote from Jensen during the Kepler era (I think, I'm going from memory) when asked about the threat of APUs (especially at the low and mid levels of performance), and he said something to the effect of, "we'll just build bigger and better GPUs."
 
And crucially, professional, AI, HPC et al. are (far) larger markets than they used to be.
IHVs no longer need to sell the highest-end parts to gamers - or they kind of do it at a (comparative) loss.
Yes, AMD has always been making smaller GPU dies and thus setting lower prices. But once Nvidia caught on to the demand for AI computing, they designed really big dies for their datacenter products, and so these GPUs also ended up in high-end cards - and you can't just make a reasonably priced consumer card with a 600 mm2 GPU. That's why the trend has been going up much faster for high-end cards.

It's also the reason why AMD leveraged their CPU expertise to make big stacked APU dies, and that's where their chiplet strategy pays off.

It looks different if you assume the future of PC gaming is APU.
It's quite interesting to see how Instinct MI300 will influence the consumer market.

On small servers and high-end desktop (HEDT), it could start a trend of big universal GPU / CPU sockets or slots, where you can install either several integrated APUs or a CPU and GPU on separate daughter cards or LGA packages.

Or it could accelerate adoption of quad-channel memory, PCIe 6.0, and CXL 3.0 on mainstream desktop motherboards, CPUs and GPUs, to provide uniform cache-coherent access to graphics card memory and main memory over industry-standard fabric interfaces - the same technology as in datacenter APUs, at a much lower cost but also with significantly smaller transfer bandwidth compared to stacked HBM.
 
If $400 is the mid-range crossover, then Nvidia is only making high-end cards today.
The 4060 is $300. They also make a bunch of older stuff like the 1050 and 3050. So no.
What can be said though, and has been a thing for years now, is that everything below high-end is stagnating in perf/price.

It should be obvious that the main factor in consumer pricing is economics of silicon wafer manufacturing above anything else.
Well, yeah, nobody says otherwise.

AMD has always been making smaller GPU dies and thus setting lower prices
That I don't agree with.
AMD's h/w has been trailing Nv's in perf/transistor since Kepler/Maxwell, so they had to compensate by selling bigger dies at similar price points, not smaller ones.
It is also a bit moot whether some dies being smaller means lower production costs, as the two tend to use different production processes. When was the last time both used the same process for competing lineups? Pascal vs Polaris/Vega?

but once Nvidia caught on to the demand for AI computing, they designed really big dies for their datacenter products, and so these GPUs also ended up in high-end cards, and you can't just make a reasonably priced consumer card with a 600 mm2 GPU
Sure you can. Both did it in the past, and GPUs like GM200 certainly weren't made for DC or AI.
Also it's not like Nv is making only 600 mm^2 GPUs right now.

That's why the trend has been going up much faster for high-end cards.
It's been going up much faster for the high end because the high end doesn't have an upper pricing ceiling.
Meaning that you can make a fully enabled 600 mm^2 GPU on N4 and sell it to gamers for some $2500.
You can't do that in the mid-range and low-end because current production realities won't allow you to fit such a product into the respective pricing segments.
Thus we have perf/price stagnation in these segments while the high end provides good enough perf/price jumps - but at the cost of much higher prices.
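The point about pricing segments can be illustrated with a standard dies-per-wafer approximation plus a simple Poisson yield model. The $15,000 wafer price and 0.1 defects/cm2 below are assumptions for illustration only, not quoted figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    # Common approximation: gross dies minus an edge-loss term.
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float,
                      defects_per_cm2: float = 0.1) -> float:
    # Poisson yield: the fraction of dies expected to have zero defects.
    yield_rate = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Assumed $15,000 leading-edge wafer, small / mid / big dies in mm^2.
for area in (100, 300, 600):
    print(f"{area} mm^2 die: ~${cost_per_good_die(area, 15000):.0f} per good die")
```

Cost per good die grows faster than linearly with area (fewer candidates per wafer and worse yield), which is why a big die simply cannot be pushed down into mid-range price brackets.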
 