AMD RDNA3 Specifications Discussion Thread

The Deck 2 timeline probably implies the Strix premium FF swimlane part and not %redacted%.

Being significantly more powerful isn't the top priority for "Deck 2". A better screen is on the list; beyond that there's stuff like better battery life, a slimmer and lighter total package, higher base-level storage, and any number of other things that could be on the wishlist as well. Raising the base model price is fine, so more can be afforded elsewhere.

Combined with launching on time and actual availability, it would make for a compelling sales pitch.

Anyway, the 7600 should be a compelling package. $349-ish for a 50%(+) performance bump over the 6600 XT should be great for sales. The return of the better-than-console-settings "cheap" card!
 
Being significantly more powerful isn't the top priority for "Deck 2".
It is. VGH is fairly anemic, especially CPU-wise, by modern standards.
A better screen is on the list
You need a better SoC to drive a better, higher-res screen.
better battery life
You. Need. A. Better. SoC. To. Get. Better. Battery. Life.
and lighter total package
Why yes, you can make it slimmer by using a more performant, lower-power SoC.
Anyway, the 7600 should be a compelling package. $349-ish for a 50%(+) performance bump over the 6600 XT should be great for sales.
People aren't gonna buy Radeon until they iron out their messaging.
 
It is. VGH is fairly anemic, especially CPU-wise, by modern standards.

You need a better SoC to drive a better, higher-res screen.

You. Need. A. Better. SoC. To. Get. Better. Battery. Life.

Why yes, you can make it slimmer by using a more performant, lower-power SoC.

People aren't gonna buy Radeon until they iron out their messaging.

I was thinking about better GPU performance, but sure, the CPU is a better upgrade target.

And a better SoC is a given; some Zen4/RDNA3+ mobile part is fine. That's the boring part though, of course it'll be an upgrade.

The interesting part is what they want out of the better SoC. A 50%+ performance improvement just isn't a priority, so waiting for RDNA4 isn't either.

They know the Deck is a bit big, they know the battery life is an issue, and a better screen can just be an OLED (which is cheap enough these days).

A bump to, say, 900p might be on the cards, but 720p is still an option.

Point is, if they get a 25% increase in GPU power, that'd be more than adequate given all the other concerns.
 
Point is, if they get a 25% increase in GPU power, that'd be more than adequate given all the other concerns.
I think that's the bare minimum. Next-gen games won't do so well with that.
Currently 1.6 TF is really meh. Some high-end mobile SoCs have 2 already.
And it would look bad if a new Switch is actually more powerful as well.
So I'd hope for >3 TF at least.
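(For reference, the 1.6 TF figure is just the Deck's 8 RDNA2 CUs at their 1.6 GHz peak: 8 CUs × 64 lanes × 2 FLOPs/cycle × 1.6 GHz ≈ 1.64 TFLOPS. So getting to >3 TF means roughly doubling CU count, clocks, or some mix of the two within the same power budget.)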

People aren't gonna buy Radeon until they iron out their messaging.
No need to invest in marketing. NV pricing does it better and for free.
 
Do you have specific examples in mind of useful API extensions? The DXR interface is essentially "build an acceleration structure with this bag of triangles". It does not mandate that the structure is a BVH or anything else. This gives maximum flexibility to the IHV and more room to innovate rapidly on the hardware side. The downside is that it's completely opaque to developers.
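(To make that concrete, here's a minimal sketch of a DXR bottom-level build in D3D12. Resource creation and error handling are elided; vb, vertexStride, vertexCount, scratch, blas, device5 and cmdList4 are assumed to already exist. A bag of triangles goes in, a driver-defined blob comes out, and the API gives you no way to look inside it.)

// Describe the "bag of triangles".
D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
geom.Triangles.VertexBuffer = { vb->GetGPUVirtualAddress(), vertexStride };
geom.Triangles.VertexCount = vertexCount;

D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
inputs.NumDescs = 1;
inputs.pGeometryDescs = &geom;

// The driver reports how big its opaque result will be -- and that's
// about all it tells you.
D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO info = {};
device5->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &info);

// Build into an opaque buffer; whether it's a BVH, what its nodes look
// like, how it's laid out -- all of that is the IHV's business.
D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
build.Inputs = inputs;
build.ScratchAccelerationStructureData = scratch->GetGPUVirtualAddress();
build.DestAccelerationStructureData = blas->GetGPUVirtualAddress();
cmdList4->BuildRaytracingAccelerationStructure(&build, 0, nullptr);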
How about Microsoft starts giving us an API to let us generate reusable chunks of the acceleration structure for streaming? That would be a decent starting point ...
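(For what it's worth, the closest thing DXR offers today is acceleration structure serialization, and it's meant for tools and driver-matched caching rather than content streaming: the serialized blob is only valid on a matching driver/GPU, so you can't ship reusable chunks with a game. A sketch, reusing the assumed variables from above plus a hypothetical serializedBuf:)

// GPU-side copy of the opaque AS into a serialized blob...
cmdList4->CopyRaytracingAccelerationStructure(
    serializedBuf->GetGPUVirtualAddress(),  // dest
    blas->GetGPUVirtualAddress(),           // source (opaque AS)
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_COPY_MODE_SERIALIZE);

// ...and back again -- but only on the same driver/GPU that produced it.
cmdList4->CopyRaytracingAccelerationStructure(
    blas->GetGPUVirtualAddress(),
    serializedBuf->GetGPUVirtualAddress(),
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_COPY_MODE_DESERIALIZE);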
We've had this debate before. In order to provide developers with more access, Microsoft would have to dictate the data structure that all IHVs must use. This is not a guaranteed win. Just as DirectX evolved over time to gradually expose more flexibility, so will DXR. History suggests that giving developers free rein from day one is probably not a good idea anyway. DX12 is clear evidence of that.
Maybe instead of accusing AMD of participating in bad faith, others here should demand that a certain IHV stop dragging their feet on making a working solution in this area. If Nvidia are so concerned about developers abusing APIs in ways they don't want, then maybe Microsoft should start unilaterally giving either AMD or Intel an API advantage for once instead, so that Nvidia is actually motivated to implement a real solution or lose out ...

By letting any one IHV be complacent, the industry loses altogether, since developers have unmet needs/requirements and abandon or pare back usage of HW features from IHVs. Both AMD and Intel had to implement sub-par abstractions (RTPSO and inline RT, respectively) on their HW, so on principle, for the industry to advance once more, it should be Nvidia's turn to be put in a hard spot by Microsoft and forced to implement traversal shaders or acceleration structure streaming ...

The industry isn't going to advance in certain respects with one player constantly dominating the playing field. And whenever the next cycle in graphics technology does come, AMD might conclude that they're better off undermining the concept of ray tracing by not implementing any HW design improvements for the next-generation console platform, if Microsoft keeps thinking that Nvidia's way is the only way and keeps giving them the power to force everyone into a deadlock until they have it their way ...
 
How about Microsoft starts giving us an API to let us generate reusable chunks of the acceleration structure for streaming? That would be a decent starting point ...
Yes.
But imagine my rant after seeing that now the black box can open the black box, while I still can not. lol
But yes.
 
Yes.
But imagine my rant after seeing that now the black box can open the black box, while I still can not. lol
But yes.
What do you believe are the biggest roadblocks to getting you most of what you want from the API?
 
I think that's the bare minimum. Next-gen games won't do so well with that.
Currently 1.6 TF is really meh. Some high-end mobile SoCs have 2 already.
And it would look bad if a new Switch is actually more powerful as well.
So I'd hope for >3 TF at least.


No need to invest in marketing. NV pricing does it better and for free.

That's certainly an admirable goal, but the minimum requirements for Stalker 2 list an RX 580. Even if AMD SoCs can match an M1 in perf/watt, an admirable and reachable goal, that would still be a good deal below those requirements at 10 W, as it'd need another ~50% fps from somewhere. Now maybe with platform-specific optimizations and low enough settings it could work.
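(Rough numbers, for scale: an RX 580 is about 6.2 TFLOPS at roughly 185 W, while an M1-class ~10 W GPU is around 2.6 TFLOPS, so even perfect perf/watt parity leaves a sizable raw-throughput gap before any platform-specific optimization.)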

So if there's enough Steam Deck platform optimization it could hit minimum requirements. But otherwise Valve wants to cut down on weight, which means not adding any more heavy and bulky batteries, as well as improving battery life in heavier games where it can be sub-2 hours, which could mean targeting a ~10 W TDP. A lower TDP would also save weight in the cooling system.

So there's just a bunch of tradeoffs they're looking at, and aiming for minimum next gen requirements while hitting everything else feels like a big stretch.
 
What do you believe are the biggest roadblocks to getting you most of what you want from the API?
The biggest roadblock is arguably a political one, since Microsoft seemingly refuses to let either AMD or Intel enjoy an API advantage at the expense of Nvidia ...
 
Well duhhhhh, NV owns pretty much the entire dGPU market at this point, so lol.
Hence why we should propose that Microsoft extract concessions out of Nvidia to get whatever they want from them, since they clearly have the power to reverse this trend. Otherwise, AMD should feel free to kill off adoption of hardware ray tracing if DXR continues to be a dead end (it's been 3 years since the last update) or keeps going in a direction they don't want ...

It's that simple: Microsoft should introduce an API on which Nvidia loses, so that the others have a chance at a temporary spotlight and can all innovate on their own solutions. There's literally nothing stopping Microsoft from making these unilateral moves, since they already took the initiative to ignore whatever AMD or Intel had to say regarding HW features ...
 
I wish it were more easily possible for developers wishing to push the industry in different directions to demonstrate to the public (in this case gamers) just how much better things could be than they currently are. I know the best thing for me is to be able to visually see a clear and distinct advantage demonstrated... as something which I could get behind and help push for.

Of course in the world of tech, things move slow, and politics get in the way of practical advancements every day... but you'd be amazed what a large base of vocal gamers can get done if they have something they can unite behind.
 
Being significantly more powerful isn't the top priority for "Deck 2". A better screen is on the list; beyond that there's stuff like better battery life, a slimmer and lighter total package, higher base-level storage, and any number of other things that could be on the wishlist as well. Raising the base model price is fine, so more can be afforded elsewhere.

Combined with launching on time and actual availability, it would make for a compelling sales pitch.

Anyway, the 7600 should be a compelling package. $349-ish for a 50%(+) performance bump over the 6600 XT should be great for sales. The return of the better-than-console-settings "cheap" card!
Deck 2 would focus on more performance at the same wattage, because there are always newer games coming out and at some point they will run poorly on the Deck 1. However, the extra power should allow the Deck 2 to play the games the Deck 1 currently plays, but with better battery life.

An RDNA 4 part might actually allow them to have ray tracing turned on in a lot of games. We might see Series S performance.
 
How about Microsoft starts giving us an API to let us generate reusable chunks of the acceleration structure for streaming? That would be a decent starting point ...

Maybe instead of accusing AMD of participating in bad faith, others here should demand that a certain IHV stop dragging their feet on making a working solution in this area. If Nvidia are so concerned about developers abusing APIs in ways they don't want, then maybe Microsoft should start unilaterally giving either AMD or Intel an API advantage for once instead, so that Nvidia is actually motivated to implement a real solution or lose out ...

By letting any one IHV be complacent, the industry loses altogether, since developers have unmet needs/requirements and abandon or pare back usage of HW features from IHVs. Both AMD and Intel had to implement sub-par abstractions (RTPSO and inline RT, respectively) on their HW, so on principle, for the industry to advance once more, it should be Nvidia's turn to be put in a hard spot by Microsoft and forced to implement traversal shaders or acceleration structure streaming ...

The industry isn't going to advance in certain respects with one player constantly dominating the playing field. And whenever the next cycle in graphics technology does come, AMD might conclude that they're better off undermining the concept of ray tracing by not implementing any HW design improvements for the next-generation console platform, if Microsoft keeps thinking that Nvidia's way is the only way and keeps giving them the power to force everyone into a deadlock until they have it their way ...
Hence why we should propose that Microsoft extract concessions out of Nvidia to get whatever they want from them, since they clearly have the power to reverse this trend. Otherwise, AMD should feel free to kill off adoption of hardware ray tracing if DXR continues to be a dead end (it's been 3 years since the last update) or keeps going in a direction they don't want ...

It's that simple: Microsoft should introduce an API on which Nvidia loses, so that the others have a chance at a temporary spotlight and can all innovate on their own solutions. There's literally nothing stopping Microsoft from making these unilateral moves, since they already took the initiative to ignore whatever AMD or Intel had to say regarding HW features ...

If AMD has a better vision for RT that is being held back by DXR, shouldn't we have Vulkan demos showing us what we're missing? Or AMD would have their own superior version of OptiX by now. It seems you have some insight into IHV politics, but you're asking us to simply take your word that AMD knows better. Sorry, but it just sounds like back-seat driving unless you can provide evidence of this superior API that should have launched in 2018 instead of DXR.

Maybe we can start with the specific things AMD and Intel asked for that Microsoft ignored?
 
Yes, it is needed. But your comparison with Lumen's SDFs for GI doesn't fit, as for GI geometric or material precision is neither needed nor helpful.
Shadows are the best example. Say we want area-light shadows of a tree. This means soft shadows on the ground and sharp self-shadows on the tree. Shadow maps can't do this well, but RT can.
Though, if we use proxy geometry for the tree that differs from the visual mesh, we get self-shadowing artifacts and peter panning; exactly the problems RT promises to finally solve.
Thus, shadow maps will remain the only robust solution for Nanite models. RT can be used only for GI or reflections (to some degree), where the error is acceptable.
(Those are my personal conclusions and assumptions; maybe somebody can confirm or correct me.)
UE5's shadowing solution makes all of this very much irrelevant though. So again, a problem which doesn't really exist in practice.
Also, the very same issues are there in other games which use RT shadows right now without any Nanite, so it's not a Nanite + RT h/w issue either.

But current RT does not work for resolution-adaptive triangle meshes, which are the only option to achieve fine-grained LOD using triangles.
It works well enough to produce better visual results than a s/w path.

Which one? Just don't say Avatar again, which won't require RT. So which game will be the first with RT in its minimum specs? (Just out of interest.)
Was there a change to Avatar's requirements?

If you say Lumen sucks, that makes two of us.
I'm saying that the oh-so-wonderful s/w Lumen, which we apparently should "accelerate" by exploding GPU die sizes (and costs) by factors of 2-3X, doesn't in fact do any better at any RT task than h/w Lumen does.
This whole argument started with the idiotic idea (which just can't die, for reasons unknown) that RT h/w isn't needed and we should just use moar FLOPs to do the same thing.

Misunderstanding? The first UE5 demo (Land of Nanite?) used heavy kitbashing. They mentioned up to 20 layers of models, like onion skin. This can't be optimized away for a shipping game, because all those 20 models are visible somewhere.
It can, if the whole terrain is generated differently. Nobody does terrain in games this way.

To support this with HW RT, the only current option would be to rebuild the BVH from scratch every frame, for each instance of every model in the whole scene.
It's hardly the only current option, as is already illustrated by the use of proxy meshes, which provide a good-enough base for RT in this case.
You can also keep these meshes at the same detail level as the maximum Nanite mesh and just trace against that. This way you'll have higher-detail RT data than the on-screen visual Nanite data.
Hey, you can even try to use Lovelace micromaps for this.
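(Worth noting DXR also has a middle ground between a static BVH and a full per-frame rebuild: refitting. A sketch, reusing the assumed inputs/build/blas/cmdList4 variables from earlier in the thread. A refit keeps the BVH topology and only updates node bounds, which is why it's cheap but can't absorb Nanite-style per-cluster LOD changes, where the triangle count itself changes:)

// Build once, opting in to later updates.
inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE;

// On later frames: refit in place instead of rebuilding from scratch.
build.Inputs.Flags |= D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
build.SourceAccelerationStructureData = blas->GetGPUVirtualAddress();
build.DestAccelerationStructureData   = blas->GetGPUVirtualAddress(); // in-place
cmdList4->BuildRaytracingAccelerationStructure(&build, 0, nullptr);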

No. Because PC holds back consoles just as much as consoles hold back PC. It isn't worth making specific solutions for just one platform.
PC has never been holding back consoles in any way, shape, or form, and there are a crapload of solutions developed only and specifically for console platforms.
If some console h/w would benefit from such a solution in either quality or performance, it would already have been used there. It hasn't been, because they don't.
 