AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Now that RDNA has been officially announced by AMD, people are saying the PS5 GPU is based on RDNA 2. I have some reason to doubt that. Here is a tweet by an ex-Principal Software Engineer for PS5:

AMD's RDNA2 GPU architecture confirmed. Will it mean multi-bounce ray tracing in native 4K at 60fps? with transparency? At what distance from the frustum, and angle/clip percentage in the viewport, will VRS apply? Very curious to see how directly these questions get addressed.

If PS5 uses RDNA 2, he wouldn't be asking these questions about RDNA 2, because he would know the answers.
 
If PS5 uses RDNA 2, he wouldn't be asking these questions about RDNA 2, because he would know the answers.

Hard to parse anything definitive from that convo, but it would be very interesting if there isn't feature parity between the two big consoles. War is afoot.
 
I dunno. The way Matt Hargett's tweet is worded, it seems like he's asking "Will the inclusion of RDNA2 mean x, y, and z, in terms of performance?" rather than "We both have RDNA2, but how does the performance of x, y, and z compare?"
 
Most of the focus, I suspect, will be around graphics (a largely solved case for AI), but there are other uses that we're getting increasingly better at: audio processing, natural language processing, interpolation, etc.

The puppy AI demo is at around the 43-minute mark:
- no animation for the dog
- completely physics based
- trained to walk, etc.
- all the fumbling is a result of training artifacts

50 dogs at around 43-45 minutes: colliding with each other, bodies interacting, dogs falling over, etc.
Fairly intensive, I suspect. Without tensor cores, that would be painful to keep the frame rate up alongside AAA graphics (a rough sketch of where the per-frame cost comes from is below).
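
For a rough sense of where that per-frame cost comes from, here's a minimal sketch in plain NumPy. Everything in it is my own assumption for illustration: the layer sizes, the observation/action dimensions, and the idea that each dog runs a small MLP policy every simulation step; the actual demo's network isn't public as far as I know.

```python
# Minimal sketch (assumed numbers): each physics-driven dog runs a small MLP
# policy forward pass every simulation step, so per-frame cost scales with
# agent count and network size -- the kind of dense math tensor hardware helps with.
import numpy as np

OBS_DIM, HIDDEN, ACT_DIM = 64, 128, 16   # hypothetical observation/action sizes
N_DOGS = 50                              # the 50-dog scene from the demo

rng = np.random.default_rng(0)
W1 = rng.standard_normal((OBS_DIM, HIDDEN)) * 0.1   # stand-in weights, not trained
W2 = rng.standard_normal((HIDDEN, ACT_DIM)) * 0.1

def step(all_obs: np.ndarray) -> np.ndarray:
    """Per-frame work: one policy evaluation per dog, batched into one matmul."""
    return np.tanh(np.tanh(all_obs @ W1) @ W2)       # shape (N_DOGS, ACT_DIM)

obs = rng.standard_normal((N_DOGS, OBS_DIM))         # fake joint angles, contacts, etc.
actions = step(obs)                                  # joint targets fed to the physics sim

# Rough FLOP count per frame for the policies alone (the matmuls dominate).
flops = N_DOGS * 2 * (OBS_DIM * HIDDEN + HIDDEN * ACT_DIM)
print(actions.shape, f"~{flops / 1e6:.1f} MFLOPs per frame")
```

Even this toy version is only around 1 MFLOP per frame, so the expensive part in a real game would be bigger networks, richer observations, and doing it alongside everything else the GPU and CPU are already busy with.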

The other demo is about super resolution.

Another example: AI agents made in Unity.

Check out Wobbledogs on TIGSource. The game does just that with no tensor cores. I'd wait until a significant portion of devs are actually using ML AI in their games before singling out a portion of our precious silicon to accelerate that kind of algorithm specifically. When that happens, it might even turn out that tensor cores aren't the best tool for the job.
 
If PS5 uses RDNA 2, he wouldn't be asking these questions about RDNA 2, because he would know the answers.


Go read the Console threads around the time he tweeted that to see the painfully extensive discussions around it.
 
Now that RDNA has been officially announced by AMD, people are saying the PS5 GPU is based on RDNA 2. I have some reason to doubt that. Here is a tweet by an ex-Principal Software Engineer for PS5:

AMD's RDNA2 GPU architecture confirmed. Will it mean multi-bounce ray tracing in native 4K at 60fps? with transparency? At what distance from the frustum, and angle/clip percentage in the viewport, will VRS apply? Very curious to see how directly these questions get addressed.

If PS5 uses RDNA 2, he wouldn't be asking these questions about RDNA 2, because he would know the answers.

He talks about performance, and if you look at the discussion later, I think he also talks about API efficiency. And if that is the case, it means you can solve it.

 
Is there additional information on AMD's agreement with Samsung, or is this another product? It wasn't described as incorporating a future main-line product, and it sounded like Samsung would be working on the IP in ways that were distinct from semi-custom, which has a more well-known history of cross-pollination with mainline.

AFAIK it's Samsung building their own derivative specifically for stuff they make, so Android phones and tablets. The agreement is definitely not semi-custom, but looks more along the lines of what AMD did when sublicensing Zen for the Hygon Dhyana: https://www.anandtech.com/show/15493/hygon-dhyana-reviewed-chinese-x86-cpus-amd

Though unlike that deal, I imagine the basic layout won't be crimped and there won't be any need for weird financial shells and such. Not to mention their own takes on the arch: I imagine we'll be seeing custom 2-12 CU GPUs for super-low TDPs, not something AMD even has as of right now, so presumably something they'll be doing themselves.
 
Is there additional information on AMD's agreement with Samsung, or is this another product? It wasn't described as incorporating a future main-line product, and it sounded like Samsung would be working on the IP in ways that were distinct from semi-custom, which has a more well-known history of cross-pollination with mainline.
They're licensing the IP, and "adapting it for low-power usage" is the official description. All that is known is that Samsung is putting forth a lot of the engineering effort themselves (they're very obviously synthesising it themselves, that's a given). The only big question that remains unanswered is the scope of the RTL changes; as far as I understand, that *is* taking place in some form, though in coordination with AMD.
 
AMD was aiming at higher frequency & higher "IPC".

Navi 10 is a pretty small die (251 mm²), and if you made a chip the size of Vega 20 (331 mm²) on the RDNA 2 architecture, you'd have essentially a 2080 Ti's minimum frame rates while drawing far less power than the Radeon VII ("about 150% less", i.e. the VII drawing roughly 2.5× as much).
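
Quick back-of-the-envelope for that claim, all my own arithmetic: the area ratio is just Vega 20 over Navi 10, and I'm reading "150% less power" as "the Radeon VII draws about 2.5× as much as the hypothetical RDNA 2 part"; the 300 W figure is the Radeon VII's typical board power.

```python
# Back-of-the-envelope for the scaling claim above (my own interpretation, not AMD figures).
navi10_mm2 = 251.0                     # Navi 10 die area
vega20_mm2 = 331.0                     # Vega 20 die area
area_ratio = vega20_mm2 / navi10_mm2   # ~1.32x the silicon budget for extra CUs

radeon_vii_tbp_w = 300.0               # Radeon VII typical board power
# "150% less power" read as: Radeon VII draws 2.5x the hypothetical part's power.
hypothetical_tbp_w = radeon_vii_tbp_w / 2.5

print(f"area ratio: {area_ratio:.2f}x")                      # -> 1.32x
print(f"implied board power: ~{hypothetical_tbp_w:.0f} W "
      f"vs {radeon_vii_tbp_w:.0f} W for the Radeon VII")     # -> ~120 W vs 300 W
```

Whether a ~32% bigger die plus architectural and clock gains actually lands at 2080 Ti minimum frame rates is, of course, the speculative part.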
 