Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Status
Not open for further replies.
Navi is now confirmed to be GCN.
https://www.resetera.com/threads/ne...a-dont-want-none.112607/page-54#post-20150101
So the 64 CU limit should more or less still be present. Now looking at that new PS5 devkit leak with a GPU clock of 1850 MHz, that would give us 14.2 TF assuming 60 CUs, which matches previous leaks of 14.2 TF. And with the low TDP AdoredTV reported, "in the range of 120-180W", the lowest I can predict at that clock is 11.4 TF with 48 CUs. More likely 12.3 TF at 52 CUs, all said and done. Sue me.
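Those TF figures follow from the standard GCN peak-throughput formula: 64 ALUs per CU, each doing one fused multiply-add (2 FLOPs) per clock. A minimal sketch, plugging in the leaked 1850 MHz clock and the CU counts discussed above:

```python
def gcn_tflops(cus: int, clock_ghz: float, alus_per_cu: int = 64) -> float:
    """Peak FP32 throughput for a GCN-style GPU.

    Each ALU retires one FMA (2 FLOPs) per clock, and every
    GCN compute unit carries 64 ALUs.
    """
    return cus * alus_per_cu * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

# The scenarios discussed above, all at the leaked 1850 MHz:
for cus in (48, 52, 60):
    print(f"{cus} CUs @ 1.85 GHz -> {gcn_tflops(cus, 1.85):.1f} TF")
# -> 11.4 TF, 12.3 TF, 14.2 TF
```

The 60 CU case lands exactly on the rumored 14.2 TF, which is presumably how that leak was reverse-engineered in the first place.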
 
With a CU limit on Navi, doesn't that also firm up Anaconda having a secondary graphics chip, either an RT assist chip or a dedicated GPU alongside the SoC? How can they achieve 'most powerful' otherwise?
 
Use more CUs. If they use 48 CUs, use 56; if they use 56 CUs, they could use 64. I highly doubt Sony will use more than 56 CUs, BTW. Seems to me 56 is the sweet spot with GCN.

And overclock the thing, of course. And 1850 MHz is just a leak; that seems way too high.
 
Doesn't that Gonzalo APU leak say the graphics part is Navi Lite, and wasn't there a leak somewhere saying Navi Lite has 44 CUs?
 

I was, perhaps mistakenly, assuming that Sony would go for the max reasonable CU count against that 64 CU cap. If they go for 44, then agreed, enough scope for MS to win the TF wars. :)

If Sony go high though, that doesn't leave MS anywhere to go with an SoC.


I thought it was still ambiguous as to whether Gonzalo was PS5?
 

For sure, but it's either the next PlayStation or the next Xbox; what else could it be?

Maybe higher clocks and a smaller chip let them add more unique features, like ray tracing hardware and the 3D audio chip?
 

IIRC from rumored PC specs: Navi-10 at 32 CUs max (budget/low-end card), Navi-12 at 40 CUs max (midrange card), and Navi-20 at 64 CUs max (high-end). Lastly, Navi-11 is reportedly game-console oriented and has 38 CUs max. Mind you, these were just rumors throughout 2018.

If these rumors are true, don't expect Navi-20 in any of these machines. If anything, PS5/XB-Next are looking at Navi-11 or an unlisted series with something like 48-52 CUs max at best.
 

Why should the limit of 64 CUs be present? Not saying it isn't, just asking why we should assume that.
AMD GCN has a 4 compute engine limit, with 16 CUs per CE, resulting in the 64 CU limit.

[Image: Vega 10 block diagram]


But...
“Talking to AMD’s engineers about the matter, they haven’t taken any steps with Vega to change this. They have made it clear that 4 compute engines is not a fundamental limitation – they know how to build a design with more engines – however to do so would require additional work.”

So I ask again... why can't Navi remove the limit? From what I read, GCN at one time had a limit of 48 CUs, which changed to 64. So they added a 4th compute engine for the extra 16 CUs.

With Navi being in development for so long, and being redesigned, they surely had the time and the opportunity.
 
Vega 64 requires all 64 working CUs to compete with Nvidia. If AMD still needs 64 for Navi, adding an extra compute engine to provide some spare CUs for yield purposes just seems logical. And it would allow them to reach 80 CUs.
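The front-end arithmetic behind these ceilings is simple: maximum CU count is just compute engines times CUs per engine, with shipping parts often disabling a few CUs for yield (as with Radeon VII's 60-of-64 configuration). A quick sketch, assuming the 16-CUs-per-engine layout discussed above:

```python
def max_cus(compute_engines: int, cus_per_engine: int = 16) -> int:
    """Upper bound on CUs for a GCN-style front end."""
    return compute_engines * cus_per_engine

# Four engines give the familiar ceiling; a fifth would lift it to 80,
# leaving headroom to ship 64 enabled CUs even after fusing off
# a few defective ones for yield.
print(max_cus(4))  # -> 64
print(max_cus(5))  # -> 80
```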
 
Presumably it's a size thing. 80 CUs is a big, expensive chip. The cost to architect such a design that'll sell in niche numbers, I'm guessing, makes it poor economics. Ergo, no attempt to design for more CUs; ergo, no more than 64 CUs on a GPU.
When you say niche, what product are you referring to?
 

I think this is correct. It's more of an economic limit than a technical limit. The internet has made this 64 limit into some sort of boogeyman.

Radeon VII's Vega 20 die already has 64 CUs (60 enabled), and I would say it's already at the area limit of what an SoC can reasonably cost for a console on a brand new process.
 
A theoretical 80 CU part. Given the 60 CU Radeon VII is already £600, and the 64 CU Vega 64 is nearly £400, an 80 CU GPU would be notably more costly, pricing it very much as a niche GPU. Heck, AMD is inherently niche as it is! ;)
 
I take it that with Navi being 7nm, all they needed to do was increase the clock speed without adding more CUs (more expensive) and bam, there's extra performance; it could be much better optimized than Radeon VII too. But by all means, AMD could always do a novelty 2080 Ti killer Navi Extreme by going to 80 CUs. As far as consoles are concerned, I think they can make do well within the 64 CU limit.
 
AMD launched a 64 CU card on a 28nm process back in 2015; it was their biggest chip so far, and they haven't launched a card topping that 64 figure since, so I believe that even if it's not a hard limit, there are significant reasons for them not going past it yet. Power consumption could be among the main ones: Vega 64 and Radeon VII are both at 300W, so there is that.

Radeon VII at 330mm2 is not a big chip. It's a new process, but it's still a medium size for a PC chip; nVidia's RTX 2060, a $350 card, is 445mm2. The 16GB of HBM2 probably isn't cheap though. Once nVidia starts making 7nm chips, they are most likely going to have variants far bigger than 330mm2, because they need to in order to exceed the capabilities of their current 12nm chips, which go all the way to 750mm2 in the RTX 2080 Ti. Titan V is even bigger, but not really a consumer card.

Point is, I think the main reason for AMD not going over 64 CUs is mainly technical, not economic.
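The power point can be put in rough numbers. A sketch using public spec-sheet figures for the two cards named above (the boost clocks are my assumption, not from the post; real sustained clocks vary):

```python
# name: (CUs, assumed boost clock in GHz, board power in W)
cards = {
    "Vega 64":    (64, 1.546, 295),
    "Radeon VII": (60, 1.750, 300),
}

for name, (cus, ghz, watts) in cards.items():
    tf = cus * 64 * 2 * ghz / 1000  # peak FP32, GCN formula
    print(f"{name}: {tf:.1f} TF at {watts} W "
          f"-> {tf * 1000 / watts:.0f} GFLOPS/W")
```

Both land in the mid-40s GFLOPS/W; hitting 12+ TF inside AdoredTV's rumored 120-180W console envelope would need a large perf/W jump over these parts, which is the crux of the technical-vs-economic argument here.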
 
Well I've thumb sucked a new theory based on absolute hearsay :LOL:

With the possibility of a 1.8 GHz GPU part, it might mean they crunched the numbers and concluded that a higher clock and a smaller chip for the GPU worked out better.

Leaving them more silicon for other special sauce, such as possible ray tracing tech.
 