Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Or the GitHub leak was an APU with the minimum CUs required to test hardware-based BC. Any configuration above that (e.g. 40 CUs) can always have additional CUs disabled. Any configuration above 36 CUs would also be a waste of silicon when you're going through the process of testing hardware that may fail the test and need scrapping.

I really don't buy the 36CU part of the rumour attached to this leak. 40 or more makes sense, because it makes a greater percentage of defective chips viable for use in PSNow servers as PS4s or PS4 Pros, depending on the number of defective CUs (rough binning sketch at the end of this post).

Maybe it will be 36 CUs, I've seen businesses behave more stupidly, but I'm yet to be convinced.
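To illustrate that binning argument, here is a minimal sketch with purely hypothetical salvage tiers and thresholds (nothing here is from the leak):

# Hypothetical salvage binning of a 40 CU die by number of defective CUs.
# Tier names and thresholds are illustrative assumptions, not leaked data.
def bin_chip(defective_cus, total_cus=40):
    active = total_cus - defective_cus
    if active >= 36:
        return "36 CUs enabled -> PS5 / PS4 Pro-class part"
    if active >= 18:
        return "18 CUs enabled -> PS4-class part for PSNow servers"
    return "scrap"

for defects in (0, 2, 4, 10, 24):
    print(defects, bin_chip(defects))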
That was my thought as well, but then why run it at 2.0GHz? And why have a comment in one of the tests with 18 WGPs that the "full chip is used", when that was not a BC test? Then there is the 288GT/s number which, again, clearly indicates 36CUs.
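For what it's worth, that 288GT/s figure does fall straight out of 36 CUs at 2.0GHz if you assume the usual 4 texture units per CU; a quick sanity check (my arithmetic, not anything from the leak itself):

# Texel fill rate = CUs * TMUs per CU * clock (GHz), in gigatexels per second.
cus = 36
tmus_per_cu = 4       # standard for GCN/RDNA compute units
clock_ghz = 2.0
print(cus * tmus_per_cu * clock_ghz)  # 288.0 GT/s, matching the GitHub number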
 
Wait, are people thinking these are more than just placeholder values?
Come on people.
TPU always creates low-effort placeholders for every new GPU codename that comes up, then updates them as more information emerges.
People shouldn't read anything into it.
 
Can someone chime in and let me know what this Cerny philosophy is?
Wide and conservatively clocked, of which 36CUs at 2GHz is neither.

Those are your own respectable thoughts, but many others love having their previous-gen game library available. PS Now, PS Plus and the like also push in this direction.

The best outcome is to see previous-gen games run better on the new console with little or no effort from developers. I'm sure this is the PS5 goal, as it was for the PS4 Pro.

Sony never compromised new-gen performance to favour BC.
It's a strategy that has had them winning for a while now. Changing that wouldn't make a lot of sense.
Sony isn't Nintendo.

There are 36/40/56/64CU chips. 64 being GCN.

Big Navi is rumored to be an 80CU chip (4*10WGP), Arden is supposed to be a 60CU chip (3*10WGP with 4 deactivated), while Oberon is a 40CU chip (2*10WGP with 4 deactivated). There is a pattern here...
No, there is no pattern of AMD being only able to scale RDNA at 10xWGP.
Navi 14 is 24 CUs / 12 WGP total, with most implementations using 11 WGP enabled.
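Quick reminder for anyone converting between the two units: each RDNA WGP contains two CUs, so the figures being thrown around translate as below (a minimal sketch):

# RDNA groups compute units into Workgroup Processors (WGPs): 2 CUs per WGP.
def cus_from_wgps(wgps):
    return 2 * wgps

print(cus_from_wgps(18))  # 36 CUs - the Oberon configuration in the GitHub tests
print(cus_from_wgps(20))  # 40 CUs - a full Navi 10-sized die
print(cus_from_wgps(12))  # 24 CUs - full Navi 14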


I'm not saying it isn't good for BC, but I want to see technical justification. ;) Why can't you just throw an arbitrary number of CUs at the problem and have the GPU schedulers handle the workloads?
Why didn't the PS4 Pro ever use the full 36CUs in PS4 Boost mode?

Probably because CU allocation is finely tuned in a great number of titles, and throwing more CUs wouldn't be effective.
 
I'm wondering why there are no AMD GPUs with 48-52CUs as people are proposing for next gen?

There are 36/40/56/64CU chips. 64 being GCN.

Big Navi is rumored to be an 80CU chip (4*10WGP), Arden is supposed to be a 60CU chip (3*10WGP with 4 deactivated), while Oberon is a 40CU chip (2*10WGP with 4 deactivated). There is a pattern here...

How feasible is something like 48/52? Did Sony think, based on every other generation that has passed, that a 60CU design is not realistic because they wouldn't be able to clock the chip high enough to justify such a bigger die? Perhaps thermals for Navi were expected to be considerably better during the design phase of the PS5. Perhaps Sony never thought MS was making a mini PC for your living room, and if they weren't going to, then even with a 56CU GPU they would not be able to clock it above 1.5GHz due to thermals, making 9.2TF relatively close while keeping the chip considerably smaller.

In a sense, what I am trying to say is: if Navi was expected to have better thermals during the design phase (so that an expensive die gives you the most bang for your buck), Sony perhaps had two choices:

A 36CU console at 2.0GHz, or a 56CU console at 1.5GHz.

The 36CU one would be smaller, would get you much more bang for your buck from your silicon, and would be an easy way to keep perfect hardware BC intact. It would also mean a 256-bit bus is a perfect fit, and it would deliver a higher pixel fillrate than the bigger chip.

The 56CU part at these clocks would give them 10.7TF instead of 9.2TF, but would also result in a bigger die and require a wider bus as well.
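A quick back-of-the-envelope check of those two options, using the standard FLOPS formula (CUs x 64 shader ALUs x 2 ops per clock x frequency), assuming nothing beyond those public figures:

# FP32 throughput in teraflops for a given CU count and clock.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(round(tflops(36, 2.0), 2))  # ~9.22 TF for the narrow-and-fast option
print(round(tflops(56, 1.5), 2))  # ~10.75 TF for the wide-and-slow option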

Now some would say: why not clock it 200-300MHz higher? Well, because back when they designed it, they designed it as a console, and those clocks on a 56CU part required far too much power to be feasible in a console. If the knee of the power curve for this hypothetical Navi sat higher up, then clocking it much lower than necessary is pretty much wasting your silicon.

In any case, I am going by the GitHub data provided and thinking out loud about why they would go with narrow and fast. It does scale better than wide and slow, and it does mean you get more out of your silicon, which is getting more and more expensive. Also, AMD's cards show a clear "hole" between 40 and 60CU parts, and there has not been one to fill that space. Perhaps that's for a reason, and looking at how Navi's blocks work, it does make sense that there is none.
Narrow and fast means a more expensive cooling solution, louder noise, a shorter GPU lifespan, not to mention less overall performance than wide and slow. I hardly think it gives you the most bang for your buck. Remember, Sony collaborated extensively with AMD on the Navi design (I would guess including any big Navi chip in the pipeline), so they would have known it inside out: the thermal limit, CU limit, etc. of a console-sized design. Either they gave BC the utmost priority, or there's a full chip we still haven't seen tested yet or that has been hidden from us.
 
Wide and conservatively clocked, of which 36CUs at 2GHz is neither.



Sony never compromised new-gen performance to favour BC.
It's a strategy that has had them winning for a while now. Changing that wouldn't make a lot of sense.
Sony isn't Nintendo.


No, there is no pattern of AMD being only able to scale RDNA at 10xWGP.
Navi 14 is 24 CUs / 12 WGP total, with most implementations using 11 WGP enabled.
Doesn't sound like his design at all. The PS4 ran at 800MHz, the same clock as the Xbone despite having 50% more CUs. That was pretty much the sweet spot for Pitcairn, and about 8% lower than the 7850.

The PS4 Pro was very conservative, I agree with that, but then the Xbox One X came and pushed clocks almost 30% higher (!!!) with 4 more CUs. Why, when they could have gone wide and slow?

Wide and slow is not valid in the PC space anymore, where you have a niche of players willing to fork out $1200 for a 500mm² GPU, but in consoles? This trend will likely take on a whole new meaning. You want to maximize your silicon more than ever, and the best way to do that is to up the clocks. It does not fit the current 5700XT cards, but Sony is still shooting for 2020 on an improved node and with a much bigger budget behind the actual silicon versus a PC counterpart released 18 months prior.

BC this time around is of the utmost importance. All the games I have on my PS4, I want to be able to play, digital or physical, on my PS5. If they cannot do that while their competitor even boosts old games (all the way back to the OG Xbox), then there is a problem.

You could say, "Why do I need solid online at all? Nintendo's is borderline non-functional and they are doing pretty well." That's not really an argument. It's all going digital; they had better have BC working 100%.
 
Why didn't the PS4 Pro ever use the full 36CUs in PS4 Boost mode?

Probably because CU allocation is finely tuned in a great number of titles, and throwing more CUs wouldn't be effective.
Effectiveness isn't the problem; it's whether it can run without issues or not. As long as the game runs correctly, there's no point limiting the main console experience.

Also, do games have that level of CU allocation control? I thought the ACEs handled all that and games just submitted tasks. Furthermore, wouldn't it be easier and better overall to customise the controllers to just use a limited number of CUs? Cap them in firmware?

Just looking through the DF Cerny PS4P interview. It's got some curious points for consideration...

"When we design hardware, we start with the goals we want to achieve," says Cerny. "Power in and of itself is not a goal. The question is, what that power makes possible."

What becomes clear is that Sony itself - perhaps unlike its rival - does not believe that the concept of the console hardware generation is over. Cerny has a number of criteria he believes amounts to a reset in gaming power: primarily, a new CPU architecture and vastly increased memory allocation. And of course, a massive revision in GPU power
...
"For variable frame-rate games, we were looking to boost the frame-rate. But we also wanted interoperability. We want the 700 existing titles to work flawlessly," Mark Cerny explains. "That meant staying with eight Jaguar cores for the CPU and pushing the frequency as high as it would go on the new process technology, which turned out to be 2.1GHz. It's about 30 per cent higher than the 1.6GHz in the existing model."

Nothing insightful on RDNA compatibility though. The PS4 Pro went the safest possible route, using the same architecture. Nothing on how a different architecture would affect things besides the talk of x64 having potential timing problems.
 
Microsoft had a lot of momentum coming into this gen from the 360. I think they had a great opportunity with the One to seize the majority of the market, had they released a "spiritual predecessor" of the Xbox One X model back then, which would have been around 2.7-3TF with Kinect only as an accessory. It could have been done, and it would have stood better in their lineup of consoles; all the other models from the first Xbox to the Series X have pushed the envelope with capabilities. The One was the anomaly, and I believe it set them back a decade.

MS was losing momentum in the last couple of years going into this gen. Too much focus on Kinect games, plus the PS3 really hitting its stride, were the reasons.

Personally, I don't think BC is important. Based on my PS2 and PS3 experience, not even one person in my circle of friends used it. It is a good thing, but by no means should it be the deciding factor in setting priorities for the next generation. Also, forum warriors' opinions do matter, as they set the tone for others. I am a typical early adopter, and over the course of several generations tens of people have taken my advice on which console to buy.

I think the apparent Sony design may not be good enough if the XSX is the 12 TF monster they claim it to be.
I used to agree, but this time we have really popular online games whose progress people will want to take with them - or, if they can't, it makes switching sides a lot easier. Hence BC is very important this gen. Also, my backlog ;)
 
2GHz does sound a tad high, but a 36CU RDNA configuration aligns pretty well with where Pitcairn and the PS4 sat in 2013. It's just that MS isn't coming with a puny 24CU chip this time, at least not with the X series.

Navi 10 chips have pretty high clocks as well, so while 2GHz does sound high, perhaps their work has allowed it. I have to say, though, that I did not expect them to clock it higher than 1.8GHz...
 
Yeah, I'm not touching a 9.2 TF PS5, be it RDNA or not, especially if the competition has a 12 TF RDNA machine on offer. Yes, I'm willing to pay the extra dollar for more value. There's always the option of waiting 3 years for a Pro when more exclusives become readily available. That said, the jury is still out on a Super Saiyan God Super Saiyan Kaioken x20 PS5 at 13 TF, if Klee, BlackOsiris and Matt are to be trusted :mrgreen:.

Why not just wait until you can see what the PS5 can do before writing it off? It'll have almost instant loading, and it will play all your PS4 and VR games with quicker loading and potentially better performance. It will play PS5 exclusives at a level more than 2x the Pro, with RT.

Games will look awesome and be a real step up.
 
I'm not saying it isn't good for BC, but I want to see technical justification. ;) Why can't you just throw an arbitrary number of CUs at the problem and have the GPU schedulers handle the workloads?

Sony have not said explicitly, but from Mark Cerny's interview with Gamasutra:
Mark Cerny said:
“We doubled the GPU size by essentially placing it next to a mirrored image of itself, rather like the wings of a butterfly,” he said. “That gives us an extremely clean way to support the 700 existing titles, because we can turn off half the GPU and just run something that's very close to the original GPU.”
Maybe compatibility of some titles is predicated on the behaviour and arrangement of the grouping of caches shared by the CUs:

[Image: ps4-gpu-architecture1.jpg - PS4 GPU architecture diagram]
 
Why not just wait until you can see what the PS5 can do before writing it off? It'll have almost instant loading, and it will play all your PS4 and VR games with quicker loading and potentially better performance. It will play PS5 exclusives at a level more than 2x the Pro, with RT.

Games will look awesome and be a real step up.

To be honest, the only thing enticing me to buy a PS5 is confirmation of compatibility with PSVR. I would be pretty pissed if it weren't there.
 
MS was losing momentum in the last couple of years going into this gen. Too much focus on Kinect games, plus the PS3 really hitting its stride, were the reasons.

Well, the X360 had its best years in '10-'11, and '12 was still good, better than most of the early years. '13 was already the launch of the current gen, so I disagree that they lost momentum in their later years. The 360 was a late bloomer. Kinect and the Slim model boosted sales by a lot. Prior to them, it looked like the PS3 was going to cruise past the 360, but it ended up a close fight till the end.
 
It would be ridiculously inefficient and costly, relative to its poor 9.2 TF performance, to design a 36 CU chip running at 2GHz purely for the sake of BC; it's borderline absurd thinking and a waste of the 7nm die-shrink opportunity. It's simply not innovative or forward-thinking enough to maximize performance for a new gen, much less to offer the best multiplatform experience. I don't think Cerny would fully endorse this design all by himself.
I know.
But Sony probably cannot do otherwise... [emoji6]
 
Why not just wait until you can see what the PS5 can do before writing it off? It'll have almost instant loading, and it will play all your PS4 and VR games with quicker loading and potentially better performance. It will play PS5 exclusives at a level more than 2x the Pro, with RT.

Games will look awesome and be a real step up.
Sure, I'll wait and see the games and the real specs first. But I just can't shake the feeling that 9.2 TF would compromise the artists' next-gen vision and the image quality of the exclusives, much less set new visual benchmarks for the industry. I don't know, just a gut feeling.
 
That was my thought as well, but then why run it at 2.0GHz? And why have a comment in one of the tests with 18 WGPs that the "full chip is used", when that was not a BC test? Then there is the 288GT/s number which, again, clearly indicates 36CUs.
40 CUs @ 2.0GHz, with the best chips selected for the PS5 and the rest going to a PS4 Pro 2, PS4 Pro and even PS4, is the most rational choice...

Why 2.0GHz?
To have the bus clocked accordingly and the bandwidth needed to really make the PS5 a next-gen console...

The PS6 may then also be a doubling of the PS5 silicon (with a 512-bit bus)... Sony has started down this path of handling BC, and it's difficult (or impossible) to change now... Why? Because it's much more efficient and simpler, as I understand it. MS did miracles enabling Xbox 360 BC on the One. And MS is MS!!!!
 
That was my thought as well, but then why run it at 2.0GHz? And why have a comment in one of the tests with 18 WGPs that the "full chip is used", when that was not a BC test? Then there is the 288GT/s number which, again, clearly indicates 36CUs.

I don't dispute the 36CU chip's existence, just the notion that it's the final configuration. 36 CUs/18 WGPs is the bare minimum required for BC, BC is massively important to the PS5, and spending more on a larger chip would be wasteful when this chip is potentially going to fail some aspects of testing and require a redesign.

My proposal is more that they've spent money on a test chip that allows them to robustly test BC and the limits of their cooling solution (hence the high clock speed and the design of the dev kits), whilst still being able to provide physical hardware to developers, replete with the features of RDNA 1 or 2.

They know from the PS4 that suddenly giving developers an extra 4GB of memory caused no problems. I think an extra 4 or so CUs would be much the same.

Of course, they may also have decided that a 256-bit bus is their limit, and downclocked 18Gbps GDDR6 on such a bus is only good for ~9.2TF.
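For context on that last point, here's what a 256-bit bus yields at a few GDDR6 data rates (my own arithmetic, not figures from the leak):

# Memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gb_s(bus_width_bits, gbps):
    return bus_width_bits / 8 * gbps

for rate in (14, 16, 18):
    print(f"{rate} Gbps on a 256-bit bus: {bandwidth_gb_s(256, rate):.0f} GB/s")  # 448, 512, 576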
 
I don't dispute the 36CU chip's existence, just the notion that it's the final configuration. 36 CUs/18 WGPs is the bare minimum required for BC, BC is massively important to the PS5, and spending more on a larger chip would be wasteful when this chip is potentially going to fail some aspects of testing and require a redesign.

My proposal is more that they've spent money on a test chip that allows them to robustly test BC and the limits of their cooling solution (hence the high clock speed and the design of the dev kits), whilst still being able to provide physical hardware to developers, replete with the features of RDNA 1 or 2.

They know from the PS4 that suddenly giving developers an extra 4GB of memory caused no problems. I think an extra 4 or so CUs would be much the same.

Of course, they may also have decided that a 256-bit bus is their limit, and downclocked 18Gbps GDDR6 on such a bus is only good for ~9.2TF.
So a gen 3 mode that wasn't tested, and the native gen 2 mode is actually just a boosted 4Pro mode?
That would explain why the results never showed up.
I mean, I guess if there are 54 CUs then yeah, you could shut off 18 of them and drop back down to 36.

I'm okay with people sort of hanging onto that, because that comes down to missing information. But I'm pretty sure that since June they cannot just add more CUs without extensive testing or delays.
 
I don't dispute the 36CU chip's existence, just the notion that it's the final configuration. 36 CUs/18 WGPs is the bare minimum required for BC, BC is massively important to the PS5, and spending more on a larger chip would be wasteful when this chip is potentially going to fail some aspects of testing and require a redesign.

My proposal is more that they've spent money on a test chip that allows them to robustly test BC and the limits of their cooling solution (hence the high clock speed and the design of the dev kits), whilst still being able to provide physical hardware to developers, replete with the features of RDNA 1 or 2.

They know from the PS4 that suddenly giving developers an extra 4GB of memory caused no problems. I think an extra 4 or so CUs would be much the same.

Of course, they may also have decided that a 256-bit bus is their limit, and downclocked 18Gbps GDDR6 on such a bus is only good for ~9.2TF.
If MS managed the impossible with 360 compatibility on the One, and with Series X BC for all previous consoles, what makes it impossible that Sony may find a different and smarter solution without compromising next-gen performance?
 