Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
Boost clocks aren't really boost clocks if they're active 100% of the time a game is running. Relying on thermal throttling also makes for a terrible experience, because performance is no longer guaranteed; in the worst case you get no boost at all and the game simply runs worse.

I'm not saying it's a good thing. It makes as much sense on consoles as it does on PCs. What's the point of boosting clocks when the chip isn't being pushed?

Boost clocks have always seemed more like marketing to me, which is why I can see them coming to consoles if there is a discrepancy in performance.
 
36CUs makes hardware BC with Pro trivial.

Cerny put 64 ROPs in the Pro even though there isn't enough bandwidth to feed more than 32, just so they could get BC working via the butterfly layout. It's not a stretch of the imagination to say that BC might be hindering the PS5 design too. At the very least they could have done 22 WGPs with 20 enabled, like the 5700 XT.
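A back-of-envelope check of that bandwidth claim. The figures below (64 ROPs at ~911 MHz, 218 GB/s bus, 4 bytes per pixel written, no blending or compression) are my own assumed PS4 Pro numbers for illustration, not from the post:

```python
# Rough check (assumed figures): can PS4 Pro's memory bus feed 64 ROPs?
# 64 ROPs, 911 MHz GPU clock, 218 GB/s bandwidth, 4 bytes per written pixel.

def rop_write_demand_gbps(rops, clock_ghz, bytes_per_pixel=4):
    """Peak GB/s the ROPs could write if every ROP fired every cycle."""
    return rops * clock_ghz * bytes_per_pixel  # GHz * bytes -> GB/s

full = rop_write_demand_gbps(64, 0.911)  # ~233 GB/s demanded
half = rop_write_demand_gbps(32, 0.911)  # ~117 GB/s demanded
bus = 218.0                              # GB/s actually available

print(f"64 ROPs could demand ~{full:.0f} GB/s vs {bus:.0f} GB/s available")
print(f"32 ROPs would demand ~{half:.0f} GB/s")
```

On these assumptions the full 64 ROPs can out-write the bus on their own, which is the sense in which the extra ROPs look like a BC accommodation rather than a performance win.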

22? Isn't it 20 for the high end and 18 for the plain vanilla 5700?
 
22? Isn't it 20 for the high end and 18 for the plain vanilla 5700?

22 with 2 disabled for redundancy. They could have gone for any number higher than the 36 CUs.

PS5 Pro with 72CUs and 128 ROPS launching within a year of the PS5 is a possibility.
 
So instead you'd rather stick with a 4 Pro for playing games, rather than have a vastly better quality-of-life experience with a PS5?

This sort of thinking makes no sense to me. Instead of comparing consoles between the camps, one should compare the new console with their current console and the quality-of-life improvements it brings. I suppose if you already own both consoles, then choosing which one to upgrade first makes sense.
If Horizon ZD2 looks better than anything else on the market by a fair margin at launch, that becomes a somewhat justifiable reason to shell out $399. Otherwise I can live with next-gen multiplats and a handful of MS exclusives that interest me until the 5 Pro hits, so I wouldn't say my quality of gaming life is totally void of improvement.
 
22 with 2 disabled for redundancy. They could have gone for any number higher than the 36 CUs.

PS5 Pro with 72CUs and 128 ROPS launching within a year of the PS5 is a possibility.

Redundancy across a product line is necessary when you are not binning the chips. Unless 7nm yields are so bad that AMD can't produce enough non-defective chips to warrant a product with a fully enabled chip. (My bad, I interpreted your statement as if the 5700 were a 22 WGP design.)

But 22 WGP is an asymmetrical design. Why would AMD design a GPU with two SAs that have 5 WGPs and two SAs that have 6 WGPs? From a design standpoint it's wasteful to build a chip this way; the sensible options are 36 out of 40 enabled, or 44 out of 48 enabled.
 
Redundancy across a product line is necessary when you are not binning the chips. Unless 7nm yields are so bad that AMD can't produce enough non-defective chips to warrant a product with a fully enabled chip. (My bad, I interpreted your statement as if the 5700 were a 22 WGP design.)

But 22 WGP is an asymmetrical design. Why would AMD design a GPU with two SAs that have 5 WGPs and two SAs that have 6 WGPs? From a design standpoint it's wasteful to build a chip this way; the sensible options are 36 out of 40 enabled, or 44 out of 48 enabled.

I read somewhere that shader arrays don't have to be balanced. I'll see if I can dig it up.
 
I read somewhere that shader arrays don't have to be balanced. I'll see if I can dig it up.

It may not have to be balanced. The vanilla 5700 may be proof of that, if its CUs are deactivated at the WGP level.

However, you'd have to design two different shader arrays to accommodate a 22 WGP layout while still maintaining four shader arrays like the 5700. That may not be a problem if AMD is planning 6-WGP shader arrays for upcoming 7nm products, but without that, the chip would require additional design, verification and testing work to accommodate two different sets of shader arrays.

All for an extra 2 WGPs, instead of going with 4 more WGPs and duplicating the same block four times in the design.
 
There was way more than one reason behind XB1's lack of success. Had it been the same power as PS4, but still more expensive and with a mandatory Kinect no-one wanted and with the screwed up TVTVTV messaging, it'd have still struggled.

Microsoft had a lot of momentum coming into this gen from the 360. I think they had a great opportunity with the One to seize the majority of the market, had they released a "spiritual predecessor" of the Xbox One X back then: around 2.7-3 TF, with Kinect only as an accessory. It could have been done, and it would have fit better in their console lineup; every other model from the first Xbox to the Series X has pushed the envelope on capability. The One was the anomaly, and I believe it set them back a decade.

edited typos...
 
36CUs makes hardware BC with Pro trivial.

Cerny put 64 ROPs in the Pro even though there isn't enough bandwidth to feed more than 32, just so they could get BC working via the butterfly layout. It's not a stretch of the imagination to say that BC might be hindering the PS5 design too. At the very least they could have done 22 WGPs with 20 enabled, like the 5700 XT. Just my analysis.
Yes, easy BC is the reason for this machine's design.

But if they don't somehow reach 10 TF (and adequate bandwidth, with adequate RAM chips available in quantity), they'd be better off delaying the release even 6-8 months....
 
Personally, I don't think BC is important. Based on my PS2 and PS3 experience, not one person in my circle of friends used it. It's a good thing, but it should by no means be the deciding factor when setting priorities for the next generation. Also, forum warriors' opinions do matter, as they set the tone for others. I'm a typical early adopter, and over the course of past generations tens of people have taken my advice on which console to buy.

I think the apparent Sony design may not be good enough if the XBSX is the 12 TF monster they claim it to be.
 
22 with 2 disabled for redundancy. They could have gone for any number higher than the 36 CUs.

PS5 Pro with 72CUs and 128 ROPS launching within a year of the PS5 is a possibility.
Wow...
Is this going to be the definitive PlayStation?

I think the goal is to hit a real 4K@120 fps on all games...

For that, probably 20 TF are needed...

I still haven't understood what Sony's plans for RT are... nor have I yet understood the benefits of RT....
 
Personally I don't think that bc is important. Based on my PS2 and PS3 experience not even one person in my circle of friends used it. It is good thing but by no means should it be deciding factor in setting the priorities for next generation.

Those are your thoughts, and I respect them. But many others love having their own previous-gen game library available. PS Now, Plus and the like also push in this direction.

The best outcome is seeing previous-gen games run better on the new console with little or no effort from developers. I'm sure this is the PS5's goal, as it was for the PS4 Pro.
 
It would be ridiculously inefficient and costly, relative to its poor 9.2 TF performance, to design a 36 CU chip running at 2 GHz purely for the sake of BC; it's borderline absurd thinking and a waste of the 7nm die-shrink opportunity. It's simply not innovative or forward-thinking if the aim is to maximize performance for a new gen, much less to offer the best multiplat experience. I don't think Cerny would fully endorse this design all by himself.
 
It would be ridiculously inefficient and costly, relative to its poor 9.2 TF performance, to design a 36 CU chip running at 2 GHz purely for the sake of BC; it's borderline absurd thinking and a waste of the 7nm die-shrink opportunity. It's simply not innovative or forward-thinking if the aim is to maximize performance for a new gen, much less to offer the best multiplat experience. I don't think Cerny would fully endorse this design all by himself.
I'm wondering why there are no AMD GPUs with the 48-52 CUs people are proposing for next gen.

There are 36/40/56/64 CU chips, the 64 being GCN.

Big Navi is rumored to be 80CU (4*10WGP), Arden is supposed to be 60CU chip (3*10WGP with 4 deactivated) while Oberon is 40CU chip (2*10WGP with 4 deactivated). There is a pattern here...
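The pattern above is just WGP arithmetic; a quick sketch, using the thread's rumoured figures (Big Navi, Arden, Oberon are speculation here, not confirmed specs):

```python
# Sketch of the WGP -> CU arithmetic behind the rumoured chips.
# Each RDNA WGP contains 2 CUs; figures are the thread's speculation.
CUS_PER_WGP = 2

def active_cus(shader_engines, wgps_per_se, disabled_cus=0):
    """Active CU count for a given layout after redundancy harvesting."""
    return shader_engines * wgps_per_se * CUS_PER_WGP - disabled_cus

big_navi = active_cus(4, 10)                   # 80 CU die, fully enabled
arden    = active_cus(3, 10, disabled_cus=4)   # 60 CU die, 56 active
oberon   = active_cus(2, 10, disabled_cus=4)   # 40 CU die, 36 active
print(big_navi, arden, oberon)
```

The "pattern" is that every rumoured part is built from 10-WGP shader engine blocks, which is why 48 or 52 CUs never falls out of the math.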

How feasible is something like 48/52? Did Sony think, based on every generation that has passed, that a 60 CU design was not realistic because they wouldn't be able to clock the chip high enough to justify a much bigger die? Perhaps thermals for Navi were meant to be considerably better during the PS5's design phase. Perhaps Sony never thought MS was building a mini PC for your living room, and assumed that even with a 56 CU GPU, MS would not be able to clock it above 1.5 GHz due to thermals, making 9.2 TF relatively close while keeping their own chip considerably smaller.

In a sense, what I am trying to say is: if Navi was supposed to have better thermals in the design phase (so that an expensive die gives you the most bang for your buck), Sony perhaps had two choices:

a 36 CU @ 2.0 GHz console, or a 56 CU @ 1.5 GHz console.

The 36 CU one would be smaller, would get much more bang for your buck out of the silicon, and would be an easy way to keep perfect hardware BC intact. It would also mean a 256-bit bus is a perfect fit, and it would deliver a higher pixel fillrate than the bigger chip.

The 56 CU part at these clocks would give them 10.7 TF instead of 9.2 TF, but it would also mean a bigger die, requiring a wider bus as well.
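The 9.2 TF and 10.7 TF figures fall out of the standard GCN/RDNA compute formula (64 shaders per CU, 2 FLOPs per shader per clock):

```python
# FP32 throughput for a GCN/RDNA-style GPU:
# TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock * clock (GHz) / 1000
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

narrow_fast = tflops(36, 2.0)  # the 36 CU @ 2.0 GHz option
wide_slow   = tflops(56, 1.5)  # the 56 CU @ 1.5 GHz option
print(f"36 CU @ 2.0 GHz: {narrow_fast:.2f} TF")
print(f"56 CU @ 1.5 GHz: {wide_slow:.2f} TF")
```

This also shows why fillrate favours the narrow chip: fixed-function units like ROPs scale with clock, not CU count, so at 2.0 GHz the smaller part pushes pixels faster.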

Now some would ask: why not clock it 200-300 MHz higher? Because back when they designed it, they designed it as a console, and those clocks on a 56 CU part required far too much power to be feasible in a console. And if the knee of the power curve for this hypothetical Navi sat higher up, then clocking it much lower than necessary is pretty much wasting your silicon.

In any case, I am going by the GitHub data provided and thinking out loud about why they would go narrow and fast. It does scale better than wide and slow, and it does mean you get more out of your silicon, which is getting more and more expensive. AMD's cards have also shown a clear "hole" between 40 and 60 CU parts, with nothing released to fill that space. Perhaps that's for a reason; looking at how Navi's blocks work, it makes sense that there is none.
 
Or the GitHub leak was an APU with the minimum CUs required to test hardware-based BC. Any configuration above that (e.g. 40 CUs) can always have the additional CUs disabled. Any configuration above 36 CUs would also be a waste of silicon when you're testing hardware that may fail the test and need scrapping.

I really don't buy the 36 CU part of the rumour attached to this leak. 40 or more makes sense, because it makes a greater percentage of defective chips viable for use in PS Now servers as PS4s or PS4 Pros, depending on the number of defective CUs.

Maybe it will be 36 CUs; I've seen businesses behave more stupidly, but I'm yet to be convinced.
 
36CUs makes hardware BC with Pro trivial.
Why? What is it about RDNA that lets it run GCN code perfectly as long as you have an identical number of CUs? Understandably, this 36 figure is being fixated on as related, but there should be a far better technical argument for BC than the numbers matching: identifying what the actual problems are with running PS4 code on RDNA, and what the solutions are.

I'm not saying it isn't good for BC, but I want to see technical justification. ;) Why can't you just throw an arbitrary number of CUs at the problem and have the GPU schedulers handle the workloads?
 