Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Also, why would you always have disabled CUs when defective chips can still be used by disabling those CUs and using them in the Lockhart?
If you only have 100% perfect chips for Anaconda, you'll have a very small pool of chips to use. Say only 10% are perfect, with all 64 CUs working; that means 90% of your consoles are going to be Lockhart and only 10% Anaconda. Great if that's what your market wants, but what if it wants Anaconda:Lockhart in a 60:40 ratio?

You have to design your chip to be able to produce the numbers needed. That means disabling some CUs to be tolerant of defects. Only the PC space, with its rare super-high-end market, can afford defect-free chips and products.
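For a rough sense of the numbers, here's a back-of-the-envelope sketch assuming each CU fails independently with some defect probability (the 3.5% figure below is purely illustrative, not a real foundry number):

```python
from math import comb

def yield_with_spares(total_cus: int, good_needed: int, p_defect: float) -> float:
    """Probability that at least `good_needed` of `total_cus` CUs are defect-free,
    assuming each CU fails independently with probability `p_defect`."""
    max_bad = total_cus - good_needed
    return sum(
        comb(total_cus, k) * p_defect**k * (1 - p_defect)**(total_cus - k)
        for k in range(max_bad + 1)
    )

p = 0.035  # hypothetical per-CU defect probability, for illustration only
print(f"Dies with all 64 CUs good:      {yield_with_spares(64, 64, p):.1%}")  # ~10%
print(f"Dies with at least 56 good CUs: {yield_with_spares(64, 56, p):.1%}")  # >99%
```

Even with a defect rate that modest, nearly every die has 56+ working CUs, so a design that tolerates a few disabled CUs turns almost the whole wafer into sellable parts, while an all-64-enabled part only gets the small perfect slice.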
 
I don't see how they could manufacture millions of XBS consoles for 2020 using perfect 64-CU APUs. I can only see them using the 56-CU chips for another Xbox Series model at about 10 TF, and later a 4 TF model. Three models of Xbox Series, and five different Xboxes will be available in 2021. :yep2:
 
If we assume Klee's comment about 64 CUs is true, then his other info is worth noting (PS5 being 10% more powerful than XSX).

Sony must have some extremely good cooling solution (for example, a V-shaped case) so they can clock higher. I am very curious to see the retail version.
Yes. If the dude is consistent, that means in raw numbers the PS5 is 13.3 TF, and so far Klee hasn't shown any signs of backtracking. If Anaconda had more TFs he would have said it by now. I think we should all remain open-minded about this; after all, there's a great deal we don't know about RDNA 2 and how closely related the new consoles are to the unannounced big Navi chips. If we're still discussing the limits of the 5700 XT, then the conversation should've ended when that supposed 12 TF XSX was unveiled. Time to move on to bigger and better things, folks.
 
that supposed 12 TF XSX

Twice the power of the One X translates to about 10 TF of RDNA 1.0, and with RDNA 2.0 being even more efficient, I can't see how he meant 12 TF of RDNA 2.0. Or at least, I doubt it. Also, 13.3 TF is 2080 Ti level of performance, on paper at least. I know this is the baseless section so you can throw out whatever you want, but anyway.
 
Twice the power of the One X translates to about 10 TF of RDNA 1.0, and with RDNA 2.0 being even more efficient, I can't see how he meant 12 TF of RDNA 2.0. Or at least, I doubt it. Also, 13.3 TF is 2080 Ti level of performance, on paper at least. I know this is the baseless section so you can throw out whatever you want, but anyway.
Depends how you interpret it, of course. In raw power/teraflops terms, twice a 6 TF 1X is indeed 12 TF regardless of efficiency. But if he said twice as fast as the 1X, then the wording refers specifically to overall speed, which would take efficiency into account and make ~10 TF of RDNA 1.0 hold true. Regardless, I find it frustrating that MS is not making things crystal clear to the tech community in their reveal by laying out all the relevant specifics of the GPU power, thus causing some confusion.
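For what it's worth, the arithmetic behind the two readings looks like this, using the ~25% per-clock gain AMD has claimed for RDNA over GCN as a rough stand-in for "efficiency" (the real uplift is workload-dependent, so treat it as a sketch):

```python
XB1X_GCN_TF = 6.0            # One X raw compute
RDNA_PER_FLOP_UPLIFT = 1.25  # rough figure from AMD's RDNA messaging; workload-dependent

# Reading 1: "twice the power" = twice the raw flops, architecture ignored
raw_reading = 2 * XB1X_GCN_TF                               # 12.0 TF

# Reading 2: "twice as fast" = twice the delivered performance,
# so fewer RDNA flops are needed to get there
effective_reading = 2 * XB1X_GCN_TF / RDNA_PER_FLOP_UPLIFT  # ~9.6 TF

print(raw_reading, round(effective_reading, 1))             # 12.0 9.6
```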
 
Twice the power of the One X translates to about 10 TF of RDNA 1.0, and with RDNA 2.0 being even more efficient, I can't see how he meant 12 TF of RDNA 2.0.
Because never in the history of console technology has the difference in flops quality been considered. They've been compared like-for-like no matter the architecture. You also can't measure 'power' as a simple compute comparison, as overall power (the ability to render pixels on screen) depends not only on GPU flops but on CPU, bandwidth, architecture, bottlenecks, etc. Is a machine with 2x the flops and 3x the bandwidth 3x faster (the highest single-component improvement), 2x faster (the lowest single-component improvement), or 6x faster (the total aggregate of component improvements)? What if the old GPU lacks RT hardware and ray-traces at 1/20th the speed of the new one? Is it now 20x faster?

In the absence of anything to the contrary, the general interpretation would be that 2x the performance is 2x the flops, because that's the only metric that can be counted and compared, and there's nothing else MS can measure to derive a 2x figure, unless they're running something like 3DMark.
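To illustrate how arbitrary that aggregation choice is, here's the same machine "scored" three different ways with made-up component uplifts:

```python
# Hypothetical per-component uplifts of a new machine over an old one
uplifts = {"flops": 2.0, "bandwidth": 3.0}

weakest_link = min(uplifts.values())   # 2.0x -- quote the lowest single component
best_part = max(uplifts.values())      # 3.0x -- quote the highest single component
multiplied = 1.0
for u in uplifts.values():
    multiplied *= u                    # 6.0x -- (nonsensically) multiply them together

print(weakest_link, best_part, multiplied)
```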
 
Twice the power of the One X translates to about 10 TF of RDNA 1.0, and with RDNA 2.0 being even more efficient, I can't see how he meant 12 TF of RDNA 2.0. Or at least, I doubt it. Also, 13.3 TF is 2080 Ti level of performance, on paper at least. I know this is the baseless section so you can throw out whatever you want, but anyway.
If you follow Shifty's logic, you'll see why tunnel-visioning on just 'FLOPs' is the flaw in the argument. It's like comparing AMD and Nvidia flops, with people saying 2 AMD flops = 1 Nvidia flop and no mention of memory, bandwidth, the test at hand, the architecture, etc. It'll never fly. The most obvious way to compare like for like is to take it as spec.

Say someone told you that car A has double the power of car B, but all car A has is upgraded parts and a massive twin turbo. It's the same engine, though, and you're going to measure horsepower at the flywheel for both. Regardless of how the power is generated, the measurement at the end is what matters.

It's a mistake to say 'well then, 2x Scorpio means 9 TF of RDNA', because then you're not measuring raw performance; you're measuring how well it handles different loads. That's like saying that once we add a driver and a race course, I now expect car A to be only 1.3x as fast as car B. Which might be true if the course is loaded with curves where the driver can't put the pedal down, but that doesn't mean the car isn't capable of delivering 2x the power at peak horsepower.

The hardware is the car.
The course and driver are the developers.
The car can't really change, but the developers can build a course and drive it in a way that maximizes the car's performance (hence exclusives).

You can't get more FLOPs than what is calculated: a mul + add is 2 operations, and that's it. It doesn't matter which architecture you are running; the most the GPU can do per lane is a mul + add in a single clock cycle. It's just a matter of feeding the architecture so it can do work (bandwidth/latency on cache, reducing idle time), reducing workloads so it does less work (compression), making design choices so you're working smarter (optimization), and leveraging new hardware features/accelerators to perform certain tasks much faster (VRS, RT, mesh shading, etc.).
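To make that concrete, the headline TF number is just lanes x 2 ops (the mul + add) x clock. A quick sketch; the CU counts and clocks below are illustrative guesses, not confirmed specs for either console:

```python
def gpu_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS: each lane retires one fused mul + add (2 ops) per clock."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000

# Illustrative configurations only
print(round(gpu_tflops(56, 1.675), 1))  # ~12.0 TF, e.g. a 56-CU part at 1.675 GHz
print(round(gpu_tflops(64, 1.625), 1))  # ~13.3 TF, e.g. a 64-CU part at 1.625 GHz
```

Everything after that is about how much of the peak you actually sustain, which is exactly the feeding/compression/optimization list above.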
 
Twice the power of the One X translates to about 10 TF of RDNA 1.0, and with RDNA 2.0 being even more efficient, I can't see how he meant 12 TF of RDNA 2.0. Or at least, I doubt it. Also, 13.3 TF is 2080 Ti level of performance, on paper at least. I know this is the baseless section so you can throw out whatever you want, but anyway.

A 13.3 TFLOPs RDNA part is not 2080 Ti level of power, I think.
 

I mean, I'm sort of going to trust Phil more than any insider. At least he knows 100% of at least 50% of the equation here. On the remaining 50% he has a better idea than any insider (who thinks they know both), because he knows what configurations can be built at which price points. He knows the cost-mitigation strategies, he knows everything top to bottom, and he should know it for a variety of configurations, some of which should be shared with Sony.

If Phil made a prediction about Sony's hardware and still got absolutely everything wrong, he'd still have a chance of being 50% correct. Think on that.

It's like we're spectators watching a poker game being played. He knows his hand. As spectators, we don't know either player's hand; we can see them make their bets and we see the cards on the table, but that's it.
 
I find it frustrating that MS is not making things crystal clear to the tech community in their reveal by laying out all the relevant specifics of the GPU power, thus causing some confusion.

And Sony is? We're a year from release. Neither company is going to get into exact performance numbers this early. They won't even finalize clocks until at least GDC. They're each going to keep their competition in the dark as long as possible. This rush to wanting exact details now is crazy. Personally, I'd rather MS stick to their rough 2x (XB1X) and 8x (XB1) numbers until they can give us final clocks/TFs.

Tommy McClain
 
A 13.3 TFLOPs RDNA part is not 2080 Ti level of power, I think.

The RTX 2080 Ti has roughly 13.4 TF (FP32) of raw performance. If these next-generation consoles are actually hitting 12-13 TF of raw performance with faster RT than the RTX line, then AMD's rumored RTX killer may not be far-fetched after all. But we shall see...
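Quick sanity check on that figure, using the 2080 Ti's published 4352 shader cores and 1545 MHz boost clock (real cards typically boost higher, so this is just the paper number):

```python
cores = 4352             # RTX 2080 Ti CUDA cores
boost_clock_ghz = 1.545  # rated boost clock
tflops = cores * 2 * boost_clock_ghz / 1000  # 2 ops (FMA) per core per clock
print(round(tflops, 2))  # ~13.45 TF FP32 on paper
```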
 
But can Phil see through Cerny's poker face? :mrgreen: Sometimes in a game of poker, being overconfident is how you lose. :)
You don't need to see poker faces. You have your hand, you see the cards on the table, and you know the percentages of the hands that can beat you. You work off that; that's what gambling is.
 
The RTX 2080 Ti has roughly 13.4 TF (FP32) of raw performance. If these next-generation consoles are actually hitting 12-13 TF of raw performance with faster RT than the RTX line, then AMD's rumored RTX killer may not be far-fetched after all. But we shall see...

Faster than RTX is maybe difficult. I think next-generation consoles will have full access to the BVH and probably be more flexible than the current RTX implementation, but as for more powerful, we'll need to wait.
 
But can Phil see through Cerny's poker face? :mrgreen: Sometimes in a game of poker, being overconfident is how you lose. :)

Looking at the landscape, new drive tech or an RT implementation could make more of a difference to the final user experience (the bit that ultimately counts) than 10% extra flops.

A very interesting game of cat and mouse over showing their hands atm.
 
You don't need to see poker faces. You have your hand, you see the cards on the table, and you know the percentages of the hands that can beat you. You work off that; that's what gambling is.
That's not what poker is. ;) But these companies aren't playing poker and they won't be able to make the other guy fold. They can only have a long-term plan and follow it through.
 