Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Not to be THAT guy, but if a dev has target specs for both.... well....
Not to mention they're both using the same architecture for GPU and CPU, so a discernible difference in compute throughput and bandwidth should result in a similar difference in real performance.
Just like we saw with the 2013 consoles.
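For reference, here's a quick back-of-envelope comparison of the 2013 consoles using their widely reported launch specs (figures from memory, so treat them as approximate):

```python
# Rough comparison of the 2013 consoles' peak GPU compute and memory bandwidth.
# Numbers are the commonly cited launch specs, used here purely for illustration.

def gcn_tflops(cus, clock_ghz):
    """Peak FP32 TFLOPS for a GCN-style GPU: CUs x 64 lanes x 2 FLOPs/clock x clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

ps4 = gcn_tflops(18, 0.800)   # ~1.84 TF, 176 GB/s GDDR5
xb1 = gcn_tflops(12, 0.853)   # ~1.31 TF, 68 GB/s DDR3 + ESRAM
print(f"PS4 {ps4:.2f} TF vs X1 {xb1:.2f} TF -> about {ps4 / xb1 - 1:.0%} more peak compute")
```

That roughly 40% compute gap, together with the bandwidth difference, did tend to show up as a fairly consistent resolution gap in multiplatform titles.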
 
Since no one is coming up with anything at the moment, I thought it’s the right time........

PS5 specs.
6 drops of essence of terror, 5 drops of sinister sauce, 1 drop of tenderness.
And it will be called Milton.

CPU will be 6x more powerful than the PS4, GPU will be 5x.
That’s the math.
 
I think it might come down to clock speeds, which probably are not yet set in stone. X1 got a bump in CPU and GPU clocks well after it was unveiled and closer to release because it had wiggle room in its cooling solution. The same might also apply here. The estimate for Project Scarlett based on the reveal video points to a large SoC, so its clock speed will go a long way toward determining the final capability of the system.

PS4 also got a RAM doubling just before it was unveiled. And X1X increased the RAM available for games not long before release. My point is Ed Boon is absolutely right. A lot is still up in the air even if the SoC or final silicon is a known quantity right now.
 
Since no one is coming up with anything at the moment, I thought it’s the right time........

PS5 specs.
6 drops of essence of terror, 5 drops of sinister sauce, 1 drop of tenderness.
And it will be called Milton.

CPU will be 6x more powerful than the PS4, GPU will be 5x.
That’s the math.
Damn, I was gonna argue but you just can't argue with maths! :(
 
I think it might come down to clock speeds, which probably are not yet set in stone. X1 got a bump in CPU and GPU clocks well after it was unveiled and closer to release because it had wiggle room in its cooling solution.
Well, if they had done the cooling solution right, they wouldn’t have had the wiggle room. They were behind in console development, which is why they had an external brick, and they overcompensated on the cooling to ensure they would not get another RROD issue.

If they had been on track with everything, the cooling would exactly match the desired clock speeds and yield, and there would be no wiggle room like we saw with the PS4.
 
It is interesting that some commenters are claiming that there is going to be very little difference between these two consoles because they are both using some of the same components from AMD. Maybe that is true, but I have seen a few hints that suggest otherwise.

Back with the PS3, Sony used separate memory banks, and high bandwidth XDR for their Cell. With Xbox One, MS used a huge cache. I recall seeing a recent MS patent that uses HBM for a very large L4 cache, which might be a good solution for improving bandwidth utilization with GDDR6. Are they not expected to use 16 GB of it for Scarlett, targeting 12 TF? With less bus contention, they might achieve well over 50% utilization. Also, if a 2.5D solution was used for the on package HBM, the chip would look rather large.
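To put some rough numbers on that (purely illustrative; the 320-bit bus and 14 Gbps GDDR6 below are assumptions on my part, not leaked specs):

```python
# Back-of-envelope: peak vs. sustained bandwidth for a hypothetical 16 GB GDDR6 setup,
# and what that means in bytes per FLOP against a 12 TF GPU. All assumptions, no leaks.

bus_width_bits = 320                 # e.g. 10 x 32-bit chips, 16 GB with 16 Gb parts
data_rate_gbps = 14                  # per-pin data rate
peak_gbs = bus_width_bits / 8 * data_rate_gbps       # 560 GB/s peak

for utilization in (0.5, 0.7):
    sustained = peak_gbs * utilization
    print(f"{utilization:.0%} utilization: {sustained:.0f} GB/s sustained, "
          f"{sustained / 12e3:.3f} B/FLOP at 12 TF")
```

If a large L4 cache soaked up a chunk of the traffic and reduced contention on the GDDR6 bus, the effective figure would move toward the higher end of that range.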

There has been reporting at E3 that the PS5 is going to be more powerful. As I recall, a leak from a presentation to developers in April claimed 56 compute units and 12.9 TF. That lines up nicely with Gonzalo’s GPU clock @ 1.8 GHz. But that was also claimed before AMD revealed a higher IPC for RDNA. Hmm...
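The arithmetic at least hangs together, assuming the usual 64 lanes per CU and 2 FLOPs per lane per clock:

```python
# Sanity check: do 56 CUs at the rumoured ~1.8 GHz Gonzalo clock give ~12.9 TF?
cus, clock_ghz = 56, 1.8
tflops = cus * 64 * 2 * clock_ghz / 1000   # 64 lanes per CU, 2 FLOPs (FMA) per clock
print(f"{cus} CUs @ {clock_ghz} GHz -> {tflops:.2f} TF peak FP32")   # ~12.90 TF
```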

Also, what about that PS5 devkit with 20 GB of GDDR6 and a bandwidth of 880 GB/sec? Coupled with the additional DDR4, wouldn’t that really push the console’s TDP?
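For what it's worth, 880 GB/s is an aggressive figure whichever way you slice it. Here is the per-pin data rate it would imply at a few bus widths I'm assuming purely for illustration (GDDR6 parts at the time were generally in the 14-16 Gbps range):

```python
# What per-pin GDDR6 data rate would 880 GB/s require at a few assumed bus widths?
target_gbs = 880
for bus_width_bits in (320, 384, 512):
    rate_gbps = target_gbs / (bus_width_bits / 8)
    print(f"{bus_width_bits}-bit bus -> {rate_gbps:.1f} Gbps per pin")
# 320-bit -> 22.0, 384-bit -> 18.3, 512-bit -> 13.8 Gbps
```

Either the bus is very wide or the memory is very fast; both options push power and cost, which is why that rumour makes the TDP question interesting.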

Back in April, there was an interesting claim about the PS5 using both DDR4 and HBM2 for a total of 24 GB. Such a configuration would provide a lot more memory while using far less power, which could then be used by the GPU. Separate memory pools and the use of HBM would also greatly improve utilization, allowing a sustained bandwidth far closer to peak performance.

Why bother with any of this if ray tracing is only going to be a ‘minor feature’? It is probably the biggest improvement to rendering since rasterization. Wouldn’t it also simplify game development?

Anyway, this is complete guesswork on my part. I am sure whatever comes out next year is going to be awesome regardless. Really, I will be quite happy if I no longer see geometry and effects ‘pop in’!
 
There has been reporting at E3 that the PS5 is going to be more powerful
Both companies are aiming for performance as the priority goal for their consoles here.

A notable difference in performance between the two is unlikely at the same price points. There is really only so much customization can do to extend performance, unless Navi itself is such a terrible chip that they would never have gone with it to begin with.
 
Both companies are aiming for performance as the priority goal for their consoles here.

A notable difference in performance between the two is unlikely at the same price points. There is really only so much customization can do to extend performance, unless Navi itself is such a terrible chip that they would never have gone with it to begin with.

But developers are already saying that there is a notable performance difference. The memory system can have a large impact on performance as well, and there is little reason to assume it will be the same in both machines.

Even though both Sony and Microsoft used AMD components for the current console generation, there were major differences in how they allocated resources, and the consoles ended up with a significant difference in performance.
 
Both companies are aiming for performance as the priority goal for their consoles here.

A notable difference in performance between the two is unlikely at the same price points. There is really only so much customization can do to extend performance, unless Navi itself is such a terrible chip that they would never have gone with it to begin with.
To be fair though, price point (or the degree of loss-leading deemed acceptable) and target power draw could have some consequences.
Imagine one manufacturer opting for a 100 W power target to maintain low noise and a minimum of problems with placement and heat dissipation, and settling on a $399 break-even price point, while the other went for a 300 W design with the correspondingly beefier electrical design and cooling, and sold it for $499, taking an initial hit of $200 per console (with a correspondingly higher component budget).

The differences in performance could become significant. Even so, though, they would still be very similar architecturally, and porting between them would be relatively trivial.

And the scenario isn’t terribly likely.

Console hardware design is interesting because it is about efficiency - what can be achieved within narrow cost and power constraints. Speculating about the consequences of the manufacturers choosing drastically different constraints is not as interesting technically; it’s mostly scaling.
 
But developers are already saying that there is a notable performance difference.
I would disagree with that. There actually hasn’t been; the rumours I assume you are referring to have said that the PS5 is more powerful (but it remains to be seen if that’s true). More powerful is not the same as a notable performance gulf.

There are other developers on record saying that both companies “went” for it, that they didn’t screw up like last generation, and that gamers would be happy with where both companies landed.

And then Matt on Resetera “guesses” that it’s likely Scarlett will come out more powerful. But once again, that’s not a notable performance gulf.

To get a notable performance gulf, you’re either operating at different price points, subsidizing the hardware, or ignoring power constraints; or someone screwed up royally chasing TV again.

The latter isn’t going to happen so that is out.

When we talk about subsidies, that’s a dangerous path for Sony to walk, because as a company they typically don’t have the money to be loss-leading. That is a significant and unnecessary risk to their business which may not prove to be useful at all: you need to sell subs and software just to make up the subsidy before you profit. This type of strategy worked back in the day because, whatever console makers tried to do, the alternative price points were out of reach for consumers - comparable PC rigs cost several thousand dollars. I don’t think that is the case today. We are looking at this coming year as the first step towards AAA streaming game services.

And that’s before you consider threats from streaming (no console required), and that their main competitor has been working diligently on Game Pass across console/PC/streaming, flooding the market with a super low entry point into gaming. That means those 3P titles, despite being less performant, may end up being purchased on Xbox instead, relegating the PS5 to an exclusives machine (the worst-case scenario if you are loss-leading your hardware).

And I think we can look at power/performance the same way we look at price/performance. There is an optimal point on the cost/performance curve; once you exceed it, the costs for yield and cooling continue to rise, but performance doesn’t move up much further.
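A toy illustration of that knee (all numbers below are made up; dynamic power scales roughly with V^2 x f, and voltage has to rise to sustain higher clocks near the top of the curve):

```python
# Toy model of the cost/performance knee: dynamic power ~ C * V^2 * f,
# with voltage rising to sustain higher clocks. All figures are hypothetical.

points = [(1.4, 0.90), (1.6, 0.95), (1.8, 1.05), (2.0, 1.20)]  # (GHz, volts), made up
base_clock, base_v = points[0]
for clock, volts in points:
    perf = clock / base_clock                              # performance ~ clock
    power = (volts / base_v) ** 2 * (clock / base_clock)   # relative dynamic power
    print(f"{clock:.1f} GHz: {perf:.2f}x perf for {power:.2f}x power")
```

The last few hundred MHz cost disproportionately more in power (and in yield, since fewer dies hit the higher bin), which is exactly the region console designers try to stay out of.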

I don’t have a lot of faith in seeing any notable differences. If people can’t notice the difference between the X1X and the 4Pro, the likelihood they will see a difference between XBN and the PS5 is even lower.
 
I would disagree with that.

“Definitely more powerful” was what was said.

With the Xbox One, MS apparently went with DDR3 for cost reasons. Sony was initially going with only 4 GB of GDDR5 for cost reasons as well, which would have put the systems a bit closer in performance. A similar justification could be made for avoiding HBM2, or perhaps just using less of it if MS did indeed build an L4 cache out of HBM.

HBM2 is far more power efficient than GDDR6, so it could free up significantly more power (100 W?) to use elsewhere: perhaps more memory and a better GPU? The choice is not an obvious one as high bandwidth memory could complicate manufacturing. The extra risk/cost could be mitigated with more investment in solving related problems and/or looking at the longer term picture: what happens to system costs 1-2 years down the road with the ‘slim version’?
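As a very rough bound on what the DRAM itself could save (the pJ/bit figures below are ballpark values from public presentations, not vendor specs, and they exclude controller, PHY, and board-level effects):

```python
# Very rough DRAM access power at equal bandwidth, using ballpark energy-per-bit figures.
def dram_watts(bandwidth_gbs, pj_per_bit):
    # W = (bytes/s) * 8 bits/byte * (J/bit)
    return bandwidth_gbs * 1e9 * 8 * pj_per_bit * 1e-12

bandwidth = 560  # GB/s, an assumed target
print(f"GDDR6 (~7.5 pJ/bit): {dram_watts(bandwidth, 7.5):.0f} W")   # ~34 W
print(f"HBM2  (~3.9 pJ/bit): {dram_watts(bandwidth, 3.9):.0f} W")   # ~17 W
```

The gap at the interface level looks more like tens of watts than 100 W, though the simpler PHY and shorter interconnect of an on-package HBM stack would add some savings on top of that.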

So, it seems possible to me that similar costs but different architectures could lead to a performance difference of greater than 20%.

(tradeoffs between HBM2 & GDDR6)

Now, mass production is probably 12 months away, and Sony is a little bit ahead in development, so maybe there is more to see, but I would still not be so dismissive this late in the game.
 