Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
I mean, cutting cost while still allowing higher speed for the majority of the RAM is a pretty large advantage, especially since there is already OS-reserved RAM that does not need full speed. The difference could be $50 of BOM for the console versus the PS5.

I am going to assume even the CPU will use some of the fast RAM, and the GPU could still use the slower RAM as well. There is always data that does not need full speed.

Sure, I completely understand the reasoning behind the mixed density, but the reason for using it is not that it provides a performance advantage; it's that it meets their goals at acceptable performance for an acceptable BoM.
 
Sure, I completely understand the reasoning behind the mixed density, but the reason for using it is not that it provides a performance advantage; it's that it meets their goals at acceptable performance for an acceptable BoM.
Agreed.

Though we have no details on their memory controller, it's safe to assume there is nothing special there for this split pool.
 
Did we get any details about the inner workings of the RT acceleration system? The absence of such info among the flood of info covering other parts is not a good sign IMO.

The only thing I've seen mentioned was that developers can create their own custom BVH structures offline which can be used by the hardware acceleration.

It's a unified pool on an imbalanced bus. To maximize bandwidth utilization, all channels need to receive an equal number of requests over a time slice, to keep the queues in a healthy range. The simple no-brainer method is to spread the address space equally across all chips, which requires identically sized chips; the 10GB region works that way. Because the additional 6GB lives only on some channels, accesses to that slower region stall other requests, as if the entire bus were running at the lower speed. If it's used very lightly, it won't have much impact, if any. But if there's a lot of throughput to this partition, the average bandwidth will go down significantly.
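To put rough numbers on that stalling effect, here's a minimal sketch. The per-chip figure (10 chips × 32-bit @ 14Gbps ≈ 56 GB/s each) and the "busiest channel sets the finish time" model are my assumptions, not anything Microsoft has published:

```python
# Sketch: effective bandwidth on an imbalanced GDDR6 bus (assumed numbers).
# The "fast" 10 GB region interleaves across all 10 chips; the extra 6 GB
# lives only on the 6 double-capacity chips. The busiest channel sets the
# finish time for a mixed workload.

PER_CHIP_GBPS = 56.0   # GB/s per 32-bit channel at 14 Gbps (assumption)
FAST_CHIPS = 10
SLOW_CHIPS = 6

def effective_bandwidth(fast_bytes: float, slow_bytes: float) -> float:
    """Average GB/s when moving fast_bytes from the 10-chip region and
    slow_bytes from the 6-chip region concurrently."""
    # The 6 big chips carry their share of fast traffic plus ALL slow traffic.
    busiest = fast_bytes / FAST_CHIPS + slow_bytes / SLOW_CHIPS
    return (fast_bytes + slow_bytes) * PER_CHIP_GBPS / busiest

print(effective_bandwidth(1.0, 0.0))   # all fast traffic -> 560 GB/s
print(effective_bandwidth(0.0, 1.0))   # all slow traffic -> 336 GB/s
print(effective_bandwidth(0.5, 0.5))   # 50/50 mix -> 420 GB/s average
```

Under this toy model, even a 50/50 traffic split pulls the average well below the headline 560 GB/s, which is why the slow partition wants low-traffic data.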

The OS partition isn't expected to be doing much during gameplay, so it's perfect there, but the special slower partition will need to be used carefully for low-access buffers. Maybe perfect for I/O? What are the lowest-access data types in a game engine? Code?

From the DF article.

In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen.

So, it looks like a fair bit of stuff can be thrown into the slower game-accessible memory pool without significantly impacting GPU use of the faster memory.
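The split Goossen describes adds up neatly; a quick back-of-envelope check of the figures from the DF quote:

```python
# Back-of-envelope check of the XSX memory split from the DF quote (GB).
GPU_OPTIMAL = 10.0   # full-speed pool ("GPU optimal memory")
STANDARD = 3.5       # slower pool, game-accessible ("standard memory")
OS_RESERVED = 2.5    # slower pool, OS + front-end shell

game_total = GPU_OPTIMAL + STANDARD
print(game_total)                  # 13.5 GB available to games
print(game_total + OS_RESERVED)    # 16.0 GB of physical GDDR6
```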

Regards,
SB
 
XBSX will either have to have SHAPE or emulate it in software. Hopefully, they not only integrate it, but actually enhance it further.
 
Not quite. If that were so, you would get only 6×32 = 192 bits of bus width.
It accesses all 10 chips when using the first gigabyte (per chip), and only the first six chips when using the second gigabyte (per chip).
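A small sketch of the interleaving described here; the exact chip layout (six 2GB chips plus four 1GB chips, each on a 32-bit channel) is an assumption consistent with a 320-bit bus:

```python
# Assumed layout: six 2GB chips + four 1GB chips on a 320-bit bus.
# The first 10 GB stripes one gigabyte across every chip; the remaining
# 6 GB stripes only across the six 2GB chips.
CHIP_SIZES_GB = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
FAST_REGION_GB = 10   # one GB taken from every chip

def bus_width_bits(addr_gb: float) -> int:
    """Effective bus width for the region containing this address."""
    chips = 10 if addr_gb < FAST_REGION_GB else 6
    return chips * 32   # each GDDR6 chip contributes a 32-bit channel

print(bus_width_bits(4.0))    # fast region: 320-bit
print(bus_width_bits(12.0))   # slow region: 6*32 = 192-bit
```

So the 192-bit figure only applies to accesses landing in the upper 6GB; the first 10GB still sees the full 320-bit bus.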

The question is whether the XSX setup can supply the CPU with ample bandwidth. Desktop CPUs get by without hundreds of GB/s of bandwidth.
 
Overall, pretty close between the two; actually closer than the PS4/Xbox One were, but with more architectural differences.

So some parts of the PS5 GPU will run faster than the Xsx, but the Xsx is wider. And oooh, a slightly faster CPU for the Xsx as well, barely. Same RAM, both NVMe SSDs. MS's weird insistence on sustained throughput for SSD speed is only useful in a handful of titles: open-world games where you zip around really fast. But double the bandwidth means, bursty or not, the PS5 will load bursts twice as fast, which is when players notice it anyway (starting up a game, a match, fast travel, whatever). Also weird to see both having compression blocks for zlib and stuff.
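The burst-load claim is easy to sanity-check with the publicly quoted raw figures (~5.5 GB/s PS5, ~2.4 GB/s XSX); the 8 GB burst size below is a made-up example, and compression would shrink both times:

```python
# Quick arithmetic behind "loads bursts twice as fast", using the quoted
# raw (uncompressed) SSD throughput figures.
PS5_RAW_GBPS = 5.5
XSX_RAW_GBPS = 2.4

burst_gb = 8.0   # hypothetical fast-travel or match-start burst
print(round(burst_gb / PS5_RAW_GBPS, 2))   # ~1.45 s
print(round(burst_gb / XSX_RAW_GBPS, 2))   # ~3.33 s
```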

Price is hard to determine, but may be pretty close as well. The PS5 CPU is less costly than the Xsx, but their GPUs might be similar in price? With the high clock speeds, yields will go down on the PS5 even as its much smaller GPU die drives them up... uhh, anyone have the TSMC 7nm yield curves for clocks/mm?

All that being said, it does seem like the PS5 will be easier to develop for. No split memory arch to take into account, just a straight 256-bit bus to 16GB of RAM. A super easy API for the decompression block is promised, no "magic" texture mip dumping that might not work at all and will obviously break for many raytracing applications anyway, etc. The one caveat is Cerny's weird obsession with audio. Like, I get it, good audio design is cool and all. But a custom audio engine, taking pictures of your ears... bro, most games don't even have good sound design no matter their budget (cough, most of the $200+ million RDR2, cough).

I'd expect something similar to last time but inverse. If a title runs at 2160p on Xsx it runs at 1890p or whatever on PS5, but then the PS5 loads the game up faster and stuff. Oooh, whatever. Obviously MS will have the "higher number advantage" but I'd imagine exclusives will play an even larger role, and MS is just terrible at that right now (No fucking next gen exclusive games for at least a year? Shoot yourself now MS).
 
I mean, cutting cost while still allowing higher speed for the majority of the RAM is a pretty large advantage, especially since there is already OS-reserved RAM that does not need to be full speed. The difference could be $50 of BOM for the console versus the PS5.

I am going to assume even the CPU will use some of the fast RAM, and the GPU could still use the slower RAM as well. There is always data that does not need full speed.

This shouldn't be a price advantage at all; it's just required by the odd ratio of GPU compute to bus width. Odds are both consoles use pretty much the same GDDR6 speed grade with different buses and controllers. They're both just buying either two 8Gb GDDR6 modules or one 16Gb one, whichever is cheaper.
 
So, any educated guesses on how much the PS5 will be throttling? :)
I expect a scaling engine that cuts off earlier.
We have to make reasonable assumptions about how it works: the CPU must take priority, and if there is power budget remaining, it goes to boosting the GPU.
In this scenario, as long as you keep the pressure on the CPU down, you're not going to lose that boost.

That being said, I think you'll see more dramatic differences once things like uncapped or high frame rates, or demanding CPU code, come into play. If the CPU is busy, that'll throttle the GPU down, which should scale the resolution down with it.

Beyond that, for a game pushing the engine hard everywhere, I'd be interested to see the results. The whole system will be under heavy load, resulting in further reductions on the GPU side of things.
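The priority scheme speculated above can be sketched in a few lines. To be clear, every number and the allocation rule here are my assumptions for illustration, not Sony's actual boost algorithm:

```python
# Hedged sketch of "CPU takes priority, leftover power budget boosts the GPU".
# All wattages are made-up figures; only the 2.23 GHz GPU cap is public.

POWER_BUDGET_W = 200.0    # total APU power budget (assumption)
GPU_MAX_W = 150.0         # GPU draw at its clock cap (assumption)
GPU_MAX_GHZ = 2.23        # PS5's stated maximum GPU clock

def gpu_clock(cpu_draw_w: float) -> float:
    """GPU clock after the CPU takes its share of the budget."""
    gpu_power = min(GPU_MAX_W, POWER_BUDGET_W - cpu_draw_w)
    # Crude assumption: clock scales linearly with available power.
    return GPU_MAX_GHZ * max(0.0, gpu_power) / GPU_MAX_W

print(gpu_clock(40.0))   # light CPU load -> GPU holds its top clock
print(gpu_clock(70.0))   # heavy CPU load -> GPU down-clocks
```

The point of the sketch is the shape of the behavior, not the numbers: below some CPU draw the GPU clock is flat at its cap, and past that threshold every extra CPU watt comes straight out of the GPU clock.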
 
Also weird to see both having compression blocks for zlib and stuff.

It will have encryption at rest; without that, it's easily hacked the day after launch.
So they must have a decrypt/decompress engine.

The PS5 CPU is less costly than the Xsx, but their GPUs might be similar in price?

It's an APU.

The question is whether the XSX setup can supply the CPU with ample bandwidth. Desktop CPUs get by without hundreds of GB/s of bandwidth.

If you need bandwidth for the CPU, something has gone terribly wrong.
The CPU is a dump for all the unoptimized, branchy, horrible crap code that is used for game logic.
That's it.
 
The architects of both the PS5 and XBSX went in the exact opposite direction, though.

BTW, after looking at both APUs, I don't think they did.
They have mesh shaders, GPU work creation, more fine-grained GPU compute, etc.
Next gen can probably render a frame without using the CPU at all.
 
And oooh, a slightly faster CPU for the Xsx as well, barely.

3.8GHz vs 3.5GHz at the max is not too small a difference, I think.


Same RAM type, but much higher bandwidth on the XSX.

Then 2-3TF more... overall it's faster, also for RT.

All that being said, it does seem like the PS5 will be easier to develop for.

With dynamic clocks, easier?

MS is just terrible at that right now (No fucking next gen exclusive games for at least a year? Shoot yourself now MS).

Wouldn't call it terrible. They upped the ante; at least they demo'd what to expect ;)
 
It will have encryption at rest; without that, it's easily hacked the day after launch.
So they must have a decrypt/decompress engine.

It's an APU.

Just breaking it down by part to get a sense of what yields will be: more CPUs will test right for the PS5, and the overall die should be smaller, but the high GPU clock rates are worrisome. So the PS5 stands to have a possibly cheaper die package, but since both have to hit their numbers, the difference might be very limited.
 
You're not factoring in the CPU as a consumer of bandwidth; on the PS5, with its narrower bus, won't CPU memory accesses "block" the GPU more often?
Weren't we hearing similar arguments about bandwidth versus flops with the Xbox One and PS4? It seems like more marketing than an actual point.

There may be a point about balance; maybe streaming assets is more important than more flops at a certain point, but if that's the case, Sony should be presenting some info to support that design philosophy.
 
There may be a point about balance; maybe streaming assets is more important than more flops at a certain point, but if that's the case, Sony should be presenting some info to support that design philosophy.

Yes, that's what I want to see. Sony have obviously put more effort into storage speed, and I assume there's a reason for it. Cerny said they were targeting 5GB/s; I assume there is a reason for that. If it's just for load screens, it's nice to have almost instant loads, but I don't think load screens justify the resources they put into it.
 
Yes, that's what I want to see. Sony have obviously put more effort into storage speed, and I assume there's a reason for it. Cerny said they were targeting 5GB/s; I assume there is a reason for that. If it's just for load screens, it's nice to have almost instant loads, but I don't think load screens justify the resources they put into it.
It may seem surprising, but the audio focus, in my opinion, is likely going to be a waste of resources, in the sense that the tech, as good as it may be, won't get utilized in many titles. I just don't see developers investing the resources to do this.

And ironically, sound is probably an area that could dramatically increase immersion, given how mature graphics are at this point.

RTRT would be the other obvious opportunity, but I just don't think this generation has the resources to do a lot with it.

Maybe the audio stuff will tie into PSVR; that could potentially be a very good application.
 
BTW, after looking at both APUs, I don't think they did.
They have mesh shaders, GPU work creation, more fine-grained GPU compute, etc.
Next gen can probably render a frame without using the CPU at all.
I think the CPU will still be involved, although there is an interesting rise in other processors, besides the CPU and GPU, getting involved too.

3.8GHz vs 3.5GHz at the max is not too small a difference, I think.
That's with SMT inactive on the Xbox Series X. I wonder if Sony could implement an SMT and a non-SMT mode, or whether they reviewed the option in the past and opted not to.
With the more optimistic boost algorithm, maybe the situation at the top could have been reversed.

Yes, that's what I want to see. Sony have obviously put more effort into storage speed, and I assume there's a reason for it. Cerny said they were targeting 5GB/s; I assume there is a reason for that. If it's just for load screens, it's nice to have almost instant loads, but I don't think load screens justify the resources they put into it.
Cerny mentioned a more general freeing up of level design and asset storage: levels could be designed without barriers or sequences catering to MB/s-class bandwidth, and asset duplication would become much less necessary. I'm still not clear where the latency figures land for the upcoming gen, which might influence how aggressive developers can be with buffering.
 
It may seem surprising, but the audio focus, in my opinion, is likely going to be a waste of resources, in the sense that the tech, as good as it may be, won't get utilized in many titles. I just don't see developers investing the resources to do this.

I mainly play only first-party games on consoles, and I have a feeling they would support the audio. They bloody well better.
 