Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

He did say things, though, that must have had data behind them to back them up. He just chose not to be precise with the things he said.
That sort of data is completely beyond anything outside of the engineering laboratories though. Sure, they could list power details and test cases and frequencies and whatnot that came up in the research, but expecting them to share that info...?? I feel "the dog's got the bone" is kinda happening here - people who'd like to know specifics are starting to stretch their expectations beyond what's sane instead of just accepting that the talk gave as much as made sense factoring in an unknown future. In the past, we had clock speeds and bandwidths and everyone was happy. We never had any specifics on efficiencies.

Like, look at the SSD discussion and whether PC SSDs will be fast enough for next-gen games. We get specs like "7 GB/s" being talked about. No-one is asking the manufacturers "how often do we hit those peak speeds? What are the lowest transfer speeds?" There's no 'Spanish Inquisition' on details for anything except Sony's clockspeeds. What about MS's raytracing figures? "25 Teraflops of raytracing performance." How often? What are the bottlenecks? What are the typical attained performance rates?

This interrogation of Sony's clock speeds is reaching beyond sensible discussion into the ridiculous. It's a discussion point that's running away with itself; I think everyone needs to calm down and recalibrate their expectations! We're not going to get Sony or AMD's research data.
 
Data =/= a significant body of data. You can engineer something and do testing and conclude it works most of the time, but you have to do an exhaustive amount of testing to say it works xx.xx% of the time. I would have thought Sony would want to be absolutely certain of things like this before throwing such statistics about publicly. It may be something they don't intend to make public, like many other specifics regarding their console's workings.
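To put a rough number on "exhaustive": here's a minimal sketch, assuming the simple binomial "rule of three" bound (my own illustration, nothing Sony has published), of roughly how many clean test runs you'd need before you could even claim a given failure rate at ~95% confidence.

```python
# Hypothetical illustration: how much testing backs a "works xx.xx% of the time" claim?
# Uses the "rule of three": with n runs and zero observed failures, the ~95% upper
# bound on the true failure rate is roughly 3 / n.

def runs_needed(max_failure_rate: float) -> int:
    """Approximate number of all-passing runs needed to bound the failure rate at ~95% confidence."""
    return int(round(3 / max_failure_rate))

for claim in (0.01, 0.001, 0.0001):  # i.e. "works" 99%, 99.9%, 99.99% of the time
    print(f"failure rate <= {claim:.2%} -> ~{runs_needed(claim):,} clean runs")
```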

Again, I don't see how they could have determined the clocks they could safely set without having significant data. It is a vital decision that they wouldn't have left up to guesswork.
 
Again, I don't see how they could have determined the clocks they could safely set without having significant data. It is a vital decision that they wouldn't have left up to guesswork.
Because semiconductor design is a largely predictable engineering discipline. Experienced engineers will know how their design will work before it's fabbed out because it's been simulated to death. The uncertainties tend to come from the fabrication process and quality of the wafers. AMD aren't throwing random designs at the wall to see what sticks. :nope:

But in terms of what the actualities are, until there are a lot of devs running a lot of games running a lot of code on production hardware, tossing out specific numbers is unwise.
 
Can you point me to it? I've searched Google's transcript of Cerny's Road to PS5 and couldn't see anything, but equally there are references to "sikhs" [seeks] :LOL:

Just watch the whole video while not doing anything else. Cerny talked at length about third-party drives that will be certified, and said those drives will likely not be available at launch. Also, Cerny said those drives have to be somewhat faster than the PS5's internal drive due to some overhead.

Yep. But we don't really know what the interface between the PS5's I/O controller and a certified SSD's controller will be.
And surely it will not be the same as the PC-oriented interface.

The external drive has to be PCIe 4 based, going by what Cerny said. It would only make sense that the internal drive is also PCIe 4 based. Otherwise it would be extra difficult to meet the compatibility claims Sony is making.
 
Just watch the whole video while not doing anything else. Cerny talked at length about third-party drives that will be certified, and said those drives will likely not be available at launch. Also, Cerny said those drives have to be somewhat faster than the PS5's internal drive due to some overhead.

The only thing I can find is this:

Mark Cerny: We'll also be doing some compatibility testing to make sure that the architecture of particular M.2 drives isn't too foreign for the games to handle. Once we've done that compatibility testing, we should be able to start letting you know which drives will physically fit and which drive samples have benchmarked appropriately high in our testing. It would be great if that happened by launch, but it's likely to be a bit past it, so please hold off on getting that M.2 drive until you hear from us.

I took this more to be Sony publishing a list of known tested drives that are compatible, rather than some formal certification, because that would undermine what Mr Cerny said about being able to use standard commercial drives - as long as they physically fit and are fast enough. I envisaged this being like my Synology having a compatibility list of HDDs that work with their NAS units.

The XBSX will never throttle because of heat.

Then it will just.. melt?
 
External drive has to be pcie 4 based based on what cerny said. It would only make sense internal drive is also pcie 4 based. Otherwise it would be extra difficult to reach compatibility claims sony is making.

That's a hardware interface.
But the main part is the software interface between controllers.
For example, whether all certified drives will need to be Open Channel compatible, or use some other specific FTL.
Reaching the PS5's speeds with a "vanilla" FTL is impossible, I think.

The XBSX will never throttle because of heat.

That's unrealistic. Will it shutdown then?
 
That's a hardware interface.
But the main part is the software interface between controllers.
For example, whether all certified drives will need to be Open Channel compatible, or use some other specific FTL.
Reaching the PS5's speeds with a "vanilla" FTL is impossible, I think.

The proof will be in the certification pudding. I'll believe Cerny until he lies. So far he has been a reputable source.
 
Apparently it does. Here's the relevant clip starting right at the beginning of the 3rd party drive portion. The need for "a little bit" more performance due to the lack of priority levels on NVMe is discussed shortly in.


Mr. Cerny sure does love his vagueries, doesn't he? "A little bit", "Most of the time", etc.
Nice, so it does have to combine them. They will have to test each model, so they cannot know in advance how much more is enough to pass; it will vary per model and per underlying architecture: number of NAND channels (fewer chips allow less concurrency), controller latency... and number of priority queues. It looks like priority queues are an important one; it means the games themselves have access to more than one.
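For anyone wondering what priority levels actually buy you, here's a toy sketch (purely illustrative; not how the PS5 I/O unit, NVMe arbitration, or any real driver is implemented) of a weighted dispatcher draining multiple priority queues, so latency-critical reads don't queue behind bulk streaming.

```python
from collections import deque

# Toy I/O dispatcher with multiple priority queues (illustration only).
# Higher-priority queues are drained more often per pass (weighted round-robin).
queues = {0: deque(), 1: deque(), 2: deque()}   # 0 = highest priority
weights = {0: 4, 1: 2, 2: 1}                    # requests taken per pass

def submit(priority: int, request: str) -> None:
    queues[priority].append(request)

def dispatch_pass() -> list:
    """Take up to weights[p] requests from each queue, highest priority first."""
    issued = []
    for prio in sorted(queues):
        for _ in range(weights[prio]):
            if queues[prio]:
                issued.append(queues[prio].popleft())
    return issued

submit(0, "texture tile needed this frame")
submit(2, "background asset prefetch")
submit(0, "geometry page")
print(dispatch_pass())  # urgent requests come out ahead of the prefetch
```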

He cannot give numbers he doesn't have, and he has a reputation for avoiding technical claims he is not in a position to verify. Like, for example, promising every game will work in BC without having actually tested all of them, and then having to backtrack.
 
He just chose not to be precise with the things he said.

It is very uncommon for a company to talk about anything that might seem negative when you unveil your product for millions to watch.

Then it will just.. melt?

Probably shutdown, just like the PS5 or any device would or should do, you know, to prevent fires or damage to the hardware.

The PS5 GPU seems very similar to the 5700XT: same 36 CUs, same bandwidth, and almost the same boost as some aftermarket ones. They suspect that the more you push the clocks up, the less you get back.
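On the "less back per MHz" point, a quick bit of back-of-the-envelope arithmetic (my own, using only the publicly quoted 36 CU / 2.23 GHz / 448 GB/s PS5 figures plus a hypothetical lower clock for comparison): paper FP32 throughput scales linearly with clock, but the memory bandwidth feeding it stays fixed, so bandwidth per teraflop shrinks as the clock rises.

```python
# Back-of-the-envelope: paper FP32 scales with clock, memory bandwidth does not.
CUS = 36                 # quoted PS5 CU count
BANDWIDTH_GBS = 448      # quoted GDDR6 bandwidth, fixed regardless of GPU clock

def tflops(clock_ghz: float) -> float:
    # 64 FP32 lanes per CU, 2 ops per cycle (FMA)
    return CUS * 64 * 2 * clock_ghz / 1000

for label, clock in (("hypothetical 2.0 GHz", 2.0), ("quoted 2.23 GHz peak", 2.23)):
    tf = tflops(clock)
    print(f"{label}: {tf:.2f} TFLOPS, {BANDWIDTH_GBS / tf:.1f} GB/s per TFLOP")
```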
 
Why? AVX suddenly runs at full speed on AMD CPUs all the time?
GPUs do not ever throttle because of the heat?
It's getting ridiculous.

This isn’t the PC market. All the fiddling around with TDP and boost clocks is a PC thing influenced by marketing and competition.

Boost clocks and PowerTune are a result of AMD having to deal with power viruses and competition with Nvidia. At one time AMD was forced to conservatively clock its GPUs due to power viruses, so it now uses technologies like PowerTune to manage P-states, letting it maximize clock rates on its products and deal with edge cases by throttling.

AMD’s TDP figures used to be the maximum amount of power drawn by its CPUs, until Intel started fiddling around and changed its definition, calculating TDP using a series of benchmarks.

AMD’s TDP is rated in watts, but it's not electrical watts; it's what AMD calls “thermal watts”. It's a measure of heat, not power consumption. There is nothing stopping a console manufacturer from calculating TDP based on its original definition and designing its product around that figure.
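For reference, the "thermal watts" idea can be written as a simple formula; the version below is the commonly cited form of AMD's thermal definition (case temperature headroom divided by heatsink thermal resistance), with all numbers made up for the example, so treat it as an illustration rather than an official spec.

```python
# Illustration of a "thermal watts" style TDP: heat the cooler must move,
# not electrical power drawn at the wall.
# Commonly cited form: TDP = (T_case_max - T_ambient) / theta_ca
# All values below are hypothetical.

t_case_max = 62.0   # max allowed lid/case temperature, deg C (hypothetical)
t_ambient  = 42.0   # assumed intake air temperature, deg C (hypothetical)
theta_ca   = 0.20   # heatsink thermal resistance, deg C per watt (hypothetical)

tdp_thermal_watts = (t_case_max - t_ambient) / theta_ca
print(f"thermal TDP ~= {tdp_thermal_watts:.0f} W")   # -> 100 W
```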

PC CPUs and GPUs are designed and marketed without guaranteeing their maximum power consumption in every specific edge case. So any time you get a case that maximizes power consumption at frequencies lower than marketed, those products can throttle.

A console manufacturer doesn’t have to design their consoles in such a fashion. They are free to design consoles around max power draws and fixed frequencies.
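To make that contrast concrete, here's a toy sketch (entirely my own simplification, not any console's or AMD's actual firmware) of the two approaches: a PC-style governor that boosts opportunistically and throttles when a power limit is exceeded, versus a console-style design where the PSU and cooling are budgeted for worst-case draw at one fixed clock.

```python
# Toy contrast between two clocking philosophies (not real firmware).

POWER_LIMIT_W = 200.0   # hypothetical board power limit

def pc_style_clock(base_mhz: float, boost_mhz: float, measured_power_w: float) -> float:
    """Boost opportunistically, throttle back when the power limit is exceeded."""
    if measured_power_w > POWER_LIMIT_W:
        return base_mhz          # edge case / power virus -> drop below marketed boost
    return boost_mhz

def console_style_clock(fixed_mhz: float) -> float:
    """Clock never moves: power delivery and cooler sized for worst-case draw at this clock."""
    return fixed_mhz

print(pc_style_clock(1600, 1900, measured_power_w=230))  # throttles to 1600
print(console_style_clock(1825))                          # always 1825
```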
 
A console manufacturer doesn’t have to design their consoles in such a fashion. They are free to design consoles around max power draws and fixed frequencies.

Yep, that's why Xeons drop to 50% of the freq when AVXing.
While the desktop CPUs don't.
It is getting ridiculous.
 
Yep. But we don't really know what the interface between the PS5's I/O controller and a certified SSD's controller will be.
And surely it will not be the same as the PC-oriented interface.

It's PCIe 4.0 just like the interface between the internal SSD and the APU. That was specified in Cerny's presentation.
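As a sanity check on the numbers (my own arithmetic, using the standard PCIe figures): a Gen4 x4 link runs at 16 GT/s per lane with 128b/130b encoding, which leaves comfortable headroom over the 5.5 GB/s raw figure quoted for the internal drive.

```python
# PCIe 4.0 x4 theoretical bandwidth vs. the quoted PS5 raw SSD speed.
lanes = 4
gt_per_s = 16                 # PCIe 4.0: 16 GT/s per lane
encoding = 128 / 130          # 128b/130b line encoding

link_gbs = lanes * gt_per_s * encoding / 8   # gigabytes per second
print(f"PCIe 4.0 x4 link: ~{link_gbs:.2f} GB/s")            # ~7.88 GB/s
print(f"Headroom over 5.5 GB/s raw: ~{link_gbs - 5.5:.2f} GB/s")
```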
 