Estimate a BOM delta for PS5 and XBSX

Manufacturing a part in-house does not necessarily mean cost savings over sourcing it from a third party. Figuring out the actual amount, however, will be near impossible unless Sony chooses to share.
They won't be manufacturing in-house. I imagine they would use TSMC's cheaper, more ubiquitous nodes like 28nm or 16/12nm, which Phison uses for their own controllers, or use Samsung's fabs like they have in the past for things like the PS Vita, if I recall correctly.
 
People seem to think Sony engineers are a bunch of idiots who don't know an SSD can throttle, but if they had to detail every little thing, the video would have been three hours long.

The teardown will be very interesting, when people will understand that Sony engineers are not a bunch of idiots who don't know what they are doing, are probably smarter than B3D forumers, and are certainly no less capable than MS hardware engineers. They will understand it is part of the BOM.
No one said that. Thermal issues are solved problems; it's not like they are building a quantum computer here. If they have heating issues, they just put in more cooling, which results in more BOM. There's no rocket science about it. The game is being able to cool everything successfully while ensuring the hardware is stable and reliable for 7 years and keeping the cost and noise low.

How so? Sony targeted a minimum of 5 GB/s but managed to exceed that with 5.5. As ever, let's assume Sony can't deliver... based on what?
It's my reading of what Cerny was trying to do. When someone says they're targeting at least something, they're aiming at their potential maximum number; in this case that is the ideal number they can achieve disregarding thermal throttling. I'm just looking to see if they ever intend to address a guaranteed bandwidth number. It could very well be 5.5 GB/s for PS5, which is fine, but they'll need additional BOM or one hell of a design to ensure that 3P external NVMe drives will either (a) never throttle, or (b) throttle down to no less than 5.5 GB/s as a minimum performance number.
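
To make that concrete, here's a rough sketch of what a sustained-read qualification pass for an external drive might look like. The device path, window length, duration, and the 5.5 GB/s floor are illustrative assumptions, not anything Sony has described:

```python
import os
import time

# Hypothetical parameters for illustration only.
DEVICE = "/dev/nvme1n1"        # drive under test (assumed path)
FLOOR_GBPS = 5.5               # assumed guaranteed minimum
CHUNK = 64 * 1024 * 1024       # 64 MiB per read
SPAN = 100 * 1024**3           # cycle reads across the first 100 GiB
WINDOW_S = 1.0                 # measure throughput per one-second window
DURATION_S = 2 * 60 * 60       # two hours of sustained reads

def sustained_read_check() -> float:
    """Return the worst one-second read throughput (GB/s) over the run."""
    # A real harness would open with O_DIRECT and aligned buffers so the
    # page cache doesn't hide the drive's actual behaviour.
    fd = os.open(DEVICE, os.O_RDONLY)
    worst = float("inf")
    offset = 0
    start = window_start = time.monotonic()
    window_bytes = 0
    try:
        while time.monotonic() - start < DURATION_S:
            window_bytes += len(os.pread(fd, CHUNK, offset))
            offset = (offset + CHUNK) % SPAN
            now = time.monotonic()
            if now - window_start >= WINDOW_S:
                gbps = window_bytes / (now - window_start) / 1e9
                worst = min(worst, gbps)
                if gbps < FLOOR_GBPS:
                    print(f"dipped below floor: {gbps:.2f} GB/s")
                window_bytes, window_start = 0, now
    finally:
        os.close(fd)
    return worst

if __name__ == "__main__":
    print(f"worst window: {sustained_read_check():.2f} GB/s")
```

The worst-case window, not the peak, is the number a platform holder would actually have to guarantee.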

Probably a bit harsh, but yeah - sometimes it’s like ‘how can Sony know something we don’t’.

Except every step of the way with PS5 we have countless 'I can't see how Sony can do it' comments. First everyone downplayed the variable clocks, and now the SSD (although that was originally downplayed back during the Wired reveal).

What I want to know is, why aren't people pulling apart the MS Velocity Architecture claims that devs have instant access to 100 GB of data? That's not misleading at all!
We honestly just went through this with the asymmetric memory discussion, debating whether MS is lying about its 560 GB/s or whether it's actually 408 GB/s due to some math. Somehow people were convinced that adding more capacity results in less bandwidth, and that was addressed. But setting that aside, this isn't a 'Sony can't achieve it' discussion. Look at how much MS spent on cooling their internal and external SSD solutions, and they are running at half the bandwidth of PS5. We know significantly more heat is generated at higher bandwidth. So what will Sony's solution look like? What will their BOM look like?

Next gen, I'm fairly positive the console I purchase will be a PS5, and I will go PC for all my other needs. I've never owned one before, aside from programming for it, so this will be my first time owning the console. I'm not invested in their ecosystem, and I don't have fond memories of their games. But if I'm being real here: if Sony announced the PS6 as a quantum computer, there would be a subset of fans who wouldn't question it at all and would of course ask why we aren't trusting Cerny to build a quantum computer. The rest of the world would be asking how they managed that in such a small form factor, when cooling to near 0 Kelvin usually takes up a room.

I'm just on the inquisitive side of things. Sony's SSD solution sounds far too good to have zero drawbacks. Sony's clocking solution also sounds too good to be true: they got all of the super-high clocks with none of the downsides. All of it was presented with no drawbacks, and we know there are drawbacks, often related to heat, yields, form factor, TDP, etc. I'm looking for them, and it won't dissuade me from buying the console, but perhaps I'll wait for a second revision/slim model just in case, if I'm not fully sold on version 1.

XSX was presented with full transparency about its drawbacks: the fixed clocks, the slower SSD, the asymmetric memory, the removal of the optical out, the mega cooling. They had people come in, work with it hands-on, and assemble it. They demoed games with it. They demoed its features. We've seen enough that I don't need to question every single thing about it when I can see where they made concessions, and price is going to be one of them as well.

And for the sake of this discussion, I'm curious to see how Sony did it. It's not particularly hard to solve heating; it just costs more money if you can't find an elegant solution to do it for less. A lot of the discussion about PS5 has been around the $399 price point, and I actually think it's going to be very close to the XSX given these challenges. That's all I'm thinking here. They could very well be the same price.
 
It could very well be 5.5 GB/s for PS5, which is fine, but they'll need additional BOM to ensure that 3P external NVMe drives will either (a) never throttle, or (b) throttle down to no less than 5.5 GB/s as a minimum performance number.

Yes, a sustained number would be interesting, as games optimized for an SSD are going to be transferring at those rates for hours and hours straight.

I'm not invested in their ecosystem, and I don't have fond memories of their games.

It's the other way around for me. I've always had a PS, but there's less and less reason for me to own one as more and more previously exclusive titles appear on my main platform (PC). I dunno, it depends; most likely I'll get one down the line. It won't be that much different though; a handful of exclusives doesn't really justify 500 dollars, maybe. It's a different story if the PS5 is your main platform and you play all games there, not just the exclusives. But that will never be my intention.

No fond memories? I do have those, though, from the PS2. After that, nah, same for me.
 
When a suit says something that sounds too good to be true, I dismiss it as questionable until we get proof.

When Cerny says something at GDC that sounds too good to be true, I look at his long history of GDC appearances and his track record. And you need a pretty good understanding of the concepts involved not to fall into the "concern" traps on the internet. He speaks clearly, but people paraphrase him in ways that change the meaning.
 
they'll need additional BOM or one hell of a design to ensure that 3P external NVMe drives will either (a) never throttle, or (b) throttle down to no less than 5.5 GB/s as a minimum performance number.
I don't see why, if they are only allowing certified/endorsed drives. For all we know, it also may not support 3P drives until 2021/2022, when drives are fast enough and users are more interested in upgrading, having filled up their internal storage.
 
sounds too good to be true

It doesn't sound too good to be true regarding performance directly; those speeds are already being reached by other drives coming later this year. It's the cooling that some wonder about, how they are going to do it, not whether they can do it performance-wise. They need to be able to cool that SSD running at 5.5 GB/s for hours straight without dipping below that, and the same goes for the user-swappable drive.
MS seems to stress other hardware (the BCPack block) rather than the drive itself to achieve those higher throughputs. Maybe because they also have those tiny memory cards to cool.
 
They need to be able to cool that SSD running at 5.5 GB/s for hours straight without dipping below that, and the same goes for the user-swappable drive.
You phrase that to make it sound like a difficult task. How much heat does that SSD produce? Is it even a problem?
 
You phrase that to make it sound like a difficult task. How much heat does that SSD produce? Is it even a problem?

We know it's a problem in the PC space. It shouldn't be a problem in the PS5 just like RROD/YLOD shouldn't have been a problem with X360/PS3, but considering the public has yet to see the Sony PS5 case and cooling system, questions remain as to the extent of cooling required.
 
MS didn't emphasize audio as much as Sony did. Maybe PS5's solution is far superior in many regards, but to conclude that we need more than what we have now.

Is it even a problem?

The question is how they cool it; it can't be a problem if it's ending up in final retail products.
 
MS didn't emphasize audio as much as Sony did. Maybe PS5's solution is far superior in many regards, but to conclude that we need more than what we have now.



The question is how they cool it; it can't be a problem if it's ending up in final retail products.

We have the details on both sides, and there is a released game with the solution that you can play on PC: Gears 5. If you haven't played it yet, with Windows Sonic headphones you will get the 3D audio. You can compare it to Steam Audio too.
 
It is? I thought you just slap a heatsink on.
Yes. It's boring math. Wattage, thermal density, die-to-case resistance and target temp from datasheets, environmental margins, worst-case laminar airflow rate; shove it all into a spreadsheet for the best fin width and height... get a sample from your favorite supplier of custom extrusions in China... test it in limit conditions... and you have a simple aluminium extrusion profile that costs $2 each in large quantities. How difficult is this? What are the unknowns? Maybe a couple of iterations to reduce the cost by a few cents, I don't know.

With a PC, half of the numbers you'd put in that spreadsheet are unknowns. With a console, every detail has a clear figure and worst-case margin, and it's easy to meet reasonable requirements.
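
As a rough illustration of that spreadsheet exercise, here's the steady-state version of the math. Every figure below (controller wattage, throttle temperature, internal air temperature, resistances) is an assumed placeholder, not a number from any datasheet:

```python
# Back-of-envelope heatsink sizing for an SSD controller; every number below
# is an illustrative assumption, not a figure from Sony or any datasheet.
P_W          = 4.0    # assumed controller dissipation, watts
T_THROTTLE_C = 105.0  # assumed junction temp where throttling kicks in
T_AMBIENT_C  = 55.0   # assumed worst-case air temp inside the console
MARGIN_C     = 10.0   # design margin below the throttle point

# Total junction-to-ambient resistance budget (°C/W).
r_budget = (T_THROTTLE_C - MARGIN_C - T_AMBIENT_C) / P_W   # = 10.0 °C/W

# Split the budget: die-to-case + interface material + heatsink-to-air.
r_junction_to_case = 2.0   # assumed, from a controller datasheet
r_interface        = 0.5   # assumed thermal pad
r_sink_max = r_budget - r_junction_to_case - r_interface   # what the fin stack must achieve

candidates = {"bare PCB": 25.0, "thin plate": 12.0, "finned extrusion + airflow": 5.0}
for name, r_sink in candidates.items():
    t_junction = T_AMBIENT_C + P_W * (r_junction_to_case + r_interface + r_sink)
    ok = "OK" if r_sink <= r_sink_max else "throttles"
    print(f"{name:28s} junction ≈ {t_junction:5.1f} °C -> {ok}")
```

The point is that once the worst-case inputs are pinned down, picking a fin stack that keeps the junction under the throttle point is arithmetic, not research.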

Sony will have to test them, and the models that work at least as fast as the internal drive under all test conditions get the sales. The vendors are responsible for designing their products correctly and meeting Sony's requirements, and a spot on Sony's QVL means the vendors will proudly advertise it in gigantic letters on the box.
 
It is? I thought you just slap a heatsink on.
No, I have read a report that even with most heatsinks, drives will throttle after 15 minutes of heavy use.

Yes, usually writes are involved for that to happen. But we've never had NVMe at this level of integration with graphics, and we have the "share" button that is constantly recording our last 30 seconds or so of play.

With no heatsink there's no chance; straight to throttling or 128°C.
This Samsung drive was reviewed 5 months ago:

The Plus model comes equipped with all the same advanced thermal control features as the 970 EVO. The firm also added a nickel coating on the Phoenix controller and a thin copper film on the back of the PCB to help dissipate heat efficiently. Samsung's Dynamic Thermal Guard, a thermal throttling algorithm, will slow performance if the drive gets too hot. The EVO Plus model comes with a revamped Dynamic Thermal Guard implementation that Samsung says can transfer 86 percent more data (134GB) during sequential writes before throttling kicks in.

If it were as simple as just slapping on a heatsink, companies wouldn't be doing this.
 
https://www.tweaktown.com/reviews/8996/western-digital-black-sn750-heatsink-ssd-review/index.html

Right off the bat, the heat sink SSD gives users more headroom before engaging the thermal throttle feature. That's not the most important or even impactful effect. The heat sink first absorbs heat from the controller and then increases the surface area for the heat to dissipate compared to the bare controller.

This increases the time it takes to reach a thermal throttle condition; it can even eliminate it if there is sufficient airflow. Using a hypothetical number, let's say the bare controller throttles after writing 200GB of data. The heat sink could extend the data writes to 500GB or more.

Only high-end heatsinks can keep NVMe drives from throttling after prolonged heavy use. In that review's results, cheaper heatsinks only let the drive transfer a little more data, a little faster, before throttling.

Heat can corrupt data held long-term and affects how long the drive will last; the cooling has to do more than just keep the drive under the thermal-throttle point.
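
As a toy illustration of why a heatsink buys "more gigabytes before throttling" rather than removing the limit outright, here's a lumped thermal model. The power, thermal resistance, heat capacity, and write-rate values are made-up assumptions, not measurements of any particular drive:

```python
# Toy lumped-RC model of an SSD controller heating up during a sustained write.
# All constants are illustrative assumptions, not measured values.
def gb_before_throttle(r_theta_cw, heat_capacity_j_per_c,
                       power_w=5.0, t_ambient_c=45.0, t_throttle_c=105.0,
                       write_gbps=3.0, dt_s=0.1, max_s=3600.0):
    """Integrate dT/dt = (P - (T - Tamb)/R) / C until the throttle temp is hit."""
    t = t_ambient_c
    elapsed = 0.0
    while elapsed < max_s:
        t += (power_w - (t - t_ambient_c) / r_theta_cw) / heat_capacity_j_per_c * dt_s
        elapsed += dt_s
        if t >= t_throttle_c:
            return elapsed * write_gbps          # GB written before throttling
    return float("inf")                          # steady state stays below throttle

# Bare drive: poor heat path, little thermal mass -> throttles quickly.
print(gb_before_throttle(r_theta_cw=20.0, heat_capacity_j_per_c=5.0))
# Heatsink, stale air: more mass, better path -> throttles much later.
print(gb_before_throttle(r_theta_cw=14.0, heat_capacity_j_per_c=25.0))
# Heatsink plus airflow: steady state sits below the throttle point -> never throttles.
print(gb_before_throttle(r_theta_cw=8.0, heat_capacity_j_per_c=25.0))
```

Only the last case, where the steady-state temperature stays below the throttle point, sustains indefinitely; everything else just pushes the cliff further out.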

https://www.digitaltrends.com/compu...rrelation-between-ssd-reads-and-failure-rate/
Facebook’s servers house a massive amount of data, so it should come as no surprise that they regularly publish research reports using that data on everything from relationships to the color of the infamous dress. In this case, the study was not about the data itself, but the effect it has on the SSDs that it’s stored on. The researchers found that it wasn’t usage, as is commonly believed, that wears down flash memory, but temperature that has the most effect on data integrity.
 
No, I have read a report that even with most heatsinks, drives will throttle after 15 minutes of heavy use.

Yes, usually writes are involved for that to happen. But we've never had NVMe at this level of integration with graphics, and we have the "share" button that is constantly recording our last 30 seconds or so of play.

With no heatsink there's no chance; straight to throttling or 128°C.
This Samsung drive was reviewed 5 months ago:

If it were as simple as just slapping on a heatsink, companies wouldn't be doing this.
You are being disingenuous. Writes are below 1% of the workload here, at best.

Yes, it is as simple as slapping on a heatsink, correctly sized, under proper airflow. It's not as simple as putting a random piece of metal in stale hot air right under the 300W GPU and running a 50% write benchmark. You have no idea what you're talking about.

The controller is the thing that produces the most heat and throttles under shitty conditions. It's a trivial problem for an engineer. It's a couple watts to five watts from the specs.
 
You are being disingenuous. Writes are below 1% of the workload here, at best.

Yes, it is as simple as slapping on a heatsink, correctly sized, under proper airflow. It's not as simple as putting a random piece of metal in stale hot air right under the 300W GPU and running a 50% write benchmark. You have no idea what you're talking about.

The controller is the thing that produces the most heat and throttles under shitty conditions. It's a trivial problem for an engineer. It's a couple watts to five watts from the specs.
That's fair; I happen to think the loads will be high. You might be right that I'm completely out to lunch, but these are consoles. They get hot. They have to work in small spaces. They can sound like jet engines too, and they are designed to deliver the most performance for the least amount of money possible.
 
This is not how the semiconductor industry works at all. :nope: There is constant expansion of semiconductor manufacturing and continual changes to fabrication techniques to meet evolving needs. Fabs are hugely expensive, so for the operator, knowing today that a big customer will need x amount of capacity at y node on a monthly/annual basis lets you manage where to invest your fab capacity.

Fabs plan up to a decade ahead; last year TSMC were talking about their needs in 2030.

I've booked volume RAM orders for 5-10 years ahead, and you can get much better pricing. I don't know how it works for anybody else, but you generally begin with your minimum/maximum needs, which will be a wide range for a consumer product because you can't accurately predict popularity/sales/demand over the next 5+ years, and then within that range there will be a spectrum of pricing. There is usually a sweet spot in the middle where it is cheapest; buying more and more doesn't always mean cheaper, because you may get to a point where it's just too much for the supplier to handle comfortably without changing things up - and that costs, which is passed on to you (the customer).


From my aerospace days, it's much harder to keep a processor stable than memory - remembering that, from a semiconductor angle, an SSD is merely very slow non-volatile RAM. It's just that we're more used to thinking about heat and active cooling in regard to processing semiconductors.

So what happens if Sony has a PS3 on their hands and not a PS4? Sony is going to buy 8 years' worth of NAND on what forecast? Also, in all modern consoles we have seen drive capacity grow over the cycle. The original Xbox 360 and PS3 launched with 20 GB drives and ended with 500 GB drives; PS4 and Xbox One now have 1 TB drives. Is Sony going to stay at 800 GB all generation? And how will they predict what they need for a PS5 Pro, if there is one?

NAND continues to evolve quite quickly compared to RAM. I don't see a manufacturer wanting to keep a plant making what is most likely not cutting-edge NAND in 2020 still making it in 2028, when it will only ever find a life in Sony's machine. Meanwhile, 8 years later there are systems out there that still require the same RAM, so it makes sense to produce DDR3 overlapping DDR4, and DDR4 overlapping DDR5.
 
If there are virtual channels in NAND...

Launch:
Internal: 12x 667MT downclocked to 600MT (5.6 GB/s)
External: 8x 1033MT minimum (6.5 GB/s for equivalent perf without priority queues)

Later models:
Internal: 6x 1200MT (heh, that downclock was delayed gratification; 1333 is not a valid ONFI data rate)
External: 8x 1033MT or 4x 2133MT minimum

Super slim:
3x 2400MT

Those should allow 7-8 years of cost reduction and third party devices compatibility.
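
For what it's worth, those GB/s figures roughly check out with a simple per-channel calculation; the 8-bit bus width and ~78% usable efficiency below are assumptions chosen to match the post's numbers, not published figures:

```python
# Rough check of the raw-throughput figures above. Assumptions: 8-bit ONFI bus
# (1 byte per transfer) and ~78% usable efficiency after protocol/ECC overhead;
# both are illustrative guesses, not published figures.
EFFICIENCY = 0.78

def usable_gbps(channels: int, mt_per_s: int) -> float:
    raw_mbps = channels * mt_per_s          # MT/s * 1 byte = MB/s per channel
    return raw_mbps * EFFICIENCY / 1000     # usable GB/s

configs = {
    "launch internal, 12x 600MT": (12, 600),
    "launch external, 8x 1033MT": (8, 1033),
    "later internal, 6x 1200MT":  (6, 1200),
    "later external, 4x 2133MT":  (4, 2133),
    "super slim, 3x 2400MT":      (3, 2400),
}
for name, (ch, mt) in configs.items():
    print(f"{name:28s} ≈ {usable_gbps(ch, mt):.1f} GB/s usable")
```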

Sony will need to make sure the ONFI specs match what they need in the future to reduce the number of NAND chips as soon as possible, but they are co-writing the specs, so they have this covered. I wouldn't be surprised to see 667MT chips downclocked to 600 at launch if they already know what's cooking in the oven.

http://www.onfi.org/-/media/client/onfi/specs/onfi_4_2-gold.pdf
Open NAND Flash Interface Specification
Revision 4.2
Intel Corporation
Micron Technology, Inc.
Phison Electronics Corp.
Western Digital Corporation
SK Hynix, Inc.
Sony Corporation
 