Xbox Series S [XBSS] (Lockhart) General Rumors and Speculation *spawn*

Status
Not open for further replies.
Every processor includes “sacrificial cores” for precisely that reason

Why is it a waste compared to chucking it in the trash? These aren't separate production runs; it's a useful thing to do with chips that don't pass QC
So much has already been reduced to ensure maximum yield. If you're going to aim for something with 1/3 the compute, you could probably have something nearly 1/3 the size. That's a dramatic amount of die-space savings, leading to a lot more chips per wafer and bringing the price down significantly.

Reduced/binned Scarlett chips can be used for something else.
 
So much has already been reduced to ensure maximum yield. If you're going to aim for something with 1/3 the compute, you could probably have something nearly 1/3 the size. That's a dramatic amount of die-space savings, leading to a lot more chips per wafer and bringing the price down significantly.

Reduced/binned Scarlett chips can be used for something else.

As I said, this is already SOP in the semiconductor industry, and I gave examples of similar, mainstream products that do this. Designing another chip costs tens if not hundreds of millions of dollars before you even get to production capacity, the cost of retooling the lines, etc.

The audience for the streaming box is the one that's size sensitive, not early adopters
 
Defective XSX chips go into Azure.
You're not going to get enough for Lockhart considering MS expects Lockhart to outsell XSX 2:1.

Simple math. You cannot get 50 million units from a fraction of 25 million.

The performance gap between Lockhart and XSX is too big for the binning they do on desktop GPUs. Not to mention you'll also have extra buses and ROPs that are essentially wasted silicon on Lockhart.
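The 2:1 claim can be sketched numerically. All figures below are illustrative assumptions (hypothetical sales volumes and defect rates, not real Microsoft data); the point is just that salvage supply scales as a small fraction of XSX production and can never cover Lockhart demand:

```python
# Sketch of the supply argument. All numbers are illustrative assumptions
# (hypothetical sales and defect rates), not real Microsoft figures.
xsx_units = 25_000_000          # assumed lifetime XSX sales
lockhart_units = 2 * xsx_units  # 2:1 sales ratio -> 50M Lockharts needed
defect_rate = 0.10              # assume 10% of dies are salvageable rejects

# Salvage supply is tied to XSX production: for every 0.9 fully good dies
# there are only 0.1 salvageable ones, so supply scales with XSX output.
salvage_supply = xsx_units / (1 - defect_rate) * defect_rate
shortfall = lockhart_units - salvage_supply
print(f"salvage supply: {salvage_supply:,.0f} units")
print(f"shortfall:      {shortfall:,.0f} units")
```

However you tweak the assumed defect rate, the salvage pool stays an order of magnitude short of a 2:1 demand split.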
 
Defective XSX chips go into Azure.
You're not going to get enough for Lockhart considering MS expects Lockhart to outsell XSX 2:1.

Simple math. You cannot get 50 million units from a fraction of 25 million.

The performance gap between Lockhart and XSX is too big for the binning they do on desktop GPUs. Not to mention you'll also have extra buses and ROPs that are essentially wasted silicon on Lockhart.

You’re absolutely right they can be repurposed for Azure, but the Xbox One digital already is that. For all intents and purposes it’s an Azure Xbox blade without the additional encoding hardware. I’m not saying the Lockharts they’re making in 2025 won’t be custom silicon, I’m saying that they won’t be in the near future. They don’t need a million Lockhart blades in the near future. They do need a million consoles.

And they’ve got a lot of bad processors already. That’s just the reality of working things out at the foundry for mass production. When Blu-ray lasers first started rolling off the line, they were over $100 each because only one in ten was good, part of what drove the PS3 BoM through the roof. Within a year that problem had been addressed. Today they cost about a buck to make.
 
You’re absolutely right they can be repurposed for Azure, but the Xbox One digital already is that. For all intents and purposes it’s an Azure Xbox blade without the additional encoding hardware. I’m not saying the Lockharts they’re making in 2025 won’t be custom silicon, I’m saying that they won’t be in the near future. They don’t need a million Lockhart blades in the near future. They do need a million consoles.

And they’ve got a lot of bad processors already. That’s just the reality of working things out at the foundry for mass production. When Blu-ray lasers first started rolling off the line, they were over $100 each because only one in ten was good, part of what drove the PS3 BoM through the roof. Within a year that problem had been addressed. Today they cost about a buck to make.
Xbox One hardware was used in xCloud because XSX hardware was not yet ready. In 2021 Microsoft will just be deploying XSX and later hardware to xCloud. XSX can run four Xbox One streams at once. There is also a lot of machine-learning hardware in XSX that doesn't exist in Xbox One.

One of the goals with Lockhart is to provide a low-priced option for next gen. The Zen CPU is already quite small for what it is. Taking a GPU with a third of the performance of XSX will also reduce the size. If they are also reducing the bus width, they can cut down size that way too. The smaller chip will allow them to make a lot more chips per wafer. The new chips should run at much lower power draw and produce a lot less heat. When you couple that with a smaller, less complex board and cheaper, lower-capacity RAM, you've got yourself a console that is much cheaper to produce and ship.

They already have other plans for bad XSX dies, and there won't be that many to begin with
 
The CPU isn't THAT small...

Don't forget the interconnecting areas that make up a huge part of the chip itself.

[Image: PS4 vs Xbox One die comparison]
 
Series X APU:
[Image: Xbox Series X APU die shot]
Exactly. Reducing the RAM and/or using higher-capacity chips will have more effect on the size of the board needed (which you can conceivably do thanks to the lower targets) than reducing the chip in a meaningful way at this time
 
Exactly. Reducing the RAM and/or using higher-capacity chips will have more effect on the size of the board needed (which you can conceivably do thanks to the lower targets) than reducing the chip in a meaningful way at this time

Reducing SOC size from 380 mm² to 200 mm² will save a metric ton too. Why are you ignoring the obvious mathematics of manufacturing?
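As a rough sanity check on that claim, the standard dies-per-wafer approximation shows how much a 380 mm² to 200 mm² shrink buys you on a 300 mm wafer. This is a sketch: the die sizes are the assumed figures from the posts above, not confirmed numbers.

```python
import math

# Standard dies-per-wafer approximation: usable wafer area divided by die
# area, minus an edge-loss term. Die sizes (380 and 200 mm^2) are the
# assumed figures from the discussion above, not confirmed numbers.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

big = dies_per_wafer(380.0)    # full-size Scarlett-class SOC
small = dies_per_wafer(200.0)  # cut-down Lockhart-class SOC
print(big, small)              # the smaller die roughly doubles candidates per wafer
```

On top of the raw count, smaller dies also yield better, since a given defect kills a smaller share of the wafer.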
 
Exactly. Reducing the RAM and/or using higher-capacity chips will have more effect on the size of the board needed (which you can conceivably do thanks to the lower targets) than reducing the chip in a meaningful way at this time

I am not sure of the point of arguing about this when:

A. No PC CPU/GPU lineups with a 3:1 power gap have used your binning strategy.
B. We already know the Lockhart SOC is a different, smaller chip.
 
I don’t think Lockhart is a separate SOC. I think it’s the exact same chip.

I really doubt it. You might be able to use some defective chips in Lockhart (that might increase yields of the big chip by a few percent), but that would be in addition to a dedicated Lockhart chip, which would be absolutely necessary to supply the bulk of the lower priced models.

Look at GPUs. The 2060/70/80 are exactly the same chip, just with more active cores. When these chips are produced, they don’t often come out at 100%. Additional sacrificial cores are built in for exactly this reason, so that a certain number can be bad, but it’s still a 2080. Then X number of bad cores is a 70 and Y is a 60.

The 2060 ~ 2080 range is actually produced using three different dies (two if you exclude the Ti). But that's sort of beside the point.

Nvidia have a large range of products that they can salvage chips into. They can then adjust prices based on production and demand and trust that the market will choose between these carefully tiered products based on their needs and budget. But you'll notice that the 1660 is a completely different die, as is the 1650.

Other than niche OEM products released in limited quantities when enough "ultra salvage" parts have built up, chips are used for closely bracketed products. If you are going to disable functioning silicon for product segmentation then you want it to be as little as possible and you only do it because you make your money on hardware margins.

That's not what Sony and MS do. And it's certainly not what Lockhart is for. Console vendors make money on software and services - hardware profit is a bonus. If the case for disabling huge amounts of silicon for your core product isn't there for Nvidia, it sure as heck isn't going to be there for a console vendor!

Chips are extremely expensive to produce, so I feel it’s more likely that they’re not testing the SOC for Lockhart, but the new microcode/configuration.

Under your proposed plan, if MS are selling a lot of Lockharts - a very different product compared to XSX and not two closely matched tiers they can juggle Nvidia style - they have no option but to disable huge areas of very expensive completely functional silicon.

Let's imagine Lockhart is outselling XSX by 2:1. If Anaconda yields are 80%, and you can get to 90% by salvaging for Lockhart, then you'd be disabling about 60% of your working dies to feed the Lockhart market. That's insanity.

You'd be throwing away something like a third of your functional silicon - which is not only hugely expensive, but not always easy to get capacity on in the first place (there are bidding wars for slices of TSMC 7nm capacity). Suddenly those Lockharts are looking pretty expensive.
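Working that 60% figure through explicitly, with the same assumed numbers as the paragraphs above (2:1 sales split, 80% full yield, 90% usable yield with salvage):

```python
# Same assumed figures as the scenario above: Lockhart outsells XSX 2:1,
# 80% of dies are fully good, salvaging raises usable yield to 90%.
dies = 100                 # consider a batch of 100 candidate dies
good = 80                  # fully working Anaconda-class dies
salvage = 10               # defective but Lockhart-usable dies
usable = good + salvage    # 90 sellable dies in total

# Sales must split 2:1 Lockhart:XSX across all usable dies.
xsx = usable // 3          # 30 dies sold as XSX
lockhart = usable - xsx    # 60 dies needed as Lockhart

# Only 10 Lockharts come from natural salvage; the rest require
# deliberately disabling fully functional dies.
disabled_good = lockhart - salvage
print(f"{disabled_good / good:.0%} of working dies crippled")  # ~60%
```

Fifty of the eighty fully working dies end up sold as the cheaper console, which is where the "insanity" conclusion comes from.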

There's no point in Lockhart existing if it isn't cheap and available in large quantities early on in the generation.

MS can't shuffle their product stack around yields. They only have two products, aimed at very different parts of the market. That requires two chips. If they're using disabled Anaconda chips at launch it's only because something has gone wrong, not because it's a good plan.

The engineering of a cut down Lockhart chip - that can literally copy large amounts of the Anaconda layout and with all the fundamental issues solved for a more complex, power hungry chip - is a relatively small cost compared to the prolonged waste of very large amounts of cutting edge silicon.

... that ended up being a bit long. :|
 
Reducing SOC size from 380 mm² to 200 mm² will save a metric ton too. Why are you ignoring the obvious mathematics of manufacturing?
Because it’s almost certainly not being done right now.
As I said in my last post, I’m not saying that in 2025 Lockhart won’t be its own chip. Just not now, because it’s not practical
 
Because it’s almost certainly not being done right now.
As I said in my last post, I’m not saying that in 2025 Lockhart won’t be its own chip. Just not now, because it’s not practical

No. It's an entirely different SOC today.
 
Series X APU:
[Image: Xbox Series X APU die shot]

Assuming this is based on the actual layout of the real chip (would an artist know what an APU looked like without some hints?), the CPU CCX proportions look somewhere between the mobile APU Renoir (4MB L3 per CCX) and the full desktop Matisse (16MB L3 per CCX). Sort of ... halfway in between.

Based on this accurate, informed, scientific calculation I'm going to guess ... 8MB L3 per CCX!!
 
Every processor includes “sacrificial cores” for precisely that reason

Why is it a waste compared to chucking it in the trash?
Because faulty processors will be so few and far between, you'll be throwing away good chips to make a cheap box.

Let's say you can make 1,000 XBSX SOCs on a wafer, and there's 30 defects. That means for each wafer, you can make 970 XBSX's and 30 LHs. That's not enough LHs, so you have to take some of those good SOCs to make LHs. Now you're putting a $100 chip into a $300 console instead of a $50 SOC.

Now if instead MS make a LH SOC and can fit 2,000 of them onto a wafer, they can produce more economical LH consoles and not cannibalise their XBSX production.
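The cost argument above can be put in one small sketch, using the hypothetical die counts from the post (1,000 big dies vs 2,000 small dies per wafer, 30 defects) and an assumed wafer price. The absolute dollar figures are made up; only the roughly 2x per-chip ratio matters:

```python
# Hypothetical numbers from the post above: same wafer, big die vs
# dedicated small die. Wafer price is an assumption for illustration.
wafer_cost = 10_000.0       # assumed price for a leading-edge wafer, in dollars

big_dies = 1_000 - 30       # 970 good XSX SOCs per wafer (30 defects)
small_dies = 2_000 - 30     # same defect count kills far fewer small dies

cost_big = wafer_cost / big_dies      # silicon cost per XSX console
cost_small = wafer_cost / small_dies  # silicon cost per Lockhart console
print(f"${cost_big:.2f} per big SOC vs ${cost_small:.2f} per small SOC")
```

Whatever the real wafer price, halving the die area roughly halves the silicon cost per console, which is the whole case for a dedicated Lockhart chip.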
 
Because faulty processors will be so few and far between, you'll be throwing away good chips to make a cheap box.

Let's say you can make 1,000 XBSX SOCs on a wafer, and there's 30 defects. That means for each wafer, you can make 970 XBSX's and 30 LHs. That's not enough LHs, so you have to take some of those good SOCs to make LHs. Now you're putting a $100 chip into a $300 console instead of a $50 SOC.

Now if instead MS make a LH SOC and can fit 2,000 of them onto a wafer, they can produce more economical LH consoles and not cannibalise their XBSX production.
Just to put it into perspective:
Lockhart will sit around 14-18 CUs.
Scarlett is 56 CUs, minus 4 for redundancy.

That's a lot of CUs to toss away, not to mention an oversized bus and cache to follow.
 