Rumor: XBox dual SKU next-gen launch

When talking about multiple SKUs, I think it's worth considering that launching with two SKUs of the exact same architecture, but different clock speeds, core counts, etc., is quite a bit different from launching a second SKU, with a somewhat different architecture, a few years into the generation.

If the base XBoxTwo is a 3GHz CPU paired with a 20CU GPU, I don't see it taking much effort for developers to increase the resolution and dial up some effects with, for example, doubled resources, when all of those resources function identically.
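As a back-of-the-envelope sketch of that (assuming a purely pixel-bound workload, which is an idealisation, and illustrative numbers rather than real specs):

```python
# Sketch: if a workload is purely pixel-bound, ~2x the GPU resources
# buys ~2x the pixels at the same frame rate. Numbers are illustrative.
base_res = (1920, 1080)   # hypothetical base-SKU render target
gpu_ratio = 2.0           # premium SKU with doubled, identical resources

axis_scale = gpu_ratio ** 0.5   # pixel count scales with both axes
new_res = tuple(round(d * axis_scale) for d in base_res)
print(new_res)   # (2715, 1527) - roughly a 1440p render target
```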
 
I considered that, but smartphones (at least the ones people pay attention to) can command prices close to $1000 and actually have real profit margins. I see one console SKU carrying the other in a dual SKU launch.

For one year, if that. And much smaller companies than MS and Sony have found a way to navigate the difficulties of multiple product design/production/inventory management during that extremely abbreviated (relative to consoles) product cycle.
 
When talking about multiple SKUs, I think it's worth considering that launching with two SKUs of the exact same architecture, but different clock speeds, core counts, etc., is quite a bit different from launching a second SKU, with a somewhat different architecture, a few years into the generation.

If the base XBoxTwo is a 3GHz CPU paired with a 20CU GPU, I don't see it taking much effort for developers to increase the resolution and dial up some effects with, for example, doubled resources, when all of those resources function identically.

Additionally, having a lower-spec machine allows for a different means of improving yields than disabling processing units on every die whether those units work or not. Hopefully, binning would allow for full utilization of the manufactured silicon on the highest-spec SKU.
 
If they use the same die with different binning, they end up with 10% to 15% additional power at best. Why do this?
 
The mid-gen consoles pretty much proved that consumers will support multiple SKUs with different performance profiles. The higher-priced performance model opens the door for the manufacturers to make better margins on their premium model. Margins have been very thin on the mass-market SKUs for years now; it would be refreshing for manufacturers to be able to make a decent profit on the hardware again.

The PS4 Pro and the Xbox One X laid the groundwork for how multiple hardware SKUs can work, and it doesn't seem too convoluted to me. A good/better/best scenario shouldn't be too hard to market. Your uneducated consumer will pretty much always choose the lower-priced unit, and that is where your mass-market consumer is, but there is a niche audience out there that is happy to pay $600+ for a premium console, and that is where decent margins can be found. Good for everyone.
 
If they use the same die with different binning, they end up with 10% to 15% additional power at best. Why do this?
Scarlett X: fully enabled, full-speed CUs
Scarlett Cloud: 2 CUs disabled, 10% slower clock
Scarlett S: 4 CUs disabled, 10% slower clock

Not saying these are the configurations/numbers that make sense, just that a few uses for the same APU gives very good binning opportunities, with a really strong top end and a slightly slower bottom end than would've been the case otherwise.
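To put toy numbers on those binning opportunities (all assumed figures, using the standard Poisson defect model):

```python
import math

# Toy binning model: defects per die ~ Poisson(mean = density * area).
# Both figures below are assumptions, not known Scarlett numbers.
defect_density = 0.4   # defects per cm^2, a common 7nm ballpark
die_area_cm2 = 3.6     # hypothetical ~360 mm^2 console APU

mean = defect_density * die_area_cm2

def p(k):   # probability of exactly k defects on a die
    return math.exp(-mean) * mean**k / math.factorial(k)

full    = p(0)          # "Scarlett X" bin: every CU working
minus_2 = p(1) + p(2)   # "Cloud" bin: salvage with 2 CUs disabled
minus_4 = p(3) + p(4)   # "S" bin: salvage with 4 CUs disabled

print(f"full: {full:.1%}, -2 CUs: {minus_2:.1%}, -4 CUs: {minus_4:.1%}")
# full: 23.7%, -2 CUs: 58.7%, -4 CUs: 16.0%
# Optimistic: assumes every defect lands in a (disableable) CU.
```

Under those assumptions nearly every die finds a home in some bin, which is the whole appeal.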

@Goodtwin I personally don't think they need to make better margins per se on the consoles, as long as they get as close to breaking even as possible, or make a small profit on the hardware. It's not where the money is to be had.
 
mrcorbo said:
Additionally, having a lower-spec machine allows for a different means of improving yields than disabling processing units on every die whether those units work or not. Hopefully, binning would allow for full utilization of the manufactured silicon on the highest-spec SKU.

I hadn't really considered instances of no CUs being disabled.

What kind of prevalence does that have in the PC GPU space? What percentage of GPUs have all CUs enabled?

I've used this to calculate yields for a 300mm diameter wafer with a 0.4/cm² defect density to account for 7nm.

225mm² dies are the biggest it would let me calculate. The yield came out at 43.48%, with 111 good dies and 145 defective.

If I go as low as 100mm², the yield improves significantly, up to 67.93%, with 392 good dies and 185 defective.

Then I tried the middle ground of 144mm², and the yield's 57.79%, with 232 good dies and 169 defective.

I really don't know how many of those defective dies are salvageable by disabling a couple of CUs and reducing clock speeds - smaller dies seem to be desirable, but I've no idea how much that can be mitigated via binning.
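For what it's worth, those calculator figures line up with the standard Poisson yield model, so here's a minimal sketch of it (assuming that same 0.4/cm² density; the small gap versus the figures above comes down to how dies tile the wafer edge):

```python
import math

# Poisson die-yield model: Y = exp(-area * defect_density).
defect_density = 0.4   # defects per cm^2, the 7nm assumption above

for die_mm2 in (225, 144, 100):
    y = math.exp(-(die_mm2 / 100.0) * defect_density)
    print(f"{die_mm2} mm^2 -> {y:.1%} yield")
# 225 mm^2 -> 40.7%, 144 mm^2 -> 56.2%, 100 mm^2 -> 67.0%
```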

If they use the same die with different binning, they end up with 10% to 15% additional power at best. Why do this?

I really hope that AMD have managed to get chiplet GPUs working.

They've said in the past that it's difficult to make an MCM GPU invisibly function as a single GPU, so it's more akin to SLI/Crossfire, which goes largely unused. But that wouldn't matter on consoles, so maybe there's scope for it to begin and grow there?

If so, let's just say each chiplet is 20 CUs, for the sake of argument. Disable a couple of CUs, and clock them lower, for the base model's single-chiplet design. Don't disable any CUs, and clock them higher, for the high-end model's two-chiplet design.
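Putting illustrative numbers on that split (the CU counts and clocks are my assumptions, not leaked specs):

```python
# Hypothetical single- vs dual-chiplet configs. 64 lanes per CU and
# 2 FLOPs per lane per clock (FMA) is the usual GCN/RDNA arithmetic.
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

base    = tflops(cus=18, ghz=1.4)   # 1 chiplet, 2 CUs disabled, lower clock
premium = tflops(cus=40, ghz=1.8)   # 2 full chiplets, higher clock

print(f"base {base:.1f} TF, premium {premium:.1f} TF ({premium/base:.1f}x)")
# base 3.2 TF, premium 9.2 TF (2.9x)
```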
 
I hadn't really considered instances of no CUs being disabled.

What kind of prevalence does that have in the PC GPU space? What percentage of GPUs have all CUs enabled?

For PC, no idea, but this already happens for Xbox: those chips are harvested and used in the devkits.

https://www.eurogamer.net/articles/...ook-at-xbox-one-scorpios-superpowered-dev-kit


Eurogamer said:
there's a whopping 24GB of GDDR5 RAM, double that of the retail Scorpio's 12GB, an additional 1TB SSD drive, and 44 CUs (compute units) instead of 40 on the GPU.
 

I really hope that AMD have managed to get chiplet GPUs working.

They've said in the past that it's difficult to make an MCM GPU invisibly function as a single GPU, so it's more akin to SLI/Crossfire, which goes largely unused. But that wouldn't matter on consoles, so maybe there's scope for it to begin and grow there?

If so, let's just say each chiplet is 20 CUs, for the sake of argument. Disable a couple of CUs, and clock them lower, for the base model's single-chiplet design. Don't disable any CUs, and clock them higher, for the high-end model's two-chiplet design.

What does that mean?
 
That there have been rumours of AMD transitioning to chiplet-based GPUs.

People much smarter than I, such as Sebbi, have highlighted the inherent problems with that in the Navi thread. Namely, decoupling the GPU from the IO and having it all communicate at sufficient bandwidth over IF (Infinity Fabric).

But if it's something to which AMD are committed, and manufacturing constraints suggest they might have to be, then I remain convinced we'll see it happen in some capacity.

If there's any truth to the rumour of a two-tier launch, chiplets would be a perfect accompaniment. Hence, "I really hope they've managed to get chiplet GPUs working." It's poor wording though.
 
If they use the same die with different binning, they end up with 10% to 15% additional power at best. Why do this?

With the One X, MS managed 43% more performance than Sony got from the Pro on the same process, with just 4 extra CUs (the same number that are disabled on the Scorpio die) and increased clocks enabled by the extra cooling and power delivery added to the design.
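The published retail specs back that up; a quick check of the arithmetic:

```python
# Retail specs: One X = 40 CUs @ 1172 MHz, PS4 Pro = 36 CUs @ 911 MHz.
# 64 lanes per CU, 2 FLOPs per lane per clock (FMA).
def tflops(cus: int, mhz: int) -> float:
    return cus * 64 * 2 * mhz / 1e6

one_x = tflops(40, 1172)   # ~6.0 TF
pro   = tflops(36, 911)    # ~4.2 TF
print(f"{one_x:.1f} TF vs {pro:.1f} TF -> {one_x / pro - 1:.0%} more")
# 6.0 TF vs 4.2 TF -> 43% more
```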
 
That there have been rumours of AMD transitioning to chiplet-based GPUs.

People much smarter than I, such as Sebbi, have highlighted the inherent problems with that in the Navi thread. Namely, decoupling the GPU from the IO and having it all communicate at sufficient bandwidth over IF (Infinity Fabric).

But if it's something to which AMD are committed, and manufacturing constraints suggest they might have to be, then I remain convinced we'll see it happen in some capacity.

If there's any truth to the rumour of a two-tier launch, chiplets would be a perfect accompaniment. Hence, "I really hope they've managed to get chiplet GPUs working." It's poor wording though.

I cannot see any chance of having a GPU chiplet in any of the next-gen consoles. IMO, we will have a classic GPU with all the I/O transistors integrated, like now.
 
Probably. But I'll keep hoping anyway.

Would two GPUs with integrated I/O, say two 20CU GPUs, be able to interface at sufficient bandwidth to be equivalent to a single 40CU GPU?

For a two-tier console launch, that could still yield the manufacturing benefits of a chiplet design.
 
I think one of the big issues with a 2 SKU launch will be getting the balance between the consoles just right.

If the top-tier SKU commands a much higher price than the base console, it will need to be sufficiently more powerful to justify the price.

There might also be the temptation to "gimp" the lower SKU to create that power difference.

Hopefully neither MS nor Sony will do this.
 
I think one of the big issues with a 2 SKU launch will be getting the balance between the consoles just right.

If the top-tier SKU commands a much higher price than the base console, it will need to be sufficiently more powerful to justify the price.

There might also be the temptation to "gimp" the lower SKU to create that power difference.

Hopefully neither MS nor Sony will do this.
This is the biggest challenge.
XO <-> 1X: gap too large; the XO is a compromised experience.
PS4 <-> 4Pro: gap too small to justify having both.*
PS4 <-> 1X: a reasonable experience and justification.
* This is talking about next gen; this gen was a different circumstance.

Scarlett S:
6TF
2GB less and slower memory (maybe a different type, depending on whether costs work out better that way)
CPU slightly lower clocked
Standard HSF cooling

Targeting 1440p, I think that would give reasonable 4K output at a meaningfully lower price, whilst still giving an equivalent experience and not putting a lot of work on devs.
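The rough arithmetic behind that pairing, naively assuming GPU cost scales linearly with pixel count (it doesn't quite, but it's a decent first-order guide):

```python
# 4K has 2.25x the pixels of 1440p, so a ~6 TF 1440p target implies
# ~13.5 TF for native 4K at the same per-pixel budget.
pixels_1440p = 2560 * 1440
pixels_4k    = 3840 * 2160

scale = pixels_4k / pixels_1440p
print(scale)        # 2.25
print(6 * scale)    # 13.5 (TF), before any reconstruction tricks
```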
 
Scarlett X: fully enabled, full-speed CUs
Scarlett Cloud: 2 CUs disabled, 10% slower clock
Scarlett S: 4 CUs disabled, 10% slower clock
But what's the business proposition? What's the price delta between the full-speed and 15% slower consoles, and how do you differentiate them to the customers? You could do a 'special edition' that's a bit faster for a bit more money, but you couldn't realistically charge a large markup for only 15% better performance, I don't think. If we have a $400 base console, what's the price of the high-end SKU, and what improvements does the end user get for that money?

A special edition would work for me, I think, at say a $75 markup with a couple of extras (more/faster storage, fancy paint job and trim, a couple more in-your-face blue LEDs for no good reason), but you'd have to differentiate it so consumers don't get confused between it, the base unit, and the price of rivals. If some think the console costs $475 versus $400 for the PS5, say, they may well be deterred. Perhaps make the special edition only available from the MS Store? You could keep your binning and sell only to your tech-savvy fanbase.

Which doesn't tally with the rumour anyway - the high-end SKU is supposed to be substantially more than a bit of binning. ;)
 
but you couldn't realistically charge a large markup for only 15% better performance, I don't think. If we have a $400 base console, what's the price of the high-end SKU, and what improvements does the end user get for that money?
15% better performance?
Scarlett X could be around the 12TF mark, if not more; that's double the performance right there.

It's easiest for devs to use most of that performance on higher resolution, but personally I'd be OK with a dynamic average around 1800p 90% of the time, and spending the rest on better pixel quality.
 
Targeting 1440p, I think that would give reasonable 4K output at a meaningfully lower price, whilst still giving an equivalent experience and not putting a lot of work on devs.

This makes the most sense to me. Scarlett Arcade could target 1440p (maybe ~6TF), while Scarlett Pro would target 4K (maybe ~12TF). That's a 2.25x increase in pixel count. I could see them offering the Pro version at a $200 premium. I think the Arcade at $300 might be a bit too aggressive (they still have to sell the X1X), but I could see $350. Depends on what their Pro version comes in at. I'm expecting around $550, since I'm not expecting it to be priced more than $100 above a single-SKU PS5. I think the 2 SKU strategy makes the most sense if you think your competitor ships a year earlier. They launch at $450, then you launch a year later with one SKU at $350-$400 and a second SKU at the $200 premium. You get the performance and value crowns in the same year. The extra year gives you the ability to do both, and it also helps your studios bake their platform exclusives a bit longer.

Tommy McClain
 
I think the Arcade at $300 might be a bit too aggressive (they still have to sell the X1X)
This got me thinking.
In the past they kept selling the previous gen due to cost.
But why would they need to sell any XOs next gen? They could do a top-to-bottom Scarlett launch.

Maybe they could keep selling the 1S for markets where streaming isn't viable and they want a very cheap console.
Otherwise the lowest-end Scarlett Arcade/S model should suffice.

I have my doubts about being able to get the 1X down in price enough to compete with Scarlett Arcade, even in 2 years. But the question is: why keep selling it?
 