Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

I thought that wasn't written in stone yet. Haven't there been rumors of both companies taking advantage of salvaged parts for a cheaper SKU?

Tommy McClain

Sure, I'm personally a proponent of a two-tier launch, which would allow them to be more aggressive with chip yields.

Or to put it another way.
  • The base console will be exactly the same as a normal one-tier launch: maximized yields for a given level of performance.
  • The upper-tier console will just use chips that can clock higher, perhaps significantly higher, since hypothetically only 20% of the chips need to be capable of reaching those speeds (see the sketch after this list).
    • Additionally, you can likely go with a higher power envelope, as people willing to pay this much (enthusiast/core gamers) are likely to be far more tolerant of the increased noise levels and/or increased cost. Alternatively, they won't have a spouse who objects to spending a lot on console hardware or to the impact it would have on their immaculately maintained living environment.
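To put rough numbers on that 20% figure, here's a minimal binning sketch in Python. Every value in it (wafer volume, dies per wafer, yields) is an assumption for illustration, not real data:

```python
# Hypothetical two-tier binning model. All numbers are assumptions
# for illustration, not real yield data.
wafers_per_month = 10_000
dies_per_wafer = 150        # assumed die candidates per 300mm wafer
functional_yield = 0.70     # assumed fraction usable at base clocks
high_bin_rate = 0.20        # assumed fraction of good dies hitting the high clock

good_dies = wafers_per_month * dies_per_wafer * functional_yield
high_tier = good_dies * high_bin_rate   # premium-SKU supply
base_tier = good_dies - high_tier       # everything else still ships as the base model

print(f"good dies/month: {good_dies:,.0f}")   # 1,050,000
print(f"high-tier SKUs:  {high_tier:,.0f}")   # 210,000
print(f"base-tier SKUs:  {base_tier:,.0f}")   # 840,000
```

The premium SKU effectively rides for free on the same wafer starts, as long as demand for it stays near the bin rate.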
Regards,
SB
 
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
 
1 - assumed these 4 CUs would need to be taken out of the total 48 CUs I proposed, since my calculations were for power consumption and the disabled CUs wouldn't consume any power.
Just historical precedent: 4 shader engines, 1 redundant CU each. I suppose there could be 52 CUs, but the wider you go, the more challenges you have across the board in widening all areas of the chip to feed so many CUs; bandwidth comes to mind here. Eyeballing it, when you have a high number of CUs (wide), you also can't have a super high clock rate while keeping it nice and cool. Wide often means slow, and if wide is fast, then it's running hot and drawing a lot of power. At least that's my understanding. The MI60 uses HBM, IIRC; that's a lot closer to home than off-chip GDDR6, and a significant increase in available bandwidth.

Can't the SoC or GPU chiplet be designed with 52 CUs and then have 4 CUs disabled?
Besides, Navi now uses dual-CUs, so if they want redundancy for every shader engine, wouldn't they actually need to set aside 8 CUs (which is now a huge amount of transistors / die area)?
Maybe they don't want to implement redundancy the same way this time.
Possibly, and this is something I don't know, so I just used historical precedent. I have no idea how they will do this for Navi. I don't think you can just put in as many CUs as you want; there is a sweet spot here too that is being overlooked.

2 - Decided that 1700MHz was some sort of baseline for 7nm Navi (which it isn't, because all Navi 10 chips so far clock well above that), and that 200MHz would need to be taken off said baseline.
Back in 2012-2013 the highest-clocked Pitcairn and Bonaire cards ran at 1GHz. Then Liverpool had its GPU clocked at 800MHz and Durango at 853MHz.
The highest-clocked Navi cards seem able to sustain over 1850MHz, or 1950MHz if we take the Anniversary Edition into consideration. Why are we assuming AMD needs to take 200MHz off 1700MHz?
Arbitrary numbers pulled from thin air. It just felt like a reasonable MHz drop when we consider yield, the number of CUs you are suggesting, thermals, and the supporting costs for a device like this.
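For what it's worth, there's a first-order reason a modest clock drop helps more than it looks: dynamic power scales roughly with active units × clock × voltage², and lower clocks usually permit lower voltage. A minimal sketch; the 40-CU / 1700MHz / ~85W reference point is taken from this thread, while the voltage pairs are invented:

```python
# Rough first-order GPU power scaling: P ~ N_CU * f * V^2.
# Reference point (40 CUs, 1700MHz, 1.0V, 85W) is from the thread's
# Navi 10 figures; the voltages below are assumptions.
def gpu_power(n_cus, clock_mhz, volts, ref=(40, 1700, 1.0, 85.0)):
    """Scale a reference design's power draw to a new configuration."""
    ref_cus, ref_clock, ref_v, ref_watts = ref
    return ref_watts * (n_cus / ref_cus) * (clock_mhz / ref_clock) * (volts / ref_v) ** 2

print(f"{gpu_power(48, 1700, 1.00):.0f} W")  # ~102W at the reference voltage
print(f"{gpu_power(48, 1500, 0.93):.0f} W")  # ~78W with an assumed voltage drop
```

Most of the savings come from the voltage reduction, not the 200MHz itself.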
 
Sure, I'm personally a proponent of a two-tier launch, which would allow them to be more aggressive with chip yields.

I'm a proponent mainly because I'm too poor to get a new expensive system at launch. LOL But personally I'm not totally sure they will go with such a strategy. Seems like the negatives outweigh any benefits. Guess we'll see.

Tommy McClain
 
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.

Unless Sony or Microsoft goes with a multi-chiplet GPU design for the premium model, essentially eliminating the need to carry multiple tiers of GPU chips.

Which could explain Sony's odd PS5 SDK design: a kit configured to pull double duty, with no need for separate standard and pro kits.
 
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.

Launching with both a lesser and a higher-end model seems pointless. All games are going to be made for the base model anyway, and since even the base models will be marketed as '8K', what's the point of releasing multiple models? I don't see the Pro model offering higher settings (Ultra vs. Medium on the base), less pop-in, or higher-res textures. That leaves only an unlocked framerate, since I don't think we'll go beyond 4K for games just yet.
 
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
That's an interesting point, but it's also based on how launches have historically gone.
If there was a dual launch, I wonder if the usual norms would apply:
Would the uptake from people already in the ecosystem be higher than just the hardcore?
Second-system owners would be more inclined to purchase.
People with less income, or who don't feel the need to pay for the top model (casual gamers), may pick up the cheaper system.
The low model with Game Pass could be very attractive.
How all that would split between the two models is interesting.
It could work well for both Sony and MS, but it would benefit MS more, as XO sales have cratered, so next gen they would really only be selling to the hardcore.

I personally can afford whatever they release, but I may not buy unless there are compelling next-gen-only exclusives, as I don't game enough to justify it. The low-end model I would buy, though.

Shame that doesn't seem to be the case for either company; it would be nice to see different strategies.
 
What's a lot? 100W for the GPU part?
Yes. That's about the maximum possible average power draw for the entire APU, unless they replace their elaborate heat-pipe cooling system with an even larger one.

Do you think the Xbox One X or the PS4 Pro have less power dedicated to the GPU?
The 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games. That's for the whole system, which also includes PSU, memory, and hard disk. The PSU is rated for 300 W.
 
Yes. That's about the maximum possible average power draw for the entire APU, unless they replace their elaborate heat-pipe cooling system with an even larger one.


The 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games. That's for the whole system, which also includes PSU, memory, and hard disk. The PSU is rated for 300 W.

Well, if they're going with flash storage, that should save 10W?
 
the 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games. That's for the whole system, which also includes PSU, memory, and hard disk. The PSU is rated for 300 W.

At max power it really is a jet....
I think 150W (or 200W with better cooling) is also reasonable for the PS5 / Xbox Scarlett....

Fehu is not so wrong... ;)
 
At max power it really is a jet....
I think 150W (or 200W with better cooling) is also reasonable for the PS5 / Xbox Scarlett....

Fehu is not so wrong... ;)

I'll say this: Fehu is definitely wrong. At least one console will have a double-digit TF GPU. Neither will have more than 20GB of total RAM.
 
That's an interesting point, but it's also based on how launches have historically gone.
It's more a case of this never having been tried before, so no one knows what the market would choose. The closest parallel is the XB360, which launched with a cheaper Arcade model alongside the normal version, and no one wanted the cheaper Arcade because it was poorer value. Oh, and the 60 GB PS3 massively outsold the cheaper 20 GB PS3.

However, in a binned-console two-tier system, if the split among consumers skews more toward the higher end than the lower end, you have a problem, which means relying on chance. You can always disable units on better chips to turn them into low-end boxes, but if the market wants more high-end boxes than you can make, that'll be costly. With such a large degree of uncertainty, I'm not sure how any console company could commit to a two-tier model using binned parts.
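A toy model of that risk; the demand split and bin rate are pure assumptions:

```python
# Toy model of the binning-vs-demand mismatch described above.
# All numbers are assumptions for illustration.
monthly_chips = 1_000_000
high_bin_rate = 0.20        # assumed share of chips that clock high enough
high_tier_demand = 0.50     # assumed share of buyers wanting the top SKU

high_supply = monthly_chips * high_bin_rate      # 200,000
high_demand = monthly_chips * high_tier_demand   # 500,000
shortfall = max(0.0, high_demand - high_supply)  # 300,000

# Chasing the shortfall with more wafer starts creates (1/bin_rate - 1)
# low-tier chips per extra high-tier chip, i.e. 4x at a 20% bin rate.
extra_low_tier = shortfall / high_bin_rate - shortfall
print(f"high-tier shortfall:           {shortfall:,.0f}")
print(f"redundant low-tier units made: {extra_low_tier:,.0f}")  # 1,200,000
```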
 
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
There is no problem when the high-end model is $150–$200 more expensive than the base model.

The PS4 Pro costs $100 more, and its market share is only 20% of the total PS4 family.
 
I suppose there could be 52 CUs, but the wider you go, the more challenges you have across the board in widening all areas of the chip to feed so many CUs; bandwidth comes to mind here.
You don't need to be concerned with CUs that are put there for redundancy and will be disabled. AMD could make a GPU with 64 CUs in total and reserve 16 for redundancy, and they wouldn't have to design the rest of the chip for any more than 48 CUs.
You do have to be concerned with ballooning die area and cost per chip, but you don't have to worry about power, bandwidth, fillrate, L2 cache, etc. for the units that are disabled.
Sony didn't design Liverpool's bandwidth to feed all 20 CUs present in the SoC, or Neo's to feed all 40.
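To put a rough number on the die-area cost of that kind of redundancy, a quick sketch; both area figures are invented for illustration:

```python
# Die-area cost of the 64-total / 48-active redundancy example above.
# Both area figures are assumptions, not real Navi measurements.
total_cus, active_cus = 64, 48
cu_area_mm2 = 2.5        # assumed area per CU
base_area_mm2 = 200.0    # assumed non-CU area (front end, caches, IO, PHYs)

die_area = base_area_mm2 + total_cus * cu_area_mm2   # 360 mm^2
dark_area = (total_cus - active_cus) * cu_area_mm2   # 40 mm^2
print(f"{die_area:.0f} mm^2 total, {dark_area:.0f} mm^2 "
      f"({dark_area / die_area:.0%}) spent on dark redundancy")
```

So the disabled CUs cost silicon, and therefore cost per chip, but nothing in power or bandwidth provisioning.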

when you have a high number of CUs (wide), you also can't have a super high clock rate while keeping it nice and cool. Wide often means slow, and if wide is fast, then it's running hot and drawing a lot of power. At least that's my understanding.
The clock-efficiency curves don't seem to change much with larger GPUs (GP102 comes to mind).
Larger GPUs tend to be clocked lower because more execution units mean more heat in a concentrated spot, not because their optimum efficiency sits at a lower clock.
My suggested 48-active-CU Navi at 1700MHz would generate 15-20% more heat/power than the ~85W of a Navi 10 at 1700MHz, which is ~100W.
Put a 3.2GHz 8-core Zen 2 in there and you have a 150W SoC, which is really nothing out of the ordinary (that's an RX 480). I don't think we'll see SoCs with a TDP lower than 150W for next gen, and it'll probably be closer to 200W.
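A quick sanity check of that arithmetic; the Navi 10 reference and the scaling come from the post, while the CPU and IO figures are assumptions:

```python
# Sanity check of the ~150W SoC estimate above. The 85W Navi 10
# figure and the +20% CU scaling are from the post; the CPU and
# miscellaneous figures are assumptions.
navi10_watts = 85.0
gpu_watts = navi10_watts * 1.20   # 48 vs 40 active CUs -> ~102W
cpu_watts = 45.0                  # assumed 8-core Zen 2 @ 3.2GHz
misc_watts = 10.0                 # assumed memory PHYs, IO, etc.

print(f"estimated SoC power: {gpu_watts + cpu_watts + misc_watts:.0f} W")  # ~157W
```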

Yes. That's about the maximum possible average power draw for the entire APU, unless they replace their elaborate heat-pipe cooling system with an even larger one.
That's obviously not a cooling system for a 100W SoC.
This is a cooling system for a 120W SoC:

[image: photo of a 120W SoC's cooling system]

The 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games. That's for the whole system, which also includes PSU, memory, and hard disk. The PSU is rated for 300 W.
100W is the consumption of the Pro when running non-patched games, which disables half of the GPU and lowers its clocks.
With properly optimized demanding games (like God of War) the Pro consumes up to 170W (with Sony actually declaring 165W average for the console).
So enabling the other half of the GPU with higher clocks puts an extra 65W on the console.
Take away some power for PSU efficiency and higher clocks, and you probably get 45-50W for enabling half of the GPU.

It sure looks like the GPU in the Pro is consuming close to 100W.
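Spelling out that chain of arithmetic; the wall figures come from the post, while the PSU efficiency and the clock-bump cost are assumptions:

```python
# PS4 Pro GPU power, working from the figures quoted above.
# Base mode: one half-GPU at low clock draws X watts at the rail.
# Pro mode: both halves at high clock draw 2*(X + bump) watts.
# So (pro - base) at the rail = X + 2*bump.
wall_pro_mode = 165.0    # Sony's declared average for a demanding patched game
wall_base_mode = 100.0   # unpatched game: half the GPU off, lower clocks
psu_efficiency = 0.85    # assumed PSU efficiency at this load
clock_bump = 5.0         # assumed extra draw per half from the higher clock

extra_at_rail = (wall_pro_mode - wall_base_mode) * psu_efficiency  # ~55W
half_gpu_low_clock = extra_at_rail - 2 * clock_bump                # ~45W
full_gpu_pro_clock = 2 * (half_gpu_low_clock + clock_bump)
print(f"full GPU at Pro clocks: ~{full_gpu_pro_clock:.0f} W")      # ~100W
```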
 
There is no problem when the high-end model is $150–$200 more expensive than the base model.
The $100-more-expensive PS360 consoles sold massively better than their cheaper models. Why would consumers who overwhelmingly preferred a $600 PS3 over a $500 one suddenly skew the other way just because the gap is $50 wider, say a $400 PS5 versus a $550 PS5+?

As for the PS4 Pro, it's a mid-gen refresh, and its sales aren't necessarily going to parallel the sales of a new generation. E.g., the people buying PS4 Pros now are possibly the people who bought a PS4 at launch and want a better machine, and when they have the choice of a better PS5 next gen, they'll want the Full Experience. So where we saw 10 million core-gaming PS4 buyers pick up 10 million PS4 Pros when the Pro was released, we'd see 10 million core gamers buying the PS5+ and none buying the PS5 Lite.

I don't think there's any way to forecast the sales beyond market research where you present the two possibilities to a large audience and get their opinions.
 
I don't think there's any way to forecast the sales beyond market research where you present the two possibilities to a large audience and get their opinions.
Agree.

If it was GPU-chiplet-based of some sort, it wouldn't really be an issue, but I'm not expecting that.
So scaling up production of the top model could be an issue if the lower model was the same SoC, binned with disabled CUs.
 
You don't need to be concerned with CUs that are put there for redundancy and will be disabled. AMD could make a GPU with 64 CUs in total and reserve 16 for redundancy, and they wouldn't have to design the rest of the chip for any more than 48 CUs.
You do have to be concerned with ballooning die area and cost per chip, but you don't have to worry about power, bandwidth, fillrate, L2 cache, etc. for the units that are disabled.
Sony didn't design Liverpool's bandwidth to feed all 20 CUs present in the SoC, or Neo's to feed all 40.
Correct, sorry, I wasn't being clear. What I mean is that I don't think AMD would build CUs purely for redundancy. There will come a point where you get a good number of perfect chips but suddenly don't have a design that can feed them.
IMO, it's always designed for the maximum possible performance and you bin downwards, which puts an eventual upper limit on CUs.

The clock-efficiency curves don't seem to change much with larger GPUs (GP102 comes to mind).
Larger GPUs tend to be clocked lower because more execution units mean more heat in a concentrated spot, not because their optimum efficiency sits at a lower clock.
My suggested 48-active-CU Navi at 1700MHz would generate 15-20% more heat/power than the ~85W of a Navi 10 at 1700MHz, which is ~100W.
Put a 3.2GHz 8-core Zen 2 in there and you have a 150W SoC, which is really nothing out of the ordinary (that's an RX 480). I don't think we'll see SoCs with a TDP lower than 150W for next gen, and it'll probably be closer to 200W.
I have no issues with your thoughts on this; when I posted earlier I was using your numbers as a baseline for my own thoughts on what it could be, not necessarily a rebuttal of yours.

Though, on the discussion of redundancy: when I think about die shots, Nvidia, and the current consoles, we know the 20xx-series chips are large. Aside from CUs that could be set aside as redundant, what about their tensor cores or their RT cores? What if a silicon defect in those areas were detrimental enough to throw the chip away? Perhaps the chip wouldn't have needed to be that large, but they had to include redundancy to bring costs down.

edit:
[image: annotated die shot of an NVIDIA RTX Turing GPU]

This labelling is clearly wrong; everything looks the same! There are 24 blocks in a section, and there are 6 sections that all look the same. Is there a diagram that breaks down what the components are, like we have for the consoles?

So then it got me thinking that perhaps there are similar considerations on the console side of things. If there is additional silicon dedicated to accelerating other things, would it also require redundancy?

This is where, if we assume a 10+ TF GPU with RT and, say, AI accelerators, perhaps chiplets are the way to go. I'm just having a hard time seeing how so much can be packed into a chip with little regard for silicon defects while still hitting the price point where consoles normally land.
 