Considering that you can't salvage a console SOC like you can a GPU or CPU
I thought that wasn't written in stone yet. Haven't there been rumors of both companies using salvaged parts in a cheaper SKU?
Tommy McClain
1 - Assumed these 4 CUs would need to be taken out of the total 48 CUs I proposed, since my calculations were for power consumption and disabled CUs wouldn't consume any power.
Just historical precedent: 4 shader engines, 1 redundant CU each. I suppose there could be 52 CUs, but the wider you go, the more challenges you face in widening every other area of the chip to feed that many CUs; bandwidth comes to mind here. And eyeballing it: when you have a high number of CUs (wide), you also can't have a very high clock rate while keeping the chip cool. Wide often means slow, and if wide is fast, then it's running hot and drawing a lot of power. At least this is my understanding. MI60 uses HBM, IIRC. That's a lot closer to home than off-chip GDDR6 and a significant increase in available bandwidth.
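To make the wide-versus-fast tradeoff concrete, here's a back-of-envelope sketch in Python. The P ∝ CUs × f × V² dynamic-power model is a textbook simplification, and the (clock, voltage) operating points are made-up illustrative values, not Navi measurements:

```python
# Rough dynamic-power model: P ~ n_cus * f * V^2 (capacitance folded into a constant).
# The (f, V) operating points below are made-up illustrative values, not real Navi bins.

def relative_power(n_cus, f_ghz, v):
    return n_cus * f_ghz * v**2

# A narrow-and-fast config vs a wide-and-slow one at the same throughput (n_cus * f = 80):
narrow = relative_power(40, 2.0, 1.10)   # 40 CUs at 2.0 GHz needs higher voltage
wide   = relative_power(50, 1.6, 0.95)   # 50 CUs at 1.6 GHz can run at lower voltage

print(f"narrow/fast: {narrow:.1f} (arbitrary units)")
print(f"wide/slow:   {wide:.1f} (arbitrary units)")
# Same n_cus * f product, but the wide config draws ~25% less power
# because voltage enters quadratically.
```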
Can't the SoC or GPU chiplet be designed with 52 CUs and then have 4 CUs disabled?
Possibly, and this is something I don't know, so I just used historical precedent. I have no idea how they will do this for Navi. I don't think you can just put in as many CUs as you want; there is a sweet spot here too that is being overlooked.
Besides, Navi now uses dual-CUs, so if they want redundancy for every shader engine, wouldn't they actually need to write off 8 CUs (which is now a huge amount of transistors / die area)?
Maybe they don't want to implement redundancy in the same way this time.
2 - Decided that 1700MHz was some sort of baseline for 7nm Navi (which it isn't, because all Navi 10 chips so far clock way above that), and that 200MHz would need to be taken off said baseline.
Arbitrary numbers pulled from thin air. It just felt like a reasonable MHz drop when we consider yield, the number of CUs you are suggesting, thermals, and the supporting costs for a device like this.
Back in 2012, the highest-clocked Pitcairn and Bonaire cards ran at 1GHz. Then Liverpool had its GPU clocked at 800MHz and Durango at 853MHz.
The highest-clocked Navi cards seem able to sustain over 1850MHz, or 1950MHz if we take the anniversary edition into consideration. Why are we assuming AMD needs to take 200MHz off 1700MHz?
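For what it's worth, applying the desktop-to-console clock ratios from those 2012 parts to Navi's sustained clocks gives the following (a quick sketch using only the numbers above; assuming the same derating carries over to 7nm is my own leap):

```python
# Desktop-to-console clock derating, using the figures cited above.
pitcairn_desktop = 1000  # MHz, highest-clocked Pitcairn/Bonaire cards (2012)
navi_desktop = 1850      # MHz, sustained by the highest-clocked Navi cards

for name, console_mhz in [("Liverpool", 800), ("Durango", 853)]:
    ratio = console_mhz / pitcairn_desktop
    print(f"{name}: {ratio:.0%} of desktop -> {navi_desktop * ratio:.0f} MHz on Navi")

# Liverpool: 80% of desktop -> 1480 MHz on Navi
# Durango: 85% of desktop -> 1578 MHz on Navi
# For comparison, the proposed 1700 - 200 = 1500 MHz sits inside that range.
```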
Sure. I'm personally a proponent of a two-tier launch, which would allow them to be more aggressive with chip yields.
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
That's an interesting point. But it's also a point based on how launches have historically gone.
What's a lot? 100W for the GPU part?
Yes. That's about the maximum possible average power draw for the entire APU, unless they replace their elaborate heat-pipe cooling system with an even larger one.
Do you think the Xbox One X or the PS4 Pro have less power dedicated to the GPU?
The 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games. That's for the whole system, which also includes the PSU, memory, and hard disk. The PSU is rated for 300 W.
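As a rough sanity check on how much of that 150 W could actually reach the APU, here's a sketch; the PSU efficiency and per-component overheads are illustrative guesses on my part, not measured figures:

```python
# Rough breakdown of PS4 Pro peak wall draw into component budgets.
# All component figures below are illustrative assumptions, not measurements.
wall_draw = 150          # W, peak average for demanding games (from above)
psu_efficiency = 0.85    # assumed; losses occur between the wall and the rails

dc_budget = wall_draw * psu_efficiency
overhead = {"GDDR5": 15, "HDD + optical": 8, "fan/misc": 5}  # W, guesses

apu_budget = dc_budget - sum(overhead.values())
print(f"DC budget:  {dc_budget:.0f} W")
print(f"APU budget: {apu_budget:.0f} W")  # ~100 W left for the whole APU
```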
At max power it's really a jet...
I think 150W (or 200W with better cooling) is also reasonable for PS5 / Xbox Scarlett...
Fehu is not so wrong...
I'll say this: Fehu is definitely wrong. At least one console will have a double-digit-TF GPU. Neither will have more than 20GB of total RAM.
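For reference, the double-digit-TF claim is easy to sanity-check with the standard GCN/RDNA throughput formula (FLOPs = CUs × 64 lanes × 2 ops/clock × clock); the CU/clock combinations below are illustrative points, not predictions:

```python
# Peak FP32 throughput for a GCN/RDNA-style GPU:
# 64 shader lanes per CU, 2 FLOPs per lane per clock (FMA).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

# Illustrative CU/clock combinations (not predictions):
for cus, clock in [(40, 1.8), (48, 1.7), (52, 1.6), (56, 1.5)]:
    print(f"{cus} CUs @ {clock} GHz -> {tflops(cus, clock):.2f} TF")
# 40 CUs @ 1.8 GHz -> 9.22 TF
# 48 CUs @ 1.7 GHz -> 10.44 TF
# 52 CUs @ 1.6 GHz -> 10.65 TF
# 56 CUs @ 1.5 GHz -> 10.75 TF
```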
That's an interesting point. But it's also a point based on how launches have historically gone.
It's more a case of this never having been tried before, so no one knows what the market would choose. The closest parallel is the XB360, which launched with a cheaper Arcade alongside the normal version, and no one wanted the cheaper Arcade, but it was poorer value. Oh, also the 60 GB PS3 massively outsold the cheaper 20 GB PS3.
One problem with that two-tier model is what if early adopters all want the more powerful version? You're stuck with underselling low-tiered inventory and no capacity to improve production of the preferred model without making even more redundant low-tier units.
There is no problem when the high-end model is $150~$200 more expensive than the base model.
I suppose there could be 52 CUs, but the wider you go, the more challenges you face in widening every other area of the chip to feed that many CUs; bandwidth comes to mind here.
You don't need to be concerned with CUs that are put there for redundancy and will be disabled. AMD could make a GPU with 64 CUs in total to reserve 16 CUs for redundancy, and they wouldn't have to design the rest of the chip for any more than 48 CUs.
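A toy defect model shows why reserving spares pays off (a sketch; the 2% per-CU defect probability is an arbitrary illustrative number, and real foundry yield models are far more involved):

```python
# Toy yield model: each CU independently defective with probability p.
# A chip is sellable if at most `spares` CUs are bad. Binomial math only;
# p = 0.02 is an arbitrary illustrative value, not a real 7nm defect rate.
from math import comb

def sellable_fraction(total_cus, spares, p=0.02):
    return sum(comb(total_cus, k) * p**k * (1 - p)**(total_cus - k)
               for k in range(spares + 1))

print(f"48 CUs, 0 spares:  {sellable_fraction(48, 0):.1%}")   # ~37.9%
print(f"52 CUs, 4 spares:  {sellable_fraction(52, 4):.1%}")   # ~99.6%
print(f"64 CUs, 16 spares: {sellable_fraction(64, 16):.1%}")  # ~100.0%
```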
when you have a high number of CUs (wide), you also can't have a very high clock rate while keeping the chip cool. Wide often means slow, and if wide is fast, then it's running hot and drawing a lot of power. At least this is my understanding.
The clock-efficiency curves don't seem to change much with larger GPUs (GP102 comes to mind).
Yes. That's about the maximum possible average power draw for the entire APU, unless they replace their elaborate heat-pipe cooling system with an even larger one.
That's obviously not a cooling system for a 100W SoC.
The 'typical' draw for the PS4 Pro seems to be around 100 W (average) when gaming, and the maximum is around 150 W (average) for some demanding games.
100W is the consumption of the Pro when running non-patched games, which disables half of the GPU and lowers its clocks.
There is no problem when the high-end model is $150~$200 more expensive than the base model.
The $100 more expensive PS360 consoles sold massively more than their cheaper models. Why would 90% of consumers prefer a $600 PS3 to a $500 one, but with a $150 gap, say a $400 PS5 versus a $550 PS5+, suddenly skew so much further the other way?
I don't think there's any way to forecast the sales beyond market research where you present the two possibilities to a large audience and get their opinions.
Agree.
You don't need to be concerned with CUs that are put there for redundancy and will be disabled. AMD could make a GPU with 64 CUs in total to reserve 16 CUs for redundancy, and they wouldn't have to design the rest of the chip for any more than 48 CUs.
Correct, sorry I wasn't being clear. I mean, I don't think AMD would build CUs purely for redundancy. There will come an eventuality where you do get a good number of perfect chips, but then suddenly you don't have a design to feed them.
You do have to be concerned with ballooning die area and cost-per-chip, but you don't have to worry about power, bandwidth, fillrate, L2 cache, etc. for the units that are disabled.
Sony didn't design Liverpool's bandwidth to feed the 20 CUs that are present in the SoC, or Neo's to feed 40 CUs.
The clock-efficiency curves don't seem to change much with larger GPUs (GP102 comes to mind).
I have no issues with your thoughts on this; when I posted earlier I was using your numbers as a baseline for my own thoughts on what it could be, not necessarily as a refutation of yours.
Larger GPUs tend to be clocked lower because more execution units means more heat in a concentrated spot, not because their optimum efficiency sits at a lower clock.
My suggestion of a Navi with 48 active CUs at 1700MHz would draw 15-20% more power than the ~85W of a Navi 10 at 1700MHz, which comes to ~100W.
Put the 3.2GHz 8-core Zen 2 in there and you have a 150W SoC, which is really nothing out of the ordinary (that's an RX 480). I don't think we'll see SoCs with a TDP lower than 150W for next-gen, and it'll probably be closer to 200W.
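Laying that arithmetic out explicitly (a sketch; the linear scaling with CU count and the ~48 W Zen 2 figure are assumptions used for illustration):

```python
# Scaling the Navi 10 figure quoted above to 48 active CUs.
navi10_power_w = 85   # W, Navi 10 at 1700 MHz (figure used above)
navi10_cus = 40
target_cus = 48

# Assume power scales linearly with active CUs at a fixed clock/voltage.
gpu_power = navi10_power_w * target_cus / navi10_cus  # = 102 W, i.e. +20%

zen2_power = 48       # W, assumed for a 3.2 GHz 8-core Zen 2 (illustrative)
print(f"GPU: {gpu_power:.0f} W, SoC: {gpu_power + zen2_power:.0f} W")
# GPU: 102 W, SoC: 150 W
```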