Real-life performance seems on par with GDDR RAM, and since you no longer need traces out to memory you can also reduce the layer count of your board, which should reduce the cost of the console considerably. As for cooling, I don't know that it will be a huge issue on a console SoC if they can manage it on Fury and Vega boards with GPUs running at much higher wattage.

Yes, maybe HBM prices will come down with volume, but real-life performance so far has not been that exciting, so it will probably never be cheap enough... I also have some doubts about long-term reliability: too much heat in too little space.
A much stronger CPU would not necessarily mean more 60fps games... there were 30fps games on PS1 and they still exist on PS4. You can expect more complex worlds, but that's all.
The framerate is tied to the developer's goals, not the hardware... I also doubt that we will see much more complex AI if that translates into much more difficult games. Developers want to sell their games to the mass market.
Sony and MS know that and, once again, they will likely put their best efforts on the GPU side. People primarily want pretty graphics.
But if we reach a point where 30fps games hit strongly diminishing visual returns, we could see more 60fps games. Let's not forget that 60fps games tend to be less complex, though.
"Each new generation brings with it a new set of capabilities: CPUs, GPUs and the like but also controllers and new types of display devices. If you go back to the 1970s, it was colour TV. That was the new display device," Cerny told us. "These capabilities unlock new potential for the type of games that can be created. For example, increased CPU power might not seem like a game-changer but it actually allows for much better enemy AI, more enemy characters, better world simulation and a whole host of other evolutions in the game experience."
So a few things to think about:
1. I think something like the Samsung low-cost HBM option could be viable for consoles and APUs over the mid term.
2. Remember the HBCC: AMD (like everyone else) is well aware that memory performance scaling isn't keeping up with SoC performance scaling, especially when cost ($, power, etc.) is considered. I think the next consoles will have at least two tiers of memory, maybe even three:
So something like:
1. A bank of very high-throughput memory, 2-4GB: 1 stack of HBM3 or 2 stacks of HBM2/low-cost HBM, ~400-500GB/s. GPU high-throughput target.
2. 8GB of low/middle-of-the-road GDDR6 at the cost-per-GB sweet spot, 128-bit bus? ~256GB/s. GPU low-throughput target.
3. 8GB of low/middle-of-the-road DDR5 at the cost-per-GB sweet spot, 128-bit bus? ~100GB/s. Primary CPU target.
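For reference, those throughput figures fall out of the usual peak-bandwidth formula (bus width in bits × per-pin data rate in Gb/s ÷ 8). The per-pin rates below are my assumed mid-range speeds for illustration, not confirmed console specs:

```python
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gb_s: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gb_s / 8

# Tier 1: two HBM2 stacks, 1024-bit interface each, at 1.6-2.0 Gb/s per pin
hbm_low  = 2 * bandwidth_gb_s(1024, 1.6)   # ~409.6 GB/s
hbm_high = 2 * bandwidth_gb_s(1024, 2.0)   # 512.0 GB/s -> the ~400-500 GB/s range

# Tier 2: GDDR6 on a 128-bit bus at 16 Gb/s per pin
gddr6 = bandwidth_gb_s(128, 16.0)          # 256.0 GB/s

# Tier 3: DDR5 on a 128-bit bus at 6.4 Gb/s per pin (DDR5-6400)
ddr5 = bandwidth_gb_s(128, 6.4)            # ~102.4 GB/s
```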
Have two modes of operation (dev-selectable on game load, etc.): 1. the HBCC handles data movement, maybe with the possibility of giving hints; 2. each memory type is mapped to address ranges and devs have full control.
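As a toy illustration of those two modes (purely a sketch; the real HBCC works at page granularity in hardware, and the class/method names here are made up), mode 1 behaves like an LRU cache over the fast tier, while mode 2 just respects whatever placement the developer chose:

```python
from collections import OrderedDict

class TieredMemory:
    """Toy model of the two dev-selectable modes: 'hbcc' auto-migrates
    touched pages into the fast tier with LRU eviction; 'manual' leaves
    placement entirely to pages the developer mapped fast."""

    def __init__(self, fast_pages: int, mode: str = "hbcc"):
        self.fast_pages = fast_pages
        self.mode = mode
        self.fast = OrderedDict()  # page -> True, ordered by recency

    def touch(self, page: int) -> str:
        """Access a page; returns which tier served the access."""
        if self.mode == "manual":
            return "fast" if page in self.fast else "slow"
        if page in self.fast:              # hit: refresh recency
            self.fast.move_to_end(page)
            return "fast"
        if len(self.fast) >= self.fast_pages:
            self.fast.popitem(last=False)  # evict least-recently-used page
        self.fast[page] = True             # migrate page into the fast tier
        return "slow"                      # first touch came from slow memory

    def map_fast(self, page: int) -> None:
        """Manual mode: developer pins a page into the fast tier."""
        self.fast[page] = True
```

Usage: with `TieredMemory(2)`, the first `touch(1)` is served from slow memory and migrated; a second `touch(1)` hits the fast tier; touching two more pages evicts page 1 again.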
It looks to me like AMD already has a lot of the infrastructure to facilitate this:
1. The interconnect that supports multiple memory controllers
2. The memory controllers and the packaging technologies needed
3. The black magic to decide when to move things around
What would be interesting in this kind of solution is the exact topology. Ideally, all these memory types would be connected directly to the global fabric, but that doesn't align with what we normally see in standard dGPU topologies. Given that GPUs handle latency well, though, maybe it doesn't matter either way, so long as the interconnects can scale throughput.
I don't see consoles getting a large amount of a single high-performance memory type, regardless of what it is; it's too expensive across the board. By making a high-cost investment in a small amount of memory, you can take the fish that John West rejects for the bulk of your memory requirements.
My assumption is that DDR5 will be significantly cheaper than GDDR6; if it's not, then 16GB of 256-bit GDDR6 makes more sense.
Is it possible to build Ryzen on a TSMC process?

Zen 2 is going to be fabbed at both GloFo and TSMC, so yes.
The question is sort of "backward" to me; it will mostly come down to whatever AMD can provide.
I can't see the CPU being a "focus". The thing is that AMD's CPUs have made strides in serial performance and power efficiency. The Zen architecture is still being worked on, and it already wipes the floor with the Jaguar cores, so I see no point in either Sony or MSFT sticking with that old architecture. Clearly AMD could use lower-power, tinier, cheaper cores, but it doesn't look like they have the means to support the R&D of a second line of CPUs at the moment, whatever their financials.
I have more "concerns" (not really; the architecture is not yet showing its age) about AMD catching up with Nvidia's latest GPUs on a lot of metrics (perf per watt, per GB/s, geometry, and what about "AI" cores?).
Well, x86 CPU cores can run x86 code, so getting BC up and running should not prove a "huge" issue now (not a trivial one either). I definitely agree with your point: Sony will be behind MSFT on that front and will have to play catch-up and scale up its BC offering.

Backward compatibility is not the question... it will come anyway. It's forward compatibility that is the real question, especially for Sony: losing the PS4 installed base would be bad, especially now that online is growing and monthly subscription fees will change the market (and in the case of MS, that is already here). So I expect the vanilla PS4 at 720p even for most future games until at least 2025. Also consider that with the crypto boom, silicon is not getting cheap any time soon.
I don't think HBM and GDDR and DDR together makes sense. It would be cheaper and simpler to use more, faster HBM (8GB, 1TB/s) and lots of relatively fast DDR (32GB, 120GB/s). Maybe some NAND flash as well.
8GB @ 1TB/s is 4 stacks of HBM2, and that is not cheap; maybe 2 stacks of HBM3, if it exists as a mass-manufactured product in time. That's going to be way more expensive than my two 1-2-Hi-stack proposal. Also, I said that providing both DDR and GDDR comes down to the cost difference between the two, and to taking the advantage the HBCC/HBM gives you to really drive down the spec/price of the GDDR/DDR.
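The rough arithmetic behind those stack counts, assuming ~256 GB/s per HBM2 stack and, speculatively, ~512 GB/s per future HBM3 stack:

```python
import math

def stacks_needed(target_gb_s: float, per_stack_gb_s: float) -> int:
    """Minimum number of stacks to hit a target aggregate bandwidth."""
    return math.ceil(target_gb_s / per_stack_gb_s)

# 1 TB/s out of HBM2 at ~256 GB/s per stack:
print(stacks_needed(1024, 256))   # 4
# The same target with a hypothetical ~512 GB/s HBM3 stack:
print(stacks_needed(1024, 512))   # 2
```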