Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Five dies on an interposer: four 8 Gbit DRAM dies and the system SoC.

Cheers

Correct me if I'm wrong but I thought the main cost associated with HBM is the placement of the dies on the interposer, not the stacked memory dies themselves.
 
What are the chances of XPoint making it into next-gen consoles? XPoint is supposedly cheaper than RAM, so you could get 128 GB of XPoint instead of 32 GB of DDR4 or something like that. Then you could have 8 GB of HBM caching the 128 GB of XPoint and then a regular HDD for mass storage.
IMO, doubtful usability in a console. In its current offering, it has nowhere near the endurance necessary to be used as RAM (30 full writes per day). And it's very expensive compared to flash. But as a niche product, I'd love to try it on my NoSQL database at work...
 
What are the chances of XPoint making it into next-gen consoles? XPoint is supposedly cheaper than RAM, so you could get 128 GB of XPoint instead of 32 GB of DDR4 or something like that. Then you could have 8 GB of HBM caching the 128 GB of XPoint and then a regular HDD for mass storage.

Intel's first Optane SSD at 375 GB goes for $1500.
 
Optane is out of the question. It can't replace DRAM because of performance, and it can't replace flash because of price. Optane's $/GB is currently 50-100% of DRAM prices; flash is a tenth of that or even less. A 32GB Optane cache would be >$100, while 128GB of flash would cost ~$35.
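Back-of-the-envelope, using the 375 GB / $1500 Optane figure above and assuming roughly $0.27/GB for commodity flash (my own ballpark, not a quoted price):

Code:
optane_per_gb = 1500 / 375                # ~$4/GB, from the 375 GB / $1500 SSD above
flash_per_gb = 0.27                       # assumed commodity NAND price, not quoted anywhere
optane_cache_32gb = 32 * optane_per_gb    # = $128, hence ">$100"
flash_128gb = 128 * flash_per_gb          # ~= $35
print(optane_cache_32gb, flash_128gb)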

There is also the question of whether it gains enough traction to even exist in 2021.

Cheers
 
Optane is out of the question. It can't replace DRAM because of performance, and it can't replace flash because of price. Optane's $/GB is currently 50-100% of DRAM prices; flash is a tenth of that or even less. A 32GB Optane cache would be >$100, while 128GB of flash would cost ~$35.

There is also the question of whether it gains enough traction to even exist in 2021.

Cheers
You have to take into account that Optane is only expensive now because it is new; once Micron comes out with their own XPoint implementation, I expect prices to fall significantly. New tech is always expensive until production ramps and demand subsides, and I expect both to have happened by 2020ish. What matters is the price and performance of XPoint when the console comes out, not its price and performance now, and both can change drastically.

Given how Intel thinks some servers can switch to using Optane DIMMs, I can see XPoint being a decent tech to use, especially with its non-volatile nature, which means you can resume games from powered-off states almost instantly.

This all depends on how console designers want to approach memory. We are at a point where it seems like an extra tier in the memory hierarchy would benefit everyone; whether it gets filled by XPoint or NAND or something else is still up in the air for tech 3-4 years away.
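To make the extra tier concrete, here's a minimal, purely illustrative read-through sketch of that kind of hierarchy (a small fast pool caching a larger non-volatile pool, with an HDD behind it); all the names are hypothetical and this isn't anyone's actual API:

Code:
# Illustrative only: fast pool (think 8 GB HBM) caching a larger
# non-volatile pool (think 128 GB XPoint), with an HDD behind it.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, fast_capacity, slow_store, backing_store):
        self.fast = OrderedDict()          # fast pool, LRU eviction
        self.fast_capacity = fast_capacity
        self.slow = slow_store             # non-volatile pool (dict-like)
        self.backing = backing_store       # mass storage (dict-like)

    def read(self, key):
        if key in self.fast:               # hit in the fast pool
            self.fast.move_to_end(key)
            return self.fast[key]
        data = self.slow.get(key)          # miss: try the non-volatile pool
        if data is None:
            data = self.backing[key]       # last resort: the HDD
            self.slow[key] = data          # keep a copy in the big pool
        self.fast[key] = data              # promote into the fast pool
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict least-recently-used entry
        return data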
 
Given how Intel thinks some servers can switch to using Optane DIMMs, I can see XPoint being a decent tech to use, especially with its non-volatile nature, which means you can resume games from powered-off states almost instantly.

While I like the idea of Optane, this wouldn't be a good choice for consoles, which are cost-constrained.

Considering that you can already accomplish this via a sleep state, all you would save with Optane for instant resume of a game is potentially a few watts (low single digits to keep refreshing DRAM pages) in an off state versus a sleep state. And that would come at a significant cost, as Optane wouldn't be suitable to replace memory, so it would be an additional cost on top of what you still need to include.
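For scale, assuming ~3 W to keep DRAM in self-refresh, ~20 hours a day asleep and $0.12/kWh (all of these are my own assumed numbers), the difference comes to a couple of dollars a year:

Code:
watts_saved = 3                                       # assumed DRAM self-refresh draw
kwh_per_year = watts_saved * 20 * 365 / 1000          # ~21.9 kWh/year
cost_per_year = kwh_per_year * 0.12                   # ~$2.60/year at $0.12/kWh
print(kwh_per_year, cost_per_year)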

Regards,
SB
 
Correct me if I'm wrong but I thought the main cost associated with HBM is the placement of the dies on the interposer, not the stacked memory dies themselves.

IMO it's the stacks themselves. I guess getting away from them could reduce costs significantly. If Fud's estimates of what Nvidia and AMD are paying for a mere 4GB stack of RAM are correct, then placing the dies individually on the interposer could be a much less expensive solution, while still getting the performance needed (high bandwidth).
 
AMD went with twin 8GB stacks for Vega vs quad 4GB, leading to a smaller interposer footprint.
 
AMD went with twin 8GB stacks for Vega vs quad 4GB, leading to a smaller interposer footprint.

That's also fewer stacks as well as a smaller interposer footprint. So it may not really tell us about the cost balance of the interposer vs stacks for HBM (assuming that is what you were answering?).
 
That's also fewer stacks as well as a smaller interposer footprint.

That's... what I wrote. o_O

So it may not really tell us about the cost balance of the interposer vs stacks for HBM (assuming that is what you were answering?)

There's bound to be a crossover point, but evidently they prefer the twin 8GB stacks and smaller interposer along with the lower bandwidth. Obviously, half-width memory IO takes up less die space, and a smaller interposer has a lower cost.

It's interesting that GV100, in a much higher price bracket, opts for Ludicrous Speed. Why they didn't offer a 32GB variant might just be SKU strategy at the moment.

---

Looking at Fiji though, they had no option but to go with 4 stacks to get the still-mediocre 4GB.

The bandwidth thing is curious, where Vega is 13TF & <500GB/s.
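For reference, per-stack bandwidth is just pin count times per-pin data rate; assuming Fiji's HBM1 runs at 1 Gbps per pin and Vega's HBM2 at roughly 1.9 Gbps per pin, the totals line up with 512 GB/s and a bit under 500 GB/s:

Code:
def stack_bw_gbs(pins=1024, gbps_per_pin=1.0):
    return pins / 8 * gbps_per_pin                 # GB/s per stack

fiji_total = 4 * stack_bw_gbs(gbps_per_pin=1.0)    # 4 x HBM1 stacks -> 512 GB/s
vega_total = 2 * stack_bw_gbs(gbps_per_pin=1.89)   # 2 x HBM2 stacks -> ~484 GB/s
print(fiji_total, vega_total)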
 
The bandwidth thing is curious, where Vega is 13TF & <500GB/s.

From Sebbbi's previous posts in this thread on the subject, I kinda got the sense that with the RGBA8 format being more dominant, this level of memory bandwidth should be enough (~410GB/s needed to fully saturate the ROPs).
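The ~410 GB/s figure falls out of a simple worst-case fill-rate calculation; assuming 64 ROPs at roughly a 1.6 GHz clock writing 4-byte RGBA8 pixels (blending would add reads on top of this):

Code:
rops = 64                 # assumed ROP count
bytes_per_pixel = 4       # RGBA8
clock_hz = 1.6e9          # assumed core clock
peak_write_gbs = rops * bytes_per_pixel * clock_hz / 1e9   # ~410 GB/s
print(peak_write_gbs)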
 
Loading data straight from its source to memory is ideal.
What would be the point of an extra memory pool?
Copy my resources twice [disk -> slow RAM -> usable RAM]?
Introduce more latency [when loading from disk]?
Use it as a disk cache? [In that case I think an SSD/HDD hybrid would be better than putting extra burden on the devs' shoulders...]


I think the point of the extra (slower) memory pool would be to have it exclusively for OS tasks, leaving a larger portion of the fast memory pool for the developers to use.
The Pro got a 512MB increase for devs thanks to the southbridge RAM increase.
Sure, some OS stuff needs to go through the GPU and the fast memory (e.g. compressing the framebuffer into video for streaming/sharing), so I doubt devs will ever get all of the console's fast memory to themselves. But if there were an extra single 64-bit DDR4 channel (even if accessible only by the CPU), I guess a significantly larger chunk of the fast memory could be made available for developers.
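For a sense of scale, one 64-bit DDR4 channel is only on the order of 20 GB/s; e.g. assuming DDR4-2400 (other speed grades scale linearly):

Code:
bus_width_bits = 64
transfers_per_sec = 2400e6                                  # assumed DDR4-2400
peak_gbs = bus_width_bits / 8 * transfers_per_sec / 1e9     # 19.2 GB/s
print(peak_gbs)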

IMO, looking at how HBM seems to be the inevitable future for high-performance devices in the mid/long term, together with its relatively high price/GB, I think both next-gen consoles will make use of slow+fast memory pools. Again, with devs only having access to the fast pool.



Epic "forced" Xbox executives to go from 256MB to 512MB of unified ram in X360, while Randy Pichford was instrumental in forcing Sony to go from 4GB to 8GB of ram in PS4.
I don't believe for a second that Randy Pitchford ever had or will have that kind of leverage on Sony. I do think 4GB was on the table for a long time, but purely due uncertainty of the available density of GDDR5 chips when the PS4 went into mass production. I really find it hard to believe any ICE dev saying "4GB is quite enough, let's just settle with it".
Unlike e.g. Nintendo who seemingly wanted an almost exact copy/paste of Shield TV PCB with its 2*1.5GB LPDDR4 chips.
 
There are also services and system calls made by the game that fall under the OS umbrella. Some of the synchronization functions and access to secure/DRM data happen via calls that require processing by code that games may not have implemented or are not trusted to handle.
Part of the reservation comes from the platform catering to the game, although how much of it there is isn't discussed externally.
 
Here's an interesting thought - originally, how much RAM was reserved from that 4GB for the OS? 1GB? So why not still have that amount with 8GB, instead of the 2.5GB we have? It'd be great to hear what exactly the OS usage breakdown is.
 
Here's an interesting thought - originally, how much RAM was reserved from that 4GB for the OS? 1GB? So why not still have that amount with 8GB, instead of the 2.5GB we have? It'd be great to hear what exactly the OS usage breakdown is.

Not really, if you follow Occam's Razor. If they could give you the current PS4 experience with only 1GB of RAM out of 8GB, they'd only use that. It's obvious they couldn't, so that's why they increased it. If they only used 1GB of RAM, everyone would have a very limited experience.
 
He did not have influence on Sony; he pushed this point to Adam Boyes in a room full of developers.
http://www.playstationlifestyle.net/2013/06/12/if-you-go-with-4gb-of-gddr5-ram-on-ps4-you-are-done-said-randy-pitchford/

Adam told this story in front of cameras when he talked to the Giant Bomb crew.

That story reads like a fairytale. One dev told one VP of publisher relations that if they went with 4GB they "were done". Then this one executive asked around a little bit more, went to Tokyo, they said "OK", and that's the story of how the PS4 got 8GB instead of 4. Of course, there's a lot more to it.
IIRC, the PS4 got access to 2Gbit GDDR5 chips in time for initial production by a hair, and this stuff isn't achieved out of willpower alone.
 