Correct me if I'm wrong but I thought the main cost associated with HBM is the placement of the dies on the interposer, not the stacked memory dies themselves.
Five dies on an interposer: four 8Gbit DRAM dies and the system SoC.
Cheers
What are the chances of XPoint making it into next-gen consoles? XPoint is supposedly cheaper than RAM, so you could get 128 GB of XPoint instead of 32 GB of DDR4 or something like that. Then you could have 8GB of HBM caching for 128 GB of XPoint, and a regular HDD for mass storage.

IMO doubtful usability in a console. In its current offering, it has nowhere near the endurance necessary to use as RAM (30 full writes per day). And it's very expensive compared to flash. But as a niche product, I'd love to try it on my NoSQL database at work...
The price is high initially but that is still cheaper than RAM

Intel's first Optane SSD at 375 GB goes for $1500.
You have to take into account that Optane is only expensive now because it is new; once Micron comes out with their own XPoint implementation, I expect prices to fall significantly. New tech is always expensive until production ramps and demand subsides, and I expect both to have happened by 2020ish. It will depend on the price and performance of XPoint when the console comes out, not its price and performance now; both can change drastically. Given how Intel thinks some servers can switch to using Optane DIMMs, I can see XPoint being a decent tech to use, especially with its non-volatile nature, which means you could resume games from powered-off states almost instantly. This all depends on how console designers want to approach memory. We are at a point where it seems like an extra tier in the memory hierarchy would benefit everyone; whether it gets filled by XPoint or NAND or something else is still up in the air for tech 3-4 years away.

Optane is out of the question. It can't replace DRAM because of performance, and it can't replace flash because of price. Optane's $/GB is currently 50-100% of DRAM prices; flash is a tenth of that (or even less). A 32GB Optane cache would be >$100, while 128GB of flash would cost $35.
There is also the question if it gains enough traction to even exist in 2021.
Cheers
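For what it's worth, the numbers quoted in the posts above hang together. A quick sanity check of the arithmetic (these are the thread's own 2017-era list prices, not current market data):

```python
# Rough $/GB arithmetic for the figures quoted in this thread (illustrative only).
optane_price, optane_gb = 1500, 375       # Intel's first Optane SSD, as quoted
optane_per_gb = optane_price / optane_gb  # $4.00 per GB

optane_cache_32gb = 32 * optane_per_gb    # ~$128, matching the ">$100" claim
flash_per_gb = 35 / 128                   # ~$0.27/GB if 128GB of flash is $35

print(optane_per_gb)           # 4.0
print(optane_cache_32gb)       # 128.0
print(round(flash_per_gb, 2))  # 0.27
```

So at those prices Optane sits at roughly 15x flash per GB, which is the gap behind the "can't replace flash because of price" argument.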
AMD went with twin 8GB stacks for Vega vs quad 4GB, leading to a smaller interposer footprint.
That's also fewer stacks as well as a smaller interposer footprint.
So it may not really tell us about the cost balance of the interposer vs the stacks for HBM (assuming that is what you were answering?).
The bandwidth thing is curious too, with Vega at 13TF and <500GB/s.
Loading data straight from its source to memory is ideal.
What would be the point of an extra memory pool?
Copy my resources twice [disk -> slow RAM -> usable RAM]?
Introduce more latency [when loading from disk]?
Use it as a disk cache? [In that case I think an SSD/HDD hybrid would be better than putting extra burden on the devs' shoulders...]
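To make the "disk cache" option concrete, here's a minimal sketch of a two-tier pool where reads try a small fast tier (think HBM), then a larger slow tier (think XPoint), and only hit disk on a cold miss. All names and sizes are made up for illustration; this reflects no actual console design.

```python
from collections import OrderedDict

class TwoTierCache:
    """Small fast tier backed by a larger slow tier; misses load from disk."""
    def __init__(self, fast_slots, slow_slots):
        self.fast = OrderedDict()   # e.g. an HBM-sized pool
        self.slow = OrderedDict()   # e.g. an XPoint-sized pool
        self.fast_slots = fast_slots
        self.slow_slots = slow_slots

    def read(self, key, load_from_disk):
        if key in self.fast:                 # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        if key in self.slow:                 # slow-tier hit: promote it
            data = self.slow.pop(key)
        else:                                # cold miss: this is the extra
            data = load_from_disk(key)       # copy the post above is wary of
        self.fast[key] = data
        if len(self.fast) > self.fast_slots:          # demote LRU to slow tier
            old_key, old_data = self.fast.popitem(last=False)
            self.slow[old_key] = old_data
            if len(self.slow) > self.slow_slots:      # evict LRU entirely
                self.slow.popitem(last=False)
        return data
```

A read that hits either tier never touches the disk, so the double-copy cost only shows up on cold misses; whether that trade is worth the dev burden is exactly the question raised above.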
Epic "forced" Xbox executives to go from 256MB to 512MB of unified RAM in the X360, while Randy Pitchford was instrumental in forcing Sony to go from 4GB to 8GB of RAM in the PS4.

I don't believe for a second that Randy Pitchford ever had or will have that kind of leverage on Sony. I do think 4GB was on the table for a long time, but purely due to uncertainty about the available density of GDDR5 chips when the PS4 went into mass production. I really find it hard to believe any ICE dev saying "4GB is quite enough, let's just settle with it".
Here's an interesting thought - originally, how much RAM was reserved from that 4GB for the OS? 1GB? So why not still reserve that amount with 8GB, instead of the 2.5GB we have? It'd be great to hear what exactly the OS usage breakdown is.
He did not have influence on Sony; he pushed this point to Adam Boyes in a room full of developers.
http://www.playstationlifestyle.net/2013/06/12/if-you-go-with-4gb-of-gddr5-ram-on-ps4-you-are-done-said-randy-pitchford/
Adam told this story on camera when he talked to the Giant Bomb crew.