How about 8 GB of GDDR5 on a 256-bit bus and 16 GB of HBM2E?
I agree, I just don't think it will be 7nm EUV, nor do I think the BW provided would be enough for a 13 TF GPU + RT and Zen 2.
My thinking is, Sony would be providing a 7x increase in TF over the PS4 but only a 3.2x increase in BW. A bit unrealistic IMO.
I don't think a 256-bit bus fits with 13+ TF, but it does for 9-10 TF.
Or maybe their RT solution is less reliant on main memory bandwidth but eats into the TFLOPS, thus rendering the 576 GB/s of bandwidth enough for the rest of rendering + the CPU?
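For reference, a quick sanity check (Python) of where those ratios come from, assuming a hypothetical 256-bit GDDR6 setup at 18 Gbps measured against the PS4's 176 GB/s / 1.84 TF baseline:

```python
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
def peak_bw_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

ps4_bw = peak_bw_gbs(256, 5.5)       # PS4: 256-bit GDDR5 @ 5.5 Gbps -> 176 GB/s
guess_bw = peak_bw_gbs(256, 18.0)    # hypothetical 256-bit GDDR6 @ 18 Gbps -> 576 GB/s

print(f"BW increase:  {guess_bw / ps4_bw:.2f}x")                   # ~3.27x
print(f"TF increase:  {13 / 1.84:.2f}x (13 TF vs PS4's 1.84 TF)")  # ~7.07x
```

So even at the top GDDR6 speed bin, a 256-bit bus lands at roughly 3.3x the PS4's bandwidth against a ~7x compute jump, which is the mismatch being questioned.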
I always thought they downclocked it on the PS4 because of the power consumption limit allocated to the RAM setup, since they first designed it for 4 GB. In clamshell mode there are two chips loading the same lanes (the address/command lanes are double loaded, not the data lanes), which reduces signal quality, so a given speed bin runs slower in clamshell than in single-chip mode. The best example of this is the PS4: it was 192 GB/s using 6 Gbps parts in the early devkit doc leak, and when they changed to 8 GB clamshell there was a revision down to 5.5 Gbps, even though the parts were still the 6 Gbps ones.
Not sure if it still applies to GDDR6, but it has the same topology, with double loading of the addr/cmd lanes...
EDIT: Wait, it shouldn't impact the data rate if the data lanes are single loaded; only latency is affected by addr/cmd. I'll look this up; it was more of an observation that clamshell GPUs always downclock from the printed speed bin.
EDIT2: It really shouldn't. So I don't know why we observed this!
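For concreteness, the devkit-to-retail downclock described above works out like this on the PS4's 256-bit bus (same bus-width x data-rate arithmetic; the 6.0 and 5.5 Gbps bins are the ones cited in the post):

```python
# Same 6 Gbps GDDR5 parts, different effective speed bin per mode.
BUS_BITS = 256

devkit_bw = BUS_BITS / 8 * 6.0   # 4 GB single-chip mode @ 6.0 Gbps -> 192 GB/s
retail_bw = BUS_BITS / 8 * 5.5   # 8 GB clamshell mode   @ 5.5 Gbps -> 176 GB/s

print(f"{devkit_bw:.0f} GB/s -> {retail_bw:.0f} GB/s "
      f"({1 - retail_bw / devkit_bw:.1%} lost to the clamshell downclock)")
```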
> I like the way it looks
It's too generic and boring-looking for me, but hey, whatever, as long as there's 12 TF of power in it.
Are mixed density chips doable? Can BW access be uniform?
> Are mixed density chips doable? Can BW access be uniform?
Yes, full-speed reads for 10 GB and 60% for the last 6.
For the XSX, I don't see how 16 GB on a 256-bit bus can provide sufficient BW. Even with 18 Gbps modules it may not be enough, as you're adding another ~20%+ TF over the 5700 XT and an 8-core CPU. So I think the bus has to be wider.
For a 320-bit bus to get the rumored 16 GB of RAM, you need a mixture of 1 GB and 2 GB chips. Wouldn't you see non-uniform speed across the memory (full BW for the first 10 GB and 60% for the remaining 6 GB)? It's something that has bothered me since the reveal video back at E3.
> Yes, full-speed reads for 10 GB and 60% for the last 6.
Why would they mix the chips if they lose some bandwidth on some of them? Why not use 10 chips of 2 GB?
> Why would they mix the chips if they lose some bandwidth on some of them? Why not use 10 chips of 2 GB?
Because 16 GB is cheaper than 20, I guess.
> Any chance the memory paging *magic* (wtf that means) has to do with the change in memory chips, I guess with respect to jumping to the SSD drive?
Honestly, it looks like the memory paging magic is HBCC.
With a 320-bit bus, it would be four 1 GB chips and six 2 GB chips. It's only 4 more chips, which seems like an incremental cost.
With this setup, it would be 10 GB of RAM at full BW and 6 GB at 60% BW. The rumored split on Anaconda was 12 GB for games and 4 GB for the OS. I would assume the OS would occupy a portion of the lower-BW addresses, which would leave the remaining 2 GB of the games region at 60% BW. Maybe it would have some special uses for games, but not be part of the general pool.
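A small sketch of why the split falls out that way, assuming the four 1 GB + six 2 GB layout above with one 32-bit channel per chip; the 14 Gbps data rate is purely a placeholder, not something stated in the thread:

```python
# Why a mixed-density 320-bit setup splits into two speed regions.
GBPS = 14.0                                      # placeholder speed bin
CHIP_SIZES_GB = [1, 1, 1, 1, 2, 2, 2, 2, 2, 2]   # ten chips, 16 GB total

# The first 1 GB of every chip can be interleaved together: 10 channels wide.
fast_region_gb = len(CHIP_SIZES_GB)              # 10 GB
fast_bw = len(CHIP_SIZES_GB) * 32 / 8 * GBPS     # 320-bit -> 560 GB/s

# Addresses past that exist only on the six 2 GB chips: 6 channels wide.
big_chips = sum(1 for s in CHIP_SIZES_GB if s == 2)
slow_region_gb = big_chips                       # 6 GB
slow_bw = big_chips * 32 / 8 * GBPS              # 192-bit -> 336 GB/s

print(f"{fast_region_gb} GB @ {fast_bw:.0f} GB/s, "
      f"{slow_region_gb} GB @ {slow_bw:.0f} GB/s "
      f"({slow_bw / fast_bw:.0%} of full BW)")   # 60%
```

The 60% figure is just the channel ratio (192-bit / 320-bit), so it holds regardless of the actual speed bin.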
> Honestly, it looks like the memory paging magic is HBCC.
Agreed, that would be an obvious baseline. I guess I'm just thinking out loud: if HBCC doesn't care about symmetrical sources, then would HBCC be responsible for handling the different memory chips?
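To make the HBCC speculation concrete, here's a toy software model of that kind of page migration. It's purely illustrative; the class, names, and LRU policy are stand-ins of mine, not AMD's actual hardware design. Pages nominally live on slow storage (the SSD) and get migrated into a fixed-size fast pool (RAM) on first touch:

```python
from collections import OrderedDict

class ToyHBCC:
    """Toy model of HBCC-style paging: a fast pool caching a slow backing store."""

    def __init__(self, fast_pool_pages: int):
        self.capacity = fast_pool_pages
        self.fast_pool = OrderedDict()  # page -> data, ordered by recency

    def access(self, page: int) -> str:
        if page in self.fast_pool:
            self.fast_pool.move_to_end(page)       # refresh recency on a hit
            return "hit (RAM)"
        if len(self.fast_pool) >= self.capacity:
            self.fast_pool.popitem(last=False)     # evict LRU page back to "SSD"
        self.fast_pool[page] = f"data-{page}"      # migrate page in from "SSD"
        return "miss (paged in from SSD)"

hbcc = ToyHBCC(fast_pool_pages=2)
for p in [1, 2, 1, 3, 2]:
    print(p, hbcc.access(p))
```

The point of the model: the thing doing the accesses never needs to know which pool (or which memory chips) a page currently lives on, which is the sense in which HBCC "wouldn't care about symmetrical sources".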