Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

1650MHz GDDR6 on 352-bit bus would be 580GB/s. Strange to see that width for an AMD GPU too. That sort of partial bus width is more nV's alley. If that's due to some funky reservation (see CPU L4) for a 384-bit APU, then I guess total bandwidth would be ~633GB/s. Weird.
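For anyone checking the math, here's a quick sketch of those numbers (assuming the usual 8x multiplier between the quoted GDDR6 memory clock and the per-pin data rate):

def gddr6_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    # per-pin data rate in Gbps: e.g. 1650 MHz * 8 = 13.2 Gbps
    gbps_per_pin = mem_clock_mhz * 8 / 1000
    # total bandwidth in GB/s across the whole bus
    return gbps_per_pin * bus_width_bits / 8

print(gddr6_bandwidth_gbs(1650, 352))  # ~580.8 GB/s
print(gddr6_bandwidth_gbs(1650, 384))  # ~633.6 GB/s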

Yeah, the RTX 2080 Ti has 11GB and a 352-bit bus, which would line up with this (assuming 22GB on the dev kit means 11GB on retail). Though I believe the memory clock is 1750MHz on the RTX cards...
 
If this is a dev kit then it likely means 11GB of memory and probably 58(?) CUs for the GPU on the retail unit... if the Xbox One X dev kit is anything to go by...

That would be about 10.3TFlops....
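Back-of-envelope version of that figure (58 CUs is the guess above, and the ~1.4 GHz clock is just what it would take to land near 10.3 TF):

def fp32_tflops(cus, clock_mhz):
    # GCN/RDNA-style shader array: 64 FP32 lanes per CU, 2 ops per clock (FMA)
    return cus * 64 * 2 * clock_mhz / 1e6

print(fp32_tflops(58, 1400))   # ~10.4 TF at a guessed ~1.4 GHz
print(fp32_tflops(40, 1172))   # ~6.0 TF -- Xbox One X, for comparison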

1GB of L4 cache seems like a red flag. Unless it's acting as some sort of EDRAM/system memory??

I dunno what to make of this. My gut says fake...but it's a pretty darn good fake if it is.
 
I think the only way to get an L4 would be to use HBM on an interposer, especially at that size.

You could use 1 HBM stack, which would give you 256GB/sec, but if you already have a 352-bit bus of GDDR6 for around ~700 GB/sec, does that really buy you anything other than a whole lot of complexity? I think an L4 HBM cache could happen, but that would be to compensate for a slower and narrower bus to main memory.

Had MS continued with the embedded RAM designs of the 360/X1, I think this type of approach could have been the next evolution. But even then I don't think one stack could provide enough bandwidth and with more stacks the costs increase drastically.
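To put rough numbers on that comparison (assuming typical HBM2 at 2 Gbps/pin on a 1024-bit stack, and 16 Gbps GDDR6 for the ~700 GB/s figure; both are my assumptions, not from the leak):

hbm2_stack_gbs = 1024 * 2.0 / 8     # one 1024-bit HBM2 stack at 2 Gbps/pin: 256 GB/s
print(hbm2_stack_gbs)
print(2 * hbm2_stack_gbs)           # two stacks: 512 GB/s, but cost scales with stack count
print(352 * 16.0 / 8)               # 352-bit GDDR6 at 16 Gbps: ~704 GB/s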
 
I can't wrap my head around that 1GB L4. Nothing that would make sense comes to mind.
Maybe a large-ish scratchpad (64bit LPDDR4?) for the CPU to avoid doing GDDR6 access requests?

I mean, it could work if this were a two-chip solution, with the GPU acting as the northbridge and using HBCC to take over the main GDDR6, and the CPU accessing the main GDDR6 through the GPU.

1GB eDRAM or MRAM chiplet over IF (but yeah, this rumor is BS).
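For the 64-bit LPDDR4 scratchpad idea above, the bandwidth would be modest but arguably enough for CPU-only traffic (assuming LPDDR4-3200, which is just my guess):

lpddr4_scratchpad_gbs = 64 * 3200 / 8 / 1000   # 64-bit bus at 3200 MT/s -> 25.6 GB/s
print(lpddr4_scratchpad_gbs)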
 
I saw a lot of CPU and GPU mumbo jumbo and thought okay, 22GB RAM is nice, then I saw the 802.11ax module and my brain said fake right away. Funny what triggers different people.
 
I saw a lot of CPU and GPU mumbo jumbo and thought okay, 22GB RAM is nice, then I saw the 802.11ax module and my brain said fake right away. Funny what triggers different people.

Why? Because it's unlikely that module exists yet?
 
I saw a lot of CPU and GPU mumbo jumbo and thought okay, 22GB RAM is nice, then I saw the 802.11ax module and my brain said fake right away. Funny what triggers different people.

Why? 802.11ax on a premium console launching in 2020 doesn't seem implausible to me, especially since the console will be stuck with that spec for some years after launch. I don't think the "leak" is credible, but that didn't really stand out to me as a reason why.
 
Yeah the memory is the big question for me. Is it possible that the dev kit wouldn't have double the memory like the Xbox One X dev kit does? That the dev kit has 22GB but the retail would have like 16GB?

That's the only way I could believe this, because there's no mention of a separate dedicated memory pool here, and 11GB wouldn't really seem like enough for a unified pool.
 
The only way I can believe any leak is if the source gets corroboration from journalists with proven contacts in the industry.

99% of these things are fake.
 
The only way I can believe any leak is if the source gets corroboration from journalists with proven contacts in the industry.

99% of these things are fake.

[attached screenshot: Screenshot_2019-01-30_at_19.18.15.png]


Sony Santa Monica is fucking funny. Jason Schreier must have let the meltdown go on for an hour. It would have been very funny.
 
I think the only way to get an L4 would be to use HBM on an interposer, especially at that size.

You could use 1 HBM stack, which would give you 256GB/sec, but if you already have a 352-bit bus of GDDR6 for around ~700 GB/sec, does that really buy you anything other than a whole lot of complexity?
Cache is all about latency, not bandwidth. How much lower latency access could L4 cache be versus main RAM? Surely there's an epic overhead in having 1 GB of actual cache versus scratchpad - it all has to be managed which slows it down. That's why we have KB of L1/L2 and MBs of L3, instead of MBs of L1. AFAIK L4 isn't used ever. It appeared as L4 cache in Intel's 128 MB eDRAM integrated graphics but wasn't it more a scratchpad?

This for me is the 'fake' flag.
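To put a number on the "it all has to be managed" part, here's a rough tag-overhead estimate for a hypothetical 1 GiB, 16-way cache with 64-byte lines and 40-bit physical addresses (all made-up but plausible parameters):

cache_bytes = 1 << 30                 # 1 GiB of data
line_bytes  = 64
ways        = 16
lines = cache_bytes // line_bytes     # 16,777,216 cache lines
sets  = lines // ways                 # 1,048,576 sets -> 20 index bits
tag_bits   = 40 - 20 - 6              # 14 tag bits (40-bit phys addr, 6 offset bits)
state_bits = 4                        # valid/dirty/coherence state
tag_store_mb = lines * (tag_bits + state_bits) / 8 / 2**20
print(lines, round(tag_store_mb))     # ~36 MB of tags/state to search on every access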
 
Cache is all about latency, not bandwidth. How much lower latency access could L4 cache be versus main RAM? Surely there's an epic overhead in having 1 GB of actual cache versus scratchpad - it all has to be managed which slows it down. That's why we have KB of L1/L2 and MBs of L3, instead of MBs of L1. AFAIK L4 isn't used ever. It appeared as L4 cache in Intel's 128 MB eDRAM integrated graphics but wasn't it more a scratchpad?

This for me is the 'fake' flag.

Can't speak to this, but it did improve gaming performance. So in that particular configuration it delivered a tangible benefit at least.
 
https://arstechnica.com/gadgets/201...tphones-may-have-new-1tb-storage-chip-inside/

1TB UFS memory with speeds approaching those of a cheap M.2 drive; latency I don't know.
I don't know the price either, but if they can put that in a phone, could it be a viable option in two years?

The 1GB/s quoted transfer speeds match up with the recent "leak", so there could be something in this.

No idea of price vs a cheap M.2, it's hard to get a feel for price with the markup they put on phones with more storage.
 
I think MS can go much more esoteric than Sony because they're able to provide BC through libraries and the OS... anyway, they have this vision of the console fitting into their Azure global system... so you have to think of this new console as a small server... This rumor sounds strange, almost unbelievable, but also somehow exciting.
 