Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

They already got 'confirmation' from Xbox insiders that MS also has RAM on an interposer, aka HBM, via patents.

So Anaconda is going to have an HBM configuration, but Lockhart won't? Because if Lockhart is reportedly the cheaper system with a 12-16 GB RAM configuration, then HBM is out of the question. Unless the $299 price point is no longer an option.

And let's say, for the sake of argument, that both systems use two different memory architectures. Wouldn't that make cross-porting between the systems even trickier (i.e., bandwidth differences, caching, latency, etc.), considering the rumored gulf in TF performance already between the two platforms?
 
They already got 'confirmation' from Xbox insiders that MS also has RAM on an interposer, aka HBM, via patents.

For those interested, the 'confirmation' can be found here: https://www.resetera.com/threads/ne...a-dont-want-none.112607/page-15#post-20047981

But what do we actually know? The post quotes multiple patents as well as the Twitter posting that talks about HBM, eDRAM + GDDR6 and (like the scribble) dedicated raytracing compute chiplets (RCC). Which he probably got from this thread, since he wrote "boom the one on B3D confirming my birdie".

The supposed insider only says it's related to Arden ("Nice to see someone found out what's Arden. Hope Argalus to be decrypted soon"). Maybe Arden is the hardware for Azure (HBM), while Lockhart and Anaconda will use GDDR6 + eDRAM instead?

Another option would be that Lockhart is not a $249-299 box like some rumors suggested, but instead competes with the PS4 at $399-499 using 16 GB of 256-bit GDDR6, while Anaconda is "MS angry mode" (quoting the tweet) with 16 GB of HBM in 4 stacks for a total of 1 TB/s of bandwidth (just so they can claim to be the first console to reach 1 TB/s), at a higher enthusiast price point. Looking at their sales data, they should know how much they can charge and whether that would allow HBM (especially if they reuse the Anaconda design in Azure/XCloud).

But maybe it's just like @bgroovy says and it's for their Surface lineup. I believe @anexanhume pointed out over at ResetEra that the Xbox GPU lead is one of the people behind the patent, which could be an indicator, or just a sign of the better teamwork/experience/knowledge sharing between the different hardware divisions (cloud, consoles, Surface) that MS was talking about.

What are the chances we see a streaming-focused solution from PS5, with less actual memory but a very fast drive and high-bandwidth HBM, and a larger but slower UMA solution from MS?

There's a case to be made for less memory if you can stream assets quickly and have extremely high bandwidth. Essentially you'd be trading speed for total memory footprint.

Asking something like this was actually my reason for registering here on B3D, since there is a Reddit rumor about HBM+DDR4 where HBCC would handle streaming between HBM, DDR4 and storage.

@DmityKo brought up something like that in January:

I disagree. Vega includes a lot of useful new features which Navi would definitely use and expand on. For a start, HBCC virtual memory paging capabilities look especially promising in a game console.

Consider an IO die with an HBCC-derived memory controller and an HBM3 die on the package in a high-end SKU. This configuration could give you:
* 4-8 GB of local HBM3 memory - 512 GByte/s;
* 8-16 GB of DDR5 system memory - 30-50 GByte/s;
* 30-60 GB of NVRAM scratchpad memory - 3-5 GByte/s.
All this memory would be connected directly to the crossbar memory/cache controller and mapped into virtual address space, with the ability to detect and unload idle pages from local memory to another partition.

That would be a cross between the complicated high-speed memory subsystems of the PS3 and Xbox One, but without the burden of manual memory management. Just load your assets all at once, and the OS will move them between memory partitions as necessary.
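
Purely as an illustration of what that kind of HBCC-style paging could look like, here is a minimal software sketch; the tier names and sizes follow the quote above, but the 64 KB page size and the simple LRU spill-down policy are my own assumptions, and the real HBCC works in hardware rather than in code like this:

```python
# Illustrative sketch of HBCC-style paging across memory tiers.
# Tier names/sizes follow the post above; the page size and the LRU
# spill-down policy are assumptions made purely for illustration.
from collections import OrderedDict

PAGE_SIZE = 64 * 1024  # bytes per page (arbitrary choice for the example)

class Tier:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_pages = capacity_gb * 1024**3 // PAGE_SIZE
        self.pages = OrderedDict()  # page_id -> True, ordered by last access

class PagingController:
    """Toy model: every tier is mapped into one virtual address space;
    a page is promoted to the fastest tier when touched, and the least
    recently used page spills down a level when a tier overflows."""
    def __init__(self, tiers):
        self.tiers = tiers  # ordered fastest -> slowest

    def access(self, page_id):
        for tier in self.tiers:            # remove from wherever it lives
            tier.pages.pop(page_id, None)
        self._place(0, page_id)            # then promote to the fast tier

    def _place(self, level, page_id):
        tier = self.tiers[level]
        tier.pages[page_id] = True
        if len(tier.pages) > tier.capacity_pages and level + 1 < len(self.tiers):
            idle, _ = tier.pages.popitem(last=False)  # evict the idle page
            self._place(level + 1, idle)

# "Just load your assets all at once" -- whatever doesn't fit in HBM
# spills down to DDR and then to the NVRAM scratchpad automatically.
ctrl = PagingController([Tier("HBM", 8), Tier("DDR", 16), Tier("NVRAM", 60)])
for page in range(500_000):   # roughly 30 GB of assets in 64 KB pages
    ctrl.access(page)
ctrl.access(0)                # touching a cold page pulls it back into HBM
for t in ctrl.tiers:
    print(t.name, len(t.pages), "pages")
```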

The third time is the charm, I hope. Last time I bring this up, I promise. :eek:
 
The third time is the charm, I hope. Last time I bring this up, I promise. :eek:
Thank you for summing up that portion. I was a bit lost following that back and forth.

I thought this was the codename mapping according to @anexanhume:
Ariel == PS5
Arden == Anaconda
Argalus == Lockhart

Which he probably got from this thread, since he wrote "boom the one on B3D confirming my birdie".
I would exercise caution about reading too much into the Twitter person. I've gone through a couple of their tweets and it's MisterXMedia behaviour of some sort. I'm not saying the individual is wrong, but a broken clock is still correct twice a day; throw enough darts and you're bound by probability to get something right. Which of course sounds mean, but a confusion matrix would show you that he's not scoring well on true positives and is instead picking up a lot of false positives. There are a lot of tweets that attempt to draw items together, some of which I know are wrong, and others don't have nearly enough signal to make that leap, at least with the confidence that they have.

But the constant references to the Xbox One "having more stuff yet still", or being a front runner into something for next gen, are a dead giveaway. Not only do we, as a group, know what's in there, but I'll tell you right now everything we've ever wanted to mine from the XBO has been mined out entirely. Scorpio might have some things left to look at, since its hardware is somewhat unknown in terms of how it was customized and it's performing quite well given its hardware spec (and I expect to see those learnings carry forward into next gen), but not the XBO.

Over the last two days, I would say, is the first time since I got here in 2013 that I've seen this forum sort of 'wake up' from a deep slumber. Normally people are interested in 'winning' some form of argument, arguing positions of hardware superiority; this is the first time in a long while that I've seen everyone come together with different ideas and information to figure things out, instead of caring about positions.

I definitely encourage out-of-the-box thinking, provided that we take the steps to go through the technology and explain what it does, how it works, expected costs, and what the pros, cons and implications could be for a console. If there is API information linking it all together, even better.
 
Over the last two days, I would say, is the first time since I got here in 2013 that I've seen this forum sort of 'wake up' from a deep slumber. Normally people are interested in 'winning' some form of argument, arguing positions of hardware superiority; this is the first time in a long while that I've seen everyone come together with different ideas and information to figure things out, instead of caring about positions.

The Reapers come every 50,000 hours or so.
 
http://boards.4channel.org/v/thread/459661965

^^^Another 4chan "leak", this one seems more believable...

Context:

Big PC towers.
Pretty loud, fans seem to be running at max rpm all the time. Bug?
GPU dump has memory at 18432mb, bandwidth 733GB/s, core clock at 1850mhz.
CPU shows up as Zen 7. According to docs only the GPU on the SOC is being used in this iteration of the devkit.
64GB of system ram.
4TB SSD

OP here. To clarify, this devkit is nothing more than a PC with the PS5s GPU. It runs windows.
 
Context:

Big PC towers.
Pretty loud, fans seem to be running at max rpm all the time. Bug?
GPU dump has memory at 18432mb, bandwidth 733GB/s, core clock at 1850mhz.
CPU shows up as Zen 7. According to docs only the GPU on the SOC is being used in this iteration of the devkit.
64GB of system ram.
4TB SSD

OP here. To clarify, this devkit is nothing more than a PC with the PS5s GPU. It runs windows.

Ryzen 7?
 
Nice "leak" structure, doesn't have the obvious signs of a fake. No TF numbers, nothing other than what could be obverved superficially with simple tools. Start with random physical observations which make it feel real.

A very good fake.
 
GPU dump has memory at 18432mb, bandwidth 733GB/s, core clock at 1850mhz.
Probably just a Radeon VII with one HBM stack disabled and HBCC set to 20GB.
There have been a number of rumours claiming 20GB for games now.
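
For what it's worth, the 733 GB/s figure is at least arithmetically consistent with a Radeon VII running on three of its four HBM2 stacks; a quick back-of-the-envelope check (the 1024-bit-per-stack width and the ~2.0 Gbps stock data rate are public Radeon VII specs, the rest is just inference from the reported number, not inside information):

```python
# Does 733 GB/s fit a Radeon VII with one HBM2 stack disabled?
stacks = 3                      # 4 on a full Radeon VII, one assumed disabled
bus_width_bits = stacks * 1024  # 3072-bit aggregate bus
stock_rate_gbps = 2.0           # Radeon VII ships at ~2.0 Gbps/pin (1 TB/s on 4 stacks)

full_bw = 4 * 1024 * stock_rate_gbps / 8           # 1024 GB/s with all stacks
three_stack_bw = bus_width_bits * stock_rate_gbps / 8  # 768 GB/s at stock clocks
implied_rate = 733 * 8 / bus_width_bits            # rate needed to hit 733 GB/s

print(f"4 stacks @ 2.0 Gbps : {full_bw:.0f} GB/s")
print(f"3 stacks @ 2.0 Gbps : {three_stack_bw:.0f} GB/s")
print(f"3 stacks at 733 GB/s need ~{implied_rate:.2f} Gbps/pin (a mild downclock)")
```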
 
Probably just a Radeon VII with one HBM stack disabled and HBCC set to 20GB.
There have been a number of rumours claiming 20GB for games now.
Could you elaborate further on the 20GB? I’m not understanding this part of the rumour
 
This, with 4GB OS and 20GB for games.
Thanks for the clarification. For some reason I was still stuck on the storage talk in the other thread. I was thinking games were going back to 20GB instead of the 60GB that they are now, and I was like, whoa, this is nuts.
 
If the PS5 does end up having 16GB DDR4 and 8GB HBM2, with 20GB total for games (4GB of that DDR4 for the OS), I wonder if a potential PS5 Pro in 2023 might use HBM3 in place of HBM2, assuming that the HBM in a PS5 Pro would need to feed a GPU that's between 2x and 2.5x the TFlops of the base PS5. Remember that the PS4 to PS4 Pro TFlops increase was 2.3x, yet the bandwidth increase wasn't nearly as much, going from 176 GB/s to 218 GB/s.
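
For reference, a quick sanity check of those ratios using the public launch specs (nothing here comes from the rumour itself; the PS5 Pro side remains pure speculation):

```python
# PS4 -> PS4 Pro scaling, from public launch specs
ps4_tf, ps4_pro_tf = 1.84, 4.2   # FP32 TFLOPS
ps4_bw, ps4_pro_bw = 176, 218    # GB/s

print(f"TFLOPS scaling   : {ps4_pro_tf / ps4_tf:.2f}x")  # ~2.28x
print(f"Bandwidth scaling: {ps4_pro_bw / ps4_bw:.2f}x")   # ~1.24x
```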

While personally I don't care about 8K resolution, and I don't expect base PS5 to do anything other than upscale native 4K (max) to 8K, that doesn't mean Sony won't try to push a PS5 Pro to handle some native and CB 8K games.

Clearly, a better use of any potential PS5 Pro, IMHO, would be to greatly improve ray-tracing performance instead of higher pixel density, and I'd imagine many of you might agree with that. Also, I could make the exact same argument for any potential mid-gen Xbox Scarlett upgrade that is intended to be a meaningful upgrade from Anaconda (and I guess by extension, Lockhart as well).
 
This hypothetical memory setup for PS5 seems really odd to me. I'll buy a split memory pool or even an HBCC cache system to make the memory look unified, but the part that bothers me is the relatively low bandwidth of the system.

You have 16GB of DDR4 at ~100 GB/s + 8GB of HBM at ~400 GB/s (according to the rumor). With a GDDR6 setup on a 384-bit bus, you could have 600+ GB/s in a unified pool. Sure, you need to deal with contention more, but I would think the whole system may be cheaper. This is less bandwidth than a Vega 64, so I hope Navi is a lot more efficient.

With an HBM solution I would hope that they would shoot for some outrageous bandwidth that's in the ballpark of 800-1000 GB/sec.
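
To put rough numbers on that comparison: a minimal sketch, assuming a 256-bit DDR4-3200 bus and two HBM2 stacks at 1.6 Gbps (chosen to match the rumoured ~100 and ~400 GB/s figures) against a hypothetical 384-bit bus of 14 Gbps GDDR6:

```python
# Rough peak-bandwidth comparison of the rumoured split pool vs a
# unified 384-bit GDDR6 bus. The specific configurations below are
# assumptions picked to match the figures quoted in the rumour.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

ddr4  = bandwidth_gbs(256, 3.2)       # 256-bit DDR4-3200   -> ~102 GB/s
hbm2  = bandwidth_gbs(2 * 1024, 1.6)  # 2 stacks @ 1.6 Gbps -> ~410 GB/s
gddr6 = bandwidth_gbs(384, 14.0)      # 384-bit @ 14 Gbps   ->  672 GB/s

print(f"Split pool : {ddr4:.0f} + {hbm2:.0f} = {ddr4 + hbm2:.0f} GB/s (two pools)")
print(f"Unified    : {gddr6:.0f} GB/s (one pool, more contention)")
```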
 
This hypothetical memory setup for PS5 seems really odd to me. I'll buy a split memory pool or even an HBCC cache system to make the memory look unified, but the part that bothers me is the relatively low bandwidth of the system.

You have 16GB of DDR4 at ~100 GB/s + 8GB of HBM at ~400 GB/s (according to the rumor). With a GDDR6 setup on a 384-bit bus, you could have 600+ GB/s in a unified pool. Sure, you need to deal with contention more, but I would think the whole system may be cheaper. This is less bandwidth than a Vega 64, so I hope Navi is a lot more efficient.

With an HBM solution I would hope that they would shoot for some outrageous bandwidth that's in the ballpark of 800-1000 GB/sec.
What sort of latency and contention issues would a UMA pool of GDDR6 have? Sony seems to have a specific bandwidth target in mind if they are going with this approach.

My sense is that they are interested in streaming assets quickly without bottlenecks, rather than theoretical peaks that aren't attainable in practice.
 