Sony's ReRAM plans - what can and can't ReRAM bring to a console? *spawn

I wonder: if you programmed to take advantage of a 25.6 GB/s SSD cache, what results would you get? Wouldn't that ultra-fast, relatively large pool of memory open up new rendering paradigms?
I'm sure it could. The potential is massive. But so is graphene over silicon, and we haven't seen graphene make its debut since its discovery a decade or so ago. That's sort of what these discussions have been about: the potential is great, but we have to see it in production and shipping before we can really validate it.

In some ways, it's not going to be very different from moving the baseline for GPU programming up to a 2080 Ti. Imagine what you could do with that over a 1.3 TF GPU.
 
If it's a standard NVMe drive, it will probably be 4 lanes, but if it's custom I wouldn't be surprised to see 8x PCIe.

Yeah, they can use as many lanes as they decide to provision. They are not limited by the M.2 NVMe form factor at all.
 
I'm sure it could. The potential is massive. But so is graphene over silicon, and we haven't seen graphene make its debut since its discovery a decade or so ago. That's sort of what these discussions have been about: the potential is great, but we have to see it in production and shipping before we can really validate it.

So it's not wasted silicon like some here are suggesting. That's good to hear.

I understand that nothing is confirmed. Sony hasn't made any announcement. Amigo Tsutsui's presentation wasn't intended to be made public, nor was his PDF. All the articles I've linked about mass production are speculation from Japanese publications.


This is something that people who follow news and PR announcements about new technology often miss.


I agree. Never did I imply that ReRAM will "definitely" be in the PS5. I know of plenty of examples that were said to be coming that never actually made it to the public.

But the fact that a Sony engineer said they would commercialize ReRAM in 2020 keeps the possibility open. Add to that all the statements from Cerny and other Sony executives pointing to the importance of the SSD they put in the PS5, and a professional analysis that ReRAM in 2020 would approach SLC NAND in price. And the fact that Sony's ReRAM would compete against an established product from Intel suggests there is no point producing it in small quantities.

Therefore it's not speculation pulled out of thin air. You can be pessimistic about it and show hundreds of examples of promising technology that never materialized, or of companies' roadmaps never coming to fruition. But you can also be optimistic and show examples where the opposite is true. I'm sure the naysayers who said Optane wasn't ready for production are eating crow today. It's a tie.

Will ReRAM be the same case as Intel's Optane, or the same case as Toshiba's SED TV?

Sony has yet to announce anything, and the odds are definitely not in ReRAM's favor. But the possibility exists, and the PS5 is a year away from release. No argument exists right now that shows ReRAM in the PS5 is "impossible"; all the arguments against it only prove that it is "improbable". Come four months before the release of the PS5, say August 2020, with no announcement or news of Sony getting into ReRAM production, and then you'll have an argument that it's impossible rather than just improbable.
 
You'll have to buy an expensive NVMe drive approximating the bandwidth of the default one if you want to upgrade. 1 TB of NVMe SSD next gen will be extremely limiting. I don't think that's the better option. A 128 GB cache paired with a relatively cheap SSD would allow the user to put in low-cost storage to their liking. The bandwidth of cold storage in that setup isn't essential to developers; "loading times being a thing of the past" (in-game) is just a bonus.
You could use a fast NVMe drive internally and an 'external' extra store that could be slower, with the system caching onto the fast NVMe drive. External storage could be anything from HDD to SSD to flash carts, and needn't be external to the case. ReRAM isn't necessary to solve the problem you want solved.

I’m sure it could. Potential is massive.
I'm not so convinced. 25.6 GB/s is paltry BW versus the system RAM, while 4 GB/s is already enough for a paradigm shift in gameplay, as discussed in the SSD debate. 25 GB/s isn't going to make the difference between what 4 GB/s and what ReRAM could do, save maybe some super-fringe cases dealing with absolutely MASSIVE data sets that fill your drive up with 500 GB games. Low latency would make streaming content easier and allow a little more in RAM thanks to less need to prefetch, but it won't fundamentally change what can be done. And remember, if we are rendering efficiently we don't need much data at all, as per Sebbbi's maths on tiled textures.

What would be doable is a console with less RAM, say 8 GB instead of 16, as the storage is that much closer to where the data is needed. If that money could be more economically redirected from RAM to ReRAM, it makes some sense.
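Back-of-envelope, the streaming budgets being compared work out per frame like this (the 60 fps target and the RAM bandwidth figure are assumptions on my part, just for scale):

```python
# Per-frame streaming budget at a given frame rate, for the bandwidths
# discussed above. 60 fps and the 448 GB/s system-RAM figure are
# assumptions used only to show relative scale.
def per_frame_budget_mib(bandwidth_gb_s, fps=60):
    return bandwidth_gb_s * 1024 / fps  # MiB deliverable per frame

for name, bw in [("fast NVMe SSD", 4.0),
                 ("ReRAM cache", 25.6),
                 ("hypothetical system RAM", 448.0)]:
    print(f"{name}: {per_frame_budget_mib(bw):.0f} MiB/frame at 60 fps")
```

So the cache buys roughly 437 MiB per frame versus the SSD's 68 MiB, but both are dwarfed by what RAM can feed the GPU.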
 
A 2080 Ti of GPU power in next gen would be nice :)
I would like 2x 2080 Ti power in the next-gen consoles.

The problem, however, is power draw as well as die size.

Let's take die size out of the equation for a moment. The 2080 Ti has an average gaming power draw of 273 watts.

https://www.techpowerup.com/review/msi-radeon-rx-5700-xt-gaming-x/30.html

Turing at 12 nm is already on par with or better than Navi at 7 nm from a perf/watt perspective.

https://www.techpowerup.com/review/msi-radeon-rx-5700-xt-gaming-x/28.html

Heck, even Pascal is better at some points.

So how exactly are they going to fit 13.5 TFLOPS into a power budget that can be effectively cooled inside a small case, with the extra hardware this chip needs to work?

The 5700 XT is already at 220 W @ 9.8 TFLOPS and a 250 mm² die.

I understand that they could add CUs at lower clocks, but at what transistor cost, RT and CPU cores included?
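For reference, the arithmetic behind that trade-off: Navi's FP32 throughput is roughly CUs × 64 lanes × 2 ops per clock × clock, so hitting 13.5 TF at lower clocks means a lot more CUs (the 1.8 GHz console target clock below is an assumption):

```python
# FP32 throughput for an RDNA-style GPU: CUs * 64 lanes * 2 ops/clock * clock.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(tflops(40, 1.9))  # ~9.7 TF: roughly a 5700 XT at game clocks

# CUs needed for 13.5 TF at an assumed console-friendly 1.8 GHz:
target_tf, clock = 13.5, 1.8
cus_needed = target_tf * 1000 / (64 * 2 * clock)
print(cus_needed)  # ~58.6, so ~60 CUs: half again as many as the 5700 XT
```

That's the transistor cost in a nutshell: wide-and-slow saves watts per CU, but every extra CU is die area on top of the CPU and any RT hardware.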

I am pretty sure I am not the first to ask this, but I just found this thread recently and it's overwhelming. Some quick light-shedding would be appreciated.

Thanks.
 
Wasn't really serious; I'm not expecting 2080 Ti level, or even 2070 level. 1070/2060 is more in line, I think, if we are lucky. Maybe 5700 non-XT performance, with RT hardware. CPU cores will probably be clocked very low compared to what you can get, somewhere around 3 GHz.
 

Cold boot. Linux. 3 seconds. NVMe.

When you program to take advantage of the hardware, you get results.

A lot of work here, though, to make Linux in Rust, write your own drivers, etc. But this enables the creator to maximize the hardware.
I have a trash Windows tablet. Literally one of the free ones that ISPs sometimes give you as an incentive to switch. It's older; it shipped with Windows 8 and I upgraded it to 10. I uninstalled all the bloatware and only have Netflix and Nook installed on it. It has a slow Atom Z3735F and a slower 32 GB eMMC drive. It cold-boots to the lock screen in 8-10 seconds. I think the current generation of consoles could boot faster if that were a design priority. It simply isn't.

You could use a fast NVMe drive internally and an 'external' extra store that could be slower, with the system caching onto the fast NVMe drive. External storage could be anything from HDD to SSD to flash carts, and needn't be external to the case. ReRAM isn't necessary to solve the problem you want solved.

I'm not so convinced. 25.6 GB/s is paltry BW versus the system RAM, while 4 GB/s is already enough for a paradigm shift in gameplay, as discussed in the SSD debate. 25 GB/s isn't going to make the difference between what 4 GB/s and what ReRAM could do, save maybe some super-fringe cases dealing with absolutely MASSIVE data sets that fill your drive up with 500 GB games. Low latency would make streaming content easier and allow a little more in RAM thanks to less need to prefetch, but it won't fundamentally change what can be done. And remember, if we are rendering efficiently we don't need much data at all, as per Sebbbi's maths on tiled textures.

What would be doable is a console with less RAM, say 8 GB instead of 16, as the storage is that much closer to where the data is needed. If that money could be more economically redirected from RAM to ReRAM, it makes some sense.
That would make the newer consoles more like older ones, like the Neo Geo or SNES, where you had a tiny amount of RAM but fast storage. Except in this case, games would be installed onto fast system storage instead of stored on external carts. It would be pretty interesting if Sony went this route while MS delivered a console based on current design trends.
 
So it's not wasted silicon like some here are suggesting. That's good to hear.

I'm not so convinced. 25.6 GB/s is paltry BW versus the system RAM, while 4 GB/s is already enough for a paradigm shift in gameplay, as discussed in the SSD debate. 25 GB/s isn't going to make the difference between what 4 GB/s and what ReRAM could do, save maybe some super-fringe cases dealing with absolutely MASSIVE data sets that fill your drive up with 500 GB games. Low latency would make streaming content easier and allow a little more in RAM thanks to less need to prefetch, but it won't fundamentally change what can be done. And remember, if we are rendering efficiently we don't need much data at all, as per Sebbbi's maths on tiled textures.

What would be doable is a console with less RAM, say 8 GB instead of 16, as the storage is that much closer to where the data is needed. If that money could be more economically redirected from RAM to ReRAM, it makes some sense.

Right. So when I say it's not wasted silicon: it has big uses in the data science field, and big uses in the offline rendering field, where you are making movies or doing video editing, etc.; but as Shifty says, probably not so useful in the game console area.
As data sets get larger and larger, compute is moving much faster than I/O, and eventually compute sits idle waiting for I/O to move the data in. A variety of solutions are coming together to address this; it's also why we're seeing so much data science move to the cloud, because big data processing is a metric ton of work.

But if you want to do a lot of big data processing locally, it becomes super challenging. So we have video cards that look like this:
https://www.newegg.com/p/N82E16814105088
The Radeon SSG, which has an SSD bolted right onto the video card to add an additional 2 TB of onboard memory. These are the applications where I can see ReRAM having an effect: a place where we require more storage than memory, and something faster than the ~5 GB/s we have. Things like AI/automation; take self-driving cars, where improving the AI further requires an even larger sliding window of inputs to determine the next action. So instead of looking back at the last 5 seconds of inputs, it now wants to look back over 10 to 15 seconds' worth of driving and make decisions with that history in mind.
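To put a rough number on that sliding-window idea (the aggregate sensor data rate here is purely hypothetical, just to show how the working set scales with the window):

```python
# Working-set size for a sliding window of sensor inputs.
# The 400 MB/s aggregate rate (cameras + lidar + telemetry) is a
# hypothetical figure for illustration only.
def window_mb(window_seconds, rate_mb_s=400):
    return window_seconds * rate_mb_s  # MB that must stay addressable

for w in (5, 15):
    print(f"{w} s window: {window_mb(w) / 1024:.1f} GiB of recent inputs")
```

Tripling the window triples the data that has to sit somewhere faster than cold storage, which is exactly the gap between RAM and SSD that ReRAM-class memory would fill.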
 
I see. So the benefit is really just instant loading then, and probably ease of development?

So it's a question of the cost-benefit ratio again, whether this thing gets included or not.
 
I see. So the benefit is really just instant loading then, and probably ease of development?

So it's a question of the cost-benefit ratio again, whether this thing gets included or not.
The benefit is when you hit a scenario where your data sets are much larger than system memory and I/O becomes a bottleneck. So we're talking data sets 32 GB in size or greater, where loading them into memory would limit the amount of system memory left for the actual processing. I'd say it becomes more useful as the data footprint increases in size, 8K resolution for instance if you're looking at graphics. This is where ReRAM may start to be useful to have around. Very useful for cameras, for instance: taking pictures/recordings at 8K with instant playback, etc. Or 4K recordings (longer than 30 s) to share might be another option for a console: writing the results to ReRAM in real time with playback, before passing them down to hard storage at a later time.
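The arithmetic behind that 4K-recording case, assuming uncompressed 8-bit RGBA frames at 60 fps (the pixel format and frame rate are my assumptions):

```python
# Sustained write rate for uncompressed 4K capture:
# width * height * bytes-per-pixel * fps.
def capture_gb_s(width=3840, height=2160, bytes_per_pixel=4, fps=60):
    return width * height * bytes_per_pixel * fps / 1e9

rate = capture_gb_s()
print(f"{rate:.2f} GB/s sustained write")  # ~2 GB/s
print(f"30 s clip: {rate * 30:.0f} GB")    # ~60 GB, bigger than system RAM
```

So even a 30-second uncompressed clip overflows a 16 GB console's RAM several times over, which is why a fast intermediate tier between RAM and flash starts to look attractive there.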
 
The benefit is when you hit a scenario where your data sets are much larger than system memory and I/O becomes a bottleneck. So we're talking data sets 32 GB in size or greater, where loading them into memory would limit the amount of system memory left for the actual processing. I'd say it becomes more useful as the data footprint increases in size, 8K resolution for instance if you're looking at graphics. This is where ReRAM may start to be useful to have around.

Interesting. Any other scenario than 8K resolution?

And is it high in the cost/benefit ratio in your book? Let's say the inclusion of a small amount of this ultra-fast SSD plus a 2 TB HDD is equal in overall cost to a high-bandwidth 1 TB NVMe SSD.
 
Interesting. Any other scenario than 8K resolution?

And is it high in the cost/benefit ratio in your book? Let's say the inclusion of a small amount of this ultra-fast SSD plus a 2 TB HDD is equal in overall cost to a high-bandwidth 1 TB NVMe SSD.
I can't talk about what I don't know. It's a good position to come in with some skepticism because it takes a while for things to change.

I'd say any new technology, until it's mainstream, won't get used effectively on the software side, so it's better used supporting hardware.
NVMe has been out for a while now, yet we still code with SATA drives in mind. Think about how long adoption is going to take for something like ReRAM. You need a hardware-based application in which it will be used to its fullest but which won't require developers to code for the lowest common denominator.

We had DX11 out around 2009. Games didn't really leverage compute shaders until after 2013, and not effectively until the end of this generation. Think about that for a moment. Hardware is just hardware; coding is the problem that needs to be tackled.

Use cases exist for cameras and such, real-time applications. But I can't see this happening in games for a while, and it may never transfer over.
 
You need a hardware-based application in which it will be used to its fullest but which won't require developers to code for the lowest common denominator.

Sounds like first-party devs are the only ones who can exploit an advantage from that ultra-SSD setup. That is, if there is one.
 
Cold boot. Linux. 3 seconds. NVMe.

When you program to take advantage of the hardware, you get results.

A lot of work here, though, to make Linux in Rust, write your own drivers, etc. But this enables the creator to maximize the hardware.

Redox is not Linux. It is a totally different OS kernel (+ userspace). It is more comparable to FreeBSD or something similar.
 
Redox is not Linux. It is a totally different OS kernel (+ userspace). It is more comparable to FreeBSD or something similar.
Yeah, I know. It's written in Rust and they made some changes to how the OS works (URLs instead of filenames); I just thought it would be easier to explain it that way.
 
Any other scenario than 8k resolution?
High-res megatextures everywhere. You could stream data around the player from HDD to SSD/ReRAM, and from there to the GPU for what ends up actually visible on screen.
It could be that ReRAM has a big advantage here because of its high read speeds, while its slower write speeds would matter less because player position changes more slowly than visibility.

But I'm not sure; I'm not up to date with SSD speeds, so this may be a constructed argument.
There would also be the other issue of games becoming huge, maybe half a terabyte or more (barring some compression magic).
On the other hand, I guess content creation would become much easier with no more need to reuse texture data in clever ways to hide the repetition.
 
High-res megatextures everywhere. You could stream data around the player from HDD to SSD/ReRAM, and from there to the GPU for what ends up actually visible on screen.
It could be that ReRAM has a big advantage here because of its high read speeds, while its slower write speeds would matter less because player position changes more slowly than visibility.
Megatextures only need a small read rate and RAM footprint. They benefit from low-latency storage and/or RAM caching. If the drive is fast enough, you don't need to cache much in RAM, but with SSDs, a few GBs of RAM storing tiled data should be ample cache.
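To put numbers on that: with virtual texturing you need roughly one unique texel per screen pixel, times some overhead for mip levels and tile borders (the 3x overhead factor below is an assumption):

```python
# Rough upper bound on unique texture data needed per frame with virtual
# texturing: ~1 texel per screen pixel, times an overhead factor covering
# mip levels and tile borders (the 3x factor is an assumption).
def visible_texture_mib(width, height, bytes_per_texel=4, overhead=3.0):
    return width * height * bytes_per_texel * overhead / (1024 * 1024)

print(f"4K: {visible_texture_mib(3840, 2160):.0f} MiB")  # ~95 MiB
```

Under 100 MiB of live texture data at 4K, so even a modest RAM tile cache refreshed at SSD rates covers it; that's why the read rate stays small.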
 