Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Whatever they do, please sell them with an included fast SSD, so we can use virtual memory (texturing...) w/o issues!
 
Whatever they do, please sell them with an included fast SSD, so we can use virtual memory (texturing...) w/o issues!
Again, isn't it cheaper to have multiple GBs of very cheap auxiliary RAM for that purpose? Something with barely more bandwidth but no seek times would already be a big win for streaming and virtualized textures/geometry etc.
 
I'm leaning towards the idea that we'll see a relatively small amount of HBM and a decent chunk of cheaper, slower memory.

But I also quite like AMD's Solid State Graphics: hundreds of gigabytes of NAND directly addressable by the GPU. At the very least, it seems a cheaper alternative to an SSD, and could allow them to ship the console with a decently sized HDD for game storage.

Would something like that be viable for a console within a couple of years?

http://www.anandtech.com/show/10518/amd-announces-radeon-pro-ssg-fiji-with-m2-ssds-onboard
 
The PS4 Pro did add a small amount of RAM. That was likely a kludge to get a cheap upgrade, but I think doing it on a new console would have to be cheap as well, such as a single channel (64-bit) of DDR4.

That would be 4GB or 8GB; I'll go bold and say 8GB. If that runs at DDR4-2666 it gives ~21.3GB/s, which would be good for a number of things. This "slow pool" will get used, but it will be much smaller than the main pool (say, 16GB GDDR6).
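(Checking that figure: a single 64-bit channel moves 8 bytes per transfer, so DDR4-2666 gives 2666 MT/s × 8 B ≈ 21.3GB/s.)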

That's my interpretation; I don't foresee a huge slow pool this way.


Whatever they do, please sell them with an included fast SSD, so we can use virtual memory (texturing...) w/o issues!

Could it be an NVDIMM slot (for the "slow pool") where you plug in whatever you wish? :) Although I wouldn't expect such modules to be cheap, so I'll leave that as a theoretical exercise.
 
...
The actual reason Sony did not go that route, in my opinion, was to give devs the message that they were really considering ease of development this time.
And that's why they'll still ship their next PS with a fully unified memory pool dedicated to games, as seen on PS4, Pro and XBX (supposing Cerny is the PS5 architect).

Again, isn't it cheaper to have multiple GBs of very cheap auxiliary RAM for that purpose? Something with barely more bandwidth but no seek times would already be a big win for streaming and virtualized textures/geometry etc.
I don't believe in a cheap and slow pool of RAM dedicated to textures (or anything else) because it wouldn't be a flexible solution. Some games might not need it, depending on the engine used. Some games might need more of it and some less, and in most cases it would mean more work for developers to adapt their engines to the slow pool of RAM (it should remind us of the tragic ESRAM predicament, except with a slow pool of RAM instead of a fast one...).

Considering the OS takes as much as 3GB currently, I think they'll probably need at least twice as much on PS5, hence 8GB of DDR RAM for the OS. 32GB (or more :yep2:) of fast HBM totally dedicated to games would be a godsend for developers IMO, like the 5GB of GDDR5 was great for PS4 developers, even indies.
 
Loading data straight from its source to memory is ideal.
What would be the point of an extra memory pool?
Copy my resources twice [disk->slow RAM->usable RAM]?
Introduce more latency [when loading from disk]?
Use it as a disk cache? [In that case I think an SSD/HDD hybrid would be better than putting an extra burden on the devs' shoulders...]
 
Loading data straight from its source to memory is ideal.
What would be the point of an extra memory pool?
Copy my resources twice [disk->slow RAM->usable RAM]?
Introduce more latency [when loading from disk]?
Use it as a disk cache? [In that case I think an SSD/HDD hybrid would be better than putting an extra burden on the devs' shoulders...]

With the automatic paging introduced with Vega it will probably reduce the work for devs... The problem with an SSD is the cost, too expensive for a $399 console, and 32GB of HBM seems very expensive for a 2019/2020 release date...
 
Loading data straight from its source to memory is ideal.

How about most of the entire game memory being addressable without needing to load anything from storage until you have put in a different game? Laying out the data structures for being slurped up into the APU so that various forms of memory management of the game world wouldn't even be needed. I recall something about Guerrilla Games using compute to "de-frag" texture data, so would there be a way of structuring the layout of data to begin with that wouldn't change (or change much and be managed easily) regardless of what was happening in the game world, and would that add some value WRT freeing up computational resources?

Of course making that work cross-platform seamlessly enough would be a problem to solve.
 
GDDR6 will top out at 16Gb dies (launching 2nd half of 2018) at 16Gbps. SK Hynix is talking about a 384-bit GPU with 768GB/s of bandwidth launching in 2018, which is rumored to be Volta.
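(For reference, those figures are self-consistent: 384 pins × 16Gbps per pin = 6,144Gbps, and dividing by 8 bits per byte gives 768GB/s.)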
 
With the automatic paging introduced with Vega it will probably reduce the work for devs... The problem with an SSD is the cost, too expensive for a $399 console, and 32GB of HBM seems very expensive for a 2019/2020 release date...
I don't have access to any Vega documentation; my understanding so far was that Vega's virtual memory was compatible with the CPU's (granularity, addressing), so it could access the whole RAM as a cache/store instead of just 256MiB like it does atm.
I don't know how well it would work with disk paging. As that would lead to the GPU calling the OS page fault handler to load the missing page, it might not work too well with a GPU due to the latency it would introduce... (Especially if the data is to come from an HDD, or much worse, optical ;p)
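To put rough numbers on that latency concern, here's a back-of-the-envelope sketch; the device latencies below are ballpark illustrations, not measurements:

```cpp
// Rough frame-budget arithmetic for a GPU page fault serviced
// synchronously from storage. All latency figures are ballpark.
#include <cstdio>

int main() {
    const double frame_ms    = 1000.0 / 60.0; // ~16.7 ms budget at 60 fps
    const double hdd_seek_ms = 10.0;          // typical HDD seek
    const double sata_ms     = 0.1;           // typical SATA SSD access
    const double nvme_ms     = 0.03;          // typical NVMe access

    printf("frame budget: %.1f ms\n", frame_ms);
    printf("HDD fault:    %.0f%% of the frame\n", 100.0 * hdd_seek_ms / frame_ms);
    printf("SATA fault:   %.1f%% of the frame\n", 100.0 * sata_ms / frame_ms);
    printf("NVMe fault:   %.2f%% of the frame\n", 100.0 * nvme_ms / frame_ms);
    return 0;
}
```

A single HDD-serviced fault eats over half the frame, which is why demand paging really wants an SSD (ideally NVMe) behind it.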
 
How about most of the entire game memory being addressable without needing to load anything from storage until you have put in a different game? Laying out the data structures for being slurped up into the APU so that various forms of memory management of the game world wouldn't even be needed. I recall something about Guerrilla Games using compute to "de-frag" texture data, so would there be a way of structuring the layout of data to begin with that wouldn't change (or change much and be managed easily) regardless of what was happening in the game world, and would that add some value WRT freeing up computational resources?

Of course making that work cross-platform seamlessly enough would be a problem to solve.
If you mean paging from disk, you'd better have a very fast/low-latency disk... so an NVMe SSD...
 
Considering the fact that the PS4 Pro has a separate pool of cheap/slow memory for the OS, and apparently that didn't become such a big overhead in design complexity as I've heard claimed before, wouldn't the smart design solution be to have LOADS of the cheap memory for caching virtual textures and asset streaming, which are big eaters of memory but don't need fast access all the time, and a cost-effective 8 or 16GB lightning-fast pool dedicated only to what really needs it?
The PS4 Pro expanded the DDR3 capacity for the southbridge chip, and since it was already there the change was incremental. It does provide direct storage for an OS, but not the one running on the APU where the game is.
The extra storage can be used to hold non-game application state when it is switched out of use; the transfer back to non-game mode is already heavier since that's not expected to happen all that frequently.

Making the DDR3 pool that is attached to an odd IO virtualizing southbridge running its own OS a more directly addressable pool for a game would be a larger change. There may be interfaces and technology that might make things more direct in the future.

(*edit: corrected grammar)
 
If you mean paging from disk, you'd better have a very fast/low-latency disk... so an NVMe SSD...
Actually not paging, but having everything in memory as opposed to leaning on paging to storage. Fast storage is great, but it can also be a bit of a gate if you were trying to do something more interesting than what you could get with a PC, for instance.

Just spit-balling here, but if Sony brought out a PS5 that had some large amount of memory not chopped up into memory, PCIe, SATA etc. (although it could function that way as needed), then you could lay out your data in a more easily accessible manner, to be used in algorithms or processes to create games that you might not ever see on a PC if you used Sony's tools for doing things. Data structures and code just laid out bare as you please in some non-volatile form and ready at a moment's notice.
 
Actually not paging, but having everything in memory as opposed to leaning on paging to storage. Fast storage is great, but it can also be a bit of a gate if you were trying to do something more interesting than what you could get with a PC, for instance.

Just spit-balling here, but if Sony brought out a PS5 that had some large amount of memory not chopped up into memory, PCIe, SATA etc. (although it could function that way as needed), then you could lay out your data in a more easily accessible manner, to be used in algorithms or processes to create games that you might not ever see on a PC if you used Sony's tools for doing things. Data structures and code just laid out bare as you please in some non-volatile form and ready at a moment's notice.
Are you still living in the 60s when we didn't use paging at all?
 
Loading data straight from its source to memory is ideal.
What would be the point of an extra memory pool?
Copy my resources twice [disk->slow RAM->usable RAM]?
Introduce more latency [when loading from disk]?
Use it as a disk cache? [In that case I think an SSD/HDD hybrid would be better than putting an extra burden on the devs' shoulders...]

The point is you gain granularity. Virtually all games use streaming nowadays. No sane game streams data from disk right at the frame it needs it. It's loaded in chunks (and large chunks at that, to avoid long HDD seek times), and a lot of it never actually ends up appearing on screen (or being heard in your speakers): parts of the world, sounds, models, animations and all sorts of data are loaded with a fat safety margin so the engine knows things are already available in memory IF they come to be needed. Presently, on PS4, that's a large chunk of amazing GDDR5 sitting there with its arms crossed. It's wasteful.

I imagine that in an open-world game, for example, the engine loads all the assets and data needed for every possible time of day when you walk into a region, even if it takes many minutes for the TOD to change enough for some of it to be needed, because it all comes packed into a single large compressed file straight from the HDD just in case; it sits there for some time still compressed and unusable, then as the player approaches said region some more the engine decompresses all of it at once. That's wasteful. That waste could go to a cheap memory, and the engine could then pull from the large pool into the fast memory only EXACTLY what it knows it will need a few frames from now. The latency this introduces is irrelevant next to the latency already present from the HDD.
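As a minimal sketch of that two-tier flow, assuming a hypothetical engine-side streamer (every name here is invented for illustration):

```cpp
// Hypothetical sketch of the two-tier idea above: coarse chunks are
// prefetched into a big, cheap "slow pool", and only the assets the
// engine knows it needs within the next few frames are decompressed
// into the small, fast pool. All names are invented for illustration.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Asset { uint32_t id; std::vector<uint8_t> compressed; };

class TwoTierStreamer {
public:
    // Stage a whole region chunk from disk into the slow pool (cheap RAM).
    // The HDD seek cost is paid once per chunk, well ahead of use.
    void prefetch_chunk(const std::vector<Asset>& chunk) {
        for (const Asset& a : chunk) slow_pool_[a.id] = a.compressed;
    }

    // Pull exactly one asset into the fast pool, decompressing on demand.
    // This copy is RAM-to-RAM, so its latency is tiny next to the HDD's.
    const std::vector<uint8_t>* make_resident(uint32_t id) {
        auto it = slow_pool_.find(id);
        if (it == slow_pool_.end()) return nullptr; // not prefetched yet
        fast_pool_[id] = decompress(it->second);
        return &fast_pool_[id];
    }

private:
    static std::vector<uint8_t> decompress(const std::vector<uint8_t>& src) {
        return src; // placeholder: a real engine would LZ-decompress here
    }
    std::unordered_map<uint32_t, std::vector<uint8_t>> slow_pool_; // big, cheap
    std::unordered_map<uint32_t, std::vector<uint8_t>> fast_pool_; // small, fast
};
```

The point of the split is that the HDD's seek cost is paid once per coarse chunk well ahead of time, while the fast pool only ever receives the exact assets needed a few frames out.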
 
There's only so much memory that can be part of your active set, because you have limited bandwidth to begin with.
Anyway, memory is used as a cache BECAUSE of the HDD's, or worse, the optical drive's performance...

In my 15 years of experience making games I can tell you few engines load in chunks like you assume they all do. (Granularity is generally at the individual object level, sometimes even lower, since an engine might load a partial MIP pyramid for example, not even considering engines based around virtual memory that stream at page granularity (64KiB on GCN)...)

Anyway, I don't see the point of an extra copy if you can have a *low-latency enough, fast enough* mass storage device; it's simpler and opens an opportunity to have massive streamed worlds...

[Not to say I wouldn't be able to make use of the memory cache you speak of, just saying there's no need to make things more complicated when there is a technical solution broadly available at an acceptable cost that doesn't require more developer effort that could be spent on something more interesting instead.]
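To make the finer-granularity point concrete, here's a toy version of the virtual-texturing residency math at 64KiB page granularity; the texture size and helper function are illustrative, not from any real engine:

```cpp
// Toy residency math for virtual texturing at 64 KiB page granularity
// (the GCN page size mentioned above). Numbers are illustrative only.
#include <cmath>
#include <cstdint>
#include <cstdio>

constexpr uint32_t kPageBytes = 64 * 1024; // GCN page size

// Pick the mip level from the screen-space footprint of a texel.
uint32_t mip_for_footprint(float texels_per_pixel) {
    return texels_per_pixel <= 1.0f
               ? 0
               : static_cast<uint32_t>(std::log2(texels_per_pixel));
}

int main() {
    // A 4K x 4K BC-compressed texture (~1 byte/texel) seen at a distance
    // where each pixel covers ~8x8 texels: only mip 3 pages are needed.
    const uint32_t mip       = mip_for_footprint(8.0f);          // -> mip 3
    const uint32_t mip_dim   = 4096u >> mip;                     // 512 texels
    const uint32_t mip_bytes = mip_dim * mip_dim;                // ~256 KiB
    const uint32_t pages     = (mip_bytes + kPageBytes - 1) / kPageBytes;
    printf("mip %u: %u pages (%u KiB) instead of the full ~16 MiB\n",
           mip, pages, mip_bytes / 1024);
    return 0;
}
```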
 
At the beginning of this generation there was speculation about a smart cache between the HDD and main memory.
It was probably due to the 8GB of solid-state memory in the One, which in the end is just used for fast suspend/startup.

In the Xbox One X, btw, there's logic to use spare memory as a cache without developer intervention.
So what if we all acknowledge that a 2TB SSD will not make it into the cheap models even in 2019, and consider an evolution of these concepts: a standard HDD plus a soldered 32GB of eMMC (or UFS if possible) managed by the SoC?
 
Not in response to anyone in particular.
For a console releasing in 2021.
Game sizes are getting bigger, and I expect the next console to have multi-TB drives in them, let's say 2TB min.
SSDs don't seem to be dropping in price enough to make a 2TB SSD a reality, so it may still be mechanical, which would then make 3TB the min.

RAM prices don't seem to be dropping a lot either, so how much RAM are people expecting in the next console, considering we have a mid-gen at 12GB? 24GB? That much high-speed memory may be very costly even in 2021. 18GB possibly?

SSD + large single pool of fast memory may just end up being too costly, especially together.

Developers didn't seem to have problems with the X1 split-pool design inherently; it was the fact that the fast pool was so small that made managing it a lot harder. If the target res had been 720p we may not have heard many complaints at all, apart from the bad tools at the start.

A reasonably sized high-speed pool plus a larger, slower, cheaper memory pool may be the way to alleviate those issues, and it would also help with contention etc.

We could end up with less than 18GB of main memory, and developers will just have to manage it a lot better with help from things like the Vega HBCC and BC6 texture compression?

I'm not very hopeful for a 2TB SSD & more than 18GB of HBM/GDDR6/5X memory. Not at a $400 PS5 price point in 2021. I've not been following memory and SSD prices though, so I'm happy to be told that by then price won't be a concern at all.
 
Something I've said before, but thinking about all this again makes it even clearer to me: these mid-gen machines have made it much harder to build a compelling next gen by 2020-21 (earlier for some).
Especially the 1X, with the extra memory for assets.

With no mid-gen, they could've released a console in 2019 with 8TF, Zen, a 2-3TB HDD and 16GB of fast memory. It would've easily looked much better than XO and PS4 at 4K.

Not saying I'm against mid-gens though, just saying they've made it a lot harder for the next gen to be compelling or seen as a generational jump. It's an easier sell if you say it's iterative though.
 