Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
You can't process data directly from storage.

Of course you can... Virtual Memory / Pagefile is exactly that.
It's only a matter of choosing what type of data you can stream directly from a (very) fast storage. Obviously latency will become a very big problem if the CPU asks for many KByte-sized files (shader programs) instead of >10 MByte-sized ones (textures).
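To put rough numbers on that latency point, here's a back-of-envelope Python sketch. The 100 µs per-request latency and 3000 MB/s raw bandwidth are illustrative assumptions, not figures for any specific drive:

```python
def effective_mb_per_s(file_kb: float, latency_us: float, raw_mb_per_s: float) -> float:
    """Effective throughput when every request pays a fixed latency cost."""
    transfer_s = (file_kb / 1024) / raw_mb_per_s   # time spent moving the bytes
    total_s = latency_us / 1e6 + transfer_s        # plus the per-request latency
    return (file_kb / 1024) / total_s

# Assumed NVMe-class storage: 3000 MB/s raw, 100 us per request.
shader = effective_mb_per_s(4, 100, 3000)           # 4 KB shader program
texture = effective_mb_per_s(10 * 1024, 100, 3000)  # 10 MB texture
```

The 4 KB reads land at a few tens of MB/s effective, while the 10 MB reads stay near the raw bandwidth, which is exactly the small-file problem described above.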
 
Of course you can... Virtual Memory / Pagefile is exactly that.
It's only a matter of choosing what type of data you can stream directly from a (very) fast storage. Obviously latency will become a very big problem if the CPU asks for many KByte-sized files (shader programs) instead of >10 MByte-sized ones (textures).
But it's far too slow to be counted on for much. That's why some current gen games on PC stutter with 8gb + SSD for pagefile, and it's remedied with 16gb RAM.

That's current gen games... now counting on the pagefile for next gen games with only 12gb? Not happening. 16gb is a stretch even if we're expecting the dramatic improvement a gen-on-gen leap should bring.
 
If we are cost-restricted on sizing, then the only other sensible option is to look further into compression, or other ways to trim assets out of the installation.

It's still puzzling how audio and subtitles seem weirdly packaged and limited by region. I generally do not want more than English audio, occasionally Japanese, to be installed, but there could be licensing implications.

Where are we on those fronts? Any expected changes with regards to compression or just entire removals of redundant asset creation ?

Audio still has scope for compression, but this adds a challenge to realtime re-mixing for audio output. There are probably some marginal improvements to be had in large texture storage, but TANSTAAFL: heavy compression saves RAM, but you may take a performance hit because you're not just processing a texture but decompressing it in realtime.

Of course, there is procedural texture generation. :runaway: It's been a good few months since I saw a 100k demo promising a breakthrough.

Realtime compression of other data has been around a few years in most operating systems but I wonder how much you could meaningfully save in a console. A general purpose OS knows that not all apps need to be active all of the time.
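For a sense of what procedural texture generation even looks like in code, here's a minimal value-noise sketch in Python. The hash constants are arbitrary illustration; a real engine would run this on the GPU:

```python
def hash2(x: int, y: int) -> float:
    """Cheap integer hash -> pseudo-random value in [0, 1)."""
    h = (x * 374761393 + y * 668265263) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(u: float, v: float) -> float:
    """Bilinearly interpolated lattice noise: a few bytes of code
    standing in for kilobytes of stored texture data."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    a = hash2(x0, y0) * (1 - fx) + hash2(x0 + 1, y0) * fx
    b = hash2(x0, y0 + 1) * (1 - fx) + hash2(x0 + 1, y0 + 1) * fx
    return a * (1 - fy) + b * fy

# Generate a 64x64 grayscale tile entirely from code, no stored asset.
tile = [[value_noise(x / 8, y / 8) for x in range(64)] for y in range(64)]
```

A few lines of code stand in for kilobytes of stored texels, but every sample costs ALU time at runtime, which is the same storage-vs-compute trade-off as compressed textures.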
 
But it's far too slow to be counted on for much. That's why some current gen games on PC stutter with 8gb + SSD for pagefile, and it's remedied with 16gb RAM.

That's current gen games... now counting on the pagefile for next gen games with only 12gb? Not happening. 16gb is a stretch even if we're expecting the dramatic improvement a gen-on-gen leap should bring.

Is this not what Vega's HBCC (high-bandwidth cache controller) is supposed to assist with? We must assume this is possible on the next gen consoles.
 
But it's far too slow to be counted on for much.
Can you give any examples? AI for world simulation doesn't need to have ten thousand agents in RAM at once. Physics potentially doesn't need lots. Yes, you could do more with more RAM for certain, but we're cost constrained here. If something's got to give, more RAM + lower storage speed is in most cases going to be a less flexible solution than less RAM + fast storage speed.
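The "not all ten thousand agents in RAM" idea can be sketched as an LRU cache. Everything here (AgentCache, load_fn) is a hypothetical illustration, not any engine's API:

```python
from collections import OrderedDict

class AgentCache:
    """Hypothetical LRU cache: only recently-touched simulation agents
    stay resident in RAM; the rest would live on fast storage."""
    def __init__(self, capacity, load_fn):
        self.capacity = capacity
        self.load_fn = load_fn            # fetches agent state from "storage"
        self.resident = OrderedDict()     # agent_id -> state, in LRU order

    def get(self, agent_id):
        if agent_id in self.resident:
            self.resident.move_to_end(agent_id)    # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict the coldest agent
            self.resident[agent_id] = self.load_fn(agent_id)
        return self.resident[agent_id]

# 10,000 simulated agents, but only 256 ever resident at once.
cache = AgentCache(256, load_fn=lambda i: {"id": i, "state": "idle"})
for i in range(10_000):
    cache.get(i)
```

The viability of this scheme hinges entirely on how fast load_fn can be, which is the RAM-vs-storage trade-off being argued.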
 
Can you give any examples? AI for world simulation doesn't need to have ten thousand agents in RAM at once. Physics potentially doesn't need lots. Yes, you could do more with more RAM for certain, but we're cost constrained here. If something's got to give, more RAM + lower storage speed is in most cases going to be a less flexible solution than less RAM + fast storage speed.
If it were between 2.5" HDDs again with 24-32gb vs 16gb with an SSD, yes, I would agree. But that's an extreme example.

24gb + a full-speed 3.5" drive would be better than 16gb + SSD... 24gb is the current standard in the PC space right now, that being 16gb DDR4 + 8gb VRAM, and you don't need an SSD on PC where you have full-size drives and good CPUs. Load times on my Seagate Barracuda with Ryzen are fast.

The only caveat here being consoles would have to be larger, which I don't care about personally. General consumers might.
----

Isn't it much slower to swap out AI functionality and world calculations than just swapping out lods and textures? Interactive elements vs. non?

I will look into this further admittedly.
 
I really think that next gen consoles should launch in 2022, because if all you guys are talking about is true, then the new consoles will not be next gen. Next gen consoles should be at least 24 TF and with 64GB of RAM. Only then will the difference in graphics be big enough.

At least 64GB RAM, lol, not even PS6... 24TF is also dreaming. Have you seen The Last of Us Part II, GoW, HZD? That's all running on 1.8TF hardware. What do you think those developers can do with something like 11 or 12 TF, with perhaps other improvements to the entire system too? Not only that, but hardware doesn't define a console, software does. PS2 had by far the weakest hardware but it had the best games according to most; the system sold north of 150 million units.

If you want specs then go for it now; there are already 64GB RAM systems, 128 if you want. 24TF is also there for you right now if you want, with SLI 1080 Tis.

However it turns out, Sony won't be releasing a system that doesn't visually improve by a significant margin, whatever the amount of RAM or the TF number. Really, if those next gen games look twice as stunning as GoW on average, can we really complain? Maybe they improve on other aspects as well, such as audio, physics etc.
And don't forget the next gens may get a real CPU instead of those tablet Jaguars; that alone will improve things very much. Maybe we get to see more of Cyberpunk 2077 and get an idea of what next gen will be?
 
24gb + a 3.5 full speed drive would be better than 16+ssd... 24gb is the current standard in the PC space right now. That being 16ddr4 + 8gb VRAM
PCs with 16 GB RAM and 8 GB VRAM are expensive and top end. Isn't it more usual that consoles have about half, if not less than, the gaming PC standard? I haven't found a clear answer to that through Google, but a PC around the PS360 launch apparently had 512 MB VRAM and 512 MB RAM, going up to multiple GBs. This article suggests a 2000 PC had 256-512 MB RAM when the PS2 had ~36 MB.

Isn't it much slower to swap out AI functionality and world calculations than just swapping out lods and textures? Interactive elements vs. non?
If latency is low enough, you can effectively work from storage in smaller pages. Basically, all memory is just a bucket to speed data availability to the CPU's L1 caches. All the work is done in the caches, but they need to be small, so data is moved out to the L2 cache. But that can't be too big either, so it's moved out to the L3. Then RAM. Then storage. Now, if RAM were fast enough for the work, you could skip the L3 altogether, and if the storage were fast enough, you could skip the RAM altogether. Everything on screen and in the local vicinity will need to be in RAM (which can be very minimal with tiled resources), but everything beyond that can be cached in small workable units and swapped in and out, potentially.

If you look at a 16 GB PC, how much of that RAM is actually being accessed at any given moment? How much data is just sitting there for when it's needed at some point? Sebbbi's posts about tiled textures really highlight the inefficiency of the traditional "everything in VRAM" model. Without the need for RAM to function as a cache for slow-access HDDs, RAM requirements are reduced. And tiled resources should reduce requirements further, benefiting massively from the low latency of flash. So pressure on RAM is definitely less than in the conventional model. Well, it would be if there's enough fast-enough flash.
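Putting illustrative numbers on that hierarchy: the time to fetch a 64 KB tile at each level, using assumed latency/bandwidth figures rather than any specific part's specs:

```python
# Rough fetch time for a 64 KB tile at each level of the memory hierarchy.
# (latency_seconds, bandwidth_bytes_per_second) - illustrative values only.
tiers = {
    "L2 cache":   (20e-9, 1000e9),
    "L3 cache":   (40e-9,  500e9),
    "DRAM":       (100e-9,  50e9),
    "NAND flash": (80e-6,    3e9),
    "HDD":        (8e-3,   150e6),
}
tile = 64 * 1024  # bytes per tile

# Total fetch time in milliseconds: latency plus transfer time.
fetch_ms = {name: (lat + tile / bw) * 1e3 for name, (lat, bw) in tiers.items()}
```

At roughly 0.1 ms per tile, flash can service well over a hundred tile fetches inside a 16 ms frame, while an HDD manages about two, which is why fast storage changes which tiers you can afford to "skip".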
 
PCs with 16 GB RAM and 8 GB VRAM are expensive and top end. Isn't it more usual that consoles have about half, if not less than, the gaming PC standard? I haven't found a clear answer to that through Google, but a PC around the PS360 launch apparently had 512 MB VRAM and 512 MB RAM, going up to multiple GBs. This article suggests a 2000 PC had 256-512 MB RAM when the PS2 had ~36 MB.

Well, starting with the PS4, consoles adopted whatever mid-range card was out at the time. 8GB 1080s are high end right now, but that and Vega's level of performance will occupy the mid range when the PS5 is ready. Mid-range 2019 could even surpass that, who knows. It's worth noting that since PS5 is getting Navi, it's by default more advanced than AMD's current offerings.

PS4 had 8gb and the high end on PC wasn't even double that: 8gb DDR3 and 2 or 3gb cards (depending on whether you had a 680 or 7970). As for 2005, there were games like FEAR actually recommending 1gb of RAM and a 256mb card. So the gap keeps getting narrower memory-wise.
If latency is low enough, you can effectively work from storage in smaller pages. Basically, all memory is just a bucket to speed data availability to the CPU's L1 caches. All the work is done in the caches, but they need to be small, so data is moved out to the L2 cache. But that can't be too big either, so it's moved out to the L3. Then RAM. Then storage. Now, if RAM were fast enough for the work, you could skip the L3 altogether, and if the storage were fast enough, you could skip the RAM altogether. Everything on screen and in the local vicinity will need to be in RAM (which can be very minimal with tiled resources), but everything beyond that can be cached in small workable units and swapped in and out, potentially.

If you look at a 16 GB PC, how much of that RAM is actually being accessed at any given moment? How much data is just sitting there for when it's needed at some point? Sebbbi's posts about tiled textures really highlight the inefficiency of the traditional "everything in VRAM" model. Without the need for RAM to function as a cache for slow-access HDDs, RAM requirements are reduced. And tiled resources should reduce requirements further, benefiting massively from the low latency of flash. So pressure on RAM is definitely less than in the conventional model. Well, it would be if there's enough fast-enough flash.

Hmm, yes, I knew about the L1-to-storage hierarchy. Ryzen does have a ton of cache, so that is good for next gen at least.

Well, looking at benchmarks for the latest Battlefield/Battlefront games as an example, they seem to be using 11GB or thereabouts, so they definitely need more than 8.

Perhaps there's some development advancement to be made, and having fast storage as standard would help greatly towards that. But how many devs would be willing to do those workarounds rather than just have the extra RAM available? The console makers have also got to think about making it as easy as possible for a wide range of developers. Having just 16gb (so 12-13 after OS) is only ~50% more than the X. Or, glass half full, it's more than double the usable memory of the base consoles. I don't know; if they go that route they'd better spare no expense on storage speed.
 
Well, looking at benchmarks for the latest Battlefield/Battlefront games as an example, they seem to be using 11GB or thereabouts, so they definitely need more than 8.
PC benchmarks aren't at all representative of what consoles can do. The engine cannot be optimised for a specific amount of VRAM, so it'll probably just copy resources ad hoc from system RAM under DirectX calls, unless they're all in VRAM, which removes that bottleneck. The amount of data needed is however much is rendered on screen plus world simulation. If, hypothetically, you can swap in all the data needed from storage in a single frame, you'd need only as much memory as one frame's data - the principle of tiled resources.

This go-to post from Sebbbi lays out the (eye-opening) numbers as to what data is actually needed if it can be used efficiently.
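In the same spirit, here's a rough working-set estimate for an idealised tiled renderer. The layer count, bytes per texel, and the 2x slack factor are all assumptions for illustration:

```python
# Back-of-envelope working set for a tiled-resources renderer:
# ideally you only need the texels actually sampled this frame,
# roughly one texel per screen pixel per material layer.
width, height = 3840, 2160   # 4K output
layers = 4                   # albedo, normal, roughness, etc. (assumed)
bytes_per_texel = 4          # compressed-ish budget (assumed)
overdraw_margin = 2.0        # slack for mips, tile borders, misprediction

working_set_mb = width * height * layers * bytes_per_texel * overdraw_margin / 2**20
```

Even with generous slack this lands in the low hundreds of MB, far below the multi-GB texture pools of the conventional "everything in VRAM" model.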

Having just 16gb (so 12-13 after OS) is only ~50% more than the X. Or, glass half full, it's more than double the usable memory of the base consoles. I don't know; if they go that route they'd better spare no expense on storage speed.
I don't think you or they should be looking at it in terms of how many times the previous generation, but at what they can do with $x of their limited BOM spent here or there. PS4, for example, is a freak case. It was set to get 4 GBs, then got a last-minute upgrade. Sony knew they could plan for the 8GB option without changing anything fundamental in the architecture, just by being able to support the double-sized chips if they became available in time. They did. The end result was a jump in RAM from PS2 > PS3 of 12x, and from PS3 > PS4 of 16x where it would have been 8x. Now if PS4 had come with 4 GBs, we'd be looking at 16 GBs now and saying, "that's 4x as much, and when you factor in the OS footprint, more." The ratios could easily be different to what we got.

Now in planning PS5, there's no roadmap for RAM prices dropping massively in the next year for a box launching in 2020. It might happen, but we don't know that. So we have to design pretty safe, or stretch out our timelines and refuse to release a console until 24 GBs is available.
 
PS4 had 8gb and the high end on PC wasn't even double that: 8gb DDR3 and 2 or 3gb cards (depending on whether you had a 680 or 7970). As for 2005, there were games like FEAR actually recommending 1gb of RAM and a 256mb card. So the gap keeps getting narrower memory-wise.

16gb DDR3 in late 2013 for high-end wasn't uncommon; I would say it's right on. One of my systems from 2009 has 8gb (i7 920). The first GTX Titan had 6gb VRAM (4.7TF), almost a year before the PS4 came. A friend bought a 6gb HD 7970 (4.3TF) in summer 2012 if I remember correctly; they weren't rare either.

Even if you say 8gb was high end by the consoles' release (which I don't think it was), with 6gb VRAM cards that's 14gb, close to double the amount. IMO 16gb was high end; that would be 22gb RAM total, almost 3 times more.
 
Direct comparisons to PC are always misleading.

You can put up to 128GB (or more) into a consumer PC. Doesn't mean a thing.

PCs are staggeringly wasteful in terms of memory due to needing to store many GB of data that you don't need at any one moment. Consoles do this too, but with a sufficiently fast storage system that could always be relied upon, you could - if you chose to develop it - have a streaming system that could do away with 1~3 GB of memory on current systems, and probably far more on next gen systems. (I have 32 GB of RAM; no game has used even half of it, even with all games loading from a mechanical HDD.)

If every next gen console uses fast flash (even a ~300 MB/s eMMC cache for game data would do) then games can move their baseline to not needing several GB for caching.

Unfortunately I wouldn't bet on flash unless it's able to eliminate the need for a mechanical drive in the base (pov tier) console.
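What a given storage tier buys you per frame is simple arithmetic. The bandwidth figures below are rough class numbers, not any specific part:

```python
def mb_per_frame(mb_per_s: float, fps: int) -> float:
    """Data deliverable within a single frame at a given frame rate."""
    return mb_per_s / fps

emmc_budget = mb_per_frame(300, 30)   # ~300 MB/s eMMC cache at 30 fps
hdd_budget = mb_per_frame(100, 30)    # mechanical drive, generous sequential
nvme_budget = mb_per_frame(3000, 30)  # fast NVMe-class flash
```

Even the modest eMMC case delivers 10 MB of fresh data every 30 fps frame, which is what would let games shrink the multi-GB prefetch cache described above.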
 
It's still puzzling how audio and subtitles seem weirdly packaged and limited by region. I generally do not want more than English audio, occasionally Japanese, to be installed, but there could be licensing implications.



Audio still has scope for compression, but this adds a challenge to realtime re-mixing for audio output. There are probably some marginal improvements to be had in large texture storage, but TANSTAAFL: heavy compression saves RAM, but you may take a performance hit because you're not just processing a texture but decompressing it in realtime.

Of course, there is procedural texture generation. :runaway: It's been a good few months since I saw a 100k demo promising a breakthrough.

Realtime compression of other data has been around a few years in most operating systems but I wonder how much you could meaningfully save in a console. A general purpose OS knows that not all apps need to be active all of the time.
lol, I was hoping someone would mention this. I haven't done much looking into this area; what is the major culprit that makes procedural generation impossible to use outside of tech demos?
 
Supplementing 16 GB of RAM with fast flash storage seems like an interesting and feasible idea and a good direction to take the conversation.
 
At least 64GB RAM, lol, not even PS6... 24TF is also dreaming. Have you seen The Last of Us Part II, GoW, HZD? That's all running on 1.8TF hardware.

I've worked in a video game shop for almost 10 years; of course I know a lot about games.

That's all running on 1.8TF hardware. What do you think those developers can do with something like 11 or 12 TF, with perhaps other improvements to the entire system too?

There's already the Xbox One X with 6TF. And most of its power went to 4K, so if next gen consoles are ~14TF, almost half of that power will go just to matching the Xbox One X, leaving only a bit more than the same power for other stuff. So realistically that will be a console with something like twice-better graphics, and that isn't a generational leap.
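The "half the power goes to 4K" claim is really just pixel-count arithmetic, which a couple of lines make explicit:

```python
def pixel_ratio(w1: int, h1: int, w2: int, h2: int) -> float:
    """How many times more pixels the second resolution has."""
    return (w2 * h2) / (w1 * h1)

uhd_vs_fhd = pixel_ratio(1920, 1080, 3840, 2160)  # 4K has 4x the pixels of 1080p
# Naively, a 6 TF GPU rendering 4x the pixels has ~1.5 TF-equivalent
# per 1080p-sized frame; a 14 TF part at 4K would have ~3.5.
per_1080p_equiv_x = 6 / uhd_vs_fhd
per_1080p_equiv_next = 14 / uhd_vs_fhd
```

Of course TF doesn't scale linearly with pixels in practice, but it shows why raw TF alone understates or overstates a leap depending on the target resolution.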

If you want specs then go for it now; there are already 64GB RAM systems, 128 if you want. 24TF is also there for you right now if you want, with SLI 1080 Tis.

Console gaming and PC gaming differ not only in specs; I don't play on consoles because of power. And if we speak about the market, in 2020-2021 there will be PCs so powerful that PC gamers will laugh at consoles a lot harder than when the Xbox One and PS4 launched. Many of them laughed then, because the consoles were just like mid-range PCs, and a year later a gamer could buy, for the same price as the consoles, a PC with twice-better specs. If next gen consoles launch in 2020-2021 with a 14TF GPU and 16 GB of RAM, that really will be even less than a low-end PC of 2020-2021.
 
PCs with 16 GB RAM and 8 GB VRAM are expensive and top end.

Not exactly. ;) My friend bought a laptop with 32GB of RAM, and I think 4GB video RAM, and that was two years ago in 2016.

Now in planning PS5, there's no roadmap for RAM prices dropping massively in the next year for a box launching in 2020. It might happen, but we don't know that. So we have to design pretty safe, or stretch out our timelines and refuse to release a console until 24 GBs is available.
Yes, that 24GB plan is really good. But maybe there will be the same situation as with the PS4: they will plan a console with 16GB RAM, but then double it and make a console with 32GB RAM.
 
Many of them laughed then, because the consoles were just like mid-range PCs,

They weren't even mid range, maybe lower-end mid range... long before the PS4/XO1 there were the GTX Titan and HD 7970, which were much more powerful GPUs.
CPU-wise they were a joke; total RAM was also a snap under mid range, if 8 + 2 was mid range in late 2013.

And if we speak about the market, in 2020-2021 there will be PCs so powerful that PC gamers will laugh at consoles

You don't buy a console for its specs; it's the exclusives that justify one. If next gen games look twice as good, that's nothing to complain about; think twice HZD or The Last of Us 2. Then there could be better AI, physics and audio.

I'm a PC gamer but I'm not laughing at console specs; they don't impress on a hardware level but they do on a software level. It's impressive how some games look on low/mid 2013 hardware. I'm not into cinematic singleplayer games @ 30 fps, otherwise I would've bought one.
 
There's already the Xbox One X with 6TF. And most of its power went to 4K, so if next gen consoles are ~14TF, almost half of that power will go just to matching the Xbox One X, leaving only a bit more than the same power for other stuff. So realistically that will be a console with something like twice-better graphics, and that isn't a generational leap.
But you canna change the laws of physics! There are three factors in play: time, price, and power. If you absolutely pick a power level, 8x PS4, and you pick a price of $400, you'll have to wait until the time allows that power at that price point. Which may be 2025. If you pick a release time, 2020, and a price, $400, you'll have to pick power that's available then at that price.

There's no point asking for a 'generational' advance in hardware (8x everything) by 2020, as it's not possible in a consumer-priced product. Especially compared to XB1X, which of course is going to limit the gap between this gen and the next. For starters, XB1X is only experienced by a tiny fraction of current gen gamers. Most are on XB1/PS4, and what counts is a generational advance on them.

If instead you're asking for a 'generational advance' and to wait until it can happen, you want to discuss the whens of a launch in another thread. This thread is about the technical possibilities and plausibilities. That is, discuss what's possible in a 2023 console, but don't argue that that's the decision that should be made.
 
There's already the Xbox One X with 6TF. And most of its power went to 4K, so if next gen consoles are ~14TF, almost half of that power will go just to matching the Xbox One X, leaving only a bit more than the same power for other stuff. So realistically that will be a console with something like twice-better graphics, and that isn't a generational leap.

TF is just one measure; also, to an extent, devs are being held back by the lowest common denominator (Xbox One). This gen's CPUs were weak. Do you remember when MS said how 'unbalanced' the PS4 was? Well, how 'unbalanced' is the X!? With a better CPU and other improvements, along with a nice GPU upgrade, I can imagine a true gen leap from last gen.

Yes, that 24GB plan is really good. But maybe there will be the same situation as with the PS4: they will plan a console with 16GB RAM, but then double it and make a console with 32GB RAM.

There may be a similar scenario, but I doubt they would double; they'd just go up to 24GB. As previously mentioned, it'll all be to do with costs vs. dev expectations... IIRC Sony doubling the RAM was a calculated risk they took after feedback from devs.
 