Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Just making it bigger because you expect it to get bigger is poor engineering. How much memory will actually be needed, based on time to populate it and how much devs can spend to fill it? That there will give you the RAM requirement. I'd far prefer less RAM and an SSD, and I'm sure devs would too. Far more flexible.
Historical precedent counts for a lot though. Past speculation (that I've seen) about future console tech has usually vastly underestimated the amount of memory needed. Like last gen, a lot of people thought 2GB would be what we would get.

I agree that an SSD would be great, and it would be the only justification I can think of for not using ample amounts of memory, considering the cost of SSD storage.
 
Historical precedent counts for a lot though. Past speculation (that I've seen) about future console tech has usually vastly underestimated the amount of memory needed. Like last gen, a lot of people thought 2GB would be what we would get.

I agree that an SSD would be great, and it would be the only justification I can think of for not using ample amounts of memory, considering the cost of SSD storage.
But it sounds like we're trying to take all this power to move to more procedurally based systems, and that would be a great use of both memory and power; however, it's also really challenging to curate in a way players find enjoyable.
 
I'm not a big fan of the "4K is wasted power" narrative. Higher quality textures and more fine detail are what move us to next gen, among other things. Yes, it's not as big of a jump as from lower resolutions, but it's still a jump nonetheless.

There might be better areas to invest that power, but I'm not necessarily sure people would come to love the game more because of it.

I guess it's the lazy way to increase quality, since you're naively sampling more texels & geometry. Rotated-grid MSAA shaded @ sample frequency, with increased per-surface AF, should be more efficient, along with other techniques for specular anti-aliasing à la LEAN mapping, reconstruction, etc. Objects with small screen-space area would be the biggest losers, of course.
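As a rough back-of-the-envelope illustration of the brute-force cost being described (the pixel counts below are just standard resolution figures, not numbers from the thread):

```python
# Raw pixel counts behind the "naive supersampling" cost argument above.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "2160p (4K)": (3840, 2160),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} M pixels, "
          f"{pixels / base:.1f}x the shading work of 1080p")

# Rendering natively at 4K shades 4x the samples everywhere, whereas
# edge-focused MSAA only spends extra samples along triangle edges.
```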

Like last gen, a lot of people thought 2GB would be what we would get.
Instead of trying to be the guy who predicts with 100% certainty, it'd be more fruitful for discussion to look at what options are available, i.e. the actual specs that determine the final amount (bus width & chip density), instead of just a blanket wishlist.

2GB is what's easily achievable on a 128-bit bus with 2Gbit density chips @ 16-bit I/O per chip.

As we know, there are trade-offs for a fatter bus, but this gen we had APUs from the start, so a significantly larger chip than an individual CPU or GPU in previous generations. Next gen design philosophy may continue with that further.

256-bit bus obviously doubles that to 4GB, which is what we nearly got with the PS4 until the timely arrival of 4Gbit density chips. It wasn't until very late that public evidence of their existence showed up in Hynix's catalogue, and it was a literal last-minute decision on Sony's part before the reveal presentation.
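A minimal sketch of the capacity arithmetic in that line of reasoning (chip count follows from bus width divided by per-chip I/O, capacity from chip count times density):

```python
# Sketch of the capacity arithmetic above: bus width / per-chip I/O width
# gives the chip count, chip count x chip density gives the capacity.
def capacity_gb(bus_width_bits, io_per_chip_bits, chip_density_gbit):
    chips = bus_width_bits // io_per_chip_bits
    return chips * chip_density_gbit / 8  # Gbit -> GB

print(capacity_gb(128, 16, 2))  # 8 chips x 2Gbit  = 2.0 GB
print(capacity_gb(256, 16, 2))  # 16 chips x 2Gbit = 4.0 GB (early PS4 plan)
print(capacity_gb(256, 16, 4))  # 16 chips x 4Gbit = 8.0 GB (launch PS4)
```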

Again, instead of trying to be Mr. Right All The Time, just keep open to the range of options and the situation.

Next gen, who knows, we might see them go for an even fatter chip for yield purposes if the latest node isn't as mature as 28nm was for this gen's consoles at the same relative point in time.
 
I'm not a big fan of the "4K is wasted power" narrative. Higher quality textures and more fine detail are what move us to next gen, among other things. Yes, it's not as big of a jump as from lower resolutions, but it's still a jump nonetheless.

There might be better areas to invest that power, but I'm not necessarily sure people would come to love the game more because of it.
It'll surely be like the last two generations of consoles where there's a target resolution: 720p last gen and 1080p on the PS4, but certain games drop below that.

Thing is, sub-4K will still be very crisp, so non-native games will be less of an issue than ever. But I wouldn't mind if native 4K was standard; I don't think great image quality is a waste at all.
 
A lot of these, SMT included, could help speed up development since developers aren't wasting excessive time optimizing. They can focus elsewhere.

It also helps teams who don't have the resources (indies, smaller studios).
Perhaps, although I'm struggling to imagine devs with such a large dataset who wouldn't optimize in subsequent titles, so it feels like a band-aid. I guess I'm more wondering what the silicon cost of implementing HBCC is, and how it might affect bus traffic in a shared environment - the potential for falling off a performance cliff.

I guess I'm also curious how Scorpio handles BC titles with the extra 4GB for caching.
 
I guess it's the lazy way to increase quality, since you're naively sampling more texels & geometry. Rotated-grid MSAA shaded @ sample frequency, with increased per-surface AF, should be more efficient, along with other techniques for specular anti-aliasing à la LEAN mapping, reconstruction, etc. Objects with small screen-space area would be the biggest losers, of course.


Instead of trying to be the guy who predicts with 100% certainty, it'd be more fruitful for discussion to look at what options are available, i.e. the actual specs that determine the final amount (bus width & chip density), instead of just a blanket wishlist.

2GB is what's easily achievable on a 128-bit bus with 2Gbit density chips @ 16-bit I/O per chip.

As we know, there are trade-offs for a fatter bus, but this gen we had APUs from the start, so a significantly larger chip than an individual CPU or GPU in previous generations. Next gen design philosophy may continue with that further.

256-bit bus obviously doubles that to 4GB, which is what we nearly got with the PS4 until the timely arrival of 4Gbit density chips. It wasn't until very late that public evidence of their existence showed up in Hynix's catalogue, and it was a literal last-minute decision on Sony's part before the reveal presentation.

Again, instead of trying to be Mr. Right All The Time, just keep open to the range of options and the situation.

Next gen, who knows, we might see them go for an even fatter chip for yield purposes if the latest node is too expensive to have any wasted chips early on.

I'm just speculating like anyone else, take it easy. You think I'm scoffing at people saying less than 32GB? 24GB seems reasonable as well, yes, depending on the chips they have. I never said anything opposed to that.

I don't think 2GB was ever seriously on the table, and Sony would've had to really limit the PS4's non-gaming functionality, or add additional DRAM, if they were limited to 4GB of GDDR5.
 
I'm just speculating like anyone else, take it easy.
Sorry, I didn't mean to target you specifically. It was more a general statement on the matter. Plenty of folks have wishlists based on historical trends, but it's easy to get caught up in trends vs. what's going on behind the scenes that led to the final spec.
 
Can HBCC access an SSD directly?

The Radeon Pro SSG places NVMe drives on the board, and AMD's description and some of its slides indicate it is a connection separate from the PCIe link to the rest of the system.
I don't think the storage is treated like a typical system drive, however, so a console using that particular feature may have to include another drive that the main system can use.
 
Historical precedent counts for a lot though. Past speculation (that I've seen) about future console tech has usually vastly underestimated the amount of memory needed. Like last gen, a lot of people thought 2GB would be what we would get.
A lot? I don't recall anyone presenting that as a valid amount. It was always 4GB minimum, factoring in a balance with speed (GDDR5). Precedent doesn't define any competent engineering process either. That is, precedent would tell us TV resolutions will keep quadrupling every 5-10 years, so eventually we'll get 30,720 by 17,280 pixel displays that are thirty feet across... There'll be hard limits and sane limits, the same limits that meant we didn't go from a 30 MHz PS1 CPU to a 300 MHz PS2 CPU to a 3000 MHz PS3 CPU to a 30,000 MHz PS4 CPU.

The amount of RAM in the next machines will be decided upon based on smart engineering principles and not mindlessly following a historic and vaguely geometrical progression.
 
A lot? I don't recall anyone presenting that as a valid amount. It was always 4GB minimum, factoring in a balance with speed (GDDR5). Precedent doesn't define any competent engineering process either. That is, precedent would tell us TV resolutions will keep quadrupling every 5-10 years, so eventually we'll get 30,720 by 17,280 pixel displays that are thirty feet across... There'll be hard limits and sane limits, the same limits that meant we didn't go from a 30 MHz PS1 CPU to a 300 MHz PS2 CPU to a 3000 MHz PS3 CPU to a 30,000 MHz PS4 CPU.

The amount of RAM in the next machines will be decided upon based on smart engineering principles and not mindlessly following a historic and vaguely geometrical progression.

Well, I was on NeoGAF prior to the release of the 8th gen, and that's what plenty of people expected. It's good to set expectations low, I guess, lol.

You can't build a PC with a 30 GHz CPU, but you can build one with 128GB of memory. Obviously there are limits to everything, but I can't see any for memory amounts anytime soon. Not that the next console would need that much, but at some point that's going to be a standard amount of memory.
 
Well, I was on NeoGAF prior to the release of the 8th gen, and that's what plenty of people expected. It's good to set expectations low, I guess, lol.

You can't build a PC with a 30 GHz CPU, but you can build one with 128GB of memory. Obviously there are limits to everything, but I can't see any for memory amounts anytime soon. Not that the next console would need that much, but at some point that's going to be a standard amount of memory.
Looking at DRAMeXchange 2017 contract prices for 128GB of cheap DDR4, the problem becomes evident. It's cheap commodity RAM and it's still $1000. How much RAM do you get for a reasonable console memory BOM of around $75? Memory prices are not dropping as fast as they did during previous generations. HBM failed to meet both cost and performance expectations. It's not looking good!
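As a quick sanity check on that budget point (both dollar figures below are the ones quoted above, not independently sourced):

```python
# Back-of-envelope check on the figures above (both are the poster's numbers).
price_128gb_ddr4 = 1000.0  # USD, quoted 2017 contract price for 128GB of DDR4
memory_budget = 75.0       # USD, the "reasonable console BOM" slice mentioned

usd_per_gb = price_128gb_ddr4 / 128
affordable_gb = memory_budget / usd_per_gb
print(f"~${usd_per_gb:.2f}/GB -> roughly {affordable_gb:.0f} GB "
      "for that budget, before any GDDR/HBM premium")
```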

Sony's GDDR5 procurement negotiation for the PS4 launch was not luck, as some are suggesting. Sony said in an interview, without giving details, that they were proud of their procurement team who negotiated the GDDR5 contracts; the rest is history.

My guess is... they convinced at least one memory supplier that giving them a lower price for twice the number of 4Gbit parts would net a more profitable contract in the long run; maybe it included tied future contracts for 8Gbit parts. And with Sony taking the lowest speed bin that GPU manufacturers don't want, it increases the number of the very profitable high-speed bins for other clients, immediately and for the many years of PS4 production. A stable volume contract for years lets the supplier dedicate production capacity. To get suppliers to drop the price below market demand, you need to make it profitable for both sides of the table.

Mid-gen had to stay at 8GB for a $399 console (or a year's delay and $499 for 12GB), with both around 350mm² of die area. I think there is a reasonable limit of 24GB and a 7nm SoC below 400mm² if we're thinking 2019.
 
Perhaps, although I'm struggling to imagine devs with such a large dataset who wouldn't optimize in subsequent titles, so it feels like a band-aid. I guess I'm more wondering what the silicon cost of implementing HBCC is, and how it might affect bus traffic in a shared environment - the potential for falling off a performance cliff.

I guess I'm also curious how Scorpio handles BC titles with the extra 4GB for caching.
I see it more as an aid, as opposed to a last-minute fix, and it may be extremely useful as a prototyping tool as well. All in all, I can't see it being a downside; provided it doesn't cause issues during a thread swap, I can see it handling awkward code scenarios that cause hiccups and are hard to track down. As for HBCC, it feels like something similar: some minor use cases where the streaming engine has exceeded your code and HBCC is there to help.

But on top of all of that, if a developer is falling behind schedule, I anticipate that the developer will be leaning on these types of technologies to handle poorer optimization.
 
I'm curious, has there ever been a generational transition where there hasn't been an increase in memory bus width?

Straightforward comparisons aren't always so easy, but if you look at the previous gen you see the 360 with a 128-bit bus plus a 512-bit (iirc) eDRAM bus, and the PS3 with 128 + 128, leading into the PS4 with 256-bit. Bus width has increased far more slowly than flops, GB/s, etc.

Looking further back, the DC had 64 + 64 + 32-bit (plus a fast embedded tile buffer), the OG Xbox was 128-bit and BW-limited at times, etc. Physical I/O on the chip and traces on the motherboard won't scale as easily as transistor density, unfortunately.

While I personally think a 256-bit bus with 16x GDDR6 chips would likely be a good, cost-effective solution, I wouldn't rule out a 384-bit bus.

I think we'd only see that if there was a plan for a rapid narrowing of the bus (which won't happen if you start with GDDR6) or if there was going to be a lower end model to support it.
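To put rough numbers on that trade-off, here's a minimal bandwidth sketch; the 14 Gbps per-pin figure is just an assumed early GDDR6 speed bin, not something from these posts:

```python
# Rough bandwidth comparison for the bus widths being discussed.
# 14 Gbps/pin is an assumed GDDR6 data rate, not a confirmed console spec.
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # bits per second -> bytes

for bus in (256, 384):
    print(f"{bus}-bit @ 14 Gbps/pin: {bandwidth_gb_s(bus, 14):.0f} GB/s")
# 256-bit -> 448 GB/s, 384-bit -> 672 GB/s
```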

Also, there's no fathomable justification in my mind for beginning a new generation with two entirely separate console hardware configurations. At a stretch, I could maybe see a case for a "lower" end APU which just fuses off CUs on the GPU to improve yields at launch.

Some possible justifications might be:
- Covering a range of market price points
- Having a halo product
- Replacing your previous, last-gen console with a faster, more attractive product at a lower and more profitable price point.

Otherwise, you're simply doubling up on engineering design costs, merely to achieve what? A cheaper, weaker "next-gen" console that only serves to lower the baseline for game development. I can't see why anyone would want that.

It's extremely unlikely you'd double up on engineering costs. Large parts of the chip would be the same, the customisation would be shared, the software engineering and API development would be shared, the dash/OS/UI would be too, the devkits would be shared, and licenses would be shared. In addition, marketing would be shared, as would developer relations, bug fixing, aspects of console design, peripherals, using the volume of memory and fab space bought to negotiate pricing, shared mobo components, and things like optical drives and HDDs...

If your CPU and supported features on the GPU are the same (shared architecture), then the baseline for game development is effectively unchanged. Resolution (framebuffer, textures, shadows, particles, etc.) is scalable with no impact on simulation complexity.
 
Looking at DRAMeXchange 2017 contract prices for 128GB of cheap DDR4, the problem becomes evident. It's cheap commodity RAM and it's still $1000. How much RAM do you get for a reasonable console memory BOM of around $75? Memory prices are not dropping as fast as they did during previous generations. HBM failed to meet both cost and performance expectations. It's not looking good!

Indeededly. It's another reason why I think spanning two price points for your platform might make sense. Some people will pay the extra $100 ~ $200 for more memory and faster uncompressed pixels. But most won't....

Mid-gen had to stay at 8GB for a $399 console (or a year's delay and $499 for 12GB), with both around 350mm² of die area. I think there is a reasonable limit of 24GB and a 7nm SoC below 400mm² if we're thinking 2019.

I'll spread my bet around the 16 ~ 32 GB range. If it's a single unit, perhaps 16 or 24 GB; if there are two performance profiles, then 32/16 GB or 24/12 GB.
 
Looking at DRAMeXchange 2017 contract prices for 128GB of cheap DDR4, the problem becomes evident.
In addition to that, storage isn't getting 8x faster to match 8x more RAM. We already have games taking minutes to load. Factor in download times and install times... 128 GB would be unusable. If you want to use that much storage, you'd need it to be persistent so the time is spent just once during 'installation'. So an n TB HDD, 128 GB of SSD/fast flash, and 24 GB of RAM, sort of thing.
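A minimal sketch of why the fill time dominates; the drive throughputs are assumed ballpark figures for an HDD and an NVMe SSD, not numbers from the thread:

```python
# Rough fill-time estimate: how long just streaming enough data from storage
# to populate a RAM pool of a given size would take. Drive speeds are assumed.
ram_sizes_gb = [8, 24, 128]
drives_gb_per_s = {"~100 MB/s HDD": 0.1, "~2 GB/s NVMe SSD": 2.0}

for label, rate in drives_gb_per_s.items():
    for ram in ram_sizes_gb:
        print(f"{ram:>3} GB over {label}: ~{ram / rate / 60:.1f} min")
```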
 
I don't think the storage is treated like a typical system drive, however, so a console using that particular feature may have to include another drive that the main system can use.
As I understand it, that distinction is simply the lack of a file system, as the SSG uses block IO no differently than memory. Not unlike a swap partition on Linux. So it should be possible to partition, although separate drives may be more economical.

It will be interesting to see just how much paging can reduce storage needs. Beyond some early AMD Vega demos, I haven't seen anyone test HBCC with limited capacity. There must be some way to do it, although it would be wasteful. 8GB of HBM2 with everything paged from disk may be sufficient for an upcoming console, and economical.
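For what it's worth, here's a toy sketch of that general idea (a small resident pool backed by a bigger store, pages pulled in on demand, least-recently-used pages evicted); it's only an illustration of the concept, not how AMD's HBCC actually works:

```python
from collections import OrderedDict

class PagedPool:
    """Toy demand-paged pool: a few resident pages backed by a larger store."""
    def __init__(self, resident_pages):
        self.capacity = resident_pages   # e.g. 8GB HBM2 divided by page size
        self.resident = OrderedDict()    # page id -> page data, LRU ordered

    def access(self, page_id, backing_store):
        if page_id in self.resident:             # hit: mark as recently used
            self.resident.move_to_end(page_id)
            return self.resident[page_id]
        if len(self.resident) >= self.capacity:  # miss: evict the LRU page
            self.resident.popitem(last=False)
        data = backing_store[page_id]            # fault it in from the store
        self.resident[page_id] = data
        return data

# Example: a 4-page resident pool over a 16-page backing store.
store = {i: f"page-{i}" for i in range(16)}
pool = PagedPool(resident_pages=4)
for page in [0, 1, 2, 3, 0, 7, 8, 0]:
    pool.access(page, store)
print(list(pool.resident))  # the four most recently used pages
```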
 
Two tiers at launch seems silly. Launch units are already for big-spender consumers. The more conscious ones wait it out, and the really cheap ones are still buying the old-gen models. There, all segments covered. It's better to launch a premium version a few years in, like Sony and MS are doing, spreading the hype through the gen and even getting some people to buy your machine twice.
 