Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Near Woking, Surrey. I'm currently on uncapped 12Mbps - it's a great deal with TalkTalk (it was Tiscali when I subscribed years ago). Since fibre was rolled out here maybe a year ago, we get junk mail from BT et al offering faster speeds. But if they cap them, that's ridiculous. How much do you pay for your Unlimited Infinity 2? I'm guessing a lot more than I'm paying for my broadband!
My Infinity 2 gets 78Mbps down / 20Mbps up and all in all I pay BT around £50 a month. But for all intents and purposes it's uncapped and totally unrestricted.
It's not the most stable connection, I must say.
 
My Infinity 2 gets 78Mbps down / 20Mbps up and all in all I pay BT around £50 a month.
Yep. The deal they were offering me was something like £10-15 per month (plus £16 per month line rental, but let's pretend that isn't part of the cost of ownership and put it in an aside). Which is about what I'm paying now (£25 pm). So effectively I'd pay more for faster yet gimped functionality. For an uncapped service I'd have to pay more than I pay now. So £25 a month more, say, for the option to run a download-only console and buy download titles at greater than disc prices. Or keep my current setup with fast enough broadband for my needs, lower running costs and cheaper game prices.

That might just be a really shitty deal from BT though and perhaps other providers aren't so restrictive at the same price, but I haven't seen anything to suggest as much. They all have to pay BT for the service.
 
If we are lucky, next generation we will have an APU with the power level of an AMD Fury X, a better feature set, and 32 GB of HBM2 or maybe 64 GB of HBM3. I am worried by the slowdown in process node shrinking.
I'd really hope that the next generation of consoles can have GPUs that outperform PC graphics cards from 2015. If they could match something approximating the PC cards of the year they're released, I'd be an awful lot happier.

Edit: I don't see Fury X specs on a 2019 console as 'lucky'.
 
I'd really hope that the next generation of consoles can have GPUs that outperform PC graphics cards from 2015. If they could match something approximating the PC cards of the year they're released, I'd be an awful lot happier.

Edit: I don't see Fury X specs on a 2019 console as 'lucky'.

GPUs have been stuck on the same 28nm process node since December 2011. That's more than three and a half years; maybe one GPU will be released on a new node in the last quarter of 2015, which would make this process node cycle nearly four years long, or it may only change next year. The next batch of consoles needs an APU with the same die size as the PS4's, and that will be a problem if process node shrinks continue to take between 4 and 5 years, or even worse...

Another problem is the AAA development cycle: last gen it was 2 years, this gen it seems to be 3 years...
 
Indeed, a low cost design may have two stacks of HBM rather than four.

But the question may be: who wants a 200W or 300W console? We're almost at the point of considering liquid cooling, or needing to install room A/C to play in summer.

Besides, I believe 2019 is too early.
Perhaps 2017 for a 16nm FinFET shrink of the current consoles, then a new console on a 10nm process in 2020, or even later. Diminishing returns are a big problem.
 
Last gen, the 360 and even more so the PS3 showed it is not economically viable for a console manufacturer to use a top-class GPU. I expect a custom mid-range GPU paired with a modest CPU, like in the PS4, but with much better bandwidth via HBM2.
 
Perhaps "big" CPU cores in the form of AMD's Zen (or Zen v2, v3) if they're low power enough. Better CPU might be half of the show, if only for going from a complex 900p30 game to a 1080p60 one.

Today's laptop CPU tech may be an indication that full-fat CPU cores are good for lowish power use: Haswell, Broadwell etc., even AMD Carrizo.
The current consoles got Jaguar cores because that was what was available and suitable (with TSMC as the first foundry used, even).
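
As a rough feel for why the CPU is "half of the show" in a jump like that, here's a minimal back-of-the-envelope sketch (my assumptions, not from any post here: 900p means 1600x900, GPU cost scales with pixels drawn per second, CPU cost scales with frame rate):

```python
# Hypothetical scaling math for "900p30 -> 1080p60".
px_900p = 1600 * 900
px_1080p = 1920 * 1080

gpu_factor = (px_1080p * 60) / (px_900p * 30)  # pixels drawn per second
cpu_factor = 60 / 30                           # simulation ticks per second

print(f"GPU pixel throughput factor: {gpu_factor:.2f}x")  # ~2.88x
print(f"CPU frame-rate factor:       {cpu_factor:.0f}x")  # 2x
```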
 
Last edited:
Manufacturers and developers wanted unified memory and low latency between the CPU and GPU. The only reasonable choice was to integrate the CPU and the GPU on the same die. Both next-gen consoles have very large dies (exceeding those of much more expensive chips of that era). It would have been difficult to produce bigger chips (at a sane price per chip).

HBM(2) is interesting because the memory controller is much smaller than a GDDR5 controller. It also solves the bandwidth issue and improves perf/watt compared to other off-die memories, bringing most of the advantages of ESRAM (without wasting a big part of the die to hold the fast memory). This allows you to put more computational units on the same chip. Too bad HBM was not available 2 years ago.
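
For a sense of scale on the two-stacks-versus-four question, a minimal bandwidth sketch (assuming first-generation HBM figures of a 1024-bit interface per stack at roughly 1 Gbps per pin; purely illustrative):

```python
# Quick bandwidth comparison with illustrative numbers:
# first-gen HBM is 1024 bits per stack at ~1 Gbps/pin, i.e. ~128 GB/s per stack.
def hbm1_bw_gbs(stacks, gbps_per_pin=1.0, bits_per_stack=1024):
    return stacks * bits_per_stack * gbps_per_pin / 8

def gddr5_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(hbm1_bw_gbs(4))          # 512.0 GB/s - Fury X style, four stacks
print(hbm1_bw_gbs(2))          # 256.0 GB/s - a cheaper two-stack design
print(gddr5_bw_gbs(256, 5.5))  # 176.0 GB/s - PS4's 256-bit GDDR5, for reference
```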
I see the positives and performance benefits of a two-chip system, with $280 of BOM going to the CPU and GPU chips and an interconnect with lower latency than PC PCIe (easily attainable given the chips sit on the same circuit board, with shorter traces and closer physical distances), as far outweighing the performance benefits that even fast 30GB/s and 20GB/s interconnects between those circa-2013 $110 and $100 chips could provide. It also wouldn't have the contention issues unified memory has.

The fact that the fabs now offer larger dies for a lower price would reflect positively on a $280 BOM for a discrete CPU and GPU.

A 3GB GDDR5 / 6GB DDR3 setup would allow them to go with 2Gb 6Gbps modules for the GPU as opposed to the 4Gb 5.5Gbps modules, and thus have 192GB/s, have no contention, and have total system memory prices around the same as the PS4's.

PS4 8GB GDDR5 is $88 = $33 for 3GB (to make things difficult, let's say a $10 premium for the 6.0Gbps modules)
Xbox One 8GB DDR3-2133 modules is $60 = $45 for 6GB
Overall 3/6 memory BOM = $88
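
To make that arithmetic explicit, a quick sketch using the price estimates above (the 256-bit bus width is my assumption, picked because it reproduces the 192GB/s figure):

```python
# Re-running the poster's numbers (their price estimates, not mine).
gddr5_price_8gb = 88        # quoted PS4 8GB GDDR5 BOM
ddr3_price_8gb = 60         # quoted Xbox One 8GB DDR3-2133 BOM

gddr5_3gb = gddr5_price_8gb / 8 * 3      # $33.00
premium_6gbps = 10                       # assumed premium for 6.0Gbps parts
ddr3_6gb = ddr3_price_8gb / 8 * 6        # $45.00

total = gddr5_3gb + premium_6gbps + ddr3_6gb
print(f"3GB GDDR5 + 6GB DDR3 BOM: ${total:.0f}")   # $88, same as PS4's 8GB GDDR5

bus_bits, gbps_per_pin = 256, 6.0        # assumed GDDR5 bus width and data rate
print(f"GPU bandwidth: {bus_bits * gbps_per_pin / 8:.0f} GB/s")  # 192 GB/s
```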

If this $280 discrete setup is a more powerful CPU/GPU pair connected by a slower interconnect, its limitations are very much like those of a PC gaming setup. So isn't it likely that there won't be many AAA games (possibly none at all) that are impossible to port in full to PC, and hence impossible to design for a $280 BOM console with a slow CPU/GPU interconnect?
 
Contention has its issues, but what you get for that issue is the ability for any processing element to access any piece of data without copying back and forth. That win is far bigger than the loss from the grief of dealing with contention. Of course devs can and do deal with this on PC all the time, but if you were to design from a clean sheet (as we do with consoles) then you would avoid it from the start. PS3 suffered very badly early on because it was a 50/50 GPU/CPU RAM split, until Sony started improving their API and sharing 'lessons learnt' style presentations on their own titles.
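
To put a rough number on the copy cost, a back-of-the-envelope sketch (assuming around 16 GB/s of usable PCIe 3.0 x16 bandwidth and a 60 fps frame budget; illustrative only):

```python
# How much of a frame is spent just copying data that a unified-memory
# console never has to move (assumed link speed and frame budget).
def copy_ms(megabytes, link_gb_per_s=16.0):
    return megabytes / 1024 / link_gb_per_s * 1000  # transfer time in ms

frame_budget_ms = 1000 / 60
for mb in (32, 128, 512):
    t = copy_ms(mb)
    print(f"{mb:4d} MB copy: {t:6.2f} ms "
          f"({t / frame_budget_ms * 100:5.1f}% of a 60 fps frame)")
```

Even a fairly modest per-frame copy eats a visible slice of the 16.7 ms budget, which is the part the unified pool gives you for free.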

A 3/6 split is further hampered by the fact that it would represent a downgrade on what we have today in PS4 for VRAM. Most slides I've seen from prolonging tools suggest that ~4GB of that is texture data today. Reducing that would force the next gen to look decidedly less next gen than today's consoles.

Edit: I wanted to type 'profiling' but my phone went with 'prolonging' and it's just too perfect, 'We'll ship just as soon as I resolve this edge case I can see in the profiler here <days later> it's all on fire now where are the backup tapes?'
 
Memory profiling apps are a poor way to measure memory requirements as many games will liberally cache more textures than needed.

Have you not seen example after example of multiplatform games where the PC version can employ superior textures?

In practically no circumstances do the PS4/Xbox One have superior texturing to whatever a dedicated 3GB of GPU memory could conceivably support: at equivalent texture settings, top-end 3GB video cards keep up perfectly fine, and if the developers so chose they could surpass the PS4/Xbox One in texture quality/size. In fact, games like Titanfall were given relatively pointless "ultra" texture settings which gave PC gamers uncompressed textures and shot up memory usage, and it still stayed under 3GB of VRAM. 4GB is not a prerequisite for a video card to use the textures seen in the PS4/Xbox One version of any modern multiplatform game.

Texture compression is a universal practice in the industry. The limitation on texture quality in current console games is memory bandwidth. That is why you see PC gamers with high-end 3GB video cards employing superior textures/mipmaps or anisotropic filtering compared to the PS4/Xbox One versions of games like BF4, Tomb Raider, Watch Dogs, The Witcher 3, Thief and GTA5.
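
As a rough sense of what compression buys, a small sketch using standard block-compression ratios (the 2048x2048 texture size is just an example I picked):

```python
# Why uncompressed "ultra" textures balloon VRAM use compared to
# standard block-compression formats.
def texture_mb(width, height, bytes_per_texel, with_mips=True):
    base = width * height * bytes_per_texel
    if with_mips:
        base *= 4 / 3                      # a full mip chain adds about one third
    return base / 2**20

for name, bpt in [("RGBA8 uncompressed", 4.0),
                  ("BC3/DXT5 (4:1)",     1.0),
                  ("BC1/DXT1 (8:1)",     0.5)]:
    print(f"2048x2048 {name}: {texture_mb(2048, 2048, bpt):.1f} MB")
# RGBA8 ~21.3 MB, BC3 ~5.3 MB, BC1 ~2.7 MB per texture (with mips)
```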


In Tomb Raider: Definitive Edition, although the Lara Croft model is higher quality with subsurface scattering on the PS4/Xbox One, I've seen several screenshots with reduced environment textures compared to the earlier-released PC version.
[gif: environment texture comparison, PS4/Xbox One vs PC]


Now, there may in future be situations where the 3GB becomes a limitation compared to the consoles' unified memory; for example, games with lots of texture variety and no loading screens might end up needing more loading screens. But overall, 3GB of discrete GDDR5 at 192GB/s would allow for better texturing of scenes. Another reason we may see 3GB not be sufficient on PC is that studio and publisher heads may decide low PC sales dictate fewer man-hours for porting the PC version of a particular game; however, if the consoles themselves had this memory system, that obviously would not be the case.
 
In Tomb Raider: Definitive Edition, although the Lara Croft model is higher quality with subsurface scattering on the PS4/Xbox One
In fact Lara's mesh is the same, but the overall model is worse on the PS4/Xbox One due to the lack of any tessellation, hence the good old triangular boobs on PS4/Xbox One. The same can be said about your gif above.

I am quite annoyed by the direction of modern graphics. Instead of having a moderate amount of highly detailed models near the camera, we are getting thousands of 10-poly meshes for all kinds of tiny invisible shit in the background. Some call this "100+ mile draw distances"; I call it console BS. Why do we have all this tiny flickering, shimmering, aliased pixel mess which requires supersampling just to look clean? Small objects in the background are not attention attractors, so why don't developers move the detail closer to the camera? Because consoles can't handle anything other than thousands of draw calls for tons of small pieces of unnecessary geometry, and because consoles have 8GB of memory which needs to be wasted on huge sandbox games?
 
I am pretty sure they could get tessellation on PS4 if they aimed for a 30 fps lock. That would mean visual disparity between the platforms, which is a huge taboo this gen for PS4 and X1. Overall I am happy they went the 60 fps route for PS4 though; it's a looker and runs great. I can't even sustain a solid 60 on PC with a GTX 970 which is almost 3 times as powerful. Would you prefer a capped 30 fps Tomb Raider with tessellation and better quality environments? I believe these are things you can only notice when comparing shots between the different versions, whereas 60 fps is something you notice immediately.
 
I can't even sustain a solid 60 on PC with a GTX 970 which is almost 3 times as powerful.

If you're running with supersampling turned on maybe. Otherwise a 970 should be able to max this game out at 60fps with ease. I can run it maxed with FXAA at around 40-50fps on a GTX670.
 
In close-ups during cutscenes, when I have TressFX turned on, it's still dropping frames with no SSAA, just FXAA. That's on a 970 overclocked to hell with a flashed custom BIOS.
 
I'm not sure the odd close-up in a cutscene is enough to justify claiming the game is unable to maintain 60fps. Especially as you describe the PS4 version as 60fps, when from what I've heard it rarely actually reaches that frame rate.
 
I said it is 60 fps, not stable 60, nor capped 60. I am just saying that even on a system at least 3 times as powerful I cannot sustain a fully locked 60. So what they achieved on PS4 is still impressive given the hardware.
 