Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Who cares what's marketable? Your average attach rate doesn't jibe well with employing super-sized HDDs or SSDs as a core component of a console.

True. My point was more that any console maker will go with a reasonable capacity that has a good trade-off between cost and size. Yes to a sufficiently large mechanical hard drive, no to a more expensive, smaller but quicker SSD. What is marketable matters because, in the end, the console maker will want to hit the market with a console in the right price range (I assume $399) as cheaply as possible. They'd rather invest in more silicon for more realtime performance than in a quicker drive that will only benefit loading times. That's my guess anyway. My bet is they would sooner include some form of cheap flash memory as a buffer to speed up slow I/O off the hard drive.

With the likely increase in software size (50GB?), I would expect them to target 1TB drives with the launch of the next consoles. I don't think SSDs at 1TB will be cheap enough (compared to mechanical drives) to make them a viable prospect. They'd rather save that money for profitability (and less risk) and go for some flash + mechanical hard drive solution. Think of it: a flash buffer would make it an SSHD hybrid solution. A game designer could declare parts of the code as long-term data that sits and stays on that flash buffer, while other, less frequently used data is fetched more randomly off the HDD.
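Purely to illustrate the idea, here's a sketch of what such a hint layer could look like from the game's side. None of these names are a real console API; it's just hypothetical Python for pinning hot data to the flash part of a hybrid drive:

[code]
from enum import Enum

class StorageTier(Enum):
    FLASH = "flash"   # small, fast buffer on the hybrid drive
    HDD = "hdd"       # large, slow mechanical platter

class AssetHints:
    """Hypothetical hint layer a game could use to steer the SSHD cache."""
    def __init__(self):
        self.pinned = {}

    def pin(self, asset_path: str, tier: StorageTier):
        # Ask the I/O layer to keep this asset resident on the given tier.
        self.pinned[asset_path] = tier

hints = AssetHints()
# Core engine data and the frequently revisited hub stay on flash...
hints.pin("engine/shaders.pak", StorageTier.FLASH)
hints.pin("levels/hub_world.pak", StorageTier.FLASH)
# ...while rarely touched content streams off the mechanical drive as usual.
hints.pin("cutscenes/intro.bk2", StorageTier.HDD)
[/code]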
 
My prediction is that the consoles will have an SoC where everything is a co-processor/accelerator, much like we have now, but even the rendering part of the GPU will become its own DSP like TrueAudio & Shape.
 
We should go back to vertex shaders and pixel shaders.
 
No lol. I'm saying it will be an SoC with a CPU, GPU compute units, TrueAudio & other accelerators like a ray tracing co-processor.

Ok, because your post above says, "but even the rendering part of the GPU will become its own DSP like TrueAudio & Shape." Probably not written how you meant it.
 
I don't know if that is the route next-gen console SoCs will take or not, but I would look at next year's high-end GPUs, their render backends / ROPs, whether they are different and how many there are. Currently, the Fury X has 64 and the 980 Ti has 96. The current families of GPU have color compression as well, saving some bandwidth. But neither Fury X nor GM200 is well suited to handle 4K at 60fps with graphics options in current games maxed out, as far as single-GPU/card systems go. That may change with AMD Greenland and Nvidia GP100; I'm guessing they'll have 128 ROPs, ones that are improved in whatever ways made sense at the time they were engineered. Then the question becomes: can a high-end GPU that was designed during 2012-15 and released H2 2016 become a high-end APU in 2017, and can that be semi-custom'ed into a console APU for a $400 consumer machine by 2019?
 
I was wondering about something related: can console manufacturers come back from the "general purpose everything", "one size fits all" approach and move to more heterogeneous hardware?
I think of the old running argument Nick, from Softshader, used to have with other members about software rendering; I also think of the point of view Tim Sweeney was defending on the matter years ago. There is one point Nick was right about: the unification of the graphics pipeline did not come for free, even if the cost was well hidden by the steady process improvements of the time.
* I would state the same thing with regard to Intel CPUs: Intel pushes some really wide SIMD units and it has a huge impact on their designs; those things are huge. It may make sense for Xeon-class CPUs, as you can only add so many cores, but for the consumer offering it makes much less sense. Look at the Core i3: I wonder who would not trade those 2 cores with massive AVX2 SIMD units for a three-core design with narrower SIMD.
* Another example of things going too far is GPGPU. It is nice, but we are actually finding out that FP32, for example, is not really mandatory for pixel calculations (IIRC some of sebbbi's and other members' posts). It may not cost much, but still. Manufacturers are aware of the change in the technological environment: years ago they started to push FP64 into their designs, but as Moore's law started to show its age and power concerns grew, manufacturers like Nvidia quit providing the option on their mainstream GPUs. Had Moore's law continued to provide cheaper, faster, more energy-efficient silicon, they would not have.
* Focusing on the GPU a little longer, it is interesting to notice that ARM still thinks it is worth it to keep its Utgard architecture around. It is also interesting to look at the Tegra 4 SoC and the performance it delivered against its silicon footprint. I remember that not so many months ago I was told on these forums that we would see a 14/16nm GPU this year, it was a given. Well, we have to bow to the evidence: it did not happen. Actually the only devices that used advanced silicon (be it 20nm or 14/16nm) were mobile SoCs. It is definitely coming our way, but people have to face the evidence of the disruptive impact of the end of Moore's law (NB: not the end of the silicon roadmap). Long story short, I would bet ARM thinks those Mali-470s are the last renditions of their Utgard architecture, yet it would not surprise me (the odds are lower, though, so I might not bet... or bet less) if they have to reconsider their point of view a couple more times.
* Back to CPUs: SMP has been the design of choice for many years; it is the easiest, but it is also costly. Asymmetric multiprocessing allows for more processing within the same power profile and silicon footprint. Soon, maybe, it will no longer be seen as just a power optimization but as a way to get the most out of a slab of silicon whose price is no longer going down (it is slightly going up, actually).
* Not exactly the same argument, but the breaking of Moore's law also affects storage. Stacking is nice but it will not resurrect Moore's law, and HDD sizes no longer grow that much. I suspect it is the same for optical media.

It might be time to consider less general-purpose units than the ones usually found in PCs and the last round of consoles. We may see units already at play in PC/console architectures take on greater importance; it seems to me that mobile computing is pushing technology in that direction, as power and cost are great drivers for efficiency. Actually I expect nothing really new, no physics or dedicated compute units, but the specialized units found in mobile SoCs for image processing and for compression/decompression could definitely prove useful. I wish we could see a sound processing unit, but the market does not seem to care, so without volume...
Overall, next-gen systems will be asked to do more with a relatively tiny increase in available resources, and I believe it is doable. I expect the next round of consoles to have both a smaller silicon budget and a smaller power budget, and it would not surprise me if the amount of RAM does not increase.
It would be interesting to have image processing units do preprocessing of textures ahead of the GPU; overall it could be interesting to give up some quality and save storage (HDD, media) at the cost of extra computation done efficiently.

I read some posts about the hypothetical disappearance of ROPs and, whereas I sort of get the point, I do not believe in it; again, I believe mobile designs prefigure some aspects of what we will see in console/PC designs. I wonder if it would not make more sense to have proper GPU cores, i.e. having the ROPs and a small number of what are now called shader cores packed way more tightly together, moving to more "self-reliant" building blocks. I have this feeling that it should help with the design of datapaths, making the most of data locality, etc. Speaking of datapaths, it would be nice if GPU manufacturers went back to designing for FP16 instead of FP32.
It is perfectly fine to be able to process FP32 at half speed, but that is altogether different from designing for FP32. Nvidia designed its GPUs around warps of 32 FP32 elements (pardon the sloppy wording), AMD around wavefronts of 64 FP32 elements. Even if you can process FP16 at twice the speed, it has overhead on the design: you have twice the elements in flight, which may have a small impact on many things. Speaking of Nvidia, if 32 elements is the right width for the SIMD, then designing for FP16 their SIMDs would be half as wide as they are now; the register files and in turn the datapaths could be half what they are, along with bandwidth/memory usage, etc.
People will say "and ultimately it pushes half the FLOPS", but the thing is the result on screen will have nothing to do with, say, halving the resolution; it is a lot more subtle, and for the few calculations that need FP32 you can still do them at half speed, which on a modern GPU is still fast. It is a personal belief from an outsider to the industry, but with the end of Moore's law manufacturers might have to reconsider what was considered "free", as it may in fact amount to a lot of silicon unnecessary for the main (by a giant margin) usage of a modern GPU: 3D graphics computations.
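To put a rough number on the bandwidth side of that argument, here's a back-of-envelope sketch (my own illustrative numbers, not anything from a real design): the bytes written per frame for a hypothetical four-target RGBA G-buffer at 1080p, stored per channel as FP32 versus FP16.

[code]
# Back-of-envelope only: halving the per-element width halves the buffer
# footprint and the bandwidth needed to fill it.
width, height = 1920, 1080
channels = 4            # RGBA
render_targets = 4      # hypothetical G-buffer layout
pixels = width * height

fp32_bytes = pixels * channels * render_targets * 4   # 4 bytes per FP32 channel
fp16_bytes = pixels * channels * render_targets * 2   # 2 bytes per FP16 channel

print(f"FP32 G-buffer: {fp32_bytes / 2**20:.1f} MiB per frame")   # ~126.6 MiB
print(f"FP16 G-buffer: {fp16_bytes / 2**20:.1f} MiB per frame")   # ~63.3 MiB
[/code]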
 
Maybe of interest for next gen consoles: Oculus has come out with a recommended PC spec they say provides a good VR experience.

http://www.neogaf.com/forum/showthread.php?t=1166048

NVIDIA GTX 970 / AMD 290 equivalent or greater
Intel i5-4590 equivalent or greater
8GB+ RAM
Compatible HDMI 1.3 video output
2x USB 3.0 ports
Windows 7 SP1 or newer

Not too crazy on the flops front. R9 290 is 4.84 teraflops according to Google. We are already at 8GB RAM in theory... Given 1.3 and 1.8 TF GPUs, ~5 TF doesn't seem a problem at all for a theoretical next gen.
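For what it's worth, that figure checks out against the public specs (2560 stream processors at up to ~947 MHz, counting a fused multiply-add as two FLOPs):

[code]
# Peak FP32 throughput for the R9 290, from its published specs.
stream_processors = 2560
boost_clock_ghz = 0.947
flops_per_alu_per_clock = 2   # one FMA = two floating-point operations

tflops = stream_processors * boost_clock_ghz * flops_per_alu_per_clock / 1000
print(f"R9 290 peak: ~{tflops:.2f} TFLOPS")   # ~4.85 TFLOPS
[/code]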

Of course there'll be no "magic threshold" for good VR, it'll be a sliding incremental scale that increases with time like the rest of PC gaming.
 
Developers will use whatever Oculus tells them to test against during development, and it's arbitrary in the sense that it's not about what is needed for VR. It could have been a 950 or a 980 or whatever. With consoles this point is moot, as the target platform is fixed. But it will have an impact if there is a big discrepancy between PC minimum VR specs and consoles (difficult porting). So far there isn't; the minimum specs are said to be fixed for the lifetime of the Rift, and ports seem to happen with ease. Once we move to next gen, VR developers have a major incentive to set the minimum specs to match whatever the consoles will be then, just like they did now. Third-party developers want the widest market and the easiest porting.
 
A light version of Nvidia's PX2 seems somewhat doable as a next-gen console.

http://wccftech.com/nvidia-pascal-gpu-drive-px-2/

Take away the low-power cores, hope for some power optimization on a 10nm manufacturing process, add console-specific stuff and tweak here and there. And of course integrate all those CPU and GPU cores into a single chip instead of the current multi-chip configuration.

I doubt something much faster is possible if next-gen consoles come in at a max of around 150W for the whole system and nothing unexpected happens with manufacturing processes. PX2 is a great glimpse at what is possible when pushing the power envelope (250W).
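As a very rough back-of-envelope, taking Nvidia's announced ~8 TFLOPS FP32 / 250W figures for Drive PX 2 and (naively) assuming performance scales linearly with power, a 150W system budget lands right around the ~5 TF mark discussed above:

[code]
# Naive scaling sketch only; real perf/W does not scale linearly, and the
# 8 TFLOPS / 250W numbers are Nvidia's announced figures, not measurements.
px2_tflops = 8.0
px2_watts = 250.0
console_watts = 150.0

scaled = px2_tflops * (console_watts / px2_watts)
print(f"~{scaled:.1f} TFLOPS in a {console_watts:.0f}W envelope")   # ~4.8 TFLOPS
[/code]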
 
Nothing nVidia makes is doable as a next-gen console. They only take high-margin contracts.

nVidia has made GPUs for two consoles, plus they have their own Shield console. Right now, their 200€ Shield K1 tablet is definitely not getting high margins. I doubt that Tegra 3, which was in a huge number of tablets, was a high-margin SoC.

nVidia will do anything that makes them money, as they should.
 
Not making money on their own-brand tablet isn't the same thing as not making money on a contract, as the tablet can be a loss leader to promote hardware adoption. I wouldn't rule them out of a new console though, but the prevailing theory is that, like Intel, nVidia doesn't need to fiddle about with low-margin contracts and wouldn't offer the console companies a good deal. But even if so, there's enough margin shown in the PS4's success that it'd make sense for a console company to use a better class of GPU at added cost if it'll mean more sales by having a competitive advantage.
 
Yeah. Both nVidia console options in the past weren't great partnerships, while AMD's been pretty great for consoles.
 