Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

The 7970 was high-end and had over twice the power budget of the entire console.
The more mainstream 7770-7850 range came a little later, and is a much closer match to the consoles.

The stagnation in process tech meant that in this gen that it took that long for manufacturing to become cost-effective enough to go from a discrete card with its own margins to a more cost-sensitive component of an entire system. Due to the slowing economic benefit, not much changed on the power front, so the high end is out of reach for far longer.
 

The 7850 and 7870 were available in March 2012. The next console's GPU architecture and feature set will be better than a 2016 GPU's, but in raw power it will probably be comparable to a mid-range 2016 GPU if the new standard for process node shrinks is four and a half to five years...

The next process node will arrive in 2020 or 2021. For a launch at 14nm or 16nm in 2019 or 2020, it will be impossible to build an economically viable console more than 2 to 3 times more powerful than a PS4 if the slowdown in process node shrinking continues...

Something between a mid-range and high-end GPU of 2016, just talking about raw power...
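
A rough sanity check of that "2 to 3 times a PS4" figure, as a minimal sketch: the PS4 number is public, but the density and clock gains below are my own assumptions for an idealised 28nm to 14/16nm shrink at constant die size and power.

```python
# Back-of-the-envelope check of the "2 to 3 times a PS4" claim above.
# Assumptions (mine): same die area and power budget as the PS4, a modest
# clock bump, and 28nm -> 14/16nm FinFET worth roughly one density doubling
# (those nodes largely reuse 20nm-class metal pitches).

PS4_TFLOPS = 1.84        # 18 CUs * 64 lanes * 2 ops/clock * 0.8 GHz
DENSITY_GAIN = 2.0       # ~one full node step of transistors per mm^2
CLOCK_GAIN = 1.2         # assumed FinFET clock/power headroom

naive = PS4_TFLOPS * DENSITY_GAIN * CLOCK_GAIN
print(f"Naive 14/16nm console GPU: ~{naive:.1f} TFLOPS "
      f"({naive / PS4_TFLOPS:.1f}x a PS4)")
# -> ~4.4 TFLOPS on paper, i.e. landing in the 2-3x range once
#    real-world losses are accounted for.
```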
 
You'd want some LOD filtering, but yes, it does increase some overhead. There's a quote in my above Ars link from a researcher mentioning "the significant overhead of foveated rendering."

I was wondering about non-rectilinear rendering too, but I don't think that's supported on the GPU, though I'm far from knowledgeable there! As I understand it, everything is transformed based on a standard flat camera projection. This is demonstrated by super-wide game FOVs lacking barrel distortion and being unable to render fish-eye. Being able to render to the lens's distortion would be the ideal. That leaves two areas for future VR development: efficient LOD rendering for foveated rendering, so the one scene is processed at the required detail for each pixel based on where the eye is looking (possibly doable now in software in the shaders using existing LOD techniques), and rasterisation that can target spherical optics. All of which is easily solved by transitioning to realtime raytracing. :runaway:
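
To make the "existing LOD techniques in the shaders" idea concrete, here is a minimal sketch (my own illustration, not how any shipping headset or engine does it) of biasing texture mip level by a pixel's distance from the gaze point, so peripheral pixels sample coarser data:

```python
import math

def foveated_lod_bias(pixel_uv, gaze_uv, fovea_radius=0.1, max_bias=3.0):
    """Extra mip levels to add for a pixel, based on distance from the gaze point.

    pixel_uv, gaze_uv: screen positions in [0,1]^2.
    fovea_radius: radius (in UV units) kept at full detail.
    max_bias: coarsest extra mip bias applied at the far periphery.
    In a real renderer this would run per pixel in a shader and feed the bias
    into the texture sampler (and/or drive geometry LOD selection).
    """
    dx = pixel_uv[0] - gaze_uv[0]
    dy = pixel_uv[1] - gaze_uv[1]
    eccentricity = math.hypot(dx, dy)
    if eccentricity <= fovea_radius:
        return 0.0                       # full detail inside the fovea
    # Ramp the bias up smoothly outside the fovea.
    t = min((eccentricity - fovea_radius) / (0.5 - fovea_radius), 1.0)
    return max_bias * t

# Example: gaze at screen centre, one foveal and one peripheral pixel.
print(foveated_lod_bias((0.52, 0.50), (0.5, 0.5)))   # 0.0  (sharp)
print(foveated_lod_bias((0.95, 0.90), (0.5, 0.5)))   # 3.0  (coarse)
```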


*cough*

From Anandtech:

[two slide images]
 
So if this requires specific support in the GPU architecture, does AMD have something similar?
 
Heh... this is "fixed" foveated rendering. :)

It's great how many new techniques were introduced in the past 2 years to get VR rendering as fast as possible.
 
The 7850 and 7870 were available in March 2012. The next console's GPU architecture and feature set will be better than a 2016 GPU's, but in raw power it will probably be comparable to a mid-range 2016 GPU if the new standard for process node shrinks is four and a half to five years...

The next process node will arrive in 2020 or 2021. For a launch at 14nm or 16nm in 2019 or 2020, it will be impossible to build an economically viable console more than 2 to 3 times more powerful than a PS4 if the slowdown in process node shrinking continues...

Something between a mid-range and high-end GPU of 2016, just talking about raw power...

The 28nm planar node was (is) long-lived for a number of technical reasons. However, the next couple of steps look set to be rather quick. Samsung (GF) 14nm is already in production, as is TSMC 16nmFF, and 16nmFF+ is ramping for full volume production as I write this. Now, these two processes are unusual in how much they share with the previous (and not very widely utilised) 20nm planar nodes. If you want to be uncharitable, you could say that TSMC and Samsung have taken 3-3.5 years to move to 20nm with FinFET transistors, but there seem to have been some additional improvements between 20nm planar and 14nm LPP and 16nmFF+ respectively, beyond just the transistor structure. Furthermore, both foundries look very committed to bringing out the next full node lithographic step at 10nmFF at the end of 2016, which actually seems doable, although the number of process steps (and thus cost/wafer) will increase quite a bit. It may well take a while before 10nm is attractive cost-wise to GPU manufacturers.
The next step after 10nm, 7nm, is trickier. Both TSMC and Intel seem to be building production capability based on EUV. Extreme Ultraviolet (EUV) lithography uses light of a shorter wavelength (13.5 nanometers) than the current standard in volume production of the most advanced chips, immersion lithography (193 nanometers). EUV can thus image smaller features without the need for multiple exposures, and allows semiconductor device makers to simplify the manufacturing process, exposing a critical layer of a chip in a single step. This can actually decrease cost/wafer, but it requires that the output of the light sources be increased, to keep exposure times short and thus maintain production rate in the lines. TSMC and Intel obviously think this will be achieved in the necessary time span, but it is not there yet. Also, FinFET may have played out its role, and new transistor designs will probably take its place, which may or may not be a smooth process.
So at 7nm, the crystal balls get a bit murky. 10nm simply requires "more of the same" (hah!), while 7nm holds potential both for completely new cans of worms and for smoother production with EUV.
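
As a rough illustration of why EUV can take a critical layer back to a single exposure, here is the standard Rayleigh resolution estimate; the k1 and NA values are ballpark assumptions on my part, not foundry numbers:

```python
def min_half_pitch(wavelength_nm, numerical_aperture, k1):
    """Rayleigh criterion: smallest printable half-pitch for a single exposure."""
    return k1 * wavelength_nm / numerical_aperture

# 193nm immersion (ArF, water immersion) vs EUV, with ballpark k1/NA values.
argf_immersion = min_half_pitch(193.0, 1.35, 0.30)   # ~43 nm half-pitch
euv            = min_half_pitch(13.5,  0.33, 0.40)   # ~16 nm half-pitch

print(f"193i single exposure : ~{argf_immersion:.0f} nm half-pitch")
print(f"EUV  single exposure : ~{euv:.0f} nm half-pitch")
# Below roughly 40nm half-pitch, 193i needs double/quadruple patterning
# (more mask and process steps per layer) - the cost/wafer pressure mentioned
# above - whereas EUV can print the same layer in one pass.
```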

Also, it bears mentioning that node designations are strongly related to marketing. A node change these days may not bring the overall benefits of a node change 20 years ago. So when, for instance, TSMC says they will start 7nm production in 2017, grains of salt should be distributed liberally all around when it comes to assuming what that actually means from a purely technical point of view, as well as the date.
 
So it seems to me that, given the timeline most of us expect for next-gen consoles from Sony and Microsoft, 10nm would be a given.
 
There was some disappointment about the peak performance scaling with this gen, which had three node transitions from the prior gen's launch.
90 to 65 to 45 to 32, or the partial nodes a notch below each. The scaling was not ideal, but I'm handwaving it to three distinct transitions.
A next gen launch at 10nm is two distinct transitions from 28nm, assuming 10nm can make a full transition. The hybrid nodes represent a clean jump in density and power efficiency over 28nm, maybe a bit better than an ideal node jump since they have had the time and parts of two transitions plugged into them.

Speculation would have to focus on where the designs and architectures could differentiate themselves performance-wise when there may be only a 2-4x improvement if everything was scaled naively from the current gen.
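
A minimal version of that naive scaling estimate, assuming each full node transition doubles transistors per unit area and that die size, clocks and power budget all stay put (which is optimistic):

```python
def naive_scaling(base_tflops, full_node_transitions, per_node_gain=2.0):
    """Throughput if every node step were spent purely on 'more of the same'."""
    return base_tflops * per_node_gain ** full_node_transitions

PS4_TFLOPS = 1.84
# Last gen had ~3 transitions (90 -> 65 -> 45 -> 32 class); 28nm -> 10nm is ~2.
print(naive_scaling(PS4_TFLOPS, 2))   # ~7.4 TFLOPS, i.e. ~4x a PS4
# Real clocks, power limits and imperfect density scaling pull that back
# toward the 2-4x range mentioned above.
```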
 
Is there any hope that monolithic 3D-designs will be possible soon?
Possible? Sure. Widely applicable and economically justifiable are the real questions though, and there the prospects are worse. Many don't believe that monolithic 3D structures will truly break through until traditional scaling has really hit the wall. (3D is an example of a technology that could get a "new node" designation even though it doesn't really have anything to do with feature size.)
There is a great number of technologies under exploration for 7nm and below - gate-all-around, tunnel-FETs, III-V materials, vertical nanowires et cetera are all terms you can expect to hear getting tossed around along with parasitic capacitance as a big issue.

It may be that 10nm technology gets pushed one further step down the nodal ladder. Intel's Mark Bohr certainly made a case for 193nm immersion some time ago. On the other hand, Intel recently ordered 15(!) EUV machines from ASML, and that seems like a strong indication of how Intel will proceed at 7nm. We'll see if Samsung follows suit. Lithographic technology is surprisingly dynamic; it isn't the lock-step march down in feature sizes that the regular 130nm-90nm-65nm-45nm-32nm nodal designations make it appear. It is freakishly complex though, since producing actual devices on a process is ever more intimately coupled with an entire ecosystem of software tools at all levels.

This is getting off topic. For relevance for the thread, just a nodal number doesn't say all that much about the overall properties and benefits of a lithographic process for a given application such as a console APU.
There was some disappointment about the peak performance scaling with this gen, which had three node transitions from the prior gen's launch.
90 to 65 to 45 to 32, or the partial nodes a notch below each. The scaling was not ideal, but I'm handwaving it to three distinct transitions.
A next gen launch at 10nm is two distinct transitions from 28nm, assuming 10nm can make a full transition. The hybrid nodes represent a clean jump in density and power efficiency over 28nm, maybe a bit better than an ideal node jump since they have had the time and parts of two transitions plugged into them.

Speculation would have to focus on where the designs and architectures could differentiate themselves performance-wise when there may be only a 2-4x improvement if everything was scaled naively from the current gen.

10nm is indeed a bit early for a large jump in capabilities, unless the target power draw is allowed to escalate, and I don't see that happening really. The processes target lower power operation for one, and ergonomically most prefer quiet devices in their living rooms. I can't see noise such as the original Xbox 360 or PS3 made being acceptable anymore. That said, a factor of four over the PS4 in 2018 seems relatively straightforward, but whether that is enough to set consumer hearts on fire, well....
 
The recently released Nvidia GTX 980 Ti (particularly the OC one) is a great card for 4K gaming, especially if theoretically combined with a G-Sync monitor, according to DF:

Regardless, it's at the high end that GTX 980 Ti hits its stride. Combine the new card with one of the upcoming 4K G-Sync monitors and we could be onto a winning combination...

http://www.eurogamer.net/articles/digitalfoundry-2015-nvidia-geforce-gtx-980-ti-review

The Witcher 3, High, HairWorks Off, Custom AA 40.7 fps
Battlefield 4, High, Post-AA 69.6 fps
Crysis 3, High, SMAA 59.7 fps
Assassin's Creed Unity, Very High, FXAA 29.0 fps
Far Cry 4, Very High, SMAA 50.9 fps
COD Advanced Warfare, Console Settings, FXAA 96.9 fps
Ryse: Son of Rome, Normal, SMAA 45.6 fps
Shadow of Mordor, High, High Textures, FXAA 59.7 fps
Tomb Raider, Ultra, FXAA 66.0 fps

Those numbers are at 4K during gameplay and without DirectX 12's alleged improvements. @iroboto, confirm I'm not wrong here. ;)

That's for those who are still thinking that next gen won't be 4K gaming + adaptive vsync tech... And we are still maybe 4.5 years away from the PS5 and XBTwo...
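
For scale, a rough peak FP32 comparison between that card and the PS4's GPU, using published shader counts and base clocks only (ignoring boost clocks, bandwidth and architectural differences):

```python
def peak_fp32_tflops(shader_lanes, clock_ghz):
    """Peak FP32 = lanes * 2 ops per clock (FMA) * clock, in TFLOPS."""
    return shader_lanes * 2 * clock_ghz / 1000.0

gtx_980_ti = peak_fp32_tflops(2816, 1.000)   # base clock; boost goes higher
ps4        = peak_fp32_tflops(1152, 0.800)

print(f"GTX 980 Ti: ~{gtx_980_ti:.2f} TFLOPS")   # ~5.63
print(f"PS4       : ~{ps4:.2f} TFLOPS")          # ~1.84
print(f"Ratio     : ~{gtx_980_ti / ps4:.1f}x")   # ~3x, before driver/API effects
```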
 
That's for those who are still thinking that next gen won't be 4K gaming + adaptive vsync tech... And we are still maybe 4.5 years away from the PS5 and XBTwo...
I think it's not about whether next gen will be 4k or not. It's about whether it should be!
Personally, I don't want to play today graphics at 4k, in 5 years time.
I will want to play much, much better graphics even at 2k (which is 1080p really).
Who cares about playing Uncharted 4 in 4k in 2020?? I will want Uncharted 6 with insane levels of detail, at whatever resolution they manage to run it.
But knowing Sony in particular, they have to shift those new TVs and they will be pushing for 4k and HDR and everything else.
 
There's no doubt the next gen will be capable of doing 4K.

But the above games are current-gen level of graphics quality. With a next-gen engine, 1080p instead of 4K would allow roughly four times the shader work per pixel. Devs would probably decide to use that power to significantly improve lighting, for example.
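
The underlying arithmetic, for reference:

```python
pixels_4k    = 3840 * 2160    # 8,294,400
pixels_1080p = 1920 * 1080    # 2,073,600

print(pixels_4k / pixels_1080p)   # 4.0 -- at a fixed frame rate, rendering at
                                  # 1080p frees roughly 4x the per-pixel shading
                                  # budget, ignoring fixed per-frame costs.
```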

EDIT: nevermind, london-boy is a ninja.
 
So you think this gen will be 7 years? I hope not; the consoles will be very long in the tooth by then.
Until 7nm is ready. 14/16nm will be for current-gen cost reduction. 7nm will allow 20 TFLOPS mid-range GPUs (more or less Pitcairn-sized GPUs with that kind of power).
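
A quick check of that 20 TFLOPS figure, taking the Radeon HD 7870 (Pitcairn XT: roughly a 212 mm^2 die at 28nm, 1280 shaders at 1 GHz, about 2.56 TFLOPS) as the baseline and assuming 28nm to 7nm is worth about three density doublings with clocks held flat, which is obviously idealised:

```python
PITCAIRN_TFLOPS = 2.56    # HD 7870: 1280 lanes * 2 ops/clock * 1.0 GHz
FULL_NODE_STEPS = 3       # 28 -> 14/16 -> 10 -> 7, treated as three doublings

naive = PITCAIRN_TFLOPS * 2 ** FULL_NODE_STEPS
print(f"Pitcairn-sized die at 7nm: ~{naive:.1f} TFLOPS")   # ~20.5 TFLOPS
# Pure density scaling lands right on the 20 TFLOPS figure; power, clocks and
# cost per wafer decide how much of that actually materialises.
```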
 
My kids still watch a lot of CGI DVDs on a 60" LED TV, stuff like Frozen, etc. While the image is soft since it's an upscaled DVD, there is still such a gap in the per-pixel quality of the movie versus any game. That's obvious, but the point I'm trying to make is that I'd rather have a really high-quality 1080p image upscaled to 4K than a native 4K image that had to sacrifice rendering algorithms to achieve that resolution.

Maybe we'll see a return of more ASIC-like features in the next gen to free up performance, similar to the Xenos daughter die.
 
Until 7nm is ready. 14/16nm will be for current-gen cost reduction. 7nm will allow 20 TFLOPS mid-range GPUs (more or less Pitcairn-sized GPUs with that kind of power).

Sorry, that's all way over my head - I assume you're saying that it won't be technically possible to have the cost/size/heat required sooner... but I don't understand all that. If this gen is 7 years, then unless the GPUs are as helpful as Cerny said, the gap between PC and consoles will be too big IMO... it was bad enough last gen, and that was with a comparatively more powerful PS3.
 