Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Status
Not open for further replies.
If there will be a 9th console generation where computation is done locally, I'd guess: a quite fully programmable API (i.e., focused on compute shaders) with some fixed function for base geometry/primitive processing, some kind of micro-polygon tessellation support, and a conservative rasterizer. Yes, traditional shader stages must die.
 

To me (someone without a background in 3D graphics) in a general sense that kind of reminds me of Intel's Larrabee.
 
Larrabee is x86-based. You can put in over 9000 x86 CPUs, but it's always the same GPU-vs-CPU issue about the kind of work each is meant to execute. Compute shaders allow you to build your own graphics pipeline, but on the other hand there is still some work where fixed-function hardware has higher efficiency.
 
We can safely say that next gen will use some iteration of HBM, and that by then substrates will be cheaper than now.
We can also say that one of the limiting engineering factors this generation was early yields.
Sooooooooooo... what about a split multi-chip SoC?
Not a CrossFire/SLI setup, but maybe SoC A with the CPU, memory controller, fixed function and... well... stuff..., and SoC B with a sea of shaders, ESRAM and more stuff.
Being so close together on a substrate, they can connect with a big parallel bus.
Of course, when a new node shrink is available they can go for a monolithic chip.
Another advantage is that you get some upgrade flexibility.
I know that a console is almost fixed-spec, but say 8K Blu-ray becomes all the rage and you need a new decoder standard (say H.267): you can respin only half the SoC and ship a new PlayBox+ with a new Blu-ray drive and the ability to play Transformers 6.
 
Multi-chip solutions are hellborn, as the kind of bus necessary to get good comms is usually super fussy, expensive and just more hassle than fabbing a single giant ASIC. It's why all those multi-chip, multi-bus server solutions died off so quickly once massive monolithic chips became cost effective.
 
http://wccftech.com/exclusive-next-generation-consoles-expected-2018-amd/

From Wccftech: Next Generation Consoles Will Have 5x Performance/Watt with Strong Focus on VR – Expected by 2018

So 1.84TF * 5 = 9.2TF on the same thermal budget, if we don't take any other variable into consideration and take this literally.
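That back-of-the-envelope scaling can be written out explicitly (a sketch with illustrative numbers; the 5x figure is the rumor's, not a confirmed spec):

```python
# Back-of-the-envelope sketch: scale the PS4's peak GPU throughput by the
# rumored 5x performance-per-watt gain, holding the power budget constant.
ps4_tflops = 1.84            # PS4 GPU peak FP32 throughput
perf_per_watt_gain = 5.0     # rumored generational improvement
next_gen_tflops = ps4_tflops * perf_per_watt_gain
print(next_gen_tflops)       # ~9.2 TFLOPS at the same thermal budget
```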

Random questions:
1) Nintendo announced Zelda and Star Fox for Wii U for spring 2016, so where's the Wii 2 from AMD?
2) Now that the Jaguar core will be replaced by Zen/Zen+ in tablets, will it end up in a next gen console? Is it a scalable design, like ARM's?
3) 5x improvement in 5 yeaaaaaaaaars? Really? The GTX 980 is at 4.6TF in notebooks now!
4) Focused on VR because in three years it will be able to play games on Oculus with the graphics scaled to the current standard? Why not focus on VR now, simply scaling the graphics back to that of 3 or more years ago and upping the framerate?
5) Why not a Denver + GTX?
6) Isn't 2018 too soon? I feel like the current generation has been in beta until now and will really begin next year.
 

I think this sounds about right. It would put the design in the ballpark of 10 Tflops as many are already expecting.

I would also say that delivering chips from AMD to a console manufacturer doesn't necessarily mean the same time frame for consumers. It could be a prototype or a dev kit, which would ideally come a year or so before the consumer launch. I think we're still looking at a 2019 launch.
 
I don't know about 2018, because we don't even know if AMD will be around in 2017 in its current form. They may go completely belly up, or sell off the GPU business or the CPU business.

Despite this console generation being early in its lifetime, if the next gen consoles are backwards compatible (forward-compatible software now), then consumers won't mind and will welcome the quick upgrade.
 
I know it is only a rumor, but it is credible at least... I think 2018 is a bit early. Maybe Microsoft will try to release the Xbox Two early? 2019 is better...
 
If it's 5x the performance with a 1080p/30 or 60fps goal, then the leap would be practically the same as from last gen to this gen. For VR games we probably won't see a huge leap over the current gen, and you can forget about 4K gaming with a substantial leap in graphics. I guess they'll be selling us two versions of the same game soon, one standard, one VR, and that would make some sense in terms of player choice.
 
While it sounds interesting, I wouldn't put too much faith in Wccftech's rumors. They're known for being a rumor-mill website that will post anything, regardless of how credible this so-called "source" actually is. Of course, it's always fun to speculate.
 
3) 5x improvement in 5 yeaaaaaaaaars? Really? GTX 980 is at 4.6TF on notebooks now!
Imagine that same 980 taped out on 14nm FinFET, which has 2x performance per watt. There you go, a nice 9.2TF chip that will be available in 2016. :) By 2018 they can reduce its footprint.

And I would not call that a notebook. That GPU is in a fat laptop.
 

So this, but let's indulge the idea. 5x perf/watt != 5x perf. With the increasing focus on energy costs, and the fact that AMD is an ARM licensee, they could achieve this by offering an ARM solution today. The success of these consoles shows that for most folks a console does not need to be bleeding edge, merely a clear improvement on what came before; if that could be bundled with being cheaper, there's a good case for a PS5 based on ARM.

In reality I would expect something based on AMD's next gen low-cost x86 platform with a mid-range 2018 GPU, equivalent to maybe 2x GTX 970 levels of performance. That's enough to run a really nice 1080p framebuffer, as current adoption trends for 4K TVs don't suggest they'll be the standard until 2024 (July 2014 Intelsat survey (PDF): http://www.intelsat.com/wp-content/...efinition_TV_Adoption_and_Business_Models.pdf; Intelsat is a satellite company focusing on TV and film content), allowing for at least one more console cycle before 4K support becomes a necessity.

Edit: Another report, from Business Insider, suggests an adoption rate of only 10% by end of 2018, rising to >50% by 2024: http://www.businessinsider.com/4k-tv-shipments-growth-2014-9
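The perf/watt vs. absolute-perf distinction above can be made concrete with a toy calculation (the wattages here are my own illustrative assumptions, not sourced figures):

```python
# Toy model: absolute performance = performance-per-watt * power budget.
# A 5x perf/W gain only yields 5x performance if the power budget is held
# constant; an ARM-class low-power design spends the gain on efficiency.
def perf_tflops(tflops_per_watt: float, watts: float) -> float:
    return tflops_per_watt * watts

base_efficiency = 1.84 / 100    # assume ~1.84 TF out of a ~100 W GPU budget
same_budget = perf_tflops(base_efficiency * 5, 100)  # ~9.2 TF
arm_class = perf_tflops(base_efficiency * 5, 30)     # ~2.76 TF in a 30 W box
```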
 
Does the reported performance advantage of A9X in iPad Pro over Core M in bottom-tier Surface Pro 4 indicate a viable future for ARM in a powerful console?
 
I don't believe the rumor... but 5x perf per watt, and back to a 200W console could be a nice 15TF.

Looking at the PS4 enclosure, they could have made it maybe an inch wider and half an inch taller, with bigger heat sink for a 200W console.

(if we don't get at least 15TF, I cancel my preorder)
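For what it's worth, the 15TF hope roughly checks out under the rumored 5x figure; here's a sketch, where the ~100 W current-gen GPU budget is my assumption, not a measured number:

```python
# Sketch of the 200 W argument: take the rumored 5x perf/W gain and spend
# it in a bigger box. The 100 W baseline GPU budget is an assumption.
ps4_tflops = 1.84
assumed_gpu_watts = 100.0
tflops_per_watt = ps4_tflops / assumed_gpu_watts
big_console = tflops_per_watt * 5 * 200   # 5x efficiency, 200 W budget
print(round(big_console, 1))              # ~18.4 TF, above the 15 TF hope
```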
 
The Twister CPU core is already significantly faster than current gen console CPU cores. Twister beats Jaguar in all areas: it has significantly higher IPC (wider execution, deeper OoO), slightly higher max clocks, and much faster and larger caches. Comparing performance/watt is not easy, as the core counts differ and the manufacturing processes differ. However, I would expect performance/watt to be one of the key design points of the Twister CPU (as mobile phones are its main market segment).

That 51.2GB/s bandwidth figure is also awesome, especially as it is combined with a PowerVR GPU that uses tiling to save a considerable amount of bandwidth compared to traditional GPU architectures. But it is still far away from the PS4's bandwidth figure. A big ramp up would still be needed to match 9th gen expectations.
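To put that gap in numbers (the 176 GB/s PS4 figure is the commonly cited spec; it is not stated in the post above):

```python
# Rough bandwidth comparison, before accounting for the tiler's savings.
a9x_bandwidth_gbps = 51.2    # iPad Pro A9X, from the post above
ps4_bandwidth_gbps = 176.0   # commonly cited PS4 GDDR5 figure (assumption here)
gap = ps4_bandwidth_gbps / a9x_bandwidth_gbps
print(round(gap, 2))         # ~3.44x raw gap
```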
 
I did some rough pixel counting from die shots, and it looks like the CPU area of the Samsung version of the A9 dedicates ~13mm2 to the two CPUs. The L3 cluster is an additional ~5.75mm2.
The PS4 APU dedicates roughly 55mm2 to the Jaguar clusters.

Just going with an assumed 2x density difference between TSMC 28nm and Samsung 14nm, scaling Twister back up would give ~26mm2.
One could double the core count to 4 for roughly the same area as what the PS4 gave to Jaguar, but this would go without the L3 cache. It might be argued that Jaguar's and Twister's density/perf tradeoffs were different, and that there is further wiggle room between Samsung's and TSMC's processes.
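The area normalization above can be sketched as arithmetic (the 2x density factor is the assumption stated above; the mm2 figures are the rough pixel-counted numbers from the post):

```python
# Normalize the A9's Twister CPU area to a 28nm-equivalent footprint and
# compare against the PS4's Jaguar budget. All inputs are rough estimates.
twister_pair_mm2 = 13.0    # two Twister cores, Samsung 14nm
l3_mm2 = 5.75              # L3 cluster, dropped in the quad-core scenario
jaguar_mm2 = 55.0          # PS4 APU Jaguar clusters, TSMC 28nm
density_factor = 2.0       # assumed 14nm -> 28nm area scaling

pair_28nm_equiv = twister_pair_mm2 * density_factor   # ~26 mm^2
quad_28nm_equiv = pair_28nm_equiv * 2                 # ~52 mm^2, no L3
print(pair_28nm_equiv, quad_28nm_equiv, jaguar_mm2)
```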

AMD's later reveal of functionally identical Puma cores with functional turbo and fixed DVFS may indicate that one of the architectures came out significantly more half-baked than the other, but Apple could hardly be blamed for that.
Twister's power consumption would likely at least double if it were to appear on TSMC 28nm, but where that puts it relative to the TDP budget given to Jaguar and the clock impact is unclear.

It would be a question of whether 8 small cores versus 4 large would be preferable, with the large cores being potentially hobbled somewhat to fit the area budget by losing the L3 and getting a very inferior process in the act of normalizing things. Granted, Jaguar by default is hobbled given its hackish cross module L2 penalties.

The 3 year difference is something Jaguar does have over Twister.
 