Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Status
Not open for further replies.
And of course with stacked CPU/GPU, one would expect a return to the wonderful world of weird processor naming. Pancake will do. I miss the Emotion Engine days.
 
I concur with the x86 basis. There seems little reason to change from that. The limiting factor at the moment seems to be fabrication and what'll actually be producible en masse. How much silicon is going to be available for use? What's the fastest bus that'll be possible?

Other than that, it's quite possibly the most boring set of options ever. CPU is x86 or ARM - can't see any reason for anything else. GPU is PC architecture (unified shaders, compute), as that's what everything is doing, including mobile. Wild-card raytracing processor? Those days are over! About the only possibility for something interesting is if I reawaken my Grand Vision and look at something like a gaming tablet with dock. Device-level connections would need to be extremely fast, allowing a local network of processors across two devices to work fairly transparently. I doubt anyone has enough courage to try something as different as that, though. Maybe Nintendo, wanting to be different.

I have wanted the latter since first seeing a GRID tablet when I was younger. For me the perfect device would be pretty near Surface Pro 3 size, with external water cooling that you wear on your back. ;) A docking station for desktop use, where you can run three monitors at up to 4K each. A controller dock similar to what the Razer tablet has, for semi-mobile gaming (this is on the MS idea site, with what, 3 votes?). Then a living room dock that would already contain DLNA 2.0 features for TV duty even without the main tablet docked, plus controller support, AV support, etc.

I will work on the specs required and the TDP and get back to you. ;p
 
From GAF

http://seekingalpha.com/article/273...supply-chain-conference?page=2&p=qanda&l=last

The design wins are interesting because the funding, the R&D dollars for customizing the parts for our customers' products, is precisely pre-funded by the customer, and like I said the workload has started: we are spending the money and the resources and doing the work to design the parts, to be introduced sometime in 2016.

We didn't say which space it is in. I'm not going to give too much detail. I'll say that one is x86 and one is ARM, and at least one will be on gaming, right. But that's about as much as you're going to get out of me today, out of fairness to the customers. It is their product. They launch it. They announce it, and then, just like the game consoles, you find out that it's AMD's APU that's been used in those products.


I'm guessing the game one is ARM, possibly for Nintendo, if there's anything to this.
 
Who would be after the other device? I wouldn't say Nintendo is 100% keen on cross-platform compatibility, so perhaps the other party would have more interest in an x86 device while Nintendo focuses on ARM and handheld compatibility? Whereas if the other party is someone producing a tablet or something (e.g. Amazon??), they'd probably be after ARM, which would put the x86 part with Nintendo, probably.
 
I also think it would be ARM, simply because they want a common platform for handheld and console. I don't think AMD has a good enough x86-64 CPU to go into a handheld. Using x86 in a handheld would only be possible with Intel, but it would cost a lot.
 
May be a dumb question, so don't slam me too hard: since AMD is getting into ARM now, could they make it x86-64 compatible?
 
May be a dumb question, so don't slam me too hard: since AMD is getting into ARM now, could they make it x86-64 compatible?

If I understand what you're saying (make an ARM chip that runs both), then no. It'd be a bit like having a petrol car and putting diesel in it; it just wouldn't work, to the best of my limited knowledge.
 
Nvidia wanted to emulate x86 on ARM, but to emulate x86 you have to have an x86 license. That's not a problem for AMD.

But does ARM have such a power or performance advantage that it is more advantageous to emulate x86 than to go with a native processor?
 
Nvidia wanted to emulate x86 on ARM, but to emulate x86 you have to have an x86 license. That's not a problem for AMD.
If this ties into Denver: it wasn't using ARM either, but a VLIW architecture that relies on a software translation and optimization layer, with a hardware fallback path for when translated code is unavailable.
The custom internal architecture differs from the one being emulated, and it has features and behaviors that neither ARM nor x86 need, because they don't need to emulate themselves in software.

The hardware ARM decoder is a fallback path that Nvidia may have needed to be an architectural licensee to add, since that portion of the chip has to plug into something decidedly not from ARM's product portfolio.
If there were to be an x86 version, there would either need to be a purely software fallback similar to Transmeta's step-by-step emulation, or a hardware front end that could plug into the core, which would run into IP problems.
I believe the most recent Intel-Nvidia settlement already precluded the software solution.

But does ARM have such a power or performance advantage that it is more advantageous to emulate x86 than to go with a native processor?

What the current ARM and x86 ISAs do is not sufficiently different to make this plausible from a physical standpoint, especially once much of the execution process moves outside of the core.
They present a set of bit patterns and behaviors that the chips will recognize and act upon in a manner set down by the specifications. Barring something crazy, ISA differences at this point are a second-order concern that might influence things from the single digits to some tens of percent in efficiency or performance.
Most x86 cores these days perform their own hardware translation into their own internal formats, and even if that is wasteful in terms of the extra steps involved, those steps are orders of magnitude cheaper and faster than software, because the logic is optimized for exactly this purpose and its activity occurs in a physically small area with physically small devices.
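As a toy illustration of that hardware "cracking" step, here is a sketch of splitting a CISC-style instruction with a memory operand into simpler internal micro-ops. The uop names, tuple format, and `tmp0` register are invented for illustration; a real decoder looks nothing like this.

```python
# Toy sketch of cracking one architectural instruction into micro-ops,
# the way modern x86 front ends do in hardware. All names are invented.

def crack(instruction):
    """Split one architectural instruction into a list of micro-ops."""
    op, dst, src = instruction
    uops = []
    if isinstance(src, tuple) and src[0] == "mem":
        # Memory source operand: emit a separate load uop first.
        uops.append(("load", "tmp0", src[1]))
        src = "tmp0"
    uops.append((op, dst, src))
    return uops

# 'add rax, [rbx]' becomes a load uop followed by a register-register add.
print(crack(("add", "rax", ("mem", "rbx"))))
# 'add rax, rcx' needs no cracking.
print(crack(("add", "rax", "rcx")))
```

The point of the sketch is only that this mapping is mechanical and local, which is why doing it in dedicated hardware is so much cheaper than doing it in software.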

Anything that moves out of being wholly on-core incurs some 1-2 orders of magnitude in power cost, corresponding to extra steps in the execution loop that start traversing cache hierarchies or external memory (longer distances, longer delays, larger wires, bigger devices relative to the transistors inside the decoder/cracking logic).
Architectural emulation itself can require an order of magnitude more performance from the emulating architecture than from what it is emulating, for equivalent results, which is why straight emulation of the prior generation of consoles on the PS4 or Xbox One is not likely to be a thing for anything performance-demanding.

In the case of Denver, the translation layer converts hot code to native instructions and optimizes the traces on top of that. This minimizes the number of times the design incurs the orders-of-magnitude cost of going non-native; it has hardware fallbacks for the times it does, and extra tricks to optimize on top of that. There are other design reasons for that layer in Denver, but those are outside the ISA question.
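A minimal sketch of that hot-code strategy: interpret cold code step by step, but once a block crosses a hotness threshold, translate it once and run the cached native version from then on. The threshold value, block model, and class layout are all invented for illustration, not Denver's actual mechanism.

```python
# Toy model of a dynamic binary translator with a hot-code threshold.
# Cold blocks take the slow interpreted path; hot blocks are translated
# once and served from a cache on every later execution.

HOT_THRESHOLD = 3

class Translator:
    def __init__(self):
        self.exec_counts = {}   # block address -> times seen
        self.native_cache = {}  # block address -> translated block

    def run_block(self, addr, block):
        if addr in self.native_cache:
            return ("native", self.native_cache[addr])
        self.exec_counts[addr] = self.exec_counts.get(addr, 0) + 1
        if self.exec_counts[addr] >= HOT_THRESHOLD:
            # Pay the one-time translation cost; future runs are native.
            self.native_cache[addr] = f"translated({block})"
            return ("native", self.native_cache[addr])
        # Cold path: slow step-by-step interpretation (the fallback).
        return ("interpreted", block)

t = Translator()
for _ in range(5):
    mode, _ = t.run_block(0x1000, "guest_block")
    print(mode)
```

The first two executions go through the slow path; from the third onward the block runs "native", which is why the scheme only pays off for code that is executed many times.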
 
I don't see the problem with an ARM decoder that decodes ARM instructions into current x86 micro-ops; the simple thing is that probably nobody would want that. x86 can already emulate ARM. I don't see that happening from an efficiency standpoint. There is really no reason to run two ISAs on a single chip, because you won't be running code for both natively. It also seems like more work than AMD could take on given its R&D constraints.
 
Slight correction: you could have two different processors on the same chip, with one being an ancillary processor, like the ARM cores in the consoles. They'd serve different purposes though, and the meat of your point is true: by and large you wouldn't want to switch between running native x86 apps and native ARM apps on the same device.
 
I don't see the problem with an ARM decoder that decodes ARM instructions into current x86 micro-ops; the simple thing is that probably nobody would want that. x86 can already emulate ARM. I don't see that happening from an efficiency standpoint. There is really no reason to run two ISAs on a single chip, because you won't be running code for both natively. It also seems like more work than AMD could take on given its R&D constraints.

I think the issue is that these are not so much x86 uops as uops that serve an internal architecture designed to emulate x86. They contain enough information to conform to the needs of x86, but may not have sufficient payload in terms of operands or the specific state variables that differ between ISAs, without modifying the uops.
The uops also do not encode enough information to cover the full behavior of the ISA; a good amount of it is embedded in the core's pipeline, execution units, scheduling/retirement, and memory pipeline. A decoder is not going to be intelligent enough to handle this, nor would it have the necessary level of control over the whole core and cache subsystem to change their behaviors.
So even assuming an ARM decoder could spit uops into an x86 engine, that wouldn't mean the ARM instructions would behave like ARM instructions.
If AMD were to do something like that, I would imagine that while significant portions of the logic could be reused, important portions of the units, pipelines, and memory system would need to be redesigned depending on what each ISA dictates, without making the two run together.

If a base architecture did want to have software emulation of multiple ISAs on the table, it would probably start looking as weird as Crusoe and Denver do at a low level, with hefty software involvement on top of an unusual microarchitecture to get decent performance above the slower safe path.
 
Even if it's a moving target, what's the "so good that I can barely see a difference with double the polygons" level?
 
ARMv8 and x86 are just two very different beasts, semantically.

There are a number of microarchitectural features in an x86 core, needed for it to perform well, that aren't needed if all you do is execute ARM instructions:
1. x86 sets flags on a lot of operations.
2. Split register tracking.
3. Increased cache bandwidth because of memory-operand instruction formats and fewer GPRs.
4. Advanced memory reordering because of (3) (you don't want to hold up spill/fill code because you had a store miss in the heap).

At the same time you fail to take advantage of the one feature that gives x86 an edge: higher instruction density. Because each instruction slot in the ROB can hold multiple ops, the effective ROB size is increased, as is the internal issue width.

Cheers
 
Nintendo

Nintendo needs a new console soon. They have already been talking about the possibility of a new console, so Christmas 2015 is my bet.

Nintendo has always done their operating systems and libraries themselves. They wouldn't gain that much by using x86 or ARM plus an existing operating system (such as Android) as the base of their console OS. I don't see IBM as a valid option anymore (as they don't have an integrated GPU available), so my wild guess is this.

Hardware setup:
- 6 core / 12 thread MIPS64 CPU (http://www.anandtech.com/show/8457/mips-strikes-back-64bit-warrior-i6400-architecture-arrives)
- PowerVR Series7XE based (32 core) GPU (http://www.imgtec.com/news/detail.asp?ID=933)
- 8 GB of economically viable memory

Both the CPU and the GPU would obviously be clocked slightly higher (~20%) than the mobile-based estimates in the marketing materials. This would result in raw CPU and GPU performance pretty much on par with the PS4. The system would be equipped with 8 GB of the most economically viable low cost / low power memory in a "quad channel" configuration, resulting in similar bandwidth to what the Xbox One has to its main memory (~70 GB/s, around 3x higher than the Wii U). PowerVR's on-chip tiling buffers should save around the same amount of main memory bandwidth as the small EDRAM/ESRAM pools seen in the Xbox One and Wii U, making the slightly lower main memory bandwidth a viable option.
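The ~70 GB/s figure checks out as back-of-envelope arithmetic if you assume four 64-bit channels at 2133 MT/s (roughly LPDDR3/DDR3-class; the specific memory type and transfer rate here are my assumptions, not from the post):

```python
# Back-of-envelope check of the ~70 GB/s quad-channel figure.
# Assumed configuration: four 64-bit channels at 2133 MT/s.

channels = 4
bus_width_bits = 64
transfer_rate = 2133e6  # transfers per second, per channel

bandwidth_gbs = channels * (bus_width_bits / 8) * transfer_rate / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")  # ~68.3 GB/s
```

That lands right next to the Xbox One's ~68 GB/s DDR3 main memory bandwidth, which is presumably why the post picks that configuration.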

Nintendo would obviously also include some of those optional PowerVR ray tracing hardware blocks in the GPU. This would take just a few percent of extra die space, and would allow some nice effects in their own games. Third party developer support for it would obviously remain minimal (except for some small indie game that becomes the next "Minecraft"). No matter what, ray tracing would be an excellent marketing tool ("The first gaming console capable of ray tracing. Games will look as good as Avatar."). This would be a very good deal for Imagination, as selling MIPS CPU cores to mobile devices and ray tracing hardware + 32 core GPUs to servers are not easy tasks. A gaming console deal would be a jackpot, making both MIPS and the ray tracing hardware more mainstream, and also making it possible for them to expand their GPU lineup even wider in the future (opening new market segments).

There would be some kind of new controller, or a much improved version of either the Wii U or Wii controller, included. Games would be distributed mainly on Blu-ray optical discs (deliberately not compatible with the Blu-ray standard, to save Nintendo licence costs and to reduce piracy).

Apple

There have been rumors about Apple invading the living room. Jobs had that dream a long time ago, but so far we have seen nothing (except the Apple TV). My next wild guess is Apple releasing their console for Christmas 2015.

Hardware setup would obviously include Apple's own 64-bit CPU cores (a successor to Cyclone v2) + Imagination Technologies GPU cores (as all their mobile devices do). I would estimate 6 CPU cores at around 2 GHz + 32 Series7XE cores. I can't see Apple equipping the device with more than 6-8 GB of memory (depending on the memory bus).

This device would run iOS, and the games would be programmed using the new low-overhead Metal graphics API. There would be no 100% "close to the metal" APIs to access the GPU or the system directly. This would keep the games forward compatible with the iTV (*) 2016 model (released the next year). (* Apple could definitely buy the iTV trademark if they wanted it badly enough.) Apple would give all 6 CPU cores to the active application (stealing only tiny CPU slots for very limited background tasks, just like current iOS versions). This would mean the gaming CPU performance of the Apple device would exceed the current gen consoles by 20% or so. GPU performance would be similar to the Xbox One.

The console would be fully integrated with Apple's devices, and you could control it with the touch screen of your iPhone or iPad. Apple's console would not have any optical disc drive or memory card slot; everything would be downloaded from the internet (iTunes, the App Store).

Sony

Sony has no need to release a new console soon, as they hold the current performance lead and their hardware and games are selling very well. Christmas 2017 is the earliest they could release a brand new console.

64-bit ARM was one year too late for the PS4. Sony chose ARM for the PS Vita (and thus has an existing ARM-based ecosystem). I don't see a good reason why they wouldn't have done the same for the PS4, had the hardware been available. Now that 64-bit ARM is available (and already proven by Apple), that road block is gone. Also, both Imagination and NVIDIA have GPU products integrated into ARM-based SoCs (at least in their marketing materials) that could match the current consoles in performance already this year (2015).

My bet goes to an all-NVIDIA solution. NVIDIA badly wants Tegra to succeed (Kepler/Maxwell GPUs and the Denver CPU), and they are lacking partners and market penetration. Sony would be a perfect fit for them at the high end. I think Sony would prefer the Denver CPU over a traditional out-of-order ARM CPU (as their developers are used to low level hardware access + low level optimization). Imagination could also be a dark horse here, as they provided the GPU for the PS Vita. However, time will tell whether they can scale up their designs fast enough to match Sony's performance needs. This console cannot be a PS4+; it needs to dominate the existing market at its launch.

At launch there will be a version of the console with an included 3D headset (an improved "Project Morpheus"). Not everyone likes to play like this, so Sony will also ship the console without it. However, developers will be required to support stereoscopic rendering in all games. 3D headsets will not become popular this gen, because current gen consoles lack the performance to render games at a locked 60 fps for both eyes (= 120 frames per second). The headsets also still have some problems (image quality not good enough, latency still a little too high). The PS5 will fix both of these problems. It will be the first console to support the new HDMI standard for adaptive vsync (reducing latency and eliminating all tearing).
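The quick arithmetic behind that "60 fps for both eyes = 120 frames per second" point is worth spelling out, since it's the per-frame render budget that kills current consoles here:

```python
# Per-frame render budget at common target rates. At 120 Hz the whole
# frame (both eyes, in the post's framing) must fit in ~8.3 ms.

for fps in (30, 60, 120):
    budget_ms = 1000 / fps
    print(f"{fps:3d} fps -> {budget_ms:.2f} ms per frame")
```

Going from a typical 30 fps console target to a 120 Hz headset target cuts the frame budget by 4x, which is why the post argues this generation's hardware can't sustain it.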

The Sony PS5 OS would be based on Android (like the PS Vita's), but they could customize it to allow games more direct hardware access.

Microsoft

I believe Microsoft's console will also be (64-bit) ARM based. Microsoft already has a full ARM software ecosystem (Windows Phone, Windows RT, lots of libraries). Microsoft has used Tegra in its Windows tablets, so an NVIDIA-based solution would be likely, assuming they can overcome their bad feelings about the original Xbox NV2A GPU deal. Another solid option would be AMD; Microsoft used AMD hardware in both the Xbox 360 and the Xbox One. AMD already has 64-bit ARM server CPUs with 8 cores and integrated GPUs available. Scaling these up to meet console requirements in the next two or three years would be straightforward.

Microsoft's console will definitely include at least 16 GB of memory, even if Sony decides to go with just 12 GB. Neither console will be fast enough to run 4K games at 60 fps, so most games will be 1080p. Only a few percent of consumers will have 4K TVs, so this is not a problem. Obviously both will be able to play 4K movies. If Apple's console launch without a disc drive and with an "always online" requirement were to succeed, Microsoft might try to launch their console without a disc drive as well. Maybe this time "always connected" would actually work. But I don't believe they have the guts to do it first (after the failure last time).
 

You detailed the thoughts of my previous post, except I don't think Nintendo will unplug the Wii U that soon. The company supported its past hardware, including the N64 and GameCube, for about 7 years, which would push the next console to around 2018. I could see Nintendo shortening this cycle, but hardly to shorter than 4 years, which would mean 2016. (Despite the latest Satoru Iwata interviews revealing that work is being done on the next gen and that mistakes were made on the Wii U.)

I'm not even considering what MS/Sony will do, and I have no idea what Apple might try. Given how poor that Android console turned out to be, I'd be cautious, but well... it's Apple, they could sell anything to Rys ;p
 
I could see Nintendo shortening this cycle, but hardly to shorter than 4 years, which would mean 2016. (Despite the latest Satoru Iwata interviews revealing that work is being done on the next gen and that mistakes were made on the Wii U.)
Most consumers don't even know the Wii U is a separate product from the Wii. The name was a huge mistake. If they notice the U at the end of the name, they probably think it's a slightly improved new version, like NDS -> 3DS. Nintendo could release a completely new console with a completely new name (and no backwards compatibility), and the big crowd wouldn't even notice they had killed the Wii U in the process (the Wii is already old, so the time has come to replace it).

I agree with you that 2015 is too early, since they need to have almost-finished hardware a year before launch to allow developers to write some launch games for it. They also need to finish the OS and all the network software. Nintendo NEEDS to be properly connected this time. And they NEED to launch the console with the newest Super Mario AND Zelda to succeed. So Christmas 2016 seems more likely.
 