Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Most consumers don't even know the Wii U is a separate product from the Wii. The name was a huge mistake. If they notice the U at the end of the name, they probably think it's a slightly improved new version, like NDS->3DS. Nintendo could release a completely new console with a completely new name (and no backwards compatibility), and the big crowd wouldn't even notice they killed the Wii U in the process (the Wii is already old, so the time has come to replace it).

I agree with you that 2015 is too early, since they need to have almost-finished hardware a year before the launch to allow developers to write some launch games for it. And they also need to finish the OS and all the network software. Nintendo NEEDS to be connected properly this time. And they NEED to launch the console with the newest Super Mario AND Zelda to succeed. So Christmas 2016 seems more likely.

I already expressed the same opinion regarding the very poor naming of NES6, and as a Wii U customer I also agree on the OS & networking needing to be done right next time, and on big titles being ready for launch.
I would think Nintendo is now aware of those problems and will indeed make sure things are as good as possible for next gen.
That solution also lets Nintendo create a unified architecture for both its home & handheld consoles, which might be interesting...
 
Nintendo

Hardware setup:
- 6 core / 12 thread MIPS64 CPU (http://www.anandtech.com/show/8457/mips-strikes-back-64bit-warrior-i6400-architecture-arrives)
- PowerVR Series7XE based (32 core) GPU (http://www.imgtec.com/news/detail.asp?ID=933)
- 8 GB of economically viable memory

Both the CPU and the GPU would obviously be clocked slightly higher (~20%) than the mobile-based estimates in the marketing materials. This would result in raw CPU and GPU performance pretty much on par with the PS4.

How is a Series7XE with 32 cores ever going to be on par with the PS4's 18 CUs @800MHz?
According to Ryan's analysis, the single-cluster/16 pipeline/32 core 7XE is capable of 64 FLOPs/clock FP32 and 128 FLOPs/clock FP16.
Even if you add them together, assuming perfect utilization of all the ALUs, you'll get 192 GFLOPS at 1GHz, together with a really low 4 GPixel/s fillrate.
This would be lower performance than a Tegra K1, hardly able to match an X360. Unless you clocked this thing to nearly 10GHz, it wouldn't reach PS4 levels of performance.


Perhaps you meant a Series7XT with 16 clusters (and 32 cores per cluster)? At 1GHz, we'd be looking at around 1 TFLOPS FP32 + 2 TFLOPS FP16 (+ an optional 512 GFLOPS FP64) (+ 256 GFLOPS FP16/32 from the SFUs? I'm not sure those are usable for these calculations.)
Imgtech's top end solution is in fact capable of competing with the new-gen consoles, but the 7XE would be fit for a handheld successor at best.
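
To make those comparisons concrete, here's a minimal back-of-the-envelope sketch (C++; the per-clock figures come from Ryan's analysis and the usual GCN numbers, with an FMA counted as 2 FLOPs - treat it as napkin math, not a benchmark):

```cpp
#include <cstdio>

// Peak throughput: FLOPs-per-clock * clock (GHz) = GFLOPS.
double gflops(double flops_per_clock, double ghz) {
    return flops_per_clock * ghz;
}

int main() {
    // PS4: 18 CUs * 64 ALUs each, FMA = 2 FLOPs, at 0.8 GHz.
    std::printf("PS4:            %5.0f GFLOPS FP32\n", gflops(18 * 64 * 2, 0.8));
    // Series7XE, single cluster: 64 FP32 + 128 FP16 FLOPs/clock, at 1 GHz.
    std::printf("7XE @ 1 GHz:    %5.0f FP32 + %5.0f FP16 GFLOPS\n",
                gflops(64, 1.0), gflops(128, 1.0));
    // Hypothetical 16-cluster Series7XT at 1 GHz.
    std::printf("7XT*16 @ 1 GHz: %5.0f FP32 + %5.0f FP16 GFLOPS\n",
                gflops(16 * 64, 1.0), gflops(16 * 128, 1.0));
    return 0;
}
```

Even summing the 7XE's FP32 and FP16 rates gives only 192 GFLOPS per GHz against the PS4's ~1843 GFLOPS FP32, which is where the near-10GHz figure above comes from.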
 
Apple

There have been rumors about Apple invading the living room. Jobs had that dream a long time ago, but so far we have seen nothing (except for the Apple TV). My next wild guess goes to Apple releasing their console for Christmas 2015.

I can't see this happening unless it's a resource-lite evolution of the existing low-cost Apple TV. From Apple's financial reports, and their statements that games account for the majority of iOS App Store revenue, you can infer that Apple makes 5-9x more profit per month from games on iPhone/iPad than Sony makes in a year - using the last set of complete figures for both companies.

Even if you use Sony's killer 2004/05 PS2 figures, adjusted for inflation, as a benchmark, Apple are crushing Sony on profits in the video game sector while doing very little. Spending actual effort on a platform, beyond initiatives like the Metal API, seems entirely unnecessary.
 
Most consumers don't even know the Wii U is a separate product from the Wii. The name was a huge mistake. If they notice the U at the end of the name, they probably think it's a slightly improved new version, like NDS->3DS. Nintendo could release a completely new console with a completely new name (and no backwards compatibility), and the big crowd wouldn't even notice they killed the Wii U in the process (the Wii is already old, so the time has come to replace it).

I agree with you that 2015 is too early, since they need to have almost-finished hardware a year before the launch to allow developers to write some launch games for it. And they also need to finish the OS and all the network software. Nintendo NEEDS to be connected properly this time. And they NEED to launch the console with the newest Super Mario AND Zelda to succeed. So Christmas 2016 seems more likely.

Most of the big crowd might not notice, but those who are loyal and support Nintendo would definitely notice that their $250/$300 investment is suddenly at EOL after only 3 years.

Nintendo needs to do a lot of the things you are advocating, but I don't think that Nintendo, in an effort to do so, should basically sh&* on those who are supportive of its platform and ecosystem because it's chasing those more inclined to buy the XB1 or PS4.
 
Most of the big crowd might not notice, but those who are loyal and support Nintendo would definitely notice that their $250/$300 investment is suddenly at EOL after only 3 years.

Despite its abysmal cumulative sales, the Wii U is already over 2 years old.
By late 2016 (as per sebbi's suggestion), the Wii U will be 4 years old, not 3.

4 years is how long it took between launching the original Xbox and the X360. Did that hinder the X360's sales and "loyalty factor"?
Moreover, the original Xbox sold something like 25 million units during its lifetime, whereas the Wii U will be lucky if it ever reaches 20 million.
BTW, Nintendo took about 5 years between the GameCube and the Wii.


Given how the Wii U has been dragging Nintendo down into unprecedented losses (except for last quarter), I think these "loyal Nintendo supporters" aren't among their top concerns. Especially not after 4 years of "service".
 
Nintendo

Nintendo has always done their operating systems and libraries themselves. They wouldn't gain that much by using x86 or ARM + an existing operating system (such as Android) as a base of their console OS.
This has been historically true, although is it certain that the situation hasn't changed? Part of Nintendo's slide in relevance can be attributed to parts of the market writing it off due to its reduced device capability and lack of platform services, and a fully featured OS can facilitate those services. The Wii U's system software at roll-out was in a very bad state, so if Nintendo wants to take the fight more directly to the PS4 (FreeBSD) and Xbox One (Windows), is there evidence that they're more comfortable in this zone than previously?

If they're bringing in MIPS and PowerVR, which have existing kernels and platforms, why not draw from those, since the rather non-standard memory hierarchy and old customized PowerPC architecture are being dropped anyway?
I have doubts, given the troubles Nintendo has had of late, that its internal platform has any significant insights on working with modern architectures relative to platforms that have been evolving so much more quickly.

Hardware setup:
- 6 core / 12 thread MIPS64 CPU
Is there no way to get to 8 physical cores? There are still some reasons to keep at least one core for the system layer, but taking a reserve core out of 6 is going to have an impact.
I'm also not entirely sold on the performance since they're going with DMIPS as a measure, which is an optimistic one used primarily in the mobile/embedded space.
Jaguar does fine with Dhrystone as well, so there may be a small but noticeable deficit if the core count starts lower, and if the current consoles decide to unlock one of their reserved cores.

All this aside, I would be curious how Imagination fares at the level of system integration, power, and performance in question.
Mobile does encourage a very large amount of integration, but there are different criteria at the console level and I haven't seen something to compare with the current or previous gen consoles.

Sony

My bet goes to an all-NVIDIA solution. NVIDIA wants Tegra to succeed badly (Kepler/Maxwell GPUs and the Denver CPU), and they are lacking partners and market penetration. Sony would be a perfect fit for them in the high end. I think Sony would prefer the Denver CPU over a traditional out-of-order ARM CPU (as their developers are used to low-level hardware access + low-level optimization).
Perhaps if Nvidia opens up the internals of their optimizer and provides Sony with a way into the secure memory space for the processor.
Otherwise, I am not sure I would characterize Denver as being lower-access than a regular OoO core. It may be lower-access for software, but that would be software existing as CPU-only code, even below Sony's hypervisor.

Not knowing when whole subroutines are going to spontaneously become optimized, become re-optimized, become mis-optimized, or get kicked out of the optimization cache is a far bigger unknown than knowing if an OoO core moved a load past a multiply in a ~100 instruction window. The minimum time budgets are potentially far more wobbly with this, and the optimizer shies away from code that changes privilege levels (something the DRM-crazy console platforms may not like).
While it would always come down to what Sony and Nvidia would get out of the deal, I feel at this point that going with Denver is saddling a console with the cost of Denver's other ambitions outside that space. Denver's not that far ahead of the best custom OoO cores in its general performance range, and the console space is much more willing to use many more cores. I would look forward to seeing how Denver scales to higher core counts, particularly since there may be unusually heavyweight requirements for syncing the instruction path now that there are variable amounts of optimized code and Denver already partially reserves one of two cores for the optimizer.

Frankly, I think Sony would benefit from a more focused and physically developed core that doesn't need an almost fully-expanded uop format in memory, and a pipeline that could probably be tightened up if it didn't leave open the possibility of supporting the quirks of an arbitrary architecture. The software optimizer's job could even be helped if at least some OoO functionality were in the core, since it could leave routines unoptimized and unexpanded, or invoke the optimizer less when the power costs would be higher than an OoO core muddling through mostly-optimal code. Other dynamic translation/optimization schemes in the past, like Dynamo, indicated that they found a benefit with software that can optimize itself targeting an OoO core.

I was considering posting in the Denver thread and here whether Sony of all the manufacturers would be the most paranoid about an architecture that puts a portion of the hardware execution loop off-die, particularly since an ancestor of Denver was Transmeta's Crusoe, and that chip's secure memory partition's DSA key was compromisable.
(edit: http://www.realworldtech.com/crusoe-exposed/3/ )
Perhaps if the memory is moved onto an interposer, it would be harder for a less-resourced hacker to bus glitch or use a DRAM analyzer, but I think it would be preferable if that memory were stacked or placed on-die to make it hard even for a well-funded criminal organization or reverse-engineering group to crack a program that would let anything running on the CPU be monitored. The optimizer's ability to evaluate much more code also makes it much more capable of determining the value of code it is monitoring than a similar hack of a standard CPU's firmware (security through obscurity rules the day there, although it's not mentioned much).
I'm assuming Nvidia has already thought of this without disclosing it, but I'm also pondering whether ARM's weak Icache consistency could--without further changes like hardware Icache snooping--allow an idling core's stale instructions to execute after the optimizer has reoptimized code past a branch target, potentially allowing execution results to change if the stale ARM code branches to an optimizer cache address for a different program segment.


Microsoft
AMD already has 64 bit ARM server CPUs available with 8 cores, alongside their integrated GPU IP. Scaling these up to meet the console requirements in the next two or three years would be straightforward.

This isn't exclusive to Microsoft, but I am curious going forward if either x86 APU console maker is going to find a significant win going to ARM while still using AMD. AMD is going to have a mostly equivalent x86, assuming AMD is still around.
However, if AMD's x86 is in the cards, I'm curious if Intel will be interested enough to make another run like it was rumored to have been doing for the current gen. With the gradual expansion into more generalized foundry work and hints at diversifying in the face of maturing markets and increasing low-end competition, it might be able to float an offer.
I think there are enough genuine questions about everyone's ability to provide a workable process scaling trend, with Intel the most likely to keep its schedule, and Intel has just as much or more integration research compared to just about everyone else.
They also continue to push the envelope on interfaces and communication methods, while AMD trades away engineers and IP.

AMD has not provided evidence of sustained and significant improvement in their tech, and I think there may be a long-term criticism that relying on semi-custom to fund R&D means getting paid insufficiently to do R&D whose scope is as limited as the needs of customers whose ambitions do not go as far as AMD needs to go. AMD's leakiness in terms of project disclosures and cross-project comparisons concerning the current gen consoles, which leaked out months ahead of everything, may indicate that the current console makers might have an interest in making sure they don't share this contractor again.
 
Despite its abysmal cumulative sales, the Wii U is already over 2 years old.
By late 2016 (as per sebbi's suggestion), the Wii U will be 4 years old, not 3.

4 years is how long it took between launching the original Xbox and the X360. Did that hinder the X360's sales and "loyalty factor"?
Moreover, the original Xbox sold something like 25 million units during its lifetime, whereas the Wii U will be lucky if it ever reaches 20 million.
BTW, Nintendo took about 5 years between the GameCube and the Wii.

Given how the Wii U has been dragging Nintendo down into unprecedented losses (except for last quarter), I think these "loyal Nintendo supporters" aren't among their top concerns. Especially not after 4 years of "service".

MS launched the Xbox late and the 360 came at the tail end of the gen at a time when the market was expecting new gen hardware to appear.

Nintendo launched early and would be relaunching mid gen. That's a different proposition.

The 360 marked the beginning of the seventh gen; I doubt anyone is going to consider the release of a new Nintendo console in 2016 as the start of the ninth.

I believe Nintendo has a better chance of emulating Sega than MS if it tries to relaunch so soon.
 
MS launched the Xbox late and the 360 came at the tail end of the gen at a time when the market was expecting new gen hardware to appear.

Nintendo launched early and would be relaunching mid gen. That's a different proposition.

Nintendo launched the Wii U early?
How is 2012 considered early for a console that merely met the specs of its 6-year-old competitors, while its predecessor, the Wii, had fallen into oblivion for 2 years in terms of sales and mindshare?


The Wii U is a very late 7th gen console - in terms of graphics/CPU power, online capabilities/services, multimedia support, etc.
Anything coming from Nintendo in late 2016 will be just that again: a very late 8th gen console. No one is even entertaining the idea that Nintendo will surpass the PS4 in processing power. The later this console gets released, the deeper the hole Nintendo is digging for itself.
 
Nintendo launched the Wii U early?
How is 2012 considered early for a console that merely met the specs of its 6-year-old competitors, while its predecessor, the Wii, had fallen into oblivion for 2 years in terms of sales and mindshare?


The Wii U is a very late 7th gen console - in terms of graphics/CPU power, online capabilities/services, multimedia support, etc.
...

People expected new hardware for 2013, and Nintendo delivered it in 2012, one year before its competitors; therefore it was early. The hardware itself is irrelevant - only the release date matters, since it was the new console from Nintendo for the next 5-7 years to come.
(i.e. new generation = new console, no matter what the hardware can do.)

Anyway, we are drifting; the topic at hand is predicting next generation console hardware from the different companies and possible upcoming entrants.
(Apple, Amazon ?...)
 
Death of AMD and price of Intel too high? Or the ARM Koolaid was served. ;)
I don't think anybody needs any Koolaid to understand that ARM-based chips currently sell more than all other chips combined, especially in consumer-oriented devices. Software development nowadays takes more time than hardware development. Gone are the days of completely custom console hardware designs. The new consoles will be based on commonly used CPU and GPU cores, and commonly used memory architectures. There might be some minor tweaks in the way the cores are connected, some added DSP-style processors on the same die (such as audio mixing or video encoding/decoding), or some latency-reducing technologies if 3D headgear is going to be the big new thing. Hardware is not going to be the biggest cost. It's going to be the new software.

Simple console "game launcher / dashboard" style software is no longer going to be enough. I think we will see custom Android versions, iOS and Windows (on ARM) that all support their respective app/game stores. Consoles are consumer devices with simple controllers, so simple mobile UIs designed for touch controls are much easier to use compared to PC UIs needing mouse and keyboard. Running ARM-based (phone & tablet) software is thus much more important than running x86 desktop applications.
How is a Series7XE with 32 cores ever going to be on par with the PS4's 18 CUs @800MHz?

Perhaps you meant a Series7XT with 16 clusters (and 32 cores per cluster)? At 1GHz, we'd be looking at around 1 TFLOPS FP32 + 2 TFLOPS FP16 (+ an optional 512 GFLOPS FP64) (+ 256 GFLOPS FP16/32 from the SFUs? I'm not sure those are usable for these calculations.)
Imgtech's top end solution is in fact capable of competing with the new-gen consoles, but the 7XE would be fit for a handheld successor at best.
Yes, I was talking about that Series 7 model that was claimed to scale up to 1.5 TFLOPS. I didn't do the math myself, just quickly browsed through the blog + articles around the internet. A chip that can do either 1 TFLOPS FP32 or 2 TFLOPS FP16 (it can't do both at the same time) should be enough to compete against the PS4 (assuming similar efficiency), as all post-process effects and a big chunk of the lighting math can be done in FP16 (with practically zero image quality degradation compared to FP32).
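
As a small illustration of why FP16 holds up for LDR color math, here's a round-trip sketch (assuming an x86 compiler with F16C support; the GPU shader situation is analogous):

```cpp
#include <immintrin.h>  // F16C scalar conversions; compile with -mf16c
#include <cstdio>
#include <cmath>

int main() {
    // Round-trip every 8-bit color value through half precision and
    // check whether the error could ever survive output quantization.
    float worst = 0.0f;
    for (int i = 0; i <= 255; ++i) {
        float c = i / 255.0f;
        float back = _cvtsh_ss(_cvtss_sh(c, 0));  // FP32 -> FP16 -> FP32
        worst = std::fmax(worst, std::fabs(back - c));
    }
    // FP16 has a 10-bit mantissa, so in [0.5, 1.0] the spacing is 2^-11
    // (~0.00049) - well below the 1/255 (~0.0039) quantization step.
    std::printf("worst FP16 round-trip error: %g (8-bit step: %g)\n",
                worst, 1.0f / 255.0f);
    return 0;
}
```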
Most of the big crowd might not notice but those who are loyal and support Nintendo would definitely notice that their $250/$300 investment is suddenly at EOL after only 3 years.
$300 is not a big investment anymore. People are upgrading their $600 iPhones every other year (when the 2-year contract ends) and at the same time upgrading their $500 iPads (maybe a little less often). Obviously for the phones you don't need to pay the whole price immediately, and you can get tablets with contracts as well nowadays. Maybe console manufacturers should also offer 2-year contracts ($50 up front + $10 per month, i.e. $290 over the contract) to make the price seem comparable.
Is there no way to get to 8 physical cores? There are still some reasons to keep at least one core for the system layer, but taking a reserve core out of 6 is going to have an impact.
I don't know. The only 64 bit model currently announced (I6400) can be configured to have up to 6 cores, and up to 4 threads per core. Maybe there's a scaling limit in the coherency protocols or the L2 cache. AMD provided Sony and Microsoft with CPUs that have two 4-core Jaguar modules (with no shared cache between the modules). A similar two-module configuration would yield 12 cores + 24 threads (assuming 2 threads per core). That should be enough even if they fully reserved some CPU cores for the OS. But traditionally iOS and Android (MIPS-based Android already exists and would be a good candidate for the OS) do not reserve full cores for the OS, and do not have that many background tasks running. Mostly it is just running the active software while all the other software is waiting (zero CPU usage). Memory is still reserved for background software, meaning that 8 GB+ of RAM would be necessary (many Android phones are already sporting 3 GB of memory).
Perhaps if Nvidia opens up the internals of their optimizer and provides Sony with a way into the secure memory space for the processor.
Otherwise, I am not sure I would characterize Denver as being lower-access than a regular OoO core. It may be lower-access for software, but that would be software existing as CPU-only code, even below Sony's hypervisor.
I didn't actually mean that Denver provides lower-level access. I meant that Sony developers are used to optimizing code at quite a low level. Denver's wide in-order design definitely benefits from code that is specially optimized for that CPU (and generally from code that minimizes cache misses). Not having full control over the actual processor-level code might obviously cause grey hairs for some console programmers :)

Denver is currently NVIDIA's only 64 bit CPU. The Tegra K1 model with four Cortex-A15 cores is 32 bit. Obviously they could integrate a 64 bit core from ARM, but I believe NVIDIA would prefer selling their own CPU IP instead.
This isn't exclusive to Microsoft, but I am curious going forward if either x86 APU console maker is going to find a significant win going to ARM while still using AMD. AMD is going to have a mostly equivalent x86, assuming AMD is still around.
ARM would allow them to run existing (phone/tablet) OSes and software. x86 would allow them to port their current console OS and software more easily to the new platform. It all depends on how tightly their services and consoles are tied to the existing consumer devices (mostly running ARM software).
However, if AMD's x86 is in the cards, I'm curious if Intel will be interested enough to make another run like it was rumored to have been doing for the current gen. With the gradual expansion into more generalized foundry work and hints at diversifying in the face of maturing markets and increasing low-end competition, it might be able to float an offer.
I think there are enough genuine questions about everyone's ability to provide a workable process scaling trend, with Intel the most likely to keep its schedule, and Intel has just as much or more integration research compared to just about everyone else.
Intel's focus doesn't match console designs that well. They have been scaling up heavily in core counts and cache sizes while improving performance per watt to satisfy the server segment. And they have also been scaling down to meet the demands of lightweight battery-driven devices (such as ultraportable laptops and tablets). The Crystalwell EDRAM L4 die is clearly targeting both of these markets. It both improves performance and reduces power usage (fewer main memory accesses) on high-end laptops. It is important because it allows Intel to price their chips higher (because an EDRAM-based laptop doesn't need a discrete GPU). But I believe its main long-term goal will be to compete with IBM's EDRAM-based memory solutions in the high performance markets. Intel now has hardware transactional memory support as well (the TSX extensions).

However, Intel must have noticed already that the current generation consoles have been a perfect promotion for GPU compute. These machines have limited CPU power, but very flexible modern GPUs that can feed themselves. For example, we have moved most of our graphics engine code to the GPU. We used to spend half of our CPU cores to set up graphics rendering, but this is no longer the case. The GPU is just so much more efficient at handling this kind of massively parallel code (huge numbers of objects being animated/culled/drawn per frame). Some developers are also moving physics simulation to the GPU. The question becomes: what high performance game tasks remain on the CPU? If Intel sees a threat coming from the GPU direction, they might act.
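
To give a feel for why this work migrates so well, consider bounding-sphere frustum culling: every object's test is independent of every other's, which is exactly the shape a compute shader runs with one thread per object. Here's a C++ sketch of the logic only (structure and names are illustrative):

```cpp
#include <vector>

struct Plane  { float nx, ny, nz, d; };  // plane n.p + d = 0, normal pointing inward
struct Sphere { float x, y, z, r; };     // object bounding sphere

// Each iteration is independent, so on the GPU this becomes one thread per
// object appending survivors to a buffer that feeds indirect draw calls -
// the CPU never touches the per-object data at all.
std::vector<unsigned> cull(const Plane (&frustum)[6],
                           const std::vector<Sphere>& objects) {
    std::vector<unsigned> visible;
    for (unsigned i = 0; i < objects.size(); ++i) {
        const Sphere& s = objects[i];
        bool inside = true;
        for (int p = 0; p < 6 && inside; ++p)
            inside = frustum[p].nx * s.x + frustum[p].ny * s.y +
                     frustum[p].nz * s.z + frustum[p].d >= -s.r;
        if (inside) visible.push_back(i);
    }
    return visible;
}
```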

There is no better way to ensure that TSX and AVX-512 become widely used than putting these technologies in the next generation of consoles. I love both of these extensions. TSX is brilliant (opening up completely new possibilities) and AVX-512 is the best vector instruction set I have seen in a long, long time. Intel has definitely listened to the programmers and given them very nice tools. A 16 core (32 thread) Intel CPU with AVX-512 + TSX (+ good parallel programming tools + debuggers) in a next gen console would definitely stop the slow (but deadly) migration to GPU compute. Give me a console CPU like this in 2017 and I will be happy :)
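
For anyone who hasn't tried TSX, the basic RTM pattern looks something like this (a minimal sketch, not production code; the function and counters are made up for illustration):

```cpp
#include <immintrin.h>  // RTM intrinsics (_xbegin/_xend/_xabort); compile with -mrtm
#include <atomic>

static long counters[64];
static std::atomic<int> fallback_lock{0};

// Try a hardware transaction first; fall back to a spinlock on abort.
// Transactions can abort at any time, so the fallback path is mandatory.
void add_to_two_counters(int a, int b) {
    if (_xbegin() == _XBEGIN_STARTED) {
        // Reading the lock puts it in our read-set: if a fallback writer
        // takes it, this transaction aborts instead of racing.
        if (fallback_lock.load(std::memory_order_relaxed)) _xabort(0xff);
        ++counters[a];
        ++counters[b];
        _xend();  // both increments commit atomically
        return;
    }
    while (fallback_lock.exchange(1, std::memory_order_acquire)) { /* spin */ }
    ++counters[a];
    ++counters[b];
    fallback_lock.store(0, std::memory_order_release);
}
```

The appeal is that independent updates to different counters can proceed concurrently without ever serializing on a lock.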

But I doubt Intel would be willing to sell their big EDRAM dies and 16 core CPUs at a price point suitable for consumer boxes. All these technologies have been designed for high-profit segments. I highly doubt that this path is economically viable for them.
 
Wow for the ARM love! Why is x86 off the cards in your opinion?

Here is why I think the consoles will sway towards ARM next time... this old article. ARM was a near miss for the X1/PS4.

http://www.forbes.com/sites/patrick...ns-microsoft-and-sony-chose-amd-for-consoles/

Of course I guess it's possible AMD will make those companies offers they can't refuse, especially in AMD's condition. But then again, isn't AMD making ARM CPUs soon anyway?

So if one/both/all three go with ARM, that opens up Nvidia as well.
 
2013 was the absolute worst year to launch a console from a technology perspective. Next gen was very late. While the XB1 has superior hardware with the exception of the GPU, they will struggle to maintain parity in 3rd party games. If I were MS, I would launch a new, high-end console as soon as they can: 16 GB of 1 TB/s HBM, a substantially better CPU and a 4 TF GPU. Something like fall 2016 at $499, with the XB1 at $249. That would be the last console ever produced.
 
2013 was the absolute worst year to launch a console from a technology perspective. Next gen was very late. While the XB1 has superior hardware with the exception of the GPU, they will struggle to maintain parity in 3rd party games. If I were MS, I would launch a new, high-end console as soon as they can: 16 GB of 1 TB/s HBM, a substantially better CPU and a 4 TF GPU. Something like fall 2016 at $499, with the XB1 at $249. That would be the last console ever produced.
That might be suicidal.
 
$300 is not a big investment anymore. People are upgrading their $600 iPhones every other year (when the 2-year contract ends) and at the same time upgrading their $500 iPads (maybe a little less often). Obviously for the phones you don't need to pay the whole price immediately, and you can get tablets with contracts as well nowadays. Maybe console manufacturers should also offer 2-year contracts ($50 up front + $10 per month, i.e. $290 over the contract) to make the price seem comparable.

That's the iPhone and iPad, not a console. The PS4 and XB1 can't sell at iPad volumes at iPad prices. What one is willing to pay for one device doesn't naturally translate into a price point one is willing to pay for any device.

If Nintendo relaunches in 2016, what does it do when the ninth gen rolls around? Pray for MS and Sony to wait until 2021 or beyond? Cut short the life of its platform again? Or accept the prospect of launching 2-3 years later than its competitors with each new gen going forward?

The Wii U, XB1 and PS4 will probably have eaten through 60-80 million gamers by the end of 2016. Furthermore, those consoles will have gone through several price drops. Nintendo would end up releasing a new console while looking at 1/3 to 1/2 of the market already locked up. It would also have to forego the premiums that a new console usually enjoys at the start of a gen in order to be price-competitive with the older XB1 and PS4. And Nintendo's new console would have to compete with the more mature libraries and feature sets of those older consoles.

In my opinion there is a greater chance for failure than success.
 
I don't think anybody needs any Koolaid to understand that ARM-based chips currently sell more than all other chips combined, especially in consumer-oriented devices. Software development nowadays takes more time than hardware development.

My understanding of why more ARM-based chips exist comes from the fact that Intel has pretty much optimized their money-making abilities by not allowing any more companies to make x86 chips, while ARM makes money mostly by selling IP to fabless companies.
Looking at how the 22nm Silvermont chips behave against the latest ARM cores, my take is that performance/power efficiency isn't really an inherent advantage of ARM architectures anymore. This goes hand-in-hand with Mark Cerny's statements about the x86 vs. ARM debate for consoles.

So if Nintendo were to order a SoC from AMD, why would it be ARM instead of x86? Are ARM licenses substantially cheaper?


2013 was the absolute worst year to launch a console from a technology perspective. Next gen was very late. While the XB1 has superior hardware with the exception of the GPU, they will struggle to maintain parity in 3rd party games. If I were MS, I would launch a new, high-end console as soon as they can: 16 GB of 1 TB/s HBM, a substantially better CPU and a 4 TF GPU. Something like fall 2016 at $499, with the XB1 at $249. That would be the last console ever produced.

What do you mean by "superior hardware"? Isn't the GDDR5 in the PS4 considered "superior" to the XBone's DDR3 too?
AFAICT, a statement saying "The PS4 has superior hardware with the exception of the sound DSP" would be a much more accurate description.
 
What do you mean by "superior hardware"? Isn't the GDDR5 in the PS4 considered "superior" to the XBone's DDR3 too?
AFAICT, a statement saying "The PS4 has superior hardware with the exception of the sound DSP" would be a much more accurate description.

Let's not turn this into Xbox vs PS4. I believe the spirit of his post was that the Xbox One has much better hardware than the X360 (previous gen) - that this gen is much better off hardware-wise than previous gens.
 
If there's a market for gamers that upgrade PCs annually or semi-annually, then there's probably a market for consoles like that as well. They would just need to be forwards and backwards compatible.
 