Business Approach Comparison Sony PS4 and Microsoft Xbox

I don't necessarily disagree, just wanted to point out that ESRAM isn't completely trade-off free. It's transistor-expensive - transistors that could instead be used for other things, like more CUs, as in Sony's case...

Exactly. FWIR, ~1.5B transistors dedicated to the 32 MB ESRAM.

Cape Verde = 1.5B transistors

I would have thought a smarter design would stick with 1T-SRAM and increase the size sixfold. 192 MB of scratchpad memory could have yielded something more interesting than what they ended up with. Or MS could have designed the ESRAM to provide enough bandwidth to do something interesting and unique, possible only with ESRAM. Or (ideally) dedicate 64 MB to 1T-SRAM/eDRAM (high bandwidth) and the rest of the transistors to more CUs.
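A rough sanity check suggests the sixfold figure is about right. The cell-per-bit counts below are my own assumptions (the usual rules of thumb: six transistors per SRAM bit, one per 1T-SRAM bit), and they ignore the real-world overhead of sense amps, tags, and redundancy:

```python
# Back-of-the-envelope transistor budgets for the two options above.
# Assumptions (mine, not the poster's): 6T cell/bit for ESRAM,
# 1T cell/bit for 1T-SRAM, zero array overhead.

BITS_PER_MIB = 1024 * 1024 * 8

esram_transistors = 32 * BITS_PER_MIB * 6     # 32 MiB of 6T SRAM
t1sram_transistors = 192 * BITS_PER_MIB * 1   # 192 MiB of 1T-SRAM

print(f"32 MiB 6T ESRAM : {esram_transistors / 1e9:.2f}B transistors")
print(f"192 MiB 1T-SRAM : {t1sram_transistors / 1e9:.2f}B transistors")
# both come out to ~1.61B, in the ballpark of the ~1.5B figure cited above
```

So under these idealized cell counts, 192 MB of 1T-SRAM really would land in the same transistor budget as 32 MB of 6T ESRAM.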

Given AMD's ability to source memory controllers in their GPUs that can support either DDR3 or GDDR5, I'd think they could have offered MS that option as a way out, rather than dropping this boat anchor on the design.

Honestly, I'm still dumbfounded that the team green-lit a design which called for 1.5B transistors just to make up for anemic DDR3. Why not 4 GB DDR3 + 4 GB GDDR5? The whole thing is a lot of spend for not much performance, just to avoid GDDR5.

Bottom line, their business decision to dedicate 3 GB to non-gaming functions is what led them into this cluster **** design. From there, I'd call it a failure on the design team's part not to push back against the suits for more performance.
 
I don't necessarily disagree, just wanted to point out that ESRAM isn't completely trade-off free. It's transistor-expensive - transistors that could instead be used for other things, like more CUs, as in Sony's case. It's a simple tradeoff. One doesn't balance out the other.

The ESRAM, as good as it is at solving part of the bandwidth requirements, also leaves less room for computational resources.

As others have pointed out - in the design phase, it was more sensible to go with high-capacity memory and an ESRAM solution than to bet on (too) expensive / lower-capacity GDDR5 memory that may or may not be available in the time window you want to launch. Sony struck gold, because clearly no one expected them to come anywhere near offering the same amount of memory. How much will the "paper difference" turn out to be on screen? Depends on the software, IMO - but as always in such cases, a lot will come down to subjectivity and how critically the user is looking.

The biggest drawback to launching with less performance is IMO not the performance itself, but image and how news like this spreads. There is a bit of damage if the general perception is that one is "better" than the other in technical terms. I also believe general consumers are becoming more aware of technical specs (no matter how ineffective these are at giving a true picture), thanks to an increased interest in tablets and smartphones, where specs are thrown around with every new model that comes out. It won't matter to the loyal brand enthusiast, but it might sway the one or other neutral, undecided customer who isn't tied to any specific platform and simply wants the "best" purchase for himself or his kids.

I would also agree, though, that features and the overall business model could have a bigger impact than specs - but this is still difficult to assess, since little is known about Sony's features outside of gaming or Microsoft's true support for games.


whether the esram vs. gddr5-no-esram decision is a screwup is difficult to really assess without access to complex cost spreadsheets, now and 5 years from now - and also until we see how performance goes and how much it turns out to matter.

it's possible xbone leverages cheaper ddr3 to undercut ps4 by $100, and thus gains a sales edge throughout the generation. it's possible they screwed up and ps4 has a similar bom with more power. i'm not sure how we'll ever really know.

for example, i've been of the opinion all along that xbone will have a lower bom than ps4, but that ms won't price it lower, at least initially. they will count on kinect, apps, and tv stuff to give it value parity in consumers' eyes. they will be glad to keep any extra money a lower bom could give them. we've heard they don't want to lose money on hardware anymore.

so whatever the pricing situation, we still won't know what's going on behind the scenes - who is making more, or losing less, on the hardware.

but i'm guilty of making surface judgments too, like "damn, that xbone soc die is big, that esram isn't looking good".
 
I think some of you are not giving enough credit to the engineers at Sony and MS. Clearly they knew what the priorities were for their respective companies and proceeded to design systems around those requirements within their budgets. The fact that one machine has superior specs should in no way be construed as an error on their part - it's a difference in priorities.

There is no way the leadership team would sign off on a silicon budget that increased cost but could not be tapped into. Console system design is always about tradeoffs: a faster chip means more money on cooling; more cooling means less money for chips. The engineers go through countless iterations of prototypes to determine where the sweet spot is.

Further, I find it more than ironic that many of those praising the new Xbox for offering services that extend beyond gaming crucified the PS3 for a similar approach with regard to Blu-ray. Likewise, many who are hyping the PS4 for its gaming focus were critical of the 360 for its modular HD DVD drive, lack of HDMI, and so on, because they felt it made sense to provide additional value outside of gaming.

Now that the priorities are switched, the right answer has changed. This says a lot more about manufacturer allegiance than it does about the pros and cons of either approach.
 
...
Further, I find it more than ironic that many of those praising the new Xbox for offering services that extend beyond gaming crucified the PS3 for a similar approach with regard to Blu-ray. Likewise, many who are hyping the PS4 for its gaming focus were critical of the 360 for its modular HD DVD drive, lack of HDMI, and so on, because they felt it made sense to provide additional value outside of gaming.

Now that the priorities are switched, the right answer has changed. This says a lot more about manufacturer allegiance than it does about the pros and cons of either approach.

I've been reading this board for years, and back in the day many questioned the PS3's design for its cost implications. I was in the camp that thought it was kind of a greasy thing to do: bundling a Blu-ray drive in the box to win a format war on the backs of PS fans' wallets.

The Blu-ray drive wasn't just expensive laser diodes; it also saddled them with a mandatory HDD, raising costs further.

Sony no doubt took a hit for this. See PS3 sales vs. PS2.

Now it seems the shoe is on the other foot, but with much less compelling hardware and questionable policies to boot. We'll see if the suits pegged their consumer well, or if we see a similar erosion of their base (~50%).
 
Except on a console there's nothing stopping you from using GPGPU for whatever you feel like. Also, do you really think Sony hasn't thought about this and you are the only one to catch it? They employ people whose literal job is to make this design.

I want to take the opportunity here to point out again that GPGPU is not a magic bullet, and to show how inefficient GPGPU can be on problems that are not a match for the paradigm.
For the sake of argument, we'll claim that GPUs have 100x the flops of a single CPU thread, because it makes the math easier.

This is not an uncommon scenario: you take problem A, which needs a lot of flops, and write a simple single-threaded CPU-based solution. It runs in what we'll call time 1, and if you do the math, you get perhaps 10% of the peak flop usage of the CPU - we'll call this 10% efficiency.

You very excitedly convert problem A to the GPU and run it (using the entire GPU), expecting to see huge improvements. Instead it runs in time 1.5 - yes, actually slower than the single-threaded CPU version - using an astonishing 0.066% of the overall GPU flops.

You then stare blankly at the screen and start a series of seemingly counterintuitive optimizations that involve copying the source data many times and building LOD-like pyramids out of it. You run the optimized version on the entire GPU and it runs in time 0.1 - 10x faster than the original - utilizing a whopping 1% of the GPU's ALU resources. And worse, you spent a ton of bandwidth on all that data copying.

That sounds fictitious, but it's why almost no one does general rigid body dynamics on the GPU.
There are a lot of problems where GPGPU has substantial value: particle systems will run 100x faster than the CPU implementation (assuming our original fictitious multiplier), because they are a good fit. Which is why systems like PhysX and Havok FX accelerate them.
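For what it's worth, the utilization figures in the scenario above do check out. A quick sketch, using only the post's own stated assumptions (GPU peak = 100x one CPU thread, naive CPU version at 10% of CPU peak):

```python
# Reproducing the utilization numbers from the GPGPU scenario above.
# Units are arbitrary: cpu_peak = 1 flop/time, gpu_peak = 100x that.

cpu_peak, gpu_peak = 1.0, 100.0
useful_work = 0.10 * cpu_peak * 1.0   # flops the problem needs: time 1 at 10% efficiency

gpu_naive_time = 1.5                  # naive port is slower than the CPU version
naive_util = useful_work / (gpu_peak * gpu_naive_time)
print(f"naive GPU port : {naive_util:.3%} of GPU peak")   # ~0.067%, the post's 0.066%

gpu_opt_time = 0.1                    # optimized port: 10x faster than the CPU version
opt_util = useful_work / (gpu_peak * gpu_opt_time)
print(f"optimized port : {opt_util:.1%} of GPU peak")     # 1.0%
```

The same amount of useful work divided by a 100x larger peak is what produces those tiny utilization percentages, even when the wall-clock time improves.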
 
I think we can all agree that this will pretty much be settled with the quality of 3rd party games. Difference is that before when there was any gap like this (PSOne vs N64, PS2 vs. Gamecube/Xbox), the latter console came out 1.5-2 years later and never caught up. I'd also say that it's not about initial pricing (since they are absolutely going to sell out this year), but what does pricing look like next holiday? Who is going to shift over to 20nm first? Will Sony really be able to get a steady supply of 4Gb GDDR5 chips? And so on...

Grade-school level thinking would say "PS4 more powerful than Xbox One, therefore PS4 wins", but any student of video game history would know that's almost never been true (I think the SNES was the last time it was 20 years ago).
 
I think some of you are not giving enough credit to the engineers at Sony and MS. Clearly they knew what the priorities were for their respective companies and proceeded to design systems around these requirements within their budgets. The fact that one machine has superior specs in no way should be construed as an error on their part - its a difference in priority.

My sentiments. Plus, it's more than the hardware. The software and workflow tools are also very important.

There will always be challenges at some point in the dev cycle (e.g., stalling). It's the developers that make the magic happen.

[size=-2]Don't forget the testers[/size]
 
I want to take the opportunity here to point out again that GPGPU is not a magic bullet, and to show how inefficient GPGPU can be on problems that are not a match for the paradigm.
For the sake of argument, we'll claim that GPUs have 100x the flops of a single CPU thread, because it makes the math easier.

This is not an uncommon scenario: you take problem A, which needs a lot of flops, and write a simple single-threaded CPU-based solution. It runs in what we'll call time 1, and if you do the math, you get perhaps 10% of the peak flop usage of the CPU - we'll call this 10% efficiency.

You very excitedly convert problem A to the GPU and run it (using the entire GPU), expecting to see huge improvements. Instead it runs in time 1.5 - yes, actually slower than the single-threaded CPU version - using an astonishing 0.066% of the overall GPU flops.

You then stare blankly at the screen and start a series of seemingly counterintuitive optimizations that involve copying the source data many times and building LOD-like pyramids out of it. You run the optimized version on the entire GPU and it runs in time 0.1 - 10x faster than the original - utilizing a whopping 1% of the GPU's ALU resources. And worse, you spent a ton of bandwidth on all that data copying.

That sounds fictitious, but it's why almost no one does general rigid body dynamics on the GPU.
There are a lot of problems where GPGPU has substantial value: particle systems will run 100x faster than the CPU implementation (assuming our original fictitious multiplier), because they are a good fit. Which is why systems like PhysX and Havok FX accelerate them.

Are the current CUs less generically applicable than the SPEs were in Cell? More or less bandwidth-efficient?

Anyway, you make a good point regardless, but I do wonder how much of a difference the direct path from CPU core to GPU/CU that is now made available makes in this regard. I think that difference may be quite ... large!
 
How much support did the PS2 get after the Xbox 360? Let alone after its actual successor, the PS3?

And it was the most popular console ever.

Old consoles die pretty fast.

I am not talking about after. If MS scheduled 15 exclusives for the XB1's first year, then it had to divert resources to that endeavor at least 2-3 years ago - meaning instead of spending resources on the 360, which targets 77 million users, it targeted the XB1. MS did this even though it plans to sell another 25 million 360s.
 
I don't know. Ask Sony. 15 games would be pretty average output for them on a single platform. And it sure doesn't appear there will be a shortage of first-party Sony games for the PS4 in the first 12 months. We already know about, what is it, six so far?

Average for Sony. But for MS? In the first year? A lot of us came to the conclusion that MS's strategy going forward would simply be to put out a handful of first- and second-party titles while investing in third-party exclusives (timed exclusives, DLC) and wide third-party support.

When has there ever been a time when MS has put out 15 exclusive pieces of content over a year's time?

If MS has lost focus on gamers, shouldn't we see a decrease in the amount of resources MS pours into gaming content for the XB1 versus the 360?
 
I am not talking about after. If MS scheduled 15 exclusives for the XB1's first year, then it had to divert resources to that endeavor at least 2-3 years ago - meaning instead of spending resources on the 360, which targets 77 million users, it targeted the XB1. MS did this even though it plans to sell another 25 million 360s.

we just got halo 4, forza horizon, and gears judgement.

most games are third party games, and those don't go away.
 
MS has invested in new studios and added talent, so not all of those titles are diverted resources.

My point is that MS has been accused of losing its focus on gamers. From a hardware perspective, that would make sense. From a software perspective, it makes no sense at all.

If you are investing in new studios, when last gen was all about closing studios, then how is that a loss of focus? A company that suddenly doesn't "care about gamers" shouldn't be expanding its development group to accommodate gamers. New studios increase the fixed costs of your development group - something MS seemed to want to minimize last gen. Paying second or third parties for exclusives is a far more frugal approach, because they represent a one-time expense.
 
My point is that MS has been accused of losing its focus on gamers. From a hardware perspective, that would make sense. From a software perspective, it makes no sense at all.

I think we need to be careful about what is being accused and what isn't. There are different groups of people with different interests here, there, everywhere.

Some people here on this board like to discuss from a neutral point of view. For them, the main point of discussion is whether Microsoft will be successful with their business approach or not. They don't necessarily have a vested interest in Microsoft's console.


Some other people, also here on this board, are current Xbox owners. They might be 'accusing' Microsoft of losing its focus because there are reasons to believe that Microsoft is going after a much bigger, wider audience than they did in the past, to some degree at the expense of their current userbase. They are not really interested in Kinect or the entertainment features of the next Xbox, but are more interested in the top-end games they've enjoyed on this and the last generation's Xbox.


Not everyone has the same interest. Even if it turns out to be correct that Microsoft is investing its billions into new games to appeal to a much wider market, and has success with it - it still probably wouldn't change much about those people's general disappointment.

Then again, Microsoft might pull off appealing to a very wide audience while at the same time catering to their existing userbase in a way that keeps them from being swayed over to other platforms. That is the question. And to some degree, how much of the existing loyal Xbox fanbase is willing to overlook the performance difference (which may or may not be detrimental to them) and/or will be happy with the tradeoffs that were made.
 
I think we need to be careful about what is being accused and what isn't. There are different groups of people with different interests here, there, everywhere.

Some people here on this board like to discuss from a neutral point of view. For them, the main point of discussion is whether Microsoft will be successful with their business approach or not. They don't necessarily have a vested interest in Microsoft's console.


Some other people, also here on this board, are current Xbox owners. They might be 'accusing' Microsoft of losing its focus because there are reasons to believe that Microsoft is going after a much bigger, wider audience than they did in the past, to some degree at the expense of their current userbase. They are not really interested in Kinect or the entertainment features of the next Xbox, but are more interested in the top-end games they've enjoyed on this and the last generation's Xbox.


Not everyone has the same interest. Even if it turns out to be correct that Microsoft is investing its billions into new games to appeal to a much wider market, and has success with it - it still probably wouldn't change much about those people's general disappointment.

Then again, Microsoft might pull off appealing to a very wide audience while at the same time catering to their existing userbase in a way that keeps them from being swayed over to other platforms. That is the question. And to some degree, how much of the existing loyal Xbox fanbase is willing to overlook the performance difference (which may or may not be detrimental to them) and/or will be happy with the tradeoffs that were made.

To piggyback on your excellent points... if MS is directly investing more in gaming than it did at any time in any of its previous hardware cycles, it's kind of chintzy to accuse them of not making a "gamers'" console. It's a gamers' console plus.

It's also a unification of services and capabilities across hardware devices, based around a single OS kernel. I don't think there are any real downsides to this approach.
 
Major Nelson said somewhere that E3 will be exclusively about games, for what it's worth ...
 