Technical Comparison: Sony PS4 and Microsoft Xbox One

Not planning on wading too much into your guys' convo here but I can tell you from experience that engineering projects ALWAYS have wiggle room designed in specifically for this sort of thing. Especially projects that are competitive in nature and in areas of such projects that can be modified late in the game (like clocks).

Right, usually room for wiggle down.
 
More likely is that it's mostly marketing BS. Has everyone lost their marbles? :???:

There may be some useful use cases for cloud computing, but saying it's going to make a console X times more powerful is just rubbish.

I think it's less marketing lies and more buzzwords with some truth hidden behind them. The topic isn't obvious. This isn't the same cloud gaming we all know doesn't work (yet), a la OnLive, or even Gaikai's more limited version of it. This is a more refined approach in the interim. On the comment about graphics being improved, I think semantics may be confusing some people. I think visuals can be improved in three potential ways, but I may be wrong:

1) Using the cloud to do things like complicated physics computations that govern animations beyond the player's "sphere of direct influence" so to speak. This could be global stuff, like natural disasters deforming the terrain, weather fx, etc.

2) Baking GI in pseudo real time (aka real time but with a delay/lag) for the environment beyond the player's "sphere of influence".

3) Moving GPGPU compute tasks out to the cloud and thus freeing up local GPU resources to worry primarily about rendering. As time goes on, in theory devs would become more proficient at finding latency-insensitive computations to move to the cloud, gradually freeing up more of the 1 TF+ local graphics pipeline for rendering.
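To make (3) a bit more concrete, here's a rough sketch of what I mean by that kind of split; a thread pool stands in for the remote server, and every name and number is made up purely for illustration:

```python
# Sketch: latency-sensitive work runs every frame locally, latency-tolerant jobs
# are shipped off and folded back in whenever their results happen to arrive.
# The "cloud" here is just a thread pool standing in for a remote server.
from concurrent.futures import ThreadPoolExecutor
import time

cloud = ThreadPoolExecutor(max_workers=4)   # stand-in for remote compute
pending = []                                # outstanding "cloud" jobs

def distant_simulation(region_state):
    time.sleep(0.120)                       # pretend round trip plus server work
    return region_state + 1                 # pretend updated far-field state

far_field = 0
for frame in range(300):                    # ~5 seconds at 60 fps
    # Latency-sensitive work (player input, local physics, rendering) would run here.

    # Kick off a latency-tolerant job every half second or so.
    if frame % 30 == 0:
        pending.append(cloud.submit(distant_simulation, far_field))

    # Fold in any results that happen to be ready; stale data is acceptable here.
    still_pending = []
    for job in pending:
        if job.done():
            far_field = job.result()
        else:
            still_pending.append(job)
    pending = still_pending

    time.sleep(1 / 60)                      # frame-pacing stand-in

cloud.shutdown()
```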



Scott_Arm, I like that you want to pick a small example and analyse it with some data and move on from that. That's a good starting point for a semi-quantitative discussion. Just a note though, whatever conclusions you arrive at in terms of what is realistic for streaming conditions should also consider that devs can always adjust the size of the player's "sphere of influence" in response to those changing conditions.
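As a toy illustration of that knob (nothing like a real engine API; the thresholds are plucked out of thin air), the locally simulated radius could simply grow as the measured round-trip time gets worse, collapsing to "everything local" when the connection is poor or absent:

```python
# Toy illustration: shrink the cloud-assisted region (i.e. grow the locally
# simulated "sphere of influence") as the measured RTT gets worse.
def local_sim_radius(rtt_ms, base_radius_m=50.0):
    if rtt_ms < 60:
        return base_radius_m        # good connection: offload aggressively
    elif rtt_ms < 150:
        return base_radius_m * 2    # mediocre: keep more of the world local
    else:
        return float("inf")         # poor or offline: simulate everything locally

for rtt in (30, 100, 400):
    print(f"{rtt} ms RTT -> {local_sim_radius(rtt)} m simulated locally")
```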
 
Given that Gaikai was made to render games being played in real time, I don't understand how it couldn't also be used in the same way that Xbox One's cloud setup is allegedly able to. I have my doubts that either will be used in a tangible way. There are too many variables, like what happens to the game when you aren't online to benefit from the cloud. The only way to make it consistent is to make even single-player games online only.
 
Given that Gaikai was made to render games being played in real time, I don't understand how it couldn't also be used in the same way that Xbox One's cloud setup is allegedly able to. I have my doubts that either will be used in a tangible way. There are too many variables, like what happens to the game when you aren't online to benefit from the cloud. The only way to make it consistent is to make even single-player games online only.

Gaikai renders everything remotely; there is no division of game state between the cloud and the local render device. In MS's cloud vision, only some subdivision of the game state is remote, returning data to the local render device for final display. Gaikai is simply remote gameplay or streaming tech; there is no way to use it for this purpose.
 
In the Ars Technica interview, Booty said the cloud could do things like cloth or physics simulation.

So if a character is moving in any way, his clothing would have to move in real time.

How practical would it be to delegate that cloth simulation to the cloud and integrate it seamlessly in real-time with whatever the player who's controlling the character may be trying to make the character do?

If the rendering and animation of the character have to wait for the cloth simulation to come back after the player's input has been registered, would the animation be delayed waiting for the cloud data?

Or will developers just use canned mo-cap animation, as they do with sports games for instance, showing canned collision animation rather than generating the animation from physics calculations from the cloud?
 
I don't think I'd can the idea of the cloud being able to help out in real time.

If I have a 16 ms rendering budget per frame, it doesn't matter where within those 16 ms the work happens, as long as it completes before the frame is due. So if I had a 6 ms RTT to an MS data centre, I'd have some 10 ms of high-end server compute just begging to be used. The only issue I really see is how MS is going to distribute enough servers, close enough to people, to make that a reality.
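Back-of-the-envelope (and yes, the 6 ms RTT is an optimistic assumption; real internet round trips are usually much higher):

```python
# Frame-budget arithmetic behind the argument above; the 6 ms RTT is assumed.
frame_budget_ms = 1000 / 60              # ~16.7 ms per frame at 60 fps
rtt_ms = 6                               # assumed round trip to the data centre
server_window_ms = frame_budget_ms - rtt_ms
print(f"~{server_window_ms:.1f} ms of server time usable inside one frame")  # ~10.7 ms
```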

They blew the load on building Azure and most businesses just shrugged their shoulders. May as well use it for something.
 
So the Xbox One will be the next GameCube then?
The GC was the most balanced console of its generation. It had a great CPU and fast memory back then. In fact the GameCube and the Xbox were fairly close, memory aside.

In that sense, power-wise, I consider the Xbox One to be to this generation what the GC was to the great 128-bit generation.
 
You people want to see additional details and how all this technical debate translates into actual games? Thanks to Metodologic for the links.

Drive Club vs Forza Motorsport 5 face-off. :smile:

[Video: 1 minute 13 seconds]

[Video: 8 seconds]

 
Pictures comparison

DRIVE CLUB

[Two Drive Club screenshots]

FORZA MOTORSPORT 5

[Two Forza Motorsport 5 screenshots]
 
Gaikai renders everything remotely; there is no division of game state between the cloud and the local render device. In MS's cloud vision, only some subdivision of the game state is remote, returning data to the local render device for final display. Gaikai is simply remote gameplay or streaming tech; there is no way to use it for this purpose.

Understanding this difference, it seems, though, that Gaikai is fast enough to offer playable real-time experiences. Could Gaikai's framework not be adapted in a similar manner? The powerful foundation technology seems intact. Again, I have doubts about this being used in a manner much different from the way an MMO sets up scenarios and AI. My biggest concern with cloud compute is a future of inconsistent experiences or a 24/7 online requirement.
 
Understanding this difference, it seems, though, that Gaikai is fast enough to offer playable real-time experiences. Could Gaikai's framework not be adapted in a similar manner? The powerful foundation technology seems intact. Again, I have doubts about this being used in a manner much different from the way an MMO sets up scenarios and AI. My biggest concern with cloud compute is a future of inconsistent experiences or a 24/7 online requirement.

In many ways it doesn't have to be adapted. The back end servers for Gaikai could be running SLI'd Titans (and beyond) and therefore rendering the games at top end PC level before streaming the result to the PS4.
 
In many ways it doesn't have to be adapted. The back end servers for Gaikai could be running SLI'd Titans (and beyond) and therefore rendering the games at top end PC level before streaming the result to the PS4.

Nothing would stop MS from doing the same thing (turning around the same "Sony can do the same" logic you always hear).

Anyway, in all cases cloud warfare is going to nerf Sony's supposed advantage: a superior local box.

But I don't see Sony doing anything of that sort for a long while. I don't know if OnLive/Gaikai were quite ready for prime time. Picture quality sucked.

I never tried OnLive's image quality update that Eurogamer explored, so I don't know how much they improved.
 
I have my doubts that either will be used in a tangible way. There are too many variables, like what happens to the game when you aren't online to benefit from the cloud. The only way to make it consistent is to make even single-player games online only.


Right, I don't think it's too big a stretch to soon see online-only games.

They simply won't be playable without an internet connection, and why not?

If game X has significantly better graphics but is "online only", people won't hesitate to buy it given a choice between that and lower-quality offline games.

Let's say you have a cloud-enhanced Halo 5 with amazing graphics that is online only, versus a local-only Gears of War 4 with lesser graphics that isn't. The former will be embraced by consumers, I am sure of it.
 
Nothing would stop MS from doing the same thing (turning around the same "Sony can do the same" logic you always hear).

Anyway, in all cases cloud warfare is going to nerf Sony's supposed advantage: a superior local box.

But I don't see Sony doing anything of that sort for a long while. I don't know if OnLive/Gaikai were quite ready for prime time. Picture quality sucked.

I never tried OnLive's image quality update that Eurogamer explored, so I don't know how much they improved.

The question we have to wait on before we can say it gives the Xbone a major advantage is how well it can be used, and what they are going to use it for; it is only ~306 GFLOPS, about half the difference between the consoles.
 
The question we have to wait on before we can say it gives the Xbone a major advantage is how well it can be used, and what they are going to use it for; it is only ~306 GFLOPS, about half the difference between the consoles.

Yup. They need to actually demonstrate something concrete, in a real game. It's all so vague right now.

Not sure where you're getting the 306 GFLOPS figure? I thought it was an additional 3x Xbones?

Edit: Oh, you're going by the 3x CPU thing...

Fair enough, but they also called it as powerful as 40x 360s, which doesn't make sense if they only meant the CPU, because the X360 CPU was ~100 GFLOPS as well IIRC. So at best, in CPU flops, you'd be at something like 4x a 360 with the cloud assist, not 40x. It would not be a true statement unless something else was going on. Anyway.
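Rough numbers, assuming eight Jaguar cores at 1.6 GHz with about 8 single-precision FLOPs per cycle and a Xenon peak of roughly 115 GFLOPS (both figures approximate):

```python
# Rough CPU-flops arithmetic behind the 3x and 40x claims (all figures approximate).
jaguar_gflops = 8 * 1.6 * 8           # 8 cores x 1.6 GHz x ~8 SP FLOPs/cycle ~= 102
xenon_gflops = 115                    # Xbox 360 CPU peak, roughly

cloud_cpu_gflops = 3 * jaguar_gflops  # the "three Xbox Ones in the cloud" claim
local_plus_cloud = jaguar_gflops + cloud_cpu_gflops

print(round(cloud_cpu_gflops))                    # ~307 GFLOPS, i.e. the "306" figure
print(round(local_plus_cloud / xenon_gflops, 1))  # ~3.6x a 360's CPU, nowhere near 40x
print(40 * xenon_gflops / 1000)                   # "40x a 360" would need ~4.6 TFLOPS of CPU
```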
 
Yup. They need to actually demonstrate something concrete, in a real game. It's all so vague right now.

Not sure where you're getting the 306 GFLOPS figure? I thought it was an additional 3x Xbones?

It seems they like to play fast and loose with words.

"We're provisioning for developers for every physical Xbox One we build, we're provisioning the CPU and storage equivalent of three Xbox Ones on the cloud,"

http://www.oxm.co.uk/54748/xbox-one...e-equivalent-of-three-xbox-ones-in-the-cloud/
 
The question we have to wait on before we can say it gives the Xbone a major advantage is how well it can be used, and what they are going to use it for; it is only ~306 GFLOPS, about half the difference between the consoles.

Are you suggesting that the output of 24 CPU cores in the cloud working in parallel scales directly from the output of 8 working in parallel? I'm not so sure that's right, but I may be wrong. Simply multiplying the assumed flops of the X1 CPU by 3 and calling it a day seems like rather slanted math to me. As time goes by, it would seem via Amdahl's Law that the payoff would become steeper and steeper for such parallelization:

[Chart: Amdahl's Law speedup vs. number of processors for several parallel fractions]
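For anyone who wants the relationship behind that chart, it's just speedup = 1 / ((1 - p) + p / n) for parallel fraction p and n processors; a quick sketch:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
# work that can be parallelised and n is the number of processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.95):
    print(p, [round(amdahl_speedup(p, n), 2) for n in (2, 8, 24, 1024)])
# Even at p = 0.95, 1024 processors give only ~20x: the serial fraction caps the gain.
```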


Also, isn't a flops-per-second argument meaningless anyhow, since we aren't talking about things that are being computed in a time-sensitive manner to begin with?


Btw, does this cloud stuff explain what sweetvar heard about Durango from AMD's engineers? Something about it being like a supercomputer? Seems to make sense now in hindsight, no?
 
In the Ars Technica interview, Booty said the cloud could do things like cloth or physics simulation.

So if a character is moving in any way, his clothing would have to move in real time.

How practical would it be to delegate that cloth simulation to the cloud and integrate it seamlessly in real-time with whatever the player who's controlling the character may be trying to make the character do?

If the rendering and animation of the character have to wait for the cloth simulation to come back after the player's input has been registered, would the animation be delayed waiting for the cloud data?

Or will developers just use canned mo-cap animation, as they do with sports games for instance, showing canned collision animation rather than generating the animation from physics calculations from the cloud?

Developers will have to do both because the network may be flaky. As such, I doubt the server power can be used for the core visual experience. It's not fast enough, and there may be too much duplicated work. The servers may be good for extra AI logic or physics effects. There is no general answer. The developers will likely have to try it out and think of their own solutions (e.g., interpolate or estimate the next value, and then adjust with the server's result).
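A bare-bones sketch of the "estimate, then adjust" idea, assuming the client extrapolates locally and eases toward the server's answer whenever one arrives (the blend factor and the fake server replies are made up for illustration):

```python
# Illustrative predict-then-correct loop: the client keeps animating with a local
# estimate and eases toward the authoritative server value whenever one arrives.
def step(local_value, local_velocity, dt, server_value=None, blend=0.2):
    predicted = local_value + local_velocity * dt   # local prediction keeps things moving
    if server_value is not None:
        # A (possibly stale) server result arrived: ease toward it instead of
        # snapping, so the correction is hidden from the player.
        predicted += blend * (server_value - predicted)
    return predicted

value, velocity = 0.0, 1.0
for frame in range(6):
    server = 0.9 * frame / 60 if frame % 3 == 2 else None   # pretend a reply every 3rd frame
    value = step(value, velocity, dt=1 / 60, server_value=server)
    print(frame, round(value, 4))
```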

They can also be good for user-generated content, because the servers can take their time processing it offline.

IMHO, the cloud power is better spent on new gaming ideas rather than visual effects.
 
Are you suggesting that the output of 24 CPU cores in the cloud working in parallel scales directly from the output of 8 working in parallel? I'm not so sure that's right, but I may be wrong. Simply multiplying the assumed flops of the X1 CPU by 3 and calling it a day seems like rather slanted math to me. As time goes by, it would seem via Amdahl's Law that the payoff would become steeper and steeper for such parallelization:

[Chart: Amdahl's Law speedup vs. number of processors for several parallel fractions]


Also, isn't a flops-per-second argument meaningless anyhow, since we aren't talking about things that are being computed in a time-sensitive manner to begin with?


Btw, does this cloud stuff explain what sweetvar heard about Durango from AMD's engineers? Something about it being like a supercomputer? Seems to make sense now in hindsight, no?

You make a good point: the speedup would probably be less than 3x the max GFLOPS output, due to how hard it is to parallelise certain algorithms and because there are probably going to be other restrictions on the peak performance of a piece of code beyond pure computational ability. But to get a rough idea of the maximum power it'll be able to put out, you can simply multiply the max GFLOPS by 3.

GFLOPS are still a decent (floating-point) metric even if some of the work isn't time sensitive. It'll have to compute something, or else it'll just be sitting there as a giant set of dumb storage streaming data; and if it is computing something, there is most likely going to be some kind of cap on how much time it has to compute.
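Purely as an illustration of that cap: however loose the deadline is, it still limits how much remote work a single request can return in time (numbers below are arbitrary).

```python
# Illustrative only: a deadline window caps the useful remote work per request.
server_gflops = 306                 # the "three Xbox One CPUs" figure from above
for deadline_ms in (16, 100, 1000):
    useful_gflop = server_gflops * (deadline_ms / 1000.0)
    print(f"{deadline_ms} ms window -> ~{useful_gflop:.0f} GFLOP of remote compute")
```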

I wouldn't call 300 GFLOPS a 'supercomputer'; you're going to need thousands of times more than that to be anywhere near one.
 