Technical Comparison Sony PS4 and Microsoft Xbox

Better quality titles. Maybe it would bring more versatile applications too. Perhaps a 7" edition?

Right now I don't do mobile gaming, as I don't ride public transportation, but I do use my Nexus 7 tablet a lot. So I'm likely looking to replace one with the other.

Oh, and the Vita would be of interest if I got a PS4. By itself, the Vita as it stands doesn't interest me, but with a PS4 and an upgraded Vita, the possibilities seem interesting.

FYI, Uncharted on Vita is in many ways equivalent to its console counterparts; it really is no slouch... I enjoyed the single-player of Golden Abyss more than U3 by a long shot. GA doesn't have multiplayer though, which is good fun in U3.

Add that to Muramasa, Dragon's Crown, and Jak HD Collection among many other nice titles and the Vita really isn't a bad deal :)

I would really reconsider the Vita; it doesn't need big horsepower to stream games. I have no time during my work day or commute to use a Vita. That said, it's fun to use at home, and it will seriously be awesome to stream games to it after the PS4 comes out...

Waiting for a Vita 2.0 to be "better" than the Vita is a bit foolhardy, I think. It won't have better hardware; that will be reserved for Sony's actual next portable... The Vita itself is pretty great, and I think you might be shooting it down a bit too much :)
 
Let's take a GPU-intensive game like the first PC version of Crysis (I think it's the game that relies most heavily on the GPU in recent history).

The HD 6990 is not 50% but 100%+ faster than the HD 6970.
And how does this play out in the real world?

[Benchmark charts: Crysis, HD 6970 vs HD 6990, at 1024x768 and at 1280x1024]


So, using a GPU that is 100%+ faster, the gain is 4 FPS at 1024x768 (5%) and 24 FPS at 1280x1024 with AA on (20%).

So in the worst-case scenario, do you really think you'll see 30 FPS vs 45 FPS, or 30 FPS vs 32-36 FPS?
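
For what it's worth, here's a toy frame-time split (purely illustrative numbers, not taken from those charts) showing how a 100%+ faster GPU can show up as only a few percent in a benchmark: if most of the frame time is spent somewhere other than the GPU, doubling GPU throughput barely moves the average.

```python
# Toy model, illustrative numbers only: split frame time into a GPU-limited
# part and an "everything else" (CPU/driver) part, then apply a 2x-faster GPU.

def fps_after_gpu_speedup(base_fps, gpu_fraction, gpu_speedup):
    """base_fps: measured FPS; gpu_fraction: share of frame time that is GPU-bound;
    gpu_speedup: 2.0 means a GPU with twice the throughput."""
    frame_ms = 1000.0 / base_fps
    gpu_ms = frame_ms * gpu_fraction
    other_ms = frame_ms - gpu_ms
    return 1000.0 / (gpu_ms / gpu_speedup + other_ms)

# Hypothetical low-res case: mostly CPU-limited, so a 2x GPU buys only ~5%.
print(fps_after_gpu_speedup(80.0, gpu_fraction=0.10, gpu_speedup=2.0))   # ~84 FPS
# Hypothetical higher-res + AA case: the GPU share is bigger, so the gain grows.
print(fps_after_gpu_speedup(120.0, gpu_fraction=0.40, gpu_speedup=2.0))  # ~150 FPS
```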

As Graham already pointed out, this is a totally ludicrous comparison and utterly invalid. Putting aside the fact that Crysis 1 is nowhere near the most demanding game available today, and the fact that dual GPUs are a very suboptimal way of doubling GPU performance, there are still plenty of examples out there that will show greater than 80% scaling going from 1 GPU to 2. In fact, I think you must have had to hunt pretty hard to find an example that didn't directly disprove your point.

I'm not quite sure who you thought you were trying to fool with that one.

So if the PS4 has 50% more CUs at the same clock speed, then you can bet damn well that it'll have 50% more real-world performance in purely compute-limited scenarios. Obviously that won't translate to 50% more performance across the board, as not every part of the system is 50% faster. In some cases it may actually be more (when setup or ROP limited), and in others it may be less, such as when bandwidth limited. But regardless, 50% more theoretical performance from one particular part of the system is going to result in 50% more real-world performance from that particular part of the system when not limited by other parts.
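
A rough sketch of that last point (the workload numbers are hypothetical): if you treat frame time as being set by whichever resource is the bottleneck, 50% more ALU throughput shows up in full only when ALU is actually the limiter.

```python
# Rough sketch with hypothetical workloads: frame time is set by whichever
# resource is the bottleneck (ALU, bandwidth, ROP). Scaling compute by 1.5x
# only pays off in full where ALU is the limiting factor.

def frame_ms(work, rates):
    """work/rates are dicts keyed by resource; the slowest resource sets frame time."""
    return max(work[r] / rates[r] for r in work)

base      = {"alu": 1.0, "bandwidth": 1.0, "rop": 1.0}   # baseline throughputs
plus50_cu = {"alu": 1.5, "bandwidth": 1.0, "rop": 1.0}   # 50% more CUs, same everything else

compute_bound = {"alu": 9.0, "bandwidth": 3.0, "rop": 2.0}  # arbitrary units of work
mixed         = {"alu": 9.0, "bandwidth": 7.0, "rop": 2.0}

for name, work in [("compute-bound", compute_bound), ("mixed", mixed)]:
    before, after = frame_ms(work, base), frame_ms(work, plus50_cu)
    print(f"{name}: {before:.1f} ms -> {after:.1f} ms ({before / after:.2f}x)")
# compute-bound: 9.0 ms -> 6.0 ms (1.50x)  -- the full 50% shows up
# mixed:         9.0 ms -> 7.0 ms (1.29x)  -- bandwidth becomes the new limiter
```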

Incidentally, the PS4 doesn't seem like it has enough juice to take a 30fps Xbone game to 60fps, or 720p to 1080p, so I expect the improvements we see will be a little less straightforward. Perhaps a rock-solid 30fps vs a shaky one. Or both consoles running 1080p, but the PS4 with some fancy shader-based AA and the Xbone with none. I'm sure there'll be subtle but consistent improvements in the core graphics as well, e.g. better shadows, draw distance, lighting, etc... all pretty subtle though.
 
Same reason they didn't use the AMD nomenclature in their docs even though they could have: nobody knows.

IIRC Bkilian mentioned some time ago that it could be for legal reasons, not using AMD nomenclature affords them some kind of legal protection.

EDIT: this post: http://beyond3d.com/showpost.php?p=1708280&postcount=2423

And I'm definitely going to get a Vita if Remote Play is as low-latency as Wii U GamePad mirroring and works in my house.
'Unwanted gift' Vitas are going for $150 here with 4GB memory cards, a carrying case, and even a game.
 
FYI, Uncharted on Vita is in many ways equivalent to its console counterparts; it really is no slouch... I enjoyed the single-player of Golden Abyss more than U3 by a long shot...

I liked the game too; I'm curious what the budget was for it...
 
The Vita SoC is going to be outpaced once devices with Series 6 GPU cores come out.

If it hasn't been already.

It's doubtful Sony is going to double down on a failed product, other than with a cost-reduction revision.
 
The Vita SoC is going to be outpaced once devices with Series 6 GPU cores come out... It's doubtful Sony is going to double down on a failed product, other than with a cost-reduction revision.

Not to mention the 3DS, then... My Nexus 4 outpaces the Vita by a long way, but there's still no comparison in gaming between the two (controls, the marvelous OLED screen despite its lower resolution, and games like P4 Golden...).
 
The Vita SoC is going to be outpaced once devices with Series 6 GPU cores come out... It's doubtful Sony is going to double down on a failed product, other than with a cost-reduction revision.

It's not really relevant though that the Vita's chip is going to be outpaced. That kind of thing is as relevant as being certain that an ever increasing number of PCs out in the world will be faster than the next-gen consoles.

The value of the Vita's chip is that it's pretty good, that its library of games grows over the years, and that all Vitas will run all games released during that time equally well. In that sense it has some definite value over the competition, but obviously releasing iterations that increase hardware performance would be pointless in that sense, as they undermine one of the most important advantages of the system.

Whether that is still the most successful way of thinking remains to be seen, but I personally still really like that setup, and I still struggle to find games on the iPad that are as fun and deep. For now, in terms of gaming, the iPad for me remains relegated to social time-wasters and stuff for my kid. Many other kids are just playing endless runners on their tablets, and that feels like a bit of an evolutionary step down. I'd rather have my son play a real platformer (Rayman Origins, and soon Legends, for instance, or Guacamelee, or a Zelda or Pokemon, or whatever).

I do actually expect even my kid to outgrow most of the iPad stuff soon, and that he'll be playing mostly on Vita, or maybe a 3DS, in his future (though he couldn't care less about the 3D part. I'm a huge fan of 3D myself, and even really liked basic stuff like the way the latest Layton looks with it, but in that respect the 3DS is a failure: it's currently banking on the success of the regular DS, basically, and would have been better off with two touch-screens).

By far the best thing about tablets is their openness, and I would still like to see more of that on the Vita and 3DS. PlayStation Mobile is a nice start, but it could be taken a bit further (also, make proper devkits available for cheap), and PlayStation Mobile is currently failing me by still being available in too limited a number of countries. I'm still waiting for access myself.
 
Incidentally, the PS4 doesn't seem like it has enough juice to take a 30fps Xbone game to 60fps, or 720p to 1080p, so I expect the improvements we see will be a little less straightforward...

About 30fps Xbone vs 60fps PS4... Sebbbi had a very informative post about how much of a performance gain is needed to get something running at 30fps up to 60fps. But then again, are we missing the fact that, for some games, you may have ~40 fps on the Xbone and 50+ fps on the PS4 (25-30% more frames per second), where the game on the Xbone has to vsync down to 30, but on the PS4 you can get away with 50+ fps without locking it down to 30, still allowing a smoother experience?

Is it too much of a stretch to think 50% more compute transistors could lead to 25-30% more performance?
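
Some quick frame-time arithmetic on that (60 Hz display assumed; the 40/50 fps figures are the hypothetical ones from above):

```python
# Quick frame-time arithmetic behind the 30-vs-60 point (60 Hz display assumed;
# the 40/50 fps figures are the hypothetical ones from the post above).

def frame_ms(fps):
    return 1000.0 / fps

print(frame_ms(30), frame_ms(60))   # 33.3 ms vs 16.7 ms: 60 fps needs everything done in half the time
print(frame_ms(40), frame_ms(50))   # 25.0 ms vs 20.0 ms: a 25% frame-time gap, nowhere near 2x

# With a hard 30 fps cap (plain double-buffered vsync), both the 40 fps and the
# 50 fps render rates end up presented at 30, so the gap is invisible. Leave the
# frame rate uncapped (or triple-buffer) and the 25-30% shows up as extra frames.
print(frame_ms(40) / frame_ms(50))  # 1.25
```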
 
...are we missing the fact that, for some games, you may have ~40 fps on the Xbone and 50+ fps on the PS4, where the Xbone has to vsync down to 30 but the PS4 doesn't?

There was also bkilian's post on how heavy multi-texture usage could flip-flop those performances.

And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32MB of cache, compared to the PS4's roughly 512KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, then the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases, the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.
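
A crude way to picture that (the cycle counts below are placeholders, not measured figures): the average cost of an access depends mostly on how often it lands in the fast local pool rather than going out to external DRAM.

```python
# Crude expected-latency model; the cycle counts are placeholders, not measured.
# The average cost of an access depends on how often it lands in fast local
# storage (eSRAM, or a small cache) rather than going out to external DRAM.

def avg_latency(hit_rate, fast_cycles, slow_cycles):
    return hit_rate * fast_cycles + (1.0 - hit_rate) * slow_cycles

FAST, SLOW = 50, 500   # hypothetical local vs external round-trip cycles

# Streaming a working set that fits in the 32MB: almost every access is local.
print(avg_latency(0.95, FAST, SLOW))   # ~72 cycles
# Scattered textures / GPGPU chasing pointers across a multi-GB pool: mostly misses.
print(avg_latency(0.10, FAST, SLOW))   # ~455 cycles
```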
 
It's not really relevant though that the Vita's chip is going to be outpaced... By far the best thing about tablets is their openness, and I would still like to see more of that on the Vita and 3DS...

Even Carmack mentioned that the Vita can withstand the forthcoming wave of next-gen mobile devices, due to not having the same API-heavy, hampered environment.
 
There was also bkilian's post on how heavy multi-texture usage could flip-flop those performances.

While what he said is certainly true, what he didn't address is that the GPU can do other work during that time. If the stall takes up a huge amount of frame time it's going to be a problem, but if it doesn't then it really won't be, as the GPU is still doing work while it's waiting on memory. He also didn't mention the texture cache that exists per CU, which, while small, probably has a decent hit ratio, reducing this problem even further.

Each SIMD unit also contains a 64KB register file, which seems huge to me. How much use it is for textures I dunno, but dayum.
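
To put the two mitigations together in a toy model (every number here is hypothetical): the per-CU texture cache absorbs some requests, and the scheduler can overlap other wavefronts' work with an outstanding miss, so only part of the miss latency is actually exposed as idle time.

```python
# Toy model, all numbers hypothetical: a per-CU texture cache absorbs some
# requests, and the CU switches to other wavefronts while a miss is in flight,
# so only part of the miss latency shows up as idle time.

def exposed_stall(miss_latency, hit_rate, ready_work_cycles):
    """Average idle cycles per request: a miss costs miss_latency, minus whatever
    other work the scheduler can overlap with the outstanding request."""
    miss_rate = 1.0 - hit_rate
    return miss_rate * max(0.0, miss_latency - ready_work_cycles)

MISS = 500  # hypothetical cycles to fetch from main memory
print(exposed_stall(MISS, hit_rate=0.0, ready_work_cycles=0))    # 500.0: nothing to hide behind
print(exposed_stall(MISS, hit_rate=0.7, ready_work_cycles=300))  # 60.0: cache + other wavefronts soak up most of it
```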
 
While what he said is certainly true, what he didn't address is that the GPU can do other work during that time...

He mentioned GPGPU physics calculations as an example...
 
The back and forth over the performance between eSRAM and a wide GDDR5 memory subsystem has a number of unknowns that can shift the debate either way.

Without implementation details, we can claim that the eSRAM is 10x lower latency than GDDR5. However, GPU memory subsystems are weird enough that there can be significant latency adders depending on how the eSRAM is connected, and what parts of the process it can bypass.
Another question mark is whether a CPU access to the eSRAM is on the same order as a remote L2 or L1 hit between the Jaguar cores (edit: clusters), which are listed in the leaks as being over a hundred cycles.
That's no longer 10x faster than GDDR5.
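
Putting rough numbers on that (1.6 GHz is the leaked Jaguar clock; the GDDR5 round-trip figure is just a ballpark assumption on my part):

```python
# Rough sanity check on the ">100 cycles" point. 1.6 GHz is the leaked Jaguar
# clock; the GDDR5 round-trip figure is a ballpark assumption, not a measurement.

CPU_GHZ = 1.6
remote_hit_cycles = 120                # "over a hundred cycles" per the leaks
esram_ns = remote_hit_cycles / CPU_GHZ
print(esram_ns)                        # ~75 ns if a CPU access really costs that much

gddr5_roundtrip_ns = 100.0             # assumed order of magnitude for CPU -> GDDR5 -> CPU
print(gddr5_roundtrip_ns / esram_ns)   # ~1.3x -- nowhere near 10x
```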

The implementation for Sony's memory bus and cache subsystem is also not known well enough to know what workarounds are available. Some optimizations such as assigning output and input buffers to certain addresses may allow for more optimal traffic patterns on a per memory channel basis, even if the global traffic appears to be pathological.
Sony may have opted for a known but consistent longer-latency base case on a multi-gigabyte allocation, whereas Microsoft has a shorter-latency case with a cliff at the end of the eSRAM allocation.

Depending on how the uncore and memory pipelines behave, even if operating in situations where one design should be much better than the other, certain choices in AMD's system architecture may keep things more equivalent than expected. It sounds like the uncore is better for Durango, given some of the diagrammed bandwidth numbers. On the other hand, I suspect some of the unappealing CPU latency numbers are going to be a shared problem across anything using Jaguar.
 
The back and forth over the performance between eSRAM and a wide GDDR5 memory subsystem has a number of unknowns that can shift the debate either way...
I wonder how games that push the PS4's specifications to the brink will look on both consoles. The PS4 is going to have some in-house optimizations created by Cerny and his team, so only developers can know whether there are any key differences and what actually makes a difference.

We have our eyes too, if they are to be believed.

One thing we can assume is that all the engine work will be coded off the PS4 version and then retouched accordingly for the Xbone and PC.

Old code is going to disappear now that PowerPC code for the PS3 and X360 is out the window.

Once a game is ported, I think the eSRAM can be a benefit in tiled deferred engines, or in a Carmack-style megatexture engine, where one gigantic texture is broken up into many small tiles.
 