Technical Comparison: Sony PS4 and Microsoft Xbox One

Even if all of that happened, it's still an ocean of difference.



I wouldn't call it "a few less shaders"...
Carmack is just being polite and diplomatic; the machines aren't out yet anyway.

They are not a few shaders. It's like taping a Wii U to an X1!
 
Even if all of that happened, it's still an ocean of difference.

I guess this is the big disagreement. No dev has said anything close to that yet! And I've seen at least three comment now. I think devs are more in touch than fanboys on message boards.

And surely one eventually will; I'm sure some said the PS3 was hugely better than the 360. Have a hundred people comment on something and one will have an outlying view.

And a plausible upclock could get the XBone to 1.5-1.6 TF, which whittles the difference down to almost nothing. But I don't want to tout an upclock, since I don't think it's likely it happened.
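Rough math behind that figure (GCN throughput = CUs x 64 ALUs x 2 ops per clock; the 12-CU XBone / 18-CU PS4 counts are the rumoured figures, not confirmed, and the 1040 MHz clock is only the speculative upclock):

# Back-of-envelope GCN throughput. CU counts (12 XBone / 18 PS4) are rumoured,
# and the 1040 MHz figure is just the speculative upclock discussed above.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0  # 64 ALUs per CU, 2 ops per FMA

for label, cus, clock_ghz in [("XBone @ 800 MHz", 12, 0.800),
                              ("XBone @ 1040 MHz", 12, 1.040),
                              ("PS4   @ 800 MHz", 18, 0.800)]:
    print(f"{label}: {gcn_tflops(cus, clock_ghz):.2f} TF")

That gives roughly 1.23 TF, 1.60 TF, and 1.84 TF respectively, which is where the 1.5-1.6 TF figure comes from.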

They are not a few shaders. It's like taping a Wii U to an X1!

Are you making my point for me? :p
 
A 30% upclock is plausible now?

Slightly, sure. Bonaire and Cape Verde clock at 1 GHz retail stock, and upclocked versions easily hit 1100 MHz on Newegg.

Cape Verde/Bonaire TDP is 80-85 watts too, so not unreasonable.

1040 MHz seems possible, yes.

Most likely it's still just 800.
 
It sounds like Wii vs. PS360... Is the difference really that big? An ocean?

Obviously it's not Wii vs. PS360, but yes, I believe the difference is huge, especially when we're talking about machines that will be out at the same time.
 
Slightly, sure. Bonaire and Cape Verde clock at 1 GHz retail stock, and upclocked versions easily hit 1100 MHz on Newegg.

1040 MHz seems possible, yes.

Most likely it's still just 800.

Do Bonaire and Cape Verde contain 8 Jaguar cores and 32 MB of eSRAM?

You're using discrete parts and comparing them to APUs.

Compare it to the APUs instead, none of which clock anywhere near the PS4/XBone, and all of which have far fewer CUs.
 
I guess this is the big disagreement. No dev has said anything close to that yet! And I've seen at least three comment now. I think devs are more in touch than fanboys on message boards.
So 3 haven't mentioned the power difference; how many have, 10? What's the betting these 3 have a deal with MS for content, I wonder...
 
Compare it to the APUs instead, none of which clock anywhere near the PS4/XBone, and all of which have far fewer CUs.

None of those clock the GPU anywhere near 800 MHz; they're much lower.

So I guess we don't compare to that either.

The GPU can be clocked independently of the CPU; the CPU clock has nothing to do with it.
 
What do you want him to say? They probably just got final dev kits.
Anyway, it's just like comparing a high-end card with a mid-range card: a big difference for the hardcore gamer, but no difference for the normal console gamer.
 
Obviously it's not Wii vs. PS360, but yes, I believe the difference is huge, especially when we're talking about machines that will be out at the same time.

That's pretty much nothing more than a personal assertion. The specs on paper will not lead to the conclusions you imagine in a game.

It's kinda like the Pope choosing to believe he found the Spear of Longinus.
 
That's pretty much nothing more than a personal assertion. The specs on paper will not lead to the conclusions you imagine in a game.

It's kinda like the Pope choosing to believe he found the Spear of Longinus.

And that's nothing more than a personal assertion of yours in itself.

The power difference is not minimal; it will be noticeable.
 
Maybe a proper evaluation of the merits and shortcomings of each platform is not something that can be expressed with a blanket statement.
Perhaps the implications of high-level specs for two very complex systems are insufficient to provide an answer without putting them through their paces with extensive testing and practical use. There are any number of implementation details, unexpected gotchas, and happy side benefits that can change how well they perform relative to each other, with evolving tools and shifting platform details.

There are likely quite a few scenarios where X is better than Y except for *insert completely reasonable and not uncommon scenario here*, or difference Z is of variable or uncertain importance because of *highly situational or design-choice related reason goes here*.


That two multibillion-transistor APUs and well-integrated boxes designed for mass production are impressive engineering is pretty much a given.
 
Obviously it's not Wii vs. PS360, but yes, I believe the difference is huge, especially when we're talking about machines that will be out at the same time.

Aside from resolution and framerate... is there something the PS4 can do that the One can't?
 
And that's nothing more than a personal assertion of yours in itself.

The power difference is not minimal; it will be noticeable.
And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32 MB of cache, compared to the PS4's roughly 512 KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.
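To put rough numbers on the idea (every cycle count and hit rate below is an assumption for illustration, not a measured figure):

# Toy average-latency model. The point is the shape of the effect, not the numbers.
def avg_latency_cycles(hit_rate, local_cycles, main_memory_cycles):
    return hit_rate * local_cycles + (1.0 - hit_rate) * main_memory_cycles

# A scattered working set that happens to fit in a 32 MB pool but not in ~512 KB:
big_local_pool   = avg_latency_cycles(0.90, 50, 400)   # most accesses stay local
small_local_pool = avg_latency_cycles(0.10, 50, 400)   # most accesses go out to DRAM
print(f"big pool: ~{big_local_pool:.0f} cycles, small pool: ~{small_local_pool:.0f} cycles")

With these made-up figures the small pool averages several times the latency of the big one, which is the kind of gap I mean.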
 
And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32 MB of cache, compared to the PS4's roughly 512 KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.

They should be able to switch out to another thread of execution when they are waiting on memory; it's not like they are doing nothing during that time.
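The question is whether there's enough independent work in flight to cover the stall. Back-of-envelope (cycle counts are assumptions, purely for illustration):

# How many wavefronts per SIMD are needed to cover a memory stall with ALU work?
# Latency and per-wavefront work figures below are assumed, not measured.
def wavefronts_needed(mem_latency_cycles, alu_cycles_per_wavefront):
    return mem_latency_cycles / alu_cycles_per_wavefront

print(wavefronts_needed(400, 40))   # ~10 wavefronts of independent work to hide a far miss
print(wavefronts_needed(100, 40))   # ~2.5 if the data is already sitting close by

So longer trips to memory need proportionally more occupancy to hide, and shaders don't always have it to spare.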
 
The eSRAM as described seems to be very amenable to an equal mix of read/write traffic, and it shouldn't have the same page hit and read/write turnaround penalties a DRAM bus has.

Depending on how well the dev tools can pick up on bad patterns, and how well the memory controller can coalesce runs of write and read traffic, there can be scenarios where the achievable bandwidth on the shared DRAM bus takes a hit that will affect everything running.
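A toy model of that effect (the 68 GB/s peak and the penalty cycles are assumed values, just to show the shape of the hit):

# Shared DRAM bus losing useful bandwidth to read/write turnaround gaps.
# Numbers are placeholders for illustration, not real memory-controller figures.
def effective_bandwidth(peak_gb_s, burst_cycles, turnaround_cycles, turnarounds):
    useful = burst_cycles
    wasted = turnaround_cycles * turnarounds
    return peak_gb_s * useful / (useful + wasted)

print(effective_bandwidth(68.0, 64, 10, 0))   # long same-direction runs: near full peak
print(effective_bandwidth(68.0, 64, 10, 4))   # frequently alternating reads and writes

With those placeholder values the bus drops from its peak to roughly 60% of it, which is the kind of hit everything sharing the bus would feel.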
 