Technical Comparison: Sony PS4 and Microsoft Xbox One

What about heat? Is GDDR5 "hotter" than DDR3?

I think someone suggested "up to" 10W for GDDR5, which I assume can be passively cooled?

Although in clamshell mode, I guess it might need heatsinks on the bottom of the board? (I wonder if that could be a reason for the missing case design? [as MrFox just posted above!])
 
Most high-end video cards with GDDR5 are not even passively cooled with heatsinks, just normal case airflow, which is unreliable at best.

[Image: reverse side of a GTX 670 board, showing bare GDDR5 chips]
 
@MrFox

DF did say that even with the increased amount of memory, I quote, "The design of its surrounding architecture would not need to change throughout this process - one set of 16 GDDR5 chips would simply be swapped out for another."
 
Yeah, but I was thinking they might have planned 8 chips on one side for 4GB, and 16 chips for the devkits, and they ended up having to cool the underside, which wouldn't have been the case with 8 chips. But you're right, they might have planned 16 chips either way too.
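For what it's worth, here's that chip-count arithmetic sketched out. The 32-bit-per-device interface and the x16 "clamshell" mode are standard GDDR5 features; the 4Gbit density and the 8-vs-16 chip split are just the scenario speculated above:

```python
# Rough GDDR5 chip-count arithmetic for a 256-bit bus (a sketch, not official figures).
BUS_WIDTH_BITS = 256      # memory bus width discussed in this thread
BITS_PER_CHIP = 32        # a GDDR5 device exposes a 32-bit interface
CHIP_DENSITY_GBIT = 4     # assumed 4Gbit (512MB) devices

# Normal mode: each chip drives its full 32-bit interface.
chips_normal = BUS_WIDTH_BITS // BITS_PER_CHIP   # 8 chips
# Clamshell (x16) mode: two chips share one 32-bit channel, 16 bits each,
# which is how 16 chips (often mirrored on both sides of the board) fit the same bus.
chips_clamshell = chips_normal * 2               # 16 chips

for label, n in (("normal", chips_normal), ("clamshell", chips_clamshell)):
    capacity_gb = n * CHIP_DENSITY_GBIT / 8
    print(f"{label}: {n} chips -> {capacity_gb:.0f} GB")
```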
 
Sony's Gaikai ambitions detailed in February and Microsoft's game streaming plans described in the 2010 strategy doc both imply eventual Remote Play functionality to PCs, tablets and phones. As long as you can implement proper controls at the client end, it's a pretty obvious extension of the built-in video streaming capabilities of both systems. Vita is only the most obvious candidate; any tablet synced with a DS4 or any PC with an Xbox One controller should work just as well. I was thinking pre-reveal we could even see forward compatibility features announced where any PS3 or 360 on the same network as the successor could be used as a Remote Play client.

Yes, I do think they should open up Gaikai to "everyone", especially via the web (e.g., spectating). It would make the PS4 more relevant to consumers at large.

I wouldn't be surprised if Sony have thumbstick attachments for cellphones. The symmetrical stick layout should be compatible with most devices.

I half expect them to use the Vita memory card on these attachments for storing sensitive data. :p
 
Yes, but that doesn't affect the layout; it just needs some air to flow over the chips. I think that's the reason the console wasn't being shown: it had to be redesigned at the last minute (they presumably added chips on the underside when they moved to 8GB, so air then has to move under the board as well). The biggest size issue will be the heatsink design for the SoC, which I'm pretty sure will be a flat centrifugal design, not a "tower" like the Xbox.

My theory is that when they dropped the speed from 192GB/s to 176GB/s, it had nothing to do with the move to 8GB; it was to move from 1.5V to 1.35V. The lower power could make the difference between needing a heatsink on the chips or leaving them bare.

They could be using the Hynix H5GC4H24MFR-T3C, which is the fastest 1.35V part available now (according to Hynix it's in full production Q2 2013). Magically it's also exactly 176GB/s :oops:
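A quick sanity check on those numbers, assuming the usual per-pin data rates (6.0Gbps for the 1.5V parts, 5.5Gbps for the 1.35V one) and the rough rule that I/O power scales with voltage squared:

```python
# Back-of-the-envelope check of the 192 vs 176 GB/s figures and the 1.5V -> 1.35V saving.
BUS_WIDTH_BITS = 256

def bandwidth_gbs(per_pin_gbps):
    """Aggregate bandwidth = per-pin data rate x bus width, converted from Gbit/s to GB/s."""
    return per_pin_gbps * BUS_WIDTH_BITS / 8

print(bandwidth_gbs(6.0))   # 192.0 GB/s (6.0 Gbps parts, typically 1.5V)
print(bandwidth_gbs(5.5))   # 176.0 GB/s (5.5 Gbps, e.g. the 1.35V Hynix part above)

# Dynamic power goes roughly with C*V^2*f, so the voltage drop alone is worth about:
saving = 1 - (1.35 / 1.5) ** 2
print(f"~{saving:.0%} lower I/O power from 1.5V -> 1.35V")   # ~19%
```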

BTW, I'm just saying the motherboard itself would cost less... GDDR5 still suffers from the "chips-are-horrifically-expensive" problem. But each little bit of money saved left and right can make the cost difference less dramatic.

Thank you Mr. Fox, that was very informative.

Do you have any insight into EDRAM/ESRAM design? I was curious about Cerny's claim about creating a 1TB/s-bandwidth EDRAM design... :oops: which is pretty remarkable IMHO, but I'm not versed in memory design.
 

I didn't realize that image was available at such high res...

Do we think that's the SOC or a heatspreader? I think my vote was for a heatspreader, others?

If not, the usual nutso people can measure it by comparing to the USB ports or whatever they do :p

Hell, would it even be real silicon and not just some kind of mockup? I guess the whole thing looks real to me so I'd say real.
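For anyone who wants to play the "measure it against the USB ports" game, it's just a pixel-ratio estimate, something like the sketch below, where the pixel counts and the port width are made-up placeholder values rather than actual measurements:

```python
# Estimating a package's size from a board photo, using a feature of known width as scale.
# All numbers below are hypothetical placeholders.
USB_A_WIDTH_MM = 12.0     # assumed real-world width of a USB Type-A port opening
usb_width_px = 240.0      # hypothetical: width of the port measured in the photo
chip_width_px = 300.0     # hypothetical: width of the APU package/heatspreader in the photo
chip_height_px = 300.0

mm_per_px = USB_A_WIDTH_MM / usb_width_px
width_mm = chip_width_px * mm_per_px
height_mm = chip_height_px * mm_per_px
print(f"~{width_mm:.1f} x {height_mm:.1f} mm, ~{width_mm * height_mm:.0f} mm^2")
```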
 
Thank you Mr. Fox, that was very informative.

Do you have any insight into EDRAM/ESRAM design? I was curious about Cerny's claim about creating a 1TB/s-bandwidth EDRAM design... :oops: which is pretty remarkable IMHO, but I'm not versed in memory design.
Don't believe too much of what I say, I've been wrong often :D I'm just learning as I go along; the resident people of B3D know a million times more about actual GPU design.

Cerny was just saying what "could be done". Logically, I question the idea of a 1TB/s internal RAM, because if the GPU itself doesn't have the capability to process more than (random number) 100GB/s, what can it possibly be useful for? Can a normal GPU really take advantage of that much bandwidth?
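One rough way to put a number on "how much can the GPU even consume" is to estimate the ROP write traffic at full fill rate. The ROP count and clock below are the figures commonly attributed to the PS4 GPU; treat it as a ballpark, since blending, depth, texture and compute traffic all add on top:

```python
# Ballpark of raw ROP colour-write bandwidth demand (ignores compression, caches, Z, textures).
ROPS = 32              # colour ROPs attributed to the PS4 GPU
CLOCK_GHZ = 0.8        # 800 MHz core clock
BYTES_PER_PIXEL = 4    # 32-bit RGBA8 render target

write_gbs = ROPS * CLOCK_GHZ * BYTES_PER_PIXEL   # ~102 GB/s of pure colour writes
blend_gbs = write_gbs * 2                        # alpha blending reads and writes the target
print(f"colour writes: ~{write_gbs:.0f} GB/s, with blending: ~{blend_gbs:.0f} GB/s")
```

Even with blending plus depth and texture traffic piled on, that lands well below 1TB/s, which is roughly the point being made above.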
 
Just to avoid confusion, this is what Cerny actually said:

"I think you can appreciate how large our commitment to having a developer friendly architecture is in light of the fact that we could have made hardware with as much as a terabyte of bandwidth to a small internal RAM, and still did not adopt that strategy," said Cerny. "I think that really shows our thinking the most clearly of anything."

Edit

Ah, forget it, MrFox was quicker than me.
 
That memory arrangement and I/O is crazy. How's that going to scale long term? Can a 20nm APU and double-density memory even support the I/O pin-out for that?
 
Isn't it the exact same problem for any chip with a 256-bit interface?
 
This means Microsoft would be fine as soon as they can get DDR4 4166 at a low price, but Sony could be in trouble for a while! Maybe they can use an interposer to fan out the I/O for the first revisions, and for later shrinks they could use stacked memory. I guess they wouldn't have much choice but to follow the same technological upgrade path as GPU cards will.
 
Looked at it again
[Image: board photo showing the APU and DRAM packages]


The APU seems to be almost exactly three times as big as the DRAM package. Assuming the DRAM uses a 9x12mm package, the APU is about 327mm².
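The arithmetic behind that estimate, taking the 9x12mm DRAM package as the assumed reference and the ~3x area ratio as the eyeballed measurement:

```python
# APU area estimate from the photo, using the DRAM package as a scale reference.
dram_area_mm2 = 9 * 12    # assumed 9x12mm DRAM package -> 108 mm^2
area_ratio = 3.03         # APU package looks roughly three times the DRAM package area
apu_area_mm2 = dram_area_mm2 * area_ratio
print(f"~{apu_area_mm2:.0f} mm^2")   # ~327 mm^2
```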



XB1 uses 16 4Gbit DDR3 chips. 4Gbit is currently the highest density. By the time it is the lowest density, I'd expect the XB1 to move to a 128-bit DDR4 memory system with twice the transfer rate.

Cheers
I'm pretty sure no manufacturer has moved to a completely different memory architecture mid-cycle. I'd expect every variant of the machine to have DDR3 for the simple reason that any game released needs to work on every XB1. It simply won't do for some machines to have half the bandwidth of others.
 

The idea is to move to a memory architecture that provides the same or better bandwidth and latency when the technology to do so is available and cheap enough to bring the overall BOM down. 128-bit DDR4 at twice the clock of the current DDR3 should provide the same bandwidth. I don't know what they'd do about potentially different memory latency; I thought latency tends to increase at higher clocks?
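The equivalence being claimed works out like this (the DDR3-2133 figure is the launch XB1 spec; the DDR4 rate is just "twice the clock" as stated above):

```python
# Same aggregate bandwidth from half the bus width at twice the transfer rate.
def ddr_bandwidth_gbs(mt_per_s, bus_bits):
    """Bandwidth in GB/s = transfers per second x bytes per transfer."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(ddr_bandwidth_gbs(2133, 256))   # ~68.3 GB/s: 256-bit DDR3-2133 (launch XB1)
print(ddr_bandwidth_gbs(4266, 128))   # ~68.3 GB/s: 128-bit DDR4 at double the data rate
```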
 
Is that workable though? Console games are highly optimized, so they tend to be more sensitive to changes like latency. I recall reading about how MS had to build specific barriers between the CPU/GPU when they combined them on later 360 models to keep the systems running properly, so it's not as simple as replacing one part with another that performs better and calling it a day. Then again maybe that's the benefit of the VM setup? I dunno.
 
So it seems they could do that again.
 
Didn't MS (and/or Sony) announce that next-gen console games will have to be forward compatible? And by that I assume they will have to take advantage of faster processors, memory and GPUs. From a technical point of view, it's doable... look at all the games on the PC.

However, from a customer relationship point of view, it's kinda nightmarish. Imagine buying a first-generation console and not being able to play a game as well as on a later generation. Imagine the advantage of players with new consoles vs. players with older consoles... eh... oh... I see... they want us to continuously buy newer consoles....
 