Technical Comparison Sony PS4 and Microsoft Xbox One

The X1 has roughly the same read bandwidth (176GB/s vs 170GB/s) but has drastically lower write bandwidth (102GB/s vs ~156GB/s). If you look at the VGLeaks diagram you will see that the X1 cannot fill faster than 102GB/s due to the ROP limitation.

According to AnandTech, the eSRAM on the X1 is roughly 50GB/s in each direction, so the read bandwidth would be closer to 118GB/s vs the 176GB/s in the PS4.

Also, if Microsoft is indeed adding its coherent bus to get their 200GB/s total, wouldn't it be prudent when addressing PS4's bandwidth to add the Onion buses? Its total bandwidth would be ~196GB/s if that were the case.
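To make the arithmetic explicit, here's a quick back-of-envelope sketch (Python). The 50GB/s-per-direction eSRAM number is AnandTech's estimate, the 30GB/s coherent-bus figure comes from the leaked specs, and the ~20GB/s for Onion is just whatever makes the ~196GB/s total work; none of these are confirmed:

# Back-of-envelope only; every input is a leaked or estimated figure, not a confirmed spec.
ddr3_read  = 68    # GB/s, X1 DDR3
esram_read = 50    # GB/s, AnandTech's per-direction eSRAM estimate
coherent   = 30    # GB/s, coherent-bus figure from the leaks
gddr5      = 176   # GB/s, PS4 unified GDDR5
onion      = 20    # GB/s, assumed so the PS4 total lands at ~196GB/s

print("X1 read:", ddr3_read + esram_read)        # 118 GB/s
print("X1 total:", ddr3_read + 102 + coherent)   # 200 GB/s using the full 102GB/s eSRAM figure
print("PS4 read:", gddr5)                        # 176 GB/s
print("PS4 total:", gddr5 + onion)               # ~196 GB/s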
 
According to AnandTech, the eSRAM on the X1 is roughly 50GB/s in each direction, so the read bandwidth would be closer to 118GB/s vs the 176GB/s in the PS4.

Also, if Microsoft is indeed adding its coherent bus to get their 200GB/s total, wouldn't it be prudent when addressing PS4's bandwidth to add the Onion buses? Its total bandwidth would be ~196GB/s if that were the case.

That would seem fair to me too, however I think first we should see if we can't come up with a little bit better justification for assuming the 200GB/s+ figure is false. As of yet ppl seem to take that at face value as being false without any real rationale for doing so.
 
It's not wishful thinking at best. Read my post. It's speculation, but it's all based on some sort of evidence. It may yet be wrong, but trying to avoid discussing it simply on the basis of your assumptions isn't particularly helpful for forging interesting discussion either.



No, they didn't. As I pointed out, they didn't announce either the GPU or CPU specs, and they would have also been happy to boast about 170GB/s total bandwidth presumably. You're stretching the truth here in an effort to just dismiss the notion. :rolleyes:
You're basing it all on a comment the guy said in passing. We have detailed leaks on everything in the console, and we had insiders confirm the spec didn't change.

They did give out CPU specs of 8 cores. Like I said, they gain nothing by releasing specs that are less than the PS4's. People like yourself can now run with anything to up the specs. Same thing happened with the Wii U... we all know how that turned out. ;)

I think using names like "XBone" is more a behavior I expect from NeoGAF, not from Beyond3D.

http://business.financialpost.com/2013/05/24/why-xbone-has-stuck-for-microsofts-new-console/

:rolleyes: wow....
 
You're basing it all on a comment the guy said in passing.

No, not "all" on that comment. There are other reasons, which are explained in my posts if you take the time to read it.

We have detailed leaks on everything in the console, and we had insiders confirm the spec didn't change.

Where are these insiders saying this? Devs that got Beta kits months ago? Do you have any insiders saying everything is *identical* from within the past month? I don't mean ppl saying so within the last month, I mean ppl saying their info is current as of May. Got links?

Look man, nobody is forcing you to discuss it. If ya don't want to, then don't. I can promise I will take no offense. :)

They did give out CPU specs of 8 cores. Like I said, they gain nothing by releasing specs that are less than the PS4's.

The clock speeds aren't less than PS4's. Nor is the overall GPU bandwidth, if it really was the "170GB/s" figure from the leaked specs. My point is that if they were identical to the leaked specs, MS has more to gain from drawing those parallels to PS4 hardware than they have to lose. Yet they chose not to. I find that intriguing. You can find out *why* I find it interesting in my earlier post that you evidently didn't read through.

People like yourself can now run with anything to up the specs. Same thing happened with the Wii U... we all know how that turned out. ;)

I don't care what happened with the Wii U. Last I checked this thread wasn't about the Wii U.
 
Also, why is everyone assuming that the total bandwidth figure of 200GB/s+ is automatically just marketing speak? Just because adding in the CPU happens to add up to 200GB/s (68 + 102 + 30 = 200GB/s exactly, going by the leaked figures)? If so, it wouldn't make sense for them to say MORE than 200GB/s... that adds up to precisely 200GB/s, no?

Just because DF and Anandtech assume it's the CPU's contribution doesn't actually mean that's some given fact. We DID have that VGLeaks article which claims specs had been adjusted a month or so ago (same rumor as the XboxMini + BC info). And that bandwidth is clock dependent, so if they upped the clocks at all it'd point to a higher bandwidth figure.

Like Rangers, I too find it interesting that sebbbi noted the 200GB/s+ figure as a starting point when he could have just as easily qualified his estimates with a sentence like "if you assume the leaks are true and total bandwidth was 170GB/s...". Plus rumors of the consoles running hot and having relatively weak yields supposedly, slight delays in shipping beta kits out, and the fact that MS didn't disclose the clocks nor the bandwidth for the eSRAM/GPU/CPU at the reveal nor the hardware panel. Seems to me the reason they bragged about 8 core CPU and custom DX 11.1 AMD GPU and 8GB of RAM etc is because those all are identical with PS4 qualitatively on the surface. But...so are the leaked clocks.

If their agenda was to paint X1 as identical to PS4 hardware-wise, they should have likewise trumpeted "an 800MHz GPU with 170GB/s of available bandwidth and a 1.6GHz CPU...", no? Unless, of course, they really did make last minute changes to the clocks as VGLeaks' source claimed.

Just for fun, how would having say 204.8GB/s of overall bandwidth change things? How does that affect flops, clocks, etc?

It'd be something like this perhaps:

204.8GB/s total to the GPU ("more than 200GB/s of available bandwidth"); does not include any CPU contribution
204.8-68.3=136.5GB/s for eSRAM
(136.5/102.4)*800MHz=1.066GHz GPU ==> 1.64Tflops

Assuming coherency between the chips on the SoC...that's a CPU at 2.13GHz.
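Purely as a sanity check on that arithmetic (and nothing more), the same numbers as a small Python sketch; the 204.8GB/s total, the leaked 12-CU layout, and the 2:1 CPU:GPU clock ratio are all assumptions here, not known specs:

# Speculative sketch: leaked Durango layout (12 CUs, 68.3GB/s DDR3, 102.4GB/s eSRAM at 800MHz)
# plus the hypothetical 204.8GB/s total discussed above.
total_bw = 204.8                         # GB/s, hypothetical total to the GPU
esram_bw = total_bw - 68.3               # 136.5 GB/s implied for the eSRAM
gpu_ghz  = (esram_bw / 102.4) * 0.8      # ~1.066 GHz, if eSRAM bandwidth scales with clock
tflops   = 12 * 64 * 2 * gpu_ghz / 1000  # ~1.64 TFLOPS (64 ALUs/CU, 2 FLOPs per ALU per clock)
cpu_ghz  = 2 * gpu_ghz                   # ~2.13 GHz, assuming the 2:1 ratio holds
print(round(esram_bw, 1), round(gpu_ghz, 3), round(tflops, 2), round(cpu_ghz, 2))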

Before you shout at me and tell me how that CPU clock can't possibly work because of the thermal considerations of Jaguar's spec, I'll note that X1's CPU isn't actually a Jaguar (even though everyone is reporting it as such by assuming).

:)

Oh jesus, are you serious, man? When are you going to let it go and accept that it's just normal parts? Does it really mean that much to you that it's more powerful than the PS4? If the CPU is not a standard Jaguar, why did it get no mention in the architecture panel?

Also, GCN GPUs tend to top out at 1GHz on 28nm, so yeah, no.

There have been zero leaks of it being anything other than Jaguar and normal GCN, and yet here you come again with your secret sauce (how many times now?). Sure, the clocks could change, but nowhere near as drastically as you are suggesting; it just won't happen, thermals and power would become an issue for one.

Also, the XB1 does have over 200GB/s:

102.4GB/s + 30GB/s + 68GB/s = 200.4GB/s, and that's > 200.
 
They did say more than 200GB/s of bandwidth to the various caches, or something like that. Otherwise they left it at 8GB RAM, 8 core CPU and an AMD GPU.

They did leave it at that. That 200GB/s was said in passing at the tech panel. It's funny how people are now taking it and jumping past all the specs we have on the console.

Like I have said in the past, it's a great move not to release the specs of your system. People will go off the deep end with this stuff, with the "what ifs."

Occam's razor.... :cool:
 
I think using names like "XBone" is more a behavior I expect from NeoGAF, not from Beyond3D.

Oh come on. In the age where everything is super-abbreviated, I find it utterly laughable that MS execs wouldn't have thought this one out a bit more. The Xbox 360 is frequently called XB360. It's only natural the Xbox One becomes XBone. And for heaven's sake, I coined the term and I'm not that savvy with nicknames. If we didn't coin it here, it would have been coined elsewhere. [ http://forum.beyond3d.com/showpost.php?p=1737073&postcount=102 and http://forum.beyond3d.com/showthread.php?p=1737080&highlight=call+dibs+XBone#post1737080 ]
 
Has it been confirmed that none of the 18 CUs are geared towards compute over rendering specifically? Are they all identical? I seem to recall everyone just assuming that, since Sony lumped them all together qualitatively in their PR interviews, they must all be identical, but that's not actually a necessary contradiction of the original VGLeaks info.

If they did clarify I'd appreciate a link and/or quote from someone at Sony to clear me up. :smile:

I don't think it's helpful that at this stage everyone is just assuming that since a handful of things from the VGLeaks Durango specs were correct, we can therefore pretend every single detail was, as a blanket statement. I'm not sure the 14+4 CU thing was actually addressed adequately.

Well, for one, this is not the smartest idea because GCN is designed to be flexible and run multiple workloads. I really cannot see the point in ripping out parts of the CU cores that are there by default and replacing them with nothing, and also why GG wouldn't be using them; they are a first-party tech studio.

This probably stems from VGLeaks misinterpreting something in the white papers.
 
According to AnandTech, the eSRAM on the X1 is roughly 50GB/s in each direction, so the read bandwidth would be closer to 118GB/s vs the 176GB/s in the PS4.

Also, if Microsoft is indeed adding its coherent bus to get their 200GB/s total, wouldn't it be prudent when addressing PS4's bandwidth to add the Onion buses? Its total bandwidth would be ~196GB/s if that were the case.

You would have to reduce the PS4 bandwidth by the same percentage as a function of efficiency. So PS4 bandwidth would become roughly 130GB/s.
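To put rough numbers on that: the ~74% factor below is only reverse-engineered from the "roughly 130GB/s" figure, not a measured value:

# Back-of-envelope: the efficiency factor is assumed, chosen to match the ~130GB/s claim.
efficiency    = 0.74
ps4_effective = 176 * efficiency          # ~130 GB/s
x1_effective  = (68 + 102) * efficiency   # ~126 GB/s if the same factor is applied to X1's peak
print(round(ps4_effective), round(x1_effective))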
 
For example, could you not interpret the info from VGLeaks as saying all 18 CUs could be used for rendering, but 4 of them are geared towards compute and don't add much to the system's rendering capabilities?

I don't understand the modifications to a CU that would make it worse for rendering. Do they alter the TMUs? PS4 has 32 ROPs, so that's not it. It has 8 ACEs, which is already supposed to help with juggling simultaneous 3D and GPGPU workloads. Why would they want to commit extra engineering work to nerf a CU that already seems to do pretty well with both GPGPU and 3D? And how does that work with the PS4 architecture's general idea of big pools of unified resources (CPUs, RAM, CUs) that the dev can partition as they see fit (as opposed to split or specialized pools of anything)?

I don't mean to shoot down an avenue of exploration, but I don't see the benefit of nerfing (or specializing) a handful of CUs. I would be interested in knowing how a CU can be modified to help with GPGPU calcs while hurting its rendering ability.

Apologies if this was already covered, but if I look at the VGLeaks' specs leak:
GPU:

GPU is based on AMD’s “R10XX” (Southern Islands) architecture
DirectX 11.1+ feature set
Liverpool is an enhanced version of the architecture
18 Compute Units (CUs)
Hardware balanced at 14 CUs <--I'd love to see the actual source for this
Shared 512 KB of read/write L2 cache
800 Mhz
1.843 Tflops, 922 GigaOps/s
Dual shader engines <--if they mean geometry, not shader, engines, how did they get this wrong?
18 texture units <--apparently the "+4" CUs aren't lacking for texturing
8 Render backends

[...]

UPDATE: some people is confused about the GPU, here you have more info about it:

Each CU contains dedicated:
- ALU (32 64-bit operations per cycle) <-- Did someone explain this yet? It's not the 1/4 or 1/16 FP64 rate of Tahiti or Pitcairn/Bonaire
- Texture Unit
- L1 data cache
- Local data share (LDS)

About 14 + 4 balance:
- 4 additional CUs (410 Gflops) “extra” ALU as resource for compute
- Minor boost if used for rendering
^^^--Where's the "extra" ALU in their "800 Mhz, 1.843 Tflops, 922 GigaOps/s" or "410Gflops" figures? 410GFLOPS is just the standard "Islands" 4*64*2*800 math. The implication here seems to be that nothing is taken away from a "standard" CU, just a little something is added that helps compute more than it helps rendering

Dual Shader Engines:
- 1.6 billion triangles/s, 1.6 billion vertices/s <--I'm honestly curious why the "Geometry Engine" in a typical AMD diagram became a "Shader Engine"

18 Texture units
- 56 billion bilinear texture reads/s
- Can utilize full memory bandwidth

8 Render backends:
- 32 color ops/cycle
- 128 depth ops/cycle
- Can utilize full memory bandwidth

OK, so VGLeaks implies any extra compute sauce in those "+4" CUs would not hurt rendering. So you could see it as 18 typical CUs, with 4 of them slightly better at compute but no worse at rendering. Like I said, it would be interesting to know what programmers would view as a helpful "extra ALU" resource for compute workloads that wouldn't help rendering at all.
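For reference, the per-CU math those annotations lean on, assuming the standard GCN numbers (64 ALUs per CU, 2 FLOPs per ALU per clock, the leaked 800MHz):

# Standard GCN throughput arithmetic behind the leak's figures (assumptions as noted above).
clock_ghz = 0.8

def cu_gflops(n_cus):
    return n_cus * 64 * 2 * clock_ghz  # GFLOPS at the leaked clock

print(cu_gflops(18))  # 1843.2 -> the leak's "1.843 Tflops"
print(cu_gflops(4))   # 409.6  -> the "410 Gflops" quoted for the "+4" CUs
print(cu_gflops(14))  # 1433.6 -> what a strict 14-CU rendering budget would come to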

I'll bite my tongue on what I think marcberry's intent is.
 
Oh jesus, are you serious, man? When are you going to let it go and accept that it's just normal parts?

Normal parts like what? What does that even mean? Who said anything about 'abnormal' parts? Did you actually read what I typed before replying? I certainly didn't mention, suggest, or allude to anything abnormal. Just improved clocks over the leaks that you are *assuming* must be correct without any real reason to assume as much. There is more reason to think they may have improved the clocks (or lowered them) than there is to just blindly assume no changes at all.

...Does it really mean that much to you that it's more powerful than the PS4? If the CPU is not a standard Jaguar, why did it get no mention in the architecture panel?

It really does mean that much to me to base technical comparison discussions on things that are actually known instead of assumed. And I don't recall them ever mentioning Jaguar in the architecture panel. Do you have a direct quote for me?

Also, GCN GPUs tend to top out at 1GHz on 28nm, so yeah, no.

Ummm, ok? A 1GHz GPU is still a fair bit improved over the 800MHz spec from VGLeaks. Not seeing much of a meaningful counterpoint here... :???:
 
That would seem fair to me too, however I think first we should see if we can't come up with a little bit better justification for assuming the 200GB/s+ figure is false. As of yet ppl seem to take that at face value as being false without any real rationale for doing so.

There has been rationale for why the figure seems disingenuous, mostly the lack of headroom to reach it. Either they upped the GPU clock, which is highly unlikely, or they added a larger system bus, which is even less likely. The most reasonable way of reaching the 200GB/s is by adding the coherent bus or by some other creative math.

I'm not saying I'm certain that is the case, but there seems to be little to no evidence of the X1 being able to reach that high a bandwidth without a bit of number sliding from unrelated parts.
 
There has been rationale for why the figure seems disingenuous, mostly the lack of headroom to reach it. Either they upped the GPU clock, which is highly unlikely, or they added a larger system bus, which is even less likely. The most reasonable way of reaching the 200GB/s is by adding the coherent bus or by some other creative math.

I'm not saying I'm certain that is the case, but there seems to be little to no evidence of the X1 being able to reach that high a bandwidth without a bit of number sliding from unrelated parts.

The math isn't really that creative. The coherent bus is an alternate memory pathway that is in full use by the GPU. Add in Onion and Garlic if you wish. More than one dev has pointed to 200GB/s on this board.
 
Because I can't stand another 14+4 debate: all 18 of the CUs are identical.
I can guess where VGLeaks got the 14+4 from; I think they were just taking a slide out of context.
 
I don't understand the modifications to a CU that would make it worse for rendering.

Yes, I was just laying out my interpretation of what marcberry was trying to get across. Or maybe not. Either way, I would like to know if Sony has gone into more detail on the issue. We may never know.

I don't mean to shoot down an avenue of exploration, but I don't see the benefit of nerfing (or specializing) a handful of CUs. I would be interested in knowing how a CU can be modified to help with GPGPU calcs while hurting its rendering ability.

No idea. Maybe something to do with latency considerations? ...I got nothing. :LOL:

Apologies if this was already covered, but if I look at the VGLeaks' specs leak:

OK, so VGLeaks implies any extra compute sauce in those "+4" CUs would not hurt rendering. So you could see it as 18 typical CUs, with 4 of them slightly better at compute but no worse at rendering.

Ok, so then how would that fit in with their comment about a 'minor boost if used for rendering'? Any guesses? You make a good point about the texture unit figure though.

I'll bite my tongue on what I think marcberry's intent is.

Yeah no kidding, but that's ok I'll take over his line of "reasoning" from here on out because I know ppl will just ignore it otherwise. :LOL:
 
You would have to reduce the PS4 bandwidth by the same percentage as a function of efficiency. So PS4 bandwidth would become roughly 130GB/s.

I don't follow your meaning. If DDR3 is giving 68GB/s in either read or write, and the eSRAM has 50GB/s in both directions, it would give a max bandwidth of 118GB/s read.

The PS4 has a single pool of unified GDDR5 memory at 176GB/s in either read or write.

The total bandwidth for X1 would be 170GB/s and the PS4 would sit at 176GB/s, but that is beside my point. There is no efficiency to compute in my comparison, just maximum memory READ bandwidth in either system.
 
I don't follow your meaning. If DDR3 is giving 68GB/s in either read or write, and the eSRAM has 50GB/s in both directions, it would give a max bandwidth of 118GB/s read.

The PS4 has a single pool of unified GDDR5 memory at 176GB/s in either read or write.

The total bandwidth for X1 would be 170GB/s and the PS4 would sit at 176GB/s, but that is beside my point. There is no efficiency to compute in my comparison, just maximum memory READ bandwidth in either system.

Because bandwidth is never 100% efficient, and the PS4 will never use the full 176GB/s for read or write. It will be used for simultaneous reads and writes.

Here is a real world example of Durango memory

[image: durango_memory2.jpg]
 
Because bandwidth is never 100% efficient, and the PS4 will never use the full 176GB/s for read or write. It will be used for simultaneous reads and writes.

Here is a real world example of Durango memory

[image: durango_memory2.jpg]

So the PS4 will never reach a peak of 176GB/s, yet the DDR3 in that very picture is running at its peak of 68GB/s?

wth.
 