Technical Comparison: Sony PS4 and Microsoft Xbox One

I was sure that both are Jaguar processors, and I think the level of customization in the Xbox One CPU should be minimal, basically non-existent.

It is weird. Why don't they call it "Jaguar"? Is it because of copyright or something like that?
 
I think web browsers are memory-hungry applications; Firefox eats up 400-500MB.
If you are switching back and forth quickly between a game and a web browser, I don't see how the browser could need more than five or so tabs loaded in memory.
Any more than that could be suspended to the hard drive: if you were to load or resume a tab beyond those five, a previous tab would be suspended to the hard drive. Or you could tell the system to suspend the game to the hard drive while you browse.
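Roughly the policy I'm picturing, sketched in Python. The names, the five-tab budget and the swap path are all my own assumptions for illustration, not anything Sony or MS have described:

Code:
from collections import OrderedDict

MAX_RESIDENT_TABS = 5  # assumed in-RAM tab budget

class TabManager:
    def __init__(self):
        self.resident = OrderedDict()  # tab_id -> page state, oldest first
        self.suspended = {}            # tab_id -> swap file path on the HDD

    def touch(self, tab_id, state=b""):
        """Load or resume a tab, suspending the least recently used if over budget."""
        if tab_id in self.resident:
            self.resident.move_to_end(tab_id)        # mark most recently used
            return self.resident[tab_id]
        if tab_id in self.suspended:                 # resume: read state back from HDD
            state = self._read(self.suspended.pop(tab_id))
        if len(self.resident) >= MAX_RESIDENT_TABS:  # over budget: evict the LRU tab
            old_id, old_state = self.resident.popitem(last=False)
            self.suspended[old_id] = self._write(old_id, old_state)
        self.resident[tab_id] = state
        return state

    def _write(self, tab_id, state):
        path = "/tmp/tab_%s.swap" % tab_id           # hypothetical swap location
        with open(path, "wb") as f:
            f.write(state)
        return path

    def _read(self, path):
        with open(path, "rb") as f:
            return f.read()

The same idea would extend to suspending the whole game: it's just a much bigger "tab".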
 
Why wouldn't MS call them Jaguar in their docs?
Devs have to target these processors following AMD's recommendations, not some generic x64 model...
Strange.
 
I don't understand the validity of this comparison. Haswell's gains from the on-die cache will be largely due to the more-than-doubled bandwidth achieved through it versus a normal DDR3 interface.

Xbone, on the other hand, gains no bandwidth by using eSRAM + DDR3 versus GDDR5 alone.
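Rough numbers behind that claim: Xbone's 256-bit DDR3-2133 is about 68.3GB/s and the eSRAM is quoted around 102.4GB/s, so even the combined ceiling of ~170GB/s (and only for whatever fits in the 32MB) is in the same ballpark as PS4's 176GB/s of GDDR5 across the whole 8GB.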

Of course, if we're talking about comparing Haswell GT3e against the rumoured Kaveri using GDDR5M, both with similar overall bandwidth, then it might make an interesting analogue for PS4 vs Xbone performance (albeit a likely highly inaccurate one!).

The comparison would be between the eDRAM and non-eDRAM variants of Haswell, to get an idea of the benefits the addition of said eDRAM brings. Regardless, the point was less about measuring the performance and more about how Intel's flagship APU design is much more similar to XB1 than to PS4.
 
But if we're trying to suss out low-latency benefits, we'll have to separate them from the bandwidth-freeing benefits, if that's even possible.

E.g. for Haswell: if the eDRAM doubles bandwidth, we'd want to see more than a 2X performance increase before attributing anything to latency, if I understand correctly.
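Back-of-the-envelope version of that test, with made-up numbers just to show the logic:

Code:
# If a workload is purely bandwidth-bound, doubling bandwidth caps the
# speedup at 2x. Only a speedup beyond that cap hints at latency (or
# other) benefits. Both inputs here are hypothetical.
bw_scale = 2.0           # eDRAM assumed to double effective bandwidth
observed_speedup = 2.3   # made-up benchmark result

excess = observed_speedup / bw_scale
if excess > 1.0:
    print("%.2fx beyond the bandwidth ceiling -> maybe latency is helping" % excess)
else:
    print("within the bandwidth ceiling -> the two effects can't be separated")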

Then again, there are tons of other variables, such as whether Haswell even has the execution-unit headroom for 2X.

I think the Haswell NDA ends June 3 (don't quote me on that), so that's good: benchmarks soon.

As we know, Intel is super smart, so I'm sure they didn't do this lightly.

And Expletive, I agree to a point, but the fact is IGPs were running into a severe bandwidth wall.

But for those who continue to act as if Xbone only has the 68GB/s, it's another hole in their argument. By their thinking, Haswell would only have whatever comes over the main bus, since eDRAM/eSRAM counts for nothing in their book.
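For reference, a desktop Haswell's main bus is dual-channel DDR3-1600, about 25.6GB/s, while Crystalwell is quoted at 50GB/s in each direction, so counting only the main bus would be just as misleading there.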
 
PC-BSD's minimum requirement is 512MB and the recommended amount is 1GB. I think web browsers are memory-hungry applications; Firefox eats up 400-500MB.

PC-BSD is just a variant of FreeBSD, and BSD has many flavors. Depending on which you use, the RAM requirement goes as low as 24MB.

Sony of course will be using a BSD derivative and will tune it for their uses. And since BSD code is permissively licensed, unlike GPL code, they won't have to expose their changes.
 
Interesting that top-end Haswell is 832 GFLOPS.
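That figure works out from the spec: 40 EUs x 16 FLOPS per clock (each EU has two 4-wide FMA-capable pipes) x 1.3GHz = 832 GFLOPS.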

Kinda doesn't make MS look so good that it's already creeping up on Xbone (probably on paper only, though, since texture rate etc. is weak).
 

Makes everything look bad actually

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/6

Looking at 768p (as it's actually playable), the i7-4950HQ is at 39fps; it beats everything but the GT 650M, and does much better than even the desktop Trinity.

Would love to see a next-gen Radeon or GeForce come with 128MB of eDRAM/SRAM built in. The die size is 264mm² + 84mm². It's really nice.
 
Iris Pro does really well in compute. I wonder if it's anything related to the eDRAM? After all, we have heard that compute shaders should benefit the most from eSRAM.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/17

[Compute benchmark chart from the linked review]


Anand does say this:

"We see near perfect scaling from Haswell GT2 to GT3. Crystalwell doesn't appear to be doing much here, it's all in the additional ALUs."

Crystalwell is the eDRAM...

Are we supposed to take the Iris Pro results as corresponding to the eSRAM, so we can say it will not be a miracle worker (in games), or are they unrelated? I am unsure. Would you need to program more specifically for such a cache? (See the sketch below.)

Drivers and other factors could play in here. The Nvidia cards benched there would have better drivers and more capability in most non-math areas, such as texel rate.
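To make the "program more specifically" question concrete, here's the difference in shape, as a pure sketch in Python (shade() and the flat buffers are stand-ins I made up, not any real console API):

Code:
ESRAM_BYTES = 32 * 1024 * 1024   # XB1's 32MB scratchpad
TILE_BYTES = 64 * 1024           # assumed working-tile size

def shade(buf):
    for i in range(len(buf)):    # stand-in for the actual shading work
        buf[i] = (buf[i] + 1) & 0xFF

def process_scratchpad(rt):
    """eSRAM-style: the programmer explicitly stages tiles through the fast pool."""
    assert TILE_BYTES <= ESRAM_BYTES
    for off in range(0, len(rt), TILE_BYTES):
        tile = bytearray(rt[off:off + TILE_BYTES])  # DDR3 -> eSRAM copy in
        shade(tile)                                 # work runs out of fast memory
        rt[off:off + TILE_BYTES] = tile             # eSRAM -> DDR3 copy back

def process_cached(rt):
    """Crystalwell-style: identical code to a cacheless GPU; the transparent
    L4 absorbs the reuse with no programmer effort."""
    shade(rt)

Point being, the Crystalwell numbers tell you what extra bandwidth buys when it's free to the programmer, not what well-tiled eSRAM code could do.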
 

Well, as Anand points out, Iris Pro has just slightly less than double the ALU resources of the HD 4600, and the compute performance seems to scale pretty much perfectly with that. If the eDRAM were providing an additional benefit (other than providing sufficiently increased bandwidth to allow the ALUs to scale), then we should have seen an even greater performance increase.
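(For the numbers: HD 4600 is the 20-EU GT2 configuration and Iris Pro 5200 is the 40-EU GT3, so once clock differences are accounted for the ALU ratio comes out just under 2x.)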

That said, I have no idea how the latency of Crystalwell compares to the eSRAM in Xbone.

What we do seem to be able to take from this, though, is that Intel's compute performance seems to be pretty spectacular - maybe even better than GCN! That bodes well for the possibility of utilising IGPs for compute work alongside discrete GPUs for graphics in the future.
 