Well, how many examples do you have of games primarily targeting the console? Here's
Bioshock (UE3-based, AFAIK), again showing G71 falling behind (scaling from the 7900GS, RSX would at best match the 2900XT if BW is ignored).
But is that down to unified shaders or something else? Note again that the non-unified R580 is more than able to keep up with the unified GPUs.
I think it's obvious that in some game types R6xx has an advantage over G71, but it seems to me that it's the games that put more emphasis on SM3 shaders, which would of course be the more heavyweight console games. I don't really see why console games would try to leverage unified shaders anyway, since the PS3 lacks them.
Lost Planet is another game where R6xx looks particularly good against G71 (although not to the same extent as CoD4 and Bioshock), and again, it's a console-originated game that puts a heavy emphasis on shaders.
http://www.firingsquad.com/hardware/radeon_hd_2600_performance_preview/page15.asp
However, looking at the other games in that review, the 2600XT is generally behind the 7900GS. This actually leads me to revise my earlier estimate of HD 2600 + 25% for Xenos: when it comes to shader-limited games (where the 2600XT does well vs G71) I expect it's more in line with Xenos performance than that. After all, it has 90% of Xenos's raw shader performance arranged in a more efficient design.
In other areas vs Xenos, the 2600 may fare even worse than 80% of its performance, especially with 4xAA and in framebuffer-limited situations.
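For reference, here's the back-of-the-envelope flops tally I'm basing that on (Python sketch; the clocks and the flop-counting convention are the commonly quoted paper figures, so treat them as assumptions):

```python
# Rough ALU throughput comparison, using commonly quoted paper specs.
# Flop-counting conventions vary, so treat these as ballpark figures only.

rv630_gflops = 120 * 2 * 0.800   # HD 2600 XT: 120 SPs x MADD (2 flops) x 800 MHz ~= 192 GFLOPS
xenos_gflops = 48 * 9 * 0.500    # Xenos: 48 ALUs x (vec4 MADD + scalar) x 500 MHz ~= 216 GFLOPS

print(f"RV630 ~ {rv630_gflops:.0f} GFLOPS, Xenos ~ {xenos_gflops:.0f} GFLOPS")
print(f"RV630 / Xenos ~ {rv630_gflops / xenos_gflops:.0%}")   # roughly 90%
```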
How? Assume that there is a 50% advantage. If it runs at 60fps on PS3, then you'll only see 60fps on the TV for the 360. No difference. If most frames take 30ms to render on PS3, you'll see 30fps. If they take 20ms on 360, again you'll only see 30fps on the TV unless triple buffering is enabled or v-sync is completely disabled.
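To make the quantisation concrete, here's a minimal sketch of how double-buffered v-sync at 60Hz snaps render times to whole refresh intervals (illustrative only, numbers assumed):

```python
import math

REFRESH_MS = 1000 / 60   # one 60Hz refresh interval, ~16.7ms

def displayed_fps(render_ms, vsync=True, triple_buffer=False):
    """Approximate on-screen fps for a given per-frame render time."""
    if not vsync or triple_buffer:
        # Uncapped, or triple buffering decouples rendering from scanout
        return 1000 / render_ms
    # Double-buffered v-sync: a finished frame waits for the next refresh,
    # so the effective frame time rounds up to a whole number of refreshes.
    intervals = math.ceil(render_ms / REFRESH_MS)
    return 1000 / (intervals * REFRESH_MS)

print(displayed_fps(30))                        # 30 fps on screen
print(displayed_fps(20))                        # still 30 fps, despite rendering 50% faster
print(displayed_fps(20, triple_buffer=True))    # ~50 fps if triple buffered
```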
I don't think framerates would change. Rather, devs would add noticeably more detail to the 360 version of games. After all, 50% extra raw GPU power at the same framerate is quite a bit. And given how obsessive the internet is at picking apart the tiniest differences in cross-platform games, I'm sure we would know about it by now.
Either that, or lower resolution for PS3 versions of games - again, something we would definitely know about.
The GameSpot results also show the R580 fairly close to R600. It's RV670 that's farther ahead.
Memory isn't a limitation here. The 7900GTX 512MB is 36% ahead of the 7900GS 256MB, which is less than the clock/pipeline advantage. The 8800GTS 320MB is at no disadvantage either. This is 1280x1024 with no AA.
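For comparison, the paper advantage works out quite a bit higher than that 36%, assuming the standard retail clocks and pipe counts (my figures, not from the review):

```python
# Theoretical pixel-shading advantage of the 7900 GTX over the 7900 GS,
# assuming standard retail clocks (650 MHz / 450 MHz) and 24 vs 20 pixel pipes.
gtx = 24 * 650
gs  = 20 * 450
print(f"Paper advantage: {gtx / gs - 1:.0%}")   # ~73%, versus the ~36% seen in the benchmark
```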
True, perhaps memory isn't the problem. However, I still don't think it's down to unified shaders. The R580 just performs too well in comparison to R6xx, especially in the FiringSquad benchies.
R580 is only close to R600. Its advantage over RV630 is less than normal, and it's further behind RV670 than normal. R600 is the odd one out, probably with some bug.
I don't think there's anything particularly strange about R600's performance here. It performs pretty much in line with what its core/shader clocks would suggest vs the 3850 and 3870. Obviously memory bandwidth is not a limitation in this game. R580 (in the quite modest 1900XT 256MB form) is only 15fps behind the theoretically more powerful and unified 3850.
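As a quick sanity check on "in line with its clocks", using the reference clocks I'm assuming (all three parts share the same 320 SP / 16 ROP layout):

```python
# Expected scaling from core clock alone, assuming reference clocks and
# the same 320 SP / 16 ROP configuration for R600 and the two RV670 cards.
clocks = {"HD 2900 XT (R600)": 742, "HD 3850": 670, "HD 3870": 775}
base = clocks["HD 3850"]
for name, mhz in clocks.items():
    print(f"{name}: {mhz / base:.2f}x the 3850's clock")
# On paper R600 lands ~11% above the 3850 and ~4% below the 3870,
# which is roughly where it sits in these benchmarks.
```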
How many of those games were primarily targeted at the console? And how are you able to compare performance between the PC and 360, anyway?
What does it matter if they are targeted primarily at the console or not? In fact, surely if a game is targeted primarily at one architecture over another, then it tells us very little about the relative performance of those two architectures. The fairer comparison is in games which are made with all platforms in mind, with similar levels of optimisation on each.
And even then the PC is at a disadvantage given that, as we know very well on this forum, consoles receive much higher levels of optimisation in cross-platform games than PCs do. It's one of the regularly used arguments for why consoles don't need as much power as PCs to achieve the same results.
I agree that a direct comparison between the 360 and PC using these benchmarks is impossible without actual performance numbers from the consoles. However, what these benchmarks do show us is that the R580 is more than capable of playing the same games, at what we can presume are similar framerates, at significantly higher image quality/resolution settings.
It's not proof, but it's certainly compelling evidence in favour of R580 being more capable. Combine that with the fact that it's newer, bigger (in terms of transistors), does not operate in such a heat/power-restricted environment, and is a fair bit more powerful on paper, and it seems to me that there is a far more compelling argument for R580's superior performance over Xenos than the other way round.
That's not a particularly compelling anecdote. They're trying to pimp up R580 as well, and like I said I agree that PS power is higher. That's all the justification needed by the rep to make that claim.
But what I'm not understanding is why you don't think ROP performance or texturing performance is also higher. R580 runs at a 30% higher clock than Xenos with the same number of texture units. Why would Xenos be faster in that area? In ROPs the situation is even more obvious, since R580 has twice as many on top of the 30% clock speed advantage. No, it doesn't have eDRAM, but as we have seen from the CoD4 benchmarks, even R600/R6xx isn't memory bandwidth limited at these levels, so it's unlikely that R580 with 64GB/s would be heavily bandwidth limited.
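Putting rough numbers on that, with the reference clocks and unit counts as I understand them (so treat these as assumptions):

```python
# Paper texel and pixel rates, R580 (X1900 XTX) vs Xenos.
# Unit counts and clocks are the commonly quoted ones and may be off.
r580_clk, xenos_clk = 650, 500          # MHz
r580_tmu, xenos_tmu = 16, 16            # texture units
r580_rop, xenos_rop = 16, 8             # ROPs

print("Texel rate:", r580_clk * r580_tmu / 1000, "vs", xenos_clk * xenos_tmu / 1000, "GTex/s")
print("Pixel rate:", r580_clk * r580_rop / 1000, "vs", xenos_clk * xenos_rop / 1000, "GPix/s")
# ~10.4 vs 8.0 GTex/s and ~10.4 vs 4.0 GPix/s in R580's favour (ignoring eDRAM)
```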
Sorry, I meant to say "as we saw with RV670". R600 doesn't get any advantage from the 512-bit bus. Comparing RV630 and RV670, BW is not a good explanation for the former performing well below 50% of the latter most of the time. BW per pipe per clock is nearly identical.
BW restrictions are very real for RV630 in fillrate-limited scenarios, because usually alpha blending is enabled then. Look up 3DMark's single-texturing fillrate test, and then note that the texture BW is very low and there's no Z-test either. In games Xenos will destroy RV630 when fillrate matters.
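Here's a rough illustration of why blending turns peak fillrate into a bandwidth problem for RV630; the 32-bit framebuffer and retail bandwidth figures are my assumptions:

```python
# Bandwidth needed to sustain RV630's peak fillrate with alpha blending,
# assuming a 32-bit colour buffer (read + write = 8 bytes per blended pixel).
rops, core_ghz = 4, 0.800
peak_gpix = rops * core_ghz                 # 3.2 GPix/s
blend_bw_needed = peak_gpix * 8             # ~25.6 GB/s just for colour traffic

print(f"Needed: ~{blend_bw_needed:.1f} GB/s; HD 2600 XT has ~22.4 (GDDR3) or ~35.2 (GDDR4) GB/s")
```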
But if you're saying that RV630 cannot perform even close to its ROP potential in fillrate-limited situations because of memory bandwidth, then that's by definition a memory-bandwidth-limited situation, not a fillrate-limited one. R600 has 4x the available bandwidth, and thus there is the potential for R600 to perform 4x faster - if bandwidth is truly the limitation. Sure, percentage-wise R600 may also be just as restricted in the ROPs due to bandwidth, but that wouldn't stop it performing 4x faster in what is essentially a bandwidth-limited situation.
Of course, in reality RV670 shows that R600 isn't bandwidth limited in pretty much any situation, so given that RV630 has, as you say, the same bandwidth per ROP per clock, doesn't that also hold true for RV630?
I mean, if R600's ROPs aren't being limited by bandwidth and all the ratios are the same with RV630, then why would RV630 be limited?
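For what it's worth, here's the bandwidth-per-ROP arithmetic with the retail figures I'm assuming (pick whichever memory variant of the 2600 XT you prefer):

```python
# Memory bandwidth per unit of ROP throughput (GB/s per GPix/s),
# using commonly quoted retail clocks; treat the exact numbers as assumptions.
cards = {
    #                      GB/s, ROPs, core GHz
    "HD 2600 XT (GDDR3)": (22.4,  4, 0.800),
    "HD 2600 XT (GDDR4)": (35.2,  4, 0.800),
    "HD 2900 XT (R600)":  (105.6, 16, 0.742),
    "HD 3870 (RV670)":    (72.0,  16, 0.775),
}
for name, (bw, rops, ghz) in cards.items():
    print(f"{name}: {bw / (rops * ghz):.1f} GB/s per GPix/s")
```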
COD4 is an example of a console-like workload and has been quite useful to me in proving my points (high vertex load, beneficial for unified shaders, bad for G71). However, it is not a typical example of workloads for PC games. For other cross-platform games, please, give examples. Find me games that primarily target the consoles, have PC versions, and have benchmarks out there.
I think we are both assigning different meanings to how a workload applies to a platform.
I agree that CoD4 is a different type of workload to older games in that it's much more shader-heavy. I don't see that as making it a "console" workload as opposed to a PC workload, though. I simply see it as a more modern workload which is representative of both modern console and PC games.
The older "PC" workloads which you refer to are simply older games which also represented workloads for previous-generation consoles.
Even if CoD4 did specifically lean towards unified shader exploitation (which I personally don't believe), I still wouldn't consider it a console-based workload. After all, only 1 of the 3 current-generation consoles uses unified shaders, whereas every PC GPU currently in production actually uses a more sophisticated unified shader design.
Take a look at Crysis, for example. That has a huge vertex load (much higher than CoD4) and yet it's a PC exclusive. CoD4, Bioshock, Lost Planet, Crysis, Supreme Commander, etc. - these are all examples of modern game workloads, not console-based workloads IMO.
It does seem to be modern workloads that G71 suffers in comparatively, but I don't see that as being down to unified shaders, as R580 shows none of the same symptoms. My guess is that it's weak when it comes to SM3 shaders, but that's just a shot in the dark :smile:
I'm not challenging the fact that cross-platform games have similar workloads regardless of platform. I'm saying that your notion that R580 >> RV630+25% ~= Xenos and R580 ~= G71 is not based on modern cross-platform games, but on PC games, and often old ones.
I think you might have misunderstood my argument a bit there; apologies if that's my fault. I'm not saying R580 ~= G71, as I do completely agree with you that that doesn't seem to be the case for the more modern workloads. In older workloads, yes, G71 can keep up well with R580 and thus will probably outperform or at least perform similarly to Xenos, but in more modern workloads it does fall behind both R580 and even RV630 in some cases, suggesting that there would be scenarios where G71 might perform poorly next to Xenos (assuming Xenos shares R580's and R6xx's strength in modern workloads).
However, RV630+25% ~= Xenos does still seem to be a reasonable assumption in broad terms. In fact, I expect the two to be more equal when shader limitations come into play, which is most likely the very same scenario where G71 is comparatively weak next to RV630. Obviously there will be other scenarios, such as the use of 4xAA or framebuffer-bandwidth-limited situations, where Xenos could go beyond that 25%. Also, I think the benchmarks support R580 being generally faster than RV630+25%.
Well, you'd be wrong. RV630 couldn't get close to 32 Gsamples/sec Z-only fillrate. It couldn't get close to 4 GPix/sec textured, Z-tested alpha fillrate. There's nothing that will take its performance way beyond Xenos in a similar way except more registers (I think) and quasi-scalar shaders, which are more useful for GPGPU than console games.
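For context, this is where I understand those Xenos figures to come from; the per-clock sample rates are my recollection of the eDRAM ROP design, so correct me if they're off:

```python
# Where the quoted Xenos fillrate figures come from (my understanding of the
# eDRAM ROPs; treat the multipliers as assumptions).
rops, clk_ghz = 8, 0.500
base_gpix = rops * clk_ghz                       # 4 GPix/s colour fillrate
msaa_samples = 4                                 # 4xAA samples handled inside the eDRAM
z_only_rate = 2                                  # double-rate for Z/stencil-only
print("Textured/blended:", base_gpix, "GPix/s")
print("Z-only with 4xAA:", base_gpix * msaa_samples * z_only_rate, "Gsamples/s")  # 32
```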
But if those things were truly such huge performance limitations because of bandwidth restrictions, then why, with the same ROP/bandwidth ratio as RV630, doesn't R600 significantly outperform RV670, which should be even more memory bandwidth limited?
i.e. the HD 3870 should be more bandwidth-restricted for its ROP power than RV630, and yet that architecture shows pretty much no advantage when given far more bandwidth (in the form of R600).