Console that fared best vs PCs available at launch

Mintmaster said:
That's why, in the era of 3D accelerators, I think Dreamcast came closest to a PC at launch with 360 a close second
If resolution is that relevant, I say PS2 should win by default. It's the only console of the 3D-accelerated era that actually ~matched the max supported PC resolution at the time of release. And purely on hw, it had an actual edge as well.

Thing is though - looking at the actual launch-SW offering, nothing with 3D acceleration looked that good relative to PC (a genre or two might stand out, but overall it would go to PC), outside of PS1, which competed with non-accelerated PCs.
 
If resolution is that relevant, I say PS2 should win by default. It's the only console of the 3D-accelerated era that actually ~matched the max supported PC resolution at the time of release. And purely on hw, it had an actual edge as well.

I'm not sure if I understand this.

The majority of PS2's games are 640x480 interlaced (with only a few games supporting 480p and some other funky resolutions). Who hasn't played a game at 1024x768+ when they had a GeForce 2, which was also released in the same year (I think the GeForce 2 came out around May 2000, a full five months before PS2's US launch)? In fact, the GeForce 2 GTS was able to play Quake 3 and UT at 1600x1200 at very playable framerates. So how is the PS2 matching that?
 
I was playing in 1024x768 occasionally in 1998 with my Matrox G200. Banshee was around then, too, and could do 1024x768. In 1999, I was gaming almost exclusively at that resolution with a TNT and then a G400 MAX.

In 2000, >1024x768 was totally an option. I was on a 17" CRT so I stuck with it though.
 
Huh, yeah, I was playing games at 1024x768 and higher in 1999. HD gaming since 1999 for me. :???:
 
That's fine, but my point is that just because a game runs locked at 30fps on a console doesn't mean the hardware isn't capable of averaging 50 or more. It really just means it is probably not fast enough to do 60. Showing average performance numbers for PC hardware doesn't really give a level comparison to consoles.

But they lock it to 30fps because the framerate fluctuates too much. And seeing how many 30fps-locked games still dip below 30fps, you know why they locked it. ;)

How is that comment useful to the discussion?

I could ask you the same about the above. I doubt devs would lock a game at 30fps if it ran at an average of 50fps, instead of using the spare processing power to deliver more detail at the 30fps lock. And seeing how unstable the 30fps "locks" are, I would think that is the case! ;)

And there are several 60fps games that really average 50fps or lower, with the framerate bouncing between sub-30fps and 60fps. That was their choice, while others chose to add more "candy" at 30fps!
 
That's fine, but my point is that just because a game runs locked at 30fps on a console doesn't mean the hardware isn't capable of averaging 50 or more. It really just means it is probably not fast enough to do 60. Showing average performance numbers for PC hardware doesn't really give a level comparison to consoles.

But how many console games are locked at 30fps 100% of the time? I.e. have zero noticeable slowdown, ever?

It's very few as far as I can see, at least at the higher end of the graphics scale.

A 38fps average is more than enough to throw a 30fps cap on there and stick at 30fps, or close to it, for the vast majority of the time IMO, which seems to be the norm for most "30fps" console games.
 
I don't know for certain how much more (or less) powerful each console is than a PC, but I am certain about software effectiveness/efficiency:

[Attached image: poly.png]


from here:
Siggraph 2002
http://trowley.org/sig2002/ea-shader.pdf (unfortunately this link does not open)
http://www.stoneschool.com/Work/Siggraph/2002/
http://trowley.org/sig2002.html
 
Well, how many examples do you have of games primarily targeting the console? Here's Bioshock (UE3-based, AFAIK), again showing G71 falling behind (scaling the 7900GS, RSX would at best match the 2900XT if BW is ignored).

But is that down to unified shaders or something else? Note again that the non-unified R580 is more than able to keep up with the unified GPUs.

I think it's obvious that in some game types R6xx has an advantage over G71, but it seems to me that it's in games that put more emphasis on SM3 shaders, which would of course be the more heavyweight console games. I don't really see why console games would try to leverage unified shaders anyway, since the PS3 lacks them.

Lost Planet is another game where R6xx looks particularly good against G71 (although not to the same extent as CoD4 and Bioshock), and again, it's a console-originated game that puts a heavy emphasis on shaders.

http://www.firingsquad.com/hardware/radeon_hd_2600_performance_preview/page15.asp

However, looking at the other games in that review, the 2600XT is generally behind the 7900GS. This actually leads me to revise my conclusion of HD 2600 + 25% for Xenos, as when it comes to shader-limited games (where the 2600XT does well vs G71) I expect it's more in line with Xenos performance than that. After all, it has 90% of Xenos's raw shader performance arranged in a more efficient design.

In other areas vs Xenos, the 2600 may fare even worse than 80% of its performance, especially with 4xAA and framebuffer-limited situations.

How? Assume that there is a 50% advantage. If it runs at 60fps on PS3, then you'll still only see 60fps on the TV for the 360. No difference. If most frames take 30ms to render on PS3, you'll see 30fps. If most frames take 20ms to render on 360, again you'll only see 30fps on the TV unless triple buffering is enabled or v-sync is completely disabled.
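To make the v-sync point concrete, here's a rough sketch of the quantisation (assuming a 60Hz display and double buffering; the render times are purely illustrative):

```python
# Rough sketch of how double-buffered v-sync quantises the displayed
# frame rate. Assumes a 60Hz display; render times are illustrative.
import math

REFRESH_MS = 1000.0 / 60.0  # ~16.7ms per refresh interval

def displayed_fps(render_ms):
    # With v-sync and double buffering a finished frame can only be
    # shown on a refresh boundary, so render time effectively rounds
    # up to a whole number of refresh intervals.
    intervals = max(1, math.ceil(render_ms / REFRESH_MS))
    return 1000.0 / (intervals * REFRESH_MS)

# A 30ms frame (PS3 in the example) and a 20ms frame (360 with a 50%
# advantage) both miss the 16.7ms boundary and land on the next one:
print(displayed_fps(30.0))  # 30.0
print(displayed_fps(20.0))  # 30.0
print(displayed_fps(16.0))  # 60.0
```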

I don't think framerates would change. Rather, devs would add noticeably more detail into the 360 version of games. After all, 50% extra raw GPU power at the same framerate is quite a bit. And given how obsessive the internet is about picking apart the tiniest differences in cross-platform games, I'm sure we would know about it by now ;)

Either that or a lower resolution for PS3 versions of games - again, something we would definitely know about.

The GameSpot results also show the R580 fairly close to R600. It's RV670 that's farther ahead.

Memory isn't a limitation here. The 7900GTX 512MB is 36% ahead of the 7900GS 256MB, which is less than the clock/pipeline advantage. The 8800GTS 320MB is at no disadvantage either. This is 1280x1024 with no AA.

True, perhaps memory isn't the problem. However, I still don't think it's down to unified shaders. The R580 just performs too well in comparison to R6xx, especially in the FiringSquad benchies.

R580 is only close to R600. Its advantage over RV630 is less than normal, and it's further behind RV670 than normal. R600 is the odd one out, probably with some bug.

I don't think there's anything particularly strange about R600's performance here. It performs pretty much in line with what its core/shader clocks would suggest vs the 3850 and 3870. Obviously memory bandwidth is not a limitation in this game. R580 (in the quite modest 1900XT 256MB form) is only 15fps behind the theoretically more powerful and unified 3850.

How many of those games were primarily targeted at the console? And how are you able to compare performance between the PC and 360, anyway?

What does it matter if they are targeted primarily at the console or not? In fact, surely if a game is targeted primarily at one architecture over another, then it tells us very little about the relative performance of those two architectures. The fairer comparison is in games which are made with all platforms in mind, with similar levels of optimisation on each.

And even then the PC is at a disadvantage given that, as we know very well on this forum, consoles receive much higher levels of optimisation in cross-platform games than PCs do. It's one of the regularly used arguments for why consoles don't need as much power as PCs to achieve the same results.

I agree that a direct comparison between the 360 and PC using these benchmarks is impossible without actual performance numbers from the consoles. However, what these benchmarks do show us is that the R580 is more than capable of playing the same games, at what we can presume are similar framerates, at significantly higher image quality/resolution settings.

It's not proof, but it's certainly compelling evidence in favour of R580 being more capable. Combine that with the fact that it's newer, bigger (in terms of transistors), does not operate in such a heat/power-restricted environment and is a fair bit more powerful on paper, and it seems to me that there is a far more compelling argument for R580's superior performance over Xenos than the other way round.

That's not a particularly compelling anecdote. They're trying to pimp up R580 as well, and like I said I agree that PS power is higher. That's all the justification needed by the rep to make that claim.

But what I'm not understanding is why you don't think ROP performance or texturing performance is also higher. R580 runs at a 30% higher clock speed than Xenos with the same number of texture units. Why would Xenos be faster in that area? In ROPs the situation is even more obvious, since R580 has twice as many on top of the 30% clock speed advantage. No, it doesn't have eDRAM, but as we have seen from the CoD4 benchmarks, even R600/R6xx isn't memory bandwidth limited at these levels, so it's unlikely that R580 with 64GB/s would be heavily bandwidth limited.
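For reference, a back-of-envelope sketch of the theoretical rates being compared here, using the commonly quoted unit counts and clocks (assumed figures, not measured performance):

```python
# Back-of-envelope theoretical rates, using commonly quoted figures
# (assumed, not measured): R580 at 650MHz with 16 TMUs / 16 ROPs,
# Xenos at 500MHz with 16 TMUs / 8 ROPs.
r580  = {"clock_mhz": 650, "tmus": 16, "rops": 16}
xenos = {"clock_mhz": 500, "tmus": 16, "rops": 8}

def texel_rate(gpu):  # Gtexels/s
    return gpu["clock_mhz"] * gpu["tmus"] / 1000.0

def pixel_rate(gpu):  # Gpixels/s
    return gpu["clock_mhz"] * gpu["rops"] / 1000.0

print(texel_rate(r580), texel_rate(xenos))  # 10.4 vs 8.0  (+30%)
print(pixel_rate(r580), pixel_rate(xenos))  # 10.4 vs 4.0  (+160%)
```

On paper that's the 30% texturing gap and the 2.6x raw ROP gap being argued about; whether eDRAM closes the ROP gap in practice is the open question.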

Sorry, I meant to say "as we saw with RV670". R600 doesn't get any advantage from the 512-bit bus. Comparing RV630 and RV670, BW is not a good explanation for the former performing well below 50% of the latter most of the time. BW per pipe per clock is nearly identical.

BW restrictions are very real for RV630 in fillrate-limited scenarios, because usually alpha blending is enabled then. Look up 3DMark's single-texturing fillrate test, and then note that the texture BW is very low and there's no Z-test either. In games, Xenos will destroy RV630 when fillrate matters.
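A rough sketch of why blended fill chews through external bandwidth so quickly on a narrow bus (assuming an uncompressed 32-bit colour buffer and 32-bit Z; the byte counts are simplifying assumptions, not vendor figures):

```python
# Rough estimate of the external bandwidth that alpha-blended, Z-tested
# fill consumes, assuming an uncompressed 32-bit colour buffer and
# 32-bit Z (simplifying assumptions, not vendor figures).
def blend_bw_gbps(gpix_per_s):
    bytes_per_pixel = (
        4 +  # colour read (needed to blend against)
        4 +  # colour write
        4    # Z read for the Z-test
    )
    return gpix_per_s * bytes_per_pixel

# Sustaining 4 Gpix/s of blended, Z-tested fill would want roughly:
print(blend_bw_gbps(4.0), "GB/s")  # 48.0 GB/s of framebuffer traffic
```

That kind of framebuffer traffic is well beyond what RV630's 128-bit bus can supply, while Xenos keeps it on the eDRAM.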

But if you're saying that RV630 cannot perform even close to its ROP potential in fill-rate-limited situations because of memory bandwidth, then that's obviously a memory-bandwidth-limited, not fill-rate-limited, situation. R600 has 4x the available bandwidth, and thus there is the potential for R600 to perform 4x faster - if bandwidth is truly the limitation. Sure, percentage-wise R600 may also be just as restricted in the ROPs due to bandwidth, but that wouldn't stop it performing 4x faster in what is essentially a bandwidth-limited situation.

Of course, in reality RV670 shows that R600 isn't bandwidth limited in pretty much any situation, and so, given that RV630 has, as you say, the same bandwidth per ROP/clock, doesn't that also hold true for RV630?

I mean, if R600's ROPs aren't being limited by bandwidth and all the ratios are the same on RV630, then why would RV630 be limited?

COD4 is an example of a console-like workload and has been quite useful to me in proving my points (high vertex load, beneficial for unified shaders, bad for G71). However, it is not a typical example of workloads for PC games. For other cross-platform games, please, give examples. Find me games that primarily target the consoles, have PC versions, and have benchmarks out there.

I think we are both assigning different meanings to how a workload applies to a platform.

I agree that CoD4 is a different type of workload from older games in that it's much more shader-heavy. I don't see that as making it a "console" workload as opposed to a PC workload, though. I simply see it as being a more modern workload which is representative of both modern console and PC games.

The older "PC" worloads which you refer to are simply older games which also represented worloads for previous generation consoles.

Even if CoD4 did specifically lean towards exploiting unified shaders (which I personally don't believe), I still wouldn't consider it a console-based workload. After all, only 1 of the 3 current-generation consoles uses unified shaders, whereas every currently-in-production PC GPU actually uses a more sophisticated unified shader design.

Take a look at Crysis, for example. That has a huge vertex load (much higher than CoD4) and yet it's a PC exclusive. CoD4, Bioshock, Lost Planet, Crysis, Supreme Commander etc., etc. - these are all examples of modern game workloads, not console-based workloads IMO.

It does seem to be modern workloads that G71 suffers in comparatively, but I don't see that as being down to unified shaders, as R580 shows none of the same symptoms. My guess is that it's weak when it comes to SM3 shaders, but that's just a shot in the dark :smile:

I'm not challenging the fact that cross-platform games have similar workloads regardless of platform. I'm saying that your notion that R580 >> RV630+25% ~= Xenos and R580 ~= G71 is not based on modern cross-platform games, but PC games and often old ones.

I think you might have misunderstood my argument a bit there; apologies if that's my fault. I'm not saying R580 ~= G71, as I do completely agree with you that that doesn't seem to be the case for the more modern workloads. In older workloads, yes, G71 can keep up well with R580 and thus will probably outperform, or at least perform similarly to, Xenos, but in more modern workloads it does fall behind both R580 and even RV630 in some cases, suggesting that there would be scenarios where G71 might perform poorly next to Xenos (assuming Xenos shares R580's and R6xx's strength in modern workloads).

However, RV630+25% ~= Xenos does still seem to be a reasonable assumption in broad terms. In fact, I expect the two to be more equal when shader limitations come into play - which is most likely the very same scenario where G71 is comparatively weak next to RV630. Obviously there will be other scenarios, such as the use of 4xAA or framebuffer-bandwidth-limited situations, where Xenos could go beyond that 25%. Also, I think the benchmarks support R580 being generally faster than RV630+25%.

Well, you'd be wrong. RV630 couldn't get close to 32 Gsamples/sec Z-only fillrate. It couldn't get close to 4GPix/sec textured, Z-tested alpha fillrate. There's nothing that will take its performance way beyond Xenos in a similar way except more registers (I think) and quasi-scalar shaders, which are more useful for GPGPU than console games.

But if those things were truly such huge performance limitations because of bandwidth restrictions, then why, with the same ROP/bandwidth ratio as RV630, doesn't R600 significantly outperform RV670, which should be even more memory bandwidth limited?

I.e. the HD 3870 should be more bandwidth restricted for its given ROP power than RV630, and yet it shows pretty much no advantage when given more bandwidth (in the form of R600).
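For what it's worth, here's the ratio in question worked through with commonly quoted specs (the clock, ROP and bandwidth figures are assumptions taken from public spec sheets, so treat the exact values loosely):

```python
# Bandwidth per ROP per clock, using publicly quoted specs (assumed
# here): HD 2600 XT GDDR4, HD 2900 XT and HD 3870.
cards = {
    "RV630 (HD 2600 XT)": {"rops": 4,  "clock_ghz": 0.800, "bw_gbps": 35.2},
    "R600  (HD 2900 XT)": {"rops": 16, "clock_ghz": 0.742, "bw_gbps": 105.6},
    "RV670 (HD 3870)":    {"rops": 16, "clock_ghz": 0.775, "bw_gbps": 72.0},
}

for name, c in cards.items():
    per_rop_clock = c["bw_gbps"] / (c["rops"] * c["clock_ghz"])
    print(f"{name}: {per_rop_clock:.1f} bytes per ROP-clock")
# RV630 comes out with roughly twice the external bandwidth per
# ROP-clock of RV670, which is the basis of the argument above.
```

Counted per texture unit rather than per ROP, the two land much closer together, which may be what the earlier "per pipe per clock" figure was referring to.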
 
A 38fps average is more than enough to throw a 30fps cap on there and stick at 30fps, or close to it, for the vast majority of the time IMO, which seems to be the norm for most "30fps" console games.

This would depend entirely on the game and the kind of variance in different parameters - you could very, very easily have a situation in something like an FPS where a 38fps average with vsync off wouldn't give you an acceptable 30fps cap with vsync and double buffering on.
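A toy simulation of that point - the same 38fps average can either hold a 30fps cap or miss it regularly, depending entirely on frame-time variance (the two distributions below are made up purely for illustration):

```python
# Toy simulation: the same ~38fps average either holds a 30fps v-sync
# cap or misses it, depending on frame-time variance. The two
# distributions are made up purely for illustration.
import math
import random

random.seed(1)
REFRESH_MS = 1000.0 / 60.0

def capped_fps(frame_times_ms):
    # Double-buffered v-sync with a 30fps cap: every frame waits for at
    # least two refresh intervals, and slower frames slip to the next one.
    shown = [max(2, math.ceil(t / REFRESH_MS)) * REFRESH_MS
             for t in frame_times_ms]
    return 1000.0 * len(shown) / sum(shown)

mean_ms = 1000.0 / 38.0  # ~26.3ms, i.e. a 38fps uncapped average

steady = [random.gauss(mean_ms, 2.0) for _ in range(10000)]  # low variance
spiky  = [random.gauss(mean_ms, 8.0) for _ in range(10000)]  # high variance

print(round(capped_fps(steady), 1))  # ~30.0 - the cap holds
print(round(capped_fps(spiky), 1))   # well below 30 - regular drops to 20fps
```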

I'm used to PC frame rates that are all over the place (e.g. Team Fortress where I go from a steady 75fps to a fairly consistent 25 fps depending on where I stand in a map) but if a console game spends more than a very, very small percentage of its time at 20 fps instead of its 30 fps cap I consider it a bit unpolished. Like most PC gamers I know, I have a bit of a double standard when it comes to frame rates...

Triple buffering for great justice IMO, btw - I don't see how you can justify not spending the memory on a third AA-resolved buffer if your game is suffering from horribly ugly tearing on a remotely regular basis (given the enormity of its benefit compared to anything else you could cram into that space), and I don't see why MS doesn't want you using triple buffering on the PC. Puzzling.
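As a rough sanity check on the memory cost of that third buffer (illustrative resolutions and an assumed 32-bit colour format):

```python
# Rough memory cost of a third (already AA-resolved) colour buffer,
# assuming a 32-bit colour format. Resolutions are just examples.
def buffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(buffer_mb(1280, 720), 2))   # ~3.52 MB at 720p
print(round(buffer_mb(1920, 1080), 2))  # ~7.91 MB at 1080p
```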
 
If resolution is that relevant, I say PS2 should win by default. It's the only console of the 3D-accelerated era that actually ~matched the max supported PC resolution at the time of release. And purely on hw, it had an actual edge as well.

What???

The highest resolution supported on the PS2 was the upscaled 1080i in GT4, if I'm not wrong.

Meanwhile, I have been playing at HD resolutions on PCs since before 1999.
 
Ostepop said:
Meanwhile, I have been playing at HD resolutions on PCs since before 1999.
I fail to see how that disagrees with me - I said that PS2 was the first console with HD support - a capability which, at the time of release, was roughly on par with how high PCs could go. And that obviously wasn't the case for any other console in the 3D era.

I made the point that it took more time for SW to showcase it, but there's been a fair amount of debate in this thread focusing purely on the HW aspects of the respective consoles.
 
You had the option to run two Voodoo2 cards together to raise the 3D resolution ceiling to 1024x768 smoothly some years before - does that count?
 
What???

The highest resolution supported on the PS2 was the upscaled 1080i in GT4, if I'm not wrong.

Meanwhile, I have been playing at HD resolutions on PCs since before 1999.

GT4 is the only PS2 game I can think of with any HD support at all.
 
If resolution is that relevant, I say PS2 should win by default. It's the only console of the 3D-accelerated era that actually ~matched the max supported PC resolution at the time of release. And purely on hw, it had an actual edge as well.
How many games actually had high-resolution support? The only major one I know of is the pseudo-1080i of GT4, and that was many years after launch. By the time PS2 was launched, we were already in the GeForce era.

Thing is though - looking at the actual launch-SW offering, nothing with 3D acceleration looked that good relative to PC (a genre or two might stand out, but overall it would go to PC), outside of PS1, which competed with non-accelerated PCs.
Overall I agree, but I think Dreamcast and 360 came closest since accelerators were available.
 
You could even run a few games at 800x600 on a Voodoo Rush back in 1997.

960x720 on a RIVA 128. The Voodoo1 actually could do 800x600 if the game skipped the Z-buffer, so that's back to 1996. And to think that Xbox and Cube were running 720x480 (or 640x480?) almost exclusively throughout their entire existence. Just omgwow... ;) Of course, that was probably almost entirely caused by the SDTVs they had to output to. I'm sure it was nice to have an NV2x only required to push that many pixels, though. Lots of fillrate to play with.
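Just to put numbers on the "lots of fillrate" remark, a quick pixel-count comparison (standard resolutions, nothing console-specific assumed):

```python
# Pixel counts behind the "lots of fillrate to play with" remark:
# an SD television target is a small fraction of the pixels PC cards
# of the era were being asked to push.
resolutions = {
    "SDTV 640x480":  640 * 480,
    "SDTV 720x480":  720 * 480,
    "PC 1024x768":   1024 * 768,
    "PC 1600x1200":  1600 * 1200,
}
base = resolutions["SDTV 640x480"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:>9,} pixels ({pixels / base:.2f}x)")
# 1600x1200 is over 6x the pixel count of 640x480, so the same GPU has
# that much more per-pixel headroom when it only has to feed an SDTV.
```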
 
960x720 on a RIVA 128. The Voodoo1 actually could do 800x600 if the game skipped the Z-buffer, so that's back to 1996. And to think that Xbox and Cube were running 720x480 (or 640x480?) almost exclusively throughout their entire existence.

Ya know... it's funny... My TV detects Max Payne 2 or I suppose... the Xbox, as sending a 720x480 signal. Curious.
 