Xenos: How fast is it?

Even here you can't make a valid comparison, as any game running on the console is going to be limited to ~512 MB (or less) of combined video and system memory, with the added constraint of having to be designed to run directly from relatively slow optical media.
Yes, it is impossible to make an accurate comparison of individual pieces of the hardware, but I think it is definitely possible to build a PC that matches what you see the 360 dishing out. Call it "platform equivalency" or something: a PC that is, for practical purposes, as powerful as the 360. It would surely vary from game to game, though.
 
I think the only comparison we can really do is look at how well multiplatform games run on PC vs. the 360. One could conceivably build a PC that would match the 360's practical performance in a game. Just swap out parts until you've found a near match at the same resolution and similar detail settings and there you go.

My guess would be something like a GeForce 8600 GT / Radeon 2600 XT with a mid-range Athlon64 X2 or low-range Core 2 Duo.

In one of Capcom's older presentations they mentioned Xenon, under optimal conditions, being about on par with a 1.8 GHz Core 2 Duo.

For GPU performance: I previously had a 7900 GT (500 MHz, 512 MB VRAM) and an Opteron 185 (2x 2.6 GHz A64). Crysis averaged around 20 fps at flat "high" settings in 720p; Gears of War was a mostly locked 30 fps (dips to 25 fps in some cutscenes with heavy DOF) at 1280x1024, maxed, 16xAF; Assassin's Creed at max settings, 1280x1024, 4xMSAA/TSAA and 16xAF was about 25-30 fps, sitting at 30 fps most of the time.

At the time I didn't have The Orange Box (I don't think it was out yet), but I had the Lost Coast HDR demo and it ran fine at the same settings as AC. Same with ME: solid performance, but with no MSAA of course.

All had triple buffering and vsync enabled. I also used the 'very high' quality preset in the drivers.

Anyway, not much, but it was interesting back then to take the "temp".


Btw, I remember the Nvidia 7xxx "Luna" tech demo running on the PS3/RSX in a demonstration (video) at around 30 fps, while on the 7900 GT it was around 60 fps at 720p and, IIRC, near that at 1280x1024.
 
I seem to remember Capcom claiming the Xenon CPU was on par with a 2.4 GHz dual-core Pentium too.
 
I recently discovered that the Mafia 2 demo benchmark runs massively faster on my PC on 32-bit operating systems than on 64-bit (with XP 32-bit being the fastest). There is a huge performance hit between Vista 32-bit and 7 64-bit, and the physics-heavy (PhysX) bits (asplosions) seem the most heavily affected.

No one will ever waste three days booting up their 360 under different OSes and with different drivers (and using different HDDs) to see how fast a benchmark runs. No 360 owner will ever have to wonder if the 64-bit version of PhysX is a straight recompile of the 32-bit version, only with 64-bit variables and pointers.
 
Yeah, but they don't get the APEX PhysX system at all. Though I find your observation interesting, as I have Vista x64.

However, I have noticed that APEX PhysX on the CPU is actually no problem if one bug is fixed, a bug occurring under certain conditions. The game can stack up to 10k individual, advanced, physicalised persistent particles. The problem is that they don't get culled out, and it seems the physics stays constantly active even when the particles aren't being affected by any force.

Thus, for example, in the first chapter set in WWII, with tons of blazing guns, the particle debris count keeps increasing and slowing things down until it reaches 10k particles. At that point it doesn't matter where you are or where you look: all the particles still have their physics running despite being out of the scene, not visible and not moving, and the framerate will not change at all until a savegame is reloaded.

This is at least the case for me in Vista x64.
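
For what it's worth, the missing behaviour sounds like a sleep/cull pass over the debris. Here's a minimal sketch in plain C++ of the kind of thing I mean (hypothetical structures and thresholds; this is not the game's actual code nor the APEX API):

```cpp
#include <vector>

// Hypothetical debris particle; the game's real APEX data structures aren't public.
struct DebrisParticle {
    float px, py, pz;      // position
    float vx, vy, vz;      // velocity
    int   idleFrames = 0;  // frames spent below the motion threshold
    bool  asleep = false;  // skipped by the solver when true
};

// Put particles to sleep once they have been (nearly) motionless for a while.
// Sleeping particles are skipped entirely, so 10k stacked debris pieces stop
// eating CPU time once they have settled instead of being simulated forever.
void updateDebris(std::vector<DebrisParticle>& particles, float dt) {
    const float sleepSpeedSq  = 0.01f * 0.01f; // squared speed threshold
    const int   framesToSleep = 30;            // ~half a second at 60 fps

    for (DebrisParticle& p : particles) {
        if (p.asleep)
            continue; // no integration, no collision, no cost

        const float speedSq = p.vx * p.vx + p.vy * p.vy + p.vz * p.vz;
        if (speedSq < sleepSpeedSq) {
            if (++p.idleFrames >= framesToSleep) {
                p.asleep = true;
                continue;
            }
        } else {
            p.idleFrames = 0;
        }

        // Trivial integration stands in for the real solver step.
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}
```

A wake-up step (reactivating anything a new force or collision touches) would complete the picture, but even this much would stop settled, invisible debris from being simulated until a savegame reload.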
 
I recently discovered that the Mafia 2 demo benchmark runs massively faster on my PC on 32-bit operating systems than on 64-bit (with XP 32-bit being the fastest). There is a huge performance hit between Vista 32-bit and 7 64-bit, and the physics-heavy (PhysX) bits (asplosions) seem the most heavily affected.

No one will ever waste three days booting up their 360 under different OSes and with different drivers (and using different HDDs) to see how fast a benchmark runs. No 360 owner will ever have to wonder if the 64-bit version of PhysX is a straight recompile of the 32-bit version, only with 64-bit variables and pointers.

And no PC gamer has to do that either, so what exactly is your point? Even with very middling hardware (a 9600 GT and any Core 2 Duo, say) Mafia 2 will run significantly better than the 360 release no matter what OS you run it under.


Yeah, but they don't get the APEX PhysX system at all. Though I find your observation interesting, as I have Vista x64.

However, I have noticed that APEX PhysX on the CPU is actually no problem if one bug is fixed, a bug occurring under certain conditions. The game can stack up to 10k individual, advanced, physicalised persistent particles. The problem is that they don't get culled out, and it seems the physics stays constantly active even when the particles aren't being affected by any force.

Thus, for example, in the first chapter set in WWII, with tons of blazing guns, the particle debris count keeps increasing and slowing things down until it reaches 10k particles. At that point it doesn't matter where you are or where you look: all the particles still have their physics running despite being out of the scene, not visible and not moving, and the framerate will not change at all until a savegame is reloaded.

This is at least the case for me in Vista x64.

You know, I think that's done on purpose. It's usually Nvidia that codes these PhysX effects, and since Nvidia uses PhysX to sell high-end GPUs, it makes little sense for them to allow the physics simulation to run well on ATI and/or low-end Nvidia rigs.

Anyway, this is way off topic.
 
I think the only comparison we can really do is look at how well multiplatform games run on PC vs. the 360. One could conceivably build a PC that would match the 360's practical performance in a game. Just swap out parts until you've found a near match at the same resolution and similar detail settings and there you go.

My guess would be something like a GeForce 8600 GT / Radeon 2600 XT with a mid-range Athlon64 X2 or low-range Core 2 Duo.


Xenon is an in-order triple-core with what has to be insufficient L2 cache to go around. I'm not convinced it holds up that well to an A64 X2. Of course, the A64 X2 line spans 1.6-3.2 GHz and Xenon probably fits in there somewhere. But I'm sure that if you have some code that just happens to work really well on Xenon, it will be a speed demon thanks to its clock speed, three cores and SIMD capabilities.

I think Capcom had made the claim that Xenon was roughly equivalent to a 3.0 GHz Pentium D, which would fit your assumption about Xenon being equal to Athlon X2s in that general region of clock speeds you mentioned. Here's an old test of Pentium Ds vs. Athlon X2s back when AMD was still on top in performance. Pretty hilarious, yet ironic, seeing 2.4 GHz Athlon X2s beating 3.2 GHz Pentium Ds by a good margin in many of the benchmarks. At least it's comforting for users who still have that generation of Athlon X2s.
 
I think Capcom had made the claim that Xenon was roughly equivalent to a 3.0 GHz Pentium D, which would fit your assumption about Xenon being equal to Athlon X2s in that general region of clock speeds you mentioned. Here's an old test of Pentium Ds vs. Athlon X2s back when AMD was still on top in performance. Pretty hilarious, yet ironic, seeing 2.4 GHz Athlon X2s beating 3.2 GHz Pentium Ds by a good margin in many of the benchmarks. At least it's comforting for users who still have that generation of Athlon X2s.

The big question is how much more efficient the code has become with experience and whether or not they were also counting the VMX units in their calculations.
 
However, I have noticed that APEX PhysX on the CPU is actually no problem if one bug is fixed, a bug occurring under certain conditions. The game can stack up to 10k individual, advanced, physicalised persistent particles. The problem is that they don't get culled out, and it seems the physics stays constantly active even when the particles aren't being affected by any force.

That's a pretty big performance issue. After what we've learned about PhysX CPU performance this year though, maybe that shouldn't be a surprise!

Back before Havok was bought by Intel we were supposed to be getting GPU physics on DX9-class hardware. Do any 360 titles use Xenos for physics, and if so, is it for anything other than simple particle systems?

And no PC gamer has to do that either, so what exactly is your point?

The point of my post was that the same hardware, at the same speed, with the same driver revisions, can't reliably provide the same gaming performance. The implied point (I didn't think I needed to spell it out) was that picking a piece of hardware to compare Xenos gaming performance against is even less straightforward than one might think.

The rest was me just having a bit of fun with the fact that I'd spent three days trying to get to the bottom of performance variations across three OS installs. I don't see how it led to this:

Even with very middling hardware (a 9600 GT and any Core 2 Duo, say) Mafia 2 will run significantly better than the 360 release no matter what OS you run it under.

This has nothing to do with the topic at hand. It has nothing to do with anyone else's off-topic comments, including mine. It would do all of us a favour (especially other PC gamers who post in the console forum) if you could reduce the volume of your PC advocacy a bit.
 
That's a pretty big performance issue. After what we've learned about PhysX CPU performance this year though, maybe that shouldn't be a surprise!

Back before Havok was bought by Intel we were supposed to be getting GPU physics on DX9-class hardware. Do any 360 titles use Xenos for physics, and if so, is it for anything other than simple particle systems?

There certainly is more room for optimisation regarding PhysX on the PC (reading about how it runs x87 code and doesn't use the SSE2 extensions, or better, that have been common in CPUs for many years).
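
Just to illustrate the kind of headroom that leaves on the table, here is a rough sketch (illustrative only, not PhysX's actual source) of the same position update written as plain scalar code, which an x87 build handles one float at a time, and as 4-wide SSE intrinsics:

```cpp
#include <xmmintrin.h> // SSE intrinsics, supported by every CPU from the last decade

// Scalar version: one float per operation. Built as x87, this is roughly what
// the CPU PhysX path is reported to fall back to.
void integrateScalar(float* pos, const float* vel, int count, float dt) {
    for (int i = 0; i < count; ++i)
        pos[i] += vel[i] * dt;
}

// SSE version: four floats per instruction. To keep the sketch short, count is
// assumed to be a multiple of 4 and the arrays 16-byte aligned.
void integrateSSE(float* pos, const float* vel, int count, float dt) {
    const __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < count; i += 4) {
        __m128 p = _mm_load_ps(pos + i);
        __m128 v = _mm_load_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));
        _mm_store_ps(pos + i, p);
    }
}
```

The compiler also needs the right flags (/arch:SSE2 on MSVC, -msse2 on GCC) before it stops emitting x87 for ordinary scalar float math, which is presumably part of the story with the shipped binaries.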

About Havok: yes, I do remember them showcasing GPU physics on an ATI X1900 GPU back in 2006 (GPU cloth physics). One would expect Xenos to have the hardware to do it too. However, at what penalty either of them would do it is another question.

GPU: R580 (x1900xt)
Core Clock: 625 MHz
Memory Clock: 725 MHz (1450 DDR)
Memory Bandwidth: 46.4 GB/sec
Shader Operations: 30000 MOperations/sec
Pixel Fill Rate: 10000 MPixels/sec
Texture Fill Rate: 10000 MTexels/sec
Vertex Operations: 1250 MVertices/sec

Framebuffer: 256 / 512 MB
Memory Type: GDDR3
Memory Bus Type: 32x8 (256 bit)
DirectX Compliance: 9.0c
OpenGL Compliance: 2.0
PS/VS Version: 3.0/3.0
Process: 90 nm
Fragment Pipelines: 48
Vertex Pipelines: 8
Texture Units: 16
Raster Operators: 16

Feature-wise (as in capabilities, not throughput/performance/unit counts), it doesn't seem to have anything more than Xenos. So Xenos would most likely be able to do GPU-accelerated physics too, but the remaining question is the performance cost, and whether it is really worth it versus just using simpler physics on the CPU and leaving the GPU to spend its cycles on graphical tasks, in the end making a prettier/better game.

Anyway, to date I've yet to see a game using ATI GPUs for physics despite them being more than capable. One could ask the same about the tessellator in Xenos and the ATI HD 3xxx or better, which rarely if ever gets used.
 
Anyway, to date I've yet to see a game using ATI GPUs for physics despite them being more than capable. One could ask the same about the tessellator in Xenos and the ATI HD 3xxx or better, which rarely if ever gets used.

Or you just don't hear about it because it's not quite what you expect and thus "not-news" *cough*

*flees*
 
@function
Thanks for sharing those results, and for interjecting some reason into the discussion. It must have been a PITA to compile all that.
 
Pretty hilarious, yet ironic, seeing 2.4 GHz Athlon X2s beating 3.2 GHz Pentium Ds by a good margin in many of the benchmarks. At least it's comforting for users who still have that generation of Athlon X2s.

That's nothing compared to seeing a 13 watt dual core 1.66 GHz Atom CPU sometimes slap around a 90W 3.2 GHz P4 microfurnace. ;)
http://www.tomshardware.com/reviews/atom-d510-pentium-4-nettop,2649.html

The 360 CPU has some things in common with Atom, actually. Too bad the Atom can't get anywhere near 3.0 GHz.
 
According to Capcom, the vertex performance can match an 8800, but that report was made three years ago; who knows if, or what kind of, performance gains they have made with optimization since then.

R600's vertex performance is quite a bit better than G80's too. G80's low core clock also directly impacts its triangle setup rate. But neither of these aspects really hurts it in actual games compared to any of the competition.
http://ixbtlabs.com/articles2/video/r600-part2.html
 
That's nothing compared to seeing a 13 watt dual core 1.66 GHz Atom CPU sometimes slap around a 90W 3.2 GHz P4 microfurnace. ;)
http://www.tomshardware.com/reviews/atom-d510-pentium-4-nettop,2649.html

The 360 CPU has some things in common with Atom, actually. Too bad the Atom can't get anywhere near 3.0 GHz.

Pretty interesting. I'd like to see an AMD Turion Neo K665 (2 MB L2, 1.7 GHz) go up against those. It would probably whip them pretty badly, though that chip and platform do use a bit more power to do so. Considering the experiences with the PS3's and 360's heat output, is it plausible that MS and Sony will retain their 3.2 GHz clock speeds in the next round of systems if they want to keep backwards compatibility? Thinking about it, something like a six-or-more-core PPC successor to Xenon that normally operates at lower speeds for Xbox 720/whatever titles (the 2-2.5 GHz region) and ramps three cores up to 3.2 GHz for running 360 games could make sense, so MS lessens the chances of another system affected by RRODs. My assumption would be that a cooling system made to handle six/eight cores all running at 2.0 GHz or so could easily handle three at 3.2 GHz with the rest idle, in a scheme similar to Intel's Turbo Boost.
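
To make the clocking idea concrete, here's a toy sketch (purely hypothetical, nothing like real console firmware) of picking per-core clock targets depending on whether a legacy title is running:

```cpp
#include <vector>

enum class TitleMode { NextGen, Xbox360Compat };

// Toy model of the idea: many cores at a modest clock for new titles, or three
// cores pushed to Xenon's 3.2 GHz (with the rest parked) for 360 games, so the
// peak thermal load stays roughly the same in both modes.
std::vector<int> coreClocksMHz(int coreCount, TitleMode mode) {
    std::vector<int> clocks(coreCount, 0); // 0 = parked / power-gated
    if (mode == TitleMode::NextGen) {
        for (int& c : clocks) c = 2250;    // e.g. 2.25 GHz on every core
    } else {
        for (int i = 0; i < 3 && i < coreCount; ++i)
            clocks[i] = 3200;              // match Xenon's 3.2 GHz on three cores
    }
    return clocks;
}
```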

And on G80's low clock vs. ATI GPUs: I remember experiencing the difference between a G94M (laptop GeForce 9800M GS) and an RV730 (desktop Radeon 4670). Sure, the Radeon was a desktop model with a much higher core clock (530 MHz vs. 750 MHz), but the 9800M had effectively twice the memory bandwidth and still the Radeon 4670 tore it to pieces. Both GPUs outperformed the 8800 GTS 320 MB I had a while before those two in some titles like Crysis, where more VRAM came as a benefit, the 9800M GS having 512 MB and the 4670 having 1 GB. Man, that 4670 was one hell of a bargain with kick-ass performance to boot. The 5570 1 GB I just replaced in my secondary desktop with a GTX 460 1 GB was a nice performer too, and overclocked better than the 4670.
 
The architecture is different, but from a throughput point of view it should be between Cedar and Redwood. Llano will definitely be faster than the Xbox 360.
 
I wish the hacking groups would really get to work on the 360 so we could have homebrew folks writing interesting game ports for it. :D That Wii thread with normal-mapped Quake was revealing. Would love to see something like that happen with the 360.

XBMC on 360 would also be very nice, for that matter.
 
The architecture is different, but from a throughput point of view it should be between Cedar and Redwood. Llano will definitely be faster than the Xbox 360.

Well, looking at Mafia 2, Redwood actually achieves better performance than the 360 by a fair margin. With everything turned on except the physics, my laptop does about 30 fps on average, and that at a higher resolution and with filtering too (no AA, as it chops the performance in half).
 
Well, looking at Mafia 2, Redwood actually achieves better performance than the 360 by a fair margin. With everything turned on except the physics, my laptop does about 30 fps on average, and that at a higher resolution and with filtering too (no AA, as it chops the performance in half).

You should find most multiplatform games perform quite a bit better on a Redwood-based laptop. I know for a fact that popular engines like UE3, Source and MT Framework certainly do.

It'll be really interesting to see how Llano stacks up in similar tests; we know the actual chip itself is better, but will the restricted bandwidth cripple it to the point where it performs below the 360 in popular multiplatform engines?
 