Xenos: How fast is it?

I think ATi ran the Ruby demo on Xenos before and it was struggling to maintain 30 fps at 720p, but their X1800 can run it at a higher resolution and higher frame rate. But they were probably using a much more powerful CPU with the X1800. Can't find the link because it's very old.
 
Why are the old pre-unified-shader cards the subject of comparison? Why not R600 cards? Yes, the 2900 would be faster, but it would be unified vs. unified. What would you all say: is Xenos effectively 70% the speed of the 2900 if we were to leave the eDRAM and 720p output out of this?
 
All I know for sure is that my 8800GTX blows the 360 out of the water in the cross-platform games I've played. Mass Effect runs several times faster at the same resolution. Oblivion on 360 is pretty nasty. This is about the best comparison that I can come up with. Comparing spec sheets isn't going to get us anywhere.

Xenos is both benefited and gimped by its eDRAM, which isn't quite big enough, yet offers gobs of bandwidth that helps a lot. I don't play any games at 1280x720 or lower, though. 1024x600? No way.

There are too many variables affecting the performance. The 360 CPU is obviously a question mark. Surely it's no match for a Core 2 or Phenom II most of the time.
 
All I know for sure is that my 8800GTX blows the 360 out of the water in the cross-platform games I've played. Mass Effect runs several times faster at the same resolution. Oblivion on 360 is pretty nasty. This is about the best comparison that I can come up with. Comparing spec sheets isn't going to get us anywhere.

Xenos is both benefited and gimped by its eDRAM, which isn't quite big enough, yet offers gobs of bandwidth that helps a lot. I don't play any games at 1280x720 or lower, though. 1024x600? No way.

There are too many variables affecting the performance. The 360 CPU is obviously a question mark. Surely it's no match for a Core 2 or Phenom II most of the time.

I know the Xenon's MIPS rate is about 19,000 (6.0 MIPS/MHz at 3.2 GHz), roughly just below an Athlon 64 X2's general rate of 7.3 MIPS/MHz. So basically, an old K8 Athlon X2 at 3.0 GHz will be roughly equal, but of course MIPS isn't the end-all determining factor.
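The arithmetic behind that comparison can be sketched out. Note these MIPS/MHz figures are the estimates quoted above, not official specs:

```python
# Rough MIPS arithmetic from the figures quoted above (the poster's
# estimates, not official specs).
xenon_clock_mhz = 3200       # Xbox 360 Xenon CPU: 3.2 GHz
xenon_mips_per_mhz = 6.0     # claimed whole-chip rate (3 in-order cores)
k8_mips_per_mhz = 7.3        # claimed Athlon 64 X2 whole-chip rate

xenon_mips = xenon_clock_mhz * xenon_mips_per_mhz
print(f"Xenon: ~{xenon_mips:,.0f} MIPS")                    # ~19,200 MIPS

# Clock at which an Athlon 64 X2 would post the same MIPS rate:
k8_equiv_mhz = xenon_mips / k8_mips_per_mhz
print(f"Equivalent K8 X2 clock: ~{k8_equiv_mhz:.0f} MHz")   # ~2630 MHz
```

By this crude metric a ~2.6-3.0 GHz X2 lands in the same ballpark, which matches the rough equivalence claimed above; it of course ignores everything MIPS ignores (in-order vs. out-of-order, cache, SIMD).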
 
I think ATi ran the Ruby demo on Xenos before and it was struggling to maintain 30 fps at 720p, but their X1800 can run it at a higher resolution and higher frame rate. But they were probably using a much more powerful CPU with the X1800. Can't find the link because it's very old.

Actually, the Ruby demo was a quick port and was not optimized for Xenos.
 
Comparing raw specs to a desktop GPU is pointless, since a desktop GPU and Xenos run in entirely different environments.
 
Comparing just individual parts is pointless too, as the division of labour between a console and a PC is different. Furthermore, you have to factor in 'when' the comparison is taking place. When the Xbox 360 was first released it performed at 1.0X; where is it now? I can't say, but it could be 1.5X or 2.0X. So even if you had quotes saying that each CPU core was like an Athlon X2 core, is that still the case with all the improvements since?
 
I think the only comparison we can really do is look at how well multiplatform games run on PC vs. the 360. One could conceivably build a PC that would match the 360's practical performance in a game. Just swap out parts until you've found a near match at the same resolution and similar detail settings and there you go.

My guess would be something like a GeForce 8600 GT / Radeon 2600 XT with a mid-range Athlon64 X2 or low-range Core 2 Duo.

I know the Xenon's MIPS rate is about 19,000 (6.0 MIPS/MHz at 3.2 GHz), roughly just below an Athlon 64 X2's general rate of 7.3 MIPS/MHz. So basically, an old K8 Athlon X2 at 3.0 GHz will be roughly equal, but of course MIPS isn't the end-all determining factor.
Xenon is an in-order triple core with what has to be insufficient L2 cache to go around. I'm not convinced it holds up that well to an A64 X2. Of course, the A64 X2 line spans 1.6-3.2 GHz and Xenon probably fits in there somewhere. But I'm sure that if you have some code that just happens to work really well on Xenon, it will be a speed demon due to its clock speed, three cores, and SIMD capabilities.
 
I think the only comparison we can really do is look at how well multiplatform games run on PC vs. the 360. One could conceivably build a PC that would match the 360's practical performance in a game. Just swap out parts until you've found a near match at the same resolution and similar detail settings and there you go.

Or you could just look at how 360 compares to the PS3, sporting a near stock 7800GTX.

Edit: For the video card anyway, which is what the OP asked.
 
Depends on how fast you can throw it away. For example, if you just stand still it's not gonna be that fast; if you run it's a bit better; but best of all would be to sit in a car doing at least 150 miles per hour and then throw it. Now that would be really fast.
 
Depends on how fast you can throw it away. For example, if you just stand still it's not gonna be that fast; if you run it's a bit better; but best of all would be to sit in a car doing at least 150 miles per hour and then throw it. Now that would be really fast.

:LOL::LOL::LOL:
 
Depends on how fast you can throw it away. For example, if you just stand still it's not gonna be that fast; if you run it's a bit better; but best of all would be to sit in a car doing at least 150 miles per hour and then throw it. Now that would be really fast.

But what if I used a cannon to shoot it away?
 
According to Capcom, the vertex performance can match an 8800, but that report was made three years ago; who knows what kind of performance gains they have made with optimization since then.

Or you could just look at how 360 compares to the PS3, sporting a near stock 7800GTX.

Edit: For the video card anyway, which is what the OP asked.

What's the point in comparing the 360's GPU to the PS3's GPU since we all know the Cell helps out a good amount?

Depends on how fast you can throw it away. For example, if you just stand still it's not gonna be that fast; if you run it's a bit better; but best of all would be to sit in a car doing at least 150 miles per hour and then throw it. Now that would be really fast.

:LOL:
 
:LOL::LOL: Laa-Yosh is squarely to blame.
In all honesty I thought of a similar reply myself, but I thought we should at least try and let this topic have a go!

As for the OP, faster in what way? Different engines get different performance. Unified vs. discrete shaders throws in all sorts of other comparison complexities. Then you have 'in a PC' versus 'in a console', where the PC part takes a knock to efficiency, while the console part has the option of major optimisations squeezing performance from it. Unless you can specify a particular task, an overall 'average performance' is impossible to nail down.

The best anyone can do is look at raw specs, which don't really give much idea of real-world performance, or compare lots and lots of similar titles across platforms, and then say, "in this game, on this engine, this GPU managed this framerate at that resolution." So one could compare Gears and Oblivion on 360 to Gears and Oblivion on PC running at 720p and see what the framerate difference is. But even that is inaccurate, as the CPU, RAM, and code differences all have an impact on what the GPU can do.
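The cross-title comparison described above amounts to averaging framerate ratios over matched games. A minimal sketch, where the titles and framerates are made-up placeholders rather than measurements:

```python
# Average the PC/360 framerate ratio over matched titles at the same
# resolution and similar detail settings. All figures below are
# hypothetical placeholders, not benchmark results.
samples = {
    "Game A, 720p": {"pc_fps": 60.0, "x360_fps": 30.0},
    "Game B, 720p": {"pc_fps": 45.0, "x360_fps": 30.0},
}

ratios = [s["pc_fps"] / s["x360_fps"] for s in samples.values()]
avg_ratio = sum(ratios) / len(ratios)
print(f"PC ran ~{avg_ratio:.2f}x the 360's framerate in this sample")
```

As the post notes, this measures the whole platform, not the GPU in isolation: CPU, RAM, and code differences are all folded into the ratio.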
 
I think the only comparison we can really do is look at how well multiplatform games run on PC vs. the 360. One could conceivably build a PC that would match the 360's practical performance in a game. Just swap out parts until you've found a near match at the same resolution and similar detail settings and there you go.

Even here you can't make a valid comparison, as any game running on the console is going to be limited to ~512 MB (or less) of combined video and system memory, with the added crutch of having to be designed to run directly from relatively slow optical media. Whereas your 8800 GTX, for example, is going to sport 768 MB of video memory plus anywhere from 1-4 GB of system memory for a computer of that timeframe, likely with at least a 7200 RPM desktop HDD. Granted, the OS is going to eat some of that, but you're still going to have significantly more resources available. Even a system with something as low-end as an 8600 GT is going to have much more memory available.

And that's just one thing out of many that makes a direct comparison of just the GPU rather difficult to make.

Regards,
SB
 
Or you could just look at how 360 compares to the PS3, sporting a near stock 7800GTX.

Edit: For the video card anyway, which is what the OP asked.

RSX has only a bit more than half the pixel fillrate and memory bandwidth of a 7800 GTX. Otherwise it is similar to it, but with an extra 120 MHz of core clock: the same clock as the limited-edition 7800 GTX 512MB, but lower than a 7900 GTX.
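The "bit more than half" figures can be checked with back-of-envelope math. The ROP counts, bus widths, and clocks below are commonly cited numbers (RSX taken at 550 MHz, per the post above), so treat them as approximate:

```python
# Pixel fillrate = ROPs * core clock; bandwidth = bus width * data rate.
def fillrate_gpix_s(rops, core_mhz):
    return rops * core_mhz / 1000.0

def bandwidth_gb_s(bus_bits, data_rate_mtps):
    return bus_bits / 8 * data_rate_mtps / 1000.0

gtx_fill = fillrate_gpix_s(16, 430)   # 7800 GTX: 16 ROPs @ 430 MHz ~ 6.9
rsx_fill = fillrate_gpix_s(8, 550)    # RSX: 8 ROPs @ 550 MHz ~ 4.4
gtx_bw = bandwidth_gb_s(256, 1200)    # 256-bit GDDR3 @ 600 MHz: 38.4 GB/s
rsx_bw = bandwidth_gb_s(128, 1300)    # 128-bit GDDR3 @ 650 MHz: 20.8 GB/s

print(f"fillrate ratio:  {rsx_fill / gtx_fill:.2f}")   # ~0.64
print(f"bandwidth ratio: {rsx_bw / gtx_bw:.2f}")       # ~0.54
```

Both ratios land in the 0.5-0.65 range, i.e. "a bit more than half" as stated. (This counts only RSX's local GDDR3; it says nothing about shader throughput.)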

What's interesting to me here is that we know G7x has a hard time with complex pixel shaders compared to even X1800. We also know that NVIDIA had to cheat like crazy with their texture filtering to keep up. There have been a few ports that have shown lesser image quality on PS3 compared to 360. But overall PS3 and 360 don't show hugely different results so the two consoles seem to be fairly equal in their capabilities.
 