Xenos - RSX - What was left out?

Shifty Geezer said:
Vysez said:
pc999 said:
Can you please link me to those docs (not the patents, since those may or may not actually be used)?
Thanks in advance.
When I say the docs have been leaked, I mean that people (read: Xe developers) sent the docs to their friends, who in turn sent them to other friends, etc...
I've seen an official MS comparison of Xenon to the Alpha kits, with recommendations on what differences devs can expect and where Xenon is stronger and weaker. Makes for interesting reading, but even though I ain't under NDA I don't think it's fair to talk about it. Though I can't see much point in the NDAs. I guess the marketing departments don't want the truth of these machines to come out, as that would undermine their 'unstoppable uber-computing' propaganda with 'powerful but with limits' realities.

Oh. You've seen that too... I'm still trying to understand if it's real or not. Especially seeing where it came from COUGH COUGH...
 
Vysez said:
pc999 said:
Can you please link me to those docs (not the patents, since those may or may not actually be used)?

Thanks in advance.
I don't know if you can find them on the web.
That is, if there's even a site that offers them for download.

When I say the docs have been leaked, I mean that people (read: Xe developers) sent the docs to their friends, who in turn sent them to other friends, etc...

OK, I did not know that. I need to get "better" friends :LOL:.

Titanio said:
Dot products are flops. Some have argued that "flops will be free" next-gen, but that's nothing to do with the hardware per se - just that the software won't ever be using all the flops the hardware has to offer. Others would disagree completely.

Yes, but a dot product instruction cannot do a flop, right? And if they have a dot instruction, then that and the flops come from different places. Please correct me if I am wrong.

All VMX units have 128-bit registers; Xenon's cores simply have 128 of them compared to the usual 32.

But why? I guess it is for some specific work/reason...
 
pc999 said:
Yes, but a dot product instruction cannot do a flop, right? And if they have a dot instruction, then that and the flops come from different places. Please correct me if I am wrong.

The dot product instruction is executing flops; they're not coming from different places AFAIK. It's basically encapsulating a sequence of flops... it's more about development ease than performance, I think.
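For illustration only (plain C, nothing taken from any leaked doc): a 4-wide dot product decomposes into ordinary multiplies and adds, so an instruction that does the whole thing is still "doing flops" - it just packages them into one opcode.

float dot4(const float a[4], const float b[4])
{
    /* 4 multiplies + 3 additions = 7 floating-point operations,
       whether issued one by one or bundled into a dot-product opcode */
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}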

pc999 said:
But why? I guess it is for some specific work/reason...

Possibly to better accommodate multi-threading - switching from one thread to another on the VMX unit without dumping data from the registers. I'm sure there are other reasons, but that's just a guess.
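As a rough sketch of why a big register file helps either way (AltiVec-style intrinsics, made-up loop - not actual Xenon code): an aggressively unrolled multiply-accumulate keeps a lot of vectors live in registers at once, and with 128 of them there's also room for each hardware thread to keep its own working set without spilling to memory.

#include <altivec.h>

/* Hypothetical example: scale an array of vectors in place.
   The unrolled body keeps several independent operations in flight,
   and all of their operands stay resident in VMX registers. */
void scale_accumulate(vector float *dst, const vector float *src,
                      vector float scale, int n)   /* n is a multiple of 4 */
{
    for (int i = 0; i < n; i += 4) {
        dst[i + 0] = vec_madd(src[i + 0], scale, dst[i + 0]);
        dst[i + 1] = vec_madd(src[i + 1], scale, dst[i + 1]);
        dst[i + 2] = vec_madd(src[i + 2], scale, dst[i + 2]);
        dst[i + 3] = vec_madd(src[i + 3], scale, dst[i + 3]);
    }
}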

And yeah, I need better friends too! :p
 
You guys have to spill the beans. You're among friends. :LOL:

All these leaks and rumors are suggesting that the CPU isn't very good.
 
I will have to reply with a 'ditto'. At least a PM of some sort...

or...

*thinks of Phoebe Cates in "Fast Times at Ridgemont High"*

...no, that's not it...
 
london-boy said:
Oh. You've seen that too... I'm still trying to understand if it's real or not. Especially seeing where it came from COUGH COUGH...
It's legit.
 
ralexand said:
You guys have to spill the beans. You're among friends. :LOL:

All these leaks and rumors are suggesting that the CPU isn't very good.

Well, I've seen the leaked document too, and that's not what the leak indicates. If you want to believe something, then believe what ERP and DeanoC say. ;)
 
Darnit, seems like every second person has seen it now. Just quietly put the thing up somewhere! :p

From what we've been hearing up till now, the situation seems to be:

1) The final cores are cut down with regard to "general purpose" performance and other aspects relative to the G5s in the dev kits, but there are now 3 of them, with improved VMX and clockspeed.

2) For some devs, their code relied more on the areas that are now weaker than on the areas that are now stronger - hence complaints about performance, i.e. in terms of CPU usage some titles are running worse on beta than on alpha (?)

?

So my question is: what has MS been doing to ease that transition? What did they provide before the arrival of beta kits to guide developers in maintaining performance from alpha to beta? Switching your type of CPU - with the resultant side effects there would seem to be - with maybe 3 or 4 months left for launch title development doesn't seem trivial.

Is it a minority of devs who're in this situation, or..?
 
I have seen documents that basically back up the gist of what has been said thus far in rumour and conjecture. They suggest a very capable system that isn't as good as existing tech in some areas (like GP), and which won't happily accommodate conventional programming techniques, with the document's author(s) saying they expect a degree of cache management might be needed to max out XeCPU performance.
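To give a flavour of what "cache management" tends to mean in practice - purely a sketch in plain C using GCC's __builtin_prefetch, not anything lifted from the docs - the idea is to start pulling data into cache a little ahead of the loop that consumes it, so the arithmetic isn't left stalled waiting on memory.

/* Hypothetical streaming loop: integrate positions from velocities.
   The prefetch hints ask the hardware to start fetching data some
   distance ahead of where the loop is currently working. */
void integrate(float *pos, const float *vel, float dt, int n)
{
    for (int i = 0; i < n; i++) {
        __builtin_prefetch(&pos[i + 32], 1, 0);  /* 1 = prefetch for writing */
        __builtin_prefetch(&vel[i + 32], 0, 0);  /* 0 = prefetch for reading */
        pos[i] += vel[i] * dt;
    }
}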

The take-home point for me was in regard to MS's statements.

1) They said Alpha kits had 1/3rd the power of final hardware. They never said in what way. In GP performance the alpha kits have MORE power than the final hardware.

2) They say GP is important for games, but their CPU wasn't designed for GP work; it's a balanced general-performance + FP streaming processor that needs to be programmed differently.

If word gets out (what's the betting Beyond3D gets mentioned as a source?!) that the Alpha kits have better general purpose computing power than XB360, and yet the demos weren't really too hot - which was put down to inferior power in alpha hardware - where does that leave MS's statements regarding the importance of GP?

In reality the XeCPU is very powerful but needs a different software architecture to make use of it. In this respect it can perhaps be considered 3x the power of Alpha (though if I remember rightly the argument for 3x the power was something idiotic like 3 cores, 6 threads versus 2 cores, 2 threads). But if GP performance is where it's at, XB360 should come out worse than the Alpha kits. Which goes to show GP isn't that important (unless MS and STI screwed up in believing things possible on streaming architectures) and MS only dug that up as a number to contest Cell with.
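Just to illustrate what "a different software architecture" can look like - a throwaway sketch using standard pthreads, not anything from a real 360 codebase - the shift is from one fat general-purpose loop to work carved up across the six hardware threads.

#include <pthread.h>

#define NUM_HW_THREADS 6   /* 3 cores x 2 hardware threads */

typedef struct { float *data; int begin, end; } slice_t;

static void *worker(void *arg)
{
    slice_t *s = (slice_t *)arg;
    for (int i = s->begin; i < s->end; i++)
        s->data[i] *= 2.0f;            /* stand-in for real per-element work */
    return NULL;
}

void process_parallel(float *data, int n)
{
    pthread_t tid[NUM_HW_THREADS];
    slice_t   job[NUM_HW_THREADS];
    int chunk = n / NUM_HW_THREADS;

    /* hand each hardware thread its own contiguous slice of the data */
    for (int t = 0; t < NUM_HW_THREADS; t++) {
        job[t].data  = data;
        job[t].begin = t * chunk;
        job[t].end   = (t == NUM_HW_THREADS - 1) ? n : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, worker, &job[t]);
    }
    for (int t = 0; t < NUM_HW_THREADS; t++)
        pthread_join(tid[t], NULL);
}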

The details of the document I won't release because it's supposed to be under NDA ;) It's just a side-by-side comparison of different aspects of the alpha and final kits - cores, threads, cache sizes, latencies, registers - showing where there was an improvement in final hardware and where final hardware wasn't as capable as the alpha. And remember, the Alpha kit was dual G5s, which are far too costly to appear in a console. If IBM can create a chip far cheaper than G5s that outperforms G5s at GP work, why waste time with G5s?!
 
Shifty Geezer said:
2) They say GP is important for games, but their CPU wasn't designed for GP work; it's a balanced general-performance + FP streaming processor that needs to be programmed differently.

If word gets out (what's the betting Beyond3D gets mentioned as a source?!) that the Alpha kits have better general purpose computing power than XB360, and yet the demos weren't really too hot - which was put down to inferior power in alpha hardware - where does that leave MS's statements regarding the importance of GP?

I was going to make this point... remarkably funny that they made this a key point of contention between themselves and Sony, when they appear to be making many of the same sacrifices with the final hardware as Sony is, and designing with many of the same priorities in mind. Not good to point that out as your strength when you're not very strong in it anyway (at the very least, the argument rings very, very hollow now).

But anyway, is there any indication of how MS was managing the transition to minimise "surprises" on final hardware? If a developer takes a launch title from Alpha to Beta and CPU performance for their current game code actually drops, where does that leave them with just a few months to go? I guess if your framerate is tied to the GPU in early games and not the CPU it may be less of a problem, but it still seems a little risky.
 
Shifty Geezer said:
In reality the XeCPU is very powerful but needs a different software architecture to make use of it. In this respect it can be perhaps be considered 3x the power of Alpha (though if I remember rightly the argument for 3x the power was something idiotic like 3 cores, 6 threads versus 2 cores, 2 threads).
Isn't the 3x mostly from Xenos?
In the GDC session the MS guys said that the CPU would be the bottleneck in the forthcoming years, so I assume that if the CPU can keep up with the GPU one way or the other, it's powerful enough for them.
 
Well, MS wasn't exactly lying with the GP stuff... Sure, they may have less on Beta than on Alpha and have much more FP on Beta than on Alpha, but they never said that they had more GP on Beta than on Alpha.

There has to be a tradeoff; ultimately, Xenon still has something like 3x more GP than the PS3, and that's where they are coming from.

Sure, it's not great, but it's still much better than what the PS3 has, and it still has big FP "performance"/"numbers". It's much more balanced IMO.
 
Possibly. Maybe it was just forum-talk that explained 3x power as 3x hardware threads. Did MS ever even qualify where the performance difference was or just provide a nice round number?
 