Any news about RSX?

This is the story as we know it so far.

Developers never really got a downgrade, because their development kits were constantly getting better and better. Apparently, the kits they have so far (which may or may not be final) have an RSX clocked at 500MHz and GDDR3 clocked at 650MHz. This is not really a downgrade for them, because the hardware was never what was announced at E3 2005 in the first place. Additionally, it has been hinted/stated/rumored (whatever) that on the official Sony development site the current (as of a couple of weeks ago) target for the RSX was 500MHz for the core and 650MHz for the memory.

Now, in one way this is a downgrade, because at E3 2005 we were told the RSX would clock at 550MHz and the GDDR3 at 700MHz. Obviously, we all hope that if an additional development kit is shipped, Sony will reach their original target. But it's not really a disaster, because Sony's first-generation PS3 games are generally looking very, very good: on par with or better than current or soon-to-be-released second-gen 360 games.

Also, it's not really so bad, because if the above is an indication of the power of the 360, and Alan Wake can be done on Xbox 360 hardware with the same level of graphical detail as the PC version (as is being claimed), then developers can go even beyond that on the PS3!

Yes, I hope that the original specs are met with the final hardware. But the latest pictures of Alan Wake on the 360 (the high-resolution images floating around are of the 360 version) prove that we have a lot to look forward to, even with the current, technically "downgraded" PS3 specifications.

Of course, the fact that we have recently learned about the 96KB per-quad texture cache, the 63 vertex lighting and post-transform cache, fast vector normalize, and the extra texture-lookup logic (to help get data from the XDR) is nice too.

So actually, in some ways, the PS3's RSX has been "upgraded" from the common PC part in ways that we did not know about at E3 of 2005 when the original specs were released.

Basically, I think everything balances out. If the final specs are met then that's even better.
 
[quote]Also, it's not really so bad because if the above is an indication of the power of the 360 and Alan Wake can be done on XBox 360 hardware with the same level of graphical detail as the PC version (as is being claimed) then developers can even go beyond that on the PS3!

But the latest pictures of Alan Wake on the 360 (the images floating around that are high resolution are of the 360 version) prove that we have a lot to look forward to with even the current technically "downgraded" PS3 specifications.[/quote]
How do you derive PS3 performance from Alan Wake shots on a $2000 PC? :???:
 
The developer of Alan Wake said in an interview I just read that the 360 and PC versions of the game should look identical.

Basically, the fancy demo they had yesterday was being run on a quad-core CPU (probably about the same processing power as the 360's CPU) and a single 7900GTX graphics card, according to the interview.

I think it's reasonable to assume that the 360, when fully optimized for, could compete with that gaming rig, and that the PS3, with the power of Cell (estimated on this very forum to have about twice the computing power of the 360), could probably match it as well.

But once again, the point is that the developer said both the 360 version and the PC version would look the same.
 
A modern quad-core PC CPU would put both Xenon and Cell to shame in code not painfully hand optimized to take advantage of them, and possibly even then for game code. It's not like you can make a game about streaming decompression, after all.

And how do you conclude that if the X360 can do it, then obviously the PS3 can do even better? From all the information we have thus far, it is impossible to draw such conclusions. You simply can't do it on purely technical bullet points. The GPUs seem to be reasonably well matched, each with its own strengths and weaknesses. The CPUs even seem to be reasonably matched for running game code, at least for several years until we see some rather unorthodox coding. Moreover, the system architectures, while different, don't seem to imply any obvious and significant technical advantages (NUMA vs. UMA, for example). About the only thing that seems clear is that, if desired, it is possible to fit a longer-playing game on a single disc for the PS3 than for the X360, assuming identical cut scene footage is used, as that takes up a heck of a lot of space and contributes little to the playing time.

Best I can tell, and this has been said a zillion times over the past year, the talent, creativity, and determination (not to mention the time and resources given to them) of the respective developers will have a far, far, far greater impact on final gameplay and graphical quality than any technical differences.
 
[quote]A modern quad-core PC CPU would put both Xenon and Cell to shame in code not painfully hand optimized to take advantage of them, and possibly even then for game code. It's not like you can make a game about streaming decompression, after all.

And how do you conclude that if the X360 can do it than obviously the PS3 can do even better? From all the information we have thus far, it is impossible to make such conclusions. You simply can't do it on purely technical bullet points. The GPU's seem to be reasonably well matched, each with their own strengths and weaknesses. The CPU's even seem to be reasonably matched for running game code, at least for several years until we see some rather unorthodox coding. Morever, the system architectures, while different, don't seem to imply any obvious and significant technical advantages (NUMA vs. UMA, for example). About the only thing that seems clear is that if desired, it is possible to fit a longer playing game on a single disc for PS3 than for X360, assuming identical cut sceen footage is used, as that takes up a heck of a lot of space and contributes little to the playing time.

Best I can tell, and this has been said a zillion times over the past year, is that the talent, creativity, and determination (not to mention time and resources given to...) of the respective developers will have a far, far, far greater impact on final gameplay and graphical quality than any technical differences.[/quote]


Thanks! *applause* This post should be a sticky!
 
[quote]A modern quad-core PC CPU would put both Xenon and Cell to shame in code not painfully hand optimized to take advantage of them, and possibly even then for game code. It's not like you can make a game about streaming decompression, after all.[/quote]

The Folding@Home people seem quite bullish on Cell vs. multi-core CPUs at this point in time. Sure, it's a great app for them, but the "painfully hand optimized" bit strikes me as context-sensitive. The air we breathe is so inundated with x86-ness that most people don't even think about just how "painfully hand optimized" most things are for x86, but point fingers at other models as being difficult.
 
Does anyone have the gigaflops for this new quad-core chip from Intel? I know determining performance is not as simple as measuring maximum gigaflops, but it would be neat to see a comparison.
 
[quote]Does anyone have the gigaflops for this new Quad core chip by Intel? I know determining performance is not as simple as measuring maximum gigaflops, but it would be neat to see a comparison.[/quote]

I think that was about 50 GFLOPS, which, if true, is quite a bit up from the best Core 2 Duo ratings I've seen, at around 16 GFLOPS.
 
If that is true and it's around 50 gigaflops, then Cell still has it beat, because it's rated (even with one SPE disabled) at around 150 gigaflops or more. Thanks for the information!
 
[quote]Does anyone have the gigaflops for this new Quad core chip by Intel? I know determining performance is not as simple as measuring maximum gigaflops, but it would be neat to see a comparison.[/quote]

Reading further into what you're saying, I would say it would almost be better if you didn't know. A lot of your hardware power/capability judgements seem to be coming from optimized benchmark results and PR statements using numbers and figures so vanilla in context that they could call them whatever they want. That's probably the worst place possible to get info from, especially if that's how you plan on comparing capability.

....and I just saw your above post, which proved my assumption right. You really need to slow down; you're getting pretty far along on your ride on the gullibility train :)
 
[quote]If that is true and it's around 50 gigaflops then the CELL still has it beat because it's rated (even with one SPE disabled) at least around 150 gigaflops or more. Thanks for the information![/quote]

SP FLOPS would be 85.12 GFLOPs.

Not that it's even remotely representative of the comparative performance of the chip against one of the console chips. You may as well rate them on clock speed for all the accuracy it would give you, or the old classic "number of cores" :LOL:
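For reference, that 85.12 number is just peak-rate arithmetic: clock times cores times single-precision flops per cycle per core. Here's a quick sketch, assuming a 2.66GHz quad core doing 8 SP flops/cycle/core (4-wide SSE mul + add) and a 3.2GHz Cell with 7 active SPEs doing 8 SP flops/cycle each (4-wide FMA); these are theoretical ceilings, not measured throughput:

```python
# Peak single-precision FLOPS = clock * cores * flops per cycle per core.
# Theoretical ceilings only; real workloads rarely come close.

def peak_gflops(clock_ghz, cores, flops_per_cycle_per_core):
    return clock_ghz * cores * flops_per_cycle_per_core

# Quad-core Intel chip (assumed 2.66GHz, 4-wide SSE mul + add = 8 SP flops/cycle)
quad_core = peak_gflops(2.66, 4, 8)

# Cell's SPEs in the PS3 (3.2GHz, 7 active SPEs, 4-wide FMA = 8 SP flops/cycle)
cell_spes = peak_gflops(3.2, 7, 8)

print(f"Quad core peak:  {quad_core:.2f} GFLOPS")  # 85.12
print(f"Cell 7-SPE peak: {cell_spes:.1f} GFLOPS")  # 179.2
```

Both figures line up with the numbers quoted in this thread, which just underscores the point: these are paper numbers, and they say nothing about how much of that peak a given workload can actually extract.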
 
[quote]The air we breath is so inundated towards x86ness that most people don't even think about just how "painfully hand optimized" most things are for x86, but point fingers at other models as being difficult.[/quote]
Point well taken, but...
[quote]Sure, it's a great app for them...[/quote]
...is an important qualifier that shouldn't be overlooked in the context of my post. Cell (and Xenon) have theoretical performance that is there for (1) apps that are compatible with that model of parallel tasking and (2) code that has been written (intentionally or not) in a way that can take advantage of the architecture. Out of curiosity, how are the FAH people assuming the performance is there (since I don't know much at all about the folding algorithm): by parallelizing the main algorithm, or by running concurrent algorithms on separate SPEs? If it is the latter, that further supports my point, which in any case is that game code is a far cry from such an algorithm. In the context of what consoles are primarily designed to do, a modern quad-core processor from Intel or AMD would properly trounce either Cell or Xenon unless a game were (yes) painstakingly optimized for them, and even then it would be an uphill battle.
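The two models I'm distinguishing can be sketched in a few lines of Python (purely illustrative; `simulate` is a hypothetical stand-in kernel, not the actual folding code):

```python
from multiprocessing import Pool

def simulate(fragment):
    # Hypothetical stand-in for a number-crunching kernel.
    return sum(x * x for x in fragment)

if __name__ == "__main__":
    data = list(range(100_000))
    with Pool(4) as pool:
        # (1) Parallelizing the main algorithm: one workload, its input
        # split across workers, partial results combined at the end.
        chunks = [data[i::4] for i in range(4)]
        total = sum(pool.map(simulate, chunks))

        # (2) Running concurrent, independent work units (the model
        # distributed-computing clients tend to use): each worker gets a
        # whole job of its own, and nothing needs to be combined.
        jobs = [list(range(j, j + 25_000)) for j in range(0, 100_000, 25_000)]
        results = pool.map(simulate, jobs)
```

Model (2) scales almost for free, but it only fits embarrassingly parallel problems; game code looks much more like (1), where the splitting and recombining is where the pain lives.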
 
[quote]If that is true and it's around 50 gigaflops then the CELL still has it beat because it's rated (even with one SPE disabled) at least around 150 gigaflops or more. Thanks for the information![/quote]

Dude, quoting raw flops numbers and declaring a victor, without caring at all under what circumstances the numbers were generated and by which types of applications that performance can be harnessed, is about as ridiculous as looking at the raw horsepower figures of two "transportation vehicles" and declaring a winner, without knowing anything about the type of vehicle or the type of race being run.

Do you think a locomotive would trounce a Mercedes McLaren at a Formula 1 race? How about a monster truck vs. a Cessna over the English Channel? Perhaps a drag car vs. a Toyota cross-country? Maybe a Lamborghini vs. a Ford F150 in a sled pull? I mean, in all cases the former is CLEARLY more powerful, right?

You should learn to at least try to put numbers into some sort of reasonable context.
 
[quote]Out of curiosity, how are FAH people assuming performance to be there (since I don't know much at all about the folding algorithm)... by parallelizing the main algorithm or by running concurrent algorithms on separate SPE's?[/quote]

They didn't explicitly state their method, but Stanford reports around 100 GFLOPS in folding operations when using Cell. So it's not an "assumption" so much as a statement out of Stanford.
 
This is off topic, but I am confused as to why Xenon and the PS3's Cell were designed the way they were. I can sort of understand the reason for the Cell design, as it is intended for multiple purposes. But, as far as I know, Xenon will only ever be used in a video game console.

Almost everything I've read that wasn't MS PR says that Xenon is pretty much a crap CPU for game code. I don't get why MS settled for this "floating point heavy" CPU design, especially when almost every dev says that floating-point power isn't what they need most. How the hell could this have happened? Does Dean Takahashi's book shed some light on this?
 
Xenon's CPU is supposed to be good at game code. If it's not, then surely there was a failure somewhere along the line, be it schedule-related or design-related. The decision Microsoft made was to sacrifice a bit of single-threaded performance vs. an x86 chip in order to have more cores. They figured developers would be able to parallelize their work and that three cores would yield better performance for games than the alternative would.
 
[quote]Xenon's CPU is supposed to be good at game code. If it's not then surely there was a failure somewhere along the line, be it schedule related or design related. The decision Microsoft made was to sacrifice a bit of single threaded performance vs. a x86 chip in order to have more cores. They figured developers would be able to parallelize their work and that 3 cores would yield better performance for games than would the alternative.[/quote]

Microsoft actually wanted out-of-order execution, though, and didn't get it (IBM somehow couldn't deliver; not sure of the story), according to Takahashi's book.
 
[quote="Bigus Dickus"]It's not like you can make a game about streaming decompression, after all.[/quote]
Cyberia, Megarace, Seventh Guest, and many others would disagree. :p

[quote]Cell (and Xenon) have theoretical performance that is there[/quote]
That's a heck of a step up from the last 15 years of consoles; prior to this gen, we had consoles with CPU power that just sucked (theoretical or not).
 