Is the PS3 on track to deliver full specs?

A 650 MHz GDDR3 frequency (instead of 700 MHz) would mean a ~7% reduction in local bandwidth and a ~2.5% reduction in total RSX bandwidth.

Is the total bandwidth of RSX really 64 GB/sec?

(but how? -> FlexIO ?)
 
After the rumors from this last E3, you have to wonder how much lead time developers actually get at times. The loss of rumble and the addition of 3D tilt seemed last-minute to many.

Yeah... I can't help but think that they are still finalizing the last few options. The MHz reduction and the rumored PS3 secret, for instance, are two more examples.
 
Is the total bandwidth of RSX really 64 GB/sec?

(but how? -> FlexIO ?)

Well, as I understand it, there's 128bit to GDDR3, and then there's 128bit to XDR2. Of course, I could be wrong ...

And how that relates to 64GB/sec and if that number is correct, I have no idea about.
 
He is under NDA,

but I believe there are no secrets, just a way to hype people up after the latest news.


Yeah, but isn't it public knowledge that a secret is still left to be seen with the PS3? I thought he could confirm that the secret exists, while not telling us what the secret is.
 
Well, as I understand it, there's 128bit to GDDR3, and then there's 128bit to XDR2. Of course, I could be wrong ...
The XDR controller on Cell is 64bit, not 128. And the same applies to the FlexIO connection between them. FlexIO by nature is divided into 8-bit lanes (which is apparently the finest granularity you can get at the controller, if you believe Rambus), so 40 GB/sec with 35 GB/sec dedicated to RSX and 5 GB/sec to the SB (as the specs imply) suggests that each lane gives you 5 GB/sec... which in turn implies that RSX's connection to XDR is 56-bit. I'm sure more than a handful of people here should have arrived at the same conclusion.

In any case -- 700 MHz * 2 (DDR) * 128 bit = 22.4 GB/sec, vs. 650 MHz * 2 * 128 bit = 20.8 GB/sec (7.14% drop). And of course, assuming that it does not affect the FlexIO at all, you've got total bandwidth going from 57.4 GB/sec down to 55.8 GB/sec (2.78% drop). Other than nAo apparently using 8-bit FP precision, nothing really surprising.
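The arithmetic above can be sanity-checked with a quick script. The 35 GB/sec FlexIO-to-RSX figure and the 5 GB/sec-per-lane assumption are the spec numbers from the posts above, not confirmed values:

```python
# Sanity check of the GDDR3 / total-bandwidth figures discussed above.
# Assumes a 128-bit GDDR3 bus (DDR), 35 GB/sec of FlexIO bandwidth between
# Cell and RSX, and 8-bit FlexIO lanes at 5 GB/sec each (thread figures).

def gddr3_bw(mhz, bus_bits=128):
    """Peak GDDR3 bandwidth in GB/sec: clock * 2 (DDR) * bus width in bytes."""
    return mhz * 2 * (bus_bits // 8) / 1000.0

flexio_to_rsx = 35.0               # GB/sec, from the rumored specs
local_700 = gddr3_bw(700)          # 22.4 GB/sec at the original clock
local_650 = gddr3_bw(650)          # 20.8 GB/sec at the reduced clock

# Local drop vs. drop in combined (local + FlexIO) bandwidth
local_drop = (local_700 - local_650) / local_700 * 100
total_drop = (local_700 - local_650) / (local_700 + flexio_to_rsx) * 100

# 35 GB/sec at 5 GB/sec per 8-bit lane implies a 56-bit link to XDR
lanes = flexio_to_rsx / 5.0

print(local_700, local_650)                     # 22.4 20.8
print(round(local_drop, 2), round(total_drop, 2))  # 7.14 2.79
print(int(lanes * 8), "bit FlexIO link to RSX")    # 56 bit
```

This reproduces the ~7.1% local and ~2.8% total figures, and the 56-bit effective width claimed earlier in the thread.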
 
5 mantissa (effectively 6 with normalization), 2 exponent, 1 sign, then. ;)
Eeek! no :)
Three positive values are packed into 4 bytes: two are stored as 8-bit fixed-point values, and the last one is stored in a kind of custom FP format (since it's a logarithm) which uses 4.5 bits for the exponent and 11.5 for the mantissa.

Marco
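A rough sketch of that layout, for illustration only. The real format is not public; here the "4.5-bit exponent / 11.5-bit mantissa" split is interpreted as a 16-bit fixed-point encoding of log2(x) with 11.5 fractional bits, and the names `pack3`/`unpack3` are hypothetical:

```python
import math
import struct

# Hypothetical sketch of a 4-byte packing like the one Marco describes:
# two 8-bit fixed-point values plus one 16-bit log-domain value.
LOG_SCALE = 2 ** 11.5   # fractional resolution of the log2 field

def pack3(a, b, c):
    """a, b in [0, 1) -> 8-bit fixed point; c > 0 -> 16-bit log2 fixed point."""
    ia = int(a * 256) & 0xFF
    ib = int(b * 256) & 0xFF
    # Bias the signed log2 value so it fits an unsigned 16-bit field.
    ic = int(round(math.log2(c) * LOG_SCALE)) + 0x8000
    assert 0 <= ic <= 0xFFFF, "c out of representable range"
    return struct.pack("<BBH", ia, ib, ic)   # exactly 4 bytes

def unpack3(blob):
    """Inverse of pack3: recover the two fixed-point values and the log one."""
    ia, ib, ic = struct.unpack("<BBH", blob)
    return ia / 256.0, ib / 256.0, 2 ** ((ic - 0x8000) / LOG_SCALE)
```

With 11.5 fractional bits in the log domain, the round-trip error on the third value stays well under 0.1%, which is the point of storing a logarithm this way.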
 
MasterDisaster, I'm sure a developer like nAo would have known about this some time in advance, before it leaked here that a speed downgrade was going to occur. So they should be able to work around it, as long as they were given ample notice.

From our point of view (as developers) there was never a downgrade. The devkits were gradually upgrading all the time. So, you see, if there was anything short of what was promised, we just never got to use it.
 
The XDR controller on Cell is 64bit, not 128. And the same applies to the FlexIO connection between them. FlexIO by nature is divided into 8-bit lanes (which is apparently the finest granularity you can get at the controller, if you believe Rambus), so 40 GB/sec with 35 GB/sec dedicated to RSX and 5 GB/sec to the SB (as the specs imply) suggests that each lane gives you 5 GB/sec... which in turn implies that RSX's connection to XDR is 56-bit. I'm sure more than a handful of people here should have arrived at the same conclusion.

Like in this post? ;)

http://www.beyond3d.com/forum/showpost.php?p=767079&postcount=13

Anyway, I'm looking forward to seeing the final hardware layout.
 
From our point of view (as developers) there was never a downgrade. The devkits were gradually upgrading all the time. So, you see, if there was anything short of what was promised, we just never got to use it.

so you ARE basically CONFIRMING the GPU speed was dropped from the initial 550 MHz figure... :cool:
 
From our point of view (as developers) there was never a downgrade. The devkits were gradually upgrading all the time. So, you see, if there was anything short of what was promised, we just never got to use it.

But when you're developing, don't you target certain specs knowing that they will eventually be delivered? I mean, if you were counting on a 22% increase in GPU clock speed, from 450 to 550 MHz, but only received an 11% increase, couldn't that sort of throw a wrench into the plans? Especially if you're really pressed for time?
 
But when you're developing, don't you target certain specs knowing that they will eventually be delivered? I mean, if you were counting on a 22% increase in GPU clock speed, from 450 to 550 MHz, but only received an 11% increase, couldn't that sort of throw a wrench into the plans? Especially if you're really pressed for time?

We might develop with some target in mind, but (speaking personally here) I never expect an explicit X-percent increase based on some promised clock change. There are too many variables for me to be able to comfortably predict exact performance until someone drops a bit of hardware onto my desk that does what it's supposed to do that I can test on.

These days enough things are scalable that if more performance appears from somewhere, it can be put to use. If not, never mind.

It is kind of nice to know that more performance will be available, because it gives you a little headroom should designers or artists get a bit crazy, but I never tell them that.
 