PS3 Goes Into Full Production...

Arwin said:
Considering that devkits have apparently had 2GB

Do the devkits really have 2GB of RAM? Can any devs comment on this?

I thought it was 1 GB (double the consumer unit for debugging overhead).
 
Arwin said:
You might be right. But TGS is as good a place as any, and like with the PSP, it doesn't need to be announced until right before the launch either.

Funnily enough, that spec change was announced in the July before launch. But it had been heavily rumoured for some months before that (gamesindustry.biz reported on it the previous January), whereas we've had no solid rumours about PS3 in that light.

There are no street prices for XDR, AFAIK. In fact, I think PS3 is the only device using it, so it'd be very difficult to get cost information.
 
Titanio said:
Funnily enough, that spec change was announced in the July before launch. But it had been heavily rumoured for some months before that (gamesindustry.biz reported on it the previous January), whereas we've had no solid rumours about PS3 in that light.
Not that I'm anticipating a RAM increase at all, but there have been a few squeaks suggestive of as much. If it did happen, we could point to some clues along the way.
 
Well, we haven't had anything as explicit as reports like this for PSP!

http://www.gamesindustry.biz/content_page.php?aid=2814

I would be shocked if they added more memory at this stage; I think we'd have heard something more solid.

For anyone who's interested, by the way, there's analyst commentary on this news here:

http://www.ft.com/cms/s/4447cc3c-171d-11db-abad-0000779e2340.html

It talks about how full production started earlier than expected, which has prompted one of their sources to speculate that the machine could even launch ahead of schedule. Rather misplaced speculation IMO, and Sony said today that it had no plans to change its release date.
 
Considering Cell's weakness when it comes to interacting with the GDDR3 (16MB/s read, 4GB/s write), it would arguably be much more useful to add more XDR than more GDDR3.

Cell's ability to contribute to the rendering pipeline could be what really sets it apart from the competition. But only having 256MB XDR could be a bit of a problem there.

I'm sure just adding more GDDR3 would be much cheaper, but will it really improve the system much? And wouldn't 256MB on a 256bit bus show more interesting results than 512MB on the 128bit bus?
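For a rough sense of the numbers, using the E3 '05 figure of GDDR3 at 700MHz (1400 MT/s) and treating these as ballpark rather than official:

\[
\text{128-bit: } 1400\,\text{MT/s} \times 16\,\text{B} \approx 22.4\,\text{GB/s}, \qquad \text{256-bit: } 1400\,\text{MT/s} \times 32\,\text{B} \approx 44.8\,\text{GB/s}
\]

Doubling the bus width roughly doubles peak bandwidth, independent of whether the pool is 256MB or 512MB.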
 
inefficient said:
Considering Cell's weakness when it comes to interacting with the GDDR3 (16MB/s read, 4GB/s write), it would arguably be much more useful to add more XDR than more GDDR3.

Cell's ability to contribute to the rendering pipeline could be what really sets it apart from the competition. But only having 256MB XDR could be a bit of a problem there.

I'm sure just adding more GDDR3 would be much cheaper, but will it really improve the system much? And wouldn't 256MB on a 256bit bus show more interesting results than 512MB on the 128bit bus?

I'm not sure adding a 256-bit bus is feasible. To my understanding the bus is more or less 256-bit already, where it is split between GDDR3 and Cell's FlexIO interface. Having a 256-bit bus to GDDR3 would mean giving up the direct connection to Cell, and thus indirect access to the XDR pool... without adding another bus or coming up with some weird 384-bit bus, I'm assuming... might as well go 512-bit then!

------

I'm not sure 256MB would be limiting for Cell's ability to aid in graphics... vertex data isn't huge, and the asset set should be smaller given Cell would be working on stuff composited into the scene, or working within certain stages of the rendering pipeline. A divergent discussion for sure... wish I had time for a topic. Bah!
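For a rough sense of scale, assuming ~32 bytes per vertex for position, normal and UVs (my assumption):

\[
2{,}000{,}000\ \text{vertices} \times 32\,\text{B} \approx 64\,\text{MB}
\]

so even a couple of million vertices of working data would be a modest slice of 256MB.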
 
scificube said:
I'm not sure adding a 256-bit bus is feasible. To my understanding the bus is more or less 256-bit already, where it is split between GDDR3 and Cell's FlexIO interface. Having a 256-bit bus to GDDR3 would mean giving up the direct connection to Cell, and thus indirect access to the XDR pool... without adding another bus or coming up with some weird 384-bit bus, I'm assuming... might as well go 512-bit then!

FlexIO and the GDDR3 bus are separate. They could put in a 256-bit bus to GDDR3 and maintain the FlexIO (again, though, highly unlikely IMO).

I don't think it would necessarily make any more or less sense to increase one pool over the other, if you were to increase one at all. Cell's read/write capability for GDDR3 is such that RSX has greater access to XDR than Cell has to GDDR3, which actually suggests they see RSX as the more data-hungry one.

Again though, it's idle speculation IMO; I really don't think it'll happen :p
 
Titanio said:
FlexIO and the GDDR3 bus are separate. They could put in a 256-bit bus to GDDR3 and maintain the FlexIO (again, though, highly unlikely IMO).

I don't think it would necessarily make any more or less sense to increase one pool over the other, if you were to increase one at all. Cell's read/write capability for GDDR3 is such that RSX has greater access to XDR than Cell has to GDDR3, which actually suggests they see RSX as the more data-hungry one.

Again though, it's idle speculation IMO; I really don't think it'll happen :p

I realize the busses are separate. I'm speaking more to where they both originate, as with the standard 256-bit bus of Nvidia GPUs. It would've been far easier not to need new memory, and cheaper not to need a new memory controller (let alone a more robust one, or even two), not to mention the extra complexity that larger and/or additional busses and their traces would have added to the motherboard. Beyond tweaking, this is what I think happened.

I just think they took half the end-to-end points and tied them up with Cell's already existing FlexIO, and this is what we got. If you or anyone else can illuminate how the busses were crafted completely independently of each other, I'd be glad to hear about that process, though.
 
inefficient said:
Considering Cell's weakness when it comes to interacting with the GDDR3 (16MB/s read, 4GB/s write), it would arguably be much more useful to add more XDR than more GDDR3.

I'm sorry, but I fail to see the correlation. At the end of the day, if Sony think most games will need more memory for graphics, they should put in more GDDR. If they think the XDR is not enough, they could add more. The fact that Cell is useless at accessing GDDR has little relevance here, compared to the real reasons they could find for upgrading. They won't add anything anyway.



I'm sure just adding more GDDR3 would be much cheaper, but will it really improve the system much? And wouldn't 256MB on a 256bit bus show more interesting results than 512MB on the 128bit bus?

That's all up to Sony to test and verify, then weigh against their budget and price point for the unit. They have decided what they have decided because of that ratio of performance to price.
 
scificube said:
I just think they took half the end to end points and tied them up with Cell's already existing Flexio and this is what we got. If you or anyone else can illuminate how the busses were crafted completely independent of the other I'd be glad to hear about that process though.

It seems to me that the FlexIO replaced the PCI-E/AGP interfaces, not 128 bits of the previous 256-bit bus...
 
Bobbler said:
It seems to me that the FlexIO replaced the PCI-E/AGP interfaces, not 128 bits of the previous 256-bit bus...

Could be... but wouldn't those interfaces need to be sped up a good bit for that to work? (AGP especially)
 
london-boy said:
The fact that Cell is useless at accessing GDDR has little relevance here, compared to the real reasons they could find for upgrading.
I'd say the chief reason for an upgrade would be non-gaming use. More GPU RAM isn't going to help much when the bottleneck there is BW. Whereas if you're using non-gaming apps on Linux, you have all of 256MB, of which a substantial amount is likely gobbled up by the OS. If Cell is ever to leverage its power for home photo and video editing, <256MB isn't going to cut it, and limiting yourself to virtual memory on the HDD costs you the performance benefit of the system.

This won't matter for web browsing or little apps, but for data-hungry media applications, the ones Cell is best at running, the inability to work with GDDR effectively is sure to be a limiting factor worth tackling. From that perspective, adding XDR would give more room for game data and working space, rather than graphics work, so it would benefit both gaming and other functions. An increase in GDDR will mostly only benefit gaming. At least, until PS3-specific applications are created that use the GDDR.
 
joebloggs said:
Do the devkits really have 2GB of RAM? Can any devs comment on this?

I thought it was 1 GB (double the consumer unit for debugging overhead).

Nope, the devkits have 512MB XDR + 256MB GDDR3 (not sure if it's 512MB GDDR3). This is for the 'almost final' devkit that we currently have.
 
inefficient said:
Considering Cell's weakness when it comes to interacting with the GDDR3 (16MB/s read, 4GB/s write)

Ain't this BS rumor gone already? This was the absolute worst case, where an SPE accesses the GDDR in a certain way that is not very practical. Using standard DMA burst transfers, the full bandwidth can be reached without any problems. Besides that, the PPU can access the GDDR without any restrictions.
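To illustrate what "standard DMA burst transfers" means here, a minimal SPU-side sketch in Cell SDK style (the buffer size and effective address are made up for the example; a sketch, not production code):

#include <stdint.h>
#include <spu_mfcio.h>   /* Cell SDK MFC intrinsics for SPU code */

#define CHUNK 16384      /* one DMA transfer; 16KB is the per-request max */

/* Local store buffer, 128-byte aligned so transfers run as full bursts. */
static uint8_t ls_buf[CHUNK] __attribute__((aligned(128)));

/* Pull CHUNK bytes from an effective address (XDR, or GDDR3 via
   FlexIO/RSX) into local store, then wait for completion. */
void fetch_chunk(uint64_t ea)
{
    const unsigned tag = 0;

    mfc_get(ls_buf, ea, CHUNK, tag, 0, 0);  /* queue on this SPE's MFC */
    mfc_write_tag_mask(1 << tag);           /* select the tag to wait on */
    mfc_read_tag_status_all();              /* stall until the DMA lands */

    /* ls_buf is now valid; the data moved as long bursts, unlike a
       scalar read that discards most of each 256-byte fetch. */
}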
 
Jesus2006 said:
Ain't this BS rumor gone already? This was the absolute worst case, where an SPE accesses the GDDR in a certain way that is not very practical. Using standard DMA burst transfers, the full bandwidth can be reached without any problems. Besides that, the PPU can access the GDDR without any restrictions.

Are all your posts going to be like this?

No, the PPE (PPU) *cannot* access GDDR3 without any restrictions! Cell has ~4GB/s write and 16MB/s read to GDDR3--this is not a rumor.

All your comments about SPEs, DMA burst transfers, etc... :?: Your source of info on this is very inaccurate. The specifics of your posts and the strong language indicate you have received some very poor technical information about Cell and the PS3.

And yes, there are ways around these issues; it just takes some planning and extra effort. RSX can read from XDR at a high rate, and likewise, if Cell needed access to GDDR3, RSX could copy data from GDDR3 to XDR very quickly as well.
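As a sketch of that workaround (the rsx_* functions below are invented placeholders, since the real PS3 graphics API is not public, but the data path is the one described above):

#include <stdint.h>
#include <stdio.h>

/* Invented stand-ins for "push a copy command to RSX's FIFO and wait on
   a fence"; here they just log what would happen. */
static void rsx_queue_copy(uint64_t dst_ea, uint64_t src_ea,
                           uint32_t size, uint32_t fence)
{
    printf("RSX: copy %u bytes %#llx -> %#llx, fence %u\n", size,
           (unsigned long long)src_ea, (unsigned long long)dst_ea, fence);
}

static void rsx_wait_fence(uint32_t fence)
{
    printf("RSX: fence %u signalled\n", fence);
}

/* Instead of Cell reading GDDR3 directly (~16MB/s), have RSX blit the
   data into an XDR staging buffer that Cell can then read at full speed. */
void cell_read_gddr3(uint64_t gddr3_src, uint64_t xdr_staging, uint32_t size)
{
    const uint32_t fence = 1;

    rsx_queue_copy(xdr_staging, gddr3_src, size, fence); /* GDDR3 -> XDR */
    rsx_wait_fence(fence);  /* wait until the copy has landed in XDR */

    /* PPE loads or SPE DMA from xdr_staging now run at XDR bandwidth. */
}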
 
Acert93 said:
No, the PPE (PPU) *cannot* access GDDR3 without any restrictions! Cell has ~4GB/s write and 16MB/s read to GDDR3--this is not a rumor.

Sorry for the bad (automatic google) translation, this is from another forum:

www.3dcenter.de said:
Cell actually can access it at full bandwidth. The PPE can, with completely normal program code, at full bandwidth. The PPE has a completely conventional cache, which becomes extremely important in what follows. The problem with the SPEs is that they have no cache. A memory transfer over FlexIO => RSX => GDDR3 always fetches whole 256-byte blocks, in order to get a nice long burst and make good use of the bandwidth; after all, RSX's memory access is blocked in the meantime, so that forced interruption had better be worth it. If the PPE gets such a long burst shoved down its throat, the data lands in the L2 cache. With an SPE, everything surplus simply falls out the back again. Since, as said, there is no cache there, only the requested word (any power-of-two size between 1 byte and 16 bytes, depending on the instruction) can be kept from that nice long burst at all. For the rest there is no destination, and so it's gone for good. Even if the byte right after it is requested by the immediately following instruction, the whole "cache line" is fetched again, at the corresponding cost. That alone can perhaps explain why only a ridiculously small fraction of the bandwidth is usable. In reality it is even worse, because the SPE ISA has no access to memory addresses outside the local store. In the really bad case this can be handled with exceptions (and corresponding handlers), which even involves the PPE. *ugh* It is also unclear how exactly the 16MB/s was measured; a factor of ~1400 below the expected bandwidth is no clean power of two, which to me suggests this is a real-world test. Exactly what was tested there, though, who knows, unless Sony tells you.

The normal case, the only case the SPEs were ever intended for in the first place, looks completely different: DMA from the local store (to somewhere), or into the local store (from somewhere). Each SPE has its own private DMA controller. Unlike the SPE core, this DMA controller "knows" the addresses of the entire memory in the system: XDR, GDDR, the local stores of the neighbouring SPEs and, if enabled, the "locked" parts of the PPE's L2 cache. These DMA controllers all hang off the FlexIO and can communicate with the outside world at maximum speed. So the bandwidth of the GDDR3 memory can indeed be fully used, as long as RSX isn't using it at that moment.

...

Transparent, "more normal" access to memory outside your own LS is feasible only via a detour through an exception handler. There you program some "channel" registers and essentially let the PPE fill in the data while the SPE stands still. That gives you transparency, but it is dead slow. I think this absolute worst case is where the famous 16MB/s figure comes from.

Oh, and this guy is an NDS dev, AFAIK.
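As a rough sanity check of that worst case (my own arithmetic, not from the quote): if burst waste were the whole story, keeping at most 16 of every 256 fetched bytes would still leave about 1/16 of peak bandwidth, i.e. over a GB/s. The measured figure is a factor of ~1400 down:

\[
\frac{22.4\,\text{GB/s}}{16\,\text{MB/s}} \approx 1400 \gg \frac{256}{16} = 16
\]

so the per-access exception-handler round trip, not the discarded burst data, must dominate.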
 
Jesus2006 said:
Sorry for the bad (automatic google) translation, this is from another forum:

Oh, and this guy is an NDS dev, AFAIK.

I'm not too much into machine-translated technical speak, but the thread below should answer most of your questions:

http://www.beyond3d.com/forum/showthread.php?p=770681#post770681

And to clarify: the numbers (Cell 16MB/s read from GDDR3, ~4GB/s write to GDDR3) come from a Sony devstation slide and are included in Sony's own PS3 documentation.
 
4 GB/s is enough to saturate any GPU out there with geometry; it's a non-issue.
Regarding the other figure... why read data when you can write them? ;)
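Roughly, assuming 60 fps and ~32 bytes per vertex (my assumptions, for scale):

\[
\frac{4\,\text{GB/s}}{60\,\text{fps}} \approx 67\,\text{MB per frame} \;\Rightarrow\; \frac{67\,\text{MB}}{32\,\text{B/vertex}} \approx 2\ \text{million vertices per frame}
\]

far more geometry per frame than typical scenes of this generation push.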
 
Titanio said:
FlexIO and the GDDR3 bus are separate. They could put in a 256-bit bus to GDDR3 and maintain the FlexIO (again, though, highly unlikely IMO).

I don't think it would necessarily make any more or less sense to increase one pool over the other, if you were to increase one at all. Cell's read/write capability for GDDR3 is such that RSX has greater access to XDR than Cell has to GDDR3, which actually suggests they see RSX as the more data-hungry one.

Again though, it's idle speculation IMO; I really don't think it'll happen :p

I'm sorry, but I'm kind of confused. What exactly do we currently know as fact, and what is speculation? :oops:
 
Shifty Geezer said:
It was news back then. Now they need to add to the marketing campaign, rather than bore everyone by regurgitating the same old specs, which everyone's already heard.
You're assuming that the specs are the same. A lot has changed since Sony released official PS3 specs: the launch date, the look of the system, dual SKUs, downgrades, etc. At the PS Conference in March, the RSX clock speed was conspicuously missing, and by E3 '06 the RSX and Cell specs had been replaced by vague bullet points.
:oops: How can we know less than we did last year?! Has someone used a satellite brain-ray to wipe the world's memories of PS3's spec from E3 '05? And have they scoured the internet and removed all reference to such figures so we'll never again know what they were?
That's EXACTLY what I'm saying. You won't see a detailed PS3 spec sheet newer than May 2005. Everything about the system has been updated except the specs. I'm not saying that they have decreased or increased, but the fact that they are gone altogether makes me a bit suspicious. In contrast, I can go to Xbox.com and see semi-detailed specs for the Xbox 360.

I'm sure some of us can ignore the fact that the emperor has no clothes on, but I can't be the only one a bit taken aback by the fact that Sony refuses to list the clock speed of either Cell or RSX, when last year they were almost trying too hard to convince the world how much more "superior" the PS3 was to the competition. If not much had changed since E3 '05 this would be a non-issue. However, almost everything about the system has changed, maybe even the specs. But it seems as if Sony is actively "hiding" them. Those of us who could add some insight to this are obviously gagged, but that's business, albeit a shame.

The PS meeting is soon, so maybe something of substance will leak. But I'm not holding my breath.
 