Several UK Studios working with PlayStation3 DEV KITS

version said:
devkit has 2 Cells, SPEs compute intersection, PPE shading :)

Hmmmmm...?

PPE does what kind of shading? Vertex, pixel? "SPEs compute intersection"...? Sorry if these are dumb questions...
 
Titanio said:
version said:
devkit has 2 Cells, SPEs compute intersection, PPE shading :)

Hmmmmm...?

PPE does what kind of shading? Vertex, pixel? "SPEs compute intersection"...? Sorry if these are dumb questions...

Raytracing algorithm :) The PPE does texturing and shading, the SPEs do ray-triangle intersection :)
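
(The posts above don't say which intersection test the SPEs would actually run; the sketch below is a plain scalar-C Möller-Trumbore ray-triangle test, the usual candidate for a kernel of this kind, purely to illustrate the sort of work that would be pushed onto the SPEs. All names are illustrative, not from the devkit.)

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b)   { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3 cross(vec3 a, vec3 b) { return (vec3){a.y*b.z - a.z*b.y,
                                                  a.z*b.x - a.x*b.z,
                                                  a.x*b.y - a.y*b.x}; }
static float dot(vec3 a, vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 and writes the hit distance to *t if the ray (orig, dir)
 * hits triangle (v0, v1, v2); returns 0 otherwise. */
int ray_triangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2, float *t)
{
    const float EPS = 1e-6f;
    vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < EPS) return 0;        /* ray parallel to triangle */
    float inv = 1.0f / det;
    vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;
    vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;
    *t = dot(e2, q) * inv;                 /* distance along the ray */
    return *t > EPS;
}
```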
 
version said:
Titanio said:
version said:
devkit has 2 Cells, SPEs compute intersection, PPE shading :)

Hmmmmm...?

PPE does what kind of shading? Vertex, pixel? "SPEs compute intersection"...? Sorry if these are dumb questions...

Raytracing algorithm :) The PPE does texturing and shading, the SPEs do ray-triangle intersection :)

Is this your own speculation, or..?
 
Version has a bad habit: he speculates all the time, making people believe he's speaking about 'facts' ;)
 
version said:
london-boy said:
Why would the CPU handle texturing?

Texture data is in main RAM; an SPE cannot directly read 1-2 texels, but the CPU can.
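
(For context on that remark: an SPE only loads and stores from its 256 KB local store, so anything in main RAM has to be brought in by DMA, and even a tiny texel fetch becomes an aligned block transfer. A minimal sketch in the style of the Cell SDK's spu_mfcio.h intrinsics; the function name, buffer name and 128-byte block size are illustrative assumptions, not code from the thread.)

```c
#include <spu_mfcio.h>
#include <stdint.h>

#define TAG 3

/* 'texture_ea' is the main-RAM effective address of the texture.
 * ls_buf must live in local store and be suitably aligned. */
void fetch_texel_block(uint64_t texture_ea, unsigned block_offset,
                       volatile uint8_t *ls_buf /* 128-byte aligned */)
{
    /* Kick off the transfer: local store <- main memory. */
    mfc_get(ls_buf, texture_ea + block_offset, 128, TAG, 0, 0);

    /* Block until the tagged DMA completes. */
    mfc_write_tag_mask(1 << TAG);
    mfc_read_tag_status_all();

    /* ls_buf now holds the aligned block containing the texel(s) we wanted. */
}
```

On the PPE side the equivalent operation is just a pointer dereference into main memory, which is the contrast version is drawing.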

Wouldn't it be easier (to choose just one adjective out of a hundred) to have the GPU access main memory for textures, without having to go through the CPU?!
 
Speaking of memory... my current 'most want to know' detail must be the memory configuration.

For relative comparison, on the 360 (is that official? :p) we have ~20 GB/s feeding 3 cores + 1 GPU. For the PS3, following what has been speculated to date, we have 25 GB/s feeding 1 CPU + 8 SPEs + 1 GPU.

If memory bandwidth doesn't increase, I suppose most applications will focus on routines and algorithms with a very high (computation : input-data) ratio. AI doesn't map too well (to the SPEs). Procedural visuals have limited practical use. Can't overdo particle effects. Lots of animation? Lots of lighting calculations?
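
(A rough back-of-envelope on what that shared figure means per frame; only the numbers quoted in the post, 25 GB/s and an assumed 60 fps, are used, so this is illustrative rather than insider information.)

```c
#include <stdio.h>

int main(void)
{
    double shared_bw_gbs   = 25.0;   /* speculated shared bandwidth */
    double fps             = 60.0;   /* assumed frame rate          */
    double bytes_per_frame = shared_bw_gbs * 1e9 / fps;

    /* Everything - CPU code/data, SPE streaming, GPU textures and
     * framebuffer - has to fit in this per-frame budget if there is
     * no separate video memory or eDRAM. */
    printf("Per-frame budget: %.0f MB shared by CPU + SPEs + GPU\n",
           bytes_per_frame / 1e6);   /* ~417 MB per frame */
    return 0;
}
```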
 
Passerby, in the Xenon numbers you should also factor in eDRAM bandwidth.
I don't know if the PS3 GPU has eDRAM or not, but even if it doesn't, I don't think 25.6 GB/s of bandwidth will just be shared between the CPU and GPU.
IMHO the PS3 GPU will have another pool of DRAM to work with, but I'm not excluding that the GPU can use Cell's external bandwidth too via FlexIO.
25.6 GB/s to be shared between a 200 GFLOPS monster and a next-generation GPU would be a shame! There should be something more...
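
(To see why render-target traffic on a shared bus is the worry here, a rough estimate follows; the resolution, buffer formats, MSAA level and overdraw are my own assumptions for illustration only.)

```c
#include <stdio.h>

int main(void)
{
    double pixels   = 1280.0 * 720.0;  /* assumed 720p render target        */
    double bpp      = 4 + 4;           /* 32-bit colour + 32-bit Z per sample */
    double samples  = 4.0;             /* assumed 4x MSAA                   */
    double overdraw = 3.0;             /* assumed average overdraw          */
    double rw       = 2.0;             /* read + write for blend / Z test   */
    double fps      = 60.0;

    double gbs = pixels * bpp * samples * overdraw * rw * fps / 1e9;
    printf("Render-target traffic alone: ~%.1f GB/s\n", gbs);  /* ~10.6 GB/s */
    return 0;
}
```

With assumptions like these, render-target traffic by itself would eat a large slice of a ~25.6 GB/s shared bus, which is the contention problem eDRAM is meant to take off the table.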
 
I would like to have eDRAM at least for renderbuffers like Xenon; I don't like the idea of fixed-cost effects having bizarre random speeds because of nasty bandwidth contention.
Of course, nAo already knows what I think the ideal eDRAM config would be :p

Definitely agree that only 25 GB/s of system-wide bandwidth would be on the weak side.
 
Fafalada said:
I would like to have eDRAM at least for renderbuffers like Xenon; I don't like the idea of fixed-cost effects having bizarre random speeds because of nasty bandwidth contention.

You speak as somebody with Xbox experience... I thought you were a PS2 boy, Faf! :eek:
 
Actually Faf, I know you work for a small team, or at least did, but what is the chance of you working on XB2, especially if it increases market share?
 
Now somewhere on this forum I remember seeing a slide, I think. Whatever it was, it showed a Cell processor with 25 GB/s on one side and 75 GB/s on the FlexIO shared between the GPU and other 'stuff'.

Dunno where. Dunno the source. Dunno nuffink, really! But that info certainly appeared round these parts somewhere...
 
Shifty Geezer said:
Now somewhere on this forum I remember seeing a slide, I think. Whatever it was, it showed a Cell processor with 25 GB/s on one side and 75 GB/s on the FlexIO shared between the GPU and other 'stuff'.

Dunno where. Dunno the source. Dunno nuffink, really! But that info certainly appeared round these parts somewhere...

That's correct, though the FlexIO is divided into a coherent and a non-coherent link; dunno what to make of it yet. Think I read that the coherent link would be used for connecting to other Cells and the non-coherent one could be used for IO.

My best bet is the GPU having its own set of 30-50 GB/s RAM (DDR, GDDR, XDR, whatever). Think of Chip and Fast RAM on the Amiga ;) It would be stupid IMHO to pull textures from XDR RAM through Cell, over the Cell/GPU link, into a small GPU eDRAM.
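
(A quick way to see the point about routing textures through Cell: the same bytes get charged against two buses, the XDR interface and the FlexIO link, instead of one dedicated pool. The 5 GB/s texture stream below is an assumed workload; the 25.6 and 75 GB/s figures are simply the numbers quoted earlier in the thread.)

```c
#include <stdio.h>

int main(void)
{
    double tex_stream_gbs = 5.0;    /* assumed texture fetch rate            */
    double xdr_bw_gbs     = 25.6;   /* Cell's XDR bandwidth (quoted above)   */
    double flexio_bw_gbs  = 75.0;   /* FlexIO figure from the slide          */

    /* Routed through Cell: the same bytes cross both links. */
    printf("Via Cell: %.1f%% of XDR + %.1f%% of FlexIO consumed\n",
           100.0 * tex_stream_gbs / xdr_bw_gbs,
           100.0 * tex_stream_gbs / flexio_bw_gbs);

    /* Dedicated GDDR pool next to the GPU: XDR and FlexIO stay untouched. */
    printf("Via local GDDR: 0%% of XDR, 0%% of FlexIO consumed\n");
    return 0;
}
```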
 
Come on now. Online gaming will be the next big thing. The only reason it's not now is because games play better on broadband (of course). So when the total number of broadband users goes up, the number of people playing and using microtransactions will grow as well.

The funny thing is Sony has talked about microtransactions for years now. That's really old news. M$ talks about it and people act like they discovered it. That really shows how little most people know about videogame history. Know your history!!! Kinda like when people say Christopher Columbus discovered America when it's obviously false.
 
Npl said:
Shifty Geezer said:
Now somewhere on this forum I remember seeing a slide, I think. Whatever it was, it showed a Cell processor with 25 GB/s on one side and 75 GB/s on the FlexIO shared between the GPU and other 'stuff'.

Dunno where. Dunno the source. Dunno nuffink, really! But that info certainly appeared round these parts somewhere...

That's correct, though the FlexIO is divided into a coherent and a non-coherent link; dunno what to make of it yet. Think I read that the coherent link would be used for connecting to other Cells and the non-coherent one could be used for IO.

My best bet is the GPU having its own set of 30-50 GB/s RAM (DDR, GDDR, XDR, whatever). Think of Chip and Fast RAM on the Amiga ;) It would be stupid IMHO to pull textures from XDR RAM through Cell, over the Cell/GPU link, into a small GPU eDRAM.

I've been hearing the Xenon is going to have a custom memory management chip codenamed: Fat Angus :LOL:
 