News about Rambus and the PS3

266 MB/s of sustained bandwidth or burst bandwidth ? Just asking... I forgot...

I really like the idea of infinite storage though... the real Digital VCR :)
 
I don't know much about the PS3 like you guys do, but from what I'm hearing from you guys the PS3 should have at least 1 gig of RAM
 
1 gig of RAM would probably be too much for a console. Is there (or will there be) optical media fast enough to fill that amount of memory in a reasonable time (less than 20 seconds)?
 
Panajev2001a said:
long post snipped

I started replying to this and wrote tons of good stuff, then for some reason I moved the scroll wheel on my keyboard a couple notches while pressing the shift key. This was interpreted as pressing the back button in the browser window and my post vanished into the bit-bucket of cyberspace.

I can't be arsed to do it all over again. Sorry.

Let's just agree to disagree on this, OK? :)

*G*
 
I see, with the processing power and on-chip bandwidth Cell has, that we might see for the first time micro-polygon ( REYES-like ) rendering

Just out of curiosity, how many micro-polygons per FRAME for a typical next-gen game at 1080p resolution are you expecting ?
 
You think a 2005 game can do with 9 megs of textures per frame ?

If they are VQ compressed ( we can decompress them using the APUs in the GPU ) we might get something like 1:8 compression... leaving 1.5 MB as a decompression buffer and 2.5 MB as a streaming buffer, we would have 40 MB of uncompressed textures ( compare: even using only 2 MB of the VRAM for textures and accounting for CLUT's 1:4 compression, we would only get 8 MB of uncompressed storage ).
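The e-DRAM texture budget above can be sketched quickly; all figures (the 9 MB pool, the buffer sizes, the 1:8 VQ ratio) are the post's own assumptions, not confirmed hardware numbers:

```python
# Post's assumptions: 9 MB of e-DRAM for textures, minus a 1.5 MB
# decompression buffer and a 2.5 MB streaming buffer, with 1:8 VQ.
EDRAM_TEXTURE_POOL_MB = 9.0
DECOMPRESSION_BUFFER_MB = 1.5
STREAMING_BUFFER_MB = 2.5
VQ_RATIO = 8  # 1:8 vector-quantization compression

compressed_mb = EDRAM_TEXTURE_POOL_MB - DECOMPRESSION_BUFFER_MB - STREAMING_BUFFER_MB
uncompressed_mb = compressed_mb * VQ_RATIO
print(compressed_mb, uncompressed_mb)  # 5.0 MB compressed -> 40.0 MB uncompressed
```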

Transferring 840 MVertices/s ( micro-polygons ) would mean 14 MVertices/frame...

That would take ~20 GB/s and this would leave ~5.6 GB/s between main memory and CPU + GPU...

Let's say we will leave for pure texture transfers 3 GB/s ( leaving to A.I., physics, sound and other code 2.6 GB/s )...

3 GB/s means that main RAM can provide 50 MB of compressed textures per frame...

Assuming we use VQ again ( the GPU in this case, as said before, is using some APUs for VQ decompression as with a micro-polygon based renderer the 1 TFLOPS CPU would do a "big" portion of the total work ) this would mean 400 MB of uncompressed textures per frame...

Ok... let's say that we could only get 1:6 compression ratio ( S3TC )...


On the GPU e-DRAM we would have 5 MB of compressed textures * 6 = 30 MB

Let's say we only had 2 GB/s left for texture streaming ( efficiency and such is hitting us harder )...

2 GB/s / 60 = ~33.33 MB of streamed compressed textures * 6 = ~200 MB of uncompressed textures... Total MB of textures per frame = ~230 MB


Ok let's say we only use CLUT...

On the GPU e-DRAM we would have 5 MB of compressed textures * 4 = 20 MB

3 GB/s / 60 = 50 MB of streamed compressed textures * 4 = 200 MB
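The three per-frame texture scenarios above (VQ, S3TC, CLUT) follow the same pattern, so here is a small sketch reproducing them; the streaming bandwidths, compression ratios, and the 5 MB of e-DRAM-resident compressed textures are all taken from the post's assumptions:

```python
# 60 fps assumed throughout, as in the post.
FPS = 60

def streamed_uncompressed_mb(stream_gb_s, ratio):
    """Uncompressed MB/frame delivered by streaming compressed textures."""
    return stream_gb_s * 1000.0 / FPS * ratio  # GB/s -> MB/frame, then expand

def resident_uncompressed_mb(edram_compressed_mb, ratio):
    """Uncompressed MB equivalent of compressed textures held in e-DRAM."""
    return edram_compressed_mb * ratio

# VQ (1:8), 3 GB/s of streaming budget: 50 MB/frame compressed -> 400 MB.
vq = streamed_uncompressed_mb(3, 8)
# S3TC (1:6), only 2 GB/s left, plus 5 MB resident: ~200 + 30 = ~230 MB.
s3tc = streamed_uncompressed_mb(2, 6) + resident_uncompressed_mb(5, 6)
# CLUT (1:4), 3 GB/s, plus 5 MB resident: 200 + 20 = 220 MB.
clut = streamed_uncompressed_mb(3, 4) + resident_uncompressed_mb(5, 4)
print(vq, s3tc, clut)
```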

You think about the FUTURE, but NOT streaming textures from main RAM is not the FUTURE...
 
I think the PS3 will launch with less than 512 megs of RAM, including embedded RAM. Why ? Cost. All my arguments come from cost. Not only that, but with 64 megs of embedded RAM they won't need as much RAM as a GPU in a PC.
 
how many micropolygon per FRAME for a typical next gen game on a 1080p resolution are you expecting

Actually that is resolution dependent... an advantage of micro-polygon renderers like REYES or similar ones is that we can get quite nice quality motion blur and AA "relatively" cheap, so we might not need to pump the resolution to 1080p...

Normally I'd expect to have polygons the size of half a pixel, and depending on whether we do deferred ( slicing 'n dicing and --> ) shading or not, this would change the number of micro-polygons actually shaded and pushed through the pipeline...

1024x768 * 8x ( let's assume over-draw ) = ~6.29 MPixels

This means ~12.6 MVertices/frame

I hope my brain was processing the math correctly...
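The math above checks out; as a quick sketch (half-pixel micro-polygons and the 8x overdraw figure are the post's assumptions):

```python
# 1024x768 at an assumed 8x overdraw, with micro-polygons half a pixel
# in area, i.e. two micro-polygons covering each shaded pixel.
pixels_shaded = 1024 * 768 * 8      # ~6.29 million pixels
micropolys_per_pixel = 2            # half-pixel-sized polygons
mverts = pixels_shaded * micropolys_per_pixel / 1e6
print(round(mverts, 2))             # ~12.58 MVertices/frame
```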
 
jvd said:
I think the PS3 will launch with less than 512 megs of RAM, including embedded RAM. Why ? Cost.

*EEEEEEE!* Wrong answer, thank you for playing!

Sony isn't in the business of building budget consoles. If the company had been Nintendo I could have believed you. If Sony's primary concern was cost, they wouldn't have chosen to build their own microprocessor (with partners, of course) in the first place.

All my arguments come from cost.

Yes, I know, and you get it wrong every time. :) Hehe, sorry man, I'm not flaming ya. :) If it was Nintendo we were talking about I would have agreed with you.

Not only that but with 64 megs of embeded ram they wont need as much ram as a gpu in a pc.

Sorry, as I would have explained in my reply to Panajev, this doesn't work. You need to keep duplicate copies in main ram as well as edram. If you don't, you're giving yourself the mother of all headaches when you have to swap data in and out of edram. You end up using twice as much bandwidth, transfers take twice as long (first out with the old, then in with the new) and you have to try to avoid memory fragmentation as best you can both in edram and main ram, because data packets are unlikely to be of the exact same size. Of course, clearing out fragmentation will consume even more bandwidth.
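The swap-cost argument above can be put in rough numbers. This is a back-of-the-envelope sketch, not from the original post: the block size and bus bandwidth are invented for illustration only.

```python
# If e-DRAM holds the ONLY copy of a data block, replacing it costs a
# write-back to main RAM followed by a fill; with a duplicate kept in
# main RAM, the stale block can simply be overwritten in place.
BLOCK_MB = 4.0       # hypothetical data block size
BUS_GB_S = 25.6      # hypothetical external bus bandwidth

one_way_ms = BLOCK_MB / BUS_GB_S   # 1 GB/s moves 1 MB per ms
swap_ms = 2 * one_way_ms           # out with the old, in with the new
overwrite_ms = one_way_ms          # duplicate in main RAM: fill only
print(swap_ms, overwrite_ms)       # swap costs twice the time (and bandwidth)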

*G*
 
If Sony's primary concern was cost, they wouldn't have chosen to build their own microprocessor

BEEEEP!

Thanks for playing, try again :p ( hey this is fun :) )

Their concern ultimately is cost, and even their consolidation and recent investments in the semiconductor business are ultimately about making Sony a more efficient corporation ( one that doesn't buy ICs that other subdivisions of the company are producing ).

MS has money and the will to spend quite a bit of it... Sony has to bleed them dry by pushing for innovation and efficient plans toward reducing manufacturing costs as quickly as possible... MS keeps playing technological catch-up, the wait-and-see game, but this costs them money as they need more capital to achieve the same effect...

Also, buying chips from 3rd parties means paying for their R&D... you would waste money and pay them to design better chips in the future...

Also, manufacturing your own chips means that you can quickly convert successes in newer manufacturing technologies into lower manufacturing costs for the chips you are producing, since you control the manufacturing directly...

All the different revisions of the EE and GS ( the die shrinks that led to an 8 Watt, 86 mm^2 EE+GS @ 90 nm chip ), thanks to better and better manufacturing lines, allowed Sony to cut its manufacturing costs faster than MS, who were tied to deals signed with 3rd parties ( MS themselves admitted that the fall in the manufacturing price of their hardware would be slower than Sony's ).

They do think about the bottom line: investors are nervous already ( some of them are not even seeing the long-term investment Sony is going [and has, this year] to put down reflected in these 3-year balance sheets ) and you do not want to scare them...
 
Tell me, why would you want to keep using polygons as primitives if you can do millions and millions of vertices per frame? Why not go with voxels or something similar?
 
Normally I'd expect to have polygons the size of half a pixel, and depending on whether we do deferred ( slicing 'n dicing and --> ) shading or not, this would change the number of micro-polygons actually shaded and pushed through the pipeline...

1024x768 * 8x ( let's assume over-draw ) = ~6.29 MPixels

This means ~12.6 MVertices/frame

Hmm, I thought the typical number they used in their stochastic sampling is 1 pixel divided into 16 subpixels.
 
Tell me, why would you want to keep using polygons as primitives if you can do millions and millions of vertices per frame? Why not go with voxels or something similar?

It's all about the problem you're solving. Voxels trade space for computation. Vertices trade computation for storage. Currently, we have LOTS of computing power but little space, and we struggle with the locality-of-reference problems that come with using lots of space.
 
V3,

I did not say using Pixar's RenderMan for real-time 3D, I said REYES-like, and not exactly like the highest-quality implementations you can find...

In this case I am ending up with 8x less work :) ( and considering that the shading work is not particularly trivial, this is not something you can easily overlook ).
 
V3 said:
Normally I'd expect to have polygons the size of half a pixel, and depending on whether we do deferred ( slicing 'n dicing and --> ) shading or not, this would change the number of micro-polygons actually shaded and pushed through the pipeline...

1024x768 * 8x ( let's assume over-draw ) = ~6.29 MPixels

This means ~12.6 MVertices/frame

Hmm, I thought the typical number they used in their stochastic sampling is 1 pixel divided into 16 subpixels.

Let's say we use four sub-samples: we split polygons so they reach 1/4th the size of a pixel in area...

Then let's say we put the limit on 14 MVertices/frame...

This means that at 1024x768 we can have an overdraw of 4.45x, which is not that bad...
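The overdraw figure above can be reproduced directly from the post's assumptions (a 14 MVertices/frame budget and quarter-pixel micro-polygons, i.e. four per pixel):

```python
# Invert the earlier calculation: given a per-frame micro-polygon budget,
# how much overdraw can 1024x768 afford at four micro-polygons per pixel?
budget_verts = 14e6                 # 840 MVertices/s at 60 fps
pixels = 1024 * 768
micropolys_per_pixel = 4            # polygons 1/4th of a pixel in area
overdraw = budget_verts / (pixels * micropolys_per_pixel)
print(round(overdraw, 2))           # ~4.45x
```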
 
Aww, you do not want a REYES compromise. It's better to use the current approach used by GPUs than what you call REYES-like.
 
Yes, I do want a REYES compromise... I like the standardized approach of micro-polygons and the fact that we can do true displacement mapping as we are working sub-pixel...

I presented another example, using micro-polygons the size of 1/4th of a pixel... and it still yielded 4.45x of overdraw, and you know this could be combined with techniques that help reduce opaque over-draw... if we used them, it would mean we could afford that kind of over-draw in addition...
 
Look at one of my previous posts... taking a worst-case scenario, that would mean ~20 GB/s ( 840 MVertices/s at 24 bytes/vertex )...
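The ~20 GB/s figure follows directly from the post's two assumptions (840 MVertices/s and 24 bytes per vertex):

```python
# Worst-case vertex transfer bandwidth, using the post's figures.
verts_per_s = 840e6                 # 840 MVertices/s of micro-polygons
bytes_per_vert = 24                 # assumed vertex size
gb_s = verts_per_s * bytes_per_vert / 1e9
print(gb_s)                         # 20.16 GB/s, i.e. ~20 GB/s
```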
 
Why would you want to transfer micropolygons from main memory to be rendered ? Are we talking about cutscenes here ?
 