News about Rambus and the PS3

function said:
Squeak said:
Okay, maybe I shouldn’t have mentioned voxels, as I’m not completely sure how they work, but what I meant was this: When you can generate multiple 3d points (that’s what vertices are, right?) per pixel, why bother sending all that extra information to draw triangles when you could just colour the 3d points and be done with it? In short, what is the advantage of having subpixel triangles?

I think one good reason might be that to calculate lighting accurately you need a normal, which means you need a surface (which a triangle has). I don't know how you would calculate accurate, directional lighting for a surface made of particles. For a bumpy surface, you'd have to try and give each particle a size (radius) and cast a shadow from that point, which would be even more complex for point or spot lighting.

Though I imagine self shadowing on a micro-polygon level would be quite expensive anyway.

Maybe I'm missing something pretty simple ... ?

I think that the only realistic way to use such a rendering approach would be to do almost all models in HOS.
In that case, it would "just" be a case of sampling the normal the required number of times on the surface?

To V3: Thank you for the link; that was a really good read. Weird that I haven’t come across this paper until now. :)
 
V3 said:
They surely had the power with the GSCube

Check out the # of bones they used.

Well, if they do the micropolygon route, they better have a good sampling algorithm in place.

Yes, but that will be software-dependent... the hardware doesn't need to be modified for that...

GSCube could have taken the micro-polygon route: maybe not with the exact image quality obtained in off-line CG movies done with RenderMan, but surely with decent rendering quality and a decent frame-rate.

Not only is the patent Cell CPU ( 1 TFLOPS, 4 PEs, 8 APUs per PE ) considerably more powerful than the GSCube, it is also more balanced ( integer- and FP-wise ), has more bandwidth and much more local memory ( the EEs in the GSCube still only had 80 KB of total cache + local memory [RISC core: 16 KB SPRAM, 16 KB instruction cache, 8 KB data cache; VU0: 4 KB of data memory, 4 KB of instruction memory; VU1: 16 KB of data memory, 16 KB of instruction memory] ) to increase real-world efficiency.

6.2 GFLOPS * 16 = 99.2 GFLOPS

1 TFLOPS = 10.08x faster than the GSCube in pure CPU FP power

80 KB * 16 = 1.25 MB

4 MB of Local Storage ( SRAM ) = 3.2x the amount of SRAM ( and all of this is addressable ) and then we add the e-DRAM ( and the registers )...

With e-DRAM added ( 32 MB is the amount I will be using in this post ): 36 MB = 28.8x the amount of local memory.

Bandwidth to main RAM for the EEs = 3.2 GB/s ( for each EE )... 3.2 GB/s * 16 = 51.2 GB/s

Local bandwidth to the e-DRAM would be on the order of 100 GB/s, which is 1.95x. Even if we assumed 51.2 GB/s, this would be an apples-to-oranges scenario: each EE shares its main RAM bandwidth with the GS, while the Cell processor has much more SRAM storage ( 3.2x ) than the 16 EEs, so it would buffer data better and waste less bandwidth ( pulling data from e-DRAM, compared to the EEs pulling data from their main RAM )...
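
If you want to sanity-check the arithmetic, here is a quick back-of-the-envelope script. The Cell-side figures ( 1 TFLOPS, 4 MB of SRAM, 32 MB of e-DRAM, ~100 GB/s ) are the patent-derived assumptions used in this post, not confirmed specs:

```python
# Back-of-the-envelope comparison: 16x EE ( GSCube ) vs. the patent Cell figures.
# The Cell-side numbers ( 1 TFLOPS, 4 MB SRAM, 32 MB e-DRAM, ~100 GB/s ) are the
# assumptions used in this post, not confirmed hardware specs.

EE_GFLOPS   = 6.2     # per Emotion Engine
EE_LOCAL_KB = 80      # SPRAM + caches + VU memories per EE
EE_BW_GBPS  = 3.2     # main RAM bandwidth per EE
NUM_EE      = 16      # 16x PS2 GSCube

CELL_GFLOPS   = 1000  # 1 TFLOPS patent figure
CELL_SRAM_MB  = 4     # total APU Local Storage assumed in this post
CELL_EDRAM_MB = 32    # e-DRAM amount assumed in this post
CELL_EDRAM_BW = 100   # GB/s, order-of-magnitude estimate

gscube_gflops = EE_GFLOPS * NUM_EE            # 99.2 GFLOPS
gscube_mem_mb = EE_LOCAL_KB * NUM_EE / 1024   # 1.25 MB
gscube_bw     = EE_BW_GBPS * NUM_EE           # 51.2 GB/s

print(f"FP power      : {CELL_GFLOPS / gscube_gflops:.2f}x")    # ~10.08x
print(f"SRAM          : {CELL_SRAM_MB / gscube_mem_mb:.1f}x")   # 3.2x
print(f"SRAM + e-DRAM : {(CELL_SRAM_MB + CELL_EDRAM_MB) / gscube_mem_mb:.1f}x")  # 28.8x
print(f"Local BW      : {CELL_EDRAM_BW / gscube_bw:.2f}x")      # ~1.95x
```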
 
Squeak said:
I think that the only realistic way to use such a rendering approach would be to do almost all models in HOS.
In that case, it would "just" be a case of sampling the normal the required number of times on the surface?

Yeah, I didn't think of that ... use a HOS and calculate the normal at the point where you take the position of each particle. Still not sure how you'd do cast shadows. :(
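
Just to make that concrete, here is a toy sketch of sampling a parametric surface for particle positions and taking the normal from the cross product of the partial derivatives at each sample point. The bumpy sphere is a made-up stand-in surface, not anything from a real engine:

```python
import numpy as np

# Toy sketch of "sample the HOS and take the normal there": positions come from
# a parametric surface P(u, v), and the normal at each sample point is the
# normalised cross product of the partial derivatives dP/du x dP/dv.
# The bumpy sphere below is just a made-up stand-in surface.

def surface(u, v):
    r = 1.0 + 0.05 * np.sin(10.0 * u) * np.sin(10.0 * v)   # small bump term
    return np.array([r * np.sin(v) * np.cos(u),
                     r * np.sin(v) * np.sin(u),
                     r * np.cos(v)])

def sample_particle(u, v, eps=1e-4):
    p  = surface(u, v)
    du = (surface(u + eps, v) - p) / eps     # numerical dP/du
    dv = (surface(u, v + eps) - p) / eps     # numerical dP/dv
    n  = np.cross(du, dv)
    return p, n / np.linalg.norm(n)          # particle position + unit normal

# one particle per ( u, v ) sample; densify until you get several per pixel
pos, normal = sample_particle(0.3, 1.1)
```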
 
Wait... are you talking about lighting a micro-polygon?

What about just using the normal each micro-polygon has ( micro-polygons are "flat shaded quadrilaterals", they do not specify front-faced particles ) and calculating the shading of each micro-polygon like you would flat-shade a regular polygon in current 3D engines ( dot product between light vector and micro-polygon normal )?

The bumpy surface is divided into micro-polygons and then you shade... shading always happens after the slice 'n dice phase...
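
A stripped-down sketch of that order of operations ( dice first, then flat-shade each micro-quad with a Lambert dot product ). A real REYES pipeline also does bounding/splitting, displacement and stochastic sampling of the shaded grid, all of which I am skipping here:

```python
import numpy as np

# Minimal "slice 'n dice, then shade" sketch: dice a patch into an n x n grid of
# micro-quads, take one normal per quad, and flat-shade it with a Lambert dot
# product against the light direction. A real REYES pipeline also does
# bounding/splitting, displacement and stochastic sampling, all skipped here.

def shade_grid(patch, light_dir, n=16):
    light = light_dir / np.linalg.norm(light_dir)
    us = np.linspace(0.0, 1.0, n + 1)
    vs = np.linspace(0.0, 1.0, n + 1)
    grid = np.array([[patch(u, v) for v in vs] for u in us])   # dice
    colours = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e1 = grid[i + 1, j] - grid[i, j]           # micro-quad edge along u
            e2 = grid[i, j + 1] - grid[i, j]           # micro-quad edge along v
            normal = np.cross(e1, e2)
            normal /= np.linalg.norm(normal)
            colours[i, j] = max(0.0, normal @ light)   # flat Lambert shade
    return colours

# e.g. a gently rippled patch lit mostly from above ( made-up test surface )
quads = shade_grid(lambda u, v: np.array([u, v, 0.1 * np.sin(6.0 * u)]),
                   light_dir=np.array([0.0, 0.3, 1.0]))
```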
 
We're trying to skip micro-polygons, and just use several accurately lit particles (representing the surface) to make up the colour value for each pixel.

My first thought was that a particle has no surface, just a point, and so you can't calculate a normal for it. But by calculating the normal at the point on the surface where the particle sits you can get round that.

I haven't had time to read all the stuff on Reyes and micropolygons that's been posted yet Panajev, but I'm going to as it looks interesting. :)
 
With that said, I am less scared about the external RAM bandwidth, as we do not need 50 GB/s: the CPU is not feeding 1 TFLOPS of computational power straight from that memory pool. We drive the data we need from the Local Storages and the e-DRAM; the external memory is there to keep the on-chip storage full, streaming in data fast enough not to let that on-chip storage run empty... and I believe that 25.6 GB/s is fast enough for the job.
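
A toy double-buffering sketch of the streaming idea ( purely illustrative, not how the real DMA engines would be programmed ): while one chunk that is already resident on chip is being crunched, the next chunk is fetched from the big external pool, so the external bus only has to keep up with how fast chunks are consumed, not with the chip's peak FLOPS.

```python
# Toy double-buffering sketch of the streaming idea ( illustration only, not
# how the real DMA engines would be programmed ): while the worker crunches one
# chunk that is already resident "on chip", the next chunk is fetched from the
# big external pool, so the external bus only has to keep up with how fast
# chunks are consumed, not with the chip's peak FLOPS.

def stream_process(dataset, chunk_size, process):
    buffers = [dataset[0:chunk_size], None]   # two Local Storage-sized buffers
    results, i, cur = [], chunk_size, 0
    while buffers[cur]:
        nxt = 1 - cur
        buffers[nxt] = dataset[i:i + chunk_size]   # "DMA" the next chunk in
        i += chunk_size                            # ( overlaps with compute on real HW )
        results.append(process(buffers[cur]))      # compute on the resident chunk
        cur = nxt
    return results

# e.g. stream 100 values through 16-element "buffers" and sum each chunk
sums = stream_process(list(range(100)), 16, lambda chunk: sum(chunk))
```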


BINGO!

This is most likely true, and it's the reason why I am NOT saying that this amount of main memory bandwidth is horrible. Sure, I would like more, but the enormous bandwidth of the APUs' local storage and the eDRAM should make up for it.
 
Megadrive:

This line of reasoning sounds suspiciously like all the crows cawing during spring/summer last year when we discussed whether ATI would implement 256-bit memory on R300. General consensus was that since Nvidia thought it was too expensive, there was no way it could be done. ;)

Remember, we're still talking about a part that's 1 1/2 years MINIMUM off into the future. Technology progresses...QUICKLY...in this business.


*G*
 
You have an excellent point there, Grall, one which I do not disagree with, really.

I myself would not be at all surprised to see 50 or even 100 GB per sec of main memory bandwidth, and 512 MB or more of main memory.

Although it's safe to assume there will be 25+ GB/sec and only 256 MB, at least for now :)
 
GSCube could have taken the micro-polygon route: maybe not with the exact image quality obtained in off-line CG movies done with RenderMan, but surely with decent rendering quality and a decent frame-rate.

Not only is the patent Cell CPU ( 1 TFLOPS, 4 PEs, 8 APUs per PE ) considerably more powerful than the GSCube, it is also more balanced ( integer- and FP-wise ), has more bandwidth and much more local memory to increase real-world efficiency.


Although the GSCube (the 16xPS2 version) has some things that PS3 won't have: 32 MB of eDRAM per GS, so that's 512 MB of eDRAM, which might be more than PS3 has in total (main mem plus eDRAM), not to mention raw fillrate. PS3 should be geared more toward pixel shading (maybe), but at filling raw micro-polygons, GSCube might outdo PS3 in certain conditions. Then again, perhaps not in the real world, because of PS3's advantage in eDRAM bandwidth and local storage / caches.

We're not talking about the 64xPS2 version of GSCube though. Did that ever go into production or get used?
 
From my understanding, the 1 TFLOPS number is for real FLOPS, not the regular NV BS.

Yeah. I just call them Nvflops.

PS2 does 6.2 GFLOPS in theory. I don't know how real Sony's numbers are, but they are probably a lot more realistic than Nvidia's. Remember, Nvidia rated the Riva 128 (NV3) at 16 GFLOPS and the GeForce 256 (NV10) at 50 GFLOPS, which is LoL-yeah-riiiiight territory. Maybe they count other things, like the flops for non-T&L stuff, I dunno.


The 1 TFLOPS of PS3's version of CELL should be more real than Nvidia's claims of 116 or 76 GFLOPS for GF3/NV2A/GF4-class GPUs.
 
What I'm worried about is all this FLOPS talk.... we already have GPUs that theoretically push hundreds of GFLOPS, but those GFLOPS are for shader calculations and all that... I mean, people were saying the XGPU was pushing what, 80 GFLOPS????

XGPU's original rating, when it was still going to be 300 MHz, was 140 GFLOPS. When it was brought down to 250 MHz, it was at 120 GFLOPS or something like that. Then, at its final clock speed of 233 MHz, it was rated at 116 GFLOPS. I think NV later brought it down to 80 GFLOPS for some reason I don't know. I think the original GF3/NV20 was rated at 76 GFLOPS.

Nvidia's GFLOPS ratings, or 'Nvflops' as I like to call them, are pretty wack.
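
For what it's worth, any theoretical peak rating is just ( FP ops counted per clock ) x ( clock ), so you can back out what accounting each quoted figure implies. A quick sketch, with the caveat that this is only implied arithmetic, not Nvidia's published breakdown:

```python
# Any theoretical peak rating is just ( FP ops counted per clock ) x ( clock ),
# so we can back out what each quoted figure implies. The "ops per clock" this
# prints is only the implied accounting, not Nvidia's published breakdown.

quoted_ratings = [(300, 140), (250, 120), (233, 116)]   # ( MHz, GFLOPS ) from above

for mhz, gflops in quoted_ratings:
    ops_per_clock = gflops * 1000.0 / mhz
    print(f"{gflops} GFLOPS @ {mhz} MHz  ->  ~{ops_per_clock:.0f} FP ops counted per clock")
```

Notice that the implied per-clock figure is not even consistent across the three quotes, which is part of why these ratings feel so hand-wavy.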
 
calculating the shading of each micro-polygon like you would flat-shade a regular polygon in current 3D engines ( dot product between light vector and micro-polygon normal )?

Well, even if it is only flat shaded, the shading depends on the shader they used. If you used the standard equations, the visuals will just look like today's games.
 
Another big investment from Sony into Cell?

Sony Corp. gained 220 yen, or 6 percent to 3,890, its biggest gain since July 2. It was the second-most active stock by value on the Tokyo Stock Exchange's first section. The company plans to invest more than $1.7 billion to produce the chips used in home networks, Barron's said. The technology, known as CELL, won't be readily available for at least two years, Barron's said.

Sony said in April it will invest that amount over the next three years to make specialized chips for the company's next-generation computer entertainment system and other products.

http://quote.bloomberg.com/apps/news?pid=10000101&sid=a2nnaVw_v_3A&refer=japan

Is this the old investment, or an added investment?

Fredi
 
McFly said:
Another big investment from Sony into Cell? ... The technology, known as CELL, won't be readily available for at least two years, Barron's said. ... Is this the old investment, or an added investment?

Everything I've been hearing points to a 2006 PS3. If it's late 2006, I think it will come with Blu-ray. But it doesn't seem that the Cell chips will be ready in time. I wonder if something is wrong at the plant.
 
To me it sounds like an early 2006 release. (2003 + at least 2 years --> end of 2005 or beginning of 2006)

Fredi
 
jvd said:
But it doesn't seem that the Cell chips will be ready in time. I wonder if something is wrong at the plant.

What made you think that? Any link, any official statement?
 
jvd said:
Everything I've been hearing points to a 2006 PS3. If it's late 2006, I think it will come with Blu-ray. But it doesn't seem that the Cell chips will be ready in time. I wonder if something is wrong at the plant.



Huh? Where did THAT come from....?
 
V3 said:
calculating the shading of each micro-polygon like you would flat-shade a regular polygon in current 3D engines ( dot product between light vector and micro-polygon normal )?

Well, even if it is only flat shaded, the shading depends on the shader they used. If you used the standard equations, the visuals will just look like today's games.

Of course, but I was taking a simplified scenario as I could not understand at first why they were talking about the micro-polygon not having a normal...
 
Well, after a two-week vacation I'm back. I'll have to read through all the latest threads on the forum.

Until then. Ta-ta.
 