News about Rambus and the PS3

Vince, long product cycles are not always bad: the time Intel and HP spent defining IA-64/EPIC/IPF shows in the ISA ( quite neat IMHO ) and will help the architecture in the long run...

The same applies to Cell: the years they spent in R&D ( a nice 4-5 years [more counting IBM's previous research on similar projects] ) might mean that they remain fixed in an older mindset compared to people who start from scratch 2 years after the Cell project began, but their overall approach will end up being cleaner, more thoughtful and more future-proof than the other, as more time was spent on careful planning of the architecture and not only on pushing the performance envelope...
 
I wonder how much E-DRAM...

256 MB of external XDR memory with 32-64 MB of eDRAM on Cell, I suppose. And what about the rasterizer?

256 MB external + 32 MB eDRAM on Cell + 64 MB eDRAM on the rasterizer? Seems pretty good to me..
 
I'm going to start going back into old PS3 topics and just record and document a lot of the good replies and put them on discs. It will be interesting to look back on the things we said in 3 years and see what was true and what wasn't.
 
Panajev2001a said:
Vince, long product cycles are not always bad

Yes they are.

The time Intel and HP spent defining IA-64/EPIC/IPF shows in the ISA ( quite neat IMHO ) and will help the architecture in the long run...

This is just not true. Intel's original goal was to launch Merced in 1997. It finally hit the market in what, 2001(?) - roughly four years late - and the delay only helped to bury its acceptance. Companies such as SGI and HP scrapped their Rx000 and PA-RISC lines for it, and that has subsequently allowed companies like IBM with Power4 to jump into the fray.

John Crawford took over the joint HP-Intel team in 1994, making the total active development cycle around seven years; they predicted four at the most. This type of project is a corporation killer. David House, former Intel chief of corporate strategy, said it best when he said, "This will end up being one of the world's worst investments, I'm afraid."


If Cell doesn't make it to market in the late 2004/2005/early 2006 timeframe then STI will have lost their window. For Cell, three focused years are it - any more and the world will have passed it by.

IBM's Jim Kahle (STI leader) summed it up when he said, "We need to get this project completed before our competitors get a shot at building something similar."
 
If Cell doesn't make it to market in the late 2004/2005/early 2006 timeframe then STI will have lost their window. For Cell, three focused years are it - any more and the world will have passed it by.

IBM's Jim Kahle (STI leader) summed it up when he said, "We need to get this project completed before our competitors get a shot at building something similar."

I agree: by 2005-2006 the product needs to come out, I'm completely with you on that...

A good development cycle is one thing, a rushed one is another, and an overly long one is yet another...

I disagree on the IPF situation... Intel is pushing IPF quite adamantly, and quite successfully for such a late architecture, and the situation is not looking bad for its future...
 
Doesn’t exactly look less powerful than PS2, more like different but equal in power.

It's more general-purpose than PS2, even with 20 GFLOPS under it. Anyway, they are only getting around 1-2 fps on their REYES pipe, while getting around 80 fps on the OpenGL one.



It turns out that most REYES renderers today don't use it, because most graphics chips now have some sort of hardware AA built in, so they can save the resources for something else. The standard micropolygon size today in a REYES renderer is around one pixel.

PRMan still uses render farms; I don't think they have shifted to something like NV35 yet. You need that sampling to avoid visible aliasing, especially when things are moving fast, which in games happens often. Actually, they up the sampling to 64 for fast-moving objects.
 
V3 said:
Doesn’t exactly look less powerful than PS2, more like different but equal in power.

It's more general-purpose than PS2, even with 20 GFLOPS under it. Anyway, they are only getting around 1-2 fps on their REYES pipe, while getting around 80 fps on the OpenGL one.

IIRC they admitted that their REYES pipe was not exactly as optimized as possible... not as optimized as the OpenGL one...

For one, they were using triangles IIRC, and not subdivision surfaces or NURBS, and they were not doing deferred Dicing, Slicing and Shading: using HOS you could, for example, transform the control points, sort the patches and convert only the visible portions to micro-polygons...

They admitted they were generating lots of non-needed micro-polygons...

Also, lower the resolution to 720p or 480p and lower the level of AA ( number of samples ) and you should push the frame-rate up...

Considering PlayStation 3 should be a 1 TFLOPS class machine and that Imagine processor had 20 GFLOPS...

1-2 fps on a non-optimized REYES pipe and better than regular HDTV quality ( they were not rendering at 480p :p ) with 20 GFLOPS?

If we scaled things linearly with FLOPS rating we should get ~50-100 fps on PlayStation 3

Adding in a good amount of optimizations ( and scaling down some factors ) and stuff we should get a 30-60 fps engine on PlayStation 3 :)
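The scaling above is just back-of-the-envelope arithmetic; a tiny Python sketch makes it explicit (the GFLOPS ratings and frame rates are the thread's speculative figures, and the perfectly linear scaling is an assumption):

```python
# Back-of-the-envelope scaling of the Imagine REYES numbers to a
# hypothetical 1 TFLOPS PlayStation 3. All inputs are the thread's
# speculative figures; scaling is assumed perfectly linear with FLOPS.
IMAGINE_GFLOPS = 20.0
PS3_GFLOPS = 1000.0          # "1 TFLOPS class" speculation
imagine_fps = (1.0, 2.0)     # reported REYES frame-rate range

scale = PS3_GFLOPS / IMAGINE_GFLOPS          # 50x the FLOPS
projected_fps = tuple(f * scale for f in imagine_fps)
print(projected_fps)         # (50.0, 100.0)
```

Which is where the ~50-100 fps figure comes from, before any real-world scaling losses are factored back in.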
 
For one, they were using triangles IIRC

They were using quads, if my memory serves me well.

and not subdivision surfaces or NURBS

They used Catmull-Clark subdivision rules, so this allowed native support for B-spline and Bézier surfaces.

and they were not doing deferred Dicing 'Slicing and Shading: using HOS you could for example Transform the control points, sort the patches and convert to micro-polygons only the visible portions...

Wouldn't make a lot of difference in this case; they're rendering a teapot and a pin.

They admitted they were generating lots of non-needed micro-polygons...

True. I think they can't avoid generating them, but they propose some solutions so those micropolygons aren't shaded.

Also, lower the resolution to 720p or 480p and lower the level of AA ( number of samples ) and you should push the frame-rate up...

They were rendering a teapot and a pin at 720x720.

Still, the resulting images are about the same. I think before they go ahead with REYES, they have to wait for shaders to get long enough that a REYES-like architecture is more efficient.
 
V3 said:
For one, they were using triangles IIRC

They were using quads, if my memory serves me well.

and not subdivision surfaces or NURBS

They used Catmull-Clark subdivision rules, so this allowed native support for B-spline and Bézier surfaces.

and they were not doing deferred Dicing, Slicing and Shading: using HOS you could, for example, transform the control points, sort the patches and convert only the visible portions to micro-polygons...

Wouldn't make a lot of difference in this case; they're rendering a teapot and a pin.

They admitted they were generating lots of non-needed micro-polygons...

True. I think they can't avoid generating them, but they propose some solutions so those micropolygons aren't shaded.

Also, lower the resolution to 720p or 480p and lower the level of AA ( number of samples ) and you should push the frame-rate up...

They were rendering a teapot and a pin at 720x720.

Still, the resulting images are about the same. I think before they go ahead with REYES, they have to wait for shaders to get long enough that a REYES-like architecture is more efficient.

480p is lower than 720x720 :p... seriously, I wonder how big the micro-polygon size was and how many samples they were generating for things like AA and such...

I will look into that paper to extract more info, since my memory is not serving me well...

They might have been using quads, but this still means the input stream was larger than it should have been, and we can always use less wasted bandwidth...

But if they were rendering teapots...

True. I think they can't avoid generating them, but they propose some solutions so those micropolygons aren't shaded.

Deferred shading...

Yes, we need longer shaders to show the benefit of such an architecture, but these shaders are coming...

2nd-generation PlayStation 3 software will be out in 2006-2007, and by that time I think GPU manufacturers will have pushed average shader length a bit further than what NV35 and R300 can currently do...

I think that a micro-polygon based renderer has a place in PlayStation 3's lifetime...

Tons of simple primitives ( single textured or flat shaded ) pushed to a streamlined Rasterizer ( the Pixel Engine part of the GPU ) in large quantities by a monster CPU with tons of local bandwidth...

If you had to think about the requirements for a micro-polygon based real-time renderer in terms of Hardware it would run on and then you think about PlayStation 3 you see that they... kinda... match :)

Think about what PlayStation 2 likes best: tons of simple polygons... it doesn't seem to me that GSCube pushed for more complex polygons ( more texture layers, etc... ) or that PlayStation 3 is evolving in a much different direction... the comments made by ATI's Dave Orton make more sense in this light...
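The split-and-dice idea behind a micro-polygon renderer can be sketched with a toy recursion: split a patch until its screen bound is around a pixel, then emit it as a micropolygon. This is only an illustration of the general REYES approach - the flat bounding-box "patch" and the stop length here are invented for the sketch, not taken from any Sony or Stanford code:

```python
# Toy REYES-style "split until small, then dice" loop. A patch here is
# just a screen-space bound (x, y, w, h); a real renderer would carry
# control points, dice into shaded grids, and sample them.

def split(patch):
    """Split a (x, y, w, h) bound into four quadrants."""
    x, y, w, h = patch
    hw, hh = w / 2, h / 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

def dice(patch, stop_len=1.0):
    """Recursively split until the bound is near stop_len pixels,
    then emit one micropolygon per bound."""
    x, y, w, h = patch
    if max(w, h) <= stop_len:
        return [patch]  # small enough: this is a micropolygon
    grids = []
    for sub in split(patch):
        grids.extend(dice(sub, stop_len))
    return grids

# A 16x16-pixel patch diced to ~1-pixel micropolygons yields 256 of
# them; coarsening the stop length cuts the count (and the quality).
print(len(dice((0, 0, 16, 16), stop_len=1.0)))  # 256
print(len(dice((0, 0, 16, 16), stop_len=2.0)))  # 64
```

Deferred dicing, as discussed above, would simply cull patches whose bounds are off-screen or occluded before this loop ever runs on them.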
 
Is this new kind of memory the PS3 is getting 256 MB of total memory, or is it 512 MB of total system memory?

Second thing: is the PS3 going to be more powerful than the GSCube?
 
This report of 256 MB of RAM isn't by any means final or confirmed. But if it IS true, it means external memory, not total system memory. You still have to factor in the eDRAM.

PS3 obviously won't be as good as GSCube resource-wise; however, I do see PS3 coming up with visuals as good as it did.

I would have to look at the GSCube more closely to give a better answer to your question.
 
I think IQ-wise the GSCube can afford bigger textures and much higher AA thanks to having more memory available, but processing-power-wise the Cell set-up we have in that patent ( 1 TFLOPS class machine ) surpasses the regular GSCube...
 
If you look at the GSCube specs and compare them to the speculated PS3 specs, the PS3 beats the GSCube in most of the categories.
 
seriously, I wonder how big the micro-polygon size was and how many samples they were generating for things like AA and such...

They have a stop length in their subdiv algo. I think for the pin they used a stop length of 1 pixel, which generates about 1+ million micropolygons. Increasing the stop length to 1.5 or 3 will cut the number of micropolygons, at the expense of quality.
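A rough way to sanity-check those figures: if dicing tiles the surface with about one micropolygon per stop_len x stop_len pixel area, the count falls off as the inverse square of the stop length. Both the inverse-square model and the ~1M baseline count are assumptions for illustration, not numbers taken from the paper:

```python
# Rough model of how the dicing stop length affects micropolygon
# count: ~one micropolygon per stop_len^2 pixels of area means the
# count scales as 1/stop_len^2. Baseline (~1M at stop length 1) is
# the figure quoted above; the model itself is an assumption.
def micropolygon_count(base_count, stop_len):
    """Estimated count, given a baseline measured at stop length 1."""
    return base_count / (stop_len ** 2)

base = 1_000_000  # ~1M+ micropolygons at a 1-pixel stop length
print(round(micropolygon_count(base, 1.5)))  # 444444 (roughly half)
print(round(micropolygon_count(base, 3.0)))  # 111111 (about a ninth)
```

Under this model, going from a 1-pixel to a 1.5-pixel stop length does indeed roughly halve the micropolygon count.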

But if they were rendering teapots...

Yeah, teapots and a bowling pin, and an armadillo with around 1200 floating-point ops per fragment.

Deferred shading...

Don't need REYES for this.


I think that a micro-polygon based renderer has place in PlayStation 3's lifetime...

Well, if it can't operate per-fragment, it needs to, to be competitive with Xbox 2 and GC2.

Tons of simple primitives ( single textured or flat shaded ) pushed to a streamlined Rasterizer ( the Pixel Engine part of the GPU ) in large quantities by a monster CPU with tons of local bandwidth...

Shaders will use a lot of textures - not just for color, but for other things as well - to compute the final fragments, either directly or from micropolygons. There isn't going to be a lot of single-textured stuff.

If you had to think about the requirements for a micro-polygon based real-time renderer in terms of Hardware it would run on and then you think about PlayStation 3 you see that they... kinda... match

That Imagine processor is a lot like a single PE in the BE.

Think about what PlayStation 2 likes best: tons of simple polygons... it doesn't seem to me that GSCube pushed for more complex polygons ( more texture layers, etc... ) or that PlayStation 3 is evolving in a much different direction... the comments made by ATI's Dave Orton make more sense in this light...

Small polygons don't equate to micropolygons and REYES-style rendering. And a simple polygon doesn't mean you don't need a lot of texture to compute it.
 
Another speculation

By Dennis Day, News Editor
Published July 15, 2003 -- 10:10 am CDT

While Sony refuses to officially discuss plans for their next console, sources close to the company have begun to release tidbits of new information. Specifically, the console is expected to incorporate 4 Yellowstone XDR-DRAM memory chips totaling 256MB with a combined bandwidth capacity of 25.6GB per second. Sony has reportedly signed deals with Elpida, Toshiba, and Samsung to manufacture the memory components beginning in early 2005. Within its first year of production, Sony's trio of memory suppliers are expected to produce 20 million XDR-DRAM chips. By 2006 that number is expected to rise to 30 million. As previously reported, the PlayStation 3 is also expected to incorporate Sony's CELL processor technology which is being cooperatively developed with Toshiba and IBM. Testing of the new hardware is expected to begin in late 2003.
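The article's totals are easy to sanity-check per chip. Note that the 3.2 GHz effective data rate over a 16-bit interface is the standard first-generation XDR configuration and is assumed here; the article itself only gives the combined figures:

```python
# Sanity check on the reported XDR figures: 4 chips, 256 MB total,
# 25.6 GB/s combined bandwidth. The 3.2 GHz effective rate on a
# 16-bit (2-byte) interface is the standard first-gen XDR spec,
# assumed here rather than stated in the article.
chips = 4
total_mb, total_gb_s = 256, 25.6

per_chip_mb = total_mb / chips        # 64.0 MB per chip
per_chip_gb_s = total_gb_s / chips    # 6.4 GB/s per chip

# 3.2 GHz effective * 2 bytes wide = 6.4 GB/s per chip
assert abs(3.2 * 2 - per_chip_gb_s) < 1e-9
print(per_chip_mb, per_chip_gb_s)     # 64.0 6.4
```

So the report is internally consistent: four 64 MB devices at 6.4 GB/s each.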
 