RSX = Nvidia 7900???

Suppose that makes sense if they're doing a die shrink that'll give them better yields.

I wonder whether a chip such as RSX, which will be produced on a far larger scale, will keep the redundancy anyway for even better yields?
 
Asher said:
Can RSX's framebuffer exist outside of the GDDR3 RAM? I understand it can use Cell to access XDR RAM, but there's more latency in that, and it would require even more significant reworking of the core logic to do so.

From what I understood, the framebuffer must exist in the GDDR3 RAM pool, while RSX can access all other kinds of data (vertex, pixel, etc.) from both RAM pools.

So for things like anti-aliasing, throwing around a figure like 48 GB/s becomes a red herring.
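
For what it's worth, the back-of-envelope arithmetic behind that 48 GB/s figure, plus a rough illustration of what a multisampled framebuffer could consume, looks like this (a sketch only; the bus widths and clocks are the publicly announced PS3 numbers, while the AA workload assumptions, overdraw, sample depth and frame rate, are made up purely for illustration):

```c
#include <stdio.h>

int main(void)
{
    /* Publicly announced PS3 figures: 128-bit GDDR3 at 1.4 GT/s,
       64-bit XDR at 3.2 GT/s */
    double gddr3 = 16 * 1.4;   /* bytes per transfer x GT/s = 22.4 GB/s */
    double xdr   = 8 * 3.2;    /* = 25.6 GB/s */
    printf("combined: %.1f GB/s\n", gddr3 + xdr);   /* 48.0 */

    /* Illustrative 720p framebuffer traffic with 4x MSAA, assuming
       32-bit colour + 32-bit Z per sample, 2x overdraw, read+write, 60 fps */
    double samples = 1280.0 * 720.0 * 4;
    double traffic = samples * (4 + 4) * 2 * 2 * 60 / 1e9;
    printf("AA framebuffer traffic: ~%.1f GB/s\n", traffic);
    return 0;
}
```

Even on generous assumptions, the interesting question is not the 48 GB/s total but whether that framebuffer traffic is confined to the 22.4 GB/s GDDR3 pool, which is the point of the red-herring argument above.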

It would not make much sense to have the frame buffer in Cell's main memory, that's for sure, but like you said, vertices, textures, etc. can be streamed across.

There has been some talk of rendering objects into different buffers, which might change the traditional approach, but I don't know much about that.

If you move your vertex and texture reads off the GDDR3 bus, then of course you can consider the full 48 GB/s of total bandwidth when it comes to anti-aliasing.

I don't think the GDDR3 will be used for the frame buffer only, as that would leave a lot of that memory empty. So some other data will reside there too, of course.
 
Of course, I'm not saying GDDR3 is only for the framebuffer, but I am saying it's my understanding that GDDR3 is the only pool of RAM where the framebuffer can reside. Such a restriction surely has a big impact on anti-aliasing.

(Disclaimer: I'm from the realm of CPUs and I'm still learning about the GPU side)
 
Hey, if you want to compare PC performance: I don't think today's computers have RAM with more than 10 GB/s; PC3200 is at 6.4 GB/s in dual channel. So that would leave RSX with 38 GB/s, which isn't that much less than the 7800 GTX's 41.6 GB/s.
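
The arithmetic behind those figures, sketched out (the DDR numbers are the standard PC3200 spec; the subtraction simply mirrors the poster's reasoning, it isn't how a shared bus actually gets partitioned):

```c
#include <stdio.h>

int main(void)
{
    /* PC3200 / DDR400: 64-bit (8-byte) module at 0.4 GT/s = 3.2 GB/s */
    double per_channel = 8 * 0.4;
    printf("PC3200 dual channel: %.1f GB/s\n", 2 * per_channel);  /* 6.4 */

    /* The poster's comparison: set aside ~10 GB/s of the PS3's combined
       48 GB/s for Cell, and compare the remainder with the 7800 GTX */
    printf("left for RSX: %.1f GB/s vs 41.6 GB/s (7800 GTX)\n", 48.0 - 10.0);
    return 0;
}
```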
 
Asher said:
Of course, I'm not saying GDDR3 is only for the framebuffer, but I am saying it's my understanding that GDDR3 is the only pool of RAM where the framebuffer can reside. Such a restriction surely has a big impact on anti-aliasing.

I think dozens of posts have already covered the whole RSX bandwidth and AA issue, so I don't see what more I can add. Enabled AA @ 1280 resolution is not going to be an issue. Every title should have it.
 
Asher said:
Of course, I'm not saying GDDR3 is only for the framebuffer, but I am saying it's my understanding that GDDR3 is the only pool of RAM where the framebuffer can reside.

I think it can reside in either pool, but you'd probably want to keep it in GDDR3, I guess.

People have suggested splitting the buffer, putting your z-buffer in one pool and your colour buffer in another, but I'm not sure how desirable or necessary that would be.
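
To give a feel for what such a split might buy, here's a rough sketch of where per-frame traffic could land with the z-buffer in XDR and the colour buffer in GDDR3 (every workload number here, overdraw, frame rate, access pattern, is an assumption made purely for illustration):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical 720p frame, no MSAA: 4 bytes colour, 4 bytes Z per pixel */
    double pixels = 1280.0 * 720.0;
    double fps    = 60.0;

    /* Assumed access patterns: Z is read and written per fragment with
       2x overdraw; colour is written per fragment plus one display scanout */
    double z_traffic      = pixels * 4 * 2 * 2 * fps / 1e9;      /* XDR   */
    double colour_traffic = pixels * 4 * (2 + 1) * fps / 1e9;    /* GDDR3 */

    printf("Z traffic (XDR):        ~%.2f GB/s\n", z_traffic);
    printf("colour traffic (GDDR3): ~%.2f GB/s\n", colour_traffic);
    return 0;
}
```

The absolute numbers are tiny without MSAA; the point is just that a split moves the Z read-modify-write stream, the heavier of the two in this sketch, off the pool that would otherwise also serve the colour writes.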

As for the impact of all this on AA, I think it'll be alright. But every game is different, with different requirements, so it is tough to speculate in general terms.
 
I think this fits well with the two HDMI connectors of the PS3. Probably some synergy at work.
Also, the G71 includes two built-in dual-link TMDS transmitters, so GeForce 7900 cards can power a pair of high-def digital displays without the need for an external TMDS transmitter. ATI's Radeon X1800 and X1900 cards have this feature already, and the G71 does well to follow suit.
http://techreport.com/reviews/2006q1/geforce-7600-7900/index.x?pg=1
 
I find it interesting that the G71 transistor count is now down to 278 million.

Assuming the RSX stays similar, that gets it pretty close to Xenos' 257 million.

In fact, given the many rumors that ATI undercounts and doesn't count cache as transistors, it wouldn't surprise me if the Xenos shader core die is larger than RSX, if someone were to measure the two with a ruler.

Of course, that doesn't mean a lot. G71 can keep up with X1900 in most games.
 
Edge said:
Enabled AA @ 1280 resolution is not going to be an issue. Every title should have it.

As soon as you want to have an FP (floating-point) framebuffer, the above is no longer true.
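
A quick size comparison shows why: an FP16 target doubles the per-pixel colour footprint before multisampling even enters the picture (the byte counts are standard format sizes; the 4x MSAA factor is just an example):

```c
#include <stdio.h>

int main(void)
{
    double pixels = 1280.0 * 720.0;

    /* Per-pixel colour footprint: RGBA8 (4 bytes) vs FP16 RGBA (8 bytes) */
    double rgba8_mb = pixels * 4 / (1024 * 1024);
    double fp16_mb  = pixels * 8 / (1024 * 1024);
    printf("720p colour buffer: %.1f MB (RGBA8) vs %.1f MB (FP16)\n",
           rgba8_mb, fp16_mb);

    /* With 4x MSAA every sample carries the full footprint */
    printf("with 4x MSAA:       %.1f MB vs %.1f MB\n",
           rgba8_mb * 4, fp16_mb * 4);
    return 0;
}
```

And that's before the Z/stencil samples are counted, so both the footprint and the bandwidth pressure on the GDDR3 pool scale accordingly.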
 
Titanio said:
I think it can reside in either pool, but you'd probably want to keep it in GDDR3, I guess.

Yeah, considering that Cell will use some of the XDR bandwidth, you'd even lose a bit of speed if you moved the framebuffer there.

People have suggested splitting the buffer, putting your z-buffer in one pool and your colour buffer in another, but I'm not sure how desirable or necessary that would be.

Another trick, according to Faf, may be to do some heavily deferred rendering, where the GPU generates separate passes for different elements of the frame. For example, an 'albedo' channel to show textures and colors without any lighting, a diffuse channel with the lighting, a specular channel with the highlights, and then a z-depth channel, an atmospheric effects channel, and so on. This data would then be combined on the SPUs using a single formula to composite the passes into the final image. You could even break up the diffuse pass into a surface normal pass and add the lighting on the SPU, and even perform some multisample AA somehow.
The only problem I see with this is that the amount of data is several times that of a simple framebuffer ;) - but perhaps it could be streamed through the FlexIO bus to Cell, without storing the full images for each pass.
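
A per-pixel composite along those lines might look something like this (purely a sketch of the idea; the pass set and the blend formula are the illustrative ones from the paragraph above, not anything Faf or the hardware actually prescribes):

```c
#include <stdio.h>

/* One pixel from each rendered pass (sketch; layouts are assumed) */
typedef struct { float r, g, b; } rgb_t;

/* Composite the deferred passes roughly as described above:
   lit = albedo * diffuse + specular, then fade toward the atmosphere
   colour by a fog factor derived from the z-depth pass. */
static rgb_t composite(rgb_t alb, rgb_t dif, rgb_t spec, rgb_t atmo, float fog)
{
    rgb_t o;
    o.r = (alb.r * dif.r + spec.r) * (1.0f - fog) + atmo.r * fog;
    o.g = (alb.g * dif.g + spec.g) * (1.0f - fog) + atmo.g * fog;
    o.b = (alb.b * dif.b + spec.b) * (1.0f - fog) + atmo.b * fog;
    return o;
}

int main(void)
{
    /* A single made-up pixel, just to exercise the formula */
    rgb_t alb = {0.8f, 0.6f, 0.4f}, dif  = {0.9f, 0.9f, 0.8f};
    rgb_t spec = {0.1f, 0.1f, 0.1f}, atmo = {0.5f, 0.6f, 0.7f};
    rgb_t px = composite(alb, dif, spec, atmo, 0.2f);
    printf("%.2f %.2f %.2f\n", px.r, px.g, px.b);
    return 0;
}
```

An SPU version would presumably run this over small tiles DMA-streamed in from each pass rather than over whole stored frames, which is where the streaming idea above comes in.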

As for the impact of all this on AA, I think it'll be alright. But every game is different, with different requirements, so it is tough to speculate in general terms.

I think it's pretty safe to assume that some titles will have some level of AA, but others will not. The whole system is a lot more complex than the (not exactly simple) X360, but with a few more bottlenecks as well, so there are lots of compromises for developers to choose from. The more complex the graphics get, the less likely it is that they can squeeze out the bandwidth and memory for AA...
 
It's important to realize that RSX will likely have a few redesigns over its lifetime to reduce costs. So while the first incarnation might be similar to G70, the next one could have some of the improvements of G71.
 
Xmas said:
It's important to realize that RSX will likely have a few redesigns over its lifetime to reduce costs. So while the first incarnation might be similar to G70, the next one could have some of the improvements of G71.
There are elements that slightly alter the performance characteristics between G70 and G71, and I doubt that you'd really want that on a console mid-lifecycle.

If RSX bears similarities to G71, then I should imagine that a lot of the 90nm work they have done will be ported between the two.
 
Dave Baumann said:
There are elements that slightly alter the performance characteristics between G70 and G71, and I doubt that you'd really want that on a console mid-lifecycle.
As long as performance does not go down while at the same time cost is reduced, there's no reason not to do it.
 
Xmas said:
As long as performance does not go down while at the same time cost is reduced, there's no reason not to do it.
Introducing different timings etc. may require more testing.
The main benefit of a closed console platform, from a developer's point of view, is that it remains invariant.
 
Crossbar said:
Introducing different timings etc. may require more testing.
The main benefit of a closed console platform, from a developer's point of view, is that it remains invariant.
And yet console hardware does change over time.
 
Xmas said:
And yet console hardware does change over time.

You should clarify whether you're referring to a cost benefit or a performance benefit.

Obviously all companies strive for cost benefits over time by shrinking/combining chips in the hardware.

However, there is no benefit for a console to have a better-performing chip in a later revision of the hardware. In fact, I'm sure companies work against technology to ensure that doesn't happen.
 
Xmas said:
And yet console hardware does change over time.
Sure, cost reduction etc., but I think you know what I mean.

They do not put any effort into getting higher clocks or whatever, because that does not simplify the developers' situation and may f**k up the execution of present games.
 
Edge said:
Remember Nvidia claimed RSX was a separate development, using 50 engineers, so RSX is not the 7900.
I'm almost sure the shader pipes are the same. The ROPs may be reduced, but even that doesn't look likely from the announced specs. As Dave said, it's very likely that much of the work was ported between the two.

The only things that will be different are the 128-bit memory controller and the FlexIO interface. It is possible, IMO, for these to be on the G71 chip without us knowing (since they shouldn't take up much room), but since G71 isn't manufactured at the same place as RSX will be (right?), it doesn't make much sense.
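
For scale, those two interfaces work out roughly as follows (the GDDR3 figure follows from the announced 128-bit bus and clock; the FlexIO split is the commonly cited 20 GB/s + 15 GB/s figure for the Cell-RSX link):

```c
#include <stdio.h>

int main(void)
{
    /* RSX's local 128-bit GDDR3 interface at 700 MHz (1.4 GT/s) */
    double gddr3 = (128.0 / 8.0) * 1.4;          /* = 22.4 GB/s */

    /* FlexIO link between Cell and RSX, as commonly cited:
       20 GB/s one way plus 15 GB/s the other */
    double flexio = 20.0 + 15.0;

    printf("GDDR3 controller: %.1f GB/s\n", gddr3);
    printf("FlexIO aggregate: %.1f GB/s\n", flexio);
    return 0;
}
```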

On another note, I wonder if we'll see analysts reduce their projected cost of PS3 now that they know G71 is so tiny.
 