Some questions concerning possible changes and/or outcomes of RSX and their relevance

MBDF

Newcomer
hi, I'm wondering what advantage there would be to adding more ROPs to the RSX. Also, wouldn't upping the memory connection to 256-bit be seen as at least somewhat leveling the playing field in terms of image quality vs. the next Xbox? It could also be an advantage in that it would give developers more bandwidth for Cell<>RSX communication.

any answers would be appreciated, thanks
 
The way the G70 is, I don't think simply adding more ROPs will do much. You need to add more VS and PS shaders and other things as well, but I really don't see it happening with the limited bandwidth it has.



As for a 256-bit bus: that is expensive. Sure, it will double bandwidth, but it will drive costs up greatly. There is a reason why we haven't seen graphics boards under the $200 mark (I don't believe we've seen any under $300, actually) with 256-bit busses. They are expensive. We've only seen high-end boards that slowly filtered down to the mid/low end before being phased out, like the 9700 Pros / 9800 Pros, despite 256-bit busses being in use at the high end for about four years now.
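
To put a number on the "double bandwidth" bit, here's a rough back-of-envelope sketch. I'm assuming GDDR3 at 700 MHz (1.4 Gbps per pin), which is where the ~22 GB/s figure floating around seems to come from:

# Back-of-envelope memory bandwidth. The 1.4 Gbps-per-pin data rate
# (GDDR3 at 700 MHz) is an assumption, not a confirmed spec.
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    # total bits per second across the bus, converted to gigabytes
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gb_s(128, 1.4))  # ~22.4 GB/s on a 128-bit bus
print(bandwidth_gb_s(256, 1.4))  # ~44.8 GB/s on a 256-bit bus, i.e. double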
 
Thanks for the replies. However, I think the possibility of HDR AND 4xAA should be taken into account as well (at 720p)... would 22 GB/s be enough, and still allow the freedom to use bandwidth for communication between Cell and RSX?
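
Here's a very rough sketch of what that kind of setup might eat in framebuffer traffic alone; the bytes-per-touch, overdraw and frame-rate numbers are just illustrative assumptions on my part:

# Rough framebuffer traffic at 720p with FP16 HDR and 4x multisampling.
# Overdraw, frame rate and per-touch byte counts are illustrative guesses.
width, height = 1280, 720
samples = 4                     # 4x MSAA
bytes_per_touch = 16            # 8 B FP16 colour write + 8 B depth read/write
overdraw = 2.5                  # average times each pixel gets touched
fps = 60

bytes_per_frame = width * height * samples * bytes_per_touch * overdraw
print(round(bytes_per_frame * fps / 1e9, 1), "GB/s")   # ~8.8 GB/s, pre-compression

If that ballpark is anywhere near right, a fair chunk of the 22 GB/s would be left over for textures and Cell<>RSX traffic, though blending, texture reads and the lack of any compression could easily eat into the rest.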

The way I see it, 256-bit is a necessity for the dreams of Ken Kutaragi to become a reality... and for creative developers to really stretch their wings.

Hopefully there will be some sort of bandwidth-saving features in RSX; otherwise I think the true potential is lost.
 
Like Pana said, it's all part of Sony's evil plan.
I say the interface is 256-bit, but half of it is reserved for Ken's obsession with including a mini-PC inside the PS3 which runs completely independently from games.
That's why the PS3 will also cost at least $600 when it launches.
 
Hey, I'm all for that. I'd pay $600 just to have that black mofo stand there on my desk and look all shiny and badass...

In all seriousness, though: the more pixel shading going on in a game, the fewer pixels are going to get rendered every clock cycle, cutting down on the bandwidth needed to write them. And 22 GB/s is already enough to support quite a bit of more conventional rendering too, for early games that don't do much shading.
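
As an illustration of that trade-off (the unit counts are the public G70-ish numbers, the 550 MHz clock is the rumoured one, and the cycles-per-pixel values are made up):

# Illustrative only: longer pixel shaders throttle the pixel rate, and with
# it the framebuffer write bandwidth the ROPs actually demand.
pixel_pipes = 24
rops = 16
core_clock_hz = 550e6           # rumoured RSX clock
bytes_per_pixel_write = 4       # 32-bit colour, no blending

for shader_cycles in (1, 5, 10, 20):
    # pixel output is capped either by the ROPs or by shader throughput
    pixels_per_sec = min(rops * core_clock_hz,
                         pixel_pipes * core_clock_hz / shader_cycles)
    gb_s = pixels_per_sec * bytes_per_pixel_write / 1e9
    print(shader_cycles, "cycles/pixel ->", round(gb_s, 1), "GB/s of colour writes")

Once shaders get past a handful of cycles per pixel, the write traffic drops well under 22 GB/s.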

Also, it's a bit early to worry about this when the machine isn't going to be released for another eight or nine months or so...
 
Totally on the side, but what do people think about our finding out what the RSX is about sometime in the near future (around TGS)? My mind tells me it might be a while yet, but my heart asks: with the 360 in full production by that time, and with the chip supposedly taped out by then as well, why not throw us a bone?
 
We've been in the 90nm "era" a pretty long time, and I assume RSX is very likely a G70, maybe with some tweaked stuff.
So going to 65nm rather soon makes the chip pretty small, and 45nm pretty tiny over the PS3's lifespan; it would be pretty hard fitting pads/pins on a core that small.
 
If there are any 'secrets' to RSX, we should hear about them a little while before November, to penetrate the gamer media and psyche and disrupt appreciation for the XB360. I would expect an announcement at TGS if that were the case, as Sony again tries to draw attention away from XB360 with 'wait a bit longer and get this amazing tech!'.
 
overclocked said:
We've been in the 90nm "era" a pretty long time, and I assume RSX is very likely a G70, maybe with some tweaked stuff.

So going to 65nm rather soon makes the chip pretty small, and 45nm pretty tiny over the PS3's lifespan; it would be pretty hard fitting pads/pins on a core that small.

Would it be possible in the near future to decrease the size of the pins/pads, and what of the future of 256-bit videocards?

As an aside, I'd like to throw something out that has been dancing in my head lately. One thing I've noticed is that the 136 shader ops figure for the RSX has not been mentioned anywhere concerning the G70. Now, excluding the free normalize, I see two possible alterations of the G70 becoming the RSX:

1) 28 pixel shaders and 8 vertex shaders (with 4 pixel shaders around for redundancy)

or

2) 32 pixel shaders and 4 vertex shaders (which would also agree with the strange figure at the press conference concerning 52 dot products which Jaws mentioned)

Now perhaps Sony would go this route because 4 vertex shaders at 550 MHz would still allow them to reach 500 million vertices/triangles per second, which incidentally might be the limiting setup rate, and I suppose Cell would be handling things such as LOD.
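
Rough math on that vertex rate, assuming a simple transform costs about 4 cycles per vertex (that cost is an assumption on my part):

# Sketch of the vertex-rate claim: 4 vertex units at the rumoured 550 MHz.
vs_units = 4
clock_hz = 550e6
cycles_per_vertex = 4           # one 4x4 matrix transform, assumed

vertices_per_sec = vs_units * clock_hz / cycles_per_vertex
print(vertices_per_sec / 1e6, "million vertices per second")   # 550.0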

anyhoo, just some stream of thought on possible outcomes. I'm somewhat of a newbie when it comes to video card tech, so thanks for bearing with me.
 
Sorry, I meant 28 pixel shaders and 12 vertex shaders; edit doesn't seem to be working for me right now.

btw, what advantages would there be in having more transform rate than setup rate?
 
As far as I know you get the same mentioned 136 instructions with the G70:

PS: 5 inst × 24 pipes = 120, plus VS: 2 inst × 8 units = 16, for 136 instructions total
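
Just to make the breakdown explicit, and to see how the configurations speculated above would shift the total (the 5-ops-per-pixel-pipe and 2-ops-per-vertex-unit figures are the commonly quoted ones):

# Commonly quoted per-unit figures: 5 ops per pixel pipe, 2 per vertex unit.
def shader_ops(ps_units, vs_units, ps_ops=5, vs_ops=2):
    return ps_units * ps_ops + vs_units * vs_ops

print(shader_ops(24, 8))    # G70 as shipped: 120 + 16 = 136
print(shader_ops(28, 12))   # speculated config 1: 140 + 24 = 164
print(shader_ops(32, 4))    # speculated config 2: 160 + 8 = 168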

Maybe nVidia didn't want to tell anything about any eventual upgrades, as the whole thing seemed to me like PR hype for the 7800 series, with the Luna demo and all.

To me RSX is VERY likely a G70 at a higher clockspeed plus the FlexIO interface, which I guess is the "radical" change Kutaragi talks about.

A quote from David Kirk: "The two products share the same heritage, the same technology. But RSX is faster."
 
Gotcha, it's just that a lot of people don't count the free normalize, so I was tweaking accordingly, if Sony and/or Nvidia wanted or had planned to maintain the 136 figure.
 
When David Kirk says faster, what does he mean? Is he referring to clock speed or memory speed?

Is it just like overclocking a videocard for faster frames per second? I mean, the way videocards work, you would need a massive overclock to make it even worth it. I know people still do it for a few fps.
 
skilzygw said:
When David Kirk says faster, what does he mean? Is he referring to clock speed or memory speed?

Is it just like overclocking a videocard for faster frames per second? I mean, the way videocards work, you would need a massive overclock to make it even worth it. I know people still do it for a few fps.

I'm fairly sure he means the clockspeed of the core, as the memory is slower.
Both nVidia/Sony and ATI/MS spread FUD, so I don't take anything they say as fact.
 