Could be more RSX info...

At E3 they were all saying there was no way of doing it properly. Now they are probably going to use the line that they do it right.
 
Dr. Nick said:
At E3 they were all saying there was no way of doing it properly. Now they are probably going to use the line that they do it right.

No, Nvidia actually had a unified pipeline before ATI. But when they tested it, they said the performance compared to separate vertex/pixel pipes is a lot lower, so it just isn't worth it.

They never actually ruled it out, just that at the time there was no point due to the performance decrease.
 
ATI has been working on a unified architecture for quite a while now, FYI. The 360's R500 and the upcoming R600 are both the evolution of prior work done on the unified-shader R400 (which never saw the light of day); R500 isn't the starting point of this architecture.
 
!eVo!-X Ant UK said:
No, Nvidia actually had a unified pipeline before ATI.

I don't think so.

But when they tested it, they said the performance compared to separate vertex/pixel pipes is a lot lower, so it just isn't worth it.

No, their internal estimates led them to believe that unification was not "worth it", so they never designed such a part. Not that they actually tested it.
 
Anyway it certainly does seem that the patent deals with a unified architecture, unless I'm reading Fig. 2 all sorts of wrong.

PS - Devs, I'm noting your silence on the GDDR-3 issue!
 
xbdestroya said:
Brimstone's right though that the RSX seemingly has much to gain from going XDR vs GDDR-3. We've been told that GDDR-3 will be what it gets, but if there were any way to have changed that in the last six months I would hope that Sony would have pursued it.

I've personally taken Sony's announcements at face value and am expecting GDDR-3 (though perhaps clocked higher than the original 700MHz), yet on a Rambus bus the pin-for-pin value of XDR seems decisively clear, to the degree that I wonder what the constraining factors in not going that route would be.

Sony seems comfortable enough with Rambus memory, now and in the past, that I'd be surprised if it were simply the expected XDR:GDDR-3 cost differential, but who knows...

PS - since it's 'public' info might as well take a shot... can any of the PS3 devs here deny or confirm that RSX is still on GDDR-3?

I'd hope for it too, along with increased memory (devs need to be more demanding).
Shifty Geezer said:
Wasn't PSX very generic?
That RSX is more than just a 7800 chip with a different interface is plausible and IMO likely. That it's insanely more powerful than the top-of-the-line GPU SLI'd 4 ways isn't.
A few million 'spare' transistors isn't going to account for a 4x increase over G70's performance. Extra features are likely, but super-uber performance isn't. A quad for redundancy, adding nothing to performance but helping yields, is one possibility, given we've been told redundancy will feature in RSX for that very purpose.
4x SLI does not translate to a 4x/400% improvement. Maybe ~2-2.5x. If it's 32 pipes and is boosted to ~600-625MHz, it could offer such a performance boost over a vanilla 7800, or more if it had custom G80-ish improvements (threading), especially if it's paired with a faster XDR-based setup.
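
Purely for illustration, here's a rough back-of-envelope sketch in Python of the two numbers being argued over: the pin-for-pin bandwidth gap between 700MHz GDDR3 and PS3-style XDR that the quote above alludes to, and the pipes × clock estimate against a vanilla 7800. The 7800 GTX baseline (24 pixel pipes at 430MHz) and the per-pin data rates are public figures; the 32-pipe, ~625MHz RSX and the example bus widths are the speculative assumptions from the posts, not confirmed specs.

```python
# Back-of-envelope sketch of the speculation above -- NOT confirmed RSX specs.

# 1) Raw pixel-shader throughput ~ pipes * clock (ignores any architectural changes).
g70_pipes, g70_clock_mhz = 24, 430      # vanilla 7800 GTX baseline (public spec)
rsx_pipes, rsx_clock_mhz = 32, 625      # speculative figures from the post above

throughput_ratio = (rsx_pipes * rsx_clock_mhz) / (g70_pipes * g70_clock_mhz)
print(f"Speculative RSX vs vanilla 7800 raw throughput: ~{throughput_ratio:.2f}x")  # ~1.94x

# 2) Pin-for-pin bandwidth: 700MHz GDDR3 moves 1.4 Gbit/s per data pin (double data rate),
#    while XDR's octal data rate gives 3.2 Gbit/s per pin from a 400MHz clock.
gddr3_gbps_per_pin = 0.700 * 2          # GDDR3, double data rate
xdr_gbps_per_pin = 0.400 * 8            # XDR, octal data rate (PS3-style)
print(f"Per pin: GDDR3 {gddr3_gbps_per_pin:.1f} Gbit/s vs XDR {xdr_gbps_per_pin:.1f} Gbit/s")

# Example (assumed) bus widths to show totals:
print(f"128-bit GDDR3 bus: {gddr3_gbps_per_pin * 128 / 8:.1f} GB/s")  # 22.4 GB/s
print(f" 64-bit XDR bus:   {xdr_gbps_per_pin * 64 / 8:.1f} GB/s")     # 25.6 GB/s
```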
 
zidane1strife said:
4x SLI does not translate to a 4x/400% improvement. Maybe ~2-2.5x. If it's 32 pipes and is boosted to ~600-625MHz, it could offer such a performance boost over a vanilla 7800, or more if it had custom G80-ish improvements (threading), especially if it's paired with a faster XDR-based setup.

4x SLI could approach a 4x power increase in a console. Even on an incredibly unoptimised PC, a very GPU-bound game can achieve 80-90% more performance on a dual-SLI setup.

I would estimate between 300-350% in a good environment.

P.S. The power reference for this is a quad GTX512 system, with a GTX512 being around double the power of a vanilla 7800 (see the quick sketch below).

An underclocked G71 with a quad disabled would seem to be a good bet for RSX.
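
A minimal sketch of that estimate, assuming the ~80-90% dual-SLI gain mentioned above carries over per additional card. The 80% efficiency figure and the 2x GTX512-vs-7800 ratio are the post's own rough numbers, not measurements:

```python
# Rough sketch of the quad-SLI estimate above -- illustrative numbers only.

gtx512_vs_vanilla_7800 = 2.0   # "a GTX512 being around double the power of a vanilla 7800"
sli_efficiency = 0.80          # assumed benefit per extra card, from the ~80-90% dual-SLI figure

cards = 4
effective_cards = 1 + (cards - 1) * sli_efficiency   # 1 + 3 * 0.8 = 3.4

print(f"4-way SLI: ~{effective_cards:.1f}x a single card "
      f"(i.e. roughly {effective_cards * 100:.0f}% of one card's performance)")
print(f"Quad GTX512 vs a vanilla 7800: ~{effective_cards * gtx512_vs_vanilla_7800:.1f}x")
```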
 
Since the developers are working with standard tools like Cg, OpenGL ES 2.0, Collada and others, my logic tells me that Nvidia is playing the same game that they played with the Xbox again.

RSX is nothing more than a G70 clocked at 550MHz in the specs, because Nvidia thought at that moment that 550MHz was the clock they would reach at 90nm. I am sure that the RSX is a G70 at 580MHz + FlexIO today, and it can get better day after day until Sony says: "Stop development, launch in 2 or 3 months".
 
pjbliverpool said:
No, but what makes you think Sony can quadruple the technical competence of the world's leading GPU developer?

What makes you think they can't? Just look at Cell and the EE. Sony is no slouch in chip design.
 
!eVo!-X Ant UK said:
What makes you think they can't? Just look at Cell and the EE. Sony is no slouch in chip design.

Good grief.

There's a vast chasm between "no slouch" and able to "quadruple the technical competence of the world's leading GPU developer".
 
Urian said:
Since the developers are working with standard tools like Cg, OpenGL ES 2.0, Collada and others, my logic tells me that Nvidia is playing the same game that they played with the Xbox again.

RSX is nothing more than a G70 clocked at 550MHz in the specs, because Nvidia thought at that moment that 550MHz was the clock they would reach at 90nm. I am sure that the RSX is a G70 at 580MHz + FlexIO today, and it can get better day after day until Sony says: "Stop development, launch in 2 or 3 months".

I'd say it'd be a bit silly if all they got was a G70 with FlexIO. I mean, the Xbox was one of the consoles that was put together the fastest, and even it had an extra vertex shader over the GF3, and a few tricks here and there.
 
zidane1strife said:
I'd say it'd be a bit silly if all they got was a G70 with FlexIO. I mean, the Xbox was one of the consoles that was put together the fastest, and even it had an extra vertex shader over the GF3, and a few tricks here and there.

Exactly, they're not going to spend over two years with Nvidia and millions of dollars just to add FlexIO.
 
!eVo!-X Ant UK said:
What makes you think they can't? Just look at Cell and the EE. Sony is no slouch in chip design.

Because it's a completely ludicrous expectation. If Nvidia were so completely incompetent at creating GPUs that an electronics company with virtually no GPU design experience could outperform them by a factor of 4, don't you think they would have been pushed out of the GPU industry long ago? In fact, if Sony were capable of producing these seemingly magical GPUs, don't you think they would have already cornered the market and eliminated the competition?

By your calculations, two years from now Sony would be capable of having a GPU on the market 64 times faster than anything Nvidia or ATI could produce on their own!

And anyway, what about Cell and the EE? Both are designed to be good at specific areas of computing and can't be directly compared to a P4 or Athlon.
 
Microsoft took 200 engineers from Nvidia to tweak the NV25 and make the NV2A.

Sony has only taken 50 engineers from Nvidia. I can't believe in a huge change, sorry.

Latest Nvidia GPU (at the time the PS3 launch is announced) + FlexIO = RSX.

Today the latest Nvidia GPU is the 90nm G70 running at a 580MHz clock speed.
 