RSX and PR bull?

I haven't really gone into the discussion in earlier threads, beyond what I think RSX is (GTX @ 550 MHz, FlexIO and TCache).

I've given it some more thought, and it would be fun/interesting if it were different - not by much, but a little.

First, why is RSX the G70 with these "enhancements", or rather, where did I get convinced that this is what RSX is? As many others did along with me...

First, the two statements from David Kirk and Tony Tamasi about it being faster and built on/sharing the same architecture as G70.

Second is the famous E3 slide with the 136 instructions, and then when the G70 (GTX) launched you got the same numbers. For many the penny dropped, and it was easy to guess what RSX was going to be.

For the fun of it, if I take those statements at face value, I didn't see them as talk about PS3 graphics so much as the usual hype with a "new" core around the corner and major PR for nVidia.
The demos they showed and the slides were all pre-launch 7800 GTX material.

The talk about it being the same architecture as G70, or built on it, is the same thing I could say about NV40 and NV47 (G70). So considering the timespan between G70 and PS3, I think there's a good chance we will see something different, but not much, in RSX.

The facts we have are the memory controller, the faster clock and TCache. I wouldn't get my hopes too high, but there could well be a difference to keep up the interest.
 
scatteh316 said:

OK..?

My point is having a discussion NOT based on the "known" but rather on what is behind the PR. I think most would agree that nVidia will not give out any specs about a refresh of the G70 or the like.

I would like to have a discussion that's somewhat realistic and interesting, because it wouldn't surprise me if RSX is based on the part that nVidia will counter R580 with.

Maybe it doesn't matter, really, as I think bandwidth will be the constraint or bottleneck in PS3.
 
That's an interesting way to think about it (the R580 thing), but I dunno... we've been hearing really good things about overclocking the GTX as it is... I'm thinking they may just fight back with an up-clocked, more-pipes 7800. The G70 is still at 110nm if I'm not mistaken, so there should be plenty of room to move more pipes in at the smaller process, and handily enough the base architectural ideas in the 7800 series will be going into RSX, which will be a perfect 90nm test bed on a very mature fab. w00t.

As to what RSX will be... I dunno, honestly. I have this really weird suspicion... something Sony seems to be good at instilling... that there's something weird going on with RSX. That is to say, we keep hearing OVER and OVER and OVER that RSX is BASED on G70, and I can't help but think they've taken that technology and somehow fused it with some basic concepts of CELL. It's like a chocolate pie... the CELL architecture is the crust while G70 is the chocolate filling. Probably the worst analogy EVER, but if it were as closely related as we think, I imagine they would use the word "derivative" or something like that. Yeah, more wild crazy speculation, but frankly there just isn't enough to really think of anything real, you know. We know what we know, and we're waiting for more.
 
I'm just wondering why they would reduce the bus from 256-bit to 128-bit, presumably to reduce costs, yet at the same time increase costs by basing the GPU on the G70's successor...

It's kinda like, first things first: why would they cripple the bus to save money, then turn around and spend more money on extra performance? ...Doesn't make sense.

They also have to worry about yields, right? At 550 MHz? On 90nm?
 
Architectural improvements are a one-time R&D cost. Die size is the only thing that really matters when talking about 'advanced' GPUs, and of course the cost of such will go down throughout the life of the console as smaller nodes get adopted.

A 256-bit wide bus vs a 128-bit wide bus, on the other hand, is something that will present a significant cost headache throughout the life of the console - and that's certainly the reason both MS and Sony have decided to opt out of the wider bus.
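To put some rough numbers on what the narrower bus gives up - and to be clear, the clock figure below is just my own illustrative assumption, not anything confirmed for RSX:

```python
# Back-of-envelope: peak memory bandwidth as a function of bus width.
# The 1400 MT/s GDDR3 data rate is an assumed, illustrative figure,
# not a confirmed RSX spec.

def bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    """Peak bandwidth = (bus width in bytes) * (effective transfers/sec)."""
    return (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9

print(bandwidth_gb_s(128, 1400))  # ~22.4 GB/s on a 128-bit bus
print(bandwidth_gb_s(256, 1400))  # ~44.8 GB/s on a 256-bit bus
```

Same memory chips, half the data pins, half the peak bandwidth - but also far fewer board traces and a cheaper package on every unit built.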
 
scooby_dooby said:
I'm just wondering why they would reduce the bus from 256-bit to 128-bit, presumably to reduce costs, yet at the same time increase costs by basing the GPU on the G70's successor...

It's kinda like, first things first: why would they cripple the bus to save money, then turn around and spend more money on extra performance? ...Doesn't make sense.

They also have to worry about yields, right? At 550 MHz? On 90nm?

No matter what they do, they have to redesign somewhat to fit into the PS3, as I don't expect the PS3 to have a PCIe slot. Saving cost and increasing performance (with respect to a fixed platform) are both possible; obviously they need to balance the two.

I don't think of RSX as a modified G70; I work from the assumption that RSX is based heavily on the G70 design.
 
I agree. I think we actually know less about RSX than we thought we did. And yeah, I also don't think we can get a representative picture of RSX's performance by overclocking a GTX to 550 MHz.
 
xbdestroya said:
Architectural improvements are a one-time R&D cost. Die size is the only thing that really matters when talking about 'advanced' GPUs, and of course the cost of such will go down throughout the life of the console as smaller nodes get adopted.

A 256-bit wide bus vs a 128-bit wide bus, on the other hand, is something that will present a significant cost headache throughout the life of the console - and that's certainly the reason both MS and Sony have decided to opt out of the wider bus.


See, that's why I always wondered why it would be such a bad idea to go with XDR DRAM for the GPU as well. The lower pin count at a higher clock could allow for 256-bit-bus levels of bandwidth while keeping costs down over the span of the console's life. Yeah, up front there'd be a price cost, but we're talking long term. Everyone said that was crazy, though.
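A rough sketch of the pin-count argument is below - the per-pin data rates are assumptions based on commonly quoted figures (XDR at ~3.2 Gbps/pin as on CELL's own interface, GDDR3 at ~1.4 Gbps/pin), not anything confirmed for an RSX design:

```python
# Rough sketch: higher per-pin data rates mean fewer data pins are needed
# to reach the same peak bandwidth. The per-pin rates passed in below are
# assumed, illustrative figures, not confirmed RSX/PS3 memory specs.

def data_pins_needed(target_gb_s: float, gbps_per_pin: float) -> float:
    """Data pins required to hit a target bandwidth at a given per-pin rate."""
    target_gbps = target_gb_s * 8  # GB/s -> Gbit/s
    return target_gbps / gbps_per_pin

print(data_pins_needed(25.6, 3.2))  # XDR @ ~3.2 Gbps/pin: ~64 data pins
print(data_pins_needed(25.6, 1.4))  # GDDR3 @ ~1.4 Gbps/pin: ~146 data pins
```

Fewer pins means a smaller memory interface on the die and fewer traces on the board, which is where the long-term cost argument comes from.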
 
Mefisutoferesu said:
See, that's why I always wondered why it would be such a bad idea to go with XDR DRAM for the GPU as well. The lower pin count at a higher clock could allow for 256-bit-bus levels of bandwidth while keeping costs down over the span of the console's life. Yeah, up front there'd be a price cost, but we're talking long term. Everyone said that was crazy, though.

Great point on the XDR pin count.

The reason GDDR was used rather than XDR in the first place was supposedly that NVidia couldn't get their memory controller to work up to snuff with the XDR in the time they had allotted. Now... seeing as how it's been quite a while, that almost seems a little unbelievable, but nonetheless that was the reason given in some interview somewhere.
 
xbdestroya said:
Great point on the XDR pin count.

The reason GDDR was used rather than XDR in the first place was supposedly that NVidia couldn't get their memory controller to work up to snuff with the XDR in the time they had allotted. Now... seeing as how it's been quite a while, that almost seems a little unbelievable, but nonetheless that was the reason given in some interview somewhere.


Why does that seem unbelievable?

Figure they've worked their asses off to reach the point they are at now. Could they have afforded another six-month to one-year delay while the hardware was redesigned, tested and re-taped-out, just to reach the same point they are at now? Would any performance benefit they might have received be worth the investment and delay?
 
Powderkeg said:
Why does that seem unbelievable?

Figure they've worked their asses off to reach the point they are at now. Could they have afforded another six-month to one-year delay while the hardware was redesigned, tested and re-taped-out, just to reach the same point they are at now? Would any performance benefit they might have received be worth the investment and delay?


I'm missing your point. Are you saying that in two years of designing this thing, it doesn't strike you as odd that they couldn't figure out an XDR memory controller? I mean - they claim that they couldn't/didn't, and I'm not doubting them per se. But I do find it a bit odd.
 
xbdestroya said:
I'm missing your point. Are you saying that in two years of designing this thing, it doesn't strike you as odd that they couldn't figure out an XDR memory controller? I mean - they claim that they couldn't/didn't, and I'm not doubting them per se. But I do find it a bit odd.


You are assuming that they've actually spent 2 years on RSX alone. And that's a HUGE assumption from you.

Based on the fact that it really is (despite the obvious denial here) a modified G70, it's much more likely that they've spent two years on the G70, and less than a year on the RSX conversion of that GPU.

And how long do you think it takes to create an entirely new memory controller for a completely new RAM type that you have zero experience with? I'm talking about going from concept to design to alpha test hardware to beta test hardware to final production chips? Think they could get that all done in a month or two?

Or, if you would prefer, we can use reverse logic. If Nvidia has spent two or more years on RSX design alone, and it was highly customized for the PS3, then why wouldn't they have designed the chip to use the RAM type that has been part of the PS3 design for well over three years now?

That doesn't make much sense, does it?
 
Powderkeg said:
You are assuming that they've actually spent 2 years on RSX alone. And that's a HUGE assumption from you.

Based on the fact that it really is (despite the obvious denial here) a modified G70, it's much more likely that they've spent two years on the G70, and less than a year on the RSX conversion of that GPU.

And how long do you think it takes to create an entirely new memory controller for a completely new RAM type that you have zero experience with? I'm talking about going from concept to design to alpha test hardware to beta test hardware to final production chips? Think they could get that all done in a month or two?

Or, if you would prefer, we can use reverse logic. If Nvidia has spent two or more years on RSX design alone, and it was highly customized for the PS3, then why wouldn't they have designed the chip to use the RAM type that has been part of the PS3 design for well over three years now?

That doesn't make much sense, does it?


I take the ~two years thing from the same place everybody else does: the David Roman interview with X-bit labs. I'm not trying to create a fantasy here. RSX was being developed as a derivative of, and concurrently with, the G70 - not in some after-the-fact type of situation. I agree they are probably very similar - I never said otherwise. David Roman makes clear it's based on the G70 architecture and that the G70 R&D forms the core of RSX: I'm not trying to fight that.

Now, I know you have no idea how long it takes to design a memory controller, and neither do I. You asked how long I think it takes? Well, I think it takes less than two years. But that's neither here nor there, since I don't think the RSX memory controller will work with XDR - I was simply musing, and that much was obvious to anyone. So really, I fail to see your point.

Now I would ask you why you think that only one aspect of chip design can be worked on at a time. Surely if they wanted to try for XDR compatibility, they could have been working on that concurrently - not exclusively - in conjunction with all the other development throughout the life of the project.
 
This interview??

"Anna (X-bit labs): As far as I know NVIDIA may claim that there was not that much investment into the RND: only about 50 engineers. Is this the result of the fact that Sony’s own engineers contributed to the development of the GPU in a significant way?

David Roman: We do not disclose anything on the actual resources. Obviously there is a major economy of scale. This chip is a custom version of our next generation GPU. So we’ve been working on the next generation GPU for close to two years now, namely about 18 months. I don’t know the cost of this one but I know the cost of the last generation: it was 350 million dollars. These are expensive chips to develop. So, the fact that we didn’t have to do that development just for the Sony application obviously is a major economy of scale, because we are doing the development for the new chip anyway. The amount of work involved into customization, I don’t know. "

It's a custom version of G70; they'd been working on G70 for 18 months at the time of the interview, and it didn't make economic sense to design a custom PS3 GPU when they were designing their new chip anyway.

He does not say they were designing RSX for two years concurrently with G70; he says RSX is a custom G70, and that the G70 had been in development for 18 months.

" And basically what it is, it’s on next generation of GPU. As you know we don’t talk about next generation products but it’s our next generation of GPU. And we’ve been working with them to produce a customized version that is customized specifically to connect that to the cell processor, so that they could work together."

So clearly they're doing something to make it work with CELL - it will be interesting to see what happens!
 
dukmahsik said:
It's interesting we don't know more about the RSX when the system is said to be 4-5 months away

It definitely seems that way... feels like I've been waiting forever!

But I just checked when Dave's Xenos article came out, and it was mid-June - basically five months before the 360's launch. So if we assume a March launch for PS3, and if there's some sort of arbitrary trend in this world of ours, hopefully we'll get RSX info within the next month or so.
 
First, I'll say I don't think Sony originally wanted to use a GPU in the PS3 and was originally going to use two Cell processors instead. I think RSX is a G70 with the PC-specific stuff stripped out, the I/O beefed up with FlexIO, and maybe some small functions added to enhance PS3/RSX interaction. When they added FlexIO, though, it probably wasn't just a matter of replacing PCIe; they probably beefed up the architecture to handle the extra bandwidth - after all, what's the point of all that bandwidth if the bottleneck is the architecture itself? (I haven't seen any 7800 AGP vs. PCIe benchmarks, but with the 6800s it was neck and neck.) I would also say adapting an architecture for a smaller manufacturing process is a non-trivial task, especially if the architecture wasn't originally designed with the smaller process in mind.
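For reference, here's a rough comparison of the host-to-GPU link bandwidths being talked about - the FlexIO figures are the commonly quoted Cell-to-RSX numbers and should be treated as approximate, not confirmed final-hardware specs:

```python
# Rough, aggregate host<->GPU link bandwidths (GB/s). The FlexIO numbers
# are the commonly quoted Cell<->RSX figures and are approximate/assumed,
# not confirmed final-hardware specs.

links_gb_s = {
    "AGP 8x":               2.1,          # single shared channel
    "PCIe x16 (gen 1)":     4.0 + 4.0,    # ~4 GB/s each direction
    "FlexIO (Cell<->RSX)":  20.0 + 15.0,  # ~20 GB/s one way, ~15 GB/s the other
}

for name, bandwidth in links_gb_s.items():
    print(f"{name:22s} ~{bandwidth:.1f} GB/s aggregate")
```

If those FlexIO numbers are anywhere near right, that's a lot more headroom than a stock PC interface, which is exactly why it would make sense to widen the internal paths to feed it.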
 
scooby_dooby said:
It's a custom version of G70; they'd been working on G70 for 18 months at the time of the interview, and it didn't make economic sense to design a custom PS3 GPU when they were designing their new chip anyway.

He does not say they were designing RSX for two years concurrently with G70; he says RSX is a custom G70, and that the G70 had been in development for 18 months.


Yes, that's the interview. I read it, of course, so I'm not arguing with your logic; I'm just saying that it's not hard to believe that provisions for RSX development could have begun those eighteen months ago (counting back from January). I remember reading, years ago, about NVidia approaching Sony about graphics collaboration for the PS3, and I read this on cbsmarketwatch.com. In the past year I have not been able to find that article, and back then I don't recall anything ever coming of it. Either Sony and NVidia went stealth right then and there, or Sony crawled back to NVidia after the fact.

Here's the deal - if I'm wrong, I couldn't care less. This isn't directed at you either, Scooby - rather at Powderkeg - but I really have to make clear that RSX's development is not some emotional topic for me. If it's the best ever, great; if it sucks beyond belief, hey, I still have a job and a girlfriend, right?

I was simply waxing philosophical on memory bit-width issues, not actually trying to make some prognosis of what form I thought the final RSX would take.

@Robofunk: G72 will be 90nm, so I'm sure that there were engineering provisions available to Sony for the 90nm G70--->RSX transition, and that it didn't have to be a 'from scratch' proposition.
 
xbdestroya said:
It definitely seems that way... feels like I've been waiting forever!

But I just checked when Dave's Xenos article came out, and it was mid-June - basically five months before the 360's launch. So if we assume a March launch for PS3, and if there's some sort of arbitrary trend in this world of ours, hopefully we'll get RSX info within the next month or so.

Why would you assume March?
 