Could be more RSX info...

!eVo!-X Ant UK said:
Not incompetent, just needed pushing in a new direction by Sony.

The multiple core direction ;)

I don't think Nvidia needs to be pushed to aim for the best performance possible; that's why we all love competition. As for the multicore approach, many people have already been over the reasons why it makes no sense whatsoever for a GPU. It's like putting more wheels on a car to make it go faster.
 
pjbliverpool said:
I don't think Nvidia needs to be pushed to aim for the best performance possible; that's why we all love competition. As for the multicore approach, many people have already been over the reasons why it makes no sense whatsoever for a GPU. It's like putting more wheels on a car to make it go faster.
Multicore GPUs might not have any use now, but we will be seeing them... eventually :)
 
!eVo!-X Ant UK said:
Multicore GPUs might not have any use now, but we will be seeing them... eventually :)

You're misunderstanding the concept. There is no such thing as a multicore GPU, because GPUs are already multicore. A quad is effectively a GPU core which shares some parent resources, much as a dual-core CPU can share L2 cache or the memory interface. So G70 already has 6 "cores". Creating completely independent cores would just be creating more overhead for those quads.
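A rough sketch of that organization, using the commonly cited G70 figures (the grouping here is illustrative, not an actual die map):

```python
# Illustrative only: a GPU is already "multicore" in the sense that its
# pixel pipelines are grouped into quads that share parent resources,
# much as a dual-core CPU shares L2 cache or the memory interface.
g70 = {
    "shared": ["setup/raster", "memory interface", "ROPs"],  # parent resources
    "quads": [{"pixel_pipes": 4} for _ in range(6)],         # 6 quads of 4 pipes
}

print(len(g70["quads"]))                            # 6 "cores"
print(sum(q["pixel_pipes"] for q in g70["quads"]))  # 24 pixel pipelines
```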
 
!eVo!-X Ant UK said:
Multicore GPUs might not have any use now, but we will be seeing them... eventually :)

They will have a use when the transistor count gets so high that it affects yields...

The most cost-effective method of adding more logic at that point while keeping yields high would be multi-GPU configurations, not necessarily on the same die but within the same "package" (high speed/bandwidth buses connecting them).
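A rough sketch of why, using the classic Poisson yield model with a made-up defect density (only the trend matters, not the numbers):

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.5  # assumed defect density in defects/cm^2 (made-up figure)

# One monolithic 3 cm^2 GPU vs. two independent 1.5 cm^2 GPUs in a package.
# A defect scraps a whole die, so smaller dies waste less silicon: good
# halves can be paired freely, while one flaw kills the entire big chip.
print(f"monolithic 3.0 cm^2 yield: {poisson_yield(3.0, D):.2f}")  # ~0.22
print(f"split 1.5 cm^2 die yield:  {poisson_yield(1.5, D):.2f}")  # ~0.47
```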
 
Urian said:
Microsoft took 200 engineers from Nvidia to tweak the NV25 into the NV2A.

Sony has only taken 50 engineers from Nvidia. I can't believe in a huge change, sorry.

I'm not sure the number of engineers bears much relevance to the question of whether RSX is a mere upclocked G70 with one or two changes or in fact something greater*.

*Note: I'm not expecting a greater change, though I'm not going to rule it out either. Certainly not based on any engineer headcount...
 
I don't know, guys, I can't really relate to the 24-pipeline G70 @ 550 MHz not being good enough. Developers are going to get some amazing results out of such a chip. Note, I also agree with some here that the 24 (x2) pixel shaders in the G70 are very much a "farm" of ALUs, even if they have TMUs attached.

All these new patents might be referring to something Nvidia is releasing this summer, the G80 class, which would not be ready in time for a PS3 launch.

Sony has to manufacture millions of these GPUs this year, so it stands to reason the chip has to be done soon to get onto Sony's manufacturing process. This takes time.
 
!eVo!-X Ant UK said:
Not incompetent, just needed pushing in a new direction by Sony.

The multiple core direction ;)
Let's take G70 as being 300 million transistors with 24 pixel pipelines and 6 vertex pipes (is that right?). What is your envisaged RSX multicore GPU, and how much would it cost, remembering that a doubling in transistor count brings a more-than-linear increase in cost (yields fall as die area grows), though I'm not sure what the ratio would be.
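For a rough feel of that ratio, a back-of-envelope sketch (Poisson yield again; the wafer cost, wafer area, and defect density are all made-up figures, so only the trend matters):

```python
import math

def cost_per_good_die(wafer_cost, wafer_area_cm2, die_area_cm2, defect_density):
    """Back-of-envelope cost model: ignores edge losses; Poisson yield Y = exp(-A*D)."""
    dies_per_wafer = wafer_area_cm2 / die_area_cm2
    die_yield = math.exp(-die_area_cm2 * defect_density)
    return wafer_cost / (dies_per_wafer * die_yield)

WAFER_COST, WAFER_AREA, D = 5000.0, 660.0, 0.5  # assumed figures (300 mm wafer)
for area in (1.5, 3.0):  # doubling die area ~= doubling transistor count
    cost = cost_per_good_die(WAFER_COST, WAFER_AREA, area, D)
    print(f"{area:.1f} cm^2 die: ${cost:.0f}")
# ~$24 vs ~$102: double the transistors, roughly 4x the cost per good die here
```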
 
Shifty Geezer said:
Let's take G70 as being 300 million transistors with 24 pixel pipelines and 6 vertex pipes (is that right?). What is your envisaged RSX multicore GPU, and how much would it cost, remembering that a doubling in transistor count brings a more-than-linear increase in cost (yields fall as die area grows), though I'm not sure what the ratio would be.

Didn't Sony's slides claim 300M for RSX?
 
300M+
Edge said:
I don't know, guys, I can't really relate to the 24-pipeline G70 @ 550 MHz not being good enough. Developers are going to get some amazing results out of such a chip. Note, I also agree with some here that the 24 (x2) pixel shaders in the G70 are very much a "farm" of ALUs, even if they have TMUs attached.

All these new patents might be referring to something Nvidia is releasing this summer, the G80 class, which would not be ready in time for a PS3 launch.

Sony has to manufacture millions of these GPUs this year, so it stands to reason the chip has to be done soon to get onto Sony's manufacturing process. This takes time.
It's good enough, but it's not like the G80 will pop out of thin air this summer; it's been in the works for quite a while, and some aspects or improvements (for example, vertex texturing) could easily have been implemented along with the FlexIO. Nvidia also likely has several potential R&D paths in the works at any one time and could borrow any ideas from those that would work.

PS: As for the RSX vs. 4xSLI comparison, remember the Cell vs. GPU thread: it doesn't have to keep up in every scenario for them to claim similar performance. If it does so within particular scenarios, that'll do, and that's actually possible if they did manage to make certain improvements ;)
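To put rough numbers on the 4xSLI point, an Amdahl-style sketch (the parallel fractions are made-up knobs; real SLI overheads such as AFR latency and duplicated memory differ):

```python
def multi_gpu_speedup(n_gpus, parallel_fraction):
    """Amdahl-style estimate: only the parallelizable share scales with GPU count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_gpus)

# Made-up parallel fractions; the point is that 4x the GPUs != 4x the frame rate.
for p in (0.80, 0.90):
    print(f"p={p:.2f}: 4xSLI -> {multi_gpu_speedup(4, p):.2f}x")
# p=0.80 -> 2.50x, p=0.90 -> 3.08x
```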
 
Originally Posted by Urian
Microsoft took 200 engineers from Nvidia to tweak the NV25 into the NV2A.

Sony has only taken 50 engineers from Nvidia. I can't believe in a huge change, sorry.

Sorry but that proves nothing.
 
Joe DeFuria said:
Speaking of which, someone (I suppose this would be chip fabricators and/or chip packagers) really needs to come up with a technology that DOES allow for 256-bit bus chips to scale down to smaller sizes... I'm very disappointed that 256-bit PC cards seem unable to break the $199 barrier in any meaningful way. (And that 128-bit chips have not overtaken 64-bit chips in the value segment...)

Oh, they can scale down high pin counts to smaller areas. It does cost a whole bucketload of money to do it, though. It's always a cost/benefit trade-off.

Aaron Spink
speaking for myself inc.
 
zidane1strife said:
4xSLI does not translate to a 4x/400% improvement. Maybe ~2-2.5x. If it's 32 pipes and is boosted to ~600-625 MHz, it could offer such a performance boost over a vanilla 7800, or more if it had custom G80-ish improvements (threading), especially if it's paired with a faster XDR-based setup.

Sony would have a pretty big challenge cooling a 100+ watt GPU plus a ~100 watt CPU, plus the BRD, plus the power supply, all within a box with supposedly a smaller volume than the X360.

To give a counterpoint: supposedly, XCPU is around ~80 watts and XGPU is around 40-50 watts. And they still have cooling issues, and that is with an external power supply.

People are free to speculate all they want, but while you're speculating, speculate on the physical constraints of the system and how your speculations work within that environment.
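To make the constraint concrete, a quick tally of the ballpark figures (the BRD and PSU-loss entries are my own guesses, not specs):

```python
# Every number here is a rough estimate for illustration, not a measurement.
ps3_guess = {"GPU": 100, "CPU": 100, "BRD + misc": 15, "internal PSU loss": 30}
x360_guess = {"XCPU": 80, "XGPU": 45}  # PSU is external, so its loss isn't in the box

print("PS3 speculation:", sum(ps3_guess.values()), "W inside the box")   # 245 W
print("X360 comparison:", sum(x360_guess.values()), "W inside the box")  # 125 W
```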

Aaron Spink
speaking for myself inc.
 
aaronspink said:
and XGPU is around 40-50 watts. And they still have cooling issues, and that is with an external power supply.

People are free to speculate all they want, but while you're speculating, speculate on the physical constraints of the system and how your speculations work within that environment.

And at what performance would you rate XGPU in regard to ATI's family of processors... R520? R580?
 
aaronspink said:
Sony would have a pretty big challenge cooling a 100+ watt GPU plus a ~100 watt CPU, plus the BRD, plus the power supply, all within a box with supposedly a smaller volume than the X360.

To give a counterpoint: supposedly, XCPU is around ~80 watts and XGPU is around 40-50 watts. And they still have cooling issues, and that is with an external power supply.

People are free to speculate all they want, but while you're speculating, speculate on the physical constraints of the system and how your speculations work within that environment.

Aaron Spink
speaking for myself inc.

TSMC 90nm != Sony 90nm; the latter may consume a bit less at the same speed.
MS != Sony; Sony's known for making pretty large heat sinks when it's called for. For example, the PS2 was originally supposed to be 180nm but alas came in at 250nm, yet they managed it, and 180/250 = 0.72. Now the PS3 is designed for 90nm but will probably be scaled down to 65nm; with that in mind they can do it again, 65/90 = 0.72.
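Worth noting that the linear shrink understates the win: die area (and roughly power at the same clock) scales with the square of that ratio. A quick check:

```python
# Linear shrink ratios from above; area scales with the ratio squared.
for old, new in ((250, 180), (90, 65)):
    linear = new / old
    print(f"{old}nm -> {new}nm: linear {linear:.2f}, area ~{linear ** 2:.2f}")
# Both shrinks: linear ~0.72, area ~0.52 -- i.e., roughly half the die size
```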

This brings up another parallel between the DC and the X360: both went up against Sony h/w of their time, and both shared the same process as that Sony h/w. Yet the former was thoroughly trounced in the gphx dept., though it is still unknown what will happen with the latter.

PS: Also note I later clarified, saying it could be in specific scenarios, akin to what happened in the Cell vs. GPU thread.
 
The possibility does exist that Sony could fab the RSX on its SOI process to help the GPU run faster and cooler. It's hard to believe that all the heavy investment into SOI would just be for CELL CPUs.
 
zidane1strife said:
TSMC 90nm != Sony 90nm; the latter may consume a bit less at the same speed.

Um, possibly, but not by a lot. Sony is no better than IBM, and Nvidia didn't increase heat going from IBM to TSMC.

MS != Sony; Sony's known for making pretty large heat sinks when it's called for. For example, the PS2 was originally supposed to be 180nm but alas came in at 250nm, yet they managed it, and 180/250 = 0.72. Now the PS3 is designed for 90nm but will probably be scaled down to 65nm; with that in mind they can do it again, 65/90 = 0.72.

It isn't an issue of heat sinks; the X360 has fairly large heatsinks. It has to do with volume, airflow, and noise. PS2 was on a whole different thermal level than PS3.

And PS2 wasn't originally supposed to be 180 nm; it was supposed to be 250 nm, or do you think you make a process change in less than 12 months for a chip design? The PS2 cooling and enclosure were designed around the thermal loads they would get at 250 nm.

Aaron Spink
speaking for myself inc.
 
Brimstone said:
The possibility does exist that Sony could fab the RSX on its SOI process to help the GPU run faster and cooler. It's hard to believe that all the heavy investment into SOI would just be for CELL CPUs.

I'm curious about this too -- I don't know of anything that would patently make it impossible?

I know there are often quirky things, but a GPU should be relatively similar to a CPU in its density and contained structures (if not a bit more forgiving)?
 
!eVo!-X Ant UK said:
But it's not just Nvidia working on RSX now, is it???
See: The Mythical Man-Month, No Silver Bullet, common sense, &c., &c. It's not like Sony'll crack the whip and get NV productive again. NV hasn't exactly been coasting on Riva's success for the past decade. One reason they've outlasted S3, PVR, and 3dfx (at least in the retail space) might be b/c they're hard, intelligent workers (as opposed to clueless slackers--or even competent punctuals :)).

If you think adding another team can produce miracles, I guess a very weak comparison can be made to ATI + ArtX = Radeon 9700 Pro. As celebrated as it was, it wasn't miraculous in the sense of consistently (or, heck, ever) doubling the competing architecture's speed. I guess they did pack an extra core in there in the form of another quad, but they were clocked lower and used roughly the same number of transistors. So, no miracles relative to NV (but somewhat miraculous relative to pre-9700 ATI, in that they went from trailing slightly to leading slightly).

Stupid question about PS3 being all XDR: Rambus won't be fab-constrained supplying every PS3 with double the (announced) chips, will it? I have no idea how big of a supplier they are, but PS2 and PSP are only packing around 32 MB each, while PS3 will shoot to 512--and presumably use a not-insignificantly greater number of XDR chips. That's volume, baby. OTOH, initial volume may be purposefully limited/masked by a high initial price.
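For scale, a quick chip-count sketch, assuming 512 Mbit XDR devices (an assumption on my part; actual device densities may differ):

```python
MBIT_PER_CHIP = 512  # assumed XDR device density; actual parts may differ

def xdr_chips_needed(megabytes):
    """How many discrete XDR devices a given capacity would take."""
    return megabytes * 8 // MBIT_PER_CHIP

print(xdr_chips_needed(256))  # 4 chips for the announced 256 MB of XDR
print(xdr_chips_needed(512))  # 8 chips if the full 512 MB went XDR
```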
 
Pete said:
Stupid question about PS3 being all XDR: Rambus won't be fab-constrained supplying every PS3 with double the (announced) chips, will it? I have no idea how big of a supplier they are, but PS2 and PSP are only packing around 32 MB each, while PS3 will shoot to 512--and presumably use a not-insignificantly greater number of XDR chips. That's volume, baby. OTOH, initial volume may be purposefully limited/masked by a high initial price.

Rambus doesn't supply the RAM themselves - they license it. Samsung, Toshiba, and Elpida are the notable sources Sony could draw from for their XDR. I know Samsung's a confirmed XDR supplier to Sony; I forget whether either of the other two is confirmed, but I'm sure that throughout its life Sony will be purchasing modules from all three for PS3. I know Elpida was planning a production ramp with the PS3 specifically in mind, if that's any indication.
 
Pete said:
Stupid question about PS3 being all XDR: Rambus won't be fab-constrained supplying every PS3 with double the (announced) chips, will it? I have no idea how big of a supplier they are, but PS2 and PSP are only packing around 32 MB each, while PS3 will shoot to 512--and presumably use a not-insignificantly greater number of XDR chips. That's volume, baby. OTOH, initial volume may be purposefully limited/masked by a high initial price.

Rambus is an IP company; they don't own any fabs. It's up to Samsung and Elpida to churn out XDR in volume. Samsung alone provides 85% of the world's GDDR3 RAM, and they already produce XDR.
 