RSX evolution

I wonder why the 65nm shrink of RSX is running so far behind that of the Cell BE. PS3 units with a 65nm Cell (type CECHG~) have been in production since October 2007, yet they still appear to contain a 90nm RSX.

What will the die size of the 65nm RSX be anyway? I've only been able to find some sporadic figures about the 90nm RSX (varying from 230 to 258 mm2).
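
For a ballpark guess, ideal optical scaling goes with the square of the feature-size ratio. Here's my back-of-envelope sketch (pure speculation; real shrinks rarely hit the ideal number because pads and analog blocks barely scale):

    #include <stdio.h>

    /* Back-of-envelope guess at a 90nm -> 65nm shrink of RSX.
     * Ideal area scaling is (65/90)^2; real chips do worse because
     * I/O pads, analog blocks and memory interfaces shrink little. */
    int main(void)
    {
        const double die_90nm_low  = 230.0;  /* reported 90nm RSX sizes, mm^2 */
        const double die_90nm_high = 258.0;
        const double ideal_scale   = (65.0 / 90.0) * (65.0 / 90.0); /* ~0.52 */

        printf("ideal scale factor: %.2f\n", ideal_scale);
        printf("ideal 65nm die: %.0f - %.0f mm^2\n",
               die_90nm_low * ideal_scale, die_90nm_high * ideal_scale);
        /* roughly 120 - 135 mm^2; expect the real part to land above that */
        return 0;
    }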

Also, has anybody seen any die shots of RSX? I could find a picture of a related GPU but not precisely RSX.
 
The fabs that make the Cell (East Fishkill and Nagasaki) were simply on 65nm before the fabs that make RSX (OTSS). There's no other real reason than that - we should be seeing 65nm RSX in production units before too long.
 
I'm guessing a yield issue. It may also be cheaper to produce the RSX at the current die size.
I'd expect the same; getting nearly 100% yield out of the 90nm dies might be better than the lower rates at 65nm.
Other companies also switch processes to get higher-clocked chips, but that's unimportant for Sony, so cost is the only reason I can see.
 
(off-topic maybe, but can I bring up gfx driver evolution here?)

We've all seen how PC gfx cards get new software drivers regularly. Does anything like that happen on consoles? When we download firmware from PSN/XBL, does it regularly update the gfx drivers as well?

Let's say NVIDIA/ATI PC drivers give a slight performance boost in PC games. Is that something they can't do in a known, fixed console world? Would it ruin the highly fine-tuned optimizations game devs have already done?
 
(off-topic maybe, but can I bring up gfx driver evolution here?)

We've all seen how PC gfx cards get new software drivers regularly. Does anything like that happen on consoles? When we download firmware from PSN/XBL, does it regularly update the gfx drivers as well?
Not really possible; most performance-related stuff is compiled into the games. You can see from the PSP how it works: the app generates ready-to-go command buffers, although there are several APIs you can use.
The "driver" just handles the switching of command buffers and the buffer swap.

Let's say NVIDIA/ATI PC drivers give a slight performance boost in PC games. Is that something they can't do in a known, fixed console world? Would it ruin the highly fine-tuned optimizations game devs have already done?
Most of those optimizations are trimming buffer sizes and sometimes patching the shaders a game uses so they have fewer stalls. You can think of it like the MiniGL drivers 3dfx made for the Voodoo: just the needed functionality is included, in the optimal way for that purpose. The generic out-of-the-box drivers, on the other hand, just have to work.
On consoles the developers are responsible for that fine-tuning and optimization. They have all the tools and low-level hardware access, and because there is only one hardware configuration, they can spend their time on it.
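
For a toy picture of what those per-game tweaks amount to on the PC side (every name and number below is invented), the driver basically carries a lookup table of application profiles:

    #include <stdio.h>
    #include <string.h>

    /* Toy model of a PC driver's per-application profiles: for known
     * executables the driver overrides its generic defaults (smaller
     * staging buffers, a patched shader variant, and so on). */
    typedef struct {
        const char *exe_name;
        unsigned    staging_buffer_kb;   /* trimmed from the default */
        int         use_patched_shaders;
    } AppProfile;

    static const AppProfile profiles[] = {
        { "some_game.exe",  256, 1 },
        { "other_game.exe", 512, 0 },
    };

    static const AppProfile generic_default = { "", 1024, 0 };

    static const AppProfile *lookup_profile(const char *exe)
    {
        for (size_t i = 0; i < sizeof(profiles) / sizeof(profiles[0]); ++i)
            if (strcmp(profiles[i].exe_name, exe) == 0)
                return &profiles[i];
        return &generic_default;   /* out-of-the-box path: just has to work */
    }

    int main(void)
    {
        const AppProfile *p = lookup_profile("some_game.exe");
        printf("staging buffer: %u KB, patched shaders: %d\n",
               p->staging_buffer_kb, p->use_patched_shaders);
        return 0;
    }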
 
(off-topic maybe, but can I bring up gfx driver evolution here?)

We've all seen how PC gfx cards get new software drivers regularly. Does anything like that happen on consoles?
No. On the PC the driver works kinda like a translator between two languages. Mr. A (API, graphics library, DirectX or OpenGL) speaks English and Mr. G (GPU) speaks French. Mr. A has instructions for what Mr. G has to do, but Mr. G doesn't speak the same language, so Mr. A's instructions are interpreted by Mr. D (Driver), who translates them into the language Mr. G understands. You can make refinements to that translation process.

Because PC developers have no idea what hardware their software is going to be running on, they can't write in the hardware's native language but have to use an interim language that gets translated by the drivers. Because the console hardware is fixed, devs can write directly to the hardware. The improvements over time won't come from nVidia or ATi drivers; the developers themselves will find better ways of doing things and get more performance.
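
A crude way to put that difference in code, with everything below invented just to illustrate the analogy: on the PC the call goes through the translator, while on a console the game can emit the hardware's own words itself.

    #include <stdint.h>
    #include <stdio.h>

    /* Mr. A speaks "API", Mr. G speaks "GPU words", Mr. D translates.
     * Opcodes and functions here are made up for the illustration. */

    /* PC path: the game speaks the API... */
    static void api_draw_triangles(int vertex_count)
    {
        /* ...and the driver translates it into whatever words this
         * particular GPU happens to understand. */
        uint32_t gpu_words[2] = { 0x00000010u,            /* fake DRAW opcode */
                                  (uint32_t)vertex_count };
        printf("driver emitted: %08x %08x\n", gpu_words[0], gpu_words[1]);
    }

    /* Console path: fixed hardware, so the game skips the translator
     * and writes the words it already knows the GPU wants. */
    static void console_draw_triangles(uint32_t *cmd, int vertex_count)
    {
        cmd[0] = 0x00000010u;            /* same fake opcode */
        cmd[1] = (uint32_t)vertex_count;
    }

    int main(void)
    {
        uint32_t cmd[2];
        api_draw_triangles(3);           /* PC: API -> driver -> GPU */
        console_draw_triangles(cmd, 3);  /* console: game -> GPU */
        return 0;
    }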
 
I'd think that they still use some sort of driver and API though. These devs want to port to PC and other consoles. Coding to the metal isn't going to make that easier. I doubt development today is going to the metal as much as it used to, with how complex coding has gotten and how multiplatform is the only way to go.
 
Well, you do get some changes in the APIs, fixes, optimizations, and whatnot. But in general, these layers are not really thick enough to call them drivers. We're pretty far down on the metal on both the 360 and the PS3. (No clue about the Wii, sorry.)

One thing you will simply not see on a console (AFAICT :)) is that the "driver" will mess around with your push-buffers, change your shaders and all of that. So you shouldn't expect big performance improvements from firmware updates.

To achieve multiplatform compatibility, you will have to write a good chunk of what DX or GL give you yourself. Which is actually a lot of fun, so there...
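
For what it's worth, that chunk usually ends up as a thin interface your game code targets, with one backend per platform underneath. A bare-bones sketch (structure only, every name made up):

    #include <stdio.h>

    /* A thin renderer interface the game targets; each platform supplies
     * its own backend (DX/GL on PC, the native command-buffer path on
     * console). Just a skeleton to show the shape of the layer. */
    typedef struct {
        void (*begin_frame)(void);
        void (*draw_mesh)(int mesh_id);
        void (*end_frame)(void);
    } Renderer;

    /* PC backend: would forward to D3D/GL calls. */
    static void pc_begin(void)   { printf("PC: begin frame\n"); }
    static void pc_draw(int id)  { printf("PC: draw mesh %d\n", id); }
    static void pc_end(void)     { printf("PC: present\n"); }

    /* Console backend: would write straight into the command buffer. */
    static void con_begin(void)  { printf("console: reset command buffer\n"); }
    static void con_draw(int id) { printf("console: emit draw words for %d\n", id); }
    static void con_end(void)    { printf("console: kick + flip\n"); }

    static const Renderer pc_renderer      = { pc_begin,  pc_draw,  pc_end  };
    static const Renderer console_renderer = { con_begin, con_draw, con_end };

    int main(void)
    {
        const Renderer *r = &console_renderer;  /* chosen per platform at build time */
        (void)pc_renderer;                      /* a PC build would pick this one */
        r->begin_frame();
        r->draw_mesh(42);
        r->end_frame();
        return 0;
    }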
 
With the 45nm Cell beginning production in early 2009, I can't help but think: is it worth spending costly engineering effort on a 65nm part that late?

The other question of course would be: Why did it take that long?
 
Kotaku isn't reporting anything we didn't already know - there is essentially zero new information there. As for RSX vs Cell in process progression, as discussed earlier in this thread, Cell and RSX are fabbed on different processes. IBM will have 45nm SOI before Toshiba will have 45nm CMOS6; same thing with 65nm - Cell was simply ready to fab on its process faster than RSX was on its. Plus, so long as there were plenty of 90nm chips lying around, they would have had to work through that inventory anyway.

There's no wasted effort in reducing RSX to 65nm this 'late' in the game, believe me, because 45nm isn't something we should expect anytime soon for that chip. Though there is the possibility that Toshiba will ramp 45nm faster relative to its peers than it did 65nm.
 
Not really possible; most performance-related stuff is compiled into the games. You can see from the PSP how it works: the app generates ready-to-go command buffers, although there are several APIs you can use.
The "driver" just handles the switching of command buffers and the buffer swap.


Most of those optimizations are trimming buffer sizes and sometimes patching the shaders a game uses so they have fewer stalls. You can think of it like the MiniGL drivers 3dfx made for the Voodoo: just the needed functionality is included, in the optimal way for that purpose. The generic out-of-the-box drivers, on the other hand, just have to work.
On consoles the developers are responsible for that fine-tuning and optimization. They have all the tools and low-level hardware access, and because there is only one hardware configuration, they can spend their time on it.

True, but just like with some portable consoles... if you use OS functions and the CPU performance hit of those functions goes down, you get more headroom to expand your application into... truth be told, unless the game is CPU-limited you might not see any automatic, immediate improvement.
 
Guys, one thing you should remember is that with long-running production runs like consoles, you only switch to a smaller process when the cost per transistor goes down. You can be damn sure that fabs charge more per transistor (and hence much more per mm2) on a smaller process for quite some time, because some people will pay a premium for the higher clocks and/or lower power consumption of chips made on smaller processes, assuming equal yields. (Rough numbers are sketched at the end of this post.)

Only when a fab's capacity on the smaller process gets big enough will it start to charge less per transistor and begin phasing out the larger process.

GPU makers make products that last only a couple years, so the fixed costs of taping out are very significant. They will pay extra at first to use a smaller process not only for the benefits I mentioned above, but also so that they don't have to redesign and tape out an additional time.

Both MS and Sony stuck with 90nm for a while, so that tells you the fabs aren't charging much for those chips despite the larger die size.
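
To put rough numbers on the cost-per-transistor argument (every figure below is invented; it's only to show the shape of the trade-off):

    #include <stdio.h>

    /* Toy cost comparison: a mature 90nm part vs an ideal 65nm shrink.
     * Wafer prices and yields are made up; the point is that the smaller
     * die doesn't pay off until the new process's price premium and
     * yield catch up. Ignores edge losses and defect-density detail. */
    int main(void)
    {
        const double wafer_area = 70685.0;   /* 300mm wafer in mm^2 */

        const double die_90   = 240.0;       /* mm^2 */
        const double die_65   = 240.0 * (65.0 / 90.0) * (65.0 / 90.0);
        const double wafer_90 = 3000.0;      /* $ per wafer, invented */
        const double wafer_65 = 5000.0;      /* early-node premium, invented */
        const double yield_90 = 0.95;        /* mature process */
        const double yield_65 = 0.60;        /* early ramp */

        const double good_90 = wafer_area / die_90 * yield_90;
        const double good_65 = wafer_area / die_65 * yield_65;

        printf("90nm: ~%.0f good dies/wafer, ~$%.2f per chip\n", good_90, wafer_90 / good_90);
        printf("65nm: ~%.0f good dies/wafer, ~$%.2f per chip\n", good_65, wafer_65 / good_65);
        /* With these made-up numbers the mature 90nm part is still cheaper
         * per chip; raise the 65nm yield or drop its wafer price and the
         * shrink wins, which is exactly when you switch. */
        return 0;
    }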
 
I'd think that they still use some sort of driver and API though. These devs want to port to PC and other consoles. Coding to the metal isn't going to make that easier. I doubt development today is going to the metal as much as it used to, with how complex coding has gotten and how multiplatform is the only way to go.

Sony's low-level RSX library (a high-level one doesn't really exist) is mostly macros that output words into the command buffer. I think "coding to the metal" is a good way to describe it ^^
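
As a rough picture of what "macros that output words into the command buffer" means in practice (the encoding below is invented, not the actual library):

    #include <stdint.h>

    /* Hypothetical macro-style GPU library: each "call" just packs a
     * method header plus its arguments into the command buffer. The
     * header layout and method numbers are made up for illustration. */

    typedef struct {
        uint32_t *cur;      /* write pointer into the command buffer */
    } Context;

    /* header = (argument count << 16) | method offset  -- invented layout */
    #define CMD_HEADER(method, count)  (((uint32_t)(count) << 16) | (uint32_t)(method))

    #define SET_BLEND_ENABLE(ctx, on)                          \
        do {                                                   \
            *(ctx)->cur++ = CMD_HEADER(0x0310 /* fake */, 1);  \
            *(ctx)->cur++ = (uint32_t)(on);                    \
        } while (0)

    int main(void)
    {
        uint32_t buffer[64];
        Context ctx = { buffer };
        SET_BLEND_ENABLE(&ctx, 1);           /* expands straight into two words */
        return (int)(ctx.cur - buffer);      /* 2 words written */
    }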
 