Impress PC Watch: Will the PS3's backward compatibility with the PlayStation and PlayStation 2 be done through hardware?
Ken Kutaragi: It will be done through a combination of hardware and software. We can do it with software alone, but it's important to make it as close to perfect as possible. Third-party developers sometimes do things that are unimaginable. For example, there are cases where their games run, but not according to the console's specifications. There are times when games pass our tests but are written in ways that make us say, "What in the world is this code?!" We need to support backward compatibility for those kinds of games as well, so trying to achieve compatibility through software alone is difficult. Some things will have to be handled in hardware. However, with the power of [a machine like] the PS3, some parts can be handled by hardware and some parts by software.
IPW: What about endianness (byte order) when emulating CPU code in software?
KK: The Cell is bi-endian (it can switch between big-endian and little-endian byte ordering), so there are no problems.
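As an aside, here is a minimal C sketch of why byte order matters for CPU emulation. It is not PS3 or emulator code; the helper name guest_load32 and its parameters are hypothetical. The point is that a host whose endianness differs from the guest's must byte-swap every memory access, whereas a bi-endian host can simply run in the guest's byte order and skip that work.

```c
/* Illustrative sketch only -- not actual PS3 or emulator code.
 * If the host's byte order differs from the emulated guest's,
 * every 32-bit load needs a byte swap; a bi-endian host that
 * switches to the guest's byte order avoids this entirely. */
#include <stdint.h>
#include <string.h>

static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Hypothetical emulator helper: read a 32-bit word from guest memory. */
uint32_t guest_load32(const uint8_t *guest_mem, uint32_t addr,
                      int host_matches_guest_endian)
{
    uint32_t raw;
    memcpy(&raw, guest_mem + addr, sizeof raw); /* unaligned-safe copy */
    return host_matches_guest_endian ? raw : swap32(raw);
}
```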
IPW: The Xbox 360's backward compatibility will be done by software; they have no other choice, since they don't manufacture their own chips...
KK: The current Xbox will become antiquated once the new machine comes out this November. When that happens, the Xbox will be killing itself. The only way to avoid that is to support 100 percent compatibility from its [Xbox 360's] launch date, but Microsoft won't be able to commit to that. It's technically difficult.
IPW: The most surprising thing about the PS3's architecture is that its graphics are not processed by the Cell. Why didn't you make a Cell-based GPU?
KK: The Cell's seven Synergistic Processor Elements (SPE) can be used for graphics. In fact, some of the demos at E3 were running without a graphics processor, with all the rendering done by the Cell alone. However, that kind of usage is a real waste. There are a lot of other things that should be done with the Cell. One of our ideas was to equip two Cell chips and use one as a GPU, but we concluded that a Cell used as a computing chip and a Cell used as a shader would have to be different, since a shader should be graphics-specific. The Cell's architecture lets it do anything, and its SPEs can handle things such as displacement mapping. Prior to the PS3, real-time rendered 3D graphics might have looked real, but they weren't actually calculated in a fully 3D environment. That was OK for screen resolutions up until now, and even today most of the games for the Xbox 360 use that kind of 3D. However, we want to realize fully calculated 3D graphics in fully 3D environments. In order to do that, we need to share data between the CPU and GPU as much as possible. That's why we adopted this architecture. We want all the floating-point calculations, including their rounding, to come out the same on both chips, and we've been able to make them almost identical. As a result, the CPU and GPU can use each other's calculated figures bidirectionally.
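To make the rounding point concrete, here is a small generic C sketch (not Cell or RSX code) showing how the same single-precision addition produces different results under different rounding modes. If a CPU-side calculation and a GPU-side shader rounded differently in this way, values passed back and forth between them would drift, which is why matching the rounding behavior matters.

```c
/* Illustrative sketch: the same float addition under two rounding
 * modes yields two different results. Two processors that round
 * differently cannot reuse each other's figures exactly.
 * (volatile keeps the compiler from folding the sums at compile time.) */
#include <stdio.h>
#include <fenv.h>

int main(void)
{
    volatile float a = 1.0f;
    volatile float b = 1e-7f;   /* just over half an ulp of 1.0f */

    fesetround(FE_TONEAREST);
    volatile float r_nearest = a + b;   /* rounds up to ~1.00000012f */

    fesetround(FE_TOWARDZERO);
    volatile float r_trunc = a + b;     /* truncates back down to 1.0f */

    fesetround(FE_TONEAREST);
    printf("round-to-nearest : %.9g\n", (double)r_nearest);
    printf("round-toward-zero: %.9g\n", (double)r_trunc);
    return 0;
}
```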
IPW: We were predicting that eDRAM was going to be used for the graphics memory, but after hearing that the PS3 will support the use of two HDTVs, we understood why it wasn't being used.
KK: Fundamentally, the GPU can run without graphics memory, since it can use Redwood (the high-speed interface between the Cell and the RSX GPU) and YDRAM (the code name for XDR DRAM). YDRAM is unified memory. However, there's still the question of whether [bandwidth and cycle time] should be wasted accessing memory located far away when processing graphics or running the shader. And there's also no reason to use up the Cell's memory bandwidth for ordinary graphics work. The shader does a lot of calculations of its own, so it requires its own memory. A lot of VRAM is especially needed to drive two HDTV screens at full resolution (1920x1080 pixels). For that, eDRAM is no good. eDRAM was good for the PS2, but for two HDTV screens, it's not enough. If we tried to fit enough eDRAM [to support two HDTV screens] onto a 200-to-300-square-millimeter chip, there wouldn't be enough room for the logic, and we would have had to cut down on the number of shaders. It's better to use the logic in full and add a lot of shaders.
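Some rough back-of-the-envelope arithmetic shows why two full-HD screens overwhelm on-die memory. The figures below are our own assumptions (32-bit color, a 32-bit depth/stencil buffer, double buffering, no textures), not numbers from the interview, but they illustrate the scale involved.

```c
/* Back-of-the-envelope estimate (assumed formats, not official figures):
 * frame-buffer memory for two 1920x1080 displays with 32-bit color,
 * double buffering, and a 32-bit depth/stencil buffer -- before any
 * textures or intermediate render targets are counted. */
#include <stdio.h>

int main(void)
{
    const double mib         = 1024.0 * 1024.0;
    const long   pixels      = 1920L * 1080L;  /* one full-HD screen   */
    const long   color_bytes = 4;              /* 32-bit RGBA          */
    const long   depth_bytes = 4;              /* 32-bit depth/stencil */

    long per_screen  = pixels * (2 * color_bytes + depth_bytes); /* front + back + Z */
    long two_screens = 2 * per_screen;

    printf("one 1080p screen : %.1f MiB\n", per_screen  / mib);
    printf("two 1080p screens: %.1f MiB\n", two_screens / mib);
    /* Roughly 24 MiB and 47 MiB -- far beyond the few MiB of eDRAM
     * that could share a die with the shader logic at the time. */
    return 0;
}
```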
IPW: First of all, why did you select Nvidia as your GPU vendor?
KK: Up until now, we've worked with Toshiba [on] our computer entertainment graphics. But this time, we've teamed with Nvidia, since we're making an actual computer. Nvidia has been thoroughly pursuing PC graphics, and with their programmable shader, they're even trying to do what Intel's processors have been doing. Nvidia keeps pursuing processor capabilities and functions because [Nvidia chief scientist] David Kirk and other developers come from all areas of the computer industry. They sometimes overdo things, but their corporate culture is very similar to ours. Sony and Nvidia have agreed that our goal will be to pursue [development of] a programmable processor as far as we can. I get a lot of opportunities to talk to Nvidia CEO Jen-Hsun [Huang] and David, and we talk about making the ideal GPU. When we say "ideal," we mean a processor that goes beyond any currently existing processor. Nvidia keeps going in that direction, and in that sense, they share our vision. We share the same road map as well, as they are actually influenced by our [hardware] architecture. We know each other's spirits and we want to do the same thing, so that's why [Sony] teamed with Nvidia. The other reason is that consumers are starting to use fixed-pixel displays, such as LCD screens. When fixed-pixel devices become the default, it will be the age when TVs and PCs merge, so we want to support everything perfectly. Aside from backward compatibility, we also want to support anything from legacy graphics to the latest shaders. We want to do resolutions higher than WSXGA (1680x1050 pixels). In those kinds of cases, it's better to bring everything in from Nvidia rather than create [a build] from scratch ourselves.
IPW: Microsoft decided to use a unified-shader GPU by ATI for its Xbox 360. Isn't a unified shader more cutting edge when it comes to programming?
KK: The vertex shader and pixel shader are unified in ATI's architecture, and it looks good at first glance, but I think it will have some difficulties. For example, there are questions about where the results of vertex processing will be placed, and how they will be sent to the shader for pixel processing. If one point gets clogged, everything stalls. Reality is different from what's painted on canvas. Taking a realistic look at efficiency, I think Nvidia's approach is superior.