Cell & RSX: what are Sony's plans?

Hello,
I don't understand why Sony made such an effort to build, with IBM, such a complicated and different processor (Cell) when, on the other hand, they have implemented a graphics processor that seems to be a really standard one, not innovative or exotic (read: customizable/optimized). What do you think Sony's plans are for those chips? Do you think the RSX won't be on the same level as the CPU, even when all its features are really well used? And if that's true, does it mean that 3D rendering is no longer the priority, in terms of complexity, that it was back in the PS1 era?

Bye
Asymmetric multi-cores are the state of the art and the way to go if you want maximum throughput per silicon area and per Watt.

Modern PC CPUs have evolved away from providing the fastest possible execution times (K6 3-cycle IMUL ftw!), toward extracting parallelism from everyday code, to use their relatively few execution units as effectively as possible, especially when executing code that engineers hate.
OTOH if you want to just crunch through large amounts of data quickly, you need many execution resources and a way to keep them fed, but the silicon overhead of clever management turns into a problem rather than a strength. The transistors it uses could have been spent on execution, to greater effect.
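To put that in crude code terms (just an illustrative plain-C sketch, nothing Cell-specific, all names made up): the first function is the branchy, latency-bound stuff a fat out-of-order core is built for; the second is the streaming work where you'd rather spend the transistors on raw execution units.

#include <stddef.h>

/* "Everyday" code: pointer chasing and unpredictable branches.
   A big out-of-order core with branch prediction earns its keep here. */
struct node { int value; struct node *next; };

int count_matches(const struct node *head, int key)
{
    int n = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        if (p->value == key)            /* latency- and branch-bound */
            n++;
    return n;
}

/* Streaming code: the same trivial operation over a big flat array.
   Here lots of execution units plus a way to keep them fed (SIMD, DMA)
   win, and clever out-of-order bookkeeping is mostly wasted silicon. */
void scale_array(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;            /* trivially parallel, bandwidth-bound */
}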

Sure you can use a PC CPU to do number crunching, but you're not going to win against a custom-designed bunch of execution units that gets to use the same silicon area and fab technology. The only problem here is that you might need a more general CPU to drive it; otherwise, if you want it to be able to manage itself, you're compromising the design back in the direction you wanted to avoid.

So in practice, you want the characteristics of both processor types.

You want at least one somewhat normal CPU core in a system to do housekeeping and to orchestrate all these little critters, but slapping together multiple large and branchy-code-friendly CPUs is wasteful.
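Conceptually it looks something like this (a generic pthreads sketch with made-up names, not the actual Cell SDK or libspe interface): one "general" thread does the housekeeping and hands out the work, and a pool of simple, throughput-oriented workers just grinds through it.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 6                 /* stand-in for a pile of simple cores */
#define NUM_JOBS    24
#define JOB_SIZE    1024

typedef struct { float *data; int count; } job_t;

static job_t queue[NUM_JOBS];
static int next_job = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_job >= NUM_JOBS) {   /* nothing left to do */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        job_t job = queue[next_job++];
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < job.count; i++)   /* the actual number crunching */
            job.data[i] *= 2.0f;
    }
}

int main(void)
{
    static float buffers[NUM_JOBS][JOB_SIZE];
    pthread_t workers[NUM_WORKERS];

    /* The "normal" core sets up the work... */
    for (int j = 0; j < NUM_JOBS; j++) {
        queue[j].data = buffers[j];
        queue[j].count = JOB_SIZE;
    }
    /* ...then farms it out to the little critters and waits. */
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_create(&workers[w], NULL, worker, NULL);
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(workers[w], NULL);

    printf("all jobs done\n");
    return 0;
}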

In a closed-box single-task, single-user system, single-thread performance isn't as much of a priority as it would be on a PC, and ~same performance profile/thread portability between execution cores is completely irrelevant. Thus embedded can take leaps the PC market cannot, as soon as it makes sense.

Cell builds heavily on IBM's preexisting PowerPC ISA. The project might have been expensive and long in the making, but I don't perceive it as risky or poorly thought out. The overall idea is pretty sound, the PPC building blocks have existed and have proven themselves in practice, and it has been understood for a good while that other approaches to increased CPU performance have certain issues that may be acceptable for a PC that needs to run legacy software as fast as possible, but aren't such hot ideas for a system designed from a clean slate. The goals were pretty clear. It took so long because the aim was to get it just right, not because it was a struggle to get it finished at all.

Oh and re RSX, I don't think it's a poor choice, it's a pretty fast GPU and a reasonable choice for the time frame where a decision had to be reached, it's just less impressive and IMO less elegant than the Cell BE. Cell is somewhat of an engineer's wet dream come true. It's pure and right.
 
The Graphics Synthesizer was 279 mm^2 large. nVidia or ATi probably would've delivered twice the performance for that silicon cost, and PowerVR probably would've delivered three times it.
 
The Graphics Synthesizer was 279 mm^2 large. nVidia or ATi probably would've delivered twice the performance for that silicon cost, and PowerVR probably would've delivered three times it.

...Matrox four times, and S3 five times...

Numbers out of our asses ftw!!
 
Benchmarks of actual games have compared those companies' graphics chips and indicate the performance difference.

Virtua Fighter 4 was coded natively for both NAOMI2 and PS2 by SEGA AM2, and PC benchmarks comparing those companies' chips are a staple at dozens of hardware websites.
 
i'm betting that Sony just wanted a simple, reasonably powerful GPU but one that was not top of the line.
What wasn't top-of-the-line about the G70 when it was included in the PS3? It may not be top-of-the-line now, but you can't add a GPU to a console just a few months before it launches!
 
...Matrox four times, and S3 five times...
Numbers out of our asses ftw!!
Lazy8's numbers are pure guesswork, and it's not surprising to see PowerVR placed ahead of nVidia... but Lazy8 has a point worth considering. Given GS's costs, what would the other GPU manufacturers have produced? That'd be a long and complicated discussion though, as GS included eDRAM, whose absence would have made for a very different machine, and I dare say a less scalable one. GS's simple brute-force approach has proven very flexible in the long run.
 
Tile-based deferred rendering afforded more effective bandwidth than eDRAM, as well as more fillrate and unconditionally better image quality, and it didn't trade off die area to do it.
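Roughly, the tiler's trick is this (a toy plain-C sketch with made-up sizes, nothing like real PowerVR internals): bin the geometry by screen tile, do all the overdraw in a small on-chip tile buffer, and touch external memory only once per finished tile.

#include <string.h>

#define SCREEN_W  640
#define SCREEN_H  480
#define TILE      32                       /* pretend on-chip tile size */
#define TILES_X   (SCREEN_W / TILE)
#define TILES_Y   (SCREEN_H / TILE)
#define MAX_PER_BIN 256

/* Bounding-box "triangles" only, to keep the sketch short; assumes the
   boxes are already clipped to the screen. */
typedef struct { int minx, miny, maxx, maxy; unsigned color; } tri_t;

static int      bin_count[TILES_Y][TILES_X];
static int      bins[TILES_Y][TILES_X][MAX_PER_BIN];
static unsigned framebuffer[SCREEN_H][SCREEN_W];   /* external memory */

void bin_triangle(const tri_t *tris, int idx)
{
    const tri_t *t = &tris[idx];
    for (int ty = t->miny / TILE; ty <= t->maxy / TILE; ty++)
        for (int tx = t->minx / TILE; tx <= t->maxx / TILE; tx++)
            bins[ty][tx][bin_count[ty][tx]++] = idx;
}

void render_tiles(const tri_t *tris)
{
    unsigned tile_buf[TILE][TILE];         /* stand-in for on-chip memory */
    for (int ty = 0; ty < TILES_Y; ty++)
        for (int tx = 0; tx < TILES_X; tx++) {
            memset(tile_buf, 0, sizeof tile_buf);
            /* All overdraw/blending happens in the fast tile buffer. */
            for (int i = 0; i < bin_count[ty][tx]; i++) {
                const tri_t *t = &tris[bins[ty][tx][i]];
                for (int y = 0; y < TILE; y++)
                    for (int x = 0; x < TILE; x++) {
                        int sx = tx * TILE + x, sy = ty * TILE + y;
                        if (sx >= t->minx && sx <= t->maxx &&
                            sy >= t->miny && sy <= t->maxy)
                            tile_buf[y][x] = t->color;
                    }
            }
            /* External bandwidth cost: one write of the finished tile. */
            for (int y = 0; y < TILE; y++)
                memcpy(&framebuffer[ty * TILE + y][tx * TILE], tile_buf[y],
                       TILE * sizeof(unsigned));
        }
}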

An AM2 graphics article on Virtua Fighter 4 revealed that the custom NAOMI2 version ran twice the polygon count of the custom PS2 build, under more demanding lighting and with other graphical advantages throughout, and NAOMI2 is smaller in die size. A PC benchmark like the one at Anandtech.com shows the GeForce2 MX, specced similarly to the Kyro2, performing about 2/3 as fast, or less: http://www.anandtech.com/printarticle.aspx?i=1435

A console can be readied for launch at the same time its chipset is being developed, and is able to ship only a few months after volume production of that chipset, so the GPU in a console should be current at launch. Also, a system sold at a very high price point should have a GPU with specs (ROPs, for example) comparable to the high end, at the very least.
 
What wasn't top-of-the-line about the G70 when it was included in the PS3? It may not be top-of-the-line now, but you can't add a GPU to a console just a few months before it launches!

But the PS3 just came out now.

Not to dovetail with the RSX Vertex thread too much, but I think consumers and developers had higher expectations based on a) the 12-month launch difference and b) the retail cost difference. Similarly, the die size disparities accentuate the difference between an adapted PC part and a console-specific design.

As for Megadrive's comment, and "top of the line", I guess that all comes down to perception. A G70 @ 550MHz was top of the line from Nvidia in mid-2005. Early 2006 saw dual-GPU designs, and fall 2006 saw G80, with huge leaps in:

1. Performance
2. Efficiency
3. IQ
4. Featureset

But is G80 realistic in a closed box on 90nm? Probably not. Definitely not within Sony's silicon budget. If Sony had known that a) NV could deliver G80 samples in H1 2006 and volume in H2, and b) NV could offer a custom "midrange" version to meet the power/heat/silicon targets Sony had, then I think they would have gone that way. Basically, Sony was late to the game with regard to GPU decisions. They didn't want to partner with ATI, so that left NV.

No time for a custom design like Xenos.
No time or silicon budget for [midrange] G80 (G80 was too late in the dev cycle for Sony's RSX/launch target dates)

So Sony got the best "top of the line" that those constraints allowed. But could Sony have had a better GPU if they had contracted NV in 2002/2003? I think yes. If NV had kept pace with an H1 2006 G80 schedule and Sony had aimed for a fall 2006 PS3 launch, could we have seen a G80 derivative aimed more at the console space? Again, I think yes.

I think with more planning Sony/NV could have done better with the die space RSX consumes. But on the other hand, given the timeframe and budget for RSX and the companies making bids, Sony got the best bang for the buck for that die space.

It all comes down to benchmarks and perceptions. I can see it both ways, but as a consumer I do look at the jumps made last gen (Dreamcast => PS2 => Xbox) within a 24-30 month period, and at the scaling costs of each, and I have to say that a release 12 months later at a higher platform cost does, based on history, set higher expectations among some early adopters. Most won't care, and RSX is more than capable of doing the tasks it is being asked to accomplish.

But is it top of the line in Fall 2006? I would say no. For single chip solutions that is G80. But then again RSX die area doesn't allow that sort of top of the line. Just my opinion.
 
A console can be readied for launch at the same time its chipset is being developed, and is able to ship only a few months after volume production of that chipset, so the GPU in a console should be current at launch. Also, a system sold at a very high price point should have a GPU with specs (ROPs, for example) comparable to the high end, at the very least.

Erm, not quite.
The RSX is like a PC part, but it is not a PC part; it is a customised version of a PC part. The changes include a different internal structure, a different memory structure and a different I/O bus. Another change is that Sony and Toshiba make the RSX chips, so the chip needs to be ported to their process. On complex chips that's a very long job; in the case of CPUs it takes about a year.

Leading-edge GPUs are now big, expensive and power hungry, have huge memory buses, and are made in small quantities. All of this makes them pretty much completely unsuitable for consoles.

A G80 could be modified for use in a console but if they did that you're talking about a delay of a year or more and the end product would not be as powerful.

--

As for why they put their efforts into Cell rather than the GPU, I think the answer is that the bottleneck is the processor, so you're much more likely to get a better system by building a more powerful CPU than a better GPU. If you look at PCs, hardware accelerators are becoming available for physics, AI and even networking. Cell can be used for all of these; a GPU can't.
 
As for why they put their efforts into Cell rather than the GPU, I think the answer is that the bottleneck is the processor, so you're much more likely to get a better system by building a more powerful CPU than a better GPU. If you look at PCs, hardware accelerators are becoming available for physics, AI and even networking. Cell can be used for all of these; a GPU can't.

O'Really? Doubletake! ATI and NV would disagree ;)
 
General-purpose performance is the measure of a "good" CPU. Multimedia processing and physics are often well suited to the parallelism of GPUs. If an area of silicon is going to be used primarily for graphics, it might be more effective as part of the processor that's specialized for it: the GPU.
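For example, something with the shape of a particle update, where the same few operations run over thousands of independent elements (plain-C sketch, made-up setup), is exactly the kind of work that maps well onto a GPU or an SPE:

/* The same short computation over many independent particles:
   no branches, no cross-particle dependencies, pure data parallelism. */
typedef struct { float x, y, z; }  vec3;
typedef struct { vec3 pos, vel; } particle;

void step_particles(particle *p, int n, float dt)
{
    const float g = -9.81f;               /* gravity, straight down */
    for (int i = 0; i < n; i++) {
        p[i].vel.z += g * dt;
        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;
    }
}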

The design for a system shouldn't have to be a choice between having either a stand-out CPU or a stand-out GPU. The system designer should commission the optimal parts for both.

Both the Dreamcast and X360 show that using a custom GPU in a console and launching while the technology is still current is possible and just takes better planning by the hardware developer.

Graphics are a main feature of a games console. High-end GPUs might be expensive, but so are high end consoles these days.
 
Isn't the Dreamcast situation more similar to the PS3's? Both machines took PC graphics cards (that weren't the latest and greatest) and shone because programmers could work on a fixed platform.

Some of the PS3/Xbox360 arguments do sound exactly like the old PS2/Dreamcast arguments. Especially the texture size ones :)
 
Isn't the Dreamcast situation more similar to the PS3's? Both machines took PC graphics cards (that weren't the latest and greatest) and shone because programmers could work on a fixed platform.
No. The Dreamcast graphics chip only made a very late (more than a year after the console's launch) token appearance for the PC under the name of Neon 250. It was too late to compete (Geforce 1 coming up).

edited bits: it's not even in the B3D chip tables. Should tell you something about its market impact.
 
Well there's no RSX board available for PC :)
My point is that there were similar arguments about features (especially texture size and quality) at the launch of the PS2.
It's very easy to make quick judgements based on comments (and feature lists), but optimising game code is way more difficult; even a difference in cache line size can have a knock-on effect on relative performance.
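A quick toy example of what I mean (plain C, made-up sizes): two loops doing identical arithmetic, where only one of them lines up with the cache lines, so their relative performance can shift a lot between two cache designs.

/* With, say, 64-byte lines, the row-major walk uses every float in each
   line it fetches; the column-major walk gets one float per line fetched. */
#define N 1024
static float grid[N][N];

float sum_row_major(void)                  /* friendly to the cache line */
{
    float s = 0.0f;
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            s += grid[y][x];
    return s;
}

float sum_column_major(void)               /* same maths, strided access */
{
    float s = 0.0f;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            s += grid[y][x];
    return s;
}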
 
So Sony got the best "top of the line" that those constraints allowed. But could Sony have had a better GPU if they had contracted NV in 2002/2003? I think yes. If NV had kept pace with an H1 2006 G80 schedule and Sony had aimed for a fall 2006 PS3 launch, could we have seen a G80 derivative aimed more at the console space? Again, I think yes.
This probably isn't the thread for it, but what customizations could usefully be added to make a part better suited for a console? eDRAM wouldn't be large enough to work without tiling, and that has its own issues. So, choosing not to go the eDRAM route, and not going with unified shaders because nVidia didn't have the tech, what other customizations are important?
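Just to show why a small eDRAM pool doesn't get you off the hook by itself (back-of-the-envelope C, using the commonly quoted 10 MB Xenos figure as the reference; the 32-bit colour + 32-bit Z format is my own assumption):

#include <stdio.h>

int main(void)
{
    const double edram_mb = 10.0;             /* Xenos-style eDRAM pool */
    const int w = 1280, h = 720;
    const int bytes_per_pixel = 4 + 4;        /* 32-bit colour + 32-bit Z */

    for (int msaa = 1; msaa <= 4; msaa *= 2) {
        double mb = (double)w * h * bytes_per_pixel * msaa / (1024.0 * 1024.0);
        printf("720p, %dx MSAA: %.1f MB -> %s\n",
               msaa, mb, mb <= edram_mb ? "fits" : "needs tiling");
    }
    return 0;
}

So plain 720p squeaks in, but as soon as you add multisampling the render target outgrows the pool and you're back to splitting the frame into tiles.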
 
For an entire year, until the TNT2 Ultra entered the discount market in late 1999, no competitor's graphics chip could match the DC's CLX2, and it was almost ten months before one that could match it was released into any segment of the consumer market at all.

Customization is always done to console GPUs for their system environment. That's never been relevant to whether a console can ship on schedule with the proper planning.
 
I think, more importantly, GPUs are too busy rendering graphics to do other things. Using GPUs for non-graphics tasks is unlikely to take off IMO, at least for a few years, as you'd be losing the graphical gains that everyone wants.
 
I really didn't know which thread to post this in, so I'm posting it here.

Preview Build: Sony Shares First Party PS3 Secrets

Hi, this is Jamil Moledina, the executive director of the Game Developers Conference. I'll be blogging weekly under the heading "Preview Build" to alert you to some of the latest news on GDC, along with the context as to how they fit into emerging trends in game development.

Today, we posted two new talks to the site, Real-World SPU Usage and RSX Best Practices, that expose publicly for the first time the tools and techniques that Sony's first party developers used to create their launch titles.

These intermediate level talks go beyond the simple architecture descriptions previously disclosed to describe the real-world lessons of building for the PlayStation 3. Furthermore, these sessions aim to simplify the complexity cited by some developers in creating games for the platform. Anyone considering or who is already knee-deep in programming for the next gen platform must attend these sessions.

Both talks were created by Mark Cerny, the Game Developers Choice Awards Lifetime Achievement Award recipient for 2005. He will lead the RSX session, while Chris Carty from SCEE ATG will lead the SPU session. Mark played a key role in developing many of these tools, and we felt it was necessary to bring some of this learning to the greater GDC community. After all, the whole reason for GDC is to support the development of better games, and what better way to showcase a platform than to facilitate the development of rich and varied content by the entire game development community.

This is a method that has worked well for Microsoft, which continues its tradition of providing a full-day workshop at GDC dedicated for developing games for their platforms. Stay tuned for speaker and content details for this year's Microsoft Game Developer Day.

Any thoughts? (Sorry if it's old :oops: )
 