Yoshida confirms SCE working on new hardware

Yeah, I read that leak too, a year or so ago.
But I don't think Sony will do that; they will just remaster those games in HD to sell them to the masses, and they will buy it. At least, I would do that: it's a nice, cheap way to make some money on old IPs and franchises.

We will see in a couple of years or so.

There are over 2000 PS2 games, and hardly all of them are worth remastering in HD, but selling them discounted on PSN, just like the PS1 games, would be money for nothing.
 
Like I said earlier, think about it and re-read all about the G80 and its arrival. The G80 would have been a thermal nightmare if it had been chosen for the PS3 in either 2006 or 2007, mainly because of the 90nm process. It really would have made a lot more sense to wait for 65nm (still too much power consumption and heat), and it would have been perfect as a G92b at 55nm, but that's like three years later, in a November 2008 launch. By that time there would be no problems with BR drive diodes and no problems disabling SPUs in Cell, and maybe the console would have had twice the RAM for system and graphics. But the problem is that the competition had been selling consoles since Nov 2005, and Sony would have had to deal with a large group of multiplatform devs treating the PS3 the way devs treated PS2-to-Xbox 1 ports, so you basically get diminishing returns.
A full G80, released at the same time as the PS3, had 680M transistors and a 384-bit memory interface. Cut that in half and you get the same number of transistors as the RSX, which fits your power and heat budget, and a 192-bit memory interface, which would go nicely with 384MB of video memory instead of 256MB and 50% more bandwidth. It shouldn't have cost any more than the RSX from a production standpoint if you ignore the memory. In return, they could have cut another SPU out to further improve yields on the Cell, or even better, got rid of all the PS2 stuff, the boot flash memory, etc. on the motherboard like they eventually did, and sold a PS2 BC add-on at the same profitable price as the PS2.

A half G80 would still run rings around the G71. At the very least, it'd have a proper scaler.
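For what it's worth, here is a quick back-of-the-envelope check of those numbers; it's only a sketch that assumes the commonly quoted ~300M transistors for RSX and the 1.4 GT/s GDDR3 data rate from the PS3 spec sheet, so treat those figures as assumptions rather than anything official:

#include <stdio.h>

int main(void)
{
    /* Rough sanity check of the "half G80" idea above.  All figures are
       commonly quoted public numbers, used as assumptions for illustration. */
    const double g80_transistors = 680e6;   /* full G80, as quoted above      */
    const double rsx_transistors = 300e6;   /* commonly cited figure for RSX  */
    const double half_g80        = g80_transistors / 2.0;

    /* GDDR3 bandwidth scales linearly with bus width at a fixed data rate;
       1.4 GT/s matches the PS3 spec sheet's 700 MHz GDDR3. */
    const double data_rate = 1.4e9;                        /* transfers/s       */
    const double bw128     = data_rate * 128.0 / 8 / 1e9;  /* GB/s, 128-bit bus */
    const double bw192     = data_rate * 192.0 / 8 / 1e9;  /* GB/s, 192-bit bus */

    printf("half G80: %.0fM transistors (RSX ~%.0fM)\n",
           half_g80 / 1e6, rsx_transistors / 1e6);
    printf("128-bit: %.1f GB/s, 192-bit: %.1f GB/s (+%.0f%%)\n",
           bw128, bw192, 100.0 * (bw192 / bw128 - 1.0));
    return 0;
}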
 
I seriously doubt that the 1/2 G80 would have been mature enough to approach the yields necessary to provide the number of PS3s needed for launch while being cost effective. Also, I doubt that the PS3 would have been ready for holiday '06 with a redesigned Cell.

In all likelihood the RSX was plan B. Sony misjudged their semiconductor ability leading up to this generation, and it ended up hindering the success of the PS3.
 
1/2 G80 isn't any bigger than the G71, so it'd be just as mature IMHO.

They wouldn't need a redesigned Cell, just more die harvesting.
Sony seriously underestimated the progress made in PC GPUs, and they had no experience with a modern GPU before, hence the RSX.
 
They'd still have to add the XDR section to the GPU, and that's probably the main reason why the RSX is bigger than the G71 (by ~50mm^2). Also don't forget how long it took for a G8x derivative to even hit the market.* And it's not like nVidia needs a couple million units at launch. Sony would, and production like that takes a while.

Your timing is just waaaaaaaaaaaaaaay off there.

*G84 arrived in April 2007, 170mm^2 @80nm. And even then, this design had its own thermal issues reported a year later.
 
Just like the RSX is clocked lower than a desktop G71, so could a 1/2 G80 PS3 derivative be, and a custom design done in parallel with the G80 but with half the die size could possibly have been produced in decent numbers. Or at the very least they could have hired the same ATI that made the XGPU; while that design is owned by MS, the experience the engineers gained would easily carry over to the new chip.

Anyways, we're stuck with a crippled 7800 for the foreseeable future.
 
At the risk of starting a virtual riot, why should Sony continue with Cell in the PS4? Sure, it's fast at the workloads it was conceived for, but according to some devs game code really isn't a good fit for the architecture. Devs have made impressive use of the Cell CPU, but I get the feeling they did so in spite of the architecture rather than because of it.

Everyone here seems to assume the Cell is the best possible CPU for a game console, but is that really true? Wouldn't a more conventional CPU/memory architecture make more sense next time?
 
Devs have made impressive use of the Cell CPU, but I get the feeling they did so in spite of the architecture rather than because of it.

Results on the Cell architecture are strictly polarized. If the performance is impressive, then it has to be because of the architecture (and, of course, the developers' skills). Otherwise, performance will tank unapologetically, and we won't hear about it (it can't ship).

Certain classes of problems may not map well to the architecture. They can require a lot of engineering time to rearrange the data to fit the architecture's needs. The other way is to do the work on the regular PPU, or to brute-force it using more cores.
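As a rough illustration of that data-rearranging point, here is a minimal sketch (ordinary portable C, not Cell SDK code) of the same trivial kernel written the conventional cached way and the local-store way. The dma_get()/dma_put() helpers are hypothetical stand-ins, implemented with plain memcpy, for what would be mfc_get()/mfc_put() plus a tag wait on a real SPU:

#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for SPU DMA: plain memcpy here, where a real SPU
   program would issue mfc_get()/mfc_put() and then wait on the DMA tag. */
#define LS_CHUNK 4096  /* pretend local-store staging buffer, in bytes */

static void dma_get(void *ls, const void *mem, size_t n) { memcpy(ls, mem, n); }
static void dma_put(void *mem, const void *ls, size_t n) { memcpy(mem, ls, n); }

/* Conventional core: walk main memory directly and let the cache cope. */
static void scale_cached(float *data, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= k;
}

/* SPU-style: pull a chunk into the small local buffer, work on it there,
   push it back, repeat.  Real code would also double-buffer so the next
   DMA overlaps the current computation; that restructuring is the
   engineering time referred to above. */
static void scale_local_store(float *data, size_t n, float k)
{
    static float ls[LS_CHUNK / sizeof(float)];
    const size_t step = LS_CHUNK / sizeof(float);

    for (size_t base = 0; base < n; base += step) {
        size_t count = (n - base < step) ? (n - base) : step;
        dma_get(ls, data + base, count * sizeof(float));
        for (size_t i = 0; i < count; i++)
            ls[i] *= k;
        dma_put(data + base, ls, count * sizeof(float));
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    scale_cached(a, 8, 2.0f);
    scale_local_store(b, 8, 2.0f);
    printf("cached: %.1f, local store: %.1f\n", a[7], b[7]);  /* both print 16.0 */
    return 0;
}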

EDIT:
Everyone here seems to assume the Cell is the best possible CPU for a game console

Nah... I don't think that's accurate.

We are talking about Cell because so much has been invested in it, and it seems to be rather capable. After all, how many desktop CPUs can run Blu-ray and play games? I am also curious whether Kinect-style depth sensing could run fast enough on Cell without the extra chip. The secure SPU angle is interesting too. I wouldn't be surprised if some of Cell's traits make it into a future CPU.

In other words, whoever comes next has big shoes to fill -- unless Sony changes the scope/game.
 
Just like the RSX is clocked lower than a desktop G71, so could a 1/2 G80 PS3 derivative be, and a custom design done in parallel with the G80 but with half the die size could possibly have been produced in decent numbers. Or at the very least they could have hired the same ATI that made the XGPU; while that design is owned by MS, the experience the engineers gained would easily carry over to the new chip.

Anyways, we're stuck with a crippled 7800 for the foreseeable future.

Umm, the big part of what he was saying was 2007. Imagine how much worse off the PS3 would have been if it had launched in 2007/08 compared to the X360.

Sony didn't have a choice of picking a G8x chip, none. The best they could have done if an Nvidia chip was the first choice (which is highly doubtful) would have been to contract Nvidia for a specific design similar to MS and ATI.

However, not being the first choice, and faced with the spectre of MS launching the X360 well ahead of the PS3 while costs ballooned out of control on the PS3, they had to go with a quick and dirty GPU. Thus a slightly modified G7x chip.

Regards,
SB
 
A full G80, released at the same time as the PS3, had 680M transistors and a 384-bit memory interface. Cut that in half and you get the same number of transistors as the RSX, which fits your power and heat budget, and a 192-bit memory interface, which would go nicely with 384MB of video memory instead of 256MB and 50% more bandwidth. It shouldn't have cost any more than the RSX from a production standpoint if you ignore the memory. In return, they could have cut another SPU out to further improve yields on the Cell, or even better, got rid of all the PS2 stuff, the boot flash memory, etc. on the motherboard like they eventually did, and sold a PS2 BC add-on at the same profitable price as the PS2.

A half G80 would still run rings around the G71. At the very least, it'd have a proper scaler.

An RSX with a 192-bit memory interface and 384 MB of video memory would also be a completely different beast compared to what we have today. If the RSX had looked like that, most devs wouldn't complain much about it.

The problem is (besides adding unacceptable costs) that Kutaragi designed the PS3 to allow very efficient cost reduction over the console's life span. A 192-bit memory interface would put a tighter restriction on how far the RSX could be shrunk.

And as Alstrong pointed out, the timing of the G80 made it a totally unrealistic alternative for the PS3.

I am also not so sure a half G80 actually would run rings around the RSX if you would pair it up with the same memory interface as the RSX. Besides the difference in the width of the memory interface, the G80 was launched with higher-clocked GDDR3 chips as well. The RSX is partly designed around its memory interface, so any comparison to another GPU should be done with that in mind.
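For scale, a sketch of the two memory setups using the commonly quoted spec-sheet figures (700 MHz GDDR3 on a 128-bit bus for RSX, 900 MHz GDDR3 on a 384-bit bus for the desktop 8800 GTX); treat these as assumptions:

$\text{RSX: } \frac{128\,\text{bit} \times 1.4\,\text{GT/s}}{8} = 22.4\,\text{GB/s}, \qquad \text{8800 GTX: } \frac{384\,\text{bit} \times 1.8\,\text{GT/s}}{8} = 86.4\,\text{GB/s}$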

NRP said:
At the risk of starting a virtual riot, why should Sony continue with Cell in the PS4? Sure, it's fast at the workloads it was conceived for, but according to some devs game code really isn't a good fit for the architecture. Devs have made impressive use of the Cell CPU, but I get the feeling they did so in spite of the architecture rather than because of it.

Everyone here seems to assume the Cell is the best possible CPU for a game console, but is that really true? Wouldn't a more conventional CPU/memory architecture make more sense next time?

Of course not! But what is the "best possible CPU for a game console" anyway? Some would argue it is a single-core CPU running at THz or PHz speeds. But we know that will not happen.

So we have the alternatives of homogeneous multicore CPUs and heterogeneous multicore CPUs. Homogeneous multicore offers easier programming, while heterogeneous multicore like the Cell offers better raw performance per die size and better memory utilisation through asynchronous memory handling. Which is best for you depends on what your priorities are.

Given the mature libraries for the PS3 and all the investment that has gone into them, I don't really see why they would give up on Cell for the next generation of consoles. But I don't know when the PS4 is planned either.
 
An RSX with a 192-bit memory interface and 384 MB of video memory would also be a completely different beast compared to what we have today. If the RSX had looked like that, most devs wouldn't complain much about it.

That's also keeping in mind that they would have to fit six GDDR3 512Mbit chips onto that package instead of four. Of course, they would then have to move it off-package, thus increasing board complexity a tad with all the trace wiring.
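For reference, the chip-count arithmetic behind those figures (each 512 Mbit chip is 64 MB):

$4 \times 512\,\text{Mbit} = 4 \times 64\,\text{MB} = 256\,\text{MB}, \qquad 6 \times 512\,\text{Mbit} = 6 \times 64\,\text{MB} = 384\,\text{MB}$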

mmm... On a side note, where are the pics of 45nm RSX + GDDR3?!

I am also not so sure a half G80 actually would run rings around the RSX if you would pair it up with the same memory interface as the RSX.
mm.. indeed. The ROPs saw some pretty significant boosts in the G8x.
 
I think Sony will launch late and not give any specific details of the console until the last possible minute, to stop copycats.

Sony pioneered motion controls and social gaming mid-gen with EyeToy etc.; Nintendo (and others) saw its potential and ran with it. Without EyeToy, no Wii. Nintendo have now jumped on the 3D bandwagon as well.
 
mmm... On a side note, where are the pics of 45nm RSX + GDDR3?!

There have been no pictures with the heat plate off. Very annoying indeed!
Some tech sites are not doing their job properly.

I am also very curious about the measurements of the new RSX. According to pocketnews Sony has confirmed it to be on a 40 nm process.
 
Well, I'm glad that they are doing things in a very straightforward fashion. But on the other hand, I really like Kutaragi's designs just because, IMHO, they are way ahead of their time. I know that it takes a lot of effort for developers and that it costs a lot of money to build tools and engines from the ground up to accommodate technology that isn't the norm, but I can't help but admire his way of thinking.

Edit:
Also, I wanted to add that I don't think the PS3 was a failure. Sony sacrificed the PS3 for a new disc format, and they won the format war because of it. I think that if the PS3 hadn't been around, HD-DVD would've won.
 
No, they went with a conventional multicore, shared cache architecture, as conventional as RSX was a conventional GPU. Sony went with a new CPU architecture designed around different principles to existing CPUs, and MS went with a new GPU architecture designed around different principles to existing GPUs. Neither broke the mould in both departments.

I would say MS went with a new CPU: new instruction set, new system interconnect, new core architecture, etc. Saying CELL is next gen because it has a lot of bad architecture from the '60s/'70s is hilarious. The fundamental data movement of the CELL architecture had been known to be a dead-end design for DECADES before it was used in CELL. There is nothing that a local store gets you that you can't get in a cache-based design, with added flexibility on top. Cell is fundamentally an architectural dead end, and it was even before it saw first silicon.
 
Besides, weren't 360 failures largely due to X clamp, design flaws, GPU heat, not the motherboard?

As best we know, it was down to thermal cycling of the C4 bumps on the GPU/CPU chips. The motherboard, etc., is fine from everything that has been reported. So it wasn't a cost issue but a design issue in the materials used at the substrate/die interfaces. The cost differences here are negligible, quite honestly; it just takes experience with manufacturing/fault grading/testing, which MS by and large didn't have as an organization at that time. Companies like Intel/AMD/IBM have whole departments of hundreds of people who do that stuff for a living and have 30+ years of institutional knowledge, which MS lacked. The same thing happened to Nvidia with many of their GPU chips.
 
Sony didn't have a choice of picking a G8x chip, none. The best they could have done if an Nvidia chip was the first choice (which is highly doubtful) would have been to contract Nvidia for a specific design similar to MS and ATI.
Yeah, that's what I meant by a 1/2 G80 design. They would have contracted NV, and NV would have made a custom chip based on their latest G80 architecture, but one that would fit the transistor budget.

Also, the software cost for Cell seems to be getting lower and lower. How much extra logic would be required to beef up the SPUs so they could also be used as conventional multicore CPUs if that's what a dev wants? A configurable SPU/PPU hybrid that would offer the best raw performance as an SPU, but could also be used as a PPU with reduced performance (mostly due to the memory bottleneck) if one isn't willing to spend the extra programming effort.

I'd say packing 16 of these SPU/PPU cores with 256KB of cache would be a nice evolution for the next Cell, and certainly enough for gaming applications. At launch devs could use the processors in "PPU mode" so they can get games out and running, and later move to SPU mode for some high-performance, down-to-the-metal coding, extending the console's lifespan.
 
BR drives are now $59 for a much faster unit with higher quality optics than found in the PS3 at Newegg,

DVD-ROM drives have effectively been at their baseline manufacturing prices for quite some time. Basically, for almost the same price (~$1-2) you can get a DVD burner instead. I wouldn't be surprised if the cost of distribution for a DVD-ROM drive was more than its bill of materials at this point.
 
The MB was probably 100+ on it's own.

The MB was nowhere CLOSE to 100+ on its own. I've seen BOMs for many motherboards, including server-grade motherboards with many more layers and more complexity, built in lower volume than the PS3 board, and they don't even cost 100+ to make. Once you take off the silicon chips, a motherboard's cost is dominated by the number of layers, material, size, and placed parts. The differential caused by things like the types of chokes and inductors is minor, especially on boards as small as the 360/PS3 boards.
 