HD was and still is a big selling point of the 360 and PS3. Yes, the number of households/users with HDTVs and monitors has grown immensely in the last 7 years, but I think the average uninformed public still responds to the golden ring of someone or something saying "HD", especially with the rise in resolution on mobile devices.
I do agree. Still, you have to admit that a lot of people played on SDTVs for a large part of this generation. If the generation had had a more reasonable length, the issue would have been lessened further.
For the uninformed public, a lot of games are "@1080p", if you see what I mean.
Still, I'm OK to concede that HD (720p) was a reasonable target at that point in time.
But I still think that at that point in time silicon budgets were too constrained. I'm happy with neither form of the PS360, and that's after 7 years.
I think a compromise had to be made; more on that later.
The state of game design wouldn't have been affected much by half the RAM in the current systems. What we see now is mostly just an outgrowth of last gen, the exceptions being the Wii and Kinect. The large amount of RAM, I think, is just another reason for devs to continue overtargeting visuals instead of spending valuable time and money developing gameplay. Just look at how many games have framerate issues, most likely due in part to pushing graphics too hard. Yes, a half-size memory footprint has the potential to make things worse, but it prevents overtaxing the GPU.
I do not agree. I would say what you're describing is more a side effect of the move to AAA games than anything else. Everything has to be pushed to eleven. The competition is rough, so risk with regard to gameplay or genres is avoided as much as possible. Overall I would say that nowadays games are way better than they were before, on average. I mean, remember all those old games that turned into slide shows more often than not? There are still issues, but I think it's way better now.
I remember the rumors of the dual core 1.5 GHz PPC + X1600 Wii spec, and in the end such a system would've worked to Nintendo's advantage in the long run. The GC commonality I think worked against them, making cheap shovelware way too ubiquitous and easy to make. Such a thing couldn't work too well on the HD sets, since the minimum level they had to aspire to would be much greater, and therefore more expensive.
Well, as much as I dislike the Wii hardware, history has proved it was the right choice. It made them a hell of a lot of money.
I think that what will have hurt Nintendo (if the Wii U fails) is holding on to the Wii for too long, when IMHO they had a shot at pushing something new one or two years ago. A real half-gen upgrade might have embarrassed both Sony and MS, who could not really afford to move forward at that point in time.
What I meant by a "super Wii" may not have been clear. I mean a system that would have been to the PS2 what the Wii was to the GameCube.
Like Nintendo, they should have avoided the too-modern PC GPUs. I mean, they are great, but moving from the T&L / DirectX 7 era to the RSX / DirectX 9.0c era cost a lot of transistors.
Especially when aiming at HD resolutions, the saved transistors might have proved useful.
I have a tough time figuring out what Sony's technical judgement of the PS2 hardware was. I'm mostly ignorant of the PSP design, but a quick look gave me the impression that they were moving away from the PS2 approach with regard to how geometry was handled.
In the PS2, the EE was in charge of all the geometry and the GS was a rasterizer / fragment processor (searching for a proper word).
In the PSP (going by Wikipedia, so I might be misled completely), it looks like the GPU is more complete.
So if I get it right, I would assume that Sony came to the conclusion that the PS2's approach was not the most efficient. To me that makes the Cell an even stranger choice, as it looks like it is trying to do what the EE was doing, and much more.
The point is, the PS2 achieved great things. They should have tried to push the design further.
By themselves, could they have designed more complex MIPS cores, vector units, and a GPU?
Even without aiming at Cell or RSX levels of performance, I believe the answer was no.
Did they need to aim that high? My belief is no as well.
My belief is that in 2006 they could and should have produced something more akin to the PSV.
Whatever architecture they would have used for the CPU (be it MIPS, ARM or PowerPC) is irrelevant.
As I see it, the problem is not only that they were at the limit of what they could do, but also at the limit of their expertise. I think there is quite some truth to Bill Gates' comment about them "not knowing what they were doing". They went for goals that were way above their expertise on the matter.
I believe Sony could not have designed a simple (if simple is a proper description for this) out-of-order, superscalar MIPS processor, yet at the same time they wanted to do the Cell and beat the whole world.
Same for the GPU (which it seems was aborted early): they wanted to compete with companies that were pushing out multiple GPUs a year, investing a lot, etc. In short, companies for whom GPUs were the actual business.
To me, they got ahead of themselves and lost control of the project.
It seems to me that they wanted an EE2 and a fragment processor, so a system acting more or less like the PS2.
Sony should have considered more seriously the successful solutions used by CPU and GPU manufacturers to make their products better, instead of trying to go for their own solutions. It's not like those solutions could not have been fitted into their design choices.
Looking at both the PS2 and the rest of the industry, the reasoning could/should have gone something like this:
1) We need/want a better/faster CPU.
* What do we have? A low-clocked, in-order, narrow CPU with a super short pipeline and no L2 cache.
* What are the industry leaders doing? Wider CPUs, longer pipelines, out-of-order execution, L2 caches.
* Was that an option when the PS2 was designed? No: too complicated, too big, costs a lot of power.
* Is it still out of reach? The answer should have been no, it is within reach now.
* Should we pursue the level of performance the industry leaders are providing? Another no: too big, too hot.
* Can we do it ourselves? It looks like that would have been another no.
=> We need a pretty simple but modern CPU, that's to say: a longer pipeline, higher clock speed, and on-chip L2.
That sounds like nothing but the Pentium Pro line: clock for clock, it beat the original Pentium by between 25% and 35%. On average, another 30% (clock for clock) was gained by integrating the L2 on die (from Katmai to Coppermine). Was that prohibitive for a system launching in 2006? No: Coppermine (the Xbox CPU, by the way) weighed in at 29 million transistors including its 256 KB of L2.
And that was an early implementation of those ideas.
Including further refinements and a somewhat bigger L2, and starting from the EE CPU, we might reach 200% of the sustained performance clock for clock. At 1.2 GHz, for example, that's 8 times the sustained performance, though still not marketable GFLOPS figures.
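To make the arithmetic above explicit, here is a quick sketch. The baseline clock and the 2x per-clock gain are the assumptions from the paragraph, not measured figures:

```python
# Rough check of the compounding claim above. Assumptions (mine, per the
# text, not from any spec): PS2 EE core at ~295 MHz as the baseline, and
# a 2.0x sustained per-clock gain from the deeper pipeline + on-die L2.
base_clock_mhz = 294.9      # PS2 Emotion Engine clock
target_clock_mhz = 1200.0   # hypothetical 2006-era clock target
per_clock_gain = 2.0        # "200% of the sustained performance clock for clock"

speedup = per_clock_gain * (target_clock_mhz / base_clock_mhz)
print(f"overall sustained speedup: ~{speedup:.1f}x")  # ~8.1x
```

The two factors multiply: roughly 4x from clock and 2x per clock gives the "8 times" figure.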
* Where are they heading? Multi-threading and symmetric multi-processing.
* Why? They are hitting the wall on single-thread performance, mostly for three reasons: clock speed, power, and main memory latencies.
* Should we get there? SMT sounds like an easy yes; SMP was indeed a tough arbitration. They may well have decided to squeeze out some extra single-thread performance instead. Intel and AMD barely made it around 2005/6 (I don't remember for sure when the first real dual cores happened; AMD was first).
=> To sum up: we need to design (or buy) a CPU with good single-thread performance. Pursuing high clock speeds is an iffy prospect. We are sure we need good SIMD. We are looking for a huge jump in performance.
2) We need/want better vector units.
* Do we need two of them?
As with SMP, it makes things more complicated. Software people are not that willing to adapt to SMP, and the same may apply to vector processors.
Looking at how modern GPUs operate, geometry is handled more efficiently by more specialized units within the GPU; that's what T&L, and in more modern GPUs (at the time of the PS3's design) the vertex pipelines, were handling. Let's go for one.
* How do we make it faster?
Higher clock speed, wider SIMD, and scatter/gather support.
* Do we want memory coherency? Yes.
* How do we keep it busy? No choice: SMT, to hide memory latency. OoO execution sounds way too complex to introduce.
* Can we do it alone? No.
=> We need to develop a heavily threaded (like 4-way) vector processor sharing the same memory space as the CPU. Actually, we might want to reuse the result in other devices.
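As an aside, "scatter/gather" just means vector loads and stores through per-lane computed indices. A minimal illustration using numpy's integer-array indexing (the arrays and values here are made up for the example):

```python
import numpy as np

# Gather: read from addresses computed per lane (a table lookup that a
# gather-capable vector unit can do in one instruction).
table = np.array([10, 20, 30, 40, 50])
idx = np.array([4, 0, 2])
gathered = table[idx]
print(gathered.tolist())   # [50, 10, 30]

# Scatter: write each lane to a computed address.
out = np.zeros(5, dtype=int)
out[idx] = gathered
print(out.tolist())        # [10, 0, 30, 0, 50]
```

Without hardware scatter/gather, each of those lanes becomes a separate scalar memory access, which is exactly the kind of thing that stalls a vector unit on indexed geometry data.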
3) We need a better GPU
*Can we pass on handling geometry on the GPU? (see above) No.
* What are the manufacturers doing? Moving away from the fixed-function pipeline.
* How is that going? (At the time of design) not that well. As I remember it, the DirectX 8.0 and 8.1 GPUs were more like state-of-the-art DirectX 7 GPUs and took serious performance hits in DirectX 8 mode and above. So it's promising but costly.
* What are the most noticeable improvements for the average end user? Texture filtering and MSAA greatly improve the IQ.
* Is bandwidth a constraint? Yes.
* What are the solutions? EDRAM or TBDR.
* Do we aim for HD? If yes, EDRAM becomes bothersome.
* Side note: EDRAM is costly and limits the processes one can use.
=> We need to design (or buy) a GPU with a really strong T&L engine (stronger than what was on the market, as manufacturers were moving away from it).
We need a GPU that makes optimal use of bandwidth and can handle high levels of texture filtering and anti-aliasing.
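A rough back-of-the-envelope calculation (my numbers, not from the discussion) of why EDRAM gets bothersome at HD: a 720p frame buffer with 4x MSAA needs far more than the ~10 MB of EDRAM that was affordable at the time, which is why the 360 has to render in tiles.

```python
# 720p frame buffer footprint with multisampling. Assumptions: 32-bit
# color plus 32-bit depth/stencil per sample, 4x MSAA.
width, height = 1280, 720
bytes_per_sample = 4 + 4    # color + depth/stencil
msaa = 4

fb_bytes = width * height * bytes_per_sample * msaa
print(f"720p 4xMSAA framebuffer: {fb_bytes / 2**20:.1f} MB")  # ~28.1 MB
```

About 28 MB against a ~10 MB pool: either you tile, drop MSAA, or render below HD.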
---------------------
I'm kind of toying here, but it looks a lot like what the students were doing when I worked for a business school: stay pretty open-minded, see what competitors are doing, what works and what doesn't, etc.
Then you match that against your strengths and weaknesses and see how it turns out.
I'm not saying that what I wrote is correct, just that I at least find it more coherent than what the PS3 project looks like from the outside.
With regard to partnerships, I would have hoped that Sony and Toshiba had joined forces and been up to the task of delivering the CPU and VPU. Failing that, IBM was the only option.
On the GPU side I would have wished for a partnership between Sony and PowerVR (or whoever owned them at the time), but anything truly custom from either NV or ATI would have worked, EDRAM or not.
As a side note, I would have wished Sony had noticed that RAM seems to be a godsend for developers and that more RAM can go a long way. I would have wished the RAM budget had been higher in the PS3, even if that meant cutting corners elsewhere. One option would have been to go with cheaper RAM.
A 1 GB unified pool of RAM would have done great, IMHO.