so PS3 will have ~534 million transistors + other questions

ralexand said:
Being able to stick more Cell chips in for future gen also will really make backwards compatibility very simple.

Yeah, if they use Cell and just scale it, B/C should be no problem - especially if they use NVidia once again, which seems likely.

Speaking of which, since NVidia seems entrenched now, I wonder whether the GPU for PS4 would be a custom chip built around NVidia's vision, catering to PS4's needs, or a custom chip built around Sony's vision, independent of NVidia's PC roadmap.
 
I hope the growth from PS3 to PS4 (for both the CPU and the GPU) is like the growth we saw from PS1's 'GPU' (not a real GPU, but that was its name) to PS2's GS -- which was, roughly speaking, 16 of them in parallel. In other words, 16 enhanced PS1 GPUs plus eDRAM.

Okay, well, I don't actually know how accurate it is to say that the Graphics Synthesizer's 16 parallel pixel engines/pipes are like 16 enhanced PS1 GPU cores plus eDRAM, but that's what I've seen bandied about on message boards over the years.



Hmm,
the transistor count of PS2's GS was 43 million. Between 7 and 12 million of those were logic, the rest eDRAM.

anybody know the transistor count of PS1's 'GPU' ?
(which was just a rasterizer)

In fact, a breakdown of PS1's entire chipset, including transistor counts, would be really cool.

The PS1 CPU was built from a MIPS R3000A core plus the GTE, plus the MDEC (motion decoder), plus a DMA controller and a few other things (maybe integrated I/O). Sony and LSI Logic did the entire PS1 CPU from those components, while I believe Sony alone did the PS1 'GPU'.


xbdestroya said:
Yeah, if they use Cell and just scale it, B/C should be no problem - especially if they use NVidia once again, which seems likely.

Speaking of which, since NVidia seems entrenched now, I wonder whether the GPU for PS4 would be a custom chip built around NVidia's vision, catering to PS4's needs, or a custom chip built around Sony's vision, independent of NVidia's PC roadmap.

xbdestroya - yeah I was thinking along the same lines for PS4's GPU.

I'm thinking that Sony and Nvidia will combine their best technologies and ideas, since they'll have a total of at least 5 years to develop it instead of a year or two spent modifying an upcoming PC GPU to a greater or lesser degree.

Perhaps Sony, for PS4, will want to give its GPU team (and/or Toshiba's) a chance to redeem themselves after their "failure" to produce something that could ultimately be accepted for PS3.

But beyond that, I hope to see some radical new technologies emerge and be adopted for use in realtime graphics. Basically, the sky is the limit: raytracing, global illumination, everything we want that cannot be done now, or in the Xbox360/PS3/Rev generation. With all due respect, I hope that people such as Nvidia's current chief scientist (Kirk) do not hold back the PS4 GPU with his current (or circa-2004) attitude against some of the hardware dedicated and geared toward doing raytracing.
 
Megadrive1988 said:
...
Hmm,
the transistor count of PS2's GS was 43 million. Between 7 and 12 million of those were logic, the rest eDRAM.

4MB of eDRAM in the GS is ~ 32 million transistors...
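For anyone checking the arithmetic, here's a quick back-of-the-envelope sketch (a rough estimate only, assuming ~1 transistor per 1T1C eDRAM bit cell and ignoring sense amps, decoders and redundancy):

```python
# Rough, illustrative check of the GS transistor split (assumed figures only).
edram_bytes = 4 * 1024 * 1024         # 4 MB of embedded DRAM
edram_bits = edram_bytes * 8          # 33,554,432 bits
edram_transistors = edram_bits        # ~1 transistor per eDRAM bit cell
gs_total = 43_000_000                 # published ~43M figure for the whole GS
logic = gs_total - edram_transistors  # whatever is left over is logic
print(f"eDRAM ~{edram_transistors / 1e6:.1f}M, logic ~{logic / 1e6:.1f}M")
```

That puts the array at roughly 33-34 million transistors and leaves around 9-10 million for logic, which sits inside the 7-12 million range quoted above.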

Megadrive1988 said:
anybody know the transistor count of PS1's 'GPU' ?
(which was just a rasterizer)

in fact a breakdown of PS1's entire chipset, including transistor count, would be really cool.

I believe Quaz51 was compiling this info recently...maybe he'd like to share the findings? :)
 
I think that Cell will probably form the basis of PS4 at this point, simply because, if little else, it is extremely scalable. There's a big upside to that from an R&D perspective, but of course the downside is that there's no new 'revolutionary' architecture for us to speculate on for the next 5+ years.

Yeah, at least in the consumer's mind, the PS4 CPU will likely be a bigger, faster version of the PS3's. Not necessarily revolutionary, but I'm sure it will also be much more refined, based on the issues that come up this gen. Granted, that will probably only tickle the devs' fancy.

But limiting the upside to mostly being R&D reductions is inaccurate, imo. The fact that this will also mean an easier developer learning curve, improved tools, improved OS, etc., shouldn't be underestimated.

The x86 model isn't necessarily a bad model. The problem was that it wasn't well suited to gaming and media (and should probably have been killed a while ago, in many people's minds). CELL isn't perfect, but it is better suited to gaming and media and will probably improve in that direction as time goes on.
 
Personally, from what I can understand of Cell, the management of threads is the biggest issue (that, and the fact that SPUs are limited in flexibility and precision). It appears that there is even more complexity for programmers to contend with than last generation (barring the graphics API).

My guess therefore is that in late 2012, when there is a mature 32nm process (hopefully 20nm), they WILL HAVE TO put greater hardware control of thread handling / branch prediction and smart decision-making hardware back into the system.

Something they have forgone this generation to maximise raw throughput.

I therefore think this would knock out half the raw performance next gen, but make that power far more accessible.
 
By 2012, developers will be so used to writing for Cell that hardware management won't be needed. As it is, I don't know that anyone can say yet how hard it is/isn't to write for. I got the impression from the Toshiba demo that the video processing they were doing was distributed between SPE's automatically.
 
Shifty Geezer said:
By 2012, developers will be so used to writing for Cell that hardware management won't be needed. As it is, I don't know that anyone can say yet how hard it is/isn't to write for. I got the impression from the Toshiba demo that the video processing they were doing was distributed between SPE's automatically.

I'm sure it was.

It's a trivial case: you're doing the same contained thing to multiple independent pieces of data in parallel. It's about as hard as transforming verts in parallel.
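To make the vertex analogy concrete, here's a minimal sketch in plain Python (not SPU code; the transform matrix and vertices are made up for illustration). Each vertex gets the same contained operation and no result depends on another, so the work splits across workers without any coordination:

```python
# Minimal sketch of embarrassingly parallel vertex transforms (illustrative only).
from multiprocessing import Pool

# Hypothetical 4x4 transform applied to homogeneous vertices (x, y, z, w).
MATRIX = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

def transform(vertex):
    # Same contained operation per vertex; no vertex depends on another's result.
    return tuple(sum(MATRIX[row][col] * vertex[col] for col in range(4))
                 for row in range(4))

if __name__ == "__main__":
    vertices = [(1, 2, 3, 1), (4, 5, 6, 1), (7, 8, 9, 1)]
    with Pool() as pool:                      # farm the vertices out to workers
        print(pool.map(transform, vertices))  # collect the transformed results
```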
 
Qroach said:
So if you can transform one vert, you can do two at the same time essentially?


They have to be operated on in the same way for parallelism...at least that's what I understood. :p
 
Qroach said:
So if you can transform one vert, you can do two at the same time essentially?

No, just indicating that if you have no requirement for synchronisation across the data (i.e. one vertex result doesn't affect another, or one video stream has no impact on another), then parallelism is completely trivial; it's the equivalent of running multiple copies of Word. It's just not a hard problem. You hand the tasks out to the available processors in turn and collect the results at the end.

In this case, since the video decoder is likely to fit in SPU RAM, it's really just a case of farming out the data and collecting the results. From a parallelism standpoint, it's hard to imagine anything much easier.
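As a rough illustration of that "farm out and collect" pattern (plain Python with made-up stream data; it shows the parallelism structure, not real SPU scheduling):

```python
# Sketch of farming out independent streams and collecting the results.
from concurrent.futures import ProcessPoolExecutor

def decode_stream(stream_id, encoded_frames):
    # Stand-in for a self-contained decoder: no stream's output depends on
    # another stream's, so there is nothing to synchronise between workers.
    return stream_id, [frame.upper() for frame in encoded_frames]

if __name__ == "__main__":
    streams = {
        "stream-a": ["frame0", "frame1"],
        "stream-b": ["frame0", "frame1", "frame2"],
        "stream-c": ["frame0"],
    }
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(decode_stream, sid, frames)
                   for sid, frames in streams.items()]
        results = dict(f.result() for f in futures)  # collect at the end
    print(results)
```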
 
I agree the PS4 will be an extension of the PS3 Cell architecture. It may take a little time getting accustomed to programming for Cell to make the most of its potential. But by the time PS4 does come, developers will have a much easier time with game creation, which I believe Sony was looking at in Cell's creation, along with future scalability.

Backwards compatibility is a good thing :D
 
Shifty Geezer said:
By 2012, developers will be so used to writing for Cell that hardware management won't be needed. As it is, I don't know that anyone can say yet how hard it is/isn't to write for. I got the impression from the Toshiba demo that the video processing they were doing was distributed between SPE's automatically.

Multiple video stream handling != complex task.

Factor in PS4 being many times more complex and, Bob's your uncle, you have an overly complex coding task without help from hardware-level management.

I'm not saying things won't work, I'm just saying it will be difficult to get high performance without being extremely smart and having lots of time, money and effort.

This generation seems to be about smart software; I think next generation will lean on smart hardware. I just can't see how software devs will survive the curve next time without smart everything!

I know it's only an opinion on my part, and perhaps you hardware technicians can point me in the right direction on those ideas.
 
kyetech said:
This generation seems to be about smart software; I think next generation will lean on smart hardware. I just can't see how software devs will survive the curve next time without smart everything!
Well, Cell moves away from smart tech to get better performance per die area. I imagine the move will be towards smarter development environments, as that's an area you can improve without needing to design and fab new CPUs all the time. Cell will become amazingly cheap if its design doesn't change much; you squeeze loads of them into a machine to make it powerful, and resort to smart dev tools to make the most of them.
 