Predict: The Next Generation Console Tech

@ Squilliam

What royalties?

They pay Nvidia for the RSX, I think they pay IBM/Toshiba for the Cell now that they have divested much of their stake, they pay other members of the Blu-ray/DVD consortiums for their drive, they pay royalties for the XDR RAM, etc. The royalties on the GPUs are big enough to be mentioned in the financial reports of both ATI and Nvidia.

OT: What if they are using the Emotion Engine as the basis for a custom GPU? They could design a truly dumb 'GPU' around a few texture units and ROPs plus a certain quantity of embedded cache for bandwidth, which leaves the Cell processor to do all of the computational legwork for shaders etc.

They could have a 180-220mm^2 main CPU with an array of SPUs, but offload the fixed-function graphics work which requires extra bandwidth onto what is essentially a dumb external GPU specialised for those tasks. That way they only need the one memory pool.
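
To put rough numbers on that (purely back-of-envelope; the per-SPU area and the uncore overhead below are my own assumptions, loosely based on the ~221mm² 90nm Cell with 8 SPEs):

Code:
# Back-of-envelope: how many SPUs fit in a given CPU die budget.
# Assumed figures (not official): the 90nm Cell was ~221 mm^2 with 8 SPEs,
# so call it ~15 mm^2 per SPE at 90nm, halving (optimistically) per full node shrink.
area_per_spu_90nm = 15.0      # mm^2, assumption
scaling_per_node = 0.5        # ideal area shrink per full process node
overhead = 60.0               # mm^2 for PPE core(s), EIB, memory/IO interfaces (assumption)

for node, shrinks in [("65nm", 1), ("45nm", 2), ("32nm", 3)]:
    spu_area = area_per_spu_90nm * scaling_per_node ** shrinks
    for budget in (180.0, 220.0):
        n_spus = int((budget - overhead) // spu_area)
        print(f"{node}: ~{n_spus} SPUs in a {budget:.0f} mm^2 die "
              f"(~{spu_area:.1f} mm^2 per SPU)")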
 
The Wii has proven that less powerful hardware sells better, so I'm fully expecting Microsoft and Sony to release consoles on par with the Super Nintendo. Nintendo will go a step further and re-release the NES with full-body motion control.
It hasn't proven that less powerful hardware sells better, only that less powerful hardware can sell better if there's a gimmick or better games.
 
It's hard to really say at this point. While the SPUs weren't a good fit for the typical GPU workload in 2004, when the PS3 was being developed, due to programming constraints and the nature of the workload at the time, they may fit the future model of GPU workloads much better, given how programmable GPU architectures are becoming beyond the first generation of compute shaders and how the ROPs and texture units are shrinking as a proportion of die area.

We know Cell is pretty good at ray tracing, but how is it at rasterisation? Can those SPUs set up triangles and fill them fast enough to be competitive? Can a large number of them be orchestrated to work efficiently as ROPs?

Next gen will still be 1080p, so most of the FLOPS will go to shading, tessellation and displacement, which puts more burden on triangle setup. If they decided to go all Cell, Sony could fit around 128 of the current SPUs into PS4. They may need to add texture units in there somewhere, and every generation they'd just add more SPUs. Everything in software except texturing: can that work within 3 years?
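
For scale, a quick sanity check on what 128 of the current SPUs would give you on paper (the only hard number here is the 25.6 GFLOPS single-precision peak per SPE at 3.2 GHz; the SPU count and clock are obviously speculative):

Code:
# Peak single-precision throughput of a hypothetical 128-SPU chip.
# Known: each current SPE peaks at 8 FLOPs/cycle (4-wide multiply-add),
# i.e. 25.6 GFLOPS at 3.2 GHz. The SPU count and clock are pure speculation.
clock_ghz = 3.2
flops_per_cycle_per_spu = 8      # 4 SIMD lanes * (mul + add)
spus = 128

peak_gflops = spus * flops_per_cycle_per_spu * clock_ghz
print(f"~{peak_gflops / 1000:.1f} TFLOPS peak single precision")   # ~3.3 TFLOPS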
 
But the EE is only the CPU, and it's only a MIPS CPU with 2 additional vector units. So it's not so different from the Cell design.

Yep, pretty much. The EE is a MIPS core, 2 vector units, a DMA engine, north and south bridges, and some image decompression hardware, designed by Toshiba engineers. Cell is just the continuation of that; they brought in IBM for the PPU. The SPU is still pretty much a Toshiba idea.

That's the architecture Kutaragi likes for PlayStation. If Sony had managed to get 65nm ready in time for PS3, I doubt the RSX would be in there. And since Kutaragi isn't at the helm anymore, I doubt PS4 will be EE based. They must have realised by now that software is everything, and that they could put any current off-the-shelf components into PS4 and save money on R&D.

Or they still don't know how to make good software, see the safe route (i.e. the PS3) as a failure, and have gone back to their old ways (i.e. the PS2). It can go either way at this point. Sony is a little unpredictable; it's their company culture.
 
We know Cell is pretty good at ray tracing, but how is it at rasterisation? Can those SPUs set up triangles and fill them fast enough to be competitive? Can a large number of them be orchestrated to work efficiently as ROPs?

They probably wouldn't need to. It seems the best ROP implementation we've seen to date is mating some number of ROPs onto fast, high-bandwidth embedded RAM. There's no reason why they couldn't essentially copy the Xbox 360 setup there.
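
The one catch with copying the 360 setup at 1080p is framebuffer size: either the eDRAM pool grows a lot or you keep the same tiling scheme. Rough numbers, assuming 32-bit colour plus 32-bit depth/stencil:

Code:
# Rough framebuffer footprint for an on-die (eDRAM-style) render target at 1080p.
width, height = 1920, 1080
bytes_color = 4      # RGBA8
bytes_depth = 4      # 24-bit depth + 8-bit stencil

for msaa in (1, 2, 4):
    size_mb = width * height * (bytes_color + bytes_depth) * msaa / (1024 ** 2)
    print(f"1080p, {msaa}x MSAA: ~{size_mb:.1f} MB on-die")
# ~15.8 MB at 1x, ~31.6 MB at 2x, ~63.3 MB at 4x -- versus 10 MB in Xenos.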

Next gen will still be 1080p, so most of the FLOPS will go to shading, tessellation and displacement, which puts more burden on triangle setup. If they decided to go all Cell, Sony could fit around 128 of the current SPUs into PS4. They may need to add texture units in there somewhere, and every generation they'd just add more SPUs. Everything in software except texturing: can that work within 3 years?

Can it work? Well, it's simply an extension of what developers are currently doing with the PS3, so yeah, I would say it can probably work. I can't say whether it's the best or most efficient way they could do it in terms of hardware cost and power efficiency, however.
 
We know Cell is pretty good at ray tracing, but how is it at rasterisation?
Very good architecturally, but it lacks the RAM management (texture reads!), which slows it down considerably, plus it's not as processing-dense as a GPU, so a given amount of GPU silicon will considerably outperform the same amount of Cell as is.

Perhaps most importantly, it'd be 'alien' tech without the dev tools. Where developers are comfortable with GPU shaders and nVidia/ATI's toolchains, a new architecture without those is going to be despised, even if it were more capable!
 
I agree with 3dcgi.
I also used to believe that Nintendo 'proved that less powerful hardware sells better'. But that's not entirely correct.
Nintendo proved that a new-ish idea, marketed really, really well to people who would never even think of picking up a normal Playstation controller, along with a decent enough library of software to take advantage of that idea in the short term, sells better.

The fact that the hardware happens to be a lot less powerful is not the cause of it selling better. What's inside the Wii is an instrument to sell that idea; it just so happens that it is less powerful than the PS3 and X360, and it made Nintendo a truckload of cash in a few years.
It's kinda genius, really. And this is coming from someone who played on the Wii a few times but got bored of it pretty quickly.
It really is the symbol of short-term, casual gaming gratification. I'm more of a medium-term, story-driven gamer, so for me the Wii doesn't really work.

Next generation, it won't matter (it never mattered) which console is the most powerful. What will matter, like it always has, is how the console is marketed, at what price, what it offers that others don't, and of course the software that runs on it.

It is up to Sony, MS and Nintendo to decide whether they want to lose hundreds of millions (once again, for some) chasing that 'best technology' holy grail, or to go after what history has proven is the best way to success. Sony 'got it' 15 years ago and seems to have almost forgotten. Nintendo always knew, and seems to have rediscovered it with the Wii. MS, I think, is on the right-ish track, just not quite there yet.
 
Perhaps most importantly, it'd be 'alien' tech without the dev tools. Where developers are comfortable with GPU shaders and nVidia/ATI's toolchains, a new architecture without those is going to be despised, even if it were more capable!

Good remark! It's simply a matter of learning curve versus cost.
 
About on par with NVIDIA ...

Could you please elaborate on this point? Thanks!

Aside:

How much of a learning curve would there be to run modern pixel shaders on a Cell architecture vs. running them on a GPU? It seems most developers have nailed the vertex shading problem on Cell, so why couldn't they figure out how to run pixel shaders on the same or a similar architecture, if it's merely an extension of the vertex shader problem?
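
To illustrate what I mean by 'an extension of the vertex shader problem', here's a toy sketch (plain Python, nothing to do with any real SPU toolchain): pixel shading is just a small program evaluated over a tile of independent elements, which is exactly the kind of batched SIMD work the SPUs already chew through. The hard parts are texture fetches and latency hiding, not the shading math itself:

Code:
# Toy "pixel shader" run over an 8x8 screen tile in software.
# Hypothetical example only: real SPU code would be vectorised C with DMA'd
# tiles and software texture caching, but the structure is the same.
import math

TILE = 8
light_dir = (0.577, 0.577, 0.577)       # normalised directional light

def pixel_shader(u, v):
    """Per-pixel program: fake a normal from the interpolants, do Lambert lighting."""
    nx, ny, nz = u * 2 - 1, v * 2 - 1, 1.0
    inv_len = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    n = (nx * inv_len, ny * inv_len, nz * inv_len)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    return ndotl                         # grey-scale output

# A rasteriser would hand each SPU a tile of covered pixels plus interpolants;
# the "shader" is then just a loop (or SIMD sweep) over independent pixels.
tile = [[pixel_shader(x / TILE, y / TILE) for x in range(TILE)] for y in range(TILE)]
print("\n".join(" ".join(f"{p:.2f}" for p in row) for row in tile))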
 
Disclaimer: trying again to find arguments backing Charlie's claim.
Could it be that Sony, royalties aside, is unhappy about the system's overall power consumption and thermal dissipation? After three shrinks the system remains huge and bulky, they still need a healthy cooling system, etc. I'm not implying that the situation is better for other systems, say the 360, and I know that Cell is highly effective in the watts-per-FLOP department, but still: could Sony have come to the conclusion that this kind of thermal/power profile is no longer what they want for their next system?

In regard to software implications, I think it would be really "Sonyesque" to want to differentiate themselves from whatever standard MS/Khronos set in the PC/360 realm. Sony is still building its army of studios, and I think their first-party offering on its own could justify buying the system, a bit like Nintendo (actually better, as they can cover way more genres). All they need is execution; they now have a compelling line-up in almost every genre: GT/WipEout/MNR for racing, GoW for hack-and-slash, Uncharted for action/adventure, FPS, whatever games use Sackboy (IMHO the most charismatic game character I've seen in a while, with huge potential if Sony is willing to get more traction in the casual and kids' market), etc.
The side effect of owning so many studios (or having such strong partnerships with some studios) is that Sony has gathered quite a few talented/genius people who are able to deal with whatever exotic architectures Sony gives them to work with. On top of that, if Sony were to create an EE v3 (given the timeline, v2 would be an unfair nickname) or a Cell v2, they would not be starting from scratch in regard to software.

Actually, say the rumour is true: is that really a problem? Performance may fall short of a PC part in disguise, but I don't think that is what drives the console market; it's about execution and Sony's vision for the product:
If Sony comes out with a low-power system in a nice package, leverages all the research they did on motion/image recognition, ships a "complete" system from day one (in regard to input/controllers I mean pad, nunchuk, cam, whatever tech they want to push), a functional and complete online offering, and launches with the proper line-up covering most bases, all this at an acceptable price, will the absolute 3D performance of the system be that relevant to the market's reception of the product? Honestly, I don't think so.

Pointless blabla:
Out of curiosity I did some searching on the PS2 hardware and found that the last hardware revision, where the EE and GS are on the same chip, comes in at 55 million transistors and ~60mm² (on a 90nm process).

Caution, a nice die shot is to follow :)
[die shot: si17612-express-delayer-die-photo-125x1.jpg]
I know it's pointless, but I wonder, with a silicon budget of 300mm² (5 PS2s), where a reworked EE+GS (so only one chip) would have gotten them this gen relative to the competition :?:
To some extent, if the rumour has some meat, Sony is wondering too :LOL:
They may not be happy about what the PS3 hardware got them with ~10x the transistor budget; more, actually, if you take into account the VRAM pool and its associated costs, plus the royalties and the huge R&D costs.
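
For what it's worth, the ~10x figure roughly checks out against the commonly quoted transistor counts (all approximate public numbers, treat as ballpark):

Code:
# Rough transistor-budget comparison, PS2 combined chip vs PS3 main chips
# (approximate publicly quoted figures, ballpark only).
ee_plus_gs = 55e6      # late-revision EE+GS on one die (figure from the post above)
cell = 234e6           # Cell Broadband Engine
rsx = 300e6            # RSX, roughly (G7x-class part)

ratio = (cell + rsx) / ee_plus_gs
print(f"PS3 main chips vs PS2 combined chip: ~{ratio:.0f}x the transistors")   # ~10x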
 
Of course that said, all of this presupposes the legitimacy of the rumor itself; we have to remember this is all thought experiment territory still.
This is also a rumour, but the DVD could get a second youth thanks to a new invention that would increase DVD capacity to 25,000 GB (25 TB!!).

It's totally compatible with current DVD drives.

At 24x reading speed it would take 9 days to load all the data. :D
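
The 9-day figure checks out, assuming 1x DVD read speed is 1.385 MB/s:

Code:
# How long would 25 TB take to read at 24x DVD speed?
dvd_1x_mb_s = 1.385                    # 1x DVD read speed in MB/s
speed_mb_s = 24 * dvd_1x_mb_s          # ~33.2 MB/s at 24x
capacity_mb = 25e6                     # 25 TB in MB (decimal units)

seconds = capacity_mb / speed_mb_s
print(f"{seconds / 86400:.1f} days")   # ~8.7 days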

The link to the article is in Spanish:

http://www.abc.es/20100528/ciencia-...evo-super-material-resucita-201005281254.html
 
Following up on the previous news, here is the paper on the synthesis of the new material:

http://www.nature.com/nchem/journal/vaop/ncurrent/full/nchem.670.html

Nature Chemistry
Published online: 23 May 2010 | doi:10.1038/nchem.670


Synthesis of a metal oxide with a room-temperature photoreversible phase transition

Shin-ichi Ohkoshi, Yoshihide Tsunobuchi, Tomoyuki Matsuda, Kazuhito Hashimoto, Asuka Namai, Fumiyoshi Hakoe & Hiroko Tokoro


Photoinduced phase-transition materials, such as chalcogenides, spin-crossover complexes, photochromic organic compounds and charge-transfer materials, are of interest because of their application to optical data storage. Here we report a photoreversible metal–semiconductor phase transition at room temperature with a unique phase of Ti3O5, λ-Ti3O5. λ-Ti3O5 nanocrystals are made by the combination of reverse-micelle and sol–gel techniques. Thermodynamic analysis suggests that the photoinduced phase transition originates from a particular state of λ-Ti3O5 trapped at a thermodynamic local energy minimum. Light irradiation causes reversible switching between this trapped state (λ-Ti3O5) and the other energy-minimum state (β-Ti3O5), both of which are persistent phases. This is the first demonstration of a photorewritable phenomenon at room temperature in a metal oxide. λ-Ti3O5 satisfies the operation conditions required for a practical optical storage system (operational temperature, writing data by short wavelength light and the appropriate threshold laser power).
 
So this isn't just the annual holographic storage hoax, but coupled with an actual newfound material?
 