Predict: The Next Generation Console Tech

Will Sony really shoot for the same BOM cost and hence price at launch?

Even if there are savings to be had on the Blu-ray drive and the CPU/GPU, wouldn't the cost of a bigger HDD and more RAM (probably faster, more expensive RAM) offset them?

With the Wii's success showing a new paradigm, my guess is Sony is going to be more conservative about the BOM than in previous generations.

Another guesstimate: the "PS4" will have something like 4GB of RAM, because by that time (2011/2012) PCs will probably have at least 16GB of RAM, and consoles (if I'm not mistaken) never get less than 1/4 of the RAM of their PC counterparts.

The HDD, if detachable, shouldn't be a problem, because the consumer could swap it at any time for a bigger one, etc.

(My wish is 8GB, matching the improvement of past generations: approximately 16x more RAM.)
 
If either next-gen Xbox or next-gen PlayStation use a significant amount of EDRAM, I'd expect total transistor budget to exceed 2 billion.

I agree with you; the original patent (32/64MB) and the excellent performance of the Xenos GPU (incredible fill rate with 4x MSAA) show that eDRAM is the way to go to save cost and bandwidth most of the time.
 
SCE still hasn't posted a profit since the PS3 was launched, and according to the current plan they are looking to do that in 2009. So, do you guys really think SONY is that eager to start the whole cycle over again after only 2-3 years, to recoup their mounting losses and add some profit on top of them?

Not to mention the developers, who by that time will have very mature tools, methods, engines, and practices in place for the PS3 and may have figured out how to efficiently develop games for the CELL. Would they want to throw all that away and start over, making games that must leapfrog the previous generation's in all respects in order to be deemed worthy of the generational leap, costing more to make just so they can look a little better or run at a higher resolution?

And what about those poor Japanese developers, who do not even have the budgets to make games for the current generation; won't they go completely extinct if the next gen rolls in that soon?

Sure, this happens every generation, but looking around the industry, I think the 6-year cycle just does not make sense anymore, and it certainly should not be maintained for the sake of tradition alone.

Game development on the PC has stagnated significantly in recent years, with the most prolific PC developers migrating over to the consoles and only a few die-hards still sticking solely to the PC. This means that, in spite of the ongoing hardware advancements on the PC, game software will no longer evolve at the same pace, so it won't present as stark a contrast to the PS3 as the console ages.

This means, I think, that by 2012 PS3 games will not look as outdated next to everything else available as PS2 games did in 2006. That, of course, would make the case for upgrading much weaker than it was in 2006. And developers who have plenty to work with on the PS3 will not feel as creatively and technologically walled off from the prevailing standards in gaming in 2012 as they did on the PS2 in 2006.

Gaming has, to a very large degree, been driving the development of bleeding-edge 3D hardware and CPUs on the PC. Given the PC's recent slow shift towards being a platform for casual low-spec games and MMOGs, and away from the polygon-phallic crowd, I see development of 3D hardware slowing down somewhat in the future, because the costs would be high and the potential for profit falling.

Desktop PCs will continue their trend of becoming cheaper, more efficient, and lower profile, or of giving way to laptops, and the 3D graphics companies will shift focus to supporting new markets, like low-energy 3D hardware for mobile phones, etc.

So maybe there won't even be a big enough leap in available 3D/CPU power to justify a generation shift motivated solely by the quest for better-looking graphics.

If you asked SONY in 2012 whether they would rather keep the current generation going, continually maximizing their profits as hardware prices fall, or stop making money and start losing it again for 2-3 years straight by initiating a new, expensive hardware launch, their answer should be obvious.

Ask developers whether they would rather keep improving their tech and programming techniques to make better-looking games at low cost, or have a hardware upgrade pushed on them that forces them to start over and pressures them, at huge cost, to make their games look that much more beautiful and next-gen so that gamers feel they are getting their upgrade money's worth; I think most would prefer sticking with the current hardware.

The only reason I see for SONY to make a generation change in 2012 is if the competition does one at that time or possibly even sooner and forces their hand for fear of being left behind.
 
SCE still hasn't posted a profit since the PS3 was launched, and according to the current plan they are looking to do that in 2009. So, do you guys really think SONY is that eager to start the whole cycle over again after only 2-3 years, to recoup their mounting losses and add some profit on top of them?

Not to mention the developers, who by that time will have very mature tools, methods, engines, and practices in place for the PS3 and may have figured out how to efficiently develop games for the CELL. Would they want to throw all that away and start over, making games that must leapfrog the previous generation's in all respects in order to be deemed worthy of the generational leap, costing more to make just so they can look a little better or run at a higher resolution?
There is a great possibility that the PS4 will use the "Cell 3" chip with 34 cores. It is based on the same architecture as the current Cell, so the same knowledge and tools would most likely carry over from the PS3. That would make developing for the next generation MUCH easier. Too bad that means developers will be able to max the console out much earlier in its lifecycle.

Also, the PS3 was reported as breaking even or making a profit on its cost since January this year. The PS4 will come out whenever they need it to in order to prevent MS from taking the next-gen market (be it 2010, 2011, or 2012).
 
Wasn't Sony's plan to use the Cell in their new consoles? I thought they once said they wanted to make a design that could be used in new consoles too, so a complete renewal of tools etc. wouldn't be needed once a new console came out.
 
Kamiboy, your point is interesting, but why should Sony and publishers stop developing for the PS3 as soon as, say, the PS4 is available?
The PS3 will have a userbase healthy enough to justify new game development, even if it's nowhere near the PS2's monopoly.

And Sony will have to push out a new system, as MS will; even more relevant, I think, is that Nintendo will release the Wii 2 sooner rather than later.

EDIT: More on topic, looking at Nvidia's latest acquisition, I tend to think that a super-unbalanced design (read: tiny CPU, huge GPU) could find its way into our homes.
 
There is a great possibility that the PS4 will use the "Cell 3" chip with 34 cores. It is based on the same architecture as the current Cell, so the same knowledge and tools would most likely carry over from the PS3. That would make developing for the next generation MUCH easier. Too bad that means developers will be able to max the console out much earlier in its lifecycle.

Also, the PS3 was reported as breaking even or making a profit on its cost since January this year. The PS4 will come out whenever they need it to in order to prevent MS from taking the next-gen market (be it 2010, 2011, or 2012).


http://appft1.uspto.gov/netacgi/nph...AN/"International+Business+Machines"+AND+simd

United States Patent Application 20080126745
Kind Code A1
Mejdrich; Eric Oliver; et al. May 29, 2008
Operand Multiplexor Control Modifier Instruction in a Fine Grain Multithreaded Vector Microprocessor

Look at that patent; now imagine 3 clusters connected by either a shared ring bus or, better, a crossbar switch, each cluster containing 3 VTEs + 1 BTE (with private L1 caches) + mailbox + I/O logic + shared cache (each VTE having two SIMD units attached to a shared register file), plus one small legacy cluster with an optimized/re-engineered PPE + SPU in isolation mode to run the Hypervisor and the OS, and the main memory controller (whatever Rambus has at that time).

Each VTE, as explained in that and other related patents, would be cache-based: a hardware prefetcher supporting software hints, no manual DMA management any longer, no Local Store memory, and a single 32x128-bit register file (a small thread context, allowing switching between lots of threads thanks to fast context switching and acceleration for commonly used synchronization primitives). I'd like to see 256-bit registers, Larrabee-style (edit: even though Larrabee's registers are actually 512 bits wide), as that would help real-world FP performance get much closer to the chip's theoretical peak.
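To make the "software hints, no manual DMA" point concrete, here is a minimal sketch in C++ using GCC/Clang's __builtin_prefetch (purely an assumption for illustration; the patent does not spell out the hint mechanism, and the prefetch distance of 64 elements is arbitrary):

```cpp
#include <cstddef>

// Sum an array on a cache-based core: no DMA lists, no Local Store.
// The prefetch is only a hint to pull a future cache line in early;
// unlike an SPU DMA get, correctness never depends on it.
float sum(const float* data, std::size_t n) {
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + 64 < n)
            __builtin_prefetch(&data[i + 64]);  // software hint: needed soon
        total += data[i];
    }
    return total;
}
```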

Performance does drop when you execute two scalar instructions, or one scalar and one vector instruction... basically whenever the work does not keep all 8 processing lanes busy (4 lanes for each of the two SIMD units in each VTE) computing the final result together (over a certain number of cycles...).
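For illustration, the same lane-utilization effect shows up on any 4-wide SIMD unit; here is a sketch with x86 SSE intrinsics (a stand-in purely for demonstration, since the VTEs would have their own ISA):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Packed add: all 4 lanes of the 128-bit unit do useful work.
__m128 add_packed(__m128 a, __m128 b) {
    return _mm_add_ps(a, b);
}

// Scalar add: only lane 0 computes while the other 3 lanes idle,
// so throughput on this unit drops to 1/4 of peak.
__m128 add_scalar(__m128 a, __m128 b) {
    return _mm_add_ss(a, b);
}
```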

Taking these vector throughput units as the main target for application writers, I'd leave the BTE to run all the book-keeping for them and be the kind of "offload/special functions accelerator" that some people tried to use SPUs as.
The BTE could be 4-way multi-threaded (SMT) to manage work for each VTE in the cluster as well as its own, and could be under direct OS control (it would also be the unit performing I/O functionality, handling stdio functions such as fopen, etc., for each VTE in the cluster).

The BTE might have a single FPU but no vector extensions (it would be mainly focused on integer processing).

What the programmer would see is a simple array of homogeneous processing units (the VTEs), with the chip's heterogeneous architecture abstracted away by the OS and various libraries.

Size of the shared cache in each cluster: between 1 MB and 2 MB (keeping less than 8 MB of SRAM cache in the whole chip, counting the PPE's L2 and the 256 KB LS of the lonely SPUv1 ;)).

Clock frequency at 45 nm (introductory manufacturing process unless 32 nm is already in high volume): 4 GHz.

FP performance: let's see...

1 VTE: 16 FP ops/cycle.

One cluster: 3 VTEs × 16 FP ops/cycle per VTE = 48 FP ops/cycle.

Three clusters: 3 clusters × 48 FP ops/cycle per cluster = 144 FP ops/cycle.

Peak SP FP performance: 144 FP ops/cycle × 4 GHz = 576 GFLOPS.

Depending on various factors, the clock speed could end up quite a bit higher, or maybe there would be room for one more cluster if you took away the "legacy cluster", as I put it. In that case you'd have a peak SP FP performance of 4 clusters × 48 FP ops/cycle × 4 GHz = 768 GFLOPS.

Or... you could keep the "legacy cluster" and add 2 of the new clusters as long as the chip size does not get unreasonable.


I think they will be more than happy with over 2x the performance of the CELLv1 chip inside the PS3... do not expect miracles from the PS4... the era of batting for the sky and then letting a quick series of manufacturing-process shrinks bring chip production down to sane cost levels already stopped being practical with the PS3 (notice that at 90 nm the CELL chip is smaller than the early figures for the PS2's EE processor back when the PS2 was near its launch).

A DP-FP-optimized chip would not be in consideration for the PS4 but, as happened with CELL, would be developed as an evolution of the SP design (e.g., optimizing the DP FP rate in each VTE).
 
Why not get 3 Cell 1:7s on a chip and get the same level of peak performance while using all the existing libraries, software designs, and expertise, as well as providing easier BC, possibly even with performance enhancements, instead of needing your developers to start again from scratch?
 
Why not get 3 Cell 1:7s on a chip and get the same level of peak performance while using all the existing libraries, software designs, and expertise, as well as providing easier BC, possibly even with performance enhancements, instead of needing your developers to start again from scratch?


I think that chip could get quite big, and you would already need quite a lot of work on a successor to the current EIB to make it fly. It could be done, and it would have its share of advantages, but I am not sure STI wants to keep going alone against the direction the entire industry is taking with both hardware and software development, which seems to prefer juggling lots of lightweight threads, relying on caches and memory-management abstraction, over custom management of DMA transfers and a big Local Store that is part of each thread's context. Can you imagine context switches of 256+ KB each by 16-32 units at the same time (32 units x 256 KB = 8 MB of state in flight per switch)? How far can you scale that and manage it well when you have 16, 32, 64, etc. units to feed and synchronize at the same time? That kind of CELLv2 would be the only mainstream high-performance multi-core processor taking that strategy; Larrabee, Niagara, and I am sure many others are following another approach, and that means the R&D resources going into software development of compilers, new libraries, programming-language extensions, etc. will be pushed heavily in one direction.

We will see...
 
Great find Pana: Habemus Papam!
Cell 2.0 is here, and it has no Local Stores; IBM is going down Intel's route from this standpoint. (BTW, Larrabee has 512-bit registers, not 256.)
 
A few more things:

1.) If you look at these VTEs you can clearly see the similarity to the current SPUs, down to the single lane used for scalar operations (with the other three lanes in that SIMD unit sitting idle). I'd expect the VTEs' ISA to be an evolution of the SPUs', with the opportune differences.

2.) One of the important lessons CELL forced people to learn with the SPUs' Local Stores applies just as well to CPU caches: if your program divides its work and data (its working set) into chunks that are way bigger than the cache, performance can drop quite a lot (see the sketch below). CELL is particularly unforgiving, but it is not as if developers who care about performance are unaware of this problem.

3.) The other, and certainly no less important, issue developers had to face with CELL is a preview of things to come in both the desktop PC and console arenas: a simple split of rendering + game logic and AI + physics into two separate threads is not enough to tap the massively multi-core systems that are around the corner. Game developers had never been forced to take threading to that fine-grained a level before.

I stand by this line of thinking when I say that it would not be a waste of all the experience built up through the development and deployment of CELL.
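To illustrate point 2, here is a minimal sketch in plain C++ (the 256 KB figure and the workload are hypothetical, chosen only to echo an SPU-sized chunk): processing a large array in cache-sized tiles keeps each chunk resident across repeated passes, where a naive full-array pass would stream the whole working set through the cache every time.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kTileBytes = 256 * 1024;               // assumed fast-memory size
constexpr std::size_t kTileElems = kTileBytes / sizeof(float);

// Apply 'passes' rounds of work to 'data', one cache-sized tile at
// a time, so each tile stays hot in cache for all of its passes.
void process_tiled(std::vector<float>& data, int passes) {
    for (std::size_t base = 0; base < data.size(); base += kTileElems) {
        const std::size_t end = std::min(base + kTileElems, data.size());
        for (int p = 0; p < passes; ++p)
            for (std::size_t i = base; i < end; ++i)
                data[i] = data[i] * 0.5f + 1.0f;             // stand-in for real work
    }
}
```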
 
Great find Pana: Habemus Papam!
Cell 2.0 is here, and it has no Local Stores; IBM is going down Intel's route from this standpoint. (BTW, Larrabee has 512-bit registers, not 256.)

Edit: I fixed the Larrabee comment I made.


Other than the bit about caches vs. Local Stores, there is something else, as I was telling Faf the other day... look at what has been added to the vertical/"Real Men" kind of SIMD processing employed...

I see swizzle instructions ;).

and

a critique of the use of permute instructions:

Typically, rearranging of operands in source registers is done by issuing a plurality of permute instructions that require excessive usage of temporary registers. Furthermore, the permute instructions may cause dependencies between instructions executing in a pipeline, thereby adversely affecting performance

Swizzle Instruction for Rearranging Operands

:D.
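As a rough illustration of the difference in spirit (x86 SSE intrinsics as a stand-in; the patent actually describes folding the rearrangement into the operand fetch of the following instruction, which SSE cannot express):

```cpp
#include <xmmintrin.h>

// Compute v.yxwz + w. One shuffle rearranges v's lanes (1,0,3,2),
// with no control-vector register or chain of permutes into
// temporaries of the kind the patent text complains about; the
// patent's swizzle modifier would go further and route the lanes
// during the add's own operand fetch.
__m128 swizzled_add(__m128 v, __m128 w) {
    __m128 v_yxwz = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 3, 0, 1));
    return _mm_add_ps(v_yxwz, w);
}
```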
 
There is a great possibility that the PS4 will use the "Cell 3" chip with 34 cores. It is based on the same architecture as the current Cell, so the same knowledge and tools would most likely carry over from the PS3. That would make developing for the next generation MUCH easier. Too bad that means developers will be able to max the console out much earlier in its lifecycle.

I think you just made a few developer heads explode. 7 SPEs are hard enough for developers to handle, but 34! I hope SONY has learned its lessons this generation and is aiming to create its next platform with ease of programming in mind. It would take some very serious rethinking of game-development paradigms to create enough parallelism in a game to continually feed that many SPEs.

With the mounting costs of game development, which will only get worse on next-generation hardware, the last thing developers need is a very challenging development environment. They wouldn't want to dabble in the micromanagement required to feed 34 SPEs. I think they would much rather abstract away all the complexities of the underlying hardware and just assume that as long as their algorithms are efficient, their games will run just fine.
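A minimal sketch of that kind of abstraction, in standard C++ (a hypothetical helper, nothing console-specific): the game code states what is parallel, and a generic parallel-for spreads the work over however many cores exist, whether 7 or 34.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <thread>
#include <vector>

// Run task(0) .. task(n-1) in parallel on however many hardware
// threads exist; the game code never touches DMA lists or core IDs.
void parallel_for(std::size_t n, const std::function<void(std::size_t)>& task) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::future<void>> futures;
    for (unsigned w = 0; w < workers; ++w) {
        futures.push_back(std::async(std::launch::async, [=, &task] {
            for (std::size_t i = w; i < n; i += workers)   // strided split
                task(i);
        }));
    }
    for (auto& f : futures) f.get();                       // wait for all
}
```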

Besides, I think the GPU is where next-generation hardware will focus most of its muscle, with the CPUs getting modest upgrades. For the PS4, I hope SONY will drop the CELL architecture and go with a CPU with a handful of very powerful and efficient cores running at a high clock rate.

Also, the PS3 was reported as breaking even or making a profit on its cost since January this year. The PS4 will come out whenever they need it to in order to prevent MS from taking the next-gen market (be it 2010, 2011, or 2012).

Nikko Citigroup's Kota Ezawa estimates the games division will lose $1.4 billion this fiscal year, following last year's $2.1 billion loss. And while he doesn't expect the business to be prosperous until late 2009, Ezawa applauds Sony's efforts to shrink the PS3's chips and tweak its design.

The above is from the last paragraph of the article you linked, so as you can see, profitability for the games division is scheduled for 2009. Last I heard, SONY is still losing about $120 or so on each console, or at least that was the official line.

I am not so sure SONY would follow suit if their competitor tried to cut the generation short again by, say, launching new hardware in 2 years. It would be financial suicide for both of them, but SONY, being in this business to make money, would not tolerate a strategy that does not focus on profits in the long term. $3.5 billion has already been lost in the last two fiscal years; SONY would have to make this money back and then add a huge pile on top of it for the PS3 to have made sense.

Cutting the PS3 loose and engaging in another costly chase after their competitor before the PS3 has had a chance to prove a worthy investment would just be stupid. Of course, the competitor, being aware of this and not having to worry about petty things like profit, could always try to break SONY's back by launching new hardware much sooner than they ideally should.
 
A little question regarding what could be the Cell 2: is this patent an STI patent or an IBM-only patent?
I mean, could Sony use this chip without paying royalties?

kamiboy, I agree with you regarding the weight of the GPU in next-gen systems ;)

Compared to Cell and its children, GPUs benefit from a shorter evolution cycle, say 1.5 years.

Nvidia seems serious about using the GPU as a general-purpose accelerator and is pushing the software to make that possible.
 
I think you just made a few developer heads explode. 7 SPEs are hard enough for developers to handle, but 34! I hope SONY has learned its lessons this generation and is aiming to create its next platform with ease of programming in mind. It would take some very serious rethinking of game-development paradigms to create enough parallelism in a game to continually feed that many SPEs.

With the mounting costs of game development, which will only get worse on next-generation hardware, the last thing developers need is a very challenging development environment. They wouldn't want to dabble in the micromanagement required to feed 34 SPEs. I think they would much rather abstract away all the complexities of the underlying hardware and just assume that as long as their algorithms are efficient, their games will run just fine.

Besides, I think the GPU is where next-generation hardware will focus most of its muscle, with the CPUs getting modest upgrades. For the PS4, I hope SONY will drop the CELL architecture and go with a CPU with a handful of very powerful and efficient cores running at a high clock rate.

Exactly!

It all comes down to programming -- at least that's what I have read. Today's programming languages, pundits say, are ill-equipped to tackle the heterogeneous multiprocessor environment.

Apparently, we need a new programming language. :unsure:

Rather than let the problem spiral out of control (by contributing to core inflation), AMD and Intel may be simplifying things by combining CPU and GPU. So, architecturally, we'll only need to deal with a single processor using one of today's serial programming languages. Internally, the chip will worry about the pernicious details -- parallelizing and optimizing data/instruction flow and execution.

So how does this affect the next console generation?

Well, as Sony is finding out, vertical integration is not a profitable way to solve an expensive problem; specialization is. Like Microsoft and Nintendo, they will probably use off-the-shelf parts for their next gaming console, and a combined CPU + GPU will undoubtedly be one of them.
 
Rather than let the problem spiral out of control (by contributing to core inflation), AMD and Intel may be simplifying things by combining CPU and GPU. So, architecturally, we'll only need to deal with a single processor using one of today's serial programming languages. Internally, the chip will worry about the pernicious details -- parallelizing and optimizing data/instruction flow and execution.

Is this now a closed problem? Last I checked, automatically parallelizing code that's not written to be parallelized was still pretty inefficient.
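A minimal illustration of the difficulty, in plain C++ (hypothetical loops): independent iterations can legally be split across cores, but a loop-carried dependency forces serial execution unless the programmer restructures the algorithm, and that restructuring is exactly what an automatic parallelizer cannot do for you.

```cpp
#include <cstddef>
#include <vector>

// Independent iterations: out[i] depends only on in[i], so an
// auto-parallelizing compiler may split this loop across cores.
void scale(const std::vector<float>& in, std::vector<float>& out) {
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] * 2.0f;
}

// Loop-carried dependency: each element needs the previous result,
// so this must run serially as written; extracting parallelism
// means changing the algorithm (e.g. to a parallel prefix scan),
// which is beyond automatic transformation.
void prefix_sum(std::vector<float>& v) {
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += v[i - 1];
}
```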
 
With the Wii's success showing a new paradigm, my guess is Sony is going to be more conservative about the BOM than in previous generations.

Another guesstimate: the "PS4" will have something like 4GB of RAM, because by that time (2011/2012) PCs will probably have at least 16GB of RAM, and consoles (if I'm not mistaken) never get less than 1/4 of the RAM of their PC counterparts.

The HDD, if detachable, shouldn't be a problem, because the consumer could swap it at any time for a bigger one, etc.

(My wish is 8GB, matching the improvement of past generations: approximately 16x more RAM.)


I have a hard time believing PCs will routinely have 16GB of RAM in 2011/12.

I just built one with 4GB and it seems like massive overkill... I don't see any game really using more than that for the next 1-2 years.

4GB seems like where you'd expect the next-gen consoles to be, based on the 8X theory: 512MB = 8X Xbox (16X PS2), so 8X 512MB = 4GB.
 
There is a great possibility that the PS4 will use the "Cell 3" chip with 34 cores. It is based on the same architecture as the current Cell, so the same knowledge and tools would most likely carry over from the PS3. That would make developing for the next generation MUCH easier. Too bad that means developers will be able to max the console out much earlier in its lifecycle.

Also, the PS3 was reported as breaking even or making a profit on its cost since January this year. The PS4 will come out whenever they need it to in order to prevent MS from taking the next-gen market (be it 2010, 2011, or 2012).

I agree. I think the PS4 will use "CELL 3" with 34 or 36 cores
(2 or 4 next-gen PPEs + 32 improved SPEs), for the 2011/2012 timeframe.

If the PS4 arrives later, say 2013-2014 (up to 8 years after the PS3), then I'd be looking at a 66 or 68 core CELL (64 SPEs), since we've seen CELL roadmaps that show a 64-SPE chip.

I'd like to see the PS4 scaled up from the PS3 in much the same way that the PS2's Graphics Synthesizer was (in some ways) a massively scaled-up, parallel PS1 graphics chip with added features & eDRAM.
 
A little question regarding what could be the Cell 2: is this patent an STI patent or an IBM-only patent?
I mean, could Sony use this chip without paying royalties?

I do not know... there are only IBM people in the Inventors field... still, that does not mean much... we'll see.
 
Too bad that means developers will be able to max the console out much earlier in its lifecycle.

No it doesn't. It just means that you will start to see more creative ways of rendering things from developers earlier in the lifecycle. Progress will always be there no matter how easy the architecture is to understand, as you can always find new ways to do things more effectively. The leaps will of course become smaller and smaller as graphical fidelity approaches photorealism, but there will always be progress.

If we didn't release any new console for the next 15 years, the games on the X360 and PS3 in 15 years would look significantly better than what we see today. This has nothing to do with architecture complexity, and more to do with the fact that you will always find better ways to code things.
 