Predict: The Next Generation Console Tech

An idea: PS4 comes equipped with a stock PowerXCell 8i (improved FP performance over the Cell in PS3), but memory, GPU, and bandwidth are boosted to a 2011-2012 'level'. Will anything good come from this concept?
 
The reality is that both Microsoft and Sony have lost so much money this generation that the chances of a 2010 launch of new hardware from either vendor are basically impossible. We have reached a point in the life-cycle where - maybe - they can start to claw back losses. Bringing another loss-leading console into the market next year is crazy talk.

I am unaware of anything that would make it impossible for Microsoft to develop and release a significant "upgrade" of their current console that would retail for $300 by the end of 2010. If Microsoft does this, then Sony is encouraged to do the same (offering substantially more performance for a 30% premium). As far as the technology is concerned, I doubt that Microsoft has anything to gain by waiting another year or two. In fact, I suspect they are hard at work thinking about how to come up with a more scalable architecture to keep up with Sony, and you can probably expect that that will take them more than a couple of years.

http://www.eetimes.com/showArticle.jhtml?articleID=216500010

Furthermore, it is quite possible that the step to the 8th generation of consoles will be somewhat less dramatic than the previous step: full backward compatibility by keeping with the same architecture; pretty much the same techniques, the same tools, etc. The economic situation is certainly affecting the industry, but instead of extending the console cycle, I believe it will only diminish the performance jump.

Additionally, if we were indeed 15 months from "launch" there would be far more awareness of any new hardware on the horizon. Microsoft's third party team is touring developers now, but they're talking about Natal, not Xbox Next.

I'm not so sure about this either. For both Microsoft and Sony, the most significant technological change involves motion control, and of course we've heard a lot about that. Performance improvements at this point can probably be kept quiet, with perhaps only a few first-parties aware of what to expect.

This discourse is probably little more than mental masturbation, but it is intriguing to try and predict the near future of digital entertainment - to see how far one can extend from what we know right now to something we can't quite see. When things finally come to pass, some of us will look back realizing how obvious everything was. I am hoping, perhaps entirely in vain, to be one of those!
 
PS3 is already UMA-ish and was to be fully UMA until the final cost of XDR hit them. I haven't seen anything from XDR2 that would suggest it'll be any cheaper. My guess is they'll pursue an option that lets the Taiwanese government subsidize their memory needs like everyone else.

There are several attractions of XDR2 (a rough bandwidth comparison is sketched after the list):

1) high bandwidth-per-pin, which lowers manufacturing costs

2) sustained throughput for data-hungry processors like a hypothetical Cell2

3) reduced data fetch size, which improves efficiency

4) overall higher efficiency than alternatives
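
To put rough numbers on point 1, here is a quick back-of-the-envelope comparison. The per-pin data rates are my assumptions from published Rambus and GDDR figures, not confirmed specs for any console; the point is just that a high per-pin rate lets a narrow (cheaper) bus compete with a much wider one.

```c
/* Rough bandwidth-per-pin comparison. All per-pin data rates are
 * assumptions taken from published figures, not confirmed specs. */
#include <stdio.h>

/* Peak bandwidth in GB/s for `pins` data pins at `gbps_per_pin`. */
static double peak_gb_s(int pins, double gbps_per_pin)
{
    return pins * gbps_per_pin / 8.0;   /* 8 bits per byte */
}

int main(void)
{
    /* PS3's XDR: 64 data pins at 3.2 Gbps/pin -> 25.6 GB/s */
    printf("XDR   (64 pins  @  3.2 Gbps): %6.1f GB/s\n", peak_gb_s(64, 3.2));
    /* XDR2 at an assumed 12.8 Gbps/pin: same pin count, 4x the bandwidth */
    printf("XDR2  (64 pins  @ 12.8 Gbps): %6.1f GB/s\n", peak_gb_s(64, 12.8));
    /* GDDR5 at ~5 Gbps/pin needs a much wider (costlier) bus to compete */
    printf("GDDR5 (256 pins @  5.0 Gbps): %6.1f GB/s\n", peak_gb_s(256, 5.0));
    return 0;
}
```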

Why would they go with a 40nm Cell when they're already at 45nm? That makes no sense at all. 4 PPEs are unnecessary; 2 PPEs would be more likely, with possibly a 3rd if they went with a unified-architecture approach. 28 SPEs is again unnecessary unless they went with a unified architecture; 12 is more likely for a standalone Cell CPU.

It would appear that 45nm represents a stage on the semiconductor manufacturing roadmap, while 40nm would be Toshiba's actual process. Toshiba, Sony, and IBM are developing a 28nm process that is compared to 32nm on the industry roadmap.

http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=211600040

Anyway, the PS3 Cell and the hypothetical Cell2 would be different designs optimized respectively for low power and performance. As for what Cell2 would look like in terms of PPEs and SPEs, I would expect something that is both regular and optimized to take advantage of available resources. The design is supposedly scalable, so multiple Cells on a die with interconnects (ring bus or crossbar) and shared interfaces to memory and the GPU (through FlexIO) seems reasonable and conservative. I do not expect developers will have a difficult time trying to figure out what to do with all those cores considering all the things that can be done with regard to collision, geometry culling, animation, lighting, and what have you. Beyond this, I have no idea.
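
To illustrate the kind of work I mean, here is a minimal sketch of data-parallel culling split across worker cores. It uses plain pthreads rather than anything Cell-specific (no SPE dispatch or DMA into local store), and all the names and counts are made up, so treat it as illustrative only.

```c
/* A minimal sketch of data-parallel work (sphere culling against one
 * plane) of the sort that maps naturally onto many small cores.
 * Plain pthreads stand in for SPE task dispatch; purely illustrative. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_OBJECTS 1024

typedef struct { float x, y, z, radius; } Sphere;
typedef struct { const Sphere *objs; int begin, end; int visible; } Job;

static float plane[4] = { 0.0f, 0.0f, 1.0f, 10.0f };   /* z + 10 >= 0 */

static void *cull_range(void *arg)
{
    Job *job = arg;
    job->visible = 0;
    for (int i = job->begin; i < job->end; ++i) {
        const Sphere *s = &job->objs[i];
        float d = plane[0]*s->x + plane[1]*s->y + plane[2]*s->z + plane[3];
        if (d >= -s->radius)            /* not fully behind the plane */
            job->visible++;
    }
    return NULL;
}

int main(void)
{
    static Sphere objs[NUM_OBJECTS];    /* zero-initialized: all at origin */
    pthread_t threads[NUM_WORKERS];
    Job jobs[NUM_WORKERS];
    int chunk = NUM_OBJECTS / NUM_WORKERS, total = 0;

    for (int w = 0; w < NUM_WORKERS; ++w) {
        jobs[w] = (Job){ objs, w * chunk, (w + 1) * chunk, 0 };
        pthread_create(&threads[w], NULL, cull_range, &jobs[w]);
    }
    for (int w = 0; w < NUM_WORKERS; ++w) {
        pthread_join(threads[w], NULL);
        total += jobs[w].visible;
    }
    printf("visible: %d of %d\n", total, NUM_OBJECTS);
    return 0;
}
```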

At this point, I doubt we'll ever see another Nvidia GPU in a PlayStation. It's not that everything is Nvidia's fault - Sony made its fair share of poor decisions too - but in the end (and that's what counts), they charged Sony a huge premium to repackage a 7800 with UMA support that wasn't even (fully) used, all the while using that money to finance their next-gen chip, which Sony couldn't even use (the original G80 was as big as Cell and RSX combined, with a 512-bit bus). Add to that, they launched the G80 3 days before the RSX launch, which made it effectively obsolete at launch, and you have Sony going wtf. To put it another way, Sony paid NV big bucks to create magic and instead got card tricks in return.

I do not find this compelling enough to suggest that Sony would go with another GPU designer.

As to eDRAM, it's basically there to stabilize your bit rate to the GPU in a UMA design. Yes, you can do some tricks to take advantage of the larger bandwidth, but ultimately you can't pass through more than you're fed. It may have made sense in 2005 for Xenos, but I don't believe it does today or in the future.
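
To put a rough number on why that stabilization mattered, here is a back-of-the-envelope estimate of pure ROP traffic. Every factor here is an assumption for illustration, not a measured figure from any real title.

```c
/* Back-of-the-envelope framebuffer traffic, to show how big a slice of
 * a unified bus the ROPs alone can eat. Every factor is an assumption. */
#include <stdio.h>

int main(void)
{
    double pixels   = 1280.0 * 720.0;  /* 720p render target */
    double samples  = 4.0;             /* assumed 4x MSAA */
    double bytes    = 4.0 + 4.0;       /* color + depth, bytes per sample */
    double overdraw = 4.0;             /* assumed average overdraw */
    double rw       = 2.0;             /* read-modify-write for depth/blend */
    double fps      = 60.0;

    double gb_s = pixels * samples * bytes * overdraw * rw * fps / 1e9;
    printf("Approx. ROP traffic: %.1f GB/s\n", gb_s);   /* ~14.2 GB/s */
    /* That is most of a 360-like ~22 GB/s unified bus, which is exactly
     * the contention a dedicated eDRAM pool is meant to absorb. */
    return 0;
}
```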

I see eDRAM as a lower-cost substitute for SRAM that can reduce fetches to main system memory when some limited dataset needs to be repeatedly processed. I can see why it wouldn't make a lot of sense within the PC architecture, where system memory is farther away and much less available, so GPUs have their own dedicated memory. I do not design GPUs, so there may be more significant trade-offs involved of which I am completely ignorant. The following is about IBM's recent advances regarding eDRAM, which might have some bearing on the next generation:

http://arstechnica.com/old/content/2007/02/8842.ars

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=197006869

A lengthy IBM article on the history and merits of eDRAM:
http://findarticles.com/p/articles/mi_qa3751/is_200501/ai_n9521086/?tag=content;col1
 
Very interestingly, Ars Technica hints at eDRAM being used by IBM in CPUs.
Thus, in a PS4 based on Cell2 and a GT300 derivative, eDRAM might be used in the CPU (especially as SPE local store) rather than the GPU. (I'm not fond of eDRAM in or alongside the GPU.)
 
I'm not so sure about this either. For both Microsoft and Sony, the most significant technological change involves motion control, and of course we've heard a lot about that. Performance improvements at this point can probably be kept quiet, with perhaps only a few first-parties aware of what to expect.

So they're launching in 15 months and not telling anyone about it! So not only are they developing a new console when they have no need to whatsoever, it's not going to launch with any third-party games?!
 
So they're launching in 15 months and not telling anyone about it! So not only are they developing a new console when they have no need to whatsoever, it's not going to launch with any third-party games?!

Maybe with all the Sega talk lately, Sony and MS are going to crib a play from the Saturn playbook. :eek:
 
Maybe Sony and MS will combine to make one machine that runs off happy thoughts and pixie dust?

And Sega and Nintendo will combine and come out with a machine that does full 1080p @ 240hz and is controlled by thought?
 
An idea: PS4 comes equipped with a stock PowerXCell 8i (improved FP performance over the Cell in PS3), but memory, GPU, and bandwidth are boosted to a 2011-2012 'level'. Will anything good come from this concept?

That would allow awesome-looking games with a terrible framerate. That Cell revision brings good FP64 performance, which helps extend its potential market. Some amount of FP number crunching can be moved to the CPU, but I feel the PPE would very much be a bottleneck. It already looked weak even next to a Pentium 4, and will look very weak in 2012.
 
I am unaware of anything that would make it impossible for Microsoft to develop and release a significant "upgrade" of their current console that would retail for $300 by the end of 2010. If Microsoft does this, then Sony is encouraged to do the same (offering substantially more performance for a 30% premium). As far as the technology is concerned, I doubt that Microsoft has anything to gain by waiting another year or two.

Yes, nothing is stopping them from releasing a new console from a technology standpoint. They do have something to gain by waiting, though, and that is a lot of money. Even an evolutionary upgrade from the 360 will cost them billions in R&D, manufacturing, and marketing. Why do that now when they can put it off a few years while raking in cash? They know Sony won't do a new launch any time soon; there is just no reason.

I guess my question is, why do you so adamantly believe MS will launch a new console next year when almost no one in the industry believes so?
 
So they're launching in 15 months and not telling anyone about it! So not only are they developing a new console when they have no need to whatsoever, it's not going to launch with any third-party games?!

I seem to recall that Microsoft officially announced the 360 about the middle of 2005 (May?) and launched later that year. A spec sheet leaked out about a year before the official announcement, and some details changed by launch (CPU speed & memory). The console was (still?) plagued by quality issues indicative of a rushed design. When did the industry know that Microsoft was launching in 2005, only 4 years after the launch of their original console? I do not know, maybe you do?

That was a major upgrade from their original console, with 8 times the memory, 20x the theoretical performance for the CPU, 10x(?) the performance for the GPU, etc. I am not expecting their next system to have a CPU with 500+ GFlops of theoretical performance, 4 GBs of GDDR5 memory, etc. As a consequence of low-balling the performance of the next iteration (others are expecting something far more powerful later), an earlier launch would fare better against the competition, especially 5 years after the 360 has been on the market. I would not expect the system to launch without 3rd-party support, but with a more modest upgrade, late support might not be a big problem. Especially if you consider that the major change involves motion control.

If Sony is concerned about their market position, then they are probably going to respond to an early Microsoft launch with one of their own (major assumption).

My guess as far as what Sony is capable of:

a) By 2010: 32-core Cell @ 40nm (~1 TFlop SP FP?)
b) By 2011: 64-core Cell @ 28nm (~2 TFlops SP FP?)

If these implementations incorporate certain changes that IBM included in their PowerXCell 8i variant, double-precision performance will be perhaps a bit less than half the single-precision figure (which may be useful in games - I don't know).
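
For what it's worth, here is the arithmetic behind those guesses, assuming each SPE keeps the PS3's 8 single-precision flops per cycle (4-wide SIMD with fused multiply-add) at 3.2 GHz. The core counts and the half-rate double precision are my assumptions, not anything announced.

```c
/* Sanity-checking the TFlop guesses above. Assumes each SPE keeps the
 * PS3 figure of 8 single-precision flops/cycle (4-wide SIMD with fused
 * multiply-add) at 3.2 GHz; clocks and core counts are speculation. */
#include <stdio.h>

static double gflops(int spes, double ghz, int flops_per_cycle)
{
    return spes * ghz * flops_per_cycle;
}

int main(void)
{
    /* (a) 32 SPEs @ 3.2 GHz, single precision */
    printf("32 SPEs SP: %6.1f GFlops\n", gflops(32, 3.2, 8));   /* ~819  */
    /* (b) 64 SPEs @ 3.2 GHz, single precision */
    printf("64 SPEs SP: %6.1f GFlops\n", gflops(64, 3.2, 8));   /* ~1638 */
    /* PowerXCell 8i-style enhanced DP at roughly half the SP rate */
    printf("64 SPEs DP: %6.1f GFlops\n", gflops(64, 3.2, 4));   /* ~819  */
    return 0;
}
```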

If you were Microsoft considering your current architecture, would you rather compare yourself to (a) or (b)? Even if you had GPGPU capability?


I am making many assumptions here, any of which could be wrong because I am not in the industry, and I hope no one takes my speculation too seriously!
 
I seem to recall that Microsoft officially announced the 360 about the middle of 2005 (May?) and launched later that year. A spec sheet leaked out about a year before the official announcement, and some details changed by launch (CPU speed & memory). The console was (still?) plagued by quality issues indicative of a rushed design. When did the industry know that Microsoft was launching in 2005, only 4 years after the launch of their original console? I do not know, maybe you do?
They must have known maybe a year before that; here are the launch games:
Amped 3 (2K Sports)
Call of Duty 2 (Activision Inc.)
Condemned: Criminal Origins (SEGA Corp.)
FIFA Soccer 06 Road to 2006 FIFA World Cup (Electronic Arts Inc.)
GUN (Activision)
Kameo: Elements of Power (Microsoft Game Studios and Rare Ltd.)
Madden NFL 06 (Electronic Arts)
NBA 2K6 (2K Sports)
NBA LIVE 06 (Electronic Arts)
Need for Speed Most Wanted (Electronic Arts)
NHL 2K6 (2K Sports)
Perfect Dark Zero (Microsoft Game Studios and Rare Ltd.)
Peter Jackson's King Kong: The Official Game of the Movie (Ubisoft)
Project Gotham Racing 3 (Microsoft Game Studios and Bizarre Creations Ltd.)
Quake 4 (id Software and Activision)
Ridge Racer 6 (Namco Ltd.)
Tiger Woods PGA TOUR 06 (Electronic Arts)
Tony Hawk's American Wasteland (Activision)
Even if they received the final dev kit 3 months before launch, publishers had to have known a while before that; I would bet close to a year.
MS clearly made a fast move, but thankfully the system was quite standard, with, I would guess, decent tools, APIs, etc. In regard to the original Xbox being "dumped", I guess publishers were not surprised either; even a company as big as MS can't support that level of bleeding for years.
That was a major upgrade from their original console, with 8 times the memory, 20x the theoretical performance for the CPU, 10x(?) the performance for the GPU, etc. I am not expecting their next system to have a CPU with 500+ GFlops of theoretical performance, 4 GBs of GDDR5 memory, etc. As a consequence of low-balling the performance of the next iteration (others are expecting something far more powerful later), an earlier launch would fare better against the competition, especially 5 years after the 360 has been on the market. I would not expect the system to launch without 3rd-party support, but with a more modest upgrade, late support might not be a big problem. Especially if you consider that the major change involves motion control.
I don't see why MS would be in a different situation than Sony or Nintendo when it comes to their next system. They will make their choices based on what they want to achieve and what can be done hardware-wise with regard to cost, power consumption, and heat dissipation. Most likely, by fall 2012 Sony or MS will use a 32nm process for the CPU and 28nm for the GPU. That said, I don't expect 500 GFlops from their next CPU, and it should be clearer now, after the PR mess we faced once again this gen, that GFlops alone are a poor indicator of CPU performance. If MS sticks with something close to Xenon, they would do better to improve it than to try to augment its throughput at all costs. I don't see in what way a modest update will help late support; basically, I expect the CPU to be a pretty standard SMP CPU stuck to a pretty standard/slightly modified DirectX 12 GPU. It will be accessible, and it's usual for launch games to not push the system. Say devs were unprepared to deal with 6 cores instead of 3: they would simply make do with three cores. Likewise, if they don't know what to do with some extra GPU capability, they will use it as a standard part.
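
To make the 3-vs-6-core point concrete, here is a minimal sketch of a job system that sizes itself at runtime, so the same code makes do with however many hardware threads exist. Plain POSIX threads, purely illustrative; nothing here is from any real console SDK.

```c
/* A minimal sketch of why "more of the same" cores is easy to absorb:
 * a job pool sized at runtime scales from 3 to 6 cores with no code
 * change. POSIX threads here are only illustrative. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_JOBS 64

static int next_job = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int job = next_job < NUM_JOBS ? next_job++ : -1;
        pthread_mutex_unlock(&lock);
        if (job < 0) return NULL;
        /* ...process job (animation, physics, audio batch, etc.)... */
    }
}

int main(void)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* 3 or 6, don't care */
    pthread_t threads[64];
    if (cores < 1)  cores = 1;
    if (cores > 64) cores = 64;

    for (long i = 0; i < cores; ++i)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (long i = 0; i < cores; ++i)
        pthread_join(threads[i], NULL);

    printf("%d jobs drained across %ld hardware threads\n", NUM_JOBS, cores);
    return 0;
}
```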

If Sony is concerned about their market position, then they are probably going to respond to an early Microsoft launch with one of their own (major assumption).

My guess as far as what Sony is capable of:

a) By 2010: 32-core Cell @ 40nm (~1 TFlop SP FP?)
b) By 2011: 64-core Cell @ 28nm (~2 TFlops SP FP?)

If these implementations incorporate certain changes that IBM included in their PowerXCell 8i variant, double-precision performance will be perhaps a bit less than half the single-precision figure (which may be useful in games - I don't know).
I don't think that would be a smart move. If Sony sticks with Cell, they should assume that IBM was right when they wanted fewer SPUs and thus a smaller, more profitable chip (I read that IBM wanted 6 SPUs).
2 improved PPUs and 12 SPUs should be more than enough muscle.
If you were Microsoft considering your current architecture, would you rather compare yourself to (a) or (b)? Even if you had GPGPU capability?
Actually, I think Sony will also have a GPU able to run some general-purpose calculations.
Basically, MS would have to balance their CPU/GPU budget depending on how well things run on the CPU or the GPU. I think they would do this within the boundaries they fix for themselves in regard to respective die sizes. Actually, you should consider things the other way around: optical shrinks allow you to pack in more transistors, but power consumption doesn't go down as fast, so such a Cell (roughly as big as the current Cell was at its launch) would most likely be considerably more power hungry. That would put constraints on the rest of the system, not to mention that it would cost more to produce and thus affect how much Sony is willing to spend on the GPU. The GPU is important and must not be overlooked, and I think Sony will be careful not to price themselves out of the competition this time around.
Imagine Sony does what you say and MS comes with a small CPU (say they pack four fixed/improved Xenon cores along with 2MB of cache): they would have a lot of budget left for the GPU and the RAM, and they could have a much neater package. While a four-core CPU may be enough, Sony might have to spend quite a lot on its GPU to keep up in this regard, so along with its supposed Cell2 the system would be considerably more expensive, power hungry, etc.
Sony has to balance its silicon budget properly or they could end up at a graphical disadvantage, a cost disadvantage, or both.
I am making many assumptions here, any of which could be wrong because I am not in the industry, and I hope no one takes my speculation too seriously!
Nothing wrong with speculation :) (I've done a lot of it here myself... to tell the truth).
 
PS3 was already hyped to be a cloud computing device (even though that term wasn't known back then and wasn't used if I recall correctly).

Here, that was only a claim to get attention.
 
But obviously that was just marketing hyperbole, far far from reality.

I do not know anything about the original claims, but Folding@Home is an interesting and useful application of grid computing, which is related to cloud computing in concept (scalable, distributed computing). But if one were to consider what the likes of Google's AppEngine, Amazon's AWS, and Microsoft's Azure actually do, then yes, the PS3 does not do anything like this.
 
I think it will all depend on how OnLive, Gaikai, and OTOY do in maybe the next two or three years. If they work as advertised and become successful, then we might see the complete transformation of the industry... if not, then we will see at least one more generation of so-called traditional consoles.
 
I do not know anything about the original claims, but Folding@Home is an interesting and useful application of grid computing, which is related to cloud computing in concept (scalable, distributed computing). But if one were to consider what the likes of Google's AppEngine, Amazon's AWS, and Microsoft's Azure actually do, then yes, the PS3 does not do anything like this.

At the time, Kutaragi said something like "PS3 will be a connected console, so when you play GT5, the more PS3s are connected, the more gorgeous the graphics rendered will be!"
This was back in circa 2002, when the PS2 dominated the planet.
Nobody believed him for a single instant, except for the fanboys and the cell-rulez-will-change-the-world crowd, but in the end the idea of a connected PS3 remained.
 
I do not know anything about the original claims, but Folding@Home is an interesting and useful application of grid computing, which is related to cloud computing in concept (scalable, distributed computing).

Yeah, but it's not PS3-exclusive. In fact, it was already available on the PC (and, in the form of SETI@home, for a long time) before the PS3 launched.
 
I do not know anything about the original claims, but Folding@Home is an interesting and useful application of grid computing, which is related to cloud computing in concept (scalable, distributed computing). But if one were to consider what the likes of Google's AppEngine, Amazon's AWS, and Microsoft's Azure actually do, then yes, the PS3 does not do anything like this.

Actually I thought grid computing and cloud computing were quite different?

As has been said, Folding@Home would be a classic example of grid computing, whereby the end user has a system connected via software to a central server. The server sends out data to be processed by the end user's hardware, and the results come back to the central server once the job is finished. Alternatively, a user's hardware is designated the central server, and it sends out the jobs to other users' hardware, servers, etc.

I thought a cloud setup was basically when a central server does ALL the heavy lifting and sends the results to a simple receiver device with software in the user's home. The end user does almost no processing.
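
If I have the distinction right, here is a toy sketch of where the computation lives in each model. There is no real networking; plain function calls stand in for the messages, and all the names are made up.

```c
/* A toy contrast of the two models (no real networking; functions stand
 * in for messages). In the grid model the client computes; in the cloud
 * model the server computes and the client only displays. */
#include <stdio.h>

static int expensive_compute(int work_unit)        /* the heavy lifting */
{
    return work_unit * work_unit;                   /* placeholder job */
}

/* --- grid (Folding@Home style): server hands out raw work units --- */
static int  grid_server_get_work(void)    { return 7; }
static void grid_server_put_result(int r) { printf("grid result: %d\n", r); }

static void grid_client(void)
{
    int unit   = grid_server_get_work();            /* download job */
    int result = expensive_compute(unit);           /* compute LOCALLY */
    grid_server_put_result(result);                 /* upload answer */
}

/* --- cloud (OnLive style): server computes, client just displays --- */
static int cloud_server_request(int input)
{
    return expensive_compute(input);                /* compute REMOTELY */
}

static void cloud_client(void)
{
    int frame = cloud_server_request(7);            /* send input */
    printf("cloud result: %d\n", frame);            /* display only */
}

int main(void)
{
    grid_client();
    cloud_client();
    return 0;
}
```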

I think Kutaragi's vision was based on grid computing and not cloud computing.

Perhaps someone with more knowledge than me can help answer this question.
 