Predict: The Next Generation Console Tech

That's clearly a point in favor of Larrabee: no matter its efficiency in the graphics department, at least we know that Intel's 45nm and likely 32nm processes are/will be good. In short, Intel may pack in significantly more transistors than its competitors.
The foundry company (if it actually turns out to exist) at least has AMD's 45nm process, which looks pretty good; maybe they will need to keep some foundries busy? :)
Back to Larrabee, I think the news about the bad TSMC 40nm process is clearly a godsend for Intel. By fall 2009 they might very well be able to launch a Larrabee that is significantly bigger than its competitors' chips, and that extra raw power could very well be a pain in the ass for both ATI and Nvidia (even if the software side is likely to suck); at the least, GPGPU is more and more of a given for Intel. With 32nm likely to be here by 2010... both ATI and Nvidia have some work on the table, I guess.

I agree completely with you. This is the point I made in one of my previous posts. This could be a win-win for Sony: a 22nm GPU, while giving up fabbing it in its own fabs, which would be cost-prohibitive at that process.

Look at AMD, for example. When did they get to 45nm chips? At least a year later than Intel. Intel will start 32nm this year, and what is more important: reaching new processes will require very expensive new technologies.

Put a slightly improved and tiny Cell next to it and you have a great machine, cheaper than the PS3 was for its time (2012?) and also very powerful.

Thinking about another point, which could be the subject for another topic, I think next generation could be the one to last for almost 10 years, as Sony says of the PS3. It will be very difficult to keep shrinking below 22nm. So the one that launches a weak system could have strategic market problems (unless everybody goes the Wii route): can I compete for that long while being weaker than my competitor? Should I launch another system even though it won't be much more powerful than the one I have?...
 
I know really little about this aspect of the market, but does TSMC really need a bleeding-edge process?
As long as their processes are cheap enough, wouldn't they be good enough for most of their business?
Developing bleeding-edge processes is more and more expensive, but it may be more of a problem for companies trying to push bleeding-edge tech while wanting to keep margins high. I mean, both ATI and Nvidia could ask IBM or even AMD to produce their next chips, but it would cost them too much.
What I mean is that this "bad" process may be more of a problem for ATI and Nvidia than for TSMC, no?

For Intel I really have no clue, but it looks like they are planning to follow the same route for a while: they are accelerating the transition to 32nm and they plan for 16nm in 2013. It must cost them an arm and a leg, but they must have their reasons. From my POV, for some time it will still help them grow their market share. They have a good chance to take a significant part of the juicy HPC market plus the mid/high-end GPU market, and they may compete in the low-end CPU market with single/dual-core CPUs of unmatched performance, bringing more pain to the likes of AMD and VIA.

But this is really just for the sake of the conversation; I know really little about all these economic aspects.

My point is simply that if the estimated figures Intel gave for a 16-core part @1GHz are right, then ATI/Nvidia may have a problem; @2GHz, performance wouldn't be that far off.
Say such a part is ~1 billion transistors: it would be smaller than competing GPUs, mostly due to cache density, and in regard to other Intel products on the same generation of process I fail to see it consuming ~300 watts. Intel has a lot of levers to make up for Larrabee's shortcomings (whether they come from software or hardware):
1) Price (likely smaller => better yields, likely cooler => cheaper cooling solution, lower memory requirements => cheaper).
2) Brute force: add more and more transistors until they reach the same power budget as ATI/Nvidia.
32nm coming next year will only make things worse for the competition.
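(To put some rough numbers on that "if", here is a quick back-of-the-envelope sketch. It assumes the widely reported 16-wide single-precision vector unit with FMA per Larrabee core; that's my assumption, not a confirmed Intel spec, and peak FLOPS obviously says nothing about real-world efficiency.)

```python
# Back-of-the-envelope peak throughput, assuming a 16-wide single-precision
# vector unit with fused multiply-add (2 flops/lane/cycle) per Larrabee core.
# Purely theoretical peak numbers.

def larrabee_peak_gflops(cores, ghz, lanes=16, flops_per_lane=2):
    return cores * lanes * flops_per_lane * ghz

for cores, ghz in [(16, 1.0), (16, 2.0), (32, 2.0)]:
    print(f"{cores} cores @ {ghz} GHz -> {larrabee_peak_gflops(cores, ghz):.0f} GFLOPS peak")

# 16 @ 1 GHz ~ 512 GFLOPS, 16 @ 2 GHz ~ 1024 GFLOPS, 32 @ 2 GHz ~ 2048 GFLOPS,
# versus roughly 1.2 TFLOPS peak for an HD 4870.
```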

If we remove the software burden present in the PC realm, I wonder whether what Intel offers could be matched in the console realm. That's why I feel Sony may have made a good choice, if the rumour is true.
BC put aside, in 2012 they could have almost a single-chip design, say a 100-150mm² part @22nm, with enough power while meeting their power requirements, allowing for a tiny package, etc.
The rumour may be based on thin air, but I'm surprised to see no enthusiasm, even if all this is really hypothetical.
 
I agree completely with you. This is the point I made in one of my previous posts. This could be a win-win for Sony: a 22nm GPU, while giving up fabbing it in its own fabs, which would be cost-prohibitive at that process.

Look at AMD, for example. When did they get to 45nm chips? At least a year later than Intel. Intel will start 32nm this year, and what is more important: reaching new processes will require very expensive new technologies.

Put a slightly improved and tiny Cell next to it and you have a great machine, cheaper than the PS3 was for its time (2012?) and also very powerful.

Thinking about another point, which could be the subject for another topic, I think next generation could be the one to last for almost 10 years, as Sony says of the PS3. It will be very difficult to keep shrinking below 22nm. So the one that launches a weak system could have strategic market problems (unless everybody goes the Wii route): can I compete for that long while being weaker than my competitor? Should I launch another system even though it won't be much more powerful than the one I have?...
I don't completely agree with you tho ;)
I think that (if this is true) Sony should forget about BC; they should offload as much as possible of the design to Intel and focus on the dev environment, the launch games and the package.
By package I mean the form factor of the console: they should aim for something convenient (not as tiny as the Wii, but you get the picture). One chip, not too much RAM, reasonable power requirements; in short, be reasonable. They don't have that much room for shrinking, but if the machine is affordable enough at launch with proper launch titles available (Sony has plenty in reserve: GT, Wipeout, GoW4, Uncharted 3, SingStar, a Wii-like controller plus matching games, some good multiplatform titles), they may not need to cut the price (even without matching the Wii effect).
Something like 2GB of RAM, a tiny SSD (really tiny, for the OS, caching, some saves and patches), an average-sized chip (100-150mm², for yields) @22nm clocked in the GHz range, a 6x BRD, a standard plus a Wii-like controller, and aim for $250 at launch, $299 being the absolute maximum.
 
My point is simply that if the estimated figures Intel gave for a 16-core part @1GHz are right, then ATI/Nvidia may have a problem; @2GHz, performance wouldn't be that far off.

That's a big IF. Intel also said Itanium was going to be far faster than the RISC competition; the reality was rather different.

Say such a part is ~1 billion transistors: it would be smaller than competing GPUs, mostly due to cache density, and in regard to other Intel products on the same generation of process I fail to see it consuming ~300 watts.

It's impossible to draw any judgement of power usage from their other processors; they're far too different.

Intel has a lot of levers to make up for Larrabee's shortcomings (whether they come from software or hardware):
1) Price (likely smaller => better yields,

On a brand new process yields are not that likely to be great.

likely cooler => cheaper cooling solution,
lower memory requirements => cheaper).

Everything I've read seems to indicate huge memory requirements. That alone means high power requirements.

They also use coherent caches; that's going to require a big internal bus that will be perpetually busy, using more power.

AFAIK GPUs use read only caches so this isn't a problem for them.

2) Brute force: add more and more transistors until they reach the same power budget as ATI/Nvidia. 32nm coming next year will only make things worse for the competition.

You're assuming they are not at the same power budget already...
 
First, a big disclaimer: I tend to be a bit too optimistic/enthusiastic when it comes to Larrabee... :LOL:
That's a big IF. Intel also said Itanium was going to be far faster than the RISC competition; the reality was rather different.
Actually every company does the same, talking up the whoop-ass, etc., and it's only fair that they get bashed when they don't meet the expectations they set. That is why I put "if blabla true". Since we are speaking of past Intel presentations, have you or other people on the board managed to get your hands on Intel's GDC presentation? (Supposedly it was this week.)
It's impossible to draw any judgement of power usage from their other processors; they're far too different.
You're right, I stretched things too much here. I still believe that if ATI or Nvidia built an HD48xx or GF2xx on Intel's 65nm process, the power consumption would be significantly lower than what we have now.

On a brand new process yields are not that likely to be great.
I'm not sure I got my meaning across, so I'll try again ;)
What I wanted to say is that smaller chips tend to have higher yields than bigger ones. If we take a 1-billion-transistor GPU and a Larrabee, my bet was that the Larrabee would end up smaller, as cache tends to be pretty dense. But the ring bus may end up eating space, etc.
Once again, a bit too enthusiastic.
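(To illustrate the yield point, here's a toy Poisson yield model; the defect density is a made-up placeholder, not a real TSMC or Intel figure, so only the smaller-die-yields-better trend matters.)

```python
import math

# Toy Poisson yield model: yield ~ exp(-defect_density * die_area).
# The defect density below is a placeholder, purely for illustration.
DEFECTS_PER_MM2 = 0.003

def die_yield(area_mm2, d0=DEFECTS_PER_MM2):
    return math.exp(-d0 * area_mm2)

for name, area_mm2 in [("hypothetical cache-heavy Larrabee die", 350),
                       ("hypothetical bigger GPU die", 500)]:
    print(f"{name}: {area_mm2} mm^2 -> ~{die_yield(area_mm2):.0%} estimated yield")
```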
Everything I've read seems to indicate huge memory requirements. That alone means high power requirements.
I disagree; everything I've read says that binned rendering makes the most efficient use of bandwidth and that bandwidth requirements grow more slowly than for current GPUs.
In regard to memory usage I've read nothing, but I would tend to think it's the same situation as with Xenos and its eDRAM => you only have the back buffer in RAM (might be super wrong though...).
They also use coherent caches; that's going to require a big internal bus that will be perpetually busy, using more power.
AFAIK GPUs use read-only caches so this isn't a problem for them.
Well, I'll compare apples to oranges once again ( :LOL: ), but in Cell the bus doesn't look like it consumes much power.

I won't say anything about the caches, as Larrabee's L2s are set to be far more capable than what can be found in the PPU/Px.

You're assuming they are not at the same power budget already...
Right, the same error repeated all through my post.
 
Personally I think the only way we'll see Larrabee in a console is if Intel is somewhat "desperate" to put it in one (and that seems hard to imagine).

Because for them, putting it in a console means either a) they give up the IP, or b) Sony/MS/Nintendo would be first in line on a new process instead of Intel making its own CPUs/GPUs better/more mainstream. Intel probably doesn't like either option.

For a console maker it would be far from an ideal option, because as a CPU (like the article suggests) it would give more problems than Cell, being (I suppose) even less capable for things like AI, game code, memory management, thread synchronization, etc. As a GPU it is unproven tech, especially when there are things you want fixed-function hardware for, which the others are really good at, plus different programming models and a steeper learning curve (all of which may mean fewer ports). It would make more sense paired with an x86 CPU, but that brings back the first problem, probably amplified (especially in case b), or they could use an AMD CPU, but then AMD would gain an advantage in the CPU field.

As much as price/cost, I think that ease of coding and getting a game up and running is one of their priorities, because it means more games on the console. Or does anyone believe that if the PS3 (being in third place) had a non-Nvidia GPU (and a PPE/ISA unlike Xenon's) it would have so many ports?
 
How will 32nm be? It could be good or bad... But if a console were to launch this year @40nm, its GPU, for example, would have to be significantly less powerful than an HD48xx, which burns 300 watts.

what ?

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-3.html

These tables show the power consumption of the test system in watts, which consists of a Core 2 Duo running at 2.93 GHz, such as used in a standard PC. The wattage figures are not the actual load on the power supply, but the power measurement at the outlet

The whole system doesn't even use 300 watts when in full 3D mode. It uses just 288 watts.

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-4.html

The video card, the 4870 512MB in this case, seems to use 184 watts at full load. However, just downclocking the card to 4850 speeds reduces that to 133 watts.

That's a 55nm GPU. I don't believe 40nm is actually going to raise the power consumption of a GPU with the same transistor count. Also, consider that if the next Xbox launches in 2011, the TSMC 40nm process will have matured greatly over the better part of two years, and 32nm would not be far behind.
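(Rough sketch of why a node shrink alone doesn't guarantee lower power: dynamic power scales roughly with C·V²·f, so if the new node barely drops the voltage, power for the same design barely moves. The scale factors below are illustrative placeholders, not real 55nm/40nm numbers.)

```python
# Dynamic power is roughly proportional to C * V^2 * f. The scale factors
# below are illustrative placeholders, not measured 55nm-to-40nm figures.

def relative_power(cap_scale, volt_scale, freq_scale=1.0):
    return cap_scale * volt_scale ** 2 * freq_scale

healthy_shrink = relative_power(cap_scale=0.75, volt_scale=0.90)  # capacitance and voltage both drop
weak_shrink = relative_power(cap_scale=0.85, volt_scale=1.00)     # little to no voltage drop

print(f"healthy shrink: ~{healthy_shrink:.2f}x power, weak shrink: ~{weak_shrink:.2f}x power")
```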


That's a 900M-transistor chip. Xenos with its eDRAM is 320M transistors, I believe; without it, 220M transistors. So you're looking at almost 4 times the transistors, and you can get very powerful results coupled with a DX11 graphics card.

I think, depending on when the system launches and when the switch to 32nm happens, we will see a big GPU in the Xbox at least. The GPU will most likely be the most power-hungry thing in it, but they will use a lot of power-saving features for when the system is idle or doing things like Netflix.
 
what ?

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-3.html



The whole system doesn't even use 300 watts when in full 3D mode. It uses just 288 watts.

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-4.html

The video card, the 4870 512MB in this case, seems to use 184 watts at full load. However, just downclocking the card to 4850 speeds reduces that to 133 watts.
Oops, my bad. It looks like I confused the values for the X2 version with the values for the whole system (they are dangerously close). I didn't mean to be precise; it was more to give an overall figure, so I threw out 300 watts without rechecking the benchmarks. But as it happens my point is still valid: even 133-184 watts is way too much for a console. Manufacturers would be held back not by the cost of the chip (related to its size) but by power consumption (and heat dissipation too).
That's a 55nm GPU. I don't believe 40nm is actually going to raise the power consumption of a GPU with the same transistor count. Also, consider that if the next Xbox launches in 2011, the TSMC 40nm process will have matured greatly over the better part of two years, and 32nm would not be far behind.
Nor do I, but early reports suggest that going to 40nm may bring little gain (if any) in regard to power consumption.
The process may mature, but I was just trying to make the point that not all xxnm processes are created equal, and to emphasize Intel's strength in this regard.
By 2011, 32nm will likely be available to everybody; my point being that it may fail to deliver on its promises too.

That's a 900M-transistor chip. Xenos with its eDRAM is 320M transistors, I believe; without it, 220M transistors. So you're looking at almost 4 times the transistors, and you can get very powerful results coupled with a DX11 graphics card.

I think, depending on when the system launches and when the switch to 32nm happens, we will see a big GPU in the Xbox at least. The GPU will most likely be the most power-hungry thing in it, but they will use a lot of power-saving features for when the system is idle or doing things like Netflix.
Same here: while I agree with you, that was not my point. It was more about not overlooking Intel's process advantage when the last news we had from TSMC was not good.
 
An Intel-based console built around a 4 or 8 core derivative of their next-gen Sandy Bridge CPU architecture, combined with a 48-core Larrabee, would be extremely interesting.

I don't see a 16-core Larrabee being competitive with modern high-end GPUs, but I think a 48-core Larrabee would be. Even more so, the next-next-gen Larrabee 2 (currently in development) will probably offer the same kinds of improvements over first-gen Larrabee that we see from one generation of Intel CPU to the next (i.e. Core 2 Duo to Nehalem/Core i7).

Another possibility I've been thinking about is a console with a Larrabee chip and a traditional AMD/ATI or Nvidia GPU, but no traditional CPU. Or an Intel Larrabee with a few large, complex CPU cores. That would be the best of all worlds. Imagine a 22nm Intel chip with 2 or 4 Core i7 or Sandy Bridge cores, combined with a 48-core Larrabee.
Then a separate AMD/ATI or Nvidia GPU with on-chip eDRAM, all on the same die (not two dies like the 360's eDRAM). The console would have just 2 main chips. I don't think it would be good to have a single CPU/GPU chip except in Nintendo's case, where high performance is not needed or wanted. Nintendo could easily get away with a single chip that offers 360++ performance on 32nm or 22nm for very little cost and very little cooling.

I believe in the idea of Larrabee, if not the first implementation/actual product.
 
How about Larrabee being the key to what Sony wanted to do with the Cell in the first place? A single chip for both CPU and GPU? Throw enough cores in the system and make sure it's designed for high bandwidth, and you could emulate Cell with 8 cores and RSX with about 30, so you'd have BC. Maybe a 60+ core Larrabee for the next PlayStation. It would make motherboard design much simpler, and the cost savings with process shrinks would be significant. Also, I believe having one larger chip instead of 2 small ones could cost less, since you can shrink that larger chip each time.
 
How about Larrabee being the key to what Sony wanted to do with the Cell in the first place? A single chip for both CPU and GPU?

Then you run into everybody's favourite problem of code and (software) architecture reuse. A system like that would need a complete rewrite of probably pretty much everything. This is something that is possible for first-party developers, but if I have two (or three) consoles and the PC to think about, I don't want to completely rewrite the game for one of them. It's the Wii-disease.

You want a regular CPU. You want that really, really bad. It just takes a *lot* more time to write massively parallel code, if it is possible for your problem at all. Try CUDA some time.

On a slightly different note, if I was a console manufacturer, I'm not sure I would want to enable people to siphon graphics power into classical CPU tasks. If I do that, I lose the graphical "minimum standard" (if there is such a thing).
 
There is no point in adding an ATI/Nvidia GPU on top of a Larrabee; if a manufacturer wanted an über system (in regard to the silicon budget), it would be better off adding more Larrabee cores.
In regard to the core count, 64 is huge, way too huge IMHO.
I would not be surprised if a 32-core Larrabee were close to 2 billion transistors; even @32nm it would not be a tiny chip. 48 could happen if the chip is intended to be produced @22nm.
(I'm not saying that Intel couldn't pack more cores @32nm, but it's likely the chip would exceed what a manufacturer would want in regard to power consumption, heat dissipation and price.)
 
There is no point in adding an ATI/Nvidia GPU on top of a Larrabee; if a manufacturer wanted an über system (in regard to the silicon budget), it would be better off adding more Larrabee cores.
In regard to the core count, 64 is huge, way too huge IMHO.
I would not be surprised if a 32-core Larrabee were close to 2 billion transistors; even @32nm it would not be a tiny chip. 48 could happen if the chip is intended to be produced @22nm.
(I'm not saying that Intel couldn't pack more cores @32nm, but it's likely the chip would exceed what a manufacturer would want in regard to power consumption, heat dissipation and price.)

32 cores is definitely the optimum number of cores for Larrabee; as most of you know, it scales linearly up to 32 cores, and at 48 cores the performance scaling drops off by around 15-20% ish.
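(Quick sanity check of what that scaling figure means in practice, treating the quoted 15-20% drop-off as an assumed efficiency hit at 48 cores rather than a measured number.)

```python
# Treat the quoted 15-20% scaling drop-off at 48 cores as an assumed
# efficiency hit and see what it buys over 32 cores.

def effective_cores(cores, efficiency):
    return cores * efficiency

baseline = effective_cores(32, 1.00)   # linear region: 32 "effective" cores
worst = effective_cores(48, 0.80)      # 20% hit -> ~38.4 effective cores
best = effective_cores(48, 0.85)       # 15% hit -> ~40.8 effective cores

print(f"32 -> 48 cores buys roughly +{(worst / baseline - 1):.0%} to +{(best / baseline - 1):.0%} throughput")
# i.e. still worthwhile, just well short of the nominal +50%.
```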

Interestingly, from the POV of consoles, if 32 cores fit inside 2 billion transistors then the chip will be less than 200mm^2 @32nm, which for a console is comparatively small next to previous generations of GPUs (see the rough numbers after the list below).

For reference:

RSX: 258mm^2
Xenos + eDRAM: +/- 250mm^2
Graphics Synthesizer (PS2): 279mm^2

etc.
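(Rough area check on that "2 billion transistors in under 200mm^2 @32nm" claim; the density figure is my assumption for a cache-heavy design, not an Intel-published number.)

```python
# Rough die-area estimate from transistor count; the density is an assumed
# ballpark for a cache-heavy 32nm design, not an official figure.
ASSUMED_MTRANSISTORS_PER_MM2_AT_32NM = 11.0

def die_area_mm2(million_transistors, density=ASSUMED_MTRANSISTORS_PER_MM2_AT_32NM):
    return million_transistors / density

print(f"~{die_area_mm2(2000):.0f} mm^2 for a ~2B-transistor 32-core Larrabee")
# ~182 mm^2 under this assumption, versus the RSX/Xenos/GS figures listed above.
```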

EDIT: Oh, and one other thing: if consoles launch in 2012 then 32nm tech will be very mature by then, and transitioning to 22nm would be possible very rapidly after launch (within one year).
 
EDIT: Oh, and one other thing: if consoles launch in 2012 then 32nm tech will be very mature by then, and transitioning to 22nm would be possible very rapidly after launch (within one year).
In theory, although this gen showed a much more delayed progression to 65nm than was expected. With Intel producing the chips, though, the chances of a more advanced tech being mainstream are better than usual.
 
Oops, my bad. It looks like I confused the values for the X2 version with the values for the whole system (they are dangerously close). I didn't mean to be precise; it was more to give an overall figure, so I threw out 300 watts without rechecking the benchmarks. But as it happens my point is still valid: even 133-184 watts is way too much for a console. Manufacturers would be held back not by the cost of the chip (related to its size) but by power consumption (and heat dissipation too).

http://forum.beyond3d.com/showthread.php?t=49447

According to my tests, the original 360 launch unit with a hard drive uses 270 watts. That's not far off the total power usage of the full system at Tom's Hardware, with a 4870, a 3GHz Core 2 Duo, 2GB of RAM, a 500GB hard drive and a 150GB hard drive.

That's a complete system far beyond what's available in the 360. If the 360 could have a similar power draw, then I don't see a problem.

The X2 parts of course use more power; you have 2 chips and cloned memory pools. I don't think you'll see this in a console. However, a single big GPU should work well. Remember, if we are looking at 2011 for a console, the 40nm process will be very mature and most likely the systems could launch on 32nm, or would be very close to being able to. I would think 2 years would give TSMC time to increase the power savings of the process and improve it.

Perhaps Intel's process is better, but don't you think Intel would use its best process for the CPUs that make it the most money? Would Intel rather produce CPUs that sell for $100-$1400, or sell chips to Sony/MS, who would want bargain-basement pricing? You could argue that it would be consistent revenue over X amount of years, but that means Intel would have to devote fab space for those years, which could hamper their CPU business.
 
In theory, although this gen showed a much more delayed progression to 65nm than was expected. With Intel producing the chips, though, the chances of a more advanced tech being mainstream are better than usual.


That's true as well, but like you said, if anyone can make it good, Intel would be the one to do it.

PS: Shifty, only 5 more posts till you hold the B3D posting record. lol. :)
 
Perhaps Intel's process is better, but don't you think Intel would use its best process for the CPUs that make it the most money? Would Intel rather produce CPUs that sell for $100-$1400, or sell chips to Sony/MS, who would want bargain-basement pricing? You could argue that it would be consistent revenue over X amount of years, but that means Intel would have to devote fab space for those years, which could hamper their CPU business.

By 2012, 32nm would be nowhere close to Intel's premium process; it would be well into mainstream bulk production by then.
 
And what about 2011? Not to mention that other fabs will have had time to enhance their 32nm and 40nm tech. I highly doubt the TSMC 40nm process we have today will be what we have in 2011.
 