Predict: The Next Generation Console Tech

I don't think that MS necessarily has to go the Sony route next gen and make a console based on tech from the current gen. While Sony spent billions developing Cell and have to try to milk it for all they can, I think MS has more room to experiment as long as they can maintain backwards compatibility and release before the PS4 at a decent price. I'm not sure how it is in Europe now, but here in North America the Xbox has replaced the PlayStation as the hardcore gamer's go-to console, and I don't see that changing any time soon. Almost all my gamer friends in high school and college had PS2s; now only two have a PS3, almost all of them have 360s, and some have more than one. When the next gen comes along, the hardcore crowd that Xbox now has in its pocket will expect great games to keep coming, and they will not care or even know if MS went a different route with the hardware, and publishers/developers would just have to live with it. That said, as much as I would like to see a Larrabee Xbox, it's probably highly unlikely.
 
They can, but this is a clear case of better to have than have not. Tools, libraries, examples, infrastructure (farms, mo-cap, network, servers, etc.), standards, delayed and immediate support paths, etc. all need to be addressed. The hardware is only one piece of the puzzle, and there is a great deal of technology that is just as critically important. This gen was just a sampling of many things that must mature, and quickly, or there will be a rather nasty price to pay.

True, but the hardware is the starting point; everything else can only be built around it. Anyway, as pointed out with the tessellator, there is the question of whether there is a real point in having too much specialized hardware.

Without some rework it's more than likely. I remember Capcom (early on) stating that they were able to achieve (with Xenon) the same level of performance as a dual-core P4.
The P4 was crushed per cycle by the Athlon, which in turn got crushed by Intel's Core architecture, and the upcoming Nehalem will raise the bar again (likely not by the same amount, nor on every workload).
I'm just trying to emphasize your "how weak the cores are" ;) but it's also to show that there is room for improvement. I also think that MS may not need (and most likely could not reach) the level of performance Intel or AMD will be able to deliver by that time.
MS needs something cheaper/smaller than what Intel or AMD has to offer, and with lower power requirements. Xenon is weak in so many regards compared to top-of-the-line x86 CPUs, and can be improved in so many ways, that I think MS/IBM have room to make substantial gains in performance without breaking their power/die-space budget.
How much? I think some members may come up with proper estimates; looking at AMD/Intel progress (which came at a cost), I would say that MS/IBM should aim at an overall 30% improvement per cycle.
Overall that wouldn't be a great jump in power, but the really good serial performance found in the PC realm is likely out of reach of what MS can afford.


Why spend more time and money on a "flawed" CPU? In 2010 any dual-core solution (or a cut-down one) would beat it in every single aspect, many of them offering OoOE, branch prediction, good latency, power efficiency... only for the sake of very easy BC.

Anyway, don't be too hard on Xenon; in some things it is probably way better than PC CPUs (raw FP or dot products).
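
As a point of reference, here is a minimal sketch of that kind of raw FP/dot-product work, written with standard AltiVec/VMX intrinsics rather than the Xenon-specific VMX128 extensions; the function, alignment assumptions and sizes are illustrative only:

Code:
#include <altivec.h>
#include <stddef.h>

/* Illustrative only: a 4-wide dot product with standard AltiVec intrinsics.
 * Assumes n is a multiple of 4 and that a and b are 16-byte aligned.
 * Build on PowerPC with -maltivec. */
float dot(const float *a, const float *b, size_t n)
{
    /* Accumulate four partial sums in one vector register. */
    vector float acc = (vector float){0.0f, 0.0f, 0.0f, 0.0f};

    for (size_t i = 0; i < n; i += 4) {
        vector float va = vec_ld(0, a + i);   /* aligned 16-byte load */
        vector float vb = vec_ld(0, b + i);
        acc = vec_madd(va, vb, acc);          /* acc += va * vb (fused multiply-add) */
    }

    /* Horizontal sum of the four lanes, done the simple way. */
    float lanes[4] __attribute__((aligned(16)));
    vec_st(acc, 0, lanes);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}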


Maybe MS, on top of offloading a lot of calculations to the GPU, could consider using fixed-function units attached to the CPU cores to free up CPU resources. I'm thinking especially of units handling decompression, and maybe network acceleration too.
I remember seeing average CPU utilisation figures for PGR3, and basically more than one core was dedicated to decompression. Talk about a waste of silicon.
Using ~30 million transistors running at 3.2 GHz to achieve what a DSP would handle faster while being way smaller... not to mention power efficiency.
In short: dedicated silicon would do that job better and cheaper.
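
To make the workload concrete, here is a minimal sketch of the kind of call that eats CPU time today. zlib is used purely as a stand-in for whatever codec a game actually streams, and decompress_block is just an illustrative helper name, not a real API:

Code:
#include <stdio.h>
#include <zlib.h>

/* One streamed asset block being inflated on a general-purpose core.
 * The argument above is that this fixed, well-bounded work could be
 * absorbed by a small dedicated unit instead. */
int decompress_block(unsigned char *dst, unsigned long dst_capacity,
                     const unsigned char *src, unsigned long src_len)
{
    uLongf out_len = dst_capacity;
    int rc = uncompress(dst, &out_len, src, src_len);   /* zlib one-shot inflate */
    if (rc != Z_OK) {
        fprintf(stderr, "uncompress failed: %d\n", rc);
        return -1;
    }
    return (int)out_len;   /* bytes produced */
}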

That is the beauty of Fusion-like designs, although there may be a problem of such units going underused (like the tessellator), depending on how much they bring and how easy they are to use.

Anyway, it seems to me that a Fusion design is perfect for a console...
 
I was reading some articles today and thought for a minute. It seems perfectly logical, if indeed Valhalla is in the works, to take the same chip, scale it upwards while shrinking the process, and use that as the main CPU in the nextbox. What a clever and cost-effective solution that does not sacrifice anything while providing the needed "oomph" next gen. For those that don't know what Valhalla is, it's Xenon and Xenos combined on a single 65nm die for the Xbox 360, which indications point towards arriving around 2011/2012 (eh, what do you know? Around the approximate time the next Xbox is predicted to hit. What better time to offer a cost-reduced... dare I say... "360 mini"?).

Taking Valhalla and scaling it upwards. Perhaps doubling the L2, twice the threads, and increased clock speeds, along with minor tweaks, keeping the Xenos portion intact and unchanged. Developer kits can effectively scale accordingly, and the learning curve is simply carried over from last generation. Backwards compatibility would not be sacrificed either, which to me is the golden egg in all this. Though to pull something like this off, I of course imagine them working very closely with IBM and AMD, and perhaps hitting 45nm with it as well. Now, fitting with what we know about Valhalla, I was also hit in the head by this good old thread here at B3D underlining this important news link on the very subject by the New York Times. Please, just never mind the close-proximity architecture thing with Sun and leave that out of this.

For more than two decades, Microsoft’s software and Intel’s processors were so wedded that the pairing came to be known as Wintel. But as that computing era wanes, Microsoft is turning to a new source of chip design: its own labs.
The design effort will initially be split between research labs at the company’s headquarters in Redmond, Wash., and its Silicon Valley campus here. Tentatively named the Computer Architecture Group, the project underscores sweeping changes in the industry.

They have the personnel, the budget, and the right partners to pull this off too. This, to me, is the project under Microsoft's Computer Architecture Group.
 
Taking Valhalla and scaling it upwards. Perhaps doubling the L2, twice the threads, and increased clock speeds, along with minor tweaks, keeping the Xenos portion intact and unchanged. Developer kits can effectively scale accordingly, and the learning curve is simply carried over from last generation. Backwards compatibility would not be sacrificed either, which to me is the golden egg in all this. Though to pull something like this off, I of course imagine them working very closely with IBM and AMD, and perhaps hitting 45nm with it as well. Now, fitting with what we know about Valhalla, I was also hit in the head by this good old thread here at B3D underlining this important news link on the very subject by the New York Times. Please, just never mind the close-proximity architecture thing with Sun and leave that out of this.

Perhaps not, because the resultant chip might be overly large, and the thermals, yields, and expense could thus be thrown off. The Valhalla project makes sense because it takes disparate chips that are individually small and combines them into a single larger package with decent thermal bounds and workable yields. If you were to take that resultant chip, however, and scale it up - granted, to what extent would matter quite a bit - you would find yourself back at the overly large result that might rule out the combination of such chips outright from an economic (or potentially even engineering) point of view.

On 45nm, in theory, you could double the size of the chip logic and end up at the die size of just one of the original 360 chips... not bad at first glance perhaps, but there are several variables at play.
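
Quick back-of-the-envelope math on that claim, assuming ideal area scaling with the square of the feature size (real shrinks fall short of this, which is one of those "several variables"); the 90nm die areas used here are rough placeholders, not official figures:

Code:
#include <stdio.h>

int main(void)
{
    /* Ideal area scaling: area shrinks with the square of the feature size.
     * Real shrinks fall short of this, so treat the result as a best case.
     * The 90nm die areas below are rough placeholders, not official figures. */
    const double cpu_90nm_mm2 = 170.0;                          /* assumed Xenon-class die  */
    const double gpu_90nm_mm2 = 180.0;                          /* assumed Xenos parent die */
    const double shrink       = (45.0 / 90.0) * (45.0 / 90.0);  /* 90nm -> 45nm, ideal      */

    double combined_45nm = (cpu_90nm_mm2 + gpu_90nm_mm2) * shrink;
    double doubled_logic = 2.0 * combined_45nm;

    printf("CPU+GPU on one 45nm die : ~%.0f mm^2\n", combined_45nm);
    printf("Twice that much logic   : ~%.0f mm^2 (about one original 90nm chip)\n",
           doubled_logic);
    return 0;
}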

Beyond any of that also, I do expect there to be an architectural shift for the 720 to reflect the work taking place on MS' DX evolutions - it just makes sense for them to bring that to the console side as it benefits their overall ecosystem. So I would expect modernized, rather than just expanded, graphics hardware in that regard.
 
I always thought that the 360 had a very basic one, like the R600's. I would also bet that the lack of exclusive engines for the 360 is one of the reasons it's gone unused. I think Viva Piñata is one of the titles that uses the feature, and they use it for the grass, if I'm not mistaken.

The Xbox 360 tessellator might be simple, but it's very flexible. Combined with random-address fully programmable vfetch (data input) and random-address fully programmable memexport (data output), you can do almost any kind of tessellation, geometry manipulation and creation purely on the shaders, and even store the results for reuse later on.

Sadly, PC hardware seems to be lagging a bit behind in this department. Even DX11 does not seem to support memexport from the pixel shaders (for tricks like outputting several pixels simultaneously, reusing the blur kernel's sampled data, etc.). Compute shaders should help a lot (by providing proper random-addressable memory input and output).
 
Beyond any of that also, I do expect there to be an architectural shift for the 720 to reflect the work taking place on MS' DX evolutions - it just makes sense for them to bring that to the console side as it benefits their overall ecosystem. So I would expect modernized, rather than just expanded, graphics hardware in that regard.

Well, I'm not saying they should expand the Xenos portion. Just the Xenon portion. Expand, tweak, increase clocks, etc., on the CPU portion. They could add another GPU outside of the combined chip. A "modernized" 2011/2012 GPU.

As to what you said above, I think a 45nm Valhalla scaled upwards (to what degree?) could keep thermals within a perfectly acceptable threshold.
 
Well, I'm not saying they should expand the Xenos portion. Just the Xenon portion. Expand, tweak, increase clocks, etc., on the CPU portion. They could add another GPU outside of the combined chip. A "modernized" 2011/2012 GPU.

As to what you said above, I think a 45nm Valhalla scaled upwards (to what degree?) could keep thermals within a perfectly acceptable threshold.

There is little to no benefit in doing that. Xenos and RSX alike will be absolutely thrashed by GPU/VPU tech in 2011/2012. If there is another GPU in the system, then Xenos should be shown the door and thanked vociferously for its past service to the cause. There is no need for two GPUs to compete in the system, especially when one of them would be severely outmatched.

Carl B has already pointed out why the next CPU/GPU combo is very unlikely to appear on the same die. I would add that if you scale up the clocks on Valhalla, then you also need to scale up the cooling system for it, in addition to fabrication concerns. Cooling is not cheap on the BOM.

The Xenon core and the PPE core of Cell need to be dropped into a very deep pit, never to return. It is unlikely MS or Sony has much desire to see them return when better cores can be had quite easily from IBM, smoothing BC concerns considerably.
 
There is little to no benefit in doing that. Xenos and RSX alike will be absolutely thrashed by GPU/VPU tech in 2011/2012. If there is another GPU in the system, then Xenos should be shown the door and thanked vociferously for its past service to the cause. There is no need for two GPUs to compete in the system, especially when one of them would be severely outmatched.

Carl B has already pointed out why the next CPU/GPU combo is very unlikely to appear on the same die. I would add that if you scale up the clocks on Valhalla, then you also need to scale up the cooling system for it, in addition to fabrication concerns. Cooling is not cheap on the BOM.

The Xenon core and the PPE core of Cell need to be dropped into a very deep pit, never to return. It is unlikely MS or Sony has much desire to see them return when better cores can be had quite easily from IBM, smoothing BC concerns considerably.

So there was absolutely no benefit at all in adding the PS2's logic to the original PS3s? Or am I mistaken here, and it was only the PS2's CPU? I can see the merits of the idea as well as the shortcomings. I think the fundamental issue here is that I'm still very much ignorant of the overall scheme of things.

Why would Xenos have to compete with the new graphics hardware? Would you not need Xenos to properly support full BC? How easily do you think it can be mapped onto a new graphics architecture? Then again, I'm not keen on the idea of building next-gen systems around old hardware, but merely thought of this as a possibility. I do believe to a rather good extent that Valhalla is the project, or one of the projects, in MS's Computer Architecture Group. Not some iteration of Sun's close-proximity tech CPU.

Also, what kind of better cores are we talking about from IBM?

Thanks.
 
So there was absolutely no benefit at all in adding the PS2's logic to the original PS3s? Or am I mistaken here, and it was only the PS2's CPU? I can see the merits of the idea as well as the shortcomings. I think the fundamental issue here is that I'm still very much ignorant of the overall scheme of things.

Why would Xenos have to compete with the new graphics hardware? Would you not need Xenos to properly support full BC? How easily do you think it can be mapped onto a new graphics architecture? Then again, I'm not keen on the idea of building next-gen systems around old hardware, but merely thought of this as a possibility. I do believe to a rather good extent that Valhalla is the project, or one of the projects, in MS's Computer Architecture Group. Not some iteration of Sun's close-proximity tech CPU.

Also, what kind of better cores are we talking about from IBM?

Thanks.

I think the PS2 hardware is only in the first models; after that, BC is done purely in software, even though the architectures aren't the slightest bit similar.

Anyway, if you have two different GPUs it will only bring problems on the programming side, add to the cost, create resource problems, add to the heat/power, make the box bigger...

Plus, DX10/11 is almost fully compatible with Xenos (memexport, or going directly to the CPU cache, is not), so BC on the GPU side is an easy thing.

On the CPU side it's a bit harder, but there are software solutions for it (even when the 360 launched there was a very popular one, because some said it would bring BC to the 360; I don't recall the name...). Xenon could die an agonizing death for all I care; if they go for a small upgrade, just customize an already inexpensive X2 or a Core 2 Duo, maybe even a cut-down Power7 (a dual or tri core, if cheap enough). BC would be harder, but they would probably save money making the tools for it.

Plus, some problems will only get worse (e.g., latency with faster RAM).
 
So there was absolutely no benefit at all in adding the PS2's logic to the original PS3s? Or am I mistaken here, and it was only the PS2's CPU? I can see the merits of the idea as well as the shortcomings. I think the fundamental issue here is that I'm still very much ignorant of the overall scheme of things.

Why would Xenos have to compete with the new graphics hardware? Would you not need Xenos to properly support full BC? How easily do you think it can be mapped onto a new graphics architecture? Then again, I'm not keen on the idea of building next-gen systems around old hardware, but merely thought of this as a possibility. I do believe to a rather good extent that Valhalla is the project, or one of the projects, in MS's Computer Architecture Group. Not some iteration of Sun's close-proximity tech CPU.

Also, what kind of better cores are we talking about from IBM?

Thanks.

The new GPU, if it originated with ATI/AMD, would likely ensure full B/C simply by being a forward extension of the architecture. Throw in some specific instruction support and it can be derived from whatever family (I do think DX11, though) they choose to branch off of when the time comes.

In terms of the CPU, well... we'll see what they do. If it's IBM again, then a scaling of the XeCPU, or a new branch off a more current architecture with some instruction support again, should leave them just fine on B/C.

For PS2 on PS3, the jump from the Graphics Synthesizer to RSX was just too dramatic a break architecturally, so yes, they needed that chip in there to make it work out. They put the EE in there at first as well, until they could get emulation up to par on Cell, but they did so in fairly short order.

Thinking that the Xbox 720 might be a scaling of the 360 architecture is a fine theory; the Sun architectural stuff I don't buy into in the least anyway, so that one's not even a factor. But for myself, I do think that the hardware will be modernized rather than simply scaled, and if scaled, probably not from a single-chip solution (unless the next gen truly becomes one of incrementalism).
 
So there was absolutely no benefit at all in adding the PS2's logic to the original PS3s? Or am I mistaken here, and it was only the PS2's CPU? I can see the merits of the idea as well as the shortcomings. I think the fundamental issue here is that I'm still very much ignorant of the overall scheme of things.

Why would Xenos have to compete with the new graphics hardware? Would you not need Xenos to properly support full BC? How easily do you think it can be mapped onto a new graphics architecture? Then again, I'm not keen on the idea of building next-gen systems around old hardware, but merely thought of this as a possibility. I do believe to a rather good extent that Valhalla is the project, or one of the projects, in MS's Computer Architecture Group. Not some iteration of Sun's close-proximity tech CPU.

Also, what kind of better cores are we talking about from IBM?

Thanks.

For the next Xbox, just about any Power core from POWER5 onwards, with proper tweaking, would be a real improvement and would still make BC relatively easy due to an ISA similar to Xenon's.

Keeping the CPU and GPU separate is likely to offer better yields and more flexibility in taking advantage of fabrication advancements during the generation.

Any D3D11 GPU will be able to cover Xenos's feature set in 2012, barring a few eccentricities here and there. If D3D12 is an option, then emulating Xenos will only be that much easier.

BC should be much easier to implement for MS and Sony alike without any need to place the previous generation's chips on the BOM.
 
The new GPU, if it originated with ATI/AMD, would likely ensure full B/C simply by being a forward extension of the architecture. Throw in some specific instruction support and it can be derived from whatever family (I do think DX11, though) they choose to branch off of when the time comes.

In terms of the CPU, well... we'll see what they do. If it's IBM again, then a scaling of the XeCPU, or a new branch off a more current architecture with some instruction support again, should leave them just fine on B/C.

For PS2 on PS3, the jump from the Graphics Synthesizer to RSX was just too dramatic a break architecturally, so yes, they needed that chip in there to make it work out. They put the EE in there at first as well, until they could get emulation up to par on Cell, but they did so in fairly short order.

Thinking that the Xbox 720 might be a scaling of the 360 architecture is a fine theory; the Sun architectural stuff I don't buy into in the least anyway, so that one's not even a factor. But for myself, I do think that the hardware will be modernized rather than simply scaled, and if scaled, probably not from a single-chip solution (unless the next gen truly becomes one of incrementalism).

For the next Xbox, just about any Power core from POWER5 onwards, with proper tweaking, would be a real improvement and would still make BC relatively easy due to an ISA similar to Xenon's.

Keeping the CPU and GPU separate is likely to offer better yields and more flexibility in taking advantage of fabrication advancements during the generation.

Any D3D11 GPU will be able to cover Xenos's feature set in 2012, barring a few eccentricities here and there. If D3D12 is an option, then emulating Xenos will only be that much easier.

BC should be much easier to implement for MS and Sony alike without any need to place the previous generation's chips on the BOM.

Thanks, that gives me a much better understanding.
 
In terms of the CPU, well... we'll see what they do. If it's IBM again, then a scaling of the XeCPU, or a new branch off a more current architecture with some instruction support again, should leave them just fine on B/C.

Just a few questions to throw out there...

a) Would there be much sense in taking VMX128 any further? And if so, in what way?
b) Considering the closed-system development environment, what might be an optimal setup for the cache hierarchy (considering L2 size, the scope of the L2, or the existence of an L3)? I was thinking along the lines of how the L2 differs between Core 2 and Nehalem, i.e. the L2 is large and shared in Core 2, but small and per-core in Nehalem. I suppose it is a question of how large is too large to be fast enough... Does a higher-level cache for the SPEs make any sense?
 
1024 stream processors?! Even in the "worst" case, with the next-generation consoles delayed to the end of 2012 due to the economic recession, there is no way to have 1024 superscalar ALUs on a 200 mm^2 die... it would require a 16nm process, expected in 2014 or later, not in 2012.
For the next Xbox, if Microsoft uses an ATI chip again, it would probably be a descendant of the next DirectX11 architecture (R800).
I expect the use of a 28nm/32nm process, and a shader-to-TMU ratio of 6:1.
From the spec point of view:
200 mm^2
256-bit/192-bit @ GDDR5 or XDR2 -> bandwidth 200-300 GB/s (quick math below)
20ish ROPs
400-500 Vec5 ALUs
60-80 TMUs
One memory pool of 4 to 8 GB GDDR5/XDR2

If they stick with IBM, the CPU will be a descendant of the Power7 architecture.
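
For what it's worth, the bandwidth line above is easy to sanity-check: peak bandwidth is just bus width times per-pin data rate. A quick sketch, where the GDDR5 data rates are assumptions rather than roadmap figures:

Code:
#include <stdio.h>

int main(void)
{
    /* Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbit/s).
     * The data rates below are assumed circa-2012 GDDR5 speeds, not quoted parts. */
    const int    bus_bits[]     = { 192, 256 };
    const double gbit_per_pin[] = { 5.0, 6.0, 7.0 };

    for (int b = 0; b < 2; b++)
        for (int r = 0; r < 3; r++)
            printf("%3d-bit bus @ %.1f Gbit/s per pin -> %5.0f GB/s\n",
                   bus_bits[b], gbit_per_pin[r],
                   bus_bits[b] / 8.0 * gbit_per_pin[r]);
    return 0;
}

So roughly 200 GB/s is reachable at the top of that range (256-bit at ~6-7 Gbit/s per pin); 300 GB/s would need a wider bus or faster memory.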

Well then, a multi-GPU system in the ATI fashion (several middle-of-the-road GPUs paired), and then hope for a proper cooling system? Hmmm...

Considering the vast amount of increased detail in rendering, I think more than 20 ROPs would likely be needed if true 1080p rendering is desired. The current consoles barely get by at 720p with only 8 ROPs. And I think 4 GB will probably be the limit in terms of memory needed. Does any PC game really make practical use of more than 512 MB, except maybe Crysis? I can't see PC graphics cards in 2012 having more than 1.5/2.0 GB of VRAM for the more practical cards (of course there will be the crazy over-the-top cards with 2x the VRAM they really need), so I think 4.0 GB will be close to what the next consoles come equipped with if 1080p with 2x AA is the target. Looking back on console development, 8x seems to be the factor by which things increase, whether it's "enough for the tasks at hand" or not.

Xbox --> 64 MB Unified RAM
Xbox 360 --> 512 MB Unified RAM + 10 MB eDRAM
Xbox 720 (or whatever) --> 4096 MB Unified RAM + possible x eDRAM
 
Just read this.
"The 32 nm technology is getting ready to go into the manufacturing phase, we are lining up fabs to support it and we expect great demand," said Mark Bohr, director of Intel's technology and manufacturing group. "We are on track for shipping products in the fourth quarter and have 22 nm technology in development for 2011," he said.

The current trend is that the process going into the consoles usually trails Intel's CPUs by about two years. I'd expect 32nm CPUs to appear in the PS3 and the 360 on the market for the holiday season of 2011. If the next-generation consoles appear within the 2011-2012 time frame, we can expect them to be built on the same process.

What hard facts are available concerning the 360 Valhalla system? Does anyone have a link to the original source(s)?
 
I have a gut feeling that MS won't increase the number of cores beyond four, since their own research concluded that, "for the types of workloads present in a game engine, we could justify at most six to eight threads in the system (Jeff Andrews and Nick Baker, "Xbox 360 System Architecture", IEEE Micro, March-April 2006, p. 35)."

How about just doubling the cores from 3 to 6, with one hardware thread per core? If the process node were 32nm they could double the number of cores and the cache, and maybe even increase the clock. They might even be able to use beefier cores that are 100% backwards compatible, similar to what IBM did going from the GC's Gekko to the Wii's Broadway.
 
Even with state-of-the-art efficiency per transistor per cycle for their superscalar cores, this won't stop being true. This is more an argument for spending more area on SPUs/Larrabee than anything else.
True, but the hardware is the starting point; everything else can only be built around it. Anyway, as pointed out with the tessellator, there is the question of whether there is a real point in having too much specialized hardware.
PC999 said:
That is the beauty of Fusion-like designs, although there may be a problem of such units going underused (like the tessellator), depending on how much they bring and how easy they are to use.

Anyway, it seems to me that a Fusion design is perfect for a console...
I will try to answer both of your posts here (even if one of your posts, PC999, is not aimed at me).
Well, SPUs or Larrabee cores are tinier than a Xenon core but remain significantly bigger than dedicated parts. Decompression, for example, is the kind of task that will always come into play. The choice between specialized/fixed-function hardware and more generic resources is likely to be made on needs and performance and how they balance each other (I'm a master of the obvious...).
GPUs moved away from dedicated units for vertex and pixel shading because it made sense for flexibility, hardware utilisation, etc., but they kept texture filtering tied to dedicated hardware. Basically, there the trade-off for flexibility and hardware utilization (/whatever) is too expensive => it's slow.
I feel like it's the same for compression/decompression. GPUs are more than 5 times faster at decoding/decompressing than CPUs (Badaboom and the like), OK. But do you need that many transistors to beat the speed of a CPU? Clearly not: the SPURS Engine is about a tenth of an actual GPU and it is a match for top-of-the-line GPUs (I don't remember the figures; I guess it beats them, not to mention power efficiency). And actually the SPURS Engine can do much more; matching its performance on decoding/decompression alone would need far fewer transistors.
As I said, decompression will always be part of the equation, as you will have to stream compressed data to RAM or the GPU even when using an SSD. I think it would make sense to dedicate a tiny portion of silicon to units that get the job done for cheap.

Why spend more time and money on a "flawed" CPU? In 2010 any dual-core solution (or a cut-down one) would beat it in every single aspect, many of them offering OoOE, branch prediction, good latency, power efficiency... only for the sake of very easy BC.

Anyway, don't be too hard on Xenon; in some things it is probably way better than PC CPUs (raw FP or dot products).
I don't think it's "flawed"; I would rather say it needs to be perfected :LOL:
I think that BC will have its weight on the design. As you pointed out, Xenon is not bad at everything; I guess for some workloads its clock speed may have some merit (along with the AltiVec units, as you pointed out). I think that MS will need a CPU running at the same speed to ensure no-problem BC. The point is that a CPU granted a potent OoO execution engine, great branch prediction, etc., running at least @3.2 GHz, would be too big and too hot. My point was that MS can't/won't afford a top-of-the-line x86 CPU (nor do AMD/Intel have reasons to sell one cheap).

That's why I think that MS should work on "reasonable improvement" instead of starting from scratch again (keeping the Power ISA). That's not to say that the number of pipeline stages may not be modified at some point, for example, but they should stick to the Xenon philosophy.

POWER6 is an in-order processor, but I remember reading that it does indeed do some kind of simplistic form of OoO execution; sadly I can't remember where I read it. If my memory is right, MS could look in that direction to improve performance with a smaller energy cost than a potent OoO engine would have.

Obviously MS should look at better prefetching, lower latencies, branch prediction, etc., but expecting x86 levels of performance is delusional.

I quote a part of your post here, Alstrong ;)
Just a few questions to throw out there...

a) Would there be much sense in taking VMX128 any further? And if so, in what way?

While I was searching for info on POWER6 (see above) I found some interesting stuff on the wiki:
http://en.wikipedia.org/wiki/POWER6
http://en.wikipedia.org/wiki/Power_Architecture#Power_ISA_v.2.03

There is an AltiVec unit to POWER6, and the processor is fully compliant with the new Power ISA v.2.03 specification. POWER6 also takes advantage of ViVA-2, Virtual Vector Architecture, which enables the combination of several POWER6 nodes to act as a single Vector processor.
Actually I don't know if that would be useful, as VMX128 is already pretty large; or maybe doubling the width would be a better move. Basically I've no clue, insights welcome here too :)

Power ISA v.2.03

The specification for Power ISA v.2.03[7] is based on the former PowerPC ISA v.2.02[3] in POWER5+ and the Book E[1] extension of the PowerPC specification. The Book I included five new chapters regarding auxiliary processing units like DSPs and the AltiVec extension.

Book I – User Instruction Set Architecture covers the base instruction set available to the application programmer. Memory reference, flow control, Integer, floating point, numeric acceleration, application-level programming. It includes chapters regarding auxiliary processing units like DSPs and the AltiVec extension.
This is interesting if MS finds that decompression units, and why not network accelerators, would fit their goals.

I wonder if implementing scatter/gather would help for the kind of workloads likely to run on the CPU?
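
For anyone unfamiliar with the term, gather/scatter just means vector loads and stores driven by a table of indices instead of consecutive addresses. A scalar C sketch of the access pattern such an instruction would replace (illustrative only):

Code:
#include <stddef.h>

/* Gather: load values from scattered locations through a table of indices.
 * A CPU with a hardware gather instruction could fill a whole vector register
 * from this pattern at once; without it, loops like this stay scalar. */
void gather(float *dst, const float *src, const unsigned *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[idx[i]];          /* indexed (non-contiguous) load  */
}

/* Scatter: the mirror image, writing results back through an index table. */
void scatter(float *dst, const float *src, const unsigned *idx, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[idx[i]] = src[i];          /* indexed (non-contiguous) store */
}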


How about just doubling the cores from 3 to 6, with one hardware thread per core? If the process node were 32nm they could double the number of cores and the cache, and maybe even increase the clock. They might even be able to use beefier cores that are 100% backwards compatible, similar to what IBM did going from the GC's Gekko to the Wii's Broadway.
Well, multithreading is a way to improve IPC; such cores would have to use other tricks to keep up on performance, tricks that are likely more power-hungry, but it's not impossible.
 
I don't think it's "flawed"; I would rather say it needs to be perfected :LOL:
I think that BC will have its weight on the design. As you pointed out, Xenon is not bad at everything; I guess for some workloads its clock speed may have some merit (along with the AltiVec units, as you pointed out). I think that MS will need a CPU running at the same speed to ensure no-problem BC. The point is that a CPU granted a potent OoO execution engine, great branch prediction, etc., running at least @3.2 GHz, would be too big and too hot. My point was that MS can't/won't afford a top-of-the-line x86 CPU (nor do AMD/Intel have reasons to sell one cheap).

Thanks for the answer, although I do have some doubts about x86 cores not reaching 3.2 GHz at comfortable prices/temperatures: both the Pentium/Core 2 Duo line and the X2 reach at least 3 GHz without trouble, while being quite inexpensive at 65nm (a 45nm X3 shouldn't be much harder or pricier either?). How hard would it be to improve them (e.g., add a similar VMX unit)?

Anyway, you are right that Intel does not have any reason to sell them cheap; AMD may have many, but I am just trying to point out that there are many options even if they are not from IBM. Personally, unless they go for a full Fusion design, or they can get an Intel CPU+GPU really cheap, IBM is probably a good (the best) option, as they should offer many options too; even a dual-core Power7 (1/8 of the real thing) should be really interesting.
 