Predict: The Next Generation Console Tech

I assume you've played both, so you can say this? What's so special about Condition Zero's AI? I've seen presentations about KZ2's AI calculations and I've seen it in action, and it's really special compared to a mod of a mod of an old game. I've played all the Half-Life games and the AI didn't compare to KZ2. The very fact that I don't see any PC-exclusive games with better AI than anything on a console disproves your statement. Console CPUs are able to handle AI just fine, no x86 bloat needed.

The CS:CZ AI, and the further improvements that Turtle Rock has made over time, actually reacts realistically, performs maneuvers, etc. The KZ2 AI reacts way too predictably, making it pretty useless. In contrast, the CS:CZ AI could react in ways that make logical sense and actually respond to your strategies.

Where is this "Latest Data?" How many cores is being compared? Since I don't expect you to actually give credible links, here's the best I could find from August 2008, Software refers to a QX9650 (a very expensive CPU), PPU refers to Aegia Physics processor (they were bought by NV), and other the GPU.

http://forum.beyond3d.com/showthread.php?t=56878

I know, it's hard for you to leave the console forums and do any research on your own.
 
MS went from x86 to a custom CPU, so you're obviously wrong about the cost here. Intel/AMD just won't sell CPUs that cheap to console makers when the PC market is much, much bigger. No other console maker has used x86, so I guess you're going to take the laughable position of comparing yourself to the pros in the industry and saying that you're better, just like you did with the Credit Suisse analysts.

Sony likely won't make the same mistake of wasting billions developing dead-end hardware again. The 360 CPU has been very much less than good. IBM likely doesn't want another disaster like the PPU/Cell again.

MS went away from x86 because they signed the worst possible contract on everything to do with the original Xbox.

The reality is that PC processors are way ahead of anything else in the industry for mass-market volume.


BS. It wouldn't be able to do half the graphics processing that Cell does. Most game-related calculations suit GPUs and stream processors well, not CPUs. Writing to the metal would only help it get by with 512MB of total memory.

The SPUs are largely unused in a lot of games, and having 2-4 real cores more than makes up for not having SPUs that are difficult to program, require lots of dev resources, and don't really provide that much benefit compared to even SSE. You seem to think Cell is a godsend; if so, you might want to ask yourself why everyone has already bailed on it.
 
Sony likely won't make the same mistake of wasting billions developing dead-end hardware again. The 360 CPU has been very much less than good. IBM likely doesn't want another disaster like the PPU/Cell again.

MS went away from x86 because they signed the worst possible contract on everything to do with the original Xbox.

The reality is that PC processors are way ahead of anything else in the industry for mass-market volume.

The crux of the matter is how cheaply x86 CPUs can be had if they routinely fetch $100+ on the open market. The only way Intel is ever getting into bed with Microsoft or anyone else is if they decide they want to take on Larrabee or one of its family members and start pushing a ray-tracing console. On the other hand, AMD would probably be a lot more forthcoming, but that doesn't mean it'll be cheap, as any console manufacturer still has to pay them what the IP is worth, and x86 is worth quite a bit.

I'm open to you having some special insight here, as you are more closely involved in this matter than anyone else, as far as I can tell.
 
NRE = non-recurring engineering

It's called Google. Literally the FIRST result answers the question. NRE is basically all the design work that doesn't factor into the marginal production cost. So it's basically a sunk cost.
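To make the sunk-cost point concrete, here's a back-of-the-envelope amortization (the dollar figures are made up purely for illustration):

```latex
\text{cost per unit} = \frac{\text{NRE}}{\text{units shipped}} + \text{marginal cost}
\qquad\Rightarrow\qquad
\frac{\$400\,\text{M}}{80\,\text{M units}} + \$40 = \$45\ \text{per console}
```

Double the expected volume and the design cost per box halves, while the marginal cost stays put; that's why NRE-heavy custom silicon only makes sense at console-sized volumes.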

Sorry, yes, I should have Binged, Googled, Yahoo'd or otherwise searched it. I had assumed it was something more specialised to the industry, or specifically the semiconductor industry.
 
The crux of the matter is how cheaply x86 CPUs can be had if they routinely fetch $100+ on the open market. The only way Intel is ever getting into bed with Microsoft or anyone else is if they decide they want to take on Larrabee or one of its family members and start pushing a ray-tracing console. On the other hand, AMD would probably be a lot more forthcoming, but that doesn't mean it'll be cheap, as any console manufacturer still has to pay them what the IP is worth, and x86 is worth quite a bit.

From a business perspective, AMD would have a significant incentive to block Nvidia in the console market as a way to prevent further penetration of things such as CUDA, PhysX, etc. It also provides them with additional volume for their CPU business and a way to amortize their NRE costs for the base components.
 
From a business perspective, AMD would have a significant incentive to block Nvidia in the console market as a way to prevent further penetration of things such as CUDA, PhysX, etc. It also provides them with additional volume for their CPU business and a way to amortize their NRE costs for the base components.

I understand the incentives. But that doesn't stop them from charging about what x86 is worth, minus the other fringe benefits of having their chips in the console. x86 is still worth quite a lot, even AMD's implementation of it. I cannot see how a console manufacturer is going to walk away from a deal with AMD with less money changing hands than there was in the previous generation with IBM.

With respect to CUDA etc., I doubt that the platform holders particularly trust Nvidia, nor will they want to find themselves locked into Nvidia's proprietary solutions. Nvidia has as much working against them as a competitor in the console space as they have working for them. In addition, until they sort out their performance per mm² and per watt, they aren't exactly a viable alternative for the designs being drawn up right now, especially as ATI has close ties with GloFo as an alternative supplier of the physical chips.

In many respects I can see Nintendo cutting a deal with Intel, because they aren't going to be demanding access to cutting-edge fab processes every other year and will probably just stick with whichever process they launch on. Intel would love to sell millions of chips at a good rate of return off their legacy process nodes. In addition, Nintendo would probably benefit the most from implementing whatever Larrabee has turned into, as of the three they are probably the most interested in the unique visual presentation and potentially lower development costs of ray tracing.
 
The SPUs are largely unused in a lot of games, and having 2-4 real cores more than makes up for not having SPUs that are difficult to program, require lots of dev resources, and don't really provide that much benefit compared to even SSE. You seem to think Cell is a godsend; if so, you might want to ask yourself why everyone has already bailed on it.

This (SPUs being largely unused) is much, much less true in 2010 than it was in 2007.

And programming the SPUs isn't all that hard. It's organizing the code into multiple discrete tasklets that requires the effort. The SPUs can be programmed acceptably well with straight GCC.
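For what it's worth, here is a minimal sketch of the usual double-buffered tasklet pattern, using the mfc_* DMA intrinsics from the Cell SDK's spu_mfcio.h (CHUNK and process_chunk are made-up names for illustration; this is a sketch, not production code):

```c
#include <spu_mfcio.h>  /* Cell SDK MFC DMA intrinsics */
#include <stdint.h>

#define CHUNK 4096  /* bytes per DMA transfer (illustrative size) */
static volatile uint8_t buf[2][CHUNK] __attribute__((aligned(128)));

/* Hypothetical per-chunk kernel: the tasklet's actual work goes here. */
extern void process_chunk(volatile uint8_t *p, unsigned n);

/* Stream nchunks of data in from effective address ea, overlapping
   the DMA of chunk i+1 with the processing of chunk i. */
void run_tasklet(uint64_t ea, int nchunks)
{
    int cur = 0;
    mfc_get(buf[0], ea, CHUNK, 0, 0, 0);        /* prime buffer 0 */
    for (int i = 0; i < nchunks; i++) {
        if (i + 1 < nchunks)                    /* kick off next DMA */
            mfc_get(buf[cur ^ 1], ea + (uint64_t)(i + 1) * CHUNK,
                    CHUNK, cur ^ 1, 0, 0);
        mfc_write_tag_mask(1 << cur);           /* wait only for the */
        mfc_read_tag_status_all();              /* chunk we need now */
        process_chunk(buf[cur], CHUNK);         /* compute overlaps DMA */
        cur ^= 1;
    }
}
```

The DMA boilerplate itself should compile with stock spu-gcc; as said above, the hard part is carving the game's work into self-contained chunks like this in the first place.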

As far as 'everyone else', well, it's a console part, designed for what it's being used for in the PS3. We'll know more when Sony announces a direction for PS4, whenever that comes about.
 
I understand the incentives. But that doesn't stop them from charging about what x86 is worth, minus the other fringe benefits of having their chips in the console. x86 is still worth quite a lot, even AMD's implementation of it. I cannot see how a console manufacturer is going to walk away from a deal with AMD with less money changing hands than there was in the previous generation with IBM.

With respect to CUDA etc., I doubt that the platform holders particularly trust Nvidia, nor will they want to find themselves locked into Nvidia's proprietary solutions. Nvidia has as much working against them as a competitor in the console space as they have working for them. In addition, until they sort out their performance per mm² and per watt, they aren't exactly a viable alternative for the designs being drawn up right now, especially as ATI has close ties with GloFo as an alternative supplier of the physical chips.

In many respects I can see Nintendo cutting a deal with Intel, because they aren't going to be demanding access to cutting-edge fab processes every other year and will probably just stick with whichever process they launch on. Intel would love to sell millions of chips at a good rate of return off their legacy process nodes. In addition, Nintendo would probably benefit the most from implementing whatever Larrabee has turned into, as of the three they are probably the most interested in the unique visual presentation and potentially lower development costs of ray tracing.

It would be incredibly interesting if Nintendo actually went with some kind of Larrabee-derived chip that on its own could both orchestrate and render games on a single piece of silicon. The issue would then be backwards compatibility, which I think would be important for Nintendo, as too many Wii owners would be confused by its absence. Maybe such a Larrabee derivative could run an emulator? It's easily done on the PC already with a dual-core x86 architecture. Sure, it isn't perfect, but it's pretty damn good and doesn't have as many problems as PS2 emulation.
 
The CS:CZ AI, and the further improvements that Turtle Rock has made over time, actually reacts realistically, performs maneuvers, etc. The KZ2 AI reacts way too predictably, making it pretty useless. In contrast, the CS:CZ AI could react in ways that make logical sense and actually respond to your strategies.
1. There is absolutely no proof that a console CPU wouldn't be able to do that.
2. The lack of popularity and coverage of a mod of one of the most popular PC shooters shows that it's not that important; the market doesn't care about it.

http://forum.beyond3d.com/showthread.php?t=56878
I know, it's hard for you to leave the console forums and do any research on your own.
1. That's a synthetic benchmark; who knows what optimizations were done on each side, and who's to say the PC code isn't taking any shortcuts, etc.
2. It still shows that a dedicated GTX260 used for physics processing is twice as fast as a quad-core Intel. So 2-4 lightweight integer cores plus a bunch of GPU "cores" (GPU processing units without the graphics-specific parts) would be a better design.
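To illustrate why physics maps so well onto wide hardware, here's a toy integration step (a made-up example, not from that benchmark): every loop iteration is independent, so a GPU can assign one body per thread across thousands of threads, while a quad-core CPU has only four cores to throw at the same loop.

```c
/* Toy particle integration: no cross-iteration dependencies,
   which is exactly the shape of work GPUs are built for. */
typedef struct { float x, y, z, vx, vy, vz; } body;

void integrate(body *b, int n, float dt)
{
    const float g = -9.81f;           /* gravity, m/s^2 */
    for (int i = 0; i < n; i++) {     /* each i is independent */
        b[i].vy += g * dt;
        b[i].x  += b[i].vx * dt;
        b[i].y  += b[i].vy * dt;
        b[i].z  += b[i].vz * dt;
    }
}
```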
 
Sony likely won't make the same mistake of wasting billions developing dead-end hardware again. The 360 CPU has been very much less than good. IBM likely doesn't want another disaster like the PPU/Cell again.
Most of Sony's billions did not go into Cell, since that was a joint effort. Blu-ray was a major factor, as was having to build an online service from scratch.
MS went away from x86 because they signed the worst possible contract on everything to do with the original Xbox.
Really, do you think you could have done better regarding the CPU contract? What are your qualifications? You think Intel would have said to MS, "OK, since you asked nicely, we will sell our chips to you with next to no profit rather than selling them to PC manufacturers at fat profit margins."

The SPUs are largely unused in a lot of games, and having 2-4 real cores more than makes up for not having SPUs that are difficult to program, require lots of dev resources, and don't really provide that much benefit compared to even SSE. You seem to think Cell is a godsend; if so, you might want to ask yourself why everyone has already bailed on it.
BS, it's 2010 and SPUs are being used for many things, and they run rings around SSE. I'm not saying Cell is a godsend, but it's a flawed step in the right direction. The future is GPUs becoming fully programmable and doing all the heavy math, and then being included with the CPU, either on the same die or package, in addition to a dedicated GPU. Which is similar to the Cell idea. x86's strengths are irrelevant to gaming, or at least mainstream gaming, as opposed to some fringe PC game nobody cares about. Look at the advancement in games over the past 5 years; it's all about graphics and physics, and both are better suited to GPGPUs.
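For reference, this is the sort of 4-wide SIMD the SSE-vs-SPU argument is about; a minimal SSE sketch using the standard xmmintrin.h intrinsics (assumes 16-byte-aligned arrays and n divisible by 4):

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* y[i] += a * x[i], four floats per iteration. */
void saxpy_sse(float a, const float *x, float *y, int n)
{
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_load_ps(x + i);
        __m128 vy = _mm_load_ps(y + i);
        _mm_store_ps(y + i, _mm_add_ps(vy, _mm_mul_ps(va, vx)));
    }
}
```

An SPU issues math of the same width, but Cell packs several such units on one die, each with its own 256KB local store; that is roughly what the "runs rings around SSE" claim comes down to.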
 
It would be incredibly interesting if Nintendo actually went with some kind of Larrabee-derived chip that on its own could both orchestrate and render games on a single piece of silicon. The issue would then be backwards compatibility, which I think would be important for Nintendo, as too many Wii owners would be confused by its absence. Maybe such a Larrabee derivative could run an emulator? It's easily done on the PC already with a dual-core x86 architecture. Sure, it isn't perfect, but it's pretty damn good and doesn't have as many problems as PS2 emulation.

They already have an x86-based Wii emulation software package. They distributed it to developers before the official kits could be sent out. A Sandy Bridge-based x86 Intel chip with 2 cores and 6 hardware threads linked to a Larrabee-style CPU array would be a pretty good fit on Intel's 22nm process in 2012/2013, and I'm pretty sure it would have no problems emulating the more straightforward architecture from circa 2001. In addition, they wouldn't have to worry about the vagaries of process scaling past that node, as they typically have not bothered with process shrinks, and if the system itself is cheap enough they would not need to.

Really, do you think you could have done better regarding the CPU contract? What are your qualifications? You think Intel would have said to MS, "OK, since you asked nicely, we will sell our chips to you with next to no profit rather than selling them to PC manufacturers at fat profit margins."

His qualifications are a Google search away. Just look up Aaron Spink and something like "semiconductor" or "engineer" and you'll see who he is.
 
The 360 CPU has been very much less than good.

By which criteria? It's running multiplatform games about as well as the fastest PC CPUs from 2005, and it was presumably more cost-effective than going with an x86 part. If MS had gone with an off-the-shelf PC CPU, what would they have chosen?

Pentium 4? Hot and slow. Even Intel wanted to get away from it, and they had to threaten and bribe to protect it from the A64.

Pentium M? A single chip couldn't match Xenon, and IIRC there were no multi-chip Pentium Ms in 2005.

Athlon 64 X2? The last thing MS would have wanted was to be lumbered with an additional pool of DDR1.

C2D would have been a better fit, but that didn't come until mid-2006.

MS went away from x86 because they signed the worst possible contract on everything to do with the original Xbox.

I think it's more likely they went away from x86 because they couldn't get the right deal this time rather than because they got the wrong deal last time.

The reality is that PC processors are way ahead of anything else in the industry for mass-market volume.

And yet console makers run away from them like the plague. Even MS ran away. There's got to be a reason. Things may change for the next generation, of course.
 
I'd rather they spent some of the transistor budget on a boatload of Atom cores with a switched networking fabric :) Cell might have been too hard to program due to its cache and DMA limitations, but it's still the right idea IMO. I'd say one wide superscalar core is perfectly enough; after that it's time for higher-density architectures ... now of course development costs might still make simply going with some multicore COTS processor the better deal, but I'd prefer to see something a little more elegant.

What about a kind of middle ground: a bunch of Bobcat cores with a high-bandwidth bus to the GPU / memory controller (like the 360). Bobcat is supposed to be synthesisable, so a custom configuration (rather than an off-the-shelf part), designed to work with an AMD GPU and manufactured on GF's best process, could work out well. Maybe.
 
For instance ... I simply don't think something like that has to take billions. It would essentially work as a cluster on a chip. Hell, Intel is making such a beast just for shits and giggles (a research project), except with slightly larger cores.
 
The crux of the matter is how cheaply x86 CPUs can be had if they routinely fetch $100+ on the open market. The only way Intel is ever getting into bed with Microsoft or anyone else is if they decide they want to take on Larrabee or one of its family members and start pushing a ray-tracing console. On the other hand, AMD would probably be a lot more forthcoming, but that doesn't mean it'll be cheap, as any console manufacturer still has to pay them what the IP is worth, and x86 is worth quite a bit.

I'm open to you having some special insight here, as you are more closely involved in this matter than anyone else, as far as I can tell.
I checked the price of an i3 530 on the web; it's anywhere between 99 and 110 €/$. But that's the retail price; I wonder at what price OEMs like HP, Dell and Acer buy them. Maybe Aaron may indeed have insight on the matter ;)
Anyway, it won't tell the whole picture, since as far as volumes are concerned those OEMs are likely to order more CPUs than console manufacturers would buy.
By the way, I considered the i3 because it's an interesting chip: it includes an 81mm² CPU and a ~120mm² IGP/northbridge, and its transistor count is close to that of our consoles (560 million transistors, see here).
I'll try to get my hands on the latest iSuppli analyses of the 360 and the PS3, for what they're worth, for the sake of comparison.

EDIT:
I didn't find anything interesting or recent enough for the 360; here's my find for the PS3:
[image: PS3 cost breakdown, dated 2009-12-11]

It's from videogame2play.com

Below is an estimate from 2006:
[image: PS3 cost estimate, dated 11/17/2006]

Found this at emsnow.com
 
I found this in repi's presentation, slide 49:
http://www.slideshare.net/repii/parallel-futures-of-a-game-engine-2478448?from=ss_embed
[screenshot of slide 49 from the presentation]

"Irrelevant" seems to be the keyword here.

Some points you missed:

1. 2015.
2. GPUs of 2010 are in the 2TFLOPs range; do we expect a 25x FLOP increase in less than 5 years?
3. GPU architecture is very far away from running game code.
4. CPUs are still lagging behind in the 200GFLOP range.
5. Yes, OOOe is irrelevant for significant amounts of FLOPs (especially graphics, which, combining programmable FLOPs with the non-programmable ones found in TMUs and such, is already in the tens of TFLOPs) but ...
6. but ... the game loop, as non-FLOP-intensive as it may be, can hold you back a lot.
7. And parallelizing it isn't so easy/efficient (see the worked Amdahl's law figure after this list).
8. Creating a parallel game loop on GPU-like "cores" would be a nightmare. Or even on SPEs.
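Points 6 and 7 can be made concrete with Amdahl's law; a worked figure (the 90% parallel fraction is an assumption, chosen only to show the shape of the curve):

```latex
S(n) = \frac{1}{(1-p) + p/n},
\qquad
p = 0.9,\; n = 32 \;\Rightarrow\; S = \frac{1}{0.1 + 0.9/32} \approx 7.8
```

Even a 90%-parallel game loop gets less than an 8x speedup from 32 cores; the serial remainder is what keeps fast OOOe cores relevant.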

You could extend the list, but there is this major factor: even if DICE, with all their mojo and EA moneyhats, wants a system like this, the real question is whether it is good for the market.

The market is growing decidedly in the mobile and "arcade"-like sectors. Likewise, a slew of software is done by smaller studios. Not to mention that development expectations are increasing while turnaround time has stabilized. There is a premium on shipping games on time and on budget, not on getting the most out of esoteric hardware.

For all the mountains of posts about how the PS3, and especially Cell, is not only 2x faster BUT architecturally superior to the 360 CPU (which is a dog), very little has been done with that in terms of the industry. It doesn't mean it is useless, but the bottom line is developing hardware that caters to market demands.

That said, I do think we may see some compromises, where there will remain a small number of very efficient, fast, serially oriented CPUs (like x86 OOOe processors) alongside a consolidation of the "FLOP" resources into many very simple cores, a la GPUs. Performance per mm² is very high for GPU cores, so if simple peak FLOPs are what you are going for, that is the direction to take.

A 2nd- or 3rd-generation Llano-style CPU, with a handful of very fast OOOe CPU cores (meeting the needs of the serial game loop, "deadline non-efficient code," indie devs, etc.) and the vector hardware built on die (it could use extensions, similar to old-style on-die FPUs, giving you high peak FLOP performance as well as a setup and/or post-processing monster for the GPU, plus physics and such if the libraries ever catch up), and then a normal GPU. Down the road, 5-7 years after this style of system, we could see single-chip solutions with a few fast OOOe cores on a sea of GPU-styled cores.

Anyhow, after watching PS3 owners shrug over losing Linux because it was piss slow at basic tasks (like web browsing), I am not sure how arguing for even simpler, more basic cores than the PPE in Cell, and claiming serial performance isn't important, holds up ... my 1.4GHz Core Solo netbook runs FF faster ;) While ND and DICE may not need a faster main processor, I think we have heard many developers note that speeding up their core loop, and having some "forgiveness" for bad code when crunch hits, could really make a difference in a lot of games. Just because the industry is moving in one direction doesn't mean the move should be made overnight; that was probably the biggest problem with Cell. Sony tried to use their market share to force the industry in a particular direction. But they didn't anticipate the strength of the competition or the importance of tool chains, they were late (and half-baked), and the industry didn't buy into their vision. Right direction, but wrong road, it seems.
 
I checked the price of an i3 530 on the web; it's anywhere between 99 and 110 €/$. But that's the retail price; I wonder at what price OEMs like HP, Dell and Acer buy them. Maybe Aaron may indeed have insight on the matter ;)
Anyway, it won't tell the whole picture, since as far as volumes are concerned those OEMs are likely to order more CPUs than console manufacturers would buy.
By the way, I considered the i3 because it's an interesting chip: it includes an 81mm² CPU and a ~120mm² IGP/northbridge, and its transistor count is close to that of our consoles (560 million transistors, see here).
I'll try to get my hands on the latest iSuppli analyses of the 360 and the PS3, for what they're worth, for the sake of comparison.

But the thing is, instead of looking at an overall bill of materials for a console of between $500 and $800, we may very well be looking at an overall bill of materials under $300, or at worst $400. Some sacrifices have to be made in terms of overall cost, and the chips which make up the console are a big part of that initial cost especially.

Within that overall figure some major sacrifices probably have to be made. For instance, you'll probably not see any optical-drive movie playback, or it'll be an optional extra which costs money on top of the overall console. We'll probably also see a move away from mechanical HDDs for the cheaper $200-250 consoles in favour of flash storage on the motherboard itself, and perhaps even a move towards unified CPU/GPU architectures as a means of lowering overall console size, cooling costs and board complexity.

The obvious reason why this keeps x86 in play, so long as it is cheap enough, is that both Intel and AMD are working towards fitting a GPU and CPU onto the same die, although their methodologies differ somewhat. Cost is still the important roadblock here: can they get x86 + GPU cheap enough? Will AMD/Intel charge closer to what x86 is really worth on the market, as seen with the Core i3 in your example, or something closer to the cost of production, which is less than half that chip's wholesale price?
 