Predict: The Next Generation Console Tech

Another issue with only a single platform adopting Larrabee is that developers wanting to stay cross-platform would likely not take advantage of much of Larrabee's flexibility, because it simply wouldn't be portable to the other system. Somewhat similar to how many cross-platform developers have trouble making good use of the PS3's SPUs. Unless Larrabee has some advantage in DX11-level GPU performance/cost, a Larrabee platform could suffer the same issues the PS3 does now: (a) being the slower machine at the lowest common feature set between platforms, (b) requiring lots of re-engineering of engine design to take advantage of the hardware. A lot of people seem to think the x86+caches of Larrabee somehow solves programming complexity for parallel code and everything will be easy, when the reality is that it's going to take a lot of really smart parallel programmers with a keen understanding of cache control and of vectorizing general-purpose code to make good use of the hardware. It would also give up the primary advantage the 360 had this generation: it was much easier for lazy developers to port PC code over.

As for Larrabee and other DX11-level GPUs, it seems like Sony will be in a strange situation. IMO the compute shader has some serious advantages (vector scatter and cached vector gather) over the SPU model. Why have two forms of almost the same thing? SPUs arguably have a slightly better local-store-to-ALU ratio than current GPUs, but by next generation that gap could very well be gone. It would be a hard choice between forcing developers to migrate SPU jobs to the compute shader and tossing the SPUs, or offering developers a mess of both.
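To make the gather point concrete, here is a rough host-side sketch (mine, not from the post; dma_get is a hypothetical stand-in for a local-store transfer, not a real SPU intrinsic): the cache-based kernel can index anywhere and let the hardware absorb the irregularity, while the local-store version has to stage a whole block before it can touch a single element.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Cache-based model (compute-shader / Larrabee style): gather at arbitrary
// indices and let the hardware-managed cache absorb the irregularity.
float sum_gather_cached(const float* data, const uint32_t* idx, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += data[idx[i]];
    return acc;
}

// Local-store model (SPU style): software must stage each block explicitly.
// dma_get() is a hypothetical stand-in for an asynchronous local-store DMA.
static void dma_get(void* ls, const void* ea, size_t bytes) { std::memcpy(ls, ea, bytes); }

float sum_gather_localstore(const float* data, const uint32_t* idx, size_t n) {
    constexpr size_t kBlock = 1024;          // elements per staged block
    alignas(16) static float ls[kBlock];
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        size_t block = idx[i] / kBlock;      // which block holds this element?
        dma_get(ls, data + block * kBlock, kBlock * sizeof(float));
        acc += ls[idx[i] % kBlock];          // read it out of local store
    }
    return acc;                              // naive: one block transfer per access
}

int main() {
    std::vector<float> data(4096, 1.0f);
    std::vector<uint32_t> idx = {7, 3000, 42, 1999};
    std::printf("%f %f\n",
                sum_gather_cached(data.data(), idx.data(), idx.size()),
                sum_gather_localstore(data.data(), idx.data(), idx.size()));
}
```

In practice an SPU programmer would sort or bin the indices, double-buffer the transfers, and so on, which is exactly the extra work being argued about; the cached version just works, even if not optimally.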
 
Also, do not judge PS4's GPU pitch by GT200's performance or by some shortcomings of RSX (which are not entirely nVIDIA's fault given the R&D time they could spend on it), but by GT300 and GT400 projected performance figures and by plans for better CELL/CELLv2 CPU-GPU integration (i.e. maybe going to UMA and solving that CELL-reading-from-VRAM-at-16 MB/s gaffe?).

I get that, but... I suppose GT300 won't be wildly different from GT200, and GT400 would be too late to base a console on (it needs at least a 1.5-year lead time, right?).

So based on current and past track record, ATI looks strong to offer something compelling going forward. I don't think that's wildly off base?
 
As for Larrabee and other DX11-level GPUs, it seems like Sony will be in a strange situation. IMO the compute shader has some serious advantages (vector scatter and cached vector gather) over the SPU model. Why have two forms of almost the same thing? SPUs arguably have a slightly better local-store-to-ALU ratio than current GPUs, but by next generation that gap could very well be gone. It would be a hard choice between forcing developers to migrate SPU jobs to the compute shader and tossing the SPUs, or offering developers a mess of both.

As I contended long ago, the CELL/SPU architecture is/was a dead end. The problem is that the local store lends itself to only one programming model and requires significant hand-holding versus a cache-based architecture with locking and NTI.
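As an illustration of the hand-holding (a sketch only; dma_get_async/dma_put/dma_wait are hypothetical stand-ins for explicit local-store transfers, stubbed out here so the code runs on a normal PC): even a trivial streaming kernel on a local-store machine needs manual double buffering, whereas the cache-based equivalent is a one-line loop.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical stand-ins for asynchronous local-store transfers; on real SPU
// hardware these would be MFC DMA commands, here they just copy immediately.
static void dma_get_async(void* ls, const void* ea, size_t bytes, int /*tag*/) { std::memcpy(ls, ea, bytes); }
static void dma_put(void* ea, const void* ls, size_t bytes) { std::memcpy(ea, ls, bytes); }
static void dma_wait(int /*tag*/) {}

// Local-store model: even a trivial streaming kernel needs manual double
// buffering so the next block's transfer overlaps with the current block's work.
// (The cache-based equivalent is literally just: for (i...) data[i] *= k;)
void scale_localstore(float* data, size_t n, float k) {
    constexpr size_t B = 256;                      // elements per local-store buffer
    alignas(16) static float buf[2][B];
    int cur = 0;
    size_t first = n < B ? n : B;
    dma_get_async(buf[cur], data, first * sizeof(float), cur);
    for (size_t base = 0; base < n; base += B) {
        size_t chunk = (n - base < B) ? (n - base) : B;
        int next = cur ^ 1;
        if (base + B < n) {                        // kick off the next transfer early
            size_t nchunk = (n - base - B < B) ? (n - base - B) : B;
            dma_get_async(buf[next], data + base + B, nchunk * sizeof(float), next);
        }
        dma_wait(cur);                             // make sure our block has arrived
        for (size_t i = 0; i < chunk; ++i)
            buf[cur][i] *= k;
        dma_put(data + base, buf[cur], chunk * sizeof(float));  // write block back
        cur = next;
    }
}

int main() {
    std::vector<float> v(1000, 2.0f);
    scale_localstore(v.data(), v.size(), 0.5f);
    std::printf("%f %f\n", v[0], v[999]);          // both should print 1.000000
}
```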
 
I think Larrabee just has to be roughly as good as what the others provide.
I think Intel is aiming for high volumes through different incarnations of the product for different markets.
There is a lot of pressure on tools/libraries, which is why Intel is looking for as much help as possible.
But to some extent it's quite the same for GPUs, no? (I mean Nvidia and ATI architectures are different; for compatibility you have to work not too close to the metal.)

I don't think they intend to kill the opposition; it's more like:
we want our share of the GPU market
we want our share of the GPGPU market
we want our share of the console market
For IGPs, well, we already have our share, but if the product scales well... fewer products, less R&D.

I feel like Intel wants to play it as if one product can fit them all...

Once Intel has volume, their strength will really show (in the same manner as in the CPU market).
 
I get that, but... I suppose GT300 won't be wildly different from GT200, and GT400 would be too late to base a console on (it needs at least a 1.5-year lead time, right?).

So based on current and past track record, ATI looks strong to offer something compelling going forward. I don't think that's wildly off base?

Lead times depend strongly on programming interfaces and models. The historical reason they were long is that everything in a console was custom and unique. As all the consoles head in the same general direction of main CPUs with a graphics/parallel subsystem, much of the early work on the games and dev environment can be done on proxy systems.

For example, a lot of the initial 360 games were developed on Mac Pro boxes with standard graphics cards and then tuned as prototype hardware became available.

So what is going to be the likely limiter is pre-silicon design/validation and post-silicon validation timelines.
 
As I contended long ago, the CELL/SPU architecture is/was a dead end. The problem is the local store itself lends itself to only 1 programming model and requires significant hand-holding vs a cache based architecture with locking and NTI.

I take it you see the DX11 compute shader as a similar dead end by that reasoning.
 
So what is going to be the likely limiter is pre-silicon design/validation and post-silicon validation timelines.

I meant lead time for the development of the GPU itself, i.e. having the GPU designed and manufactured ready for launch. When GT400 launches as a product in its own right, will there be enough time to fit it into a console within a circa-2012 timeframe?

If not, then looking at GT400 won't work, and GT300 I presume won't be too different from GT200, which for the purposes of consoles doesn't fit, for the various reasons mentioned above.
 
I take it you see the DX11 compute shader as a similar dead end by that reasoning.

In a lot of ways, yes. It's just another stopgap on the way to fully programmable graphics. I think the end point is basically execution APIs that expose fully programmable complexes, texture interfaces, and memory. After all, for most of the hardware out there, that is what they really are.

Basically the endpoint is when you can write your own interface functions and link them in with the vendor-supplied ones, creating new programming interfaces and exploiting whole areas of realtime 3D graphics that are currently off limits, in large part because of the few remaining fixed-function portions of the designs as well as the API.

It shouldn't be a requirement to program your own 3D pipeline, but it should be an option to change parts of it.
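Purely as a thought experiment of what such an "execution API" could look like (every type and function here is invented for illustration, not any real or announced API): vendor-supplied stages and user-written stages share one interface, so a developer can swap out just the piece of the pipeline they care about and keep the rest.

```cpp
#include <functional>
#include <vector>

// A hypothetical fully-programmable pipeline: each stage is just a function
// over a frame context, and vendor stages and user stages are interchangeable.
struct FrameContext { /* geometry, textures, render targets, ... */ };
using Stage = std::function<void(FrameContext&)>;

class Pipeline {
public:
    void append(Stage s)            { stages_.push_back(std::move(s)); }
    void replace(size_t i, Stage s) { stages_[i] = std::move(s); }     // swap one part
    void execute(FrameContext& fc)  { for (auto& s : stages_) s(fc); }
private:
    std::vector<Stage> stages_;
};

// Vendor-supplied defaults (placeholders for the fixed-function-equivalent code).
void vendor_vertex(FrameContext&)    {}
void vendor_rasterize(FrameContext&) {}
void vendor_shade(FrameContext&)     {}

// A developer keeps the vendor vertex/shade stages but drops in their own
// rasterizer (e.g. a tile-based or ray-traced one).
void my_custom_rasterizer(FrameContext&) { /* custom traversal goes here */ }

int main() {
    Pipeline p;
    p.append(vendor_vertex);
    p.append(vendor_rasterize);
    p.append(vendor_shade);
    p.replace(1, my_custom_rasterizer);   // change one part, keep the rest
    FrameContext fc;
    p.execute(fc);
}
```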
 
If not, then looking at GT400 won't work, and GT300 I presume won't be too different from GT200, which for the purposes of consoles doesn't fit, for the various reasons mentioned above.

GT400 will likely be a modification of GT300 like GT300 will be a modification of GT200.

As far as fitting into consoles, I think a GT200 based design can fit in as easily as an RV770 based design can. All these designs are built to scale up and down as required.

Though one thing is certain: as long as the graphics vendors and customers are willing to deal with 300W+ boards, consoles will never have as much graphics power as PCs. Those days will never come again.
 
MS: Larrabee could be in Xbox360 if the deal is really really sweet. But more likely it will be an all AMD solution this time around.

Yeah I'm starting to think an all AMD Xbox 720 box is the way to go. Throw the latest Phenom in there along with a decent ATI video card and call it a day. They are simple to code for, everyone's familiar with them, tons of mature tools exist for them, and they run relatively cool, which is very important for Microsoft in particular. Porting those games to PC would be cake. They could swing a package deal from AMD for the cpu/gpu guts of the machine and maybe land a $299 launch price point.

Even if an AMD cpu is slower than a Cell2, it won't really matter. It takes a lot of extra cpu power before the typical consumer will even notice the difference, more so if the tools are comparatively poor on a Cell2. So if an AMD driven cpu is ~25% slower than a Cell2, no big deal. Mature tools will let your launch titles look really good, and the titles at launch + 1 year will look very very good which is critically important since those are the titles that will likely be going against the PS4 launch. Larrabee seems like too much of a risk at this point. Plus, letting Intel sit out yet another generation of consoles will make them even more malleable next time around for Xbox 1080 negotiations.

I think the whole idea of spending a fortune on a console and hoping to recoup it over 10 years is over. Get in cheap, milk it for ~5 years, then get the next box out. Start making a profit from just 6 to 12 months in, instead of waiting years. Part of the logic of keeping a console out there for a really long time was that it takes developers and tools a long time to get up to speed. But that argument gets thrown out the window if the machine is simple to code for from day one, as an all AMD box would be.
 
They could swing a package deal from AMD for the cpu/gpu guts of the machine and maybe land a $299 launch price point.

I'm very skeptical of this. Consumers have proven they will buy at a higher price at launch and through the generational cycle. A common mistake about the Wii's success is thinking it has proven the high end unsuccessful (and this is not in direct response to you, joker). It has not. Well over 60% of the market (too lazy to get actual numbers), if not more, still prefers to buy more expensive consoles instead of, or alongside, the Wii.

Secondly, I don't think hardware which will be competitive for 5+ years can be delivered at the $299 price point. I believe that would require a bit too much good will from AMD and fabs alike. Perhaps the $299 price point can be reached sooner but I don't think that $299 is a very likely launch target.

Even if an AMD cpu is slower than a Cell2, it won't really matter. It takes a lot of extra cpu power before the typical consumer will even notice the difference, more so if the tools are comparatively poor on a Cell2. So if an AMD driven cpu is ~25% slower than a Cell2, no big deal. Mature tools will let your launch titles look really good, and the titles at launch + 1 year will look very very good which is critically important since those are the titles that will likely be going against the PS4 launch.

I'm skeptical of this as well. If Cell2 is bundled with a relatively more capable GPU and the PPU is improved, I don't see a repeat of the PS3 situation being all that likely. Between now and the launch of PS4, tools will only improve, and there is no way to argue developers will be "clueless" about Cell2 when that happens.

It is an interesting question in my view as to whether developers will be able to make that 25% or more difference in CPU capability matter in 2011. Chances are the answer is yes considering PS4 will not launch into the abyss of the unknown in nearly the same manner as the PS3 did.

I think the whole idea of spending a fortune on a console and hoping to recoup it over 10 years is over. Get in cheap, milk it for ~5 years, then get the next box out. Start making a profit from just 6 to 12 months in, instead of waiting years. Part of the logic of keeping a console out there for a really long time was that it takes developers and tools a long time to get up to speed. But that argument gets thrown out the window if the machine is simple to code for from day one, as an all AMD box would be.

I agree and disagree with this. In the case of MS, who outsource all HW, it makes perfect sense: let the competitors for their contracts duke it out. For Sony there is little need to invest billions in the development of PS4. PS4 should be evolutionary, not revolutionary. In particular, Cell2 should not require nearly the same R&D cost to get off the ground, given the lessons learned from Cell1. I also expect both vendors to go with an evolution of the Blu-Ray standard in the next round, which again significantly reduces costs for all parties involved compared to what happened to Sony this round.

To be more concise, I don't see where spending a relative fortune to get a competitive console out by 2011 is required by either Sony or MS. Most of the HW assets required are already being worked on now, either internally or externally through would-be contractors.

That said, I see no reason to dissolve the X360 or PS3 if they are still making money when the next batch of consoles arrives. Long-term support has worked out quite well for both Nintendo and Sony to date, and I really don't see why MS wouldn't try their hand at that too.
 
Perhaps we shouldn't assume that next gen consoles are going to be very powerful...

Yes, that is a very thoughtful post. Who will be the buyer of these new super consoles with 16XAA 1080p or whatever super spec we may dream up on this board? Considering people are currently buying Wiis en masse and the 360 and PS3 still have plenty of room to improve the visual fidelity, not to mention that the PS2 is still selling at decent levels.

food for thought:
Shuhei Yoshida: In a sense, intentionally or not, Nintendo's choice in staying with the same core technology from GameCube to Wii, and not to make games more realistic-looking but adding more interactivity - that was very smart. They know Japanese culture has only so much impact when it's just visual - anime is popular worldwide, but when it comes to movies, there are hardly any movies that are popular everywhere.

The game publishers/developers are probably not that keen to have too many platforms to support at the same time, and they want the transition to the new platform to be as simple and smooth as possible, read: cheap. This generation it looks like we will have three healthy consoles, which may make the next transition more complicated, as it can mean many titles will be released on 6 consoles, not counting handhelds (perhaps 7 if the PS2 is still around).

Nintendo's move to stay close to the GameCube hardware probably helped get developers aboard quickly. Sony's approach of a radical paradigm shift has proved to be expensive; maybe it will pay off in the long run, we don't know.
HD games are very expensive to make today, which means big risks are involved. Unless this changes dramatically, game publishers will probably hesitate to take HD gaming to the next level; whatever that means, it will for sure add more costs and risks.

Maybe we should expect "GC -> Wii"-like bumps in console specs, and maybe the really powerful next-gen consoles will arrive much later than 2010-2012.
 
GT400 will likely be a modification of GT300 like GT300 will be a modification of GT200.

I was assuming GT400 would be a brand new architecture (think DX11), so in that scenario GT300 -> GT400 would be a much bigger change than GT200 -> GT300. I realise the DX11 pipeline has many of the same steps, but even so.

As far as fitting into consoles, I think a GT200 based design can fit in as easily as an RV770 based design can. All these designs are built to scale up and down as required.

That's obvious but not quite what I meant. It's not so much can you make it fit, since all architectures scale. It's more like: would you want to make it fit? The RV770 will always be quicker per mm^2 for this generation of gaming. Therefore GT200 wouldn't be a good chip to choose for console gaming. I assume GT300 will be based on a not too dissimilar architecture (still better, of course), and I assume GT400 will be completely different (DX11). Do you see what I mean now? The whole point goes back to the fact that there is a hypothetical chance that Nvidia may not be the best choice for next-gen consoles... I'm not saying that is the case, just that 2.5 years ago you wouldn't have thought AMD/ATI stood a chance. The picture is different now.
 
Lead times depend strongly on programming interfaces and models. The historical reason they were long is that everything in a console was custom and unique. As all the consoles head in the same general direction of main CPUs with a graphics/parallel subsystem, much of the early work on the games and dev environment can be done on proxy systems.

These days, game development is all about content, and the sooner you can get your art pipeline up and your designers designing, the sooner you'll finish your game. Even if you're not on final hardware (or any hardware), getting a head start on the years of content to be developed is a huge help. That's another reason why Epic and others can charge millions for their engine/toolset, you can get to production that much faster.
 
Yeah I'm starting to think an all AMD Xbox 720 box is the way to go. Throw the latest Phenom in there along with a decent ATI video card and call it a day. They are simple to code for, everyone's familiar with them, tons of mature tools exist for them, and they run relatively cool, which is very important for Microsoft in particular.

One big advantage for MS of being more PC-like is that it will hurt Sony a lot. Right now, if you're multi-platform, you have the consoles on one side and the PC on the other. Really, the 360 and PS3 are more similar to each other than either is to the PC. If MS releases before Sony, people will have very x86-centric code bases by the time they get their hands on PS4, and porting all that to Cell is a lot less attractive than porting it to PPC was. People who would today say "We're already doing 360, so let's do PS3 as well" will have to think twice about whether the investment is worth it. Hence, more MS exclusives...
 
Nintendo's move to stay close to the GameCube hardware probably helped get developers aboard quickly. Sony's approach of a radical paradigm shift has proved to be expensive; maybe it will pay off in the long run, we don't know.
HD games are very expensive to make today, which means big risks are involved. Unless this changes dramatically, game publishers will probably hesitate to take HD gaming to the next level; whatever that means, it will for sure add more costs and risks.

Maybe we should expect "GC -> Wii"-like bumps in console specs, and maybe the really powerful next-gen consoles will arrive much later than 2010-2012.

I think people are putting far too much stock in the Wii. The Wii is something only Nintendo could pull off, with their massive in-house development and IP.

I would wager that the success of the Wii is only peripherally due to them just upgrading the GameCube, and much more due to the innovative and unique controller and the games to go with it. If either Sony or MS had had the same games and controller technology at launch, I think you would have seen the Wii fall by the wayside.

Looking at the Wii's success, it seems that Nintendo's games are the primary sellers and 3rd parties aren't making much of a dent. For either Sony or MS to think they can replicate the Wii's success next gen would be folly at best. Neither has the in-house IP to make it work, nor the in-house development.

Just looking at what has sold on the Wii, it doesn't look like it has picked up any more outside development beyond what the GameCube had. And looking at the best-selling games, the list is almost entirely dominated by Nintendo properties. Also, everyone will have everything the Wii does in their next-generation consoles; they will all have the motion control and the party games...

The company that has benefited the most this round from having top notch dev support and the easiest to program console is by far MS. Likewise the company hit hardest is Sony.

Do I think Sony will do another console that costs them upwards of $800+ at launch, again with esoteric hardware and an unknown development environment totally foreign to most devs? No. I think, as already stated, that everyone will go with several fairly powerful main CPUs that primarily focus on single-thread performance, plus a GPU/parallel subsystem that deals with physics, graphics, etc. It's a model all the devs will be familiar with (it's what 2 of the 3 consoles this gen already are, and it's what's already available on the number one development and prototyping system in the world: the PC).

As far as real HD etc. goes, the general performance curves will already take them there. Part of the problem this generation is that all the consoles were designed on the wrong side of a technology transition, and it shows. Next gen, all the consoles will be able to do 1080p with reasonable graphics. It will all come down to making the systems easy to develop on for the normal game studios and allowing the engine creators to experiment in a simple and easy way with advanced techniques over the console's lifetime.

It's almost certain they will all ship with BR drives and modest hard drives in the 500-1000 GB range.
 
These days, game development is all about content, and the sooner you can get your art pipeline up and your designers designing, the sooner you'll finish your game. Even if you're not on final hardware (or any hardware), getting a head start on the years of content to be developed is a huge help. That's another reason why Epic and others can charge millions for their engine/toolset, you can get to production that much faster.

Which is why most devs did engine and content development for the X360 using Mac Pros with standard video cards. It's not exactly the same, but pretty close to the final hardware. I think you'll see this trend continue and be enhanced, with next-gen consoles from a basic hardware perspective looking even more like closed-box PCs.

A) because it's the easiest path for the developers
B) because all the companies designing the hardware will be PC companies (AMD, Nvidia, etc.)
 
That's obvious but not quite what I meant. It's not so much can you make it fit, since all architectures scale. It's more like: would you want to make it fit? The RV770 will always be quicker per mm^2 for this generation of gaming. Therefore GT200 wouldn't be a good chip to choose for console gaming. I assume GT300 will be based on a not too dissimilar architecture (still better, of course), and I assume GT400 will be completely different (DX11). Do you see what I mean now? The whole point goes back to the fact that there is a hypothetical chance that Nvidia may not be the best choice for next-gen consoles... I'm not saying that is the case, just that 2.5 years ago you wouldn't have thought AMD/ATI stood a chance. The picture is different now.

RV770 has two main advantages in a lot of this: one, it's on a better process, and two, it's got lower performance. Nvidia could scale down the design to match ATI's performance, put it on a smaller process, and easily be in the same ballpark. They really are fairly close from both a perf/watt and a perf per normalized mm^2 perspective.

It all comes down to the fact that ATI has a big brute-force advantage and Nvidia has a hefty efficiency advantage in their respective designs. ATI in general needs a ~50% (+/- 15%) FLOP advantage to get the same performance.

Where ATI was smart this generation was in realizing that the last 10-15% of performance adds a LOT of cost to the end design, and that only a small minority of the market is really willing to pay for that differential.
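A back-of-the-envelope sketch of the comparison being made (all figures below are placeholders, not measurements from the post): normalize each die to a common process node, then look at performance per normalized mm^2 and performance per peak GFLOP. The latter ratio is the "efficiency advantage"; the former shows how close the two designs end up once process is factored out.

```cpp
#include <cstdio>

struct Gpu {
    const char* name;
    double die_mm2;      // actual die area
    double node_nm;      // process node it is built on
    double peak_gflops;  // theoretical peak throughput
    double rel_perf;     // measured relative game performance (illustrative)
};

int main() {
    // Placeholder numbers, purely to illustrate the method of comparison.
    Gpu gpus[] = {
        {"Chip A (brute force)", 260, 55, 1200, 1.00},
        {"Chip B (efficiency)",  580, 65,  930, 1.05},
    };
    const double ref_nm = 55.0;  // normalize both dies to this node

    for (const Gpu& g : gpus) {
        double scale = (ref_nm / g.node_nm) * (ref_nm / g.node_nm);  // ideal area scaling
        double norm_mm2 = g.die_mm2 * scale;
        std::printf("%-22s norm. area %4.0f mm^2 | perf/mm^2 %.4f | perf per GFLOP %.5f\n",
                    g.name, norm_mm2, g.rel_perf / norm_mm2, g.rel_perf / g.peak_gflops);
    }
    return 0;
}
```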
 
Yeah I'm starting to think an all AMD Xbox 720 box is the way to go. Throw the latest Phenom in there along with a decent ATI video card and call it a day. They are simple to code for, everyone's familiar with them, tons of mature tools exist for them, and they run relatively cool, which is very important for Microsoft in particular. Porting those games to PC would be cake. They could swing a package deal from AMD for the cpu/gpu guts of the machine and maybe land a $299 launch price point.

Even if an AMD cpu is slower than a Cell2, it won't really matter. It takes a lot of extra cpu power before the typical consumer will even notice the difference, more so if the tools are comparatively poor on a Cell2. So if an AMD driven cpu is ~25% slower than a Cell2, no big deal. Mature tools will let your launch titles look really good, and the titles at launch + 1 year will look very very good which is critically important since those are the titles that will likely be going against the PS4 launch. Larrabee seems like too much of a risk at this point. Plus, letting Intel sit out yet another generation of consoles will make them even more malleable next time around for Xbox 1080 negotiations.

I think the whole idea of spending a fortune on a console and hoping to recoup it over 10 years is over. Get in cheap, milk it for ~5 years, then get the next box out. Start making a profit from just 6 to 12 months in, instead of waiting years. Part of the logic of keeping a console out there for a really long time was that it takes developers and tools a long time to get up to speed. But that argument gets thrown out the window if the machine is simple to code for from day one, as an all AMD box would be.

An "all AMD" machine really isn't a horrible idea if AMD offers a solid cost outlook (and adding ~10M CPU and ~10-20M GPU orders a year wouldn't hurt, nor would potential royalties, cross platform support, pushing forward platform specific features, and so on). e.g. Look at the lifecyle of a 5-7 year console:

Stage #1: Press Release Stage
Stage #2: Initial Wow Factor
Stage #3: Longterm Potential

In Stage #1 you need (a) great paper specs (programmable performance/FLOPs) and (b) tech demos. Assuming a fixed area budget for silicon, there would be good reason to shift real estate to the GPU, as GPUs have a lot more execution units. Per mm2 a GPU is going to offer more theoretical bang for the buck; Cell may walk all over an x86 CPU in peak FLOPs per mm2, but a GPU does the same to CELL. IIRC the Xenos parent die was ~180mm2 (232M transistors @ 90nm, TSMC) and the daughter die was ~70mm2 (105M transistors @ 90nm, NEC). Xenon (IIRC) was 165M transistors (1MB cache) at ~160mm2 on 90nm (Chartered, TSMC). That is 410mm2 for the Xbox 360 silicon budget. Some devs have suggested the performance bottlenecks will swing in a direction where eDRAM's cost/benefit will be more of a negative, so if MS opts for an eDRAM-less design they have about 400mm2 of silicon to work with (assuming similar budgets and givens as the 360). AMD, per mm2, wasn't too far off what IBM did with Xenon (e.g. AMD's X2 had 154M transistors, 2x512KB of cache included, within 142mm2 on 90nm).

Looking at 32nm, does MS really want/need 12-16 X2-style CPU cores? An X2 core is much faster per clock than a Xenon core, but even then you have to wonder about the utilization of 12-16 standard cores. What if MS chose 6 AMD-style cores and shifted the leftover CPU budget and the eDRAM budget over to the GPU (~150mm2)? That is nearly a 100% increase in GPU silicon real estate. Execution units in GPUs are quite small, and even assuming only a 50-80% increase in programmable GPU FLOPs, the peak FLOPs (important in marketing, Stage #1) shoot through the roof.
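Running that budget shuffle through with the figures quoted above (the assumption that ~6 fatter cores need only about half the old CPU area is mine, purely for illustration):

```cpp
#include <cstdio>

int main() {
    // Xbox 360 silicon budget quoted above (all at 90 nm, areas in mm^2).
    const double xenos_logic = 180, xenos_edram = 70, xenon_cpu = 160;
    const double total = xenos_logic + xenos_edram + xenon_cpu;     // ~410 mm^2

    // Hypothetical next-gen box with a similar budget and no eDRAM daughter die:
    // assume (my guess) that ~6 fatter cores need only about half the old CPU
    // area, and move the CPU savings plus the eDRAM area over to the GPU.
    const double cpu_savings = xenon_cpu * 0.5;                     // ~80 mm^2
    const double shifted     = cpu_savings + xenos_edram;           // ~150 mm^2
    const double new_gpu     = xenos_logic + shifted;               // ~330 mm^2

    std::printf("360 budget: %.0f mm^2 total, %.0f mm^2 of GPU logic\n", total, xenos_logic);
    std::printf("Shifting %.0f mm^2 gives a %.0f mm^2 GPU: +%.0f%% GPU real estate\n",
                shifted, new_gpu, 100.0 * shifted / xenos_logic);
    return 0;
}
```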

Unless Sony and Nintendo made similar design concessions it would appear MS would easily take the FLOPs crown.

From a tech demo perspective MS would be able to push out some amazing graphical and GPU-based physics demos to wow the press and consumers alike. Just look at Cinema 2.0 or the Ruby/City ray-tracing tech demos. And it wouldn't be hard, with DX and massive developer support/experience, to hire Epic, id, Crytek, etc. to do something special for the release announcement.

Stage #2: Initial Wow Factor. Long before you hit your release date you need to give developers the resources to produce what will be available, and wow consumers, in the launch window. A mythical all-AMD machine would leverage fast x86 cores and DX. The latter is important because MS controls DX, and it is one of MS's key assets in the console arena. Getting developers to migrate from 6 threads of Xenon code on 3 cores to ~6 discrete x86 cores would be a chore, but many developers are already multi-platform on the PC so it isn't a complete rewrite, and it won't hurt that these x86 cores will be substantially faster than the Xenon threads. And while an x86 core is going to have much lower potential than some other designs, what it does do for the launch window is important: sluggish code and/or engines that continue to be capped by a primary-thread dependency could benefit in the short run (yes, write better code; yes, task your jobs better so you don't have this sort of bottleneck). Very demanding work and/or low-hanging fruit could be tasked to the GPU (e.g. cinematic physics).

But the big win in Stage #2 is graphics. Release games look good compared to the old generation, but titles in year 2 and especially years 3 and 4 really make launch titles look pretty poor. One hurdle is the reality of diminishing returns (if we jump from the ~10k models we have now to ~100k next gen, how many consumers will really notice?). But if you can toss 50-80% more silicon resources at graphics (compared to the previous generation's budgets), you have a higher likelihood of wowing with your initial visuals and convincing consumers of your power. Developers will get x86, DX, proven tools, and a ton of GPU resources. They should be able to migrate code over quickly to hit the release date and really push visuals. Long term they may be fretting over the fact that they are already at the wall on the CPU cores, which leads to:

Stage #3: Long-term Potential. Whether MS goes with Cell CPUs, a Larrabee-style CPU, or something else, the problem will always remain that developers are going to have to learn something new, adopt new tricks and techniques, and overall work outside the box and invest a lot of R&D to get the last few % of performance. Large teams are going to run into issues where junior programmers can really screw things up and the best approach isn't always transparent. In an industry dominated by strategic release dates, spending months testing new techniques takes away from adding features to the core product. Whatever direction MS goes, they are going to have developers complaining (some very vocally), and it isn't going to be easy for anyone. But it is important that MS have resources available so that developers who want something tangible to tap into have it.

The big picture is that MS owns DX; they can do with it what they want. And through DX they can nudge and direct IHVs to a small degree. GPUs are already on the path of opening up extensive GPGPU support, so it wouldn't be so wild for MS to leverage this as "their direction." Even with less efficient algorithms I think this could prove a long-term win. Imagine a technique (e.g. a physics task) that takes 2% of CPU resources per frame but 10% on the GPU. We could shift our silicon budget back to the CPU and gain 50% (bringing that 2% down toward 1%)... but the flip side would be losing 30-50% of GPU performance. I am sure there are scenarios where a GPU-centric design comes out short, but there will be many, many examples where it comes out ahead in the first thing consumers see (visuals) and may offer peak performance that a competing CPU cannot attain in some areas.

A GPU-centric design would open the door to potential. It doesn't force developers to go this route, but it would be in their interest long term... as well as MS's. Remember, MS owns DX. Seeing developers write important non-graphics game code through DX would make them giddy. Of course, if Sony and Ninny go with GPUs with basic DX feature support, MS has also just created a portable platform: if you write some nifty physics code on MS's Xbox/AMD GPU you could port it over to Sony's mythical NV GPU PS4. Sure, Sony may want you to rewrite the code for their Cell CPUs (if they go that route), and depending on the PS4's budgets it may be necessary, but it may remain an option.

Right now, though, who knows what will happen. This gen is just kicking off into the mainstream. We do not know the release windows yet. We don't know the partners. We only have a vague idea of what process nodes will be available in the 2010-2012 window. We don't know who has what up their sleeves beyond 32nm. We don't know how useful DX11 will be yet (especially compute shaders), and we have no idea if Larrabee will offer similar visual quality/performance per $ compared to AMD & NV. We don't know the future of Cell. We are in the dark about future input mechanisms being developed. At some point we would need to do an overview of all the players and what we know they may be able to offer.

NV. Strong DX GPUs as well as blossoming GPGPU potential. Great visual performance and great visual quality. The CUDA and AGEIA moves are strong considerations, as is Cg. Between the handheld market and consoles, NV can be picky about what deals work best for them, may view their product as a premium, and doesn't seem inclined toward special designs. Fiscally strong, although the GT200 misstep as well as potential GPU recalls show how fickle the industry is. Being squeezed by AMD and Intel doesn't help, so they may be inclined to be aggressive.

AMD. In-house GPUs and CPUs. Strong DX GPUs as well as blossoming GPGPU potential. Great visual performance and great visual quality. Moving toward multi-GPU offers some new potential (especially if silicon real estate is expanded) if memory issues can be resolved; it also allows for more flexibility if someone tries a last-minute performance poison ball. Experience with eDRAM. Quality x86 CPUs, working on new vector unit designs as well as new CPU designs. Ability to use their own fabs or 3rd-party fabs in some situations. AMD needs cash, and strong demand in GPU and CPU orders could give them some cash flow. ATI/AMD has shown in the past a willingness to work closely with MS on DX and do console-specific work.

Intel. Killer x86 CPUs. New CPUs (Sandy Bridge) and vector units on their way. Great compilers, a ton of software support/experience. Way ahead on process technology and appears best situated to handle future hurdles. Larrabee is a huge unknown (will AA and AF be any good? Will it be within 20% performance per $?). Intel has picked up Havok as well as some developers. Is Intel willing to be cost-competitive in a given performance ballpark to get Larrabee into a console? How soon will Larrabee v2 be out? Will Intel be able to fix the performance issues with their first design? How committed is Intel?

IBM. Would they do a custom Cell-style chip for MS? They are always doing neat stuff. IBM is pretty flexible but aren't in the news a lot (and my tooth hurts) so I will end here...

But you get the point. MS has a lot of options. The keys are keeping accessibility high, early performance high, and long-term performance potential viable, leveraging their assets, keeping costs down, and developing means to facilitate more efficient game development. I think software, more than anything, is vital to next gen. Would MS be crazy to pick up an engine developer? If MS went with NV, there is an engine developer quite fond of NV that uses AGEIA... Content creation is a big issue, as are budgets and release dates. Toss in storage issues, online, memory designs, input mechanisms, and MS's 1st-party issues, and I think they have a LOT of work cut out for them. If they don't release a year early, all those cheap exclusives are GONE. I don't think you can gamble on what Sony will do this time, so contracting exclusive support early on is going to be important.

Btw, if my post pains you to no end, well then join the club. Thank my dentist for ripping, cracking, and snapping a wisdom tooth out. I gotta share the pain. :LOL:
 
I think people are putting far too much stock in the Wii. The Wii is something only Nintendo could pull off, with their massive in-house development and IP.

I would wager that the success of the Wii is only peripherally due to them just upgrading the GameCube, and much more due to the innovative and unique controller and the games to go with it. If either Sony or MS had had the same games and controller technology at launch, I think you would have seen the Wii fall by the wayside.

Looking at the Wii's success, it seems that Nintendo's games are the primary sellers and 3rd parties aren't making much of a dent. For either Sony or MS to think they can replicate the Wii's success next gen would be folly at best. Neither has the in-house IP to make it work, nor the in-house development.

Just looking at what has sold on the Wii, it doesn't look like it has picked up any more outside development beyond what the GameCube had. And looking at the best-selling games, the list is almost entirely dominated by Nintendo properties. Also, everyone will have everything the Wii does in their next-generation consoles; they will all have the motion control and the party games...

The company that has benefited the most this round from having top notch dev support and the easiest to program console is by far MS. Likewise the company hit hardest is Sony.

Do I think Sony will do another console that costs them upwards of $800+ at launch, again with esoteric hardware and an unknown development environment totally foreign to most devs? No. I think, as already stated, that everyone will go with several fairly powerful main CPUs that primarily focus on single-thread performance, plus a GPU/parallel subsystem that deals with physics, graphics, etc. It's a model all the devs will be familiar with (it's what 2 of the 3 consoles this gen already are, and it's what's already available on the number one development and prototyping system in the world: the PC).

As far as real HD etc. goes, the general performance curves will already take them there. Part of the problem this generation is that all the consoles were designed on the wrong side of a technology transition, and it shows. Next gen, all the consoles will be able to do 1080p with reasonable graphics. It will all come down to making the systems easy to develop on for the normal game studios and allowing the engine creators to experiment in a simple and easy way with advanced techniques over the console's lifetime.

It's almost certain they will all ship with BR drives and modest hard drives in the 500-1000 GB range.
I agree with most of what you say, but I think you missed my main point. Will there be enough buyers for this next generation of powerful consoles? Will the next generation add so much appeal that it warrants fast uptake among consumers, and are publishers/developers willing to support it?

I think the next-generation Nintendo console will be a significant bump in spec, because it is likely to go HD. The other two consoles may have more moderate spec bumps and stay with their current technology, because consumers may see diminishing returns between 4xAA and 16xAA or whatever.
 