Business ramifications of a 2014 MS/Sony next gen *spawn

Well Prophecy, Nintendo could always do what they did last gen and just release a slightly more powerful last-gen system.

A quad-core Bobcat with a Radeon 6850-class GPU and 1 GB of RAM, launched next holiday, would run rings around current-gen systems, and devs could port existing Xbox 360/PS3 games to it and just up the resolution and frame rate and improve textures and filtering.

Then the Nintendo exclusives would take advantage of the hardware. Slowly, games would switch to being developed for the Nintendo system, with the Xbox 360/PS3 getting the downgraded ports.

I bet Nintendo could make such a system for around $200-$250.

LMAO... people speculated about a much more powerful Revolution back then too, but Nintendo ended up going with a GC+ in the form of the Wii. :LOL:
 
Why would they be unable to clock their CPUs to 3 GHz?

With 28nm, ARM chips can now approach 2 GHz. It has to do with choices made to keep the chip power efficient.

There are many choices made in designing ARM chips to be power efficient that impact speed. NEON is an add-on to ARM that supplies a faster method for intensive (codec) tasks not suited to those power-efficiency choices. The idea behind it is Cell SPE-like: it's faster than the ARM core at certain tasks, although it requires more power. It's so much faster that it completes its task in so much less time than the ARM CPU would that there is still an overall saving in energy compared to the ARM processor alone.
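(To put rough numbers on that race-to-idle argument, here's a minimal sketch; the wattages and runtimes below are made-up placeholders, not measured figures.)

```python
# Back-of-the-envelope illustration of the "race to idle" point above.
# energy = power * time, so a faster but hungrier unit can still win on
# total energy if it finishes the task much sooner. Numbers are invented.

def energy_joules(power_watts, time_seconds):
    return power_watts * time_seconds

arm_only = energy_joules(power_watts=0.5, time_seconds=10.0)  # hypothetical codec job on the ARM core alone
arm_neon = energy_joules(power_watts=1.5, time_seconds=2.0)   # same job on NEON: 3x the power, 5x faster

print(f"ARM core alone: {arm_only:.1f} J")
print(f"ARM + NEON:     {arm_neon:.1f} J  (more power drawn, but less total energy)")
```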

There are many computing tasks that are not suited to ARM processors. Editing video, for instance, is better suited to the Cell; database servers would use a Cell processor over an ARM, but media server farms would use an ARM over a Cell.

http://www.semiaccurate.com/2011/01/28/arm-talks-about-20nm-cpus-and-chips-your-eye/

The following article explains why ARM is currently dominating the market and confirms that portable is the wave of the future. The slower clock speed limitation of ARM chips can be overcome with multiple CPUs and GPUs in a package.

The problem is that once you have a desktop PC chip that uses tens of Watts, if you double the transistor count you also double the power use. For a chip line to not melt a hole in the earth after a few shrinks, you need to halve the power used per transistor every shrink too. Until fairly recently, that was the case, and the semiconductor industry was famous for delivering huge performance increases year after year in a smaller package that used roughly the same power.

Over the last decade, that power use has crept up, and the rate of increase is increasing too. Not a good sign. Mr Muller pointed this out with a nice chart that showed that scaling is still working for physical size and current, but not as well for capacitance, and voltage has effectively stopped dropping. What this means is that the power consumed per circuit used to go down by a factor of 1/a^2, where 'a' is the shrink factor; it now decreases only by 1/a.

For those that don't do graphs and intersection points well in their head, that means the power is going down linearly now when it used to be an exponential drop. If you are power limited and just about every chip out there is, your transistor growth is also linear now too, not exponential. This means that, for the most part, chip performance is now on a linear curve as well. Pity the GPU makers.....
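(A quick sketch of that arithmetic, using the article's shrink factor 'a'; the shrink value and the fixed power budget below are assumptions chosen only to show the shape of the trend.)

```python
# 'a' is the linear shrink factor per node (e.g. a = 0.7 for a full shrink),
# which roughly doubles the transistor count in the same area (1/a^2 ~ 2x).
# Under a fixed power budget, the usable transistor count depends on how
# fast power-per-transistor falls: a^2 per node (classic) vs a per node (now).

a = 0.7              # hypothetical shrink factor per generation
power_budget = 1.0   # fixed chip power cap (normalized)

transistors = 1.0
p_classic = 1.0      # power per transistor, classic (Dennard-style) scaling
p_now = 1.0          # power per transistor, current trend

for node in range(1, 5):
    transistors /= a ** 2        # ~2x transistors per shrink
    p_classic   *= a ** 2        # power/transistor falls as fast as count grows
    p_now       *= a             # ...but now it only falls by 1/a
    usable_classic = min(transistors, power_budget / p_classic)
    usable_now     = min(transistors, power_budget / p_now)
    print(f"node {node}: transistors x{transistors:5.1f}  "
          f"usable (classic) x{usable_classic:5.1f}  usable (now) x{usable_now:5.1f}")
```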

Mr Muller summed this up by saying the first wave of computing, defined by mainframes and minis up to the 1980s, was defined by adding performance with each new model. The 1990s saw the advent of the PC, and the overriding metric for that era was performance/$. In the 2000s, notebooks were the hot commodity, and those added another metric, power use. The dominant factor here was performance/(power * $), basically that if you wanted to sell a chip in to that market, power use was a very real concern. Power use could go up, but performance had to go up commensurately or more.

The future as he sees it will be dominated by 'mobiles', be they phones, MIDs or whatever form factor dominates. These have very different requirements from a laptop: all-day battery life, days of standby, always on, and many other things that a PC was never engineered to do. The overriding concern for this era adds another metric to the mix, energy cost.

With laptops, the problem was more one of finding an engineering solution to a problem of power draw. In the future, the question will not be, "Can we do that?", it will be "Is it worth it to do that?". Cost of energy is increasing so the problem becomes one of how to get a specific performance level with the minimum Watts used. Performance going up does not matter much any more, the unwritten message is that we are on the verge of 'fast enough'.

Strangely enough, ARM is very well positioned here; they have been designing chips that use power at levels that round to zero for years. Every other CPU company out there is doing the same to one degree or another, and the peripheral chip makers are also following suit. In very short order, every chip out there is going to have as standard what would have been considered bleeding-edge power management features a few years ago.

Semiconductor foundries are going to play their part in keeping scaling going as much as possible while using as little power as possible, but the days of easy and assured power scaling are over. New materials and tweaks on the atomic level will help, but will not get us back to where we were.

On that down note, things turned to the more technical side, and the role ARM was playing there. ARM cores have been fabricated on the 32nm process as early as 2008. IBM made a chip called Explorer in July of that year, followed by a full Cortex-M3 in October, and Global Foundries did the same in May of 2009.

This was followed up by Alpha PDK (Process Design Kit) IP validation chips from IBM in June 2009 and Samsung a month later. Most interesting is that the chips are listed as being on the 32LP TC1a process from IBM and the 32LP TC1b process from Samsung. Full silicon validation of the ARM IP was first done by Samsung in February 2010 on a process labeled 32LP TC2.

That said, 32nm is almost old news by now, and 28nm is far enough along that there won't be any major changes. 20nm is the next big thing, and ARM did not disappoint there. The company talked about their CP (Common Platform) 20nm SOC test chip based on a Cortex-M0. This core is 0.2mm x 0.2mm and contains 8K gates, 20K if you count the entire processor subsystem. This was overlaid to scale on an ARM2 chip, built on a 2µ process with 6K gates in total. The Cortex-M0 was a speck on the older chip, and probably used a commensurate amount of power.
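(Rough arithmetic behind that 'speck on the older chip' comparison; this is ideal scaling only, since real layouts don't shrink perfectly with feature size.)

```python
# Order-of-magnitude area arithmetic for the 20nm Cortex-M0 vs a 2 micron ARM2.

cortex_m0_side_mm = 0.2
cortex_m0_area_mm2 = cortex_m0_side_mm ** 2      # 0.04 mm^2, i.e. 1/25 of a square mm

old_process_nm = 2000    # ARM2-era 2 micron process
new_process_nm = 20      # the 20nm test chip discussed above

linear_ratio = old_process_nm / new_process_nm   # 100x finer features
area_ratio = linear_ratio ** 2                   # ~10,000x more devices per unit area, ideally

print(f"Cortex-M0 core area: {cortex_m0_area_mm2} mm^2")
print(f"Linear feature ratio 2um -> 20nm: {linear_ratio:.0f}x")
print(f"Ideal area-density ratio: {area_ratio:.0f}x")
```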

Now that a company can make a CPU that is 1/25 of a square mm but has more performance than a cutting-edge RISC machine from a few decades ago, what do you do with them? NXP is currently selling their Cortex-M0 variant for $.65 each; how much cheaper do you need? How do you communicate with them, or do you at all?

There isn't a specific answer to these questions, but the goal is what many call the 'internet of things', basically everything will be sensor enabled and aware of what it needs to be aware of. If you can make a chip that small and power it, all sorts of opportunities become available.
 
What Nintendo has to do is offer a better experience, one which is significantly different or better in order to get people's attention. If they don't do this then it is possible that some other console maker is going to steal their lunch if they do what Nintendidn't.
Almost every generation has offered a better experience by being exactly the same, only better! You don't need a completely new product to sell systems. XB360 and PS3 are just HD versions of the previous consoles. There's nothing wrong with taking the appeal of Kinect and improving its tech to provide a better experience like finger tracking at distance, integrating Move for accuracy lacking in Wii games and Kinect games, and providing a unique experience in terms of a system targeted at these technologies, along with better everything else. I don't for a moment believe that if this Wii2 was released, all the existing console gamers would say, "it's just a Wii with Kinect and fancier graphics. I'm not interested," in exactly the same way they didn't say of SNES, "it's just a NES with fancier graphics," and of PS2, "it's just a PS1 with fancier graphics," and of XB360, "it's just an XB with fancier graphics."

By the time Nintendo gets around to releasing a new console we'll be 3 holiday seasons and 2 whole years into the new motion controls for Kinect and Move, and therefore it is likely they'll both have an adequate motion game library, probably more than competitive.
But they'll be old tech. Whatever PS3 does with Move, Wii2 could do better, and so appeal to PS3 owners to upgrade. And also it could do Kinect games better, the equivalent of going from one analogue stick to two. So although these experiences won't be new, they'll be 'next-genified'. Whereas consoles coming a couple of years later probably won't be able to improve enough to differentiate. Unless Move and/or Kinect manage to sell masses to the Wii crowd who, having just bought new consoles, no longer want a Wii2 offering the same experience, Move and Kinect's success shouldn't detract at all from a next-gen Wii incorporating all these technologies.

Even so, you have to consider that the people who have been drawn into multiplayer and these online networks are locked out of Nintendo's network because they have already invested in content for Live and PSN and they have friends on Live or PSN they play with. So even if Nintendo developed and rolled out their network tomorrow, they would have a considerable problem getting these already-attached people to switch networks.
Surely you'd have the old console to play those old games? Or are you saying now that every XB360 owner is locked into the XB brand and cannot ever buy into another console, etc.? It's not like friends are only allowed onto one network! And there's no guarantee or expectation that download games will run on new consoles. So for Wii upgraders, this is a non-issue, and for PS360 owners wanting new hardware, each generation tends to start anew from scratch without any particular upgrade path and they don't lose anything on PS360 as long as they have their PS360, which they'll choose to get rid of when they no longer want that stuff any more.
 
2014 is sounding more and more likely, really. If it were 2012-2013 I'd have heard something by now, some scrap of info, a hint or whatever. But talk of next gen couldn't be any more non-existent. Between that and how slowly console pricing is trundling down, 2014 is seeming very likely.

There probably aren't many business ramifications of releasing that late, assuming they both release around similar times. PC will never provide competition to consoles again, and as long as console owners keep buying games in the millions, why bother making a new console? It's just more profitable for them to milk current gen. I can't imagine a new Wii offering any competition because I wouldn't expect a new Wii to be graphically that far beyond what 360/PS3 offer anyway. So they both effectively have all the time in the world. I'm guessing they will just wait it out until people stop buying games in droves as they currently still are, and then both will release new machines around the same time.

If this turns out true then that just sucks. I always predicted 2012 for Ninty and 2013 release for MS and/or Sony.

It's interesting to see how developers are pushing the ps3/360 more and more, but I question how much more blood can be squeezed out of that stone.

Amusing how we're usually neck-deep in leaks, reports, rumors, and information about the next round of consoles by this point in a generation. I wouldn't mind seeing what an Xbox 720 or PS4 could do now. :p
 
It's interesting to see how developers are pushing the ps3/360 more and more, but I question how much more blood can be squeezed out of that stone.

Considering that the majority of games are multiplatform, there may yet be some things to squeeze. The stable architecture may be just what everyone needs, so devs can worry about fixing production issues rather than scaling up art budgets and readjusting again just to compete with everyone else.

The progression for trilogies is certainly interesting, e.g. Gears of War 1-3, Mass Effect 1-3 and so on.
 
The progression for trilogies is certainly interesting, e.g. Gears of War 1-3, Mass Effect 1-3 and so on.
I hope they just stick to the story and not try to stretch it out à la Deathly Hallows. It's hard to imagine they'll have time for another trilogy within the generation.
 
I hope they just stick to the story and not try to stretch it out à la Deathly Hallows. It's hard to imagine they'll have time for another trilogy within the generation.

In theory, MS/Sony could give key developers ideas about what their 2014 console "might" be like. Heck if all they had was an idea of the size of the memory pools and target rendering resolution they can start building some of the art assets.

Gears releases this year. That would give someone like Epic (or any other studio) 2-3 years of development for a potential launch title on a next-gen console. The same could be said for the KZ3 team, with development on that wrapping up this year.

2012 and 2013 can easily be filled with games that are already in the pipeline; when those are finished, those studios could in theory start development on next-gen console games.

So a console launching in 2014 doesn't necessarily mean we'll see current gen trilogies being "stretched" to fill in the remaining life of current gen consoles.

Hell, I wouldn't be surprised if the Gears team wouldn't mind a break from making another Gears game (Just like Bungie wanted to do something other than Halo for a while).

Regards,
SB
 
Would it be possible for the new hardware to automatically add more AA and AF to current games, as long as BC was possible?
I know that it's possible to do this with PC games, so it would be great if the new consoles could too.
The only problem I can see is with the change to the online marketplace, where Sony has now seen new revenue from releasing games like God of War HD after PS2 BC was removed.
Also, I know they add different SKUs, but how about different SKUs with different cards, like the Nvidia 560, 570 and 580 for example?
There would be three different prices, low, middle and high, with each tier capable of adding more AA and AF to each game release.
The cheap SKU could do 720p, with the highest capable of displaying the same game at 1080p.
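(For a rough sense of the gap between those two SKU targets, here's the raw pixel-count arithmetic; AA and AF costs would scale things further.)

```python
# Pixel-count comparison behind a 720p vs 1080p SKU split.

res_720p  = 1280 * 720    #   921,600 pixels
res_1080p = 1920 * 1080   # 2,073,600 pixels

print(f"720p  pixels: {res_720p:,}")
print(f"1080p pixels: {res_1080p:,}")
print(f"Ratio: {res_1080p / res_720p:.2f}x")   # 2.25x more pixels to shade per frame
```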
 
I can't see mixing performance SKUs as a good idea.

You'll be rated on your worst performing part and you lose the advantages of a fixed SKU.
 
Almost every generation has offered a better experience by being exactly the same, only better! You don't need a completely new product to sell systems. XB360 and PS3 are just HD versions of the previous consoles. There's nothing wrong with taking the appeal of Kinect and improving its tech to provide a better experience like finger tracking at distance, integrating Move for accuracy lacking in Wii games and Kinect games, and providing a unique experience in terms of a system targeted at these technologies, along with better everything else. I don't for a moment believe that if this Wii2 was released, all the existing console gamers would say, "it's just a Wii with Kinect and fancier graphics. I'm not interested," in exactly the same way they didn't say of SNES, "it's just a NES with fancier graphics," and of PS2, "it's just a PS1 with fancier graphics," and of XB360, "it's just an XB with fancier graphics."

The circumstances have changed considerably between the last generation and the present day. We're no longer limited as much by performance as by how much a publisher is willing to budget for a particular title, and additional performance isn't the most important metric a console is judged by. The most important consideration is the overall user experience and whether that experience would be better enough compared to the Wii, Xbox 360 or PS3 to entice current console users to upgrade. If the overall user experience is similar, there simply isn't the same enticement for users to want the next model in the range. Releasing a console which is too similar to current-generation consoles doesn't make sense; it would be like releasing the Wii with conventional controls instead of the motion-control hype machine we had this generation.

But they'll be old tech. Whatever PS3 does with Move, Wii2 could do better, and so appeal to PS3 owners to upgrade. And also it could do Kinect games better, the equivalent of going from one analogue stick to two. So although these experiences won't be new, they'll be 'next-genified'. Whereas consoles coming a couple of years later probably won't be able to improve enough to differentiate. Unless Move and/or Kinect manage to sell masses to the Wii crowd who, having just bought new consoles, no longer want a Wii2 offering the same experience, Move and Kinect's success shouldn't detract at all from a next-gen Wii incorporating all these technologies.

How exactly can they make a better Move? You can't get better than 100% accuracy, and Move is close enough to that target. If they make a controller which is even more accurate, would it even matter? Would people even notice? They would have to do better in some other fundamental way, one which changes how people use the controller and makes it a more complete and functional interface, rather than making what is effectively a slightly better but indistinguishable product. It isn't good enough to research and develop a console which isn't significantly better than products which are on the tail end of their effective lifespans. I don't believe a little more performance and a little more accuracy on the controller makes a good follow-up for a console which was called 'Revolution'. Nintendo wants a product which will help them expand their market, invite previous users to upgrade and steal market share from competitors, and I simply cannot believe that a system which represents the same values as the Wii, with a few taken from the PS3 and 360, can achieve these objectives effectively.

Surely you'd have the old console to play those old games? Or are you saying now that every XB360 owner is locked into the XB brand and cannot ever buy into another console, etc.? It's not like friends are only allowed onto one network! And there's no guarantee or expectation that download games will run on new consoles. So for Wii upgraders, this is a non-issue, and for PS360 owners wanting new hardware, each generation tends to start anew from scratch without any particular upgrade path and they don't lose anything on PS360 as long as they have their PS360, which they'll choose to get rid of when they no longer want that stuff any more.

I don't really believe most people will want to use more than one console concurrently; to most people it'd be needless duplication. If you buy a new console from a competing brand you can't bring your friends, your saved games, your downloaded games, movies and music collections, and the overall process of transferring some of that content would be much more difficult. A game collection is something you may get bored with, but you don't tend to want to abandon your friends as easily as you would last year's Call of Duty. Since you can't log into Activision servers to play Call of Duty with anyone on any console, if this is part of your gaming life you have to consider it when the time comes to think about purchasing a new console. I'm not saying it locks people into a particular console, but it does mean that I believe there will be many people who may find it difficult to justify switching to another console brand if they've already invested in a particular online network.
 
I can't see mixing performance SKUs as a good idea.

You'll be rated on your worst performing part and you lose the advantages of a fixed SKU.

I always thought that it could be viable to make a 'salvage part' console. If you've got two SKUs, one of which cannot easily perform many of the background OS tasks due to, say, lacking a mechanical HDD like the Xbox 360 Arcade units, then perhaps it might make sense to give it a salvage part, since it wouldn't need that performance.

For instance:

A 12-core Bobcat + 1,000 Dave Processing Unit (DPU) GPU, with two cores allocated to the OS. It wouldn't be that bad if they disabled one of the CPU cores for yields: they could get both higher yields and lower voltages, because the extra flexibility in binning the dies means they wouldn't have to run the console at as high a voltage to cover parts with higher-than-usual leakage.
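(A toy yield model of that salvage-part idea; the per-core defect probability below is made up purely for illustration.)

```python
import math

# If the console SKU can tolerate one dead CPU core, dies with exactly one
# defective core become sellable instead of scrap. Defect rate is invented.

cores = 12
p_core_bad = 0.05   # assumed probability that any single core is defective

def binom(n, k, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

yield_all_good = binom(cores, 0, p_core_bad)   # dies usable for the full SKU
yield_one_bad  = binom(cores, 1, p_core_bad)   # dies usable only as salvage parts

print(f"Dies with all {cores} cores good: {yield_all_good:.1%}")
print(f"Dies with exactly one bad core: {yield_one_bad:.1%}")
print(f"Usable dies if one core may be fused off: {yield_all_good + yield_one_bad:.1%}")
```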
 
Maybe even if the makers don't want a new console, the developers can pressure them into it.

It's certainly getting harder to outdo the competition graphically; rendering engine features are converging very quickly. It's also evident that developers could do a lot more with faster hardware; everyone is making serious compromises here or there to get at least close to 720p and 30fps.

On the other hand, a more level field in graphics would open up the stage for significant gameplay innovations. I for one would prefer devs to focus on that, I don't think it has been pushed hard enough except for a few games.
 
The progression for trilogies is certainly interesting, e.g. Gears of War 1-3, Mass Effect 1-3 and so on.

We haven't seen anything from ME3 (apart from our CGI).
Granted, it's unlikely we'll see a step as big as the one from the first game to the second; ME2 pushed the 360 really hard in some scenes.

But nevertheless, when are we going to see at least a screenshot, Bioware? I'm like exploding from curiosity here!
 
We haven't seen anything from ME3 (apart from our CGI).
Granted, it's unlikely we'll see a step as big as the one from the first game to the second; ME2 pushed the 360 really hard in some scenes.

I can see into the futar. ಠ_ಠ :p (I meant to write the post in the future tense)
 
Nintendo should be able to launch a Wii HD within a short time and without much effort be able to surpass the 360 and PS3 specs. Would be fantastic if they did, and kind of fun to see the best PC port on the Nintendo platform while the PS3 and 360 look dated :)
 
Again, there is doubt that 4K displays will be affordable, or even available, until next decade or so. Just like there was doubt about 1080p displays when the PS3 launched.
Already this spring there is going to be an AV receiver from Onkyo that scales up to 4K, and it'll be in their lower-range products (~$600). 4K will be the next 1080p, and it'll be here next year already. True, for the first year they'll be expensive, but after a year they'll be pretty much affordable.
 
I for now have strong doubts that if a new console launches in 2014, it will target anything above 1920x1080. Instead I'd expect more from higher framerates / 3D support, with pixel quality being put first now. That's not to say that the new devices wouldn't support video playback higher than that, but 1920x1080@60fps minimum is already a big step up from 1280x720@30fps. Games need to be able to have more going on on screen, and going to a 4K display will basically mean that these consoles won't be able to surpass current gen in that respect, only at best manage the same thing at a higher resolution.
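(Some raw pixel-throughput arithmetic behind that 'big step up' claim; the 4K line uses the 24Hz film framerate discussed in the reply below, and none of these numbers account for AA, 3D or overdraw.)

```python
# Shaded pixels per second at each target, compared against the 720p30 baseline.

targets = {
    "1280x720 @ 30fps":  1280 * 720  * 30,
    "1920x1080 @ 60fps": 1920 * 1080 * 60,
    "3840x2160 @ 24fps": 3840 * 2160 * 24,   # '4K' at a film framerate
}

baseline = targets["1280x720 @ 30fps"]
for name, pixels_per_sec in targets.items():
    print(f"{name:20s} {pixels_per_sec / 1e6:7.1f} Mpix/s "
          f"({pixels_per_sec / baseline:.1f}x baseline)")
```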
 
I for now have strong doubts that if a new console launches in 2014, it will target anything above 1920x1080. Instead I'd expect more from higher framerates / 3D support, with pixel quality being put first now. That's not to say that the new devices wouldn't support video playback higher than that, but 1920x1080@60fps minimum is already a big step up from 1280x720@30fps. Games need to be able to have more going on on screen, and going to a 4K display will basically mean that these consoles won't be able to surpass current gen in that respect, only at best manage the same thing at a higher resolution.

If we use the same design criteria for a PS4 as the PS3 had (few games render natively at 1080P, but they can be upscaled to 1080P if there is enough memory), we could have a PS4 where few games render natively at 4K but can be upscaled to it. The target would be something over 1080P, determined by cost and what the competition is doing.

Arwin, I don't disagree with your view, but I question the marketability of a "new generation" 10-year-life game console without new-generation specs. The tabling of a next-generation machine by both MS and Sony would seem to support my view, as your target is doable now with just slightly more, i.e. a PS3.5.

The NGP GPU (EDIT: the processes it uses for screen generation, TBR and TBDR) can help achieve a higher display resolution.

There is also the fact that 4K content is delivered at a 24Hz framerate, so 4K TVs must have some anti-judder scheme. The lower target framerate means higher-res games will not need, and cannot use, faster framerates. There is a performance saving there too. The PS4 would then have a 24Hz framerate target for all games, with the PS4 performing anti-judder (outputting at a higher frame rate) for older 1080P TVs, and the TV performing anti-judder for 4K displays fed at 24Hz.
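(The cadence arithmetic behind that judder point, as a quick sketch: 24fps content divides evenly into a 120Hz refresh but not into 60Hz.)

```python
# Why 24fps content judders on some refresh rates: uneven pulldown cadence.

content_fps = 24
for panel_hz in (60, 120):
    ratio = panel_hz / content_fps
    even = panel_hz % content_fps == 0
    note = "even cadence, no pulldown judder" if even else "uneven cadence, e.g. 3:2 pulldown"
    print(f"{panel_hz}Hz panel: {ratio:.2f} refreshes per film frame ({note})")
```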

For 4K, it would roughly translate into 16 (quad PowerVR) NGP GPUs (64 GPU cores total). Larrabee was scalable (more elements could be added to the design), and 1080P was speculated at 32-40 elements, so 4K would then be 120. This puts Larrabee at a lower performance level than a PowerVR, which is why the project was dropped by Sony. A low-power handheld GPU design had 2x better performance than a higher-power (console) design.

Imagine what a PowerVR could do if it were designed without power limitations and ran at 6 GHz (14nm) instead of 1+ GHz.
 
What the hell is "screen generation"?

I'm sorry but it seems to me that you're just making things up to try to support some completely ungrounded theories.
4K is silly; the cost of producing content at such a resolution and the realtime/offline rendering costs are astronomical, and there's absolutely no customer demand for it. Electronics manufacturers already have a hard time selling 3D to the masses; they won't start to invest in something this serious before there's a guaranteed market. So there's absolutely no incentive on anyone's side to pursue another large jump in resolution.
 
What the hell is "screen generation"?

I'm sorry but it seems to me that you're just making things up to try to support some completely ungrounded theories.
4K is silly; the cost of producing content at such a resolution and the realtime/offline rendering costs are astronomical, and there's absolutely no customer demand for it. Electronics manufacturers already have a hard time selling 3D to the masses; they won't start to invest in something this serious before there's a guaranteed market. So there's absolutely no incentive on anyone's side to pursue another large jump in resolution.

"Screen Generation" Sorry, I'm using terms that apply to the CE industry. We even have "engines" in LCD and DLP TVs. If you will do a Google search using the terms screen generation and rendering you will find many references to "generator" as in computer generation of video. You may be more familiar with "render". As Ray tracing and OpenGL becomes more popular to reduce the cost of producing content rather than rasterization, my understanding of how the terms are used in my industry would have generation of the screen rather than render as more accurate. I have read and understand that the OpenGL process descriptions do use the term render.

The same could have been said for 1080P: low demand and expensive for the first 3 years. The rate of acceptance for 4K is debatable because we have only just recently reached a point where 1080P is in just about every TV over 32 inches.

Have you done any research into 4K TVs? Most of the 4K LCD TVs offer polarized 1080P 3-D along with 4K, rather than shutter glasses. The idea is to make 3-D more attractive. 4K is easily and cheaply possible on the player end, as everything is already in place. If you have the hardware for 3-D, you just about have the hardware for 4K. The HDMI 1.4 specs reflect this.

The display end is another matter. And there can't be any cheating like they did early on with 720P TVs accepting 1080P and displaying at 720P. For the 1080P 3-D to use polarized glasses, it's necessary to have the full 4K display.

For projectors and rear projection DLP, it's possible to have a much less expensive 4K TV, just slightly more expensive than a 1080P.

And yes, "So there's absolutely no incentive on anyone's side to pursue another large jump in resolution" is probably true for developers as well as Sony and MS, and is possibly why the next-generation game machines have been shelved; it's mentioned in the article cited by Shifty that started this thread.

4K TVs will be released and will become a standard within the 10-year life of a PS4, just as 1080P did with the PS3. Consumers will expect support for their new TV during those 10 years, and any next-generation console would have to support it eventually.
 