Larrabee, console tech edition; analysis and competing architectures

Not only would there be diminishing returns; it probably wouldn't even work at the same clock speed (with the same latencies). The same goes for an increased cache size.

I confess, I actually don't quite understand what an SPE is in relation to a full-fledged PPE (why an SPE is much smaller), but at a high level, it's a specialized SIMD unit :?: Wouldn't that be analogous (digitalogous? :p) to the VMX units?

edit:

Ok, just recalled the layout of the chip, and I suppose the SPE & LS aren't that much smaller than a PPE sans L2... and there is at least one PowerPoint presentation that describes them as "VMX accelerators".

So shouldn't MS be able to get IBM to design something similar with VMX-128?
 
Or go a completely different path to offset & balance complexity and raw performance/marketing metrics.
 
You can't just slap more cores on that CPU given that they share the L2 cache; it would scale badly.

Sharing an L2 isn't that bad. More cores would increase the size of the L2 proportionally as well as increase the number of banks (and thus the bandwidth) of the L2. The L2 is only 1 MB, so making it 1.5 MB or so likely wouldn't slow it down much (maybe a clock cycle or two), and it wouldn't be a big scalability issue for a modest number of cores.

BTW, SPUs can execute up to two instructions per clock cycle, and on decently written code a single SPU runs circles around a XeCPU core at any time of the day.

I've never programmed for either core. Certainly Cell has a much higher peak FLOP rate than XeCPU. But I'm surprised that a single SPU runs circles around a single XeCPU core. Why would that be?

Both are dual issue. Both have 128 128-bit SIMD registers. However, the SPUs can only perform a single vector compute operation per cycle (if I recall correctly). I'm pretty sure an SPU can't issue two vector compute instructions per cycle, but perhaps it can do a vector load plus one vector compute instruction in the same cycle.

In contrast, the XeCPU has dual issue, and both instructions can be vector operations. Both CPUs are at similar clock frequencies. The XeCPU also has an additional 32 general-purpose registers.

The XeCPU will have cache misses, but some of that latency can be hidden by the dual-thread nature of the cores. In fact, XeCPU can execute instructions from both threads in the same cycle, further hiding pipeline stalls and such.

So, what gives a single SPU the edge over single XeCPU core? (honest question)
 
Not only would there be diminishing returns; it probably wouldn't even work at the same clock speed (with the same latencies). The same goes for an increased cache size.

Sure, the L2 might take a few extra cycles, but I don't think it would be a big impact. The first-order factor that determines how fast a cache access will be is the size of the SRAM array. A really rough rule of thumb is that latency scales with sqrt(number of bits). So, you can make the cache 4x larger for something like 2x the latency. Of course, you'd also have a larger overall cache that could be shared more efficiently (perhaps reducing the L2 miss rate).
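As a back-of-the-envelope illustration of that rule of thumb (the 10-cycle baseline below is an assumed placeholder, not Xenon's actual L2 latency):

#include <math.h>
#include <stdio.h>

/* Rough sketch of the sqrt(bits) rule of thumb quoted above: access
 * latency scales roughly with the square root of the array size, so
 * modest growth of the 1 MB L2 costs relatively little. */
int main(void)
{
    const double base_mb = 1.0;       /* XeCPU's shared L2 size            */
    const double base_cycles = 10.0;  /* assumed baseline latency (cycles) */
    const double sizes_mb[] = { 1.0, 1.5, 4.0 };

    for (int i = 0; i < 3; i++) {
        double scale = sqrt(sizes_mb[i] / base_mb);
        printf("%.1f MB -> ~%.1f cycles (%.2fx)\n",
               sizes_mb[i], base_cycles * scale, scale);
    }
    return 0;
}

With those assumed numbers, 1.5 MB costs only about two extra cycles, and 4x the capacity lands at roughly 2x the latency, which is the point above.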

The bigger issue is that if you look at the die photo for XeCPU, it rotates and mirrors the cores so that the L2 control logic for the three cores is clustered in the center. If the number of cores were to be increased, the entire chip would need to be re-floorplanned. It wouldn't be a trivial change.
 
I was wondering about console CPU upgrade paths for the next generation. Let's take the nextbox as an example. Consider the possibility of a "Tick/Tock" cycle for a console. Would it be better to update this type of CPU by going down the "easier" route of slapping two together onto a die, dual to quad core for example? This depends on whether the current unit is considered to be a part of this hypothetical upgrade cycle or not. Going this route, as the Prof mentioned to me, would involve the "tick" cycle making the system a hexa-core. From my understanding of the architecture, something similar to a Core 2 Quad makes the most sense to me.

There is also the possibility of having the nextbox console be a complete overhaul, making that the "tock" of the cycle. I would expect it to be heterogeneous, so scaling the CPU would involve simply keeping the old cores and adding new cores depending on what type is needed. In my opinion, a flexible architecture designed for this from the beginning would be required. To me, that means a Nehalem/Larrabee-type architecture would be desirable to scale in an 8-32 core model.

My whole point is that it's cheaper in the short run and the long run to produce your consoles under this model. An early nextbox need not be prohibitively expensive following this model. In fact it could launch quite soon at a low price to speed adoption (I think full backwards compatibility is a must, though), and by the time the next PlayStation is released, the architecture would be well optimized, with an updated CPU/GPU being all that's required to maintain a performance lead. Think of buying a new PC every 3 years vs. the cost/benefits of an incremental upgrade path. This is the model that helped destroy 3dfx, the entrenched graphics heavyweight. Perhaps this type of aggression could put Microsoft on top as the provider of home entertainment.

Under this model, two consoles will always be current. Sony does this with the PS1/PS2/PS3: when the PS3 came out, the PSX became EOL. Also, under this cycle consumers simply have to stay within one generation to remain current. As soon as the new console is released, the old one becomes the value part (like GPUs, for instance). In addition to this, services such as TV on demand, etc. can be more flexibly altered to fit the needs of the consumers.


Thanks Professor for taking the time to PM me about this. I can't thank you enough. I hope I'm not stealing your thunder.

This is a spin-off of part of our discussion. Mostly his ideas though. :)

Edit: Complete overhaul. Much easier to read and my thoughts are more clearly defined.
 
Sharing an L2 isn't that bad. More cores would increase the size of the L2 proportionally as well as increase the number of banks (and thus the bandwidth) of the L2. The L2 is only 1 MB, so making it 1.5 MB or so likely wouldn't slow it down much (maybe a clock cycle or two), and it wouldn't be a big scalability issue for a modest number of cores.



I've never programmed for either core. Certainly Cell has a much higher peak FLOP rate than XeCPU. But I'm surprised that a single SPU runs circles around a single XeCPU core. Why would that be?

Both are dual issue. Both have 128 128-bit SIMD registers. However, the SPUs can only perform a single vector compute operation per cycle (if I recall correctly). I'm pretty sure an SPU can't issue two vector compute instructions per cycle, but perhaps it can do a vector load plus one vector compute instruction in the same cycle.

In contrast, the XeCPU has dual issue, and both instructions can be vector operations. Both CPUs are at similar clock frequencies. The XeCPU also has an additional 32 general-purpose registers.

The XeCPU will have cache misses, but some of that latency can be hidden by the dual-thread nature of the cores. In fact, XeCPU can execute instructions from both threads in the same cycle, further hiding pipeline stalls and such.

So, what gives a single SPU the edge over single XeCPU core? (honest question)

I do think XeCPU cores, like the PPE in CELL (both being born basically from the same original 1 GHz PPC64 project you quoted earlier in the thread), can only issue from a single thread at a time, and I do not think they can dual-issue vector instructions every cycle... Also, IIRC, another area where you sometimes have to pay more attention on XeCPU cores is passing data between the FXU and the VMX-128 registers, as you do not have a direct path on chip (meaning you have to go through memory, or rather the L2 cache [which yeah, is still on chip :)], IIRC), while the SPU has a unified register file.
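To make that FXU <-> VMX round trip concrete, here is a minimal sketch using generic AltiVec intrinsics (standing in for the actual VMX-128 ones): with no direct GPR-to-vector path, even splatting a scalar into a vector register has to bounce through memory.

#include <altivec.h>

/* Illustration only: broadcast an integer from a GPR into a VMX register.
 * Without a direct register-to-register path, the value must be stored by
 * the integer side and reloaded by the vector side (a load-hit-store round
 * trip through the cache hierarchy). */
vector signed int splat_via_memory(int x)
{
    int tmp[4] __attribute__((aligned(16)));
    tmp[0] = x;                             /* store from the FXU side     */
    vector signed int v = vec_lde(0, tmp);  /* reload on the VMX side      */
    return vec_splat(v, 0);                 /* broadcast element 0 to all  */
}

On an SPU the scalar already lives in the same unified 128-entry register file as the vectors, so no such trip through memory is needed.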

(http://www.ibm.com/developerworks/library/pa-fpfxbox/)

Comparing the diagram in there:

http://www.ibm.com/developerworks/library/pa-fpfxbox/figure2.gif

with the same diagram made by Kahle for the PPE:

http://researchweb.watson.ibm.com/journal/rd/494/kahle2.gif

(http://researchweb.watson.ibm.com/journal/rd/494/kahle.html?S_TACT=105AGX16&S_CMP=LP)

makes both core designs appear very similar.

Yet it still leaves me, to this day, with a lot of questions about particular details regarding the differences between the PPE (or rather the PPU) and what we could just as well call a PPX core, excluding of course the VMX unit and the VMX-128 unit from the comparison.

What still confuses me the most is how, at the same process node (90 nm SOI), the PPE minus its 512 KB L2 cache manages to be bigger than a single PPX core (the PPE really grew from its initial revision to the more final ones), despite having an in-theory much smaller VMX unit and an identically featured (from the outside) execution pipeline.
 
I confess, I actually don't quite understand what an SPE is in relation to a full-fledged PPE (why an SPE is much smaller), but at a high level, it's a specialized SIMD unit :?: Wouldn't that be analogous (digitalogous? :p) to the VMX units?

edit:

Ok, just recalled the layout of the chip, and I suppose the SPE & LS aren't that much smaller than a PPE sans L2... and there is at least one PowerPoint presentation that describes them as "VMX accelerators".

So shouldn't MS be able to get IBM to design something similar with VMX-128?

An SPE is a real processor; VMX is an execution unit.
They would need to wrap at least parts of a minimal RISC processor around VMX, copy CELL's memory model, put a few of those together, and yes, they could call it XELL. Of course it probably wouldn't be as efficient as SPUs (area-, power-, and performance-wise for a comparable ISA), but it should still be faster than Xenon for data-local or streaming algorithms.
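To make the "copy CELL's memory model" part concrete, here is a rough sketch of the double-buffered local-store streaming that makes SPU-style cores fast on that kind of workload. dma_get_async/dma_wait, CHUNK, and process are hypothetical stand-ins, not any real API (on an actual SPU these would be the mfc_* DMA intrinsics):

#define CHUNK 1024   /* floats per transfer (made-up size) */

/* Hypothetical primitives a "XELL"-style core might expose. */
extern void dma_get_async(void *local, const float *remote, unsigned bytes, int tag);
extern void dma_wait(int tag);
extern void process(float *buf, int n);   /* the actual compute kernel */

void stream(const float *remote, int chunks)
{
    static float ls[2][CHUNK];            /* two "local store" buffers */
    int cur = 0;

    dma_get_async(ls[cur], remote, sizeof ls[cur], cur);
    for (int i = 0; i < chunks; i++) {
        int next = cur ^ 1;
        if (i + 1 < chunks)               /* kick off the next fetch early      */
            dma_get_async(ls[next], remote + (i + 1) * CHUNK, sizeof ls[next], next);
        dma_wait(cur);                    /* make sure the current chunk landed */
        process(ls[cur], CHUNK);
        cur = next;
    }
}

The point is simply that software-managed transfers let the fetch of chunk N+1 overlap the work on chunk N, which is where the advantage on data-local or streaming code over a cache-miss-prone core comes from.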

... ignoring the Sony angle.
 
My whole point is that it's cheaper in the short run and the long run to produce your consoles under this model.

It's cheaper, yes, but will it be effective? That depends on several factors. Frankly, I think every generation, the entire architecture must be evaluated from the ground up on these consoles. If you decide to keep legacy aspects, good, but you cannot keep a corporate strategy of that nature over the lengths of time we're talking. Intel's classic "tick-tock" strategy, which I have to imagine is what's being echoed here, serves a double-edged function. At the same time that a new process is coming online, a legacy architecture is shrunk and improved; the focus is more on the process. During the 'tock' cycle, the now-mature process is used as the basis for a more dramatic architectural shift.

And in essence, this is supposed to be a yearly thing. To an extent it's what the GPU makers do as well.

But for consoles, whose design goals can be tied to many different factors, and which can be constrained by very real cost ceilings/targets, I think it absolutely must be case-by-case at all stages. Nintendo did a 'tick' this generation, in some ways... though without the fundamental increase in transistor count expected. Either way though, it only panned out for them because their thrust this generation is so inherently shifted from prior gens. It's worth noting that in the modern console era, there has *never* otherwise been a tick/tock strategy adopted... and not for lack of these companies' awareness of the scheme.

Things simply change too much in five years, and to risk going pure legacy is to risk being seriously left behind from a performance standpoint vs active competition. Now, at this point in time, with the shift to parallelism and the draw-down in the MHz race, who knows. For the CPU, maybe MS would be OK just slapping two to four (two seems too low) XeCPUs together and calling it a day. In 2011, that would be a cheap proposition, no doubt. But for the GPU I think it goes without saying that if they want to stay 'hardcore,' they need to go to a modern architecture. And frankly I think, at the minimum, the CPU needs to see at least some tweaking in its design.

For Sony I think the path is a little clearer in terms of Cell-revision and an NVidia GPU, but even for them, after the expense of creating what they felt would be a 'platform' architecture, the truth is that nothing is cast in stone as technology advances over the next two-three years (pre-prototype).
 
Frankly, I think every generation, the entire architecture must be evaluated from the ground up on these consoles.

When are the next generations of the PlayStation (PS4?) and Xbox due?

Is there any chance that Microsoft might make a "0.5" generation release of a new XBox? Something that would be realized in a year or two that would be comfortably more powerful than the existing XBox 360 and PS3, but yet would come out two years or so before the next PlayStation update? Of course, this Xbox 2.5 would be compatible with current games (maybe even play them with more detail) while also allowing new games. Part of what I'm basing this conjecture on is that Microsoft's gambit of pushing hard to get the XBox 360 out before the PS3 seems to have really helped the XBox out.
 
A change every 2.5 years would eat into the cost savings the console makers get from economies of scale, increasing manufacturing efficiencies, and refinements of a given design over time.

If Microsoft did this with the Xbox 360, they'd be spending money on a costlier newer design just as manufacturing started on the Falcon variant of the old one.
 
When are the next generations of the PlayStation (PS4?) and Xbox due?

Is there any chance that Microsoft might make a "0.5" generation release of a new XBox? Something that would be realized in a year or two that would be comfortably more powerful than the existing XBox 360 and PS3, but yet would come out two years or so before the next PlayStation update? Of course, this Xbox 2.5 would be compatible with current games (maybe even play them with more detail) while also allowing new games. Part of what I'm basing this conjecture on is that Microsoft's gambit of pushing hard to get the XBox 360 out before the PS3 seems to have really helped the XBox out.

An Xbox 2.5 which preserves the functionality & compatibility of the 360 but forces developers to support a higher/new hardware performance specification is.. for all intents and purposes.. an Xbox 3.0..

Don't forget even the 360 & PS3 have BC..
 
A change every 2.5 years would eat into the cost savings the console makers get from economies of scale, increasing manufacturing efficiencies, and refinements of a given design over time.

Right now the console market is based around having bleeding edge (and expensive) hardware at release. Then they shrink it, integrate parts, etc., which lowers the cost and price (but the price doesn't decline as quickly as the cost). Then, the last few years of the console generation, all the other competitors also have four-year-old designs, so the consoles become cash cows for everyone at that point.

Actually, Nintendo doesn't play that game. Nintendo hasn't been on the bleeding edge the last few generations. With both the GameCube and Wii, Nintendo has been laughing all the way to the bank. They didn't sell as many GameCubes or have the same revenues as Microsoft and Sony, but in terms of profit and return-on-investment, Nintendo's GameCube did just fine. Certainly better than XBox I and arguably better than the PS2. The Wii is doing even better (and not because its 3D graphics are so good, of course).

I guess what I'm trying to say is: what if Microsoft moves away from this "revolution every five years" model to an "evolution every 2.5 years" model? It may just be that the dynamics of the console industry would prevent that from happening (I'm not sure what those dynamics would be, but perhaps some of you can comment). Such a model would allow the consoles to not need to be on the bleeding edge at launch (required right now because you want that system to last several years). In fact, it would be more like PCs (and mobile phones, and iPods, and everything else), something that Microsoft would be pretty comfortable with. Perhaps automobiles are a counterexample, but automotive technology is developing much more slowly than computer technology.

I'm not saying this will happen. I'm just asking: what would prevent this from happening?
 
Right now the console market is based around having bleeding edge (and expensive) hardware at release. Then they shrink it, integrate parts, etc., which lowers the cost and price (but the price doesn't decline as quickly as the cost). Then, the last few years of the console generation, all the other competitors also have four-year-old designs, so the consoles become cash cows for everyone at that point.

Actually, Nintendo doesn't play that game. Nintendo hasn't been on the bleeding edge the last few generations. With both the GameCube and Wii, Nintendo has been laughing all the way to the bank. They didn't sell as many GameCubes or have the same revenues as Microsoft and Sony, but in terms of profit and return-on-investment, Nintendo's GameCube did just fine. Certainly better than XBox I and arguably better than the PS2. The Wii is doing even better (and not because its 3D graphics are so good, of course).

I guess what I'm trying to say is: what if Microsoft moves away from this "revolution every five years" model to an "evolution every 2.5 years" model? It may just be that the dynamics of the console industry would prevent that from happening (I'm not sure what those dynamics would be, but perhaps some of you can comment). Such a model would allow the consoles to not need to be on the bleeding edge at launch (required right now because you want that system to last several years). In fact, it would be more like PCs (and mobile phones, and iPods, and everything else), something that Microsoft would be pretty comfortable with. Perhaps automobiles are a counterexample, but automotive technology is developing much more slowly than computer technology.

I'm not saying this will happen. I'm just asking: what would prevent this from happening?

Cost..?

Demand..?

It's likely that by investing billions into your new platform only to launch the next in 2.5 years you'd be cutting off your install base before you ever got one.. This would prevent mass market penetration of your products & thus destroy your ability to make any money at all off anything.. Not to mention the fact that you're basically going to piss off all of your developers, who now have to adapt to new technology every half cycle & never get the chance to really get the most out of it with respect to re-engineering their code base to make optimal use of the resources available..

In fact I'm intrigued as to what you think the "benefits" of such a model might actually be, considering this is probably one of the reasons why consoles are so successful (when compared to PCs with respect to mass market consumer appeal..)..?
 
Right now the console market is based around having bleeding edge (and expensive) hardware at release. Then they shrink it, integrate parts, etc., which lowers the cost and price (but the price doesn't decline as quickly as the cost). Then, the last few years of the console generation, all the other competitors also have four-year-old designs, so the consoles become cash cows for everyone at that point.

Well... that's the ideal model, certainly, but it's worth noting that as described only Sony has served as an exemplar of it (and Nintendo in handhelds). Traditionally, the release of a new console has seen a rapid tapering of legacy support. In the post-Nintendo/Sega era things have changed somewhat, but GameCube and Xbox didn't stick around at all after the new gen, for example. There are of course reasons, but I think it's Sony's ~10 year cycles that should be viewed as the anomaly almost... and Nintendo again on the handheld front. I do think that Microsoft will try to go down that path this gen, though; Nintendo we'll just have to see.

Actually, Nintendo doesn't play that game. Nintendo hasn't been on the bleeding edge the last few generations. With both the GameCube and Wii, Nintendo has been laughing all the way to the bank. They didn't sell as many GameCubes or have the same revenues as Microsoft and Sony, but in terms of profit and return-on-investment, Nintendo's GameCube did just fine. Certainly better than XBox I and arguably better than the PS2. The Wii is doing even better (and not because its 3D graphics are so good, of course).

I think many would tell you that although they didn't spend as much on R&D/materials, the GameCube was actually the most elegant system design across a number of criteria. Coming out later in the game also allowed them to take advantage of some of the tech trends at the time that let it be so. And indeed, meager though its market share was, it was profitable to some extent or another. Keep in mind though that the billions you saw Nintendo collect through those GameCube years stemmed primarily from the handheld market, and not the home console market.

I guess what I'm trying to say is: what if Microsoft moves away from this "revolution every five years" model to an "evolution every 2.5 years" model? It may just be that the dynamics of the console industry would prevent that from happening (I'm not sure what those dynamics would be, but perhaps some of you can comment). Such a model would allow the consoles to not need to be on the bleeding edge at launch (required right now because you want that system to last several years). In fact, it would be more like PCs (and mobile phones, and iPods, and everything else), something that Microsoft would be pretty comfortable with. Perhaps automobiles are a counterexample, but automotive technology is developing much more slowly than computer technology.

I'm not saying this will happen. I'm just asking: what would prevent this from happening?

I think iPods are the best example to go with, since I think that's the level on which they would have to differentiate. I *do* think it's possible, but in the core area of CPU/GPU architecture - vs other feature set enhancement - what I think makes it most challenging is developer/mindshare burden. If you follow closely over time, you'll notice a definite trend: rising development costs, and years-long development. To the extent that the only way I see it being even remotely viable would be for there to be an XBox 360 "Plus," with a library offering that is identical across the board to the 360 on new releases. What would be asked of developers in this case would be 'simply' to add engine enhancements capable of utilizing, in some nominal fashion, the higher-spec'd system to produce a better result. Sort of the options selection that happens in PC gaming, only pre-determined for the user depending on the system. The games sold would still come on the same media, and as such it's the console itself where profit would be derived: such a SKU would have to be profitable, and perhaps serve as some other platform enhancer pre-NextBox as well; maybe built-in WiFi, a high-capacity HDD, an HD/BD drive, a basic Windows install as an alternate boot path(!)... who knows. Some of it or none of it, but basically the console would have to be profitable, because it's certainly not going to be worth developers' efforts to program specifically to this SKU. Prettier versions of planned releases are the limit of its scope.
 
It's likely that by investing billions into your new platform only to launch the next in 2.5 years you'd be cutting off your install base before you ever got one.

Implicit in my argument is that it wouldn't cost billions every 2.5 years. As it would be evolutionary vs revolutionary, it wouldn't be these huge Herculean efforts. Just as GPUs and CPUs evolve over time (with new jumps for new microarchitectures or new process technology), why couldn't consoles be more similar to that (not the same rapid rate, just not on a five-year cycle)?

Not to mention the fact that you're basically going to piss off all of your developers, who now have to adapt to new technology every half cycle & never get the chance to really get the most out of it with respect to re-engineering their code base to make optimal use of the resources available..

Also implicit in the argument is that the systems would be evolutionary, so the game developers wouldn't need to totally re-engineer the code base every time.

In fact I'm intrigued as to what you think the "benefits" of such a model might actually be...

Lower risk (you're not betting the farm on each new generation). More stable revenue and profits (Wall Street likes smoother profits). Continuous use of a design team.

What do you do with your hardware design team in between generations? Sure, you can have them working solid for five years to get the next console out, but that is exactly the reason the consoles cost so much to develop.

...considering this is probably one of the reasons why consoles are so successful (when compared to PCs with respect to mass market consumer appeal..)..?

As an outside observer, the biggest advantages I see for end users of game consoles are: no need to install software, no mucking with re-installing Windows, etc. It is as simple to use as a DVD player. This seems like an important reason for the mass market consumer appeal of consoles (remember too that the mass market isn't just high-end 3D gamers).

Historically, game consoles were also cheaper. Yet, that doesn't seem to be as true with the current generation now that a PC from Dell can be cheaper than a cutting-edge game console. Another trend is that game consoles were originally toys for teenagers and younger, but now they are increasingly targeting 20-somethings, too. And the Wii is having even broader appeal in terms of age groups.

Of course, developers like game consoles because they know they have a stable, steady system to target. Yet, companies still make games for the PC, and that is an incredibly diverse and rapidly-changing market. Besides, if you're targeting a game to run on multiple platforms, some of that advantage likely goes away.
 
Right now the console market is based around having bleeding edge (and expensive) hardware at release. Then they shrink it, integrate parts, etc., which lowers the cost and price (but the price doesn't decline as quickly as the cost). Then, the last few years of the console generation, all the other competitors also have four-year-old designs, so the consoles become cash cows for everyone at that point.
This is something of a recent phenomenon. Successful consoles of earlier generations did not try to compete at quite as high a level.

There are other dynamics, such as Microsoft and Sony trying to push beyond gaming-specialized consoles and take more control of the home media center, that encouraged heftier hardware.

The Wii is more economical in that regard because remaining specialized allowed it to skip what wasn't critical for its role.

I guess what I'm trying to say is: what if Microsoft moves away from this "revolution every five years" model to an "evolution every 2.5 years" model? It may just be that the dynamics of the console industry would prevent that from happening (I'm not sure what those dynamics would be, but perhaps some of you can comment). Such a model would allow the consoles to not need to be on the bleeding edge at launch (required right now because you want that system to last several years). In fact, it would be more like PCs (and mobile phones, and iPods, and everything else), something that Microsoft would be pretty comfortable with. Perhaps automobiles are a counterexample, but automotive technology is developing much more slowly than computer technology.

PC hardware is sold as a product in and of itself. Its own profit margins finance the R&D needed for its evolution, and it is expected to make money almost from day one.
In addition, there is a certain level of flexibility in order to allow for different hardware combinations that adds cost but can survive on the fatter margins of the market.

Console hardware is often a loss leader or a modest earner compared to the software licensing and sales that make the money needed for design work.

What seems to be happening is that the fixed cost of a design, whether new or an evolution, is only a fraction of ongoing expenses compared to the costs of manufacturing millions of consoles.

Saving X millions of dollars up front on the design effort can be a bad thing if it means that the manufacturing lines cannot shave X dollars off of the production of tens of millions of units.

It's one thing to manufacture hardware of a given level of performance more cheaply with process shrinks and hardware revisions, and an entirely different can of worms to do the same with hardware that has even higher specifications.
Even cost-saving revisions have a cost involved in their design, so it's not a money saver to pay money to make a revision that then costs more unless it can pull in more cash.

The crux of the matter is that the cash is tied to the software being sold.
Unless the evolutionary hardware allows for a tangible improvement of the software that leads to consumers spending more money on the games, the console maker doesn't really win out.

The Wii's success is in part due to the fact that its interface allows for tangibly different software.
The others suffered initially (and possibly still do in some genres) from the problem that their new hardware didn't lead to noticeably different or improved software.

A little bit better hardware isn't particularly conducive to software people buy more of or pay more for, so it encourages greater generational changes.
 
Implicit in my argument is that it wouldn't cost billions every 2.5 years. As it would be evolutionary vs revolutionary, it wouldn't be these huge Herculean efforts. Just as GPUs and CPUs evolve over time (with new jumps for new microarchitectures or new process technology), why couldn't consoles be more similar to that (not the same rapid rate, just not on a five-year cycle)?

The design is probably the cheapest part of launching a new console. Manufacturing and marketing are what's going to kill you if you do it every 2.5 years. Only recently did MS set aside two billion dollars for the Xbox 360 manufacturing defect. Too much risk, too little return, if any at all.

Besides, the console market is more software-driven. People expect third-generation software to look better than first-generation software. Developers exploiting the hardware for all it's worth is surely a better option from both the console owner's and the manufacturer's point of view.

Also, unlike other hardware like PCs or MP3 players, consoles make most of their money from software royalties.

I can see handhelds moving to a shorter product cycle if there is enough competition, but currently it's only the DS and PSP. Not like the mobile phone or MP3 player sectors, where it's more competitive.

But what I would like to see is consoles divided up like graphics cards in terms of their performance and functions. Say $300 gets you the basic gaming function with sub-720p @ 30 fps games with standard-res textures and models, then you have the high-end $5000 one that does 1080p @ 120 fps with a high amount of AA and high-res art, as well as more-than-gaming functions. But launched during the same window of opportunity.

Simply put, the barrier to entry is just too great for consoles to launch every 2.5 years, and even if they could, the window of opportunity is non-existent.
 