Predict: The Next Generation Console Tech

Interesting perspective; the only real difference between us may be that I was still considering a loss-leader strategy whilst yours involved consoles being sold at a positive margin. The real question is demand. Can a $400 console offer enough of an improvement to the experience to be worth a premium price over the current generation consoles?
I think that's definitely true. Wii is a case study of the right gaming product being sold at considerable profit. MS and Sony won't be able to recreate that in all probability, but at the same time a huge leap isn't necessary. A respectable improvement in tech for 2012 will be sufficiently beyond the current gen for people to be interested. And price-wise, XB360 was outselling supply when first released at $400. MS could have charged more and still satisfied demand at a high profit margin, and had they done so they could be making much nicer returns per unit now. So next gen, both MS and Sony can produce consoles designed to be break-even or profitable at a $300 price point and release them for $400. Joe Early-Adopter will think they're getting the latest, greatest technology and be happy to pony up the premium to be an exclusive owner of the new machine. The rest of us will have to wait a year or so for the price to drop to the 'mainstream' $300. Besides, $400 isn't such a premium over older consoles' launch prices once you account for inflation. Tech is becoming so cheap these days it's hard for anyone to ask a premium on CE goods!
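For what it's worth, here's the back-of-the-envelope behind that inflation remark (the flat ~3% rate is my own assumption, not an official figure):

```python
# Rough sketch: what an old launch price looks like in 2012 dollars,
# assuming a flat ~3% annual inflation rate (an assumption, not CPI data).
def inflation_adjusted(price, launch_year, target_year=2012, rate=0.03):
    return price * (1 + rate) ** (target_year - launch_year)

# The XB360's $400 launch price (late 2005), expressed in 2012 money:
print(round(inflation_adjusted(400, 2005)))  # ~492
```

So under that assumption, a $400 launch in 2012 would actually be cheaper in real terms than the 360's was.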
 

Perhaps the biggest question is when the next generation console is released, as well as whether it can establish a niche in the market. If the current generation consoles are expected to persist well into the next generation, it's not so critical to let the previous generation console die before a new one is introduced.

So long as the interface is strong, a next generation competitor/complement to the Wii is probably the best bet at this point. With far more powerful hardware it will be easier to find a place in the market whilst maintaining the previous generation's viability, catering to the gap between interface and performance we currently have.

I wonder if sooner rather than later is a better bet for the console manufacturers. It's an absolute cinch to adjust a game to run at a higher frame-rate/resolution compared to the other challenges game developers face, and with the current port-everything mindset I doubt the console would miss out on any third party support.

Move early, take advantage of the opening in the market between the Wii and the Xbox 360/PS3, and perhaps try to get the best of both worlds at the same time. I think this is the best scenario for profitable hardware because, as economics taught me, under perfect competition economic profits approach zero, as we see with the Xbox 360 and PS3.
 
I would love for one of the consoles to use an Intel chip that combines a 48-core Larrabee with 2 or 4 beefy traditional CPU cores based on their next-gen CPU architecture (Sandy Bridge) on 22nm. Plus the best custom Nvidia or ATI GPU that can be created, with the focus being on rasterisation performance, since Larrabee can take over all the polygon calculation/vertex/geometry shader/setup work. That way the console can benefit from Larrabee's programmability and future raytracing support, but at the same time not be slowed down by its less-than-cutting-edge raster performance.
 
I don't think that this is a good idea. Say that per mm² Larrabee achieves the same theoretical peak FLOPS as an Nvidia/ATI GPU. Now remove the work done by fixed-function units such as the rasteriser and RBEs/ROPs, factor in the lower efficiency, and say you end up lagging ~25% behind what a GPU would achieve. To me it still doesn't make sense to have two chips competing for the same job; it's extra design complexity and a development nightmare.
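To make the arithmetic concrete, here's the kind of toy calculation I have in mind (every figure below is an illustrative assumption, not a measurement):

```python
# Toy comparison of effective per-area throughput (all numbers are assumptions).
gpu_peak = 1.0                 # normalise the GPU's peak FLOPS per mm² to 1.0
larrabee_peak = 1.0            # assume Larrabee matches that peak exactly

fixed_function_share = 0.15    # assumed share of work a GPU gets "free" from
                               # its rasteriser and RBEs/ROPs
efficiency_penalty = 0.12      # assumed extra loss from doing that in software

larrabee_effective = larrabee_peak * (1 - fixed_function_share) * (1 - efficiency_penalty)
print(f"effective throughput vs GPU: {larrabee_effective:.2f}")  # ~0.75, ~25% behind
```

Plug in whatever shares you believe; the point is that software rasterisation eats into the peak before any game code even runs.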
 
Would using a single chip design like Larrabee make the whole processing -> rendering pipeline more efficient? If it's all done on the same chip, couldn't it be done with just a single frame delay, rather than the usual two? I'm not sure if that's going to help... but hey.
 
I would love for one of the consoles to use an Intel chip that combines a 48-core Larrabee with 2 or 4 beefy traditional CPU cores... Plus the best custom Nvidia or ATI GPU that can be created...
Are you sure you don't want to add anything else? 0_0
 
Putting aside what Intel may achieve with Larrabee for the moment, I think, given how DX11 GPUs are shaping up, that sliding budgets toward the GPU may be the best bang for the buck out of the gate, as well as leaving long-range ALU resources to fiddle with to push the last mile. In this sense AMD could potentially repurpose a handful of AMD64 cores (fast, relatively small) and a moderate 2x GPU design. Obviously not ideal, but I don't think next gen will give us "ideal", instead "the best balance of scaling/power with performance out of the gate, with API and middleware support, and with something tangible to push down the road." Of course 2011-2012 is pretty far off, and gauging diminishing returns and what gives the right balance will be a tough task, especially when tying a design to price points and having software ready to ship out the door.

As I mentioned to Al the other day, I am not sure it wouldn't be in Sony's best interest to kill the PS3 early (2010) and launch a fully backwards compatible successor with an improved Cell design, a beefy GPU and more RAM, but a more aggressively cost-reducible design. If Sony did that, then given the increasingly unpredictable nature of chip design, I am not sure a 2xPPE 32xSPE part with a really solid GPU would be solidly trumped in 2012 at a competitive price. They wouldn't even need a slew of next gen software out of the gate. Just hit a $400 price tag and aim for $250-$299 by 2012. They could even transition PS3/PS4 games over, where depending on your HW you could get a visual bump plus new mechanics for their wanky new controller. More importantly they would be setting the tone for software development (much like the 360) and hit MS where it hurts: no free exclusives. With MS axing internal studios they would actually be in a pretty bad position, and Sony would be able to reboot and reposition themselves in the market while leveraging their investment in the PS3. If design and chip reduction issues are going to hold back future consoles, a more strategic middle ground may go a long way, although I am not too sure a lot of developers want 32 SPEs.
 
I don't think it's possible for Sony to launch by early 2010. It's only one year away now; there would be rumors and leaks all over the place, chips would have to be taped out, etc. There is no way the whole thing could have been kept secret, and there is no big news from IBM on the Cell front. On top of that I don't see IBM going for a 2PPU 32SPU design; there are strong leaks stating that IBM wanted the Cell to be tinier, so I can't see them going for something pretty huge again. If anything I could see them favour a tinier chip, maybe somehow sacrifice transistor density and aim for even higher clock speeds (a la POWER6).

From a non-technical side I would love to see a next gen system come:
With a symmetric controller, say close to the 360 one but with the right stick sitting where the buttons are.
With dual output. I really enjoy co-op with my girlfriend, and I wish the system were able to output to two different screens (even at half the resolution, then upscaled). LCDs can be conveniently moved (as opposed to old, huge, heavy CRTs), so it would be easy to move one close to the TV.
 
Would using a single chip design like Larrabee make the whole processing -> rendering pipeline more efficient? If it's all done on the same chip, couldn't it be done with just a single frame delay, rather than the usual two?
There's nothing about a two chip design that intrinsically slows the process down an order of magnitude. All a dual-package system has extra to cope with is the bandwidth and latency of communication between the two chips. The CPU generates content that goes out to RAM, which the GPU reads and turns into pretty pictures. If all on one die, the CPU could write to local cache for the GPU to use, but you'd need an awful lot of cache to fit the scene (:shock:), or some TBDR system. Any savings to be made are going to be too small to be a factor in choosing a single massive chip over two smaller ones.
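A minimal sketch of that pipeline, just to show where the delay actually comes from (my simplification; real engines vary):

```python
# Toy model of the classic frame pipeline: the CPU builds frame N while the
# GPU draws frame N-1 and the display scans out frame N-2. The overlap
# between stages, not the number of chips, is what sets the frame delay.
for t in range(5):
    cpu, gpu, screen = t, t - 1, t - 2
    print(f"tick {t}: CPU builds frame {cpu}, "
          f"GPU draws frame {gpu if gpu >= 0 else '-'}, "
          f"display shows frame {screen if screen >= 0 else '-'}")
```

Merging both chips onto one die shortens the communication hop, but the stages still overlap in exactly the same way, so the latency structure doesn't change.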
 
It's come up before in the thread, but it has to be mentioned again: don't expect the silicon area budgets at 22nm that we saw at 90nm. The 90nm era and the lagging 65nm transitions were a wake-up call for the industry, to say nothing of the opaque future that is post-22nm fabrication. If in the next year or so there's no clear path towards a nice shrink progression past 22nm, then all these manufacturers are going to start with smaller die sizes to begin with.
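For reference, this is the idealised scaling those big area budgets were built on; the worry is precisely that real transitions no longer track it:

```python
# Idealised die-area scaling: each full node step scales linear dimensions by
# ~0.7x, halving the area of a fixed logic block. Real transitions from 90nm
# onwards have increasingly fallen short of this ideal.
nodes_nm = [90, 65, 45, 32, 22]
area = 1.0  # relative area of some fixed logic block at 90nm
for prev, cur in zip(nodes_nm, nodes_nm[1:]):
    area *= (cur / prev) ** 2
    print(f"{prev}nm -> {cur}nm: relative area {area:.2f}")
# Ideally the same logic lands at ~6% of its 90nm area by 22nm; if the curve
# flattens past 22nm, launching on a big die stops being a safe bet.
```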
 
...the WSJ predicts that he will be helping the software giant with new processors for the Xbox division and to develop new operating systems and software that will run on a variety of chips with more than one core.

I'd go more with the latter part of that prediction than the former. I wouldn't think there'd be too much overlap between what MS is doing on the console side and what Marc would be dealing with over at MS. I think it's just a strange phenomenon in the world of financial journalism that the console businesses are ever prominent in the minds of these writers when they analyze these companies' moves.
 
Would using a single chip design like Larrabee make the whole processing -> rendering pipeline more efficient? If it's all done on the same chip, couldn't it be done with just a single frame delay, rather than the usual two?

Frame delay? That would be about double buffering or triple buffering, which is unrelated to the architecture.
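Quick numbers on why the buffering scheme is what matters here (a 60Hz display and a 20ms frame are assumed purely for illustration):

```python
import math

# Assume a 60Hz display (~16.7ms per refresh) and a GPU needing 20ms per frame.
refresh_ms, render_ms = 16.7, 20.0

# Double buffering with vsync: the GPU stalls until the flip at vblank frees
# the back buffer, so a 20ms frame occupies two full refresh intervals.
double_fps = 1000 / (math.ceil(render_ms / refresh_ms) * refresh_ms)

# Triple buffering: a second back buffer lets the GPU start the next frame
# immediately, so throughput is limited only by render time.
triple_fps = 1000 / render_ms

print(f"double buffering: ~{double_fps:.0f} fps")  # ~30
print(f"triple buffering: ~{triple_fps:.0f} fps")  # 50
```

Same silicon either way; only the swap chain changed.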
 
The 970MP at 45nm would be about as big as Hollywood (still too much heat?) and would beat the XeCPU in many, many cases (just like the 4670). No reason not to play any of today's games at 720p.


About emulation, it made news not long ago:

http://www.tomsguide.com/us/nintendo-wii-hd-emulator-gamecube,news-3725.html

Well, I was asking about the particulars of the graphics code. Though a 4670 class GPU is probably well in excess of 15 to 20 times as powerful as the Hollywood GPU, enough for graphical emulation, the CPU I guess would just have to translate the code, or possibly the GPU could have a built-in API to make use of GameCube and Wii games?

And 720p at 60 FPS would be a very nice thing to have with a Wiimote+Nunchuk. That 4670 based Wii2 GPU should have GDDR5 too :smile:
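On the "translate the code" part, here's a toy illustration of the two classic approaches, using a made-up instruction set (this has nothing to do with how any real GameCube/Wii emulator is actually built):

```python
# Toy guest program in an invented ISA: load 2 and 3, then add them.
guest_program = [("li", "r1", 2), ("li", "r2", 3), ("add", "r0", "r1", "r2")]

def interpret(program):
    """Decode and execute every guest instruction on every run (slow)."""
    regs = {}
    for op, dst, *src in program:
        if op == "li":
            regs[dst] = src[0]
        elif op == "add":
            regs[dst] = regs[src[0]] + regs[src[1]]
    return regs["r0"]

def translate(program):
    """Translate the guest block to host (Python) code once, reuse it after."""
    lines = ["def block(regs):"]
    for op, dst, *src in program:
        if op == "li":
            lines.append(f"    regs['{dst}'] = {src[0]}")
        elif op == "add":
            lines.append(f"    regs['{dst}'] = regs['{src[0]}'] + regs['{src[1]}']")
    lines.append("    return regs['r0']")
    namespace = {}
    exec("\n".join(lines), namespace)  # build the translated block
    return namespace["block"]

print(interpret(guest_program))      # 5, paying the decode cost every time
print(translate(guest_program)({}))  # 5, decode cost paid once up front
```

The graphics side is the harder half, which is why the question of the GPU exposing something GameCube/Wii-shaped is the interesting one.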
 
We might ask ourselves what a $400 PC is in 2012. In a matter of weeks, we'll be able to build a PC with an Athlon X4 (Propus core), an RV740, 4 gigs of RAM and a 1TB drive for around that price, seriously spanking any console or even arcade cabinet.
 

In that context 2012 seems too far off. Even mainstream PC gaming hardware should be laughably more powerful than the consoles by then, as if it isn't already.
 