Predict: The Next Generation Console Tech

I'm not saying that ARM will definitely be in the console, but there are some advantages, and it's a possibility at very little cost.
There's really no point in having ARM in the console as something that would unify it with the Windows platform. First of all: I don't believe the OS codebase will be shared between desktop and console. There are tons of reasons*. Second of all: code portability has a lot to do with API portability and tools maturity. My Win32/C code works perfectly fine on x86, amd64 and IA64 - three very different beasts. And I'm not even thinking that much about making it portable: I just follow some guidelines, don't bind my code to, say, pointer size, endianness, etc., and I'm fine 99% of the time (a small sketch of what I mean follows after the footnote).

* the core concept of Windows is multitasking; the core concept of a console is "run this game securely at max perf". This difference makes it hard to have generic OS components that can be shared: your I/O has to be secure on the console, so your I/O stack is different; you have no paging, so your memory manager has to be different; your OS tasks have strict budgets, so your background services have to work differently; your HW cannot be compromised, so you run on top of a super-tight hypervisor; your graphics API does not need all the abstraction layers you need on PC, so you strip WDDM completely; and so on... I mean - it'd be cool to have one code base, but I don't think it's gonna happen for X3.
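By "follow some guidelines" I mean trivial stuff like this (a minimal sketch; the helper names are made up for illustration, not from any real codebase):

Code:
/* Illustrative only: the kind of habits that keep plain C portable
   across x86, amd64 and IA64 without much extra effort. */
#include <inttypes.h>   /* fixed-width types + printf macros */
#include <stdio.h>

/* Don't assume a pointer fits in an int or a long: use the dedicated type. */
static uintptr_t to_handle(void *p) { return (uintptr_t)p; }

/* Don't assume byte order: decode bytes explicitly instead of casting. */
static uint32_t read_le32(const uint8_t *b)
{
    return (uint32_t)b[0]         | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(void)
{
    int x = 42;
    const uint8_t buf[4] = { 0x78, 0x56, 0x34, 0x12 };
    printf("handle=%" PRIuPTR " value=0x%08" PRIX32 "\n",
           to_handle(&x), read_le32(buf));
    return 0;
}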
 
Sure, they could use power gating to save power, but it still wouldn't be as efficient as ARM.

Wake-up on image/audio for Kinect would be much more power efficient than gating cores and would be a very nice feature.

Piracy protection could be more effective with the OS controlling the system in a controlled environment on ARM, similar to how Sony has effectively used Cell.

It's not going to be as efficient as an ARM core, but switching on one core at lower clock rates isn't going to be inefficient either. Besides, you're still stuck with having to run the GPU anyway, which is probably going to use the most power. You also need to keep costs in mind. Not sure how much an ARM core plus all the extra logic on the mainboard will cost you, but it will be at least 5 dollars. Is it really worth spending an extra $250-500 million (on 50 million consoles) just so you can shave off a couple of watts while watching a movie or something? I got one of those low-power AMD X2 CPUs for my HTPC. It does 35W max, and that is without the ability to switch cores off. Switch one core off and downclock, and you're probably not even using 15W.
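(To spell out the arithmetic behind that range: roughly $5-10 per console × 50 million consoles = $250-500 million.)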

I don't see it working for piracy either. Just like Cell, it will depend on the overall protection of your system, not on the CPU you use. The Wii has a separate ARM CPU for all the security stuff, I believe, but Wii security is still fundamentally broken.
 
We saw how NV has a "different" core in their new quad design. Maybe MS is looking at DVR and other functions, at the slow dash, and at stuff like Kinect, and wants some dedicated resources for these "background" tasks that are "always on", leaving the primary CPU/GPU resources available for developers. This way your DVR recording, dash, and other background stuff can run autonomously at all times without impacting gaming.

Maybe a crazy idea... but for the consoles to become more functional boxes there is a conflict between gaming and the background media/social tasks. I'm sure they could come up with a solution with dedicated CPU resources for that, but maybe they have decided to go with dedicated hardware for it instead?

I think this makes the most sense for an ARM core. The high performance CPU & GPU would be wasted on dashboard & video duties. Maybe they are going to take the ARM Shiva class processor that is already in Kinect & put it in the console? That way they can cost reduce Kinect & also use it as an alternative CPU for the dashboard. That would save the monster CPU & GPU for developers.

Tommy McClain
 
Why should MS care how much power the Xbox uses at the dashboard if 99% of potential buyers don't? I have never heard someone say: "I don't want to buy a 360 because its idle power consumption is too high."

If MS cared about idle power consumption (not what caused their RRoD problems), wouldn't they have made the new slim use less power at idle? 70W at idle vs 90W at peak hardly shows they care that much.

http://www.anandtech.com/show/3774/welcome-to-valhalla-inside-the-new-250gb-xbox-360-slim/3
 
Why should MS care how much power the Xbox uses at the dashboard if 99% of potential buyers don't? I have never heard someone say: "I don't want to buy a 360 because its idle power consumption is too high."

This. My sound system and TV are going to use like 5x as much power as the console anyway. I'd appreciate a lower power footprint, but it's unlikely to affect the purchase decision, and I'd certainly rather they spend the dollars an ARM core would cost on improving the prime functions rather than on desktop power consumption.

I'm not suggesting an additional processor would be useless, just that it would be less useful than spending the money on improvements elsewhere.
 
I think this makes the most sense for an ARM core. The high performance CPU & GPU would be wasted on dashboard & video duties. Maybe they are going to take the ARM Shiva class processor that is already in Kinect & put it in the console? That way they can cost reduce Kinect & also use it as an alternative CPU for the dashboard. That would save the monster CPU & GPU for developers.

Tommy McClain

As an aside I think the hardware will be/should be tied to more features; e.g. Halo 3 videos. I have wanted that feature in every game. What if the new consoles could offer DVR features that allow every game to have recording features?

These kinds of ideas may influence the hardware as well and justify certain design choices that at first blush seem odd.
 
As an aside I think the hardware will be/should be tied to more features; e.g. Halo 3 videos. I have wanted that feature in every game. What if the new consoles could offer DVR features that allow every game to have recording features?

These kinds of ideas may influence the hardware as well and justify certain design choices that at first blush seem odd.
Real-time encoding would be really nice for live streaming à la TwitchTV. You could even monetize it through Live.
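As a very rough sketch of how a system-level game DVR could work (all names and numbers made up for illustration; the expensive part, the encoder, is assumed to exist elsewhere, which is exactly the bit you'd want dedicated hardware for):

Code:
/* Rough sketch only: keep the last ~30 seconds of already-encoded frames
   in a ring buffer, so "save/share that clip" is just a copy-out. */
#include <stdlib.h>
#include <string.h>

#define DVR_FRAMES (30 * 60)        /* 30 seconds at 60 fps */

typedef struct {
    void   *data[DVR_FRAMES];       /* encoded frame payloads */
    size_t  size[DVR_FRAMES];
    int     head;                   /* next slot to overwrite */
} dvr_ring;                         /* must start zero-initialized */

/* Called once per frame with the encoded bitstream chunk;
   the oldest frame is dropped so memory use stays bounded. */
static void dvr_push(dvr_ring *r, const void *frame, size_t n)
{
    free(r->data[r->head]);         /* free(NULL) is a no-op on the first lap */
    r->data[r->head] = malloc(n);
    if (r->data[r->head] == NULL) { r->size[r->head] = 0; return; }
    memcpy(r->data[r->head], frame, n);
    r->size[r->head] = n;
    r->head = (r->head + 1) % DVR_FRAMES;
}

The per-frame CPU work here is tiny; the real cost is the encode itself, so a fixed-function encoder (or a spare core) would make this kind of feature essentially free to games.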
 
I've been thinking that maybe there are other problems (cost, customization, space) with using 2 GPUs in the same package, but TDP/wattage is perhaps not impossible to solve, since today we have the Radeon 6950M (Barts, 960 stream processors at 580MHz, 48 TMUs, 32 ROPs) at 40nm with only a 50W TDP.

A next-gen console with 2 GPUs of this level (or even the Radeon 6990M, 1120 stream processors with a 75W TDP at 40nm) may be possible at 28nm once yields improve (at least at launch, until Sony/MS change the PCB and shrink the ICs a year later for lower TDP...).

(And if my memory doesn't fail me, RSX and Xenos at their release dissipated something around 100 watts TDP at 90nm.)
 
I don't think Xenos or RSX would have been anywhere close to 100W. Xenos had a much smaller cooler and much less airflow than the CPU (even with the modified GPU cooler), and the entire system was about 190W at the wall, so probably less than 160W was used inside the box once you account for PSU losses.

"Mobile" parts have the advantage of binning to skim off the most power efficient parts, which is something that console vendors can't do.

Dual GPU means a more complex package or motherboard, along with built in inefficiency for the lifetime of the console, so I don't see it ever happening.
 
I've been thinking that maybe there are other problems (cost, customization, space) with using 2 GPUs in the same package, but TDP/wattage is perhaps not impossible to solve, since today we have the Radeon 6950M (Barts, 960 stream processors at 580MHz, 48 TMUs, 32 ROPs) at 40nm with only a 50W TDP.

A next-gen console with 2 GPUs of this level (or even the Radeon 6990M, 1120 stream processors with a 75W TDP at 40nm) may be possible at 28nm once yields improve (at least at launch, until Sony/MS change the PCB and shrink the ICs a year later for lower TDP...).

(And if my memory doesn't fail me, RSX and Xenos at their release dissipated something around 100 watts TDP at 90nm.)

Why use two half-clocked 6990Ms when you can get a fully clocked 6870 at a lower price? I am sure that with some effective engineering you can bring the initial 151 watts down to 100.
 
I've been thinking that maybe there are other problems (cost, customization, space) with using 2 GPUs in the same package, but TDP/wattage is perhaps not impossible to solve, since today we have the Radeon 6950M (Barts, 960 stream processors at 580MHz, 48 TMUs, 32 ROPs) at 40nm with only a 50W TDP.

A next-gen console with 2 GPUs of this level (or even the Radeon 6990M, 1120 stream processors with a 75W TDP at 40nm) may be possible at 28nm once yields improve (at least at launch, until Sony/MS change the PCB and shrink the ICs a year later for lower TDP...).

(And if my memory doesn't fail me, RSX and Xenos at their release dissipated something around 100 watts TDP at 90nm.)

The cost of using 2 GPUs instead of one larger one is lower, and binning IS possible if the GPU is compatible with other, less demanding devices. :smile:
 
I don't think Xenos or RSX would have been anywhere close to 100W. Xenos had a much smaller cooler and much less airflow than the CPU (even with the modified GPU cooler), and the entire system was about 190W at the wall, so probably less than 160W was used inside the box once you account for PSU losses.

"Mobile" parts have the advantage of binning to skim off the most power efficient parts, which is something that console vendors can't do.

Dual GPU means a more complex package or motherboard, along with built in inefficiency for the lifetime of the console, so I don't see it ever happening.

There are pros and cons to both packaging approaches; I don't see either being so much better than the other that it would never be part of the equation. It depends on your end game (if you have one): if you see yourself not changing the packaging through the life of the console, then a single chip probably wins out. But if you foresee a change such as stacking or even an APU, then there are advantages a split package offers over a single chip.
 
Why use two half-clocked 6990Ms when you can get a fully clocked 6870 at a lower price? I am sure that with some effective engineering you can bring the initial 151 watts down to 100.

I see these dual-GPU rumors, if they have any truth at all, as most likely indicating that dev kits would be approximating a not-yet-available high-end AMD 7-series chip with something like a 6990. That would be one awfully high-end machine, though. I don't know; at times I think next-gen speculation threads seem way too full of downsizing sentiment that may not be reality, so maybe it's likely.

Edit: And now I'm a "Regular". This seems like a downgrade from Senior Member :p
 
Why use two half-clocked 6990Ms when you can get a fully clocked 6870 at a lower price? I am sure that with some effective engineering you can bring the initial 151 watts down to 100.

I was thinking more power (2 × 6990M ≈ "3.3 TFLOPS" vs. 6870 ≈ "2 TFLOPS") under the same wattage/TDP, and thinking that in theory the console manufacturer could come up with a more efficient arrangement than CrossFire on PCs, or than the Radeon HD 6990's 2-GPUs-on-one-PCB/card solution.
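For reference, those figures follow from the usual stream processors × 2 ops × clock estimate (assuming stock clocks of roughly 715MHz for the 6990M and 900MHz for the 6870):

6990M: 1120 × 2 × 0.715 GHz ≈ 1.6 TFLOPS each, so 2 × 6990M ≈ 3.2 TFLOPS
6870: 1120 × 2 × 0.900 GHz ≈ 2.0 TFLOPS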
 
I don't think Xenos or RSX would have been anywhere close to 100W. Xenos had a much smaller cooler and much less airflow than the CPU (even with the modified GPU cooler), and the entire system was about 190W at the wall, so probably less than 160W was used inside the box once you account for PSU losses.

"Mobile" parts have the advantage of binning to skim off the most power efficient parts, which is something that console vendors can't do.

Dual GPU means a more complex package or motherboard, along with built in inefficiency for the lifetime of the console, so I don't see it ever happening.


Perhaps that's why many Xbox 360s died in the first year with RRoDs (my launch PS3's RSX burned out).

Most likely you're right, but if they can customize 2 notebook-class GPUs (2× Radeon 6950M or even 2× Radeon 6990M), tailor them specifically for a console package (removing transistors dedicated to the PC world: PCI Express, UVD 3.0, etc.), get them working efficiently, and count on a 28nm process (it depends on yield rates, unfortunately... they'd need to be much, much better than 20% per wafer), that is perhaps the best way to achieve power with relatively low wattage/TDP.

Yes, 100 watts is too high for the early RSX and Xenos... something in the 80 to 100 watt range is the best guess, but here* we see the consumption/wattage of the GeForce 7800 GTX 512 and 7950 GT, which share some characteristics with RSX (RSX has more transistors, more cache, more RAM than the GTX 512 and 7950 GT) at similar clocks and reach somewhere around 80 to 100 watts, and maybe Xenos/C1 with its eDRAM (332/337 million transistors) draws even more than RSX.


* http://www.xbitlabs.com/articles/graphics/display/geforce7800gtx512_5.html

http://www.guru3d.com/article/geforce-7950-gt-512mb-shootout-review/5
 
How much power would Xenos have consumed if it were a 256MB add-in board? An X1800 was using something like 150 watts.
 
I was thinking more power (2 × 6990M ≈ "3.3 TFLOPS" vs. 6870 ≈ "2 TFLOPS") under the same wattage/TDP, and thinking that in theory the console manufacturer could come up with a more efficient arrangement than CrossFire on PCs, or than the Radeon HD 6990's 2-GPUs-on-one-PCB/card solution.

How come a "half clocked" 6870, which the 6990M essentially is, has more TFLOPs per Megahertz?
 