Predict: The Next Generation Console Tech

I think there have been some regulations concerning standby power usage, but other than that, there will certainly not be a regulation in the near term that would ban a game console drawing something around 200W, or even more, to be honest...

Well I believe California was looking to ban Plasma TVs

http://www.ecogeek.org/content/view/2430/

See?

So I don't think it is at all far-fetched that they'd take a dim view of consoles drawing too much power, or institute a tax on 'energy guzzling' appliances.
 
I know California recently slashed the amount of power HDTVs were allowed... I'm not sure what other regulations are in play. I remember that at the time the regulations were strict enough that there was concern plasmas would not be able to meet them.

http://www.wired.com/gadgetlab/2009/03/california-tv/

Edit: Squilliam beat me

I say we just ban California :)

The bad thing about these laws is that if California does it, it becomes a de facto national law: no retailer wants to distribute two TV standards, no car maker will make two models of the same car that meet different emissions standards, etc.
 
Kind of OT, but I recently read something interesting in a "serious" IT newspaper. The industry should be much more concerned about recycling and about how long it keeps its IT assets in service. The article pointed at a side effect for software providers: they should think about delivering improvements and new functionality without relying on new hardware for the same performance (i.e. optimization, as on consoles).
The ecological effect of such practices is far worse than power consumption alone: it takes a lot of energy to put together new chips and systems, and recycling is really lacking.
/OT
 

There is quite a bit of distance between California considering putting some restrictions on TVs and the EU imposing ~200W power draw limits on consoles. For starters, there aren't enough consoles out there to show up on legislators' radar for them to be targeted specifically. 200W is not that much for an electrical home appliance anyway; I'm sure consoles can hide among computers just fine on this point. I don't expect the next consoles to draw more than the current ones did at launch, though.
 
What would it bring (eDRAM)?
It's like the question of LS size: nAo, for example, stated multiple times that a bigger LS is not near the top of the list of things that would make SPUs better. SPUs are about streaming, and STI made sure they had "plenty" of bandwidth (~25GB/s is/was a lot in 2005).
Well if there's very fast system RAM then a fair bit of processing can be done piecemeal, but when you start jumping around larger datasets like textures, you'll hit a latency wall managing tiny chunks in the SPE's LS. Substantial eDRAM would give Cell full control over the framebuffer with very low latency, and would fit several render targets. Think deferred rendering with every sample to hand, to blend in whatever interesting ways you see fit! (okay, we won't get that much eDRAM! :( ) As part of a fully programmable renderer, that's going to be a significant advantage over having to work on pieces at a time in cache. If hardware texture compression/decompression were included in the SPEs, the graphics aspect would be reasonably strong, and my faith is that novel renderers would avoid the texture headaches (how about working per texture instead of per pixel, loading a texture and rendering every point that references it?) and produce a platform comparable in attained visual performance to a conventional GPU while maintaining the flexibility of an open processor.
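To put rough numbers on that latency wall, here's a back-of-envelope sketch in Python. The per-transfer latency figure is an illustrative assumption on my part (a few hundred nanoseconds for a DMA round trip), not a measured Cell number; the 25GB/s peak is the bandwidth quoted above.

```python
# Effective DMA bandwidth into an SPE local store as a function of chunk
# size, assuming each transfer pays a fixed latency before streaming at
# the peak rate. The latency is an illustrative assumption, not measured.

LATENCY_S = 500e-9   # assumed per-transfer latency
PEAK_BW = 25e9       # ~25 GB/s, the figure quoted above

for chunk in (128, 1024, 16 * 1024, 128 * 1024):   # bytes per transfer
    transfer_time = LATENCY_S + chunk / PEAK_BW
    print(f"{chunk:>7} B chunks -> {chunk / transfer_time / 1e9:5.2f} GB/s effective")
```

Under those assumptions, tiny random fetches achieve a small fraction of peak bandwidth (~0.25GB/s at 128B) while large streamed chunks approach it (~23GB/s at 128KB), which is exactly why jumping around a big texture set in LS-sized pieces hurts and streaming does not.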
 

How do you know we won't get that much eDRAM? The transistor budgets might be very high indeed for the next-generation consoles, even with relatively smaller die sizes.

There are several different ways of looking at it.

One option is that they have differently sub-specialised SPEs within the same processor. It is unlikely that all of the SPEs would need to be improved to implement texture extensions; it is quite possible that only 8 within a single ring bus would be needed for that functionality.

Another option is a separate GPU processor mated to eDRAM, which accesses main memory through the main CPU die.

We have to consider that at 22nm or 28nm they can fit 2B+ transistors within a single die. In that number they could probably fit, for instance, 32 to 48 improved SPEs, 4 PPC cores and a large quantity of eDRAM. This would be a significant increase over the ~300M transistors they used for the PS3 CPU. Releasing on 22nm would be four full process nodes beyond what the PS3 launched with.
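To get a feel for that budget, here's a back-of-envelope sketch in Python. The per-block transistor counts are loose assumptions on my part (roughly 21M per SPE and 50M per PPC core, in the ballpark of public Cell estimates, and ~1.2 transistors per eDRAM bit to cover array overhead), not confirmed figures.

```python
# Back-of-envelope check of the 2B+ transistor budget above. All per-block
# costs are loose assumptions, not confirmed figures: ~21M transistors per
# (improved) SPE, ~50M per PPC core, ~1.2 transistors per eDRAM bit.

BUDGET = 2.0e9
SPE_COST, PPC_COST = 21e6, 50e6
TRANSISTORS_PER_EDRAM_BIT = 1.2

for spes in (32, 48):
    logic = spes * SPE_COST + 4 * PPC_COST
    edram_bits = (BUDGET - logic) / TRANSISTORS_PER_EDRAM_BIT
    edram_mb = edram_bits / 8 / 2**20
    print(f"{spes} SPEs + 4 PPC cores: {logic / 1e9:.2f}B logic transistors, "
          f"room left for ~{edram_mb:.0f} MB of eDRAM")
```

Under those assumptions, even the 48-SPE configuration leaves room for something like 80MB of eDRAM, so "a large quantity" looks plausible.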
 


California's proposed standards would also end up preventing OLED TVs from entering the market, hence why the OLED Association fights against them.


The Energy Star program is another failure; it is riddled with scandal after scandal. Just google "energy star scandal". What is the point of making a refrigerator use 20% less power if it is twice as likely to break down? The overall energy cost of consumers having to go out and purchase new models more frequently causes far greater damage.
 
OLED currently consumes a lot of power, though over the long term it has great potential to be a low-power technology.


Here is an older article on the Sony XEL-1.

Power Consumption
While some issues were resolved through modifications to the drive, we found others that still need to be addressed - specifically, power consumption and viewing angle.

A look at power consumption showed that the self-emitting design exhibits about a 10W difference between all-white and all-black states. The average power consumption for all white is 28.4W, and for all black (no emissions) 18.3W. This is the total power consumption for the entire TV, including components other than panel drive; but even so, the TV engineer was surprised at how high it was, considering this is an 11-inch set. We also discovered a slight difference in white and green hues depending on viewing angle (Fig 2).

Sony said from the start that reducing power consumption would be the next development challenge, said Yoshito Shiraishi, general manager in the E Products & Business Development Department of the TV Business Group at Sony. One of the reasons why Toshiba Corp of Japan delayed release of its 30-inch class OLED TV originally slated for 2009 is also thought to have been high power consumption. According to president and chief executive officer (CEO) Katsuji Fujita of Toshiba Matsushita Display Technology Co Ltd, "OLEDs of 30 inches or more consume two to three times more power than LCDs. It will take a little more time to drop this to at least the level of LCDs."

http://techon.nikkeibp.co.jp/article/HONSHI/20080226/148048/

Keep in mind the Sony XEL-1 wasn't even Full HD; going Full HD would have increased the power consumption further.


And here is Sharp (obviously biased towards LCD but...) talking about the pros and cons of OLED.

Negative OLED Characteristics:

Dynamic display efficiency. While you can write a few lines of static text with great efficiency, video requires more power than an LCD. OLEDs are more efficient for small graphics or text because they only consume power in the area where they are addressed.

To date, the reliability has not come up to the levels of LCDs.

It is particularly difficult to drive the blue colors where the luminance efficiency is very low. As a consequence, the lifetime is reduced, and burn-in is also an issue.

http://www.sharpsme.com/Page.aspx/americas/en/b3fad008-bf63-4e66-ab68-7a52cae8fa1e


OLED is fine for cell phones, where most of the time the graphics being displayed are just static text. When it comes to video, where the pixels are constantly being refreshed, power consumption goes up.

OLED is going to improve and has a lot of headroom, but other technologies like plasma are also improving dramatically when it comes to energy consumption. The king of low power at large screen sizes is the Mitsubishi LaserVue.

Power Consumption
65-inch Panasonic plasma = 729 watts
75-inch Mitsubishi laser = 128 watts
65-inch Sharp LCD = 525 watts
 
Cisco-sponsored "state of broadband" research, in which the USA ranks 15th:
http://247wallst.com/2010/10/18/new-study-on-broadband-quality-ranks-us-15th-csco-t-vz-s/

If one assumes, as does Cisco Systems Inc. (NASDAQ: CSCO), that the availability and quality of a country's broadband penetration reflects that country's preparedness for the future, then the US lags behind 14 countries, in a tie with Canada and Latvia for 15th place. The leader, as it has been for all three years of the Cisco-sponsored study conducted by the Said Business School at Oxford University and the University of Oviedo, is South Korea.

Building out the US broadband network means that broadband suppliers like AT&T (NYSE: T), Verizon Communications Inc. (NYSE: VZ), Sprint Nextel Corp. (NYSE: S), and others will need to continue making large investments in laying fiber optic cable. These companies are also spending heavily to boost mobile broadband quality. According to the study, 10% of mobile broadband users enjoy quality equal to fixed-line users.

According to the study, the current measure of broadband quality comprises a download speed of 3.75 Mbps, an upload speed of 1 Mbps, and latency of 95 ms. These benchmarks allow today’s users to participate in social networking, low-definition video streaming, basic video chat, sharing of small files, and standard-definition IP television.

In order to be prepared for what the study calls “Internet applications of the future” (3-5 years), broadband requirements jump to a download speed of 11.25 Mbps, an upload speed of 5 Mbps, and latency of just 60 ms. These future applications include video networking, high-definition video streaming, high-quality video telephoning, sharing large files, and high-definition IP television. No US city meets these higher requirements according to the study. Fixed-line broadband in the US is available to 75% of households, compared with 100% in South Korea. In fact, South Korea’s broadband quality exceeds future requirements by about 3x in upload and download speeds and 22% in latency.

Another interesting conclusion from the study is that broadband consumption patterns appear to be diverging. A basic consumer of broadband may require download speeds of just 2 Mbps and consume about 20 gigabytes per month of data. What the study calls a “smart and connected home” may require speeds of 20 Mbps and consume some 500 gigabytes of data per month.

The study also looked at how broadband quality affected market share of the service providers. As might be expected, monopoly carriers that provided fiber connections increased market share by 13% in two years. Cable providers with high-quality broadband in competitive markets increased market share by 10%.

The disparity between consumption models, the high cost of laying fiber, and the market share improvements lead to the conclusion that tiered pricing is sure to become the norm, and that the battle over net neutrality is being decided in favor of the carriers. Those households that use only minimal broadband service will not want to subsidize those using large amounts of bandwidth.
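As a quick sanity check on the consumption figures quoted above, here's the arithmetic for how many hours per month of fully saturated downloading each profile would represent (decimal gigabytes assumed).

```python
# Hours per month of fully saturated downloading implied by the two
# consumption profiles quoted above (decimal GB assumed).

def saturated_hours(gb_per_month, mbps):
    bits = gb_per_month * 1e9 * 8
    return bits / (mbps * 1e6) / 3600

print(f"basic consumer: 20 GB at 2 Mbps   -> {saturated_hours(20, 2):.0f} h/month")
print(f"connected home: 500 GB at 20 Mbps -> {saturated_hours(500, 20):.0f} h/month")
```

That works out to roughly 22 and 56 saturated hours per month respectively, i.e. the "smart and connected home" profile amounts to nearly two hours a day of a fully loaded 20 Mbps link.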
 
I'm always amused by the fact that those who don't use as much don't want to subsidize the rest. They act like those who barely watch TV and pay for cable should somehow pay less than those who watch tons of it. There are bound to be pricing tiers just like with anything else, but it works much better keeping it like cable or phone service than trying to go to an electricity-type pricing model.

And I'm not surprised the US is behind, simply because of the sheer area the network would need to cover and the limited availability in rural areas.
 

Also, if what I hear is true, the government actually gave tax breaks to the phone companies in the late nineties for the purpose of getting fiber optics laid out in major areas to push high-speed internet services. Of course, that plan never came to be: the companies got their tax breaks and instead gave us dial-up service, then highly expensive broadband, and we have been behind ever since. It's a shame really. Where I moved with my family in late 1996, a suburb of the Dallas/Fort Worth, Texas area, fiber-optic cable was being put in while the neighborhood was still heavily under development. The funny thing was that residents couldn't even use it at first because there was no service for it. Hell, cable hadn't even been installed in most parts of the neighborhood, so most people who wanted expanded television got a satellite service (and we followed suit). Of course, people can now use the fiber (I'm sure of it, lol), but only years later, as demand for higher speeds grew.

As far as expandability goes, yes, it's a rural-area issue too. Suburbs cause enough trouble creating and implementing infrastructure, and rural areas compound the issue. Suburbs, IMHO, need to be denser like proper cities, but smaller and spread further out from other cities, if you get my drift. Major points in networks and industry could be more evenly spread out across states and the country, and residents would (theoretically) be closer to places of work, interest, and transportation with the cities made denser but smaller overall. The suburban dream of the middle-class American bourgeoisie pretty much created a huge host of problems that no one wants to take the blame for. Yes, it has its benefits, but at the same time I think the overall implications of suburbanism greatly increase energy needs, create sociological rifts, and destroy valuable farmland and natural areas that need to be preserved. /End suburbanism rant. Back to internet infrastructure: I think the future for rural America will be wireless-based, using wireless phone services to send data to modem boxes or cards like what is already available. Rural residents will just have to live with the costs.
 
Here's my prediction.

Since 2008 I've been predicting console releases in 2013.

And now, without even knowing how AMD's Cayman performs (November release),

I predict Cayman will be the right performance target for 2013 consoles.

Whatever you can do with it is what we will get (except perhaps with some more DX features, i.e. a new shader model, etc.).
 
Indeed, that could be a good basis. I find the new HD 6850 pretty impressive, with its TDP of 127 watts (EDIT: that was a Guru3D typo).
At 28nm, something in the same class as Cayman should do the trick easily. By the way, AMD impresses me more and more; their performance per watt and per mm2 keeps going up.
I believe there is truth in the 4-wide VLIW rumors; it may not be for Cayman (and definitely not for Barts). It will only get better.
 

Agreed.

Current guesses have Cayman at 3 billion transistors and around 400-450 mm2 @ 40nm.

Is it fair to say that in 2013 we'll be at 20nm or less?

That would put it at 150 mm2 tops? That's a very reasonable target chip size for a console, right?
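For what it's worth, here's the ideal-scaling arithmetic behind that question. Area shrinks with the square of the linear feature-size ratio in the ideal case; real shrinks do worse, so treat these as optimistic lower bounds.

```python
# Ideal die-area scaling for a Cayman-class chip from 40nm down to
# smaller nodes. Real-world shrinks never achieve the full square-law
# reduction, so these numbers are optimistic lower bounds.

DIE_MM2_40NM = (400, 450)   # the guessed Cayman range above

for node in (32, 28, 22, 20):
    factor = (40 / node) ** 2
    lo, hi = (mm2 / factor for mm2 in DIE_MM2_40NM)
    print(f"{node}nm: ~{lo:.0f}-{hi:.0f} mm^2")
```

So ~150 mm2 tops only works out at 20nm (roughly 100-113 mm2 under ideal scaling); at 28nm the same chip would still be around 200 mm2.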
 
I predict the 3rd-generation Xbox GPU will be well beyond Cayman even if it launches in late 2012, but especially if it launches in late 2013. AMD/ATI was working on R1000 as of several years ago (Cayman is R9xx), so I expect a mid-range if not upper-mid-range DX12 part with well beyond 3B transistors. We're going to see a larger leap than from Xbox to Xbox 360 next time. The resolution won't go beyond 1080p and framerates will be 30 to 60 fps again, but with an order of magnitude increase in geometry detail, textures, shaders, and features, and hopefully a lot more/better AA as well as many more post-processing effects.
 
Well, it's likely to be beyond that in features, but I think Kietech was speaking about "raw" throughput (in GFLOPS).
 
Is it fair to say that in 2013 we'll be at 20nm or less?

I don't know, really. Intel will be at 22nm for sure; the others???
 