Predict: The Next Generation Console Tech

A GPU capable of 1,000 single precision flops a clock is consuming ~2000 bytes per clock. At 500 MHz that'd be 1 terabyte per second of bandwidth it could eat through.
As frightening as that example is (since no commercially feasible solution exists to provide a 1TB/s off-chip data transfer rate in a consumer product), it's also a very unrealistic situation. This rate of data consumption would only occur if the SPs only ran one single instruction on each chunk of data, and I think shader programs have evolved quite a bit beyond that by now...!

I don't know how long shaders commonly are these days (and undoubtedly these shaders also reference additional data such as textures and so on), but I don't think they hit 1TB/s theoretical data rates either. :)
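
For what it's worth, here's that back-of-envelope arithmetic spelled out as a quick Python sketch; the ~2 bytes of operand traffic per FLOP is the assumption from the quoted post, not a measured figure for any real GPU:

```python
# Back-of-envelope for the bandwidth figure quoted above.
flops_per_clock = 1000        # single-precision FLOPs issued per clock
bytes_per_flop = 2            # assumed operand traffic per FLOP (rule of thumb)
clock_hz = 500e6              # 500 MHz

bytes_per_clock = flops_per_clock * bytes_per_flop   # ~2000 B/clock
bandwidth = bytes_per_clock * clock_hz               # bytes per second

print(f"{bandwidth / 1e12:.1f} TB/s")                # -> 1.0 TB/s
```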
 
If the next xbox is truly going to be the media hub of the living room, I can easily see it eating up lots of ram for non gaming tasks: Main display running a game, tasks like IE running in the background, perhaps audio video streams to multiple devices, background downloading/dvr'ing of TV/Movies, etc.
This is possible, but it's also a rather crazy way to go IMO. Why have the Xbox streaming background movies to handhelds instead of them streaming from the internet and the cloud services? Plus any streaming or downloading task needs only a minimal amount of RAM as a buffer. The impact on HDD use during a game should be far more significant.

Well, there's a whole other thread on this. Suffice to say that the OS requirements could be anything from 100MBs to 100GBs depending on what direction the console companies want to go. As such, we can't really guess how a 4GB or 8GB or 2GB console would perform or compare to another, especially without info on other storage like an SSD. You may find a browser could be run task-swapped rather than multitasked, but with an SSD it might be a difference of 0.2 seconds to restore rather than 0.1 seconds when it's resident in RAM.
 
If the next xbox is truly going to be the media hub of the living room, I can easily see it eating up lots of ram for non gaming tasks: Main display running a game, tasks like IE running in the background, perhaps audio video streams to multiple devices, background downloading/dvr'ing of TV/Movies, etc.

I think there's a possibility that this device may have multiple concurrent users for the various apps/tasks.

one could say streaming, dvr, downloads etc. hardly eat memory. but still, web browsing can get heavy quickly. if you end up with a wireless keyboard/mouse and a 1920x1080 screen you'll forget you're not on a PC and won't stick to limited single-page browsing.

if you try browsing the modern web on a 256MB computer (with XP or a light linux desktop, and the latest firefox or chromium) it gets hairy.

concurrent users? I don't think it's possible, because that's Windows Server territory. even though consumer windows has all the guts already (remote desktop and fast user switching are proof of this) you can only do it with expensive software licensing.

it's why I run linux: I can log in to another machine whenever I need to, vs 1000 euros of licenses, including special ones, just for one windows computer.
windows "terminal server" licenses (one for each remote user) don't even have their prices listed publicly.
 
Why? PS3 can render all sorts of functions and uses up <100 MB. Efficient apps needn't consume masses of RAM to run, and there isn't a real need for massive multitasking in a conventional box. So unless these consoles are to become app servers to the home, running everyone's web browsing and video playback and multiple games across TVs and phones and tablets, the need for OS RAM consumption isn't high.

I can see a larger footprint for enabling background webpages and immediate switching to a browser in game, but if that's not enabled (and with the ubiquity of handhelds that can browse simultaneously while gaming, one has to question the value of multitasking web browsing with game playing) then the OS needn't consume huge amounts.

I believe that will be the case.
 
This is possible, but it's also a rather crazy way to go IMO. Why have the Xbox streaming background movies to handhelds instead of them streaming from the internet and the cloud services?

that wastes your slowest bandwidth. (or, costs more to the cloud provider)

of course, you would do that streaming from the xbox if you had downloaded the movies to the xbox rather than streamed them in the first place.
it depends on your internet connection, further network or server congestion, and personal preference.

I like that a downloaded movie is instantly accessible, skips instantly between parts, and is of higher quality than what you can afford with streaming. you would likely be able to schedule the download from the tablet, have a quick look at it, or have the option of watching it on the console or another device.
 
I believe that will be the case.
So people are going to give up on smart handhelds and buy instead dumb handsets that just stream content from a home server? Is there any, even slight, suggestion of moves by someone towards this? MS have unveiled Windows 8 mobile and a high-spec tablet with the power to run apps natively. Who's suggesting we instead run apps remotely on a computer somewhere else in the house?

that wastes your slowest bandwidth. (or, costs more to the cloud provider)...you would likely be able to schedule the download from the tablet, have a quick look at it, or have the option of watching it on the console or another device.
The specifics of movies are one use. Taken in isolation, that'd place the XB3 against standalone media servers. I can see something of a market for that, but then the RAM requirements are minimal. Streaming media from the XB3 won't need lots of RAM. Streaming audio or anything else won't take much. The only reason to stream apps from a remote computer is if the local target device hasn't the capability to run the app itself. e.g. Game streaming would enable a small, low-performance device to play big-boy games. For everything else, if the target devices have enough performance then why not use that and run native content? Why have a web browser in XB3 and stream the pages to a handset or tablet? Why have a map application streaming maps? MS themselves have cross-device apps in their game plan, where you buy the app and install it on multiple devices.

For most everyday tasks, the performance requirements are light such that a mobile chipset can perform them. Without a need to run high-performance tasks (HD video editing etc.), there's no need to defer to an external box.

I'm just not seeing a scenario where MS's view of the next-gen xbox is a home mainframe doing all the work of each user. That model is the domain of cloud computing. Processing power is cheap enough that people will have devices capable of everyday computing.
 
T-Ram should be cheaper. That is, if it actually works. Which is not a given.

Forgot to respond. The reason I said cost was because of this.

http://translate.google.com/transla...&sa=X&ei=15QtUJrCEoeryQHK6YFY&ved=0CFAQ7gEwAw

Improved single-transistor SRAM technology: Ramtron subsidiary Enhanced Memory Systems has the world's largest-capacity improved SRAM. The improved SRAM (ESRAM) offers four times the capacity of memory currently on the market at a low price and with low power consumption: four times the capacity of traditional six-transistor SRAM, at roughly a quarter of the price of traditional SRAM. ESRAM has the same interface and comparable speed as conventional SRAM, so in a system design a single ESRAM can replace four expensive traditional SRAMs.
I don't know if there's anything to back that up though.

Keeping so many computation units fed is going to be extremely difficult. That's what the SPEs were trying to address with 256 KB, just to keep each one's vector unit + scalar unit busy. A GPU capable of 1,000 single precision flops a clock is consuming ~2000 bytes per clock. At 500 MHz that'd be 1 terabyte per second of bandwidth it could eat through. Each 2 MB of local store would last around 1,000 clock cycles, so 10 MB of eDRAM would net you roughly 5,000 cycles of work, at which point you'd need new data. I don't know if that'd really be beneficial or not. It wouldn't be like Cell, with enough data stored locally for the SPE to work full tilt while more data is fetched, unless you had an enormous amount of local store.
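
A quick sanity check of that local-store lifetime, using the same assumed consumption rate (a sketch only, nothing measured):

```python
# Sanity check of the local-store arithmetic above.
bytes_per_clock = 1000 * 2              # ~2000 B consumed per clock (assumed)
local_store = 2 * 1024 * 1024           # 2 MB of local store
edram = 10 * 1024 * 1024                # 10 MB of eDRAM

print(local_store // bytes_per_clock)   # ~1048 clocks of work per 2 MB
print(edram // bytes_per_clock)         # ~5242 clocks for 10 MB (~10 microseconds at 500 MHz)
```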

GPUs are so wide that the memory issues will always be limited by main BW. It does make one wonder, though, if more could be achieved with fewer computation units and fast local store to feed them?

Wow.

I don't know if this would even be worthwhile, but could it be feasible for a console version of a GPU to modify the L2 cache in the GPU with a memory tech that would perform better? Assuming the usage of a more "traditional" GPU design. I say feasible in that, while I saw your comments on IHVs, could it be that they feel it's not necessary because of the "run of the mill" usage by the consumer, and that in a console setting there would be more of a benefit due to the dedicated gaming aspect? Or would that be a waste of time and money? Though I'm assuming the mass production would at least help some for the latter.
 
I'm just not seeing a scenario where MS's view of the next-gen xbox is a home mainframe doing all the work of each user. That model is the domain of cloud computing. Processing power is cheap enough that people will have devices capable of everyday computing.

I'm thinking MS (similar to what Pachter said) might try to get in with a cable company(ies) at least in the US and sell the 720 as the cable box replacement.

Currently, I use Uverse at home. We have two boxes, the main one with a 500GB (I think) hard drive that can record 4 HD streams, and a dumb wireless stub on another TV that can watch TV and recorded programs from the main box. I can control them with my iPad as well.

I could easily see MS wanting to do this with the 720 as the main box and the current 360 (or a smaller revision) as the extender. Mark Rein speculated on something like this. I would love it, if I had just one box under my TV and no crappy cable box.

Now how does this go back to 3 GB of ram for the OS. I'm not sure, but if you give them the room they'll fill it.
 
So people are going to give up on smart handhelds and buy instead dumb handsets that just stream content from a home server? Is there any, even slight, suggestion of moves by someone towards this? MS have unveiled Windows 8 mobile and a high-spec tablet with the power to run apps natively. Who's suggesting we instead run apps remotely on a computer somewhere else in the house?

The specifics of movies are one use. Taken in isolation, that'd place the XB3 against standalone media servers. I can see something of a market for that, but then the RAM requirements are minimal. Streaming media from the XB3 won't need lots of RAM. Streaming audio or anything else won't take much. The only reason to stream apps from a remote computer is if the local target device hasn't the capability to run the app itself. e.g. Game streaming would enable a small, low-performance device to play big-boy games. For everything else, if the target devices have enough performance then why not use that and run native content? Why have a web browser in XB3 and stream the pages to a handset or tablet? Why have a map application streaming maps? MS themselves have cross-device apps in their game plan, where you buy the app and install it on multiple devices.

For most everyday tasks, the performance requirements are light such that a mobile chipset can perform them. Without a need to run high-performance tasks (HD video editing etc.), there's no need to defer to an external box.

I'm just not seeing a scenario where MS's view of the next-gen xbox is a home mainframe doing all the work of each user. That model is the domain of cloud computing. Processing power is cheap enough that people will have devices capable of everyday computing.

And that's before you add in things like smart glasses & watches, where they would want to have less hardware in the product yet still give you nice visuals.
 
It's not just a dedicated chunk of memory, but also a dedicated APU to go along with whatever programs the secondary user is using while the primary user games. Pretty neat if you ask me. And this is just what's floating around in the patent we linked to a while back.
 
Now if you have some free time soon I think a lot of us would be curious what you and other developers thought of, say a 50MB chunk of embedded memory on-die that is full write/read with ~ 1TB/s of bandwidth and low latency.

For graphics, it's obviously the frame buffer. For other things, latency determines its usability.

One of the things Cell is genuinely good at is physics, because not only do you have a lot of throughput, you can do all kinds of interesting acceleration structures with the 6-cycle latency local pool. GPUs aren't nearly as good at this, because the local pools are small, and because the latency of even the simplest operations kills all kinds of interesting acceleration structures -- they are really only good at brute force. Having access to a few megabytes of low-ish latency (can't get anywhere near Cell's 6 cycles, but something like guaranteed <40 cycles and mostly running out of L1 would still be pretty awesome), massive-bandwidth memory would make it a physics monster, with the limits much higher than Cell's.

As far as other use is concerned, I can't really say. There are a lot of things that would benefit, but the thing is, framebuffer and physics would benefit so much *more*, that it would simply make sense to reserve it all to them.


I've wondered a lot about this rumour of the Nextbox having 6-8GB of DDR3/4. My biggest concern would be load times. I mean, to load 6-8GB worth of data from an optical drive, whether Blu-ray or DVD, into a relatively slow pool of main RAM would take forever.

How would you get around that?

One of the best ways to use RAM in the case of slow disk access speeds is to just cache more. The more you can cache, the more BW you free up for real use, and the more fidelity you get out of your limited pipe.
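
As a rough sketch of the caching idea (all names here are illustrative, not any real console API), a read-through cache in front of a slow disc read might look something like this:

```python
from collections import OrderedDict

class DiscCache:
    """Minimal LRU read-through cache sitting in front of a slow optical/HDD read."""
    def __init__(self, capacity_bytes, read_block):
        self.capacity = capacity_bytes
        self.read_block = read_block       # slow backing read (hypothetical callback)
        self.used = 0
        self.blocks = OrderedDict()        # block_id -> bytes, kept in LRU order

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)       # hit: cheap RAM access
            return self.blocks[block_id]
        data = self.read_block(block_id)            # miss: pay the slow read once
        self.blocks[block_id] = data
        self.used += len(data)
        while self.used > self.capacity:            # evict least-recently-used blocks
            _, old = self.blocks.popitem(last=False)
            self.used -= len(old)
        return data
```

The more RAM you hand to something like this, the fewer times a given asset has to come off the disc at all.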

Also, I got shouted down last time, but I still think that 1Gbps+ connections will be commonplace for a huge chunk of customers by the latter half of the generation. Perhaps people will be more receptive now that Google has started putting down fibre in the USA? Anyone who has a Gbps-class connection is better off using the RAM as the only local storage, and getting game content over the wire from the closest CDN.

Forgot to respond. The reason I said cost was because of this.
<snip 4 times better than sram>
I don't know if there's anything to back that up though.
It's legit. But it's comparing against traditional SRAM -- and as I understand it, T-Ram is claiming a little less than a 5X gain against SRAM. SRAM is known to be fat and expensive.

However, you should remember the caveat that as far as we know, T-Ram might not even exist. (Now, there's a lot of money behind it, so I expect it to pop up eventually, but genuinely new semiconductor tech has a habit of seeing decades of delays.)

I don't know if this would even be worthwhile, but could it be feasible for a console version of a GPU to modify the L2 cache in the GPU with a memory tech that would perform better?
This would be very hard, if not impossible. One thing to keep in mind is that as far as switching individual cells goes, SRAM is fastest by far. Seriously, compared to things like eDRAM or 1T-SRAM it can be hundreds of times faster. However, the alternatives can advertise better speeds because SRAM is so fat that when you build a pool of any significant size, the access latency will be completely dominated by the delay of getting the signal there. So even if 1T-SRAM takes a lot longer to switch, because it's on average twice as close, it can win that back.

But that only holds when you've got tens of megabytes of the stuff. For caches with only a few megabytes of room, the traditional approach is the fastest known approach. (That's why it's used.)
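
To illustrate the trade-off with purely made-up numbers (none of these are real figures for any memory technology):

```python
# Total access latency is roughly cell switching time plus wire delay across the array.
def access_latency_ps(cell_switch_ps, wire_delay_ps):
    return cell_switch_ps + wire_delay_ps

big_6t_pool = access_latency_ps(cell_switch_ps=50,  wire_delay_ps=800)   # fast cells, fat array, long wires
big_1t_pool = access_latency_ps(cell_switch_ps=300, wire_delay_ps=400)   # slower cells, roughly half the distance

print(big_6t_pool, big_1t_pool)   # 850 700 -> the denser array wins once wire delay dominates
```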

This rate of data consumption would only occur if the SPs only ran one single instruction on each chunk of data,

Well, a single instruction per chunk of data would actually be ~8kB per clock. 2 FLOPs = 1 FMA = 3*4B input, 4B output. 2B/FLOP is the traditional practical value for normal shaders when doing direct rendering, and it assumes a lot of operand reuse. I don't have enough experience with it to say how it applies to indirect rendering.
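
Spelling that out with the same rule-of-thumb numbers (assumed, not measured):

```python
flops_per_clock = 1000
fmas_per_clock = flops_per_clock // 2       # one FMA counts as 2 FLOPs
bytes_per_fma = 3 * 4 + 4                   # three 4-byte inputs + one 4-byte result

no_reuse = fmas_per_clock * bytes_per_fma   # ~8000 B/clock if no operand is ever reused
practical = flops_per_clock * 2             # ~2 B/FLOP rule of thumb -> ~2000 B/clock

print(no_reuse, practical)                  # 8000 2000
```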
 
I'm thinking MS (similar to what Pachter said) might try to get in with a cable company(ies) at least in the US and sell the 720 as the cable box replacement.

Currently, I use Uverse at home. We have two boxes, the main one with a 500GB (I think) hard drive that can record 4 HD streams, and a dumb wireless stub on another TV that can watch TV and recorded programs from the main box. I can control them with my iPad as well.

I could easily see MS wanting to do this with the 720 as the main box and the current 360 (or a smaller revision) as the extender. Mark Rein speculated on something like this. I would love it, if I had just one box under my TV and no crappy cable box.

Now how does this go back to 3 GB of ram for the OS. I'm not sure, but if you give them the room they'll fill it.

Uverse runs the MS IPTV stack, and the existing 360 can act as an STB for it.

There is a lot of ongoing work on the replacement of CableCARD, and the direction the government and the electronics companies are pushing is a gateway service model, in which case the 720 would handle everything, including streaming content to all other TVs in the house.
 
Wow, I think MS would need to work pretty hard to "make" the nextbox OS use 3GB.
Good "luck" to them

I don't think the OS uses 3GB at all; I'd guess it uses something like 512MB~1GB but reserves the rest for apps.

8GB = 2~3GB (OS and apps) + 5~6GB (games)
 
It is better to have RAM to spare than to lack RAM, especially as they already allegedly have more than enough to run any game, and even more than their competition is rumoured to have in spite of the quantity supposedly saved.
 
I don't think the OS uses 3GB at all; I'd guess it uses something like 512MB~1GB but reserves the rest for apps.

8GB = 2~3GB (OS and apps) + 5~6GB (games)
Oh, actually I mean OS+apps... Win8 on PC uses less than 300MB of RAM, unless the nextbox uses a non-updated Vista lol
It's very hard, and actually there's no way and no point to do that. Even if the nextbox can play a game + record the game + stream the game to jtv or ust etc. at the same time, it still can't even reach 2GB for the non-gaming part.
 
With Kinect, authentication will be done with your face. I don't see how people will be able to crack that easily, and I doubt holding up a picture will work on a 3D camera.
 