Predict: The Next Generation Console Tech

So approximately when would the tentative specifications for DX12 be complete, and when would they likely be finalised? Are we going to see a new DX iteration in 2012 to go with the new version of Windows? I'm just thinking backwards to when the hardware would have started to be designed, and then forwards to when the hardware for consoles would have to be finalised.

The funny thing is, wouldn't using an x86 system mean that they wouldn't actually have to release official development kits that far before the actual release of a console, meaning people working on futuristic 'PC' games would have no idea that the game was actually being designed for a next-generation console as well?
 
4x the resolution requires 4x the processing power, unless you're using something like an O(n^2) algorithm.

You're kinda talking twaddle now. A 1080p frame is ~2 million pixels. At 24-bit RGB, that's 6 megabytes a frame. 60 FPS would be 360 MB/s of bandwidth consumed for working through every pixel. A comparison between two 1080p60 framebuffers would be 720 MB/s. PS3 has >45 GB/s. Next gen will have more. A couple of GB/s from the total RAM pool is no great loss, and certainly nothing needing a whole extra memory system as you are suggesting. You are oversimplifying the process.
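To make the arithmetic explicit, here's a quick back-of-the-envelope sketch (Python; the 2-million-pixel and 24-bit figures are the same rough ones quoted above):

```python
# Rough framebuffer bandwidth figures for the comparison above.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3              # 24-bit RGB
FPS = 60

pixels = WIDTH * HEIGHT                  # ~2.07 million pixels
frame_bytes = pixels * BYTES_PER_PIXEL   # ~6.2 MB per frame
read_bw = frame_bytes * FPS              # ~373 MB/s to touch every pixel at 60 Hz
compare_bw = 2 * read_bw                 # ~746 MB/s to read two 1080p60 buffers

print(f"frame size:      {frame_bytes / 1e6:.1f} MB")
print(f"one buffer @60:  {read_bw / 1e6:.0f} MB/s")
print(f"two buffers @60: {compare_bw / 1e6:.0f} MB/s")
```

Either way it's well under 1 GB/s against a memory pool measured in tens of GB/s, which is the point.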

Multitasking OSes have for decades been able to run concurrent, independent tasks on the same RAM and processors by shifting tasks on the fly. Video processing is no different. The current Kinect PC demos are doing exactly that, using the same processor to evaluate the Kinect data and then run whatever tasks are happening concurrently. Gesture recognition, face recognition and camera processing are eventually going to be modularized functions that will spin off to CE equipment. I believe it will be easier to program a console if these processes are separated, and it will result in cost savings for the main CPU hardware if the processes are not part of the general CPU pool.

Again, this isn't at all accurate. You'd either have a posterized image losing all the information that denotes objects, or you'd have to dither it, making it nigh impossible to do optical processing on. And it'd still look like crap. If you're using the video feed in game, you'll need the full colour image. Good optical recognition wants as little noise and as much information as possible. JPEG compressing a video stream is bad enough, let alone throwing away most of the image information! And it's unnecessary. Future ports will be able to cope with higher camera resolutions. Maybe they'll be limited to 720p. Regardless, that's all covered by the general IO choices of the console and doesn't need any special attention, unless you feel a brand new port needs to be designed specifically for high-speed cameras because the likes of USB3 aren't up to it. If you look at Kinect's outputs, they provide depth (the Z axis) as a number, and using mostly that Z input the Xbox creates a posterized image that is converted to a wireframe model and compared to templates.
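As a toy illustration of how much a posterized feed throws away (just a sketch with a synthetic gradient standing in for a camera frame, assuming NumPy is available; it's not how Kinect actually processes anything):

```python
# Toy demo: posterizing an image collapses the intensity information that
# optical recognition relies on. A synthetic gradient stands in for a real
# camera frame; NumPy is assumed to be available.
import numpy as np

frame = np.tile(np.arange(256, dtype=np.uint8), (256, 1))  # smooth 8-bit gradient

levels = 4                        # posterize down to 4 intensity levels
step = 256 // levels
posterized = (frame // step) * step

print("distinct values before:", len(np.unique(frame)))       # 256
print("distinct values after: ", len(np.unique(posterized)))  # 4
# Smooth gradients (shading and edge cues) become flat bands, so most of what
# a recognition pass would key on is simply gone.
```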

No different to every other system out there. We don't break PCs up into a processing component with CPU and RAM for audio, another for video, another for physics, another for browser, etc. We take one pool of resources and use it dynamically. Yes they do; video in PCs has its own memory for a very good reason. You are taking a PC that's designed as a general-purpose machine and is hardware- and software-upgradeable. A game console is not hardware-upgradeable. Again, you're looking at a few GB/s maximum. In systems with likely well in excess of 50 GB/s, that's not a problem that needs special attention.

An 8x increase in needs will match a natural 8x increase in performance that comes with the next generation of console hardware. The impact will be no more than the current requirements are on this gen. The XB360 wasn't designed with a memory and processing subsystem for a future 3D camera. Instead, Kinect works by using a fraction of the system's available resource pool, with no song-and-dance complications about it crippling the running of other applications because it's getting in the way of their memory accesses. See above; the process used has issues that will limit its uses, both from an overhead point of view and in accuracy.

Only if the features are specialist. Everything you say can fit into the possible processing choices we've outlined before in this discussion. Only if you are doing something extraordinary that conventional processors can't cope with (in the same way a 2005 tri-core PPC and GPU can't cope with 2010 cutting-edge 3D vision tracking) would you need to consider extraordinary CPU solutions, but that would be cost-prohibitive, meaning you'd drop that feature and go for a lesser one that works within budget-constrained hardware choices. Wii didn't include a gyro when it launched, even though that would have provided the full features Nintendo wanted, because the cost was too high. They reduced features to match a price target. Next-gen consoles will have a CPU and GPU made to a price, and the interface options will be built around those, knowing that they'll consume a small fraction of resources.

OK, I'll take another step back. Assuming you are correct that budget constraints are a major issue, how would we resolve this?

We want a PC-like design with an inexpensive, basic multi-purpose core that is expandable, but we have a game console that, because it's not expandable, must be overdesigned to last 10 years. It must include features that will evolve during the life of the console but not cost an arm and a leg like the PS3 did at release.

Again, we start with the features we think will be available to the CE industry in the next 5 years or so. We then decide how to add these features, internally or via an external high-speed interface. Internal drives up the cost of the basic console, so let's start with external.

By this time I expect you have jumped ahead to something like a Gigabit port and a recently published patent from Sony for uses of a 4-element Cell. Will the Gigabit port be fast enough for the features we think might be needed? If so, then we don't have to design them into the console. Can we reduce the cost of the external hardware by using it with other CE products, including it in our products so these features are usable by all our products, including our game console? Can we reduce costs by sharing OS and code?
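For a rough sense of whether a Gigabit port is enough, here's a quick sketch; the camera formats (640×480 at 30 Hz, 16-bit depth plus 24-bit colour) are assumptions for a Kinect-class feed, not published figures:

```python
# Rough check: can a Gigabit link carry a Kinect-class camera feed?
# Assumed formats: 640x480 @ 30 Hz, 16-bit depth plus 24-bit RGB.
W, H, FPS = 640, 480, 30
depth_bw = W * H * 2 * FPS       # ~18.4 MB/s
rgb_bw   = W * H * 3 * FPS       # ~27.6 MB/s
total    = depth_bw + rgb_bw     # ~46 MB/s

gigabit = 1_000_000_000 / 8      # 125 MB/s theoretical payload ceiling
print(f"camera feed: {total / 1e6:.0f} MB/s of {gigabit / 1e6:.0f} MB/s available")
```

Even a 1280×720 colour stream at 30 Hz is only around 83 MB/s, so raw link speed isn't the limiting factor at these sorts of resolutions, though the headroom shrinks quickly.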

Example: include voice and gesture recognition in the TV and it's usable by the PS4 if the TV and PS4 are connected by a network port... gee, HDMI 1.4 has provisions for that.

With a little forethought, the LG TV remotes, which use an air-mouse subset of the Move, could have been used with the PS3 in some games. How long until TVs have voice and gesture recognition, or remotes have an LED bulb on the front for full "Move" support?

Projecting 10 years or so, TVs will include the processing power of a PS4, and a PS5 may be just the software stack you purchase for your TV.

The digital ecosystem will have all CE products, including game consoles, talking to each other and sharing resources. A TV home screen will control the home. You will use tablets to interface with the ecosystem, and tablets and the home TV will have exactly the same features and similar menu structures, running something like Android as a super-UI. With this in mind we can start to plan the features in a PS4, PSP2 and tablets.

I brought this up before in discussions, and in the HTML5 thread on VP8 becoming the codec for the HTML5 <video> tag: the first chipset to include hardware support for VP8 is in a 10-inch tablet and a TV with Android 2.3, made in China and to be released in March. Cost wasn't mentioned, but in 9 months or so it will be cheap, and some will probably carry a Sony name.
 
Example: include voice and gesture recognition in the TV and it's usable by the PS4 if the TV and PS4 are connected by a network port... gee, HDMI 1.4 has provisions for that.
You can't design a console with intentions to use features in a TV a user might not have. If there were talk of a standard in camera interfaces and cross-device communication, I'd accept that as a prospect, but AFAIK there isn't. So you'd have to provide any devices as peripherals, and if they are essential to the experience, include them in the base cost. Thus you aren't saving money. If neXBox comes with stereoscopic TOF cameras and an interface designed around them, the hardware to drive that will have to be part of the console package, with maybe an outside chance of a camera-free box if you use an MS KinecTV-enabled display with built-in cameras, which probably won't exist. For cost purposes you'd just have the CPU and GPU drive the camera interface of the console, and not be able to target any CE expansions, as that's not your base standard for the platform.

Projecting 10 years or so, TVs will include the processing power of a PS4, and game consoles may be just the software stack you purchase for your TV.
That's a possibility for the future of the market, just as the dumb terminal to an online game service like Gaikai and OnLive is. However, that's not really the topic of this thread. This thread is mostly about the console that will in all likelihood be released in the next 2-3 years: what CPU and GPU will make up those conventional boxes? If the consoles go a different route, with Sony or MS abandoning the traditional console and going streaming net-games only, say, then that box doesn't need to be discussed here because the hardware requirements aren't of particular consequence. I mean, I suppose you have a point that if the future is CE devices or online then the specs we'd be considering would be well different. I don't see anything pointing to a completely brave new world next gen though. There'll be some online players like OnLive, and some CE devices, but there's still room for another round of closed-hardware consoles, which Sony, MS and Nintendo will almost certainly release.
 
So approximately when would the tentative specifications for DX12 be complete, and when would they likely be finalised? Are we going to see a new DX iteration in 2012 to go with the new version of Windows? I'm just thinking backwards to when the hardware would have started to be designed, and then forwards to when the hardware for consoles would have to be finalised.

The funny thing is, wouldn't using an x86 system mean that they wouldn't actually have to release official development kits that far before the actual release of a console, meaning people working on futuristic 'PC' games would have no idea that the game was actually being designed for a next-generation console as well?
I've read a lot of rumours about Windows 8; it seems that the OS will rely extensively on virtualisation. With it being ported to both ARM and x86, I wonder if it (the hypervisor) could also be the basis for the next Xbox. Basically that could mean that coding "close to the metal" could become a thing of the past; it could also imply that PC and Xbox games could be basically the "same", with the console rendition optimised according to the system resources (things like texture sizes, various effects, AA, etc.).

I wonder if games could bypass completely any kind of "fixed graphics pipeline": something like a game written in an MS equivalent to Intel TBB, sending routines in HLSL to the GPU whenever it wants (for compute or graphics purposes).

For DirectX 12, if MS uses a single chip for their next system, could that push them to require coherent memory for GPUs? Intel is pretty close already with Sandy Bridge, as the L3 is "somehow" (I should read a serious review again, say the Tech Report one...) shared between the CPU and the GPU.
Could they also remove or increase the limit on kernel size? Could they ask for more efficient handling of concurrent kernels?
 
Question:

If you had only one choice what would you prefer the extra processing power was spent on in the next gen consoles?

1. Higher graphics fidelity (more effects, polygons and schwizz bang wow fx) @ 1080p

or

2. Same level of graphics fidelity but higher resolution (4k screens etc) @ 2160p (ish).

I know what I would prefer...

3. A pony
 
You can't design a console with intentions to use features in a TV a user might not have. If there were talk of a standard in camera interfaces and cross-device communication, I'd accept that as a prospect, but AFAIK there isn't. So you'd have to provide any devices as peripherals, and if they are essential to the experience, include them in the base cost. Thus you aren't saving money. If neXBox comes with stereoscopic TOF cameras and an interface designed around them, the hardware to drive that will have to be part of the console package, with maybe an outside chance of a camera-free box if you use an MS KinecTV-enabled display with built-in cameras, which probably won't exist. For cost purposes you'd just have the CPU and GPU drive the camera interface of the console, and not be able to target any CE expansions, as that's not your base standard for the platform.

That's a possibility for the future of the market, just as the dumb terminal to an online game service like Gaikai and OnLive is. However, that's not really the topic of this thread. This thread is mostly about the console that will in all likelihood be released in the next 2-3 years: what CPU and GPU will make up those conventional boxes? If the consoles go a different route, with Sony or MS abandoning the traditional console and going streaming net-games only, say, then that box doesn't need to be discussed here because the hardware requirements aren't of particular consequence. I mean, I suppose you have a point that if the future is CE devices or online then the specs we'd be considering would be well different. I don't see anything pointing to a completely brave new world next gen though. There'll be some online players like OnLive, and some CE devices, but there's still room for another round of closed-hardware consoles, which Sony, MS and Nintendo will almost certainly release.

Given that gesture control is going to be modularized and a chipset developed to support that feature (just as Broadcom developed a chipset to support the LG "Move"-like remote control, which uses the same Bluetooth gyro chip as the PS3 Move), accessories like Move or gesture control can be add-ons. If it's available for CE products, it's cost-effective and can be included in TVs or as a standalone. It's an incentive to purchase a Sony TV, as Sony would have a digital ecosystem that allows Sony TVs with that feature to work with the PS4, and you wouldn't have to buy the standalone accessory.

So I've in effect eliminated my argument, and you don't have to worry about supporting gesture control in hardware in the PS4, just make provisions to support it through the LAN port. It was just an example of the need for thinking outside the box, which you have now accepted in principle.

I'm having a hard time thinking of other CE feature examples that can't be supported with just software.

I expect some provision for hardware support for video overlay is going to be a part of future GPUs.

A faster-than-Gigabit LAN port might be a new standard. Direct-to-memory writes from that port might be a hardware change to support distributed processing.
 
It's interesting to see how the bandwidth problem could be solved, there are 3 options that I know of:

1. eDRAM
2. XDR2
3. Stacking DDR on chip (Ivy Bridge)
 
XDR2 reduces power, but it's not a huge jump in bandwidth. Chip stacking without TSV and area IO (which Intel are probably not doing) is not a huge jump in bandwidth either.

The only technologies ready in the short term are eDRAM and silicon interposers (hugely expensive... basically you use an old process node to make a truly giant substrate, say ~100 cm², which you flip-chip bond all the ICs to).
 
I've read a lot of rumours about Windows 8; it seems that the OS will rely extensively on virtualisation. With it being ported to both ARM and x86, I wonder if it (the hypervisor) could also be the basis for the next Xbox.

I would definitely consider the ARM kernel of the Windows 8 code as the basis for the main Xbox Next OS. Since they are scratching a massive itch on ARM's back, I'm sure ARM is doing likewise for Microsoft, and whoever provides the reference platform for Windows 8 on ARM will probably also give Microsoft practically royalty-free access to putting an ARM architecture inside the Xbox Next. It also implies that Windows Phone 7/8 games and applications and their next-generation set-top-box applications would all work as well.

I can sort of see the product lineup now:

Windows Phone 8
Microsoft set top box ~$100-200
Microsoft nextbox Arcade $299
Microsoft nextbox Elite $399
Microsoft Windows Arm 8
Microsoft Windows 8

They could share applications between all arms of their newfound ARM empire :LOL:. Also, using ARM would allow them to wake with Kinect and do background downloading à la Wii, and also things like content on demand, using a reserved quantity of space to ensure that the starting portions of, say, the top 20 GOD titles are already pre-downloaded and ready for use. I also wonder if having an entirely separate CPU device running the OS could protect it from hacking even better, since you could institute a one-way code policy so the ARM chip tells the other chip what to do and not the other way around.

I'm also wondering whether they will try to lower fabbing costs by using salvage parts for the Arcade. It makes sense if the Arcade is unable to do many of the multitasking features that require an HDD; it would only cost a fraction of compute resources but result in an increase in yields if they disable one SIMD and one CPU core, given a likely abundance of both.
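To put a rough number on that salvage idea, here's a toy redundancy model; the core/SIMD counts and defect probabilities are illustrative assumptions, not real figures:

```python
# Toy yield model for the salvage SKU: if each CPU core and each GPU SIMD has
# an independent chance of being defective, how many more dies become usable
# when one core and one SIMD may be disabled? All numbers are illustrative.
CORES, SIMDS = 6, 12
P_CORE_BAD, P_SIMD_BAD = 0.05, 0.05

def at_most_one_bad(n, p_bad):
    """Probability that no more than one of n identical units is defective."""
    p_ok = 1.0 - p_bad
    return p_ok ** n + n * p_bad * p_ok ** (n - 1)

full_yield    = (1 - P_CORE_BAD) ** CORES * (1 - P_SIMD_BAD) ** SIMDS
salvage_yield = at_most_one_bad(CORES, P_CORE_BAD) * at_most_one_bad(SIMDS, P_SIMD_BAD)

print(f"dies with everything working:         {full_yield:.1%}")
print(f"usable with 1 core + 1 SIMD disabled: {salvage_yield:.1%}")
```

With those made-up numbers roughly 40% of dies come out fully enabled but about 85% are usable as a salvage part, which is the kind of gap that makes a cut-down SKU attractive.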

For DirectX 12, if MS uses a single chip for their next system, could that push them to require coherent memory for GPUs? Intel is pretty close already with Sandy Bridge, as the L3 is "somehow" (I should read a serious review again, say the Tech Report one...) shared between the CPU and the GPU.
Could they also remove or increase the limit on kernel size? Could they ask for more efficient handling of concurrent kernels?

Well, I'm pretty sure that if they do go CGPU again then they would have to answer all of those questions. Otherwise I can't really say, as I'm not familiar with the issues around that problem.
 
For DirectX 12, if MS uses a single chip for their next system, could that push them to require coherent memory for GPUs? Intel is pretty close already with Sandy Bridge, as the L3 is "somehow" (I should read a serious review again, say the Tech Report one...) shared between the CPU and the GPU.

It would almost make complete sense for a fat L3/eDRAM, eh? :p Go a little lighter on # of CPU cores (not more than 8). Have a fat cache. Perhaps make the GPU off-die so they can have a larger transistor budget. Instead of the 360's current configuration of CPU/GPU + eDRAM, it'd be CPU/eDRAM + GPU, and this time, at the start of the generation. Could make the wire tracing to main memory pretty hectic, but if Sandy Bridge is already doing something similar...

hm...
 
IIRC, it was mentioned that the eDRAM process does not play nicely with high-volume logic processes.

Ah... I see. I was sort of wondering about that, particularly IBM's process. I'll have to read up more about that and see what I can find. :)
 
IIRC, it was mentioned that the eDRAM process does not play nicely with high-volume logic processes.

It also requires SOI rather than bulk processes, which would add to the expense. It would also mean there would be no convenient half-nodes to shrink to.

Maybe the current Xbox 360 CGPU + eDRAM, or perhaps some other form of memory; surely AMD has Z-RAM or whatever working by now?
 
So, I open this thread for the first time in weeks if not months for no specific reason, and what do I see? :oops: The gods are making fun of me ;)
Not that I think Z-RAM has delivered on its promises (e.g. the AMD paper with the disappointing performance numbers) or that anyone is considering it for consoles, mind you.
 
AFAIK, the current Xbox's CGPU is also made on an SOI process.

Do you know if the old Chartered Semiconductor fab had an SOI process? If so, did they also have a bulk 45nm process?

So, I open this thread for the first time in weeks if not months for no specific reason, and what do I see? :oops: The gods are making fun of me ;)
Not that I think Z-RAM has delivered on its promises (e.g. the AMD paper with the disappointing performance numbers) or that anyone is considering it for consoles, mind you.

So if 1 = certain and 0.000001 means that only Sony would try it, if only to achieve a new marketing slogan like "Powah of teh Zee RAM" or "PS4-Z", where do we actually stand on exotic memory technologies? In terms of promised densities it looks like Z-RAM etc. is a winner, but in terms of actually making the stuff it seems that eDRAM and XDR2/3 are the only front-runner exotic technologies.
 
There was already a thread where AMD dumped Z-RAM. It was nowhere near reliable enough for a serious application.
AMD instead opted for the sexy new thing, T-RAM, which is much like Z-RAM in that once AMD mentions it has licensed the tech, years of silence follow.
 