Predict: The Next Generation Console Tech

That's when Sony corporate stormed into the room and threw a bag over his head.
Not quite.....

Anyway, that's my piece and I came across this, so a few bits more.

Obviously, it's a piece about design strategy rather than specification. And it's not necessarily covering stuff that will definitely be in the PS4, though some of it could be in there.

Also, Tsuruta-san was at IEDM, I think, largely to set out the roadmap and hopefully pique some interest from potential technology suppliers.

However, it's clear that they are after a successor to Cell that will give them enough headroom to progressively add features to the box (hence the DSP and the programmable logic - I don't think the latter's meant for game developers in any way whatsoever), and that will last them a decade.

So, it's capacity in the event that the advanced haptics and so on become available before the fourth gen PS reaches end-of-life. And given the likely NRE on a Cell-like SoC/3D implementation today, that makes some sense. It's cheaper to add a Kinect a few years in and refresh your market that way than having to rearchitect the platform all over again. Will all the stuff on his roadmap go in the next box - probably not. But you're talking a period from at least 2013-2023, even if they announce post-E3. However, the balance to that is that you'd be using current technology that might not scale for everything.

On the 300fps, I didn't get a chance to ask him during the interview because it only occurred to me afterwards, but that looks like a 3D play based on this headset/glasses idea. Finger in the wind, but say you're delivering 60fps to each eye (so 120fps per player) for two players who have different perspectives (they're playing in AR): you need to output 240fps just to do that, yes? Go above two players and you would already have to drop the frame rate to around a more typical 30fps. So, it could be another capacity and output play to feed multiple sources as a target for a single viewing plane on a display.
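
To make that back-of-the-envelope arithmetic concrete, here's a quick sketch of how a fixed output budget like 300fps divides among eyes and viewers. This is purely my own illustration of the numbers above, not anything from the interview:

# My own illustration of the frame-budget arithmetic above: a fixed total
# output rate shared between viewers and eyes.

def per_eye_fps(total_fps, viewers, eyes_per_viewer=2):
    """Frames per second each eye of each viewer receives."""
    return total_fps / (viewers * eyes_per_viewer)

for viewers in (1, 2, 3, 4):
    print(viewers, "player(s):", per_eye_fps(300, viewers), "fps per eye")

# 1 player(s): 150.0 fps per eye
# 2 player(s): 75.0 fps per eye   (60fps x 2 eyes x 2 players = 240 fits in 300)
# 3 player(s): 50.0 fps per eye
# 4 player(s): 37.5 fps per eye   (roughly the "more typical 30fps" point above)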

On 8k - well Sony has a 4k projector at CES this week. In fact, it's been around for a few months. They'll keep pushing resolution not just for the consumer market but also to push the envelope on the broadcast/filmmaking side. So I'd say that they could get the prototypes, do high-end deployments and then bring the technology down to an affordable level within a decade. Whether the market will want it..... another question. But it's not technologically unfeasible that it could happen within the PS4's lifetime.

But the main point here is that Tsuruta-san was talking about how they go into the design and specification process and what they take account of within the terms of their roadmap.

It's ambitious thinking and you have to admire that. But the caveat that Tsuruta-san put down about ROI was pretty strong.
 
On 8k - well Sony has a 4k projector at CES this week. In fact, it's been around for a few months. They'll keep pushing resolution not just for the consumer market but also to push the envelope on the broadcast/filmmaking side. So I'd say that they could get the prototypes, do high-end deployments and then bring the technology down to an affordable level within a decade. Whether the market will want it..... another question. But it's not technologically unfeasible that it could happen within the PS4's lifetime.

But the main point here is that Tsuruta-san was talking about how they go into the design and specification process and what they take account of within the terms of their roadmap.

Did he mention anything about 16bit HDR imaging on displays? In theory they should be able to do this with their Crystal-LED tech. David Kirk of nVidia was big on this improvement for displays. Panavision also sees movie cameras that can capture HDR as the next big thing in image quality.
 
However, it's clear that they are after a successor to Cell that will give them enough headroom to progressively add features to the box (hence the DSP and the programmable logic - I don't think the latter's meant for game developers in any way whatsoever),
Can you clarify at all what 'programmable logic' means? Are we talking something along the lines of a PGA? It occurred to me just the other day that a PC can struggle with encoding h.264 where my camcorder can do it on the fly, so why hasn't anyone made a little chip that plugs into the USB port for the purpose? If Sony are thinking of reassignable logic that can turn into an ideal format en/decoder as new techs evolve, that makes sense to me, but I've no idea how that field has developed. It's not one we hear much about at all.

On the 300fps, I didn't get a chance to ask him during the interview because it only occurred to me afterwards, but that looks like a 3D play based on this headset/glasses idea...
That makes sense. 300 frames per second doesn't mean all frames belong to the same camera! With multi-person viewing, there's the possibility of individual viewers picking their own camera angles and stuff. Targeting 300 fps really means a data stream capable of carrying multiple viewports, so thinking ahead to that tech is important.

On 8k - well Sony has a 4k projector at CES this week.
The issue of diminishing returns means very few homes are equipped to benefit greatly from 2k, and even fewer from 4k, so I can't see it ever being a target for consoles. But with what you said above, I'm wondering if the general idea here isn't so much the next console(s) but a whole hardware platform/policy reaching out into Sony's many departments. Just as they now have SEN for their content divisions, a common hardware strategy that pulls together all their arms is a sound move to compete with Korea. Sony can actually have a vision and decide the future in a way, as they can make the content and the hardware to drive it.
 
Can you clarify at all what 'programmable logic' means? Are we talking something along the lines of a PGA? It occurred to me just the other day that a PC can struggle with encoding h.264 where my camcorder can do it on the fly, so why hasn't anyone made a little chip that plugs into the USB port for the purpose? If Sony are thinking of reassignable logic that can turn into an ideal format en/decoder as new techs evolve, that makes sense to me, but I've no idea how that field has developed. It's not one we hear much about at all.

That makes sense. 300 frames per second doesn't mean all frames belong to the same camera! With multi-person viewing, there's the possibility of individual viewers picking their own camera angles and stuff. Targeting 300 fps really means a data stream capable of carrying multiple viewports, so thinking ahead to that tech is important.

The issue of diminishing returns means very few homes are equipped to benefit greatly from 2k, and even fewer from 4k, so I can't see it ever being a target for consoles. But with what you said above, I'm wondering if the general idea here isn't so much the next console(s) but a whole hardware platform/policy reaching out into Sony's many departments. Just as they now have SEN for their content divisions, a common hardware strategy that pulls together all their arms is a sound move to compete with Korea. Sony can actually have a vision and decide the future in a way, as they can make the content and the hardware to drive it.

I was thinking about how Sony and Toshiba had planned to use the Cell in so many devices, yet it really didn't take off. Why was that?

But what if Sony really went out on a limb and decided to try again for the next PlayStation system, with something like a quad-core Power7 with a couple of SPEs per core, while extending that processor to other devices, specifically home computing products like set-top boxes and the TVs themselves? They could put more focus on creating a multi-faceted software platform and ecosystem that supports itself on multiple fronts and also encourages consumers to buy into it with multiple products, just like other companies have been doing. I've talked a lot about this in regards to MS, and Sony could too.
 
SOC designs have been produced with processors packaged with FPGA blocks.

In addition to fixed blocks, SOC designers may be enticed by the prospect of having some "blank" customizable logic to implement future standards or protocols that did not exist at the time the rest of the SOC's design was finalized.

Possibly, a new codec could be implemented on the same hardware. It would probably not be as good as a dedicated block, but it is better to be less efficient than to have nothing.
Customizable logic and I/O may permit a longer-lived console by allowing workarounds for signalling and protocol changes that software updates cannot emulate.

Platform security could use it, possibly to allow retroactive fixes to cracked encryption. It could be a threat vector, however.
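
As a rough sketch of how that "blank logic as a fallback" idea could sit in a media stack (all the names here are invented for illustration, nothing from the article): decode with a fixed-function block when one exists, fall back to an FPGA bitstream shipped in a firmware update for formats that arrived after tape-out, and only then drop to software.

# Hypothetical sketch only: FIXED_BLOCKS, FPGA_BITSTREAMS and the decoder names
# are invented for illustration, not any real console API.

FIXED_BLOCKS = {"h264", "mpeg2"}            # codecs baked into silicon at design time
FPGA_BITSTREAMS = {"hevc": "hevc_v1.bit"}   # codecs added later as FPGA configurations

def pick_decoder(codec):
    if codec in FIXED_BLOCKS:
        return ("fixed-function block", codec)
    if codec in FPGA_BITSTREAMS:
        # reconfigure the spare logic with a bitstream from a firmware update
        return ("FPGA bitstream", FPGA_BITSTREAMS[codec])
    # worst case: burn CPU/SPU time on a software decode
    return ("software decode", codec)

print(pick_decoder("h264"))   # ('fixed-function block', 'h264')
print(pick_decoder("hevc"))   # ('FPGA bitstream', 'hevc_v1.bit')
print(pick_decoder("vp9"))    # ('software decode', 'vp9')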
 
300 frames per second doesn't mean all frames belong to the same camera! With multi-person viewing, there's the possibility of individual viewers picking their own camera angles and stuff. Targeting 300 fps really means a data stream capable of carrying multiple viewports, so thinking ahead to that tech is important.

Yeah... could be multiplayer or holographic display.
 
SOC designs have been produced with processors packaged with FPGA blocks.

In addition to fixed blocks, SOC designers may be enticed by the prospect of having some "blank" customizable logic to implement future standards or protocols that did not exist at the time the rest of the SOC's design was finalized.

Possibly, a new codec could be implemented on the same hardware. It would probably not be as good as a dedicated block, but it is better to be less efficient than to have nothing.
Customizable logic and I/O may permit a longer-lived console by allowing workarounds for signalling and protocol changes that software updates cannot emulate.

Platform security could use it, possibly to allow retroactive fixes to cracked encryption. It could be a threat vector, however.
Those are good reasons, but it seems unusual to talk about something in public that only a console vendor will use.
 
Timothy Lottes' speculation on next-gen GPU power.

http://timothylottes.blogspot.com/2012/01/to-extremes-and-back-to-reality.html

Ok, Now Back to Reality

My prior comment, "IMO a more interesting next-generation metric is can an engine on a ultra-highend PC rendering at 720p look as real as a DVD quality movie?" is a rhetorical question asking if it is possible for a real-time engine to start to approach the lower bound of a DVD movie in realism.

To make this clear, I'm not suggesting that games should compromise interactive experience just to get visual quality. If I was going to develop a title for next generation consoles I would output 1080p and run frame locked to 60Hz with no dropped frames, period. I still believe developers will be able to start to reach the quality of film for next generation consoles and current generation PCs, and I'm intending to develop or prove out some of the underlying techniques and technology which gets us there.

At the same time, certainly expectations for next generation consoles should at least be grounded in some rough realistic estimates for performance. Using public information found on the internet, let's nail down a realistic estimate of what next generation console performance will be, by looking at how ATI/AMD has evolved GPU performance after the Xbox 360,

(1.) Next gen console games will be outputting at 1080p. I can say this with full confidence simply because HDTV typically adds a frame of latency when it needs to convert from 720p to 1080p.

(2.) Using HD6970 as a proxy for a high end PC version of the Xbox 360, let's compare specs. Going from the typical 720p @ 30Hz on Xbox360 to 1080p @ 60Hz on HD6970 with 2x the geometry would take roughly 4x the performance (2x the pixels and geometry times 2x the frame rate) just to provide a similar experience at the higher resolution and frame rate with similar average pixels/triangle.

HD6970 has roughly another 2x over the 4x required to maintain the same look at the full HD experience,

Xbox360 = 240 Gflops : 22.4 GB/s : 8 Gtex/s : 4 Gpix/s
HD6970 = 2703 Gflops : 173 GB/s : 84.5 Gtex/s : 28.2 Gpix/s
-------------------------------------------------------------
roughly 11x Gflops : 7x GB/s : 10x Gtex/s : 7x Gpix/s



(3.) What about process scaling? Let's attempt to get an idea of what future technology might have; let's compare HD6970 to HD7970. It looks like AMD managed around a 1.4x on-paper spec increase, except they did not scale Gpix/s.

HD6970 = 2703 Gflops : 173 GB/s : 84.5 Gtex/s : 28.2 Gpix/s : 250 Watt
HD7970 = 3789 Gflops : 264 GB/s : 118.4 Gtex/s : 29.6 Gpix/s : 250 Watt
-------------------------------------------------------------------------
roughly 1.4x Gflops : 1.5x GB/s : 1.4x Gtex/s : 1x Gpix/s



(4.) What about power scaling? The latest shrink of the Xbox 360 hardware uses a 115 Watt power supply (for the entire system, not just the GPU). So let's assume that next generation consoles won't have huge power supplies like PC GPUs. Taking what I'm wild-guessing to be a really liberal estimate for possible GPU power in a 115 Watt system, let's compare a medium power modern proxy for the Xbox 360, the HD6750 (which is an 86 Watt TDP on paper). These numbers suggest that if Microsoft had launched an Xbox update around last year, it would not be able to do 1080p at 60 Hz with the same look as current 360 games (because the HD6750 isn't 4x the 360).

Xbox360 = 240 Gflops : 22.4 GB/s : 8 Gtex/s : 4 Gpix/s
HD6750 = 1008 Gflops : 73.6 GB/s : 25.2 Gtex/s : 11.2 Gpix/s
-------------------------------------------------------------
roughly 4.2x Gflops : 3.3x GB/s : 3.2x Gtex/s : 2.8x Gpix/s



(5.) Next generation console performance will be a function of how much power the machine uses and what process technology each vendor adopts. The launch date of the console is going to hint at what process is used. Process scaling is not constant, but for the sake of making this simple, let's just assume each process gets 1.4x the performance. Then let's look at estimated performance scaling from the HD6750 to keep closer to current "console" power levels. This will provide a very rough estimate of what future consoles might have. Let's estimate process technology road maps by looking at google image search results,

2011 : 40nm : HD6750 : 4.2x Gflops : 3.3x GB/s : 3.2x Gtex/s : 2.8x Gpix/s
2012 : 28nm : ?????? : 5.8x Gflops : 4.6x GB/s : 4.4x Gtex/s : 3.9x Gpix/s
2013.5 : 20nm : ?????? : 8.2x Gflops : 6.4x GB/s : 6.2x Gtex/s : 5.5x Gpix/s
2015 : 14nm : ?????? : 11.5x Gflops : 9.0x GB/s : 8.6x Gtex/s : 7.7x Gpix/s



Given the window of possible launch dates and power targets it would be hard to know exactly what will end up in next generation consoles; however, a 2011 high-end single-GPU card seems like a possible proxy for a next generation console, and at least a good start to understanding what could be possible.
Posted by Timothy Lottes at 21:38
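
For what it's worth, the ratios in the quoted post are easy to reproduce from the spec numbers it lists; here's a quick recalculation using only those figures and his flat 1.4x-per-node assumption (my own recap, any rounding differences are mine):

# Reproducing the ratio arithmetic from the quoted post, using only the spec
# numbers it lists: (Gflops, GB/s, Gtex/s, Gpix/s).

XBOX360 = (240, 22.4, 8, 4)
HD6970  = (2703, 173, 84.5, 28.2)
HD6750  = (1008, 73.6, 25.2, 11.2)

def ratios(gpu, base=XBOX360):
    return tuple(round(a / b, 1) for a, b in zip(gpu, base))

print("HD6970 vs Xbox360:", ratios(HD6970))   # ~(11.3, 7.7, 10.6, 7.1) -> his "11x/7x/10x/7x"
print("HD6750 vs Xbox360:", ratios(HD6750))   # ~(4.2, 3.3, 3.2, 2.8)

# His ~4x bar: 2x pixels/geometry (1080p is strictly 2.25x the pixels of 720p,
# which he rounds to 2x) times 2x frame rate. The HD6750 barely clears that bar
# in flops and falls short on bandwidth and fill rate, hence his point that a
# 2011 mid-power refresh couldn't hold the 360 look at 1080p60.

# Projecting forward from the HD6750's 4.2x flops with his flat 1.4x per node:
scale = 4.2
for node in ("28nm (2012)", "20nm (~2013.5)", "14nm (~2015)"):
    scale *= 1.4
    print(node, "-> roughly %.1fx Xbox 360 Gflops" % scale)   # ~5.9x, ~8.2x, ~11.5x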
 
60Hz per eye with shutter glasses is awfully close to the critical flicker frequency perceivable by the human eye. To me it's clearly a bad standard, brought in (or rather "slackered in") by digital displays; hopefully the cloud gaming guys are up to better stuff (with a theoretical display bundle).
 
60Hz per eye with shutter glasses is awfully close to the critical flicker frequency perceivable by the human eye. To me it's clearly a bad standard, brought in (or rather "slackered in") by digital displays; hopefully the cloud gaming guys are up to better stuff (with a theoretical display bundle).

60 per eye has been fine in movies (yes, the movie isn't actually 60 frames per second per eye, but that's irrelevant) and in games before, so why wouldn't it be now?
 
60 per eye has been fine in movies (yes, the movie isn't actually 60 frames per second per eye, but that's irrelevant) and in games before, so why wouldn't it be now?

Because we'd want high velocity content for high velocity displays ultimately:
Temporal aliasing is caused by the sampling rate (i.e. number of frames per second) of a scene being too low compared to the transformation speed of objects inside of the scene; this causes objects to appear to jump or appear at a location instead of giving the impression of smoothly moving towards them. To avoid aliasing artifacts altogether, the sampling rate of a scene must be at least twice as high as the fastest moving object.
http://en.wikipedia.org/wiki/Temporal_anti-aliasing
Can't really say 2x 60fps is cutting it for me.... ;)
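
To put a rough number on why 2x 60fps may not cut it (my own worked example, not from the quoted article): the faster something moves across the screen, the bigger the jump between consecutive frames at a given refresh rate.

# My own worked example: per-frame displacement of an object that sweeps a
# 1920-pixel-wide screen in one second, at various refresh rates.

SCREEN_WIDTH_PX = 1920
CROSSING_TIME_S = 1.0

for fps in (30, 60, 120, 300):
    px_per_frame = SCREEN_WIDTH_PX / (CROSSING_TIME_S * fps)
    print("%3d fps -> object jumps %.1f px between frames" % (fps, px_per_frame))

#  30 fps -> 64.0 px
#  60 fps -> 32.0 px
# 120 fps -> 16.0 px
# 300 fps ->  6.4 px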
 
Can you clarify at all what 'programmable logic' means? Are we talking something along the lines of a PGA? It occurred to me just the other day that a PC can struggle with encoding h.264 where my camcorder can do it on the fly, so why hasn't anyone made a little chip that plugs into the USB port for the purpose? If Sony are thinking of reassignable logic that can turn into an ideal format en/decoder as new techs evolve, that makes sense to me, but I've no idea how that field has developed. It's not one we hear much about at all.

Virtex 7 from Xilinx is quite the beast http://www.xilinx.com/products/silicon-devices/fpga/virtex-7/index.htm, but I don't think an FPGA would be ideal. SPUs in Cell are capable of en/decoding just about anything in software (today); make them more "next-gen" and they will be future-proof enough.
 
Virtex 7 from Xilinx is quite the beast http://www.xilinx.com/products/silicon-devices/fpga/virtex-7/index.htm, but I don't think an FPGA would be ideal. SPUs in Cell are capable of en/decoding just about anything in software (today); make them more "next-gen" and they will be future-proof enough.

I was thinking along the same lines as you: what kind of en/decoding or processing are the SPUs inefficient at?

Is there some future standard that may need some heavy processing? The PS3 was a trojan horse that helped the adoption of HDMI, BD and 3D technology. I guess it is likely that Sony may want the PS4 to be a trojan horse for some new technology as well. If we could predict what technology that could be, we may be able to better understand their design choices.

One upcoming technology is glassless 3D-TVs. It seems that the most promising technology is a TV screen with lenticular lenses with multiple views for each eye. MPEG is right now working on a standard to support this, and it uses 4 views for each eye, i.e. 8 views in total, to allow convenient multiple viewpoints. The standard is to provide an algorithm that creates these 8 views from a stereoscopic 3D video stream, i.e. 2 views. Right now there are 12 research laboratories around the world evaluating conversion algorithms for this. I read about this here and here. The articles say that if the tests pan out well, there will be a new standard within "some year".

Obviously this conversion requires quite a lot of processing power to be done on a video stream in real time. Perhaps FPGAs could help in this case; I am not an expert on FPGAs and image processing, so I can't tell for sure that they would be more efficient than a number of SPUs working at 3.2 GHz.

If Sony wants to use a stereoscopic 3D camera to create a depth map, a la Kinect, from the two images, that would require a similar kind of image processing; the FPGA could maybe help in that case as well.

Another thing: 8 views at 30 fps makes 240 fps, not far from the 300 he mentions. Maybe a connection?

There is already an encoding standard for BD that allows more views than the current stereoscopic 3D, but I don't think there are any commercial BDs yet providing more than two views, and I also don't think that the current HDMI standard supports this, but this may be something Sony is aiming for with the PS4. :?:
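
Just to size that conversion job (my own back-of-the-envelope, assuming 1080p views; the MPEG work above doesn't fix a resolution): synthesising 8 views at 30 fps from a 2-view stream means producing on the order of half a gigapixel per second, before counting any per-pixel disparity estimation.

# Rough sizing of the 2-view -> 8-view conversion discussed above.
# Assumes 1080p views at 30 fps; my own illustration.

W, H = 1920, 1080
VIEWS_OUT, FPS = 8, 30

pixels_out_per_s = W * H * VIEWS_OUT * FPS
print("output: %.2f Gpixels/s" % (pixels_out_per_s / 1e9))   # ~0.50 Gpixels/s

# and the "maybe a connection?" arithmetic:
print("delivered frames per second:", VIEWS_OUT * FPS)        # 240, vs the 300 target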
 
It seems that the most promising technology is a TV screen with lenticular lenses with multiple views for each eye.
No it isn't ... it's a transitional niche technology. Fixed viewing zone displays will never be popular in the living room.
 
I was thinking along the same lines as you: what kind of en/decoding or processing are the SPUs inefficient at?

Is there some future standard that may need some heavy processing? The PS3 was a trojan horse that helped the adoption of HDMI, BD and 3D technology. I guess it is likely that Sony may want the PS4 to be a trojan horse for some new technology as well. If we could predict what technology that could be, we may be able to better understand their design choices.

One upcoming technology is glassless 3D-TVs. It seems that the most promising technology is a TV screen with lenticular lenses with multiple views for each eye. MPEG is right now working on a standard to support this, and it uses 4 views for each eye, i.e. 8 views in total, to allow convenient multiple viewpoints. The standard is to provide an algorithm that creates these 8 views from a stereoscopic 3D video stream, i.e. 2 views. Right now there are 12 research laboratories around the world evaluating conversion algorithms for this. I read about this here and here. The articles say that if the tests pan out well, there will be a new standard within "some year".

Obviously this conversion requires quite a lot of processing power to be done on a video stream in real time. Perhaps FPGAs could help in this case; I am not an expert on FPGAs and image processing, so I can't tell for sure that they would be more efficient than a number of SPUs working at 3.2 GHz.

If Sony wants to use a stereoscopic 3D camera to create a depth map, a la Kinect, from the two images, that would require a similar kind of image processing; the FPGA could maybe help in that case as well.

Another thing: 8 views at 30 fps makes 240 fps, not far from the 300 he mentions. Maybe a connection?

There is already an encoding standard for BD that allows more views than the current stereoscopic 3D, but I don't think there are any commercial BDs yet providing more than two views, and I also don't think that the current HDMI standard supports this, but this may be something Sony is aiming for with the PS4. :?:


& this is why I think it's going to be best for Sony to hold off on next gen until 2015, because without a new TV standard or some kind of new control interface there really isn't much that's going to make most people feel they need a new console in the next 3 years.

People are looking at 4K / 8K & saying there's no use in it if the TV isn't 85" or bigger, but they are not looking at what it's going to bring, & that's glasses-free 3D.

4K gives us 720p glasses-free 3D & 8K will give us 1080p/2K glasses-free 3D.

4K is about $10,000 now, but if it catches on in 2015 it should be in the same price range that 1080p was around 2006.

It will have a slow start, but this is a console that's going to have to last until about 2020, so these 4K TVs are going to be dirt cheap within this console's life. It would be better to support 4K than to come out right before 4K becomes a standard & not have it.
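
One way to see that resolution arithmetic (my own sketch, assuming the panel's pixels are tiled evenly among views in both dimensions; a horizontal-only lenticular split would be less favourable):

# My own sketch of the "4K -> 720p glasses-free 3D, 8K -> 1080p" claim above,
# assuming views tile the panel evenly in both dimensions.

PANELS = {"4K": (3840, 2160), "8K": (7680, 4320)}
VIEWS  = {"720p": (1280, 720), "1080p": (1920, 1080)}

for pname, (pw, ph) in PANELS.items():
    for vname, (vw, vh) in VIEWS.items():
        count = (pw // vw) * (ph // vh)
        print("%s panel -> %2d full %s views" % (pname, count, vname))

# 4K panel ->  9 full 720p views,  4 full 1080p views
# 8K panel -> 36 full 720p views, 16 full 1080p views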
 
Virtex 7 from Xilinx is quite the beast http://www.xilinx.com/products/silicon-devices/fpga/virtex-7/index.htm, but I don't think an FPGA would be ideal. SPUs in Cell are capable of en/decoding just about anything in software (today); make them more "next-gen" and they will be future-proof enough.
I dunno. As I say, in my video encoding example, Cell could definitely munch through h.264 encodes, but if that's using up a lot of the Cell you ain't got much left to do the rest of the game with, whereas if a tiny amount of silicon can be reengineered for the task and do it much more efficiently, you have major gains. I just don't know how well custom logic solutions can outperform programmable solutions, but looking at the lack of Cell's progress outside of PS3, and the fact that custom ASICs still dominate for high-performance tasks, it's clear custom logic remains way more efficient.
 