Predict: The Next Generation Console Tech

Status
Not open for further replies.
eDRAM is all about bandwidth, and in the end the choice for or against its inclusion comes down to management of the total system BW. If we have hundreds of GB/s next gen, the inclusion of eDRAM seems less worthwhile, but if we cap at 70 GB/s for the whole system, eDRAM would be an option to provide bandwidth-free transparency and framebuffer ops. The eDRAM in Xenos is quite a limited implementation and shouldn't be considered an example of what eDRAM could offer. PS2 is a better example, with its crazy overdraw and fabulous transparency effects. If the GPU had complete read+write access to eDRAM, the benefits to XB360 would be markedly improved.
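To put the 70 GB/s figure in perspective, here is a rough estimate of what framebuffer blending alone can cost. The overdraw, frame-rate, and pixel-format numbers below are illustrative assumptions, not measurements from any real console:

```python
# Rough estimate of framebuffer traffic for alpha blending at 1080p.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4    # 32-bit RGBA colour (assumption)
OVERDRAW = 4           # average blended layers per pixel (assumption)
FPS = 60

# Each blended layer reads the destination pixel and writes it back,
# so every layer costs two framebuffer accesses.
bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL * OVERDRAW * 2
gbps = bytes_per_frame * FPS / 1e9
print(f"~{gbps:.1f} GB/s of colour traffic for blending alone")
```

Even with these modest assumptions, blending eats a measurable slice of a 70 GB/s budget, and PS2-style heavy overdraw multiplies it; this is exactly the traffic that on-die eDRAM would absorb for free.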
 

Isn't the biggest drawback of the current Xbox 360 eDRAM implementation the time it takes to read back from the external chip due to tiling? If Microsoft could do it all over again, wouldn't the first step be not to increase the actual size of the eDRAM but to incorporate it into the GPU die, which would make it that much more versatile? Even if they could fit less of it, less is more when it comes to on-die versus off-die cache, is it not?

Then, once you're considering a large on-die cache, why not also let the CPU use it, depending on what the developer wants to do? That's another case for using one larger CGPU with on-die cache over two separate smaller chips. That way a console could be made smaller and simpler without compromising performance.

I don't think we'll see another console with more than one memory bus to main memory and we're unlikely to see a console with more than one CPU. I just don't see anything else panning out, especially given the power/mm^2 ratio is skewing the designs towards smaller chips already.
 
I completely agree, though I still wonder whether we're right on the matter. Whether it's Epic (see the end of the GPU roadmap presentation), Crytek (their last notes about the future of rendering) or MS ("Zen in multi-core rendering"), it looks like they are shooting for really high specifications.

I also find it interesting that Crytek thinks future architectures will be much more divergent than they are now.

Sweeney's dream would be to bypass fixed-function hardware completely (including texture samplers, even though he thinks future hardware will still come with them), and his presentation suggests he really wants to implement something like REYES in the near future. And he is not a clown; I can't see him making that kind of statement without being serious about it.

Crytek thinks the "renaissance of graphics" should happen around 2013.

I wonder (and hope) if we're in for a surprise, because all this seems set to be pretty unexciting (not that it won't deliver).
 

I wonder if the traditional texture units need to be eliminated. Textures require a lot of data transmission from a slow optical drive (or a slightly faster HDD), they use a lot of memory, and I wonder if they are the best technology to use on a console that has set its sights on being more than 3D-ready.

Could a GPU in 2013 do without texture units completely? If texture units are merely taking your typical compressed texture bitmap and rendering it into a game world, would that look good in 3D? Would it be worth it instead to use displacement mapping and shaders to recreate what would otherwise be done with textures, giving a true 3D effect to the world rather than simply pasting 2D bitmaps all over the place?

Is it plausible for a next-generation console CPU's execution architecture to look like this:

<Shader array> <onboard cache> <multicore CPU>

Or

<Shader cluster> <CPU core> <onboard cache> <other compute groups>

With both raster and texture compute being performed by the shaders themselves when required.

That would essentially be the elimination of all fixed-function hardware. If shaders can perform the raster ops in parallel, then that would sort out the triangle-setup bottleneck that comes with replacing texels with shader ops and displacement mapping.

The target resolution is still limited (it's not getting bigger than 1920x1080), and the required output rate of 30 uniquely rendered frames per second still remains. That's not much more than the current generation, yet the real-world performance of the GPU could easily be 10x greater, which means they could effectively invest 5x the resources per pixel.
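The back-of-the-envelope numbers in that last paragraph can be checked directly. A sketch, assuming a jump from a 720p render target to 1080p and a purely hypothetical 10x raw GPU speedup:

```python
# The "10x GPU -> ~5x per-pixel budget" arithmetic, assuming a move
# from a 720p render target to 1080p at the same frame rate.
cur_pixels = 1280 * 720     # assumed current-gen render target
next_pixels = 1920 * 1080   # assumed next-gen target
gpu_speedup = 10            # hypothetical raw performance gain

resolution_cost = next_pixels / cur_pixels   # 2.25x more pixels to shade
per_pixel_budget = gpu_speedup / resolution_cost
print(f"{resolution_cost:.2f}x the pixels, ~{per_pixel_budget:.1f}x the per-pixel budget")
```

So the "5x per pixel" claim is roughly right: 10x raw throughput divided by 2.25x the pixels leaves about 4.4x more work available per pixel.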
 
Actually, Sweeney expects texture samplers to still be part of next-gen systems.
Sweeney said:
Revisiting REYES

“Dice” all objects in scene down into sub-pixel-sized triangles
Tile-based setup
Rendering with Flat Shading
No texture sampling
Analytic anti-aliasing
Per-pixel occlusion(A-Buffer/BSP)

Requires no artificial software threading or pipelining.
He could still use the sampler for effects like DoF. We had this discussion some time ago, and senior members here made it clear that you can't do without texture units for now, at least not as long as you sample textures. I'm not sure giving up on texture units would have an effect on memory usage; data would still be compressed in RAM, and most likely something a bit like Larrabee would end up slower than fixed-function hardware at compression/decompression.
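To make concrete what the shader cores would take over if texture samplers were dropped, here is a minimal software bilinear filter. This is only the filtering step a fixed-function unit performs per sample (addressing, decompression, and anisotropy aside), and the code is purely an illustrative sketch:

```python
# Minimal sketch of bilinear texture sampling done "in software",
# i.e. work a shader core would have to do per sample if
# fixed-function texture units were removed. Purely illustrative.
def bilinear_sample(tex, u, v):
    """tex: 2D list of floats; (u, v) normalised to [0, 1]."""
    h, w = len(tex), len(tex[0])
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels by their fractional distances.
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # centre of a 2x2 texture -> 0.5
```

Every textured pixel needs at least one of these (trilinear needs two plus a blend), which is why dedicated samplers are such a big win as long as sampling is in the pipeline at all.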

I think that as long as sampling is required, texture units are a huge win. Sweeney is talking about changing the way games are rendered.
The real question is: can a "GPU" be made powerful enough that REYES can be done in real time?
I would not say Sweeney is making bold statements lightly; he has to believe this is achievable in the near future (though for reference he considers a 4 TFLOPS system rendering at 1080p @ 60fps, which would mean big, hot, expensive systems for manufacturers, or, as now, most games being rendered at lower resolution, and even lower in 3D mode).
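For readers unfamiliar with REYES, the "dicing" step in Sweeney's slide can be sketched with a toy recursion: split screen-space geometry until each piece is pixel-sized or smaller, then shade it flat with no texture sampling. A real REYES pipeline dices parametric patches into micropolygon grids; this shows only the control-flow skeleton, with all numbers illustrative:

```python
# Toy sketch of REYES-style "dicing": recursively split a screen-space
# quad until every piece is at most a pixel across. A real renderer
# would then flat-shade each micropolygon, with no texture sampling.
def dice(x, y, w, h, out):
    if w <= 1.0 and h <= 1.0:      # pixel-sized: stop and shade flat
        out.append((x, y, w, h))
        return
    hw, hh = w / 2, h / 2
    for dx in (0, hw):             # otherwise split into four children
        for dy in (0, hh):
            dice(x + dx, y + dy, hw, hh, out)

micropolys = []
dice(0, 0, 4, 4, micropolys)       # a quad covering 4x4 pixels
print(len(micropolys))             # -> 16 pixel-sized pieces
```

The point of the exercise is the cost model: geometry load scales with screen area rather than with asset complexity, which is why the question of raw FLOPS (Sweeney's 4 TFLOPS figure) dominates the feasibility argument.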
Removing the texture samplers and getting rid of sampling altogether should somehow solve the huge latency problem faced by today's GPUs; "standard" latency to RAM would be easier to hide.
I don't know what the impact on actual silicon would be, but I can see this saving some die area.

For the hardware, I think it would have to be a bit like Larrabee, but I wonder if going with something a bit more like AMD's Bulldozer could help achieve better density and limit communication overhead, amortizing the cost of a cleverer front end and of the L2 cache over two or more "blocks/cores" (Bulldozer is rumoured to be 4-issue?) feeding more potent SIMD arrays and less potent integer pipelines.
 
Sony: "developers will help build the next Playstation" and "future platform related activities" are underway.

Sounds like another indication MS was right with the 360 and Sony was wrong :LOL: Sony will most likely go to a PC-style multicore CPU next gen, at least rumors suggest so.

Getting dev input can help Sony a lot though.
 

That kind of depends on which developers are giving input, though. What if it's Naughty Dog, Insomniac, Santa Monica, Media Molecule, DICE? What if we add id and Epic? Crytek? What if we add Rockstar, Black Rock, etc.? ;) Each will have very different input.

For my part, I think it is great if developers have good and early input, preferably all of them. Some of the input will influence hardware decisions, some the dev tools, some the available services and libraries, etc.
 
I think Epic, Crytek, and id will get a big say in it, because those guys are this generation's third-party engine builders. First parties will probably have some influence too.
 
Their goals are different. The 360 is a dedicated gaming box acting as an MPC extender. Sony wanted the PS3 to be a consolidated media platform (think Blu-ray 3D and more). The Cell vision is sound; execution is the problem, due to various factors.

Getting developers' feedback is extremely important regardless of what they want to do, though. This time round, I think motion gaming and browser gaming may feature more prominently. FWIW, the PS3 architecture is rather well suited to processing natural-interface input and media analysis.

EDIT: There was a paper on future applications moving from text-based (word processing, SQL, etc.) to media-based (Pandora, gaming, photo galleries, Blu-ray, etc.). Cell was designed for the latter. Kaz has said it's fine to explore non-games, but they have to nail gaming as a top priority first. Other than the Blu-ray stack, I don't think even Sony has fully exploited Cell for general media applications yet.
 

This is the smartest thing I have heard from Sony!
I always wondered how on earth they could build such difficult-to-develop consoles when they have direct access to a broad phalanx of world-class game developers!
Asking devs what the next console tech should be like is the right thing to do, as these guys know the most and (in the end) have to deal with the hardware (for instance: they cannot complain about it if they designed it themselves :mrgreen:).

I would especially ask third-party devs, as they will certainly have a different point of view, since they typically have much tighter time and money budgets to deal with!
 
Indeed, if I remember correctly, Carmack was pretty suspicious of Larrabee and wanted a proof of concept, whereas Sweeney sounds like he wants something even more radical in design, no matter how the performance turns out in the end.

EDIT
I found an "old" interview with Sweeney where his overall POV is expressed more clearly:
http://arstechnica.com/gaming/news/2008/09/gpu-sweeney-interview.ars/1
 
Let's not forget, Sweeney & Co. have a vested interest in a more general-purpose GPU becoming the standard in something like a console, as that would put more reliance on middleware licensing (Unreal Engine 3-4, etc.).

I see the market going in that direction, but it isn't there yet. If either Sony or MS takes the GPGPU route and abandons fixed-function hardware in the GPU, they will fare rather poorly in comparison screenshots versus the other, or have to suck up the additional cost of die space to make up the difference. Neither way makes business sense at this point.

BTW, kudos to Sony for consulting with developers! It seems they have learned quite a bit through this experience. I remember when Sony first made the PS1: it was very easy to code for in comparison to the Saturn. Then Sega learned their lesson and developed a coder's dream machine in the DC (no pun intended), but Sony had enough clout not to care about developers' wishes, which led to a PITA PS2 architecture. And again, success bred a certain attitude, which went hand in hand with another PITA architecture in the PS3.

Sometimes greater success comes after failure. I look forward to seeing what they come up with.

From the sounds of things, I'm going to guess it will be a dual-core Cell with an OOOE PPE and SPEs with larger caches. I also know the bean counters are not happy in Sony land, so I'm not expecting a huge increase in processing power:

(x2) 3.2GHz OOOE PPE w/ 1MB cache & 8-12 SPEs w/ 512KB cache each

Greater general-purpose power, which will provide ease of use, but also BC and a head start for those already familiar with SPE programming.

Most importantly, though, I expect Sony to put a lot of time into polishing their existing tools and having them ready to deploy to developers when PS4 game development kicks off. Scaling up the existing architecture will enable a multitude of savings on Sony's part and will increase the quantity and quality of their first-generation efforts for the PS4.
 
http://www.develop-online.net/news/35289/Sony-Developers-will-help-build-the-next-PlayStation
One of the highest-ranking executives at Sony Computer Entertainment has revealed the company is hard at work on future platform developments.

But with former SCE president Ken Kutaragi now out of the picture, Sony is keen to turn to its first-party studios to help make future PlayStation consoles highly accessible for tomorrow’s game creators.

In an exclusive interview with Develop magazine, Sony Worldwide Studios (WWS) boss Shuhei Yoshida candidly explained how Sony has learnt from past mistakes and is now building tech that developers can get the most out of.

“When Ken Kutaragi moved on and Kaz Hirai became the president of SCE, the first thing Kaz said was, ‘get World Wide Studios in on hardware development’,” Yoshida said.

“So he wanted developers in meetings at the very beginning of concepting new hardware, and he demanded SCE people talk to us [developers].”

And when asked whether this change in philosophy will be applied to future PlayStation hardware, Yoshida replied: “Yes, we are undergoing many activities that we haven’t yet been talking about in public. Some future platform related activities.”

Yoshida was appointed head of WWS at a time when Sony had endured a stuttering start to the PS3 era, as a number of third-party developers struggled to get enough out of the famously powerful console.

In the full Develop interview – published later this week – Yoshida explains in frank detail how SCE underwent a rescue mission for its first-party studios, bringing together top engineers from around the world to build a universal game engine.

This studio-collaborative philosophy at Sony has remained in place ever since, and was a core pillar of the design ideology for Sony’s new motion controller, PlayStation Move.

“I’m spending more time on the hardware platform,” Yoshida added, “connecting hardware guys to developers. That’s my major role now, and Move is one of those new ways of developing platforms.”
 
Sorry if this has been discussed elsewhere.

I'm thinking MS and Sony have now pretty much forced themselves (with help from Nintendo, of course) into including motion control in their next consoles. So, backing away from the CPU, GPU etc. talk in here:

Xbox Next(??): built-in Kinect, maybe a mic too, possibly with improved tech in those areas.

Sony PS4: built-in EyeToy with mic, one wand and nunchuk, again with that tech also being improved.

Wii HD(??): I'm guessing their console will be the weakest, but they really have to up the ante in the power department next gen, IMO. You can't go into the next gen with all three having motion control and each console now in direct competition. People on the fence might just turn to graphics power if the prices are close, though.

Next gen is gonna be interesting in the power department, I think. MS and Sony might be forced to scale back to keep their costs down, because they might have to include motion controls, and Nintendo might have to come up in price, because they will now have to get closer to the power of the other two companies' offerings.
 
I don't really see a built-in Kinect, as it needs to be separate from the console itself, like the Wii sensor bar. Bundled, and with a Slim-style power slot, definitely though.
I'm really not sure what to expect aside from that. I believe we're definitely hitting diminishing returns in terms of IQ, though.
 
What if these motion controllers make it more difficult to meet costs and hit certain price targets, and they, say, ship less RAM than they otherwise might have?

To eliminate this possibility, it's best that Kinect and Move both flop, demonstrating that the market won't pay $100-plus for these things.

The worst thing is that they're hoping to extend this generation and delay the next gen with these overpriced peripherals.
 
Lookout should be on ‘Wii 2’ and ‘Xbox 720’ before PS4, says Sony boss

http://www.develop-online.net/news/35315/Sony-Watch-our-rivals-go-next-gen-first

Personally, I don't buy it.

I think Sony will be sure to be ready for next gen when MS drops theirs. They won't be caught off guard again as they were in 2005. This goes in line with my thoughts on their architecture choices: keeping things PS3-based will enable a quick response, either to market conditions or to competitive pressure.

I think this was merely a response to the recent report of Sony looking into next-gen architecture, trying to defuse it and turn it into a positive for the PS3: "well, it's the most powerful, so it will have a longer life span"... (see Nintendo, Wii, DS).

2012 is looking more likely than not, IMO, for all next-gen systems. Nintendo may surprise next year, but certainly by 2012.
 
Indeed, Sony is in a tight spot. I'm not sure they can afford to be late to the party, but I'm not sure they have the funds to launch on time either. At the same time, they are threatened in their other activities by Samsung (TVs, cell/smart phones, soon tablets, players, etc.).
Actually, Samsung may be in a better position to enter the market than Sony; to some extent I wonder if Sony could make money from an alliance a bit like WINTEL: Samsung could handle the hardware and manufacturing while Sony provides content (won't happen, I know, mostly for non-business-related reasons).
 
Indeed Sony is in a tight spot I'm not sure they can afford to be late to the party but I'm not sure they have the funds to launch on time either...

Launch costs will be minimized by extending the PS3 architecture to the PS4: a Cell-based CPU and an Nvidia GPU. It'll be like a beefed-up PS3. Software development costs will also be minimized.

The markets will turn in a couple of years and Sony's financial health will improve along with them.

The pressures from their competitors are real, but Sony will make it through. Their base is hardware engineering, and I trust their ability to continue to lead in that realm.

If anyone would fall into that category, I'd say Nintendo or MSFT are more likely.

*However*, I think Sony may be risking a bit much on the 3D trend. Personally, I think the 3D trend will fail until hardware is available that enables glasses-free 3D. Hopefully Sony is protecting themselves against a 3D-trend failure.
 