Predict: The Next Generation Console Tech

I'm with the multipurpose PU camp on this one.

Once our hardware is generalized, advancements come from increasing speed and incremental tweaks, not from developing new hardware to perform specific tasks. I see no reason why crunching data of any kind should rely on different processors.

If mother nature has taught us anything, it's that specialization gets you killed. Being able to adapt is what kept us alive millions of years ago, and it's what is going to keep propelling our technology forward. We traded long-term advancement in the computer world for short-term gain by splitting different kinds of data crunching across separate processors.

We need to get back to the fundamentals of what we are trying to accomplish: interpreting, calculating and transmitting binary data. That is all we are really doing; what that data represents or is trying to do is meaningless... it's just data.

I say we take the brunt of it today, accepting lower gains in speed now, so that we get better gains in the future. The longer we wait to make the switch to a general processor, the harder it's going to be, as we will have created a rift we could have avoided. Our processors are already too specialized: we are creating software just for that hardware, and in turn creating new hardware because we want different software.
 
All-software graphics computing seems out of reach for now. I was really enthusiastic about Larrabee because I was thinking people would push something new. Even Intel "failed", or at least admitted to consistently delaying the project.
The interview Charlie did with Andrew Richards and Tim Sweeney pretty much made up my mind, as I think A. Richards made quite a few interesting and realistic points, his focus on power efficiency and consumption being the strongest one he made against T. Sweeney.

The exchange between Rys and Brainstew (in the thread about the aforementioned interview) is still pretty fresh in my mind, but I still wonder whether more fixed-function hardware could be a solution that allows for more fully programmable hardware.
 
All-software graphics computing seems out of reach for now. I was really enthusiastic about Larrabee because I was thinking people would push something new. Even Intel "failed", or at least admitted to consistently delaying the project.
There's a problem there in that Intel are competing with GPUs and need a working product. The best use of completely flexible processing will no doubt come from completely new approaches, which no-one has thought of and which no-one will think of immediately. It'll take time experimenting with the hardware and trying things out, and then finding inspiration that links with the experience gained from the hardware.

Take something like the new GI approaches, CryTek's cascaded light volumes and LBP 2's irradiance slices. These exist because the hardware is programmable and up to speed; fixed function would have prevented such innovations. As such, we see programmability as a real-world enabler. GPUs are more effective by being more flexible. We just have to limit flexibility for performance economies, but there can be no denying (in my mind at least) that a software renderer will do things people don't currently think possible as they look at graphics through the eyes of GPU programmers. Now that GPUs are more programmable, eyes are being opened to things like tessellation, which requires a new piece of hardware in the geometry shader. No hardware limits would mean every idea gets explored, and from that, efficiencies such as far lower pixel shading and vertex processing throughput with far smarter application, only determining and drawing the bits that matter.

Such research is purely the domain of academia at the moment, where investment doesn't expect returns within a few years, and academia is spread thin. I dare say if you took a Cell workstation or Larrabee server to MIT or the like and said, "here's a budget for three years to find every exciting rendering method you can," there'd be some incredible results.
 
I should have said specialized hardware instead of fixed function hardware.
Actually, I was wondering about this a while ago in a thread I opened, and Fafalada came up with a really short and meaningful "answer" (answer isn't quite the word I'm looking for, but I can't find a better one) in this post:
fafalada said:
But I digress; I don't think a modern equivalent would mean much (other than sounding nice for PR). A 2006 GS equivalent would essentially be a deferred-rendering accelerator (super fast at filling attribute buffers and not much else), which would do just fine paired with Cell or something like it, but one has to ask if it's worth specializing so much when an RSX gives pretty nice results for BOTH forward and deferred rendering, even on a fundamentally crippled memory interface.
He's indeed right; the PS3's Cell/RSX push damned fine graphics for something designed in 2005 :)

But when I see today's GPU power consumption, I wonder whether this approach could be worth it, if not for overall performance then at least in regard to power consumption. Both the PS3 and 360, after three shrinks, are still bulky and power-hungry devices, and the PC world doesn't do any better in this regard (that's pretty much an understatement). On the other hand, the SPUs are damned impressive in this respect (and some others, actually).
 
There's a problem there in that Intel are competing with GPUs and need a working product. The best use of completely flexible processing will no doubt come from completely new approaches, which no-one has thought of and which no-one will think of immediately. It'll take time experimenting with the hardware and trying things out, and then finding inspiration that links with the experience gained from the hardware.
Larrabee is more flexible than the current GPU architectures, but it gave some pretty strong hints as to the facets of a good approach.
Anything that could not use the 16-wide VPU, or that failed to keep writes from bouncing cache lines from one core to the next, would have performed badly. The coarseness of modern DRAM and its burst lengths favor schemes that packetize work, such as some forms of ray tracing and rasterization, more than others.
Fundamentally different approaches that do not employ SIMD FP throughput with high data locality and low divergence will fail to differing, but still painful, degrees on both Larrabee and conventional GPUs.

John Carmack's sparse voxel octree concept that was bandied about a while back would have been ill-suited to Larrabee, and he stated as much.
Physical realities stack the deck in favor of certain approaches, though in theoretical terms there should not be such a disparity.
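To put a rough number on the divergence point, here's a purely illustrative sketch (plain Python, nothing Larrabee- or GPU-specific; the 16-lane width and the random workload are my assumptions) of how branch divergence alone eats into the usable fraction of a wide vector unit:

```python
# Toy model only: a vector whose lanes all take the same branch costs one pass,
# while a vector with mixed outcomes must execute both sides under a mask,
# costing two passes. Real hardware is more subtle, but the trend holds.
import random

WIDTH = 16        # hypothetical vector width, echoing Larrabee's 16-wide VPU
VECTORS = 4096    # number of 16-lane work packets in this toy workload

def simd_efficiency(p_taken):
    random.seed(0)
    passes = 0
    for _ in range(VECTORS):
        lanes = [random.random() < p_taken for _ in range(WIDTH)]
        passes += 1 if len(set(lanes)) == 1 else 2   # coherent: 1 pass, divergent: 2
    return VECTORS / passes

for p in (0.0, 0.05, 0.5):
    print(f"branch-taken probability {p:.0%} -> ~{simd_efficiency(p):.0%} of peak throughput")
```

Even a 5% chance of a lane going the other way leaves over half the vectors divergent, which is why low-divergence, high-locality formulations win on this kind of hardware.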

We just have to limit flexibility for performance economies, but there can be no denying (in my mind at least) that a software renderer will do things people don't currently think possible as they look at graphics through the eyes of GPU programmers.
Software renderers came first, though.
They had, and still have, a lot of ideas even today. It seems inaccurate to characterize the body of graphics knowledge as being somehow limited to what a console GPU can render.

No hardware limits would mean every idea gets explored, and from that, efficiencies such as far lower pixel shading and vertex processing throughput with far smarter application, only determining and drawing the bits that matter.
More accurately, the hardware is less restrictive on the software, but it is certainly not unlimited.
Ideas such as stateful rendering have been researched, where per-frame data that does not change and normally is redundantly rendered is instead copied over to the next frame.
It seems wastefully rerendering everything has thus far proven to be less of a nightmare to implement.
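For what it's worth, here's a minimal sketch of that stateful-rendering idea (hypothetical names, plain Python, with a dictionary standing in for real frame buffers): results for elements whose state hasn't changed since the previous frame are copied forward instead of being redrawn.

```python
# Minimal sketch, not a real renderer: cache each element's shaded result
# together with the state it was shaded from, and reuse it next frame if
# that state is unchanged. Everything here is illustrative.
class FrameCache:
    def __init__(self):
        self.cache = {}                      # element id -> (state, shaded result)

    def render_frame(self, scene, shade):
        frame = {}
        for elem_id, state in scene.items():
            cached = self.cache.get(elem_id)
            if cached and cached[0] == state:
                frame[elem_id] = cached[1]   # unchanged: carry last frame's work forward
            else:
                result = shade(elem_id, state)           # actually re-render it
                self.cache[elem_id] = (state, result)
                frame[elem_id] = result
        return frame

# Only the element that moved gets re-shaded on the second frame.
renderer = FrameCache()
shade = lambda eid, state: f"pixels({eid}, {state})"
renderer.render_frame({"wall": "static", "hero": "pose_a"}, shade)
print(renderer.render_frame({"wall": "static", "hero": "pose_b"}, shade))
```

The hard part in practice is deciding what counts as "unchanged" once cameras, lights and shadows move, which is presumably why brute-force re-rendering has stayed the simpler option.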

Such research is purely the domain of academia at the moment, where investment doesn't expect returns within a few years, and academia is spread thin. I dare say if you took a Cell workstation or Larrabee server to MIT or the like and said, "here's a budget for three years to find every exciting rendering method you can," there'd be some incredible results.
Cell doesn't have texturing units. Larrabee had to include them.
There is a large universe of already researched software rendering programs and algorithms.
Not so many commercially successful real-time ones, though.
Movie studios and animation companies have pumped serious cash into software methods; many approaches have been tried, and good money is available if they succeed.

Now if the question is why academia doesn't feel like sitting down and finding every software rendering algorithm we can do on an Xbox 360 or PS3, I can understand that. Whatever they attempt would be 10 times slower, take much longer by dispensing with already existing tools, and not make much money.
 
Software renderers came first, though.
They had, and still have, a lot of ideas even today. It seems inaccurate to characterize the body of graphics knowledge as being somehow limited to what a console GPU can render.
Software renderers were limited by processing power, such that you couldn't really explore them. Who was implementing normal mapping on a 486? A 486 could render anything (DX software rasteriser FTW!) but would take too long, such that lines of thinking were just beyond the scope of even considering. Every exploration in the software rasterisers had to stop once they got to such long computation times they couldn't really get anywhere. Hence the earliest days of CG rendering were Phong/Gouraud-shaded vectors and the like. No-one thought about implementing normal maps because they were pleased as punch just to be able to produce 3D graphics - that was new and cutting edge! And no-one wrote a Toy Story type renderer in the early days because it would never have finished rendering any pictures! The processing power hasn't been there to support a full exploration of what is possible via software algorithms.

Cell doesn't have texturing units.
I remember discussing, prior to (or shortly after) PS3's release, a potential Cell rendering method that addressed this with some smart texture management. Larrabee had to work as a DX renderer, which meant it had to work with texture concepts already ingrained in developer mindsets.

Movie studios and animation companies have pumped serious cash into software methods; many approaches have been tried, and good money is available if they succeed.
A lot of those methods have been performance hacks to work around legacy problems. How many clean-sheet designs ever really get explored? I dunno, maybe there really are a lot, but from what I see of business, the funding will be determined by management who want a direct dollar conversion for investment, and experimental research is sidelined unless it's tackling issues that are an immediate sales point. "How do we get this scanline rasteriser to render translucent surfaces, because that's what our existing customer base wants?" rather than, "how do we create a new renderer that can deal with both the opaque geometry we have had for years, and this newfangled translucent geometry?" The latter solution is the better one, but would require a lot more investment, and you'd alienate your current customers as they look to other companies who offer a hacked-in solution right now. The world is full of slapdash solutions piled on top of legacy platforms!

However, this isn't really the place to discuss the future of rendering tech. That's a branch that warrants (and has) its own discussion. This thread is more about the bits that go in and the designs. At the moment I concede that there isn't a suitable processing platform to support my Grand Vision of the Future, and any tablet console will need to make do with conventional portable parts.
 
Software renderers were limited by processing power, such that you couldn't really explore them. Who was implementing normal mapping on a 486? A 486 could render anything (DX software rasteriser FTW!) but would take too long, such that lines of thinking were just beyond the scope of even considering. Every exploration in the software rasterisers had to stop once they got to such long computation times they couldn't really get anywhere.
There are high-end software solutions used for CG animation.
The big companies have massive render farms, so computation is orders of magnitude higher for them than a console owner.
I would suspect that if even that is not yet enough, it would be a significant factor in the continued existence of specialized hardware, which typically gets nowhere near the space or power allocated to something like Pixar's server room.

I remember discussing, prior to (or shortly after) PS3's release, a potential Cell rendering method that addressed this with some smart texture management. Larrabee had to work as a DX renderer, which meant it had to work with texture concepts already ingrained in developer mindsets.
Texture memory accesses are individually small, frequently unpredictable, unaligned, and usually involve scads of low-precision FP or integer math.
Whatever texture scheme Sony envisioned apparently seemed inferior to doing an about-face and buying an aging chip from Nvidia.

I dunno, maybe there really are a lot, but from what I see of business, the funding will be determined by management who want a direct dollar conversion for investment, and experimental research is sidelined unless it's tackling issues that are an immediate sales point.
The implication of this is that even if we weren't being held back by fixed-function hardware we'd still be held back by real-world considerations.
This is a point I've made before elsewhere.

If a scheme could promise significant visual benefits or could complete faster with little degradation, there would be a business case made.
The issue seems to be that no such scheme has made that case, and I don't believe it's because nobody is trying.
 
If mother nature has taught us anything, it's that specialization gets you killed. Being able to adapt is what kept us alive millions of years ago, and it's what is going to keep propelling our technology forward.
And yet there are hundreds of different specialized types of cells and tissues in your body filling all manner of different structural and functional needs. And it's apparent on the macro scale too. I for one am happy that I'm not a quadruped, with an eye and a nose on each limb. I cherish my separately specialized organs for sensing, traversing and manipulating my environment.
 
And yet there are hundreds of different specialized types of cells and tissues in your body filling all manner of different structural and functional needs. And it's apparent on the macro scale too. I for one am happy that I'm not a quadruped, with an eye and a nose on each limb. I cherish my separately specialized organs for sensing, traversing and manipulating my environment.

And yet it's the 3 multipurpose parts, your hands, mouth, and your brain, that make you human.
 
Drifting OT, people.

Well, have you noticed that it seems to be third parties who are really launching the 3DS? Given that Nintendo's sales are down significantly year over year, and industry sales likewise (partially due to the PS3 Slim), it makes me wonder whether they are intending to launch the next Wii within the next 18 months. It could be as soon as holiday 2011, or early 2012, or somewhere between the two dates in a staggered launch.

Even keeping the same or similar form factor, they could probably ramp up the cooling if they targeted a higher launch price, using something similar to the vapour chambers Sapphire is famous for. They could likely sell a Wii 2 with a TDP of 30-40W, against the Wii's ~25W, without blowing the budget on noise or reliability.

So, given that each Bobcat core with cache on 40nm is something like 8.9mm^2, they would only need, say, 40mm^2 of die space to implement a quad-core Bobcat, with the extra space being communication logic between cores. If Ontario is 74mm^2 with two cores and 80 SPs, then by extension it oughtn't be much larger than 160mm^2 to implement 320 SPs and 4 cores (rough numbers sketched below).
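Very rough back-of-envelope numbers for the above, taking the 8.9mm^2-per-core and 74mm^2 Ontario figures at face value (forum estimates, not official AMD data):

```python
# Back-of-envelope only: linear area scaling ignores interconnect, pad ring,
# memory controllers and the fact that shader arrays don't scale perfectly.
BOBCAT_CORE_MM2 = 8.9        # one Bobcat core + L2 on 40nm (figure quoted above)
ONTARIO_MM2 = 74.0           # Ontario die: 2 cores + 80 SPs (figure quoted above)

quad_cpu = 4 * BOBCAT_CORE_MM2
print(f"4 Bobcat cores alone: ~{quad_cpu:.1f} mm^2 (-> ~40 mm^2 with glue logic)")

# Crudest possible scaling: double the whole Ontario die for twice the cores
# and a much bigger shader array, which lands near the ~160 mm^2 figure above.
print(f"2 x Ontario: ~{2 * ONTARIO_MM2:.0f} mm^2")
```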

For security they could simply re-implement their ARM core as seen on the Wii to perform security and basic system functionality.

For an optical drive, the costs of slimline 6 speed BR drives are dropping precipitously. The extra costs would surely be covered if they went to $299 as a launch price.

http://www.newegg.com/Product/Product.aspx?Item=N82E16827136194&cm_re=blu_ray-_-27-136-194-_-Product

That ought to be half the price sold in bulk to OEMs, especially if they choose not to pay the $10 BR playback fee.

For storage, flash makers are about to shrink to the next process node or are doing it now. There shouldn't be any problems if they implemented a 64Gb flash chip.

Memory wouldn't be too difficult. They could use 2Gbit or even 4Gbit modules, four or two of them respectively, and go for a baseline of 1GB of RAM (quick sums below). That ought to be more than enough for a next-generation console.
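Quick sanity check on those RAM configurations, just the arithmetic (8 Gbit to the gigabyte, ignoring bus-width and channel considerations):

```python
# 1 GB = 8 Gbit, so either four 2Gbit chips or two 4Gbit chips gets there.
for chips, density_gbit in ((4, 2), (2, 4)):
    total_gbit = chips * density_gbit
    print(f"{chips} x {density_gbit} Gbit = {total_gbit} Gbit = {total_gbit // 8} GB")
```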

It seems that they need a new console and the above would probably be a pretty good bet! (IMO)
 
I've never thought of it, but I would have to guess Nintendo will launch next gen soonest.

And imagine this: if the Wii 2 is actually more powerful than the PS360, it will force successors to those consoles. Once the hardcore consoles have been beaten in power, their days are very much numbered; high-end development would begin moving to this hypothetical Wii 2, for example. I never considered Nintendo as the catalyst for next gen before...

Not that I think Nintendo will even beat the PS360 with their next console, though, since Sony/MS still sit at a barely profitable $299.
 
Saw this on Neogaf

http://news.yahoo.com/s/pcworld/20101008/tc_pcworld/cellprocessordevelopmenthasntstalledibmctosays

Development around the original Cell processor hasn't stalled and IBM will continue to develop chips and supply hardware for future gaming consoles, a company executive said.

IBM is working with gaming machine vendors including Nintendo and Sony, said Jai Menon, CTO of IBM's Systems and Technology Group, during an interview Thursday. "We want to stay in the business, we intend to stay in the business," he said.

Interesting, no mention of MS there. Hard to say if it's meaningful or just a casual omission, especially since this comes later in the article:
"I think you'll see [Cell] integrated into our future Power road map. That's the way to think about it as opposed to a separate line -- it'll just get integrated into the next line of things that we do," Menon said. "But certainly, we're working with all of the game folks to provide our capabilities into those next-generation machines."
 
I've never thought of it, but I would have to guess Nintendo will launch next gen soonest.

And imagine this: if the Wii 2 is actually more powerful than the PS360, it will force successors to those consoles. Once the hardcore consoles have been beaten in power, their days are very much numbered; high-end development would begin moving to this hypothetical Wii 2, for example. I never considered Nintendo as the catalyst for next gen before...

Not that I think Nintendo will even beat the PS360 with their next console, though, since Sony/MS still sit at a barely profitable $299.

Personally though I'm not sure... Nintendo created a gulf this generation with the Wii, between the audience primarily catered to by the platform and those high end 3rd party developers that like making big AAA core games.

I wouldn't think that Nintendo jumping the gun and releasing a next-gen console too soon will persuade the likes of Epic, Rockstar, Crytek etc. that their games (which have significant appeal to a specific, albeit broad, demographic) will do well on a new Nintendo console. Who's to say that, regardless of the technical capability of Nintendo's next console, the same kinds of gamers that buy up your PS2s, Xboxes, PS3s and 360s will migrate over to a new Nintendo box?

Nintendo don't just own a platform but an entire brand, and probably one of the most famous brands of the big 3. Nintendo will have a harder job convincing the hardcore gamer that they will be able to cater to them adequately, as well as, if not better than, MS and Sony have with the PS360.

After all, the GC was more technically capable than the PS2. That didn't see a mass migration of core gamers over to it, nor a mass migration of all high-end game development. I just don't see that Nintendo releasing their next box early would force MS and Sony's hands, as I believe that the majority of those core gamers and developers that care about high-end gaming will still remain on the PS360 and most likely wait it out till the PS4 and the NeXtBox are released.

If Nintendo wants to really broaden the appeal of the next console, they need to target the core gamer alongside what they do for the expanded audience they brought in with the Wii. Otherwise the issue they had with 3rd-party development on the Wii will continue, and they'd still miss out on a lot of the big 3rd-party core games. I believe they'd be better off waiting and providing a console which matches the PS4/X720 as closely as possible. That way they can garner all the 3rd-party port-downs from PS4/X720 and still own with their in-house dev support... Good things will come to a Nintendo that waits ;-)

That's my perspective anyways...
 
MS are going with a multiple Ontario CPU+GPU combination in their tablet/console combo system. :yep2:

Or a rather less subtle hint, this isn't really the place to discuss MS's options. :p Instead, point to the IBM remarks in the Next-gen tech thread. As Rangers recognised - kudos Rangers!

Point taken, wasn't paying attention. Anyway, if I were Microsoft I would personally be quite happy with the GPU contribution by ATi / AMD, and I can definitely see them allowing AMD to pitch an integrated CPU / GPU vision (note that I'm saying the vision is integrated, not necessarily the chipset on day one).

Also if that happens, I'm pretty sure it wouldn't be a literal implementation of something presented to the public today. It would be something that we won't hear about until around the next Xbox's release.
 
Point taken, wasn't paying attention. Anyway, if I were Microsoft I would personally be quite happy with the GPU contribution by ATi / AMD, and I can definitely see them allowing AMD to pitch an integrated CPU / GPU vision (note that I'm saying the vision is integrated, not necessarily the chipset on day one).

Why wouldn't it be integrated on day one? There's nothing really to suggest that they are going to make a console with thermals so high that a single-socket/chip cooling solution couldn't handle it. Besides this, there are very real advantages to combining the two in terms of GPGPU, especially in relation to latency, as they are likely going to use the GPU once again to process the camera inputs.
 
But the graphics transistors become a nuisance to dedicate to non-graphics work. An open architecture won't have idle transistors. I'd rather have an open, fully programmable processing model, to use however you choose. Much more efficient (as long as you can get respectable rendering from the thing).

I'll chime in a bit late. As a lot of others have pointed out, the efficiency claim in the quote above is questionable.
Actually, the architecture of today's consoles makes quite a bit of sense. You have a GPU to deal with the embarrassingly parallel task of putting pixels on the screen, and one (or a few) CPUs optimised for doing a good job with game logic, system housekeeping, et cetera. It is a heterogeneous computing platform that does well both with serial code and with parallel code, alleviates Amdahl's law bottlenecks, and allows for a more focussed/efficient architecture as far as rendering is concerned.
Not half bad.

There may be efficiency gains to be had by closer coupling of the systems, allowing them to share data faster or share resources such as cache, but the underlying paradigm is quite sound. The fact that it evolved from single-threaded CPU origins doesn't imply that a wholly homogeneous/parallel architecture would be the evolutionary goal.
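To illustrate the Amdahl's law point with some made-up numbers: however wide the parallel (GPU-like) part gets, the serial game-logic fraction caps the overall speedup, which is one argument for pairing a strong serial CPU with the parallel hardware rather than going fully homogeneous.

```python
# Amdahl's law: overall speedup = 1 / ((1 - p) + p / n), where p is the
# parallelisable fraction of the work and n is the parallel width.
# The fractions and widths below are illustrative, not measurements.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.99):
    for n in (8, 64, 1024):
        print(f"parallel fraction {p:.0%}, width {n:4d} -> {amdahl_speedup(p, n):6.1f}x speedup")
```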
 
Why wouldn't it be integrated on day one? There's nothing really to suggest that they are going to make a console with thermals so high that a single-socket/chip cooling solution couldn't handle it. Besides this, there are very real advantages to combining the two in terms of GPGPU, especially in relation to latency, as they are likely going to use the GPU once again to process the camera inputs.

Has there ever been a high end integrated GPU/CPU?

The next Xbox IS going to be a high-end console, I promise. All this stuff, integrated parts and ARM chips, is a non-starter.


But I guess it's a speculation thread, not so much a realistic-speculation thread :p
 