Predict: The Next Generation Console Tech

ARM ain't necessarily that good, but x86 can still be a pretty significant burden on lightweight cores.

Not having a direct line to Intel's engineers I can't confirm it, but it was claimed (on RealWorldTech, or Ace's?) that Silverthorne's transistor count was inflated by more than 15% over what it would have been if it weren't x86.

The business side of having an x86 in a console again is something separate. Maybe Intel would want to spread Larrabee, and if the console maker wants to have its cost-cutting measures at Intel's mercy, have at it.

Maybe an AMD x86 could make it in, and if the console maker wants to have its supply chain contingent on AMD's competence and continued existence, have at it.

Where VIA would fit, besides just being a very small design team and limited player, I don't know.
 
Thanks for your response Rapso, you've pointed me to some new stuff to read (and to fight to understand :LOL: )

Your points are really interesting. I edited some parts of my initial post because I made mistakes, and I hope that didn't make any parts come across as offensive ;)

I'm nowhere near your level of knowledge; I just try to form opinions and then read some interesting stuff about them ;)
EDIT
Trying to understand the basics ;) for those interested:
http://softwarecommunity.intel.com/isn/downloads/intel_tbb_ch01_for_promo.pdf
EDIT
Damn, that was interesting ... but only the first chapter is available online :(
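For anyone wondering what the loop parallelism in that chapter actually looks like in code, here is a minimal sketch of my own (assuming TBB's blocked_range/parallel_for interface, not taken from the PDF itself):

```cpp
// Toy example of TBB loop parallelism: scale an array across all cores.
#include <cstddef>
#include "tbb/task_scheduler_init.h"
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"

struct ScaleVector {
    float* data;
    float  factor;
    // TBB calls this on sub-ranges of the iteration space, possibly in parallel.
    void operator()(const tbb::blocked_range<size_t>& r) const {
        for (size_t i = r.begin(); i != r.end(); ++i)
            data[i] *= factor;
    }
};

int main() {
    tbb::task_scheduler_init init;          // start the worker thread pool
    const size_t n = 1 << 20;
    float* v = new float[n];
    for (size_t i = 0; i < n; ++i) v[i] = 1.0f;

    ScaleVector body = { v, 2.0f };
    // Split [0, n) into chunks of ~1024 and let TBB's scheduler steal work.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, n, 1024), body);

    delete[] v;
    return 0;
}
```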
 
...and I'd expect the consoles to move more and more toward the PC, not just allowing mouse gaming (like the PS3 already does), but becoming a replacement for surfing etc., a real replacement for the PC for most people.

That implies that big PC players like NV/Intel/AMD will urgently try to get into this market, offering their solutions of CPU+GPU+memory-controller mixtures, and we're back at the CPU with a lot of cores + hyperthreading + special-instruction/fixed-function slaves.

What if Sony chose a more PC-oriented CPU for the PS4 platform instead of an IBM Cell chip? Backwards compatibility would be a big issue: how could they simulate the dedicated SPE subtask programs? Of course, today the Cell platform is moving in the same direction as everyone else: more PPU threading cores, more SPE slave units.
 
What if Sony chose a more PC-oriented CPU for the PS4 platform instead of an IBM Cell chip? Backwards compatibility would be a big issue: how could they simulate the dedicated SPE subtask programs? Of course, today the Cell platform is moving in the same direction as everyone else: more PPU threading cores, more SPE slave units.
There are already (very slow) emulators publicly available. Of course, it wouldn't be simple to emulate the whole Cell, but it wasn't simple to emulate the PS2 either ;)
_and_
I didn't say all consoles will _have_ to use PC tech, maybe none of them will. But the more they move toward the PC market, the more they'll obviously be of interest to the PC players.

What I wanted to say is that there might be no dedicated GPU, just CPUs with a lot of cores, with all kinds of tricks to hide latency. I think Sony's/IBM's approach with the Cell might be very suitable for this kind of hybrid. They could either add some more of them to the Cell and support texture/format-conversion intrinsics, or they could add FF units that look just like SPUs, so other SPUs could query them for texture sampling tasks. I think Wikipedia said that 32 SPUs were planned anyway, and there were rumors that Sony planned to have several Cells in the PS3, so maybe those plans are just deferred to the PS4.

(With respect to a lot of simple cores with a lot of threads, I'd say Sun's Niagara is in some way what I'm thinking of. Instead of its security units, they could put TMUs on it and it would be my fricking future device :). Of course, it's missing the math power etc., but it shows how it could be done.)
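To make the "SPUs querying a fixed-function unit" idea a bit more concrete, here's a toy sketch (not real Cell/SPU code; the queue class is a stand-in I made up) of how a core could kick off a texture batch and hide the latency behind its own math:

```cpp
// Toy illustration: an SPU-like core queues texture work to a shared
// fixed-function sampler and overlaps its own ALU work with that latency.
#include <cstddef>
#include <vector>

struct TexRequest { float u, v; };
struct TexResult  { float rgba[4]; };

// Stand-in for a hardware request queue. Here submit() services the batch
// immediately so the example runs as-is; real hardware would return at once
// and done() would poll a completion flag written back by the sampler.
class TextureUnitQueue {
public:
    TextureUnitQueue() : finished(false) {}
    void submit(const std::vector<TexRequest>& reqs, std::vector<TexResult>& out) {
        out.resize(reqs.size());
        for (size_t i = 0; i < reqs.size(); ++i) {
            out[i].rgba[0] = reqs[i].u;   // fake "sampling": encode the coords
            out[i].rgba[1] = reqs[i].v;
            out[i].rgba[2] = 0.0f;
            out[i].rgba[3] = 1.0f;
        }
        finished = true;
    }
    bool done() const { return finished; }
private:
    bool finished;
};

void shade_block(TextureUnitQueue& tmu,
                 const std::vector<TexRequest>& reqs,
                 std::vector<float>& independent_work)
{
    std::vector<TexResult> results;
    tmu.submit(reqs, results);                       // kick off the texture batch
    for (size_t i = 0; i < independent_work.size(); ++i)
        independent_work[i] = independent_work[i] * 0.5f + 1.0f;  // unrelated math hides latency
    while (!tmu.done()) { /* or yield to another software thread */ }
    // results[] are now valid and can be used by the rest of the shading code.
}
```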
 
Anyway, one thing is clear: by the time the next-gen systems come out, it's likely that Sony will no longer be able to produce the chip "in house".

By the way, I'm still working on a proper response to your comment, Mintmaster ;)
 
Entropy said:
Specifically, the IP situation makes it difficult to design the CPU+GPU as a whole.
Well at least one IP maker has a solution coming that fits the bill. And going by some recent patents I don't think they'll be alone for very long.

Thing is, given how utterly horrible the PPC cores in 360/PS3 are this gen, a future console dumping the concept of a "general purpose" core completely and going fully with stuff like what's found in Larrabee doesn't even sound like a difficult transition anymore. And as we both agreed, it's not like we have some huge growing demand for conventional GP CPU power - we're already past the point where that matters.
The flipside is that we have no actual evidence yet of how such a concept will perform at real-world GPU tasks, and then there's the much more serious problem of having software that drives it, since we're no longer dealing with fixed hw taking care of the ugly stuff.
Intel at least claims they will have all the answers there, but it remains to be seen.
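To make "the ugly stuff" concrete: triangle setup and coverage testing, which a conventional GPU handles in fixed hardware, would just be loops in Larrabee's software renderer. A purely illustrative scalar version (not Intel's code; a real one would run 16-wide in SIMD over screen tiles) might look like:

```cpp
// Toy scalar coverage test for one triangle over a pixel grid.
struct Vec2 { float x, y; };

// Signed edge function: > 0 when p lies to the left of the edge a -> b.
static inline float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Marks covered pixels for a counter-clockwise triangle v0, v1, v2.
void rasterize(const Vec2& v0, const Vec2& v1, const Vec2& v2,
               int width, int height, unsigned char* coverage)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };     // sample at the pixel centre
            bool inside = edge(v0, v1, p) >= 0.0f &&
                          edge(v1, v2, p) >= 0.0f &&
                          edge(v2, v0, p) >= 0.0f;
            if (inside) coverage[y * width + x] = 1;
        }
    }
}
```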

Its success is an important example in many respects, but it makes the next generation less predictable, less likely to be just more of the same. Much more interesting to talk about. :)
Less predictable means more likely our speculations aren't even remotely close to the truth :p I totally agree that just moving in one direction isn't cutting it anymore though.


Mintmaster said:
It'll probably be one die, too, with more flexible usage.
Tons of identical cores + general-purpose eDRAM. Hey, it's like the very old Cell patents. Not that I'd complain. :p

nAo said:
Are you advocating one ASIC (CPU+GPU+EDRAM)?
As I said above, unless the 'CPU' has something much better to offer than the PPCs in existing consoles, I'd rather we just drop them altogether and deal with the small cores for everything.
 
As I said above, unless the 'CPU' has something much better to offer than the PPCs in existing consoles, I'd rather we just drop them altogether and deal with the small cores for everything.
That's a bit ambitious, isn't it?

Cores in a GPU are organized and fed data in a different way than cores in a Cell-esque CPU. I don't think we're ready for that type of radical design yet.

IMO software is really going to make 95% of the difference next gen. You want it to be as easy to program as possible, and make it cheap so that the userbase grows quickly and studios can afford top-notch content.
 
Mintmaster said:
That's a bit ambitious, isn't it?
Maybe it doesn't need to be.
If we take Larrabee, its cores are specced roughly on par with current console PPCs (L1, shared L2, in-order), so we're probably looking at per-clock parity in GP work (but really, it could easily get better), and that would already be a step up (this generation was a step down in per-clock performance relative to last-gen CPUs - even including the R5900 and SH4).

It comes down to whether you believe there is still big growth to come in single-threaded GP power requirements, and I don't.


As for how future Cell might map into this scenario - I think Panajev has been studying patents on that topic so you might want to ask him about it more.
Completely agree on the software thing though, even more so if 'GPU' becomes largely a software-driven concept.
 
I can see Nintendo going back to cartridges. Flash only gets cheaper; look at the price of a 2GB USB drive, outrageously low! And it's plenty of space to store a game already. You can have 1GB games, 8GB games, etc.

The resolution target would be 720p with 4x MSAA (while still allowing 1080p).
An AMD Fusion SoC would power the thing: four Bobcat cores and something like a quarter of the RV770's SPs, with low power as a main concern. Starting on 40nm?
No eDRAM, 1GB of GDDR5 on a 64-bit bus.
 
If Intel is to be believed, Larrabee is way too power hungry (~300 watts),
and this is with Intel's process excellence.
The other thing is that it's huge.

I don't think that by the time the next-generation systems come out this kind of design will be cost efficient.
Not enough perf, either per mm² or per watt.

I still don't buy into the sea-of-CPUs concept; ATI has just shown us how "cheap" power can be.
OK, the number of Tflops might no longer be relevant, but why spend a third more, or even twice as much, silicon?

Later on is another story: Intel has managed to make memory cells out of only two transistors!
When everybody has access to this kind of tech, then... some problems will be solved:
lots of bandwidth and lots of space on chip ;)

Mintmaster, I don't want to talk about GTA4; I somewhat agree with you, but it's a really polemical subject so I'll avoid it ;) (I thought a lot about it, but I didn't find a way to express my opinion that wouldn't get this thread easily derailed by some...)

In regard to the two-GPU design, I agree it won't happen (I spoke about it earlier mainly because I underestimated what a GPU provider would be able to deliver).

For the design, it really depends on the manufacturer's silicon budget.
Kietech favors a silicon budget of 400mm²; my guess was more between 300 and 350.

If I understand your comment, you would put the silicon budget at 300mm² maximum.
I think nothing would beat, in efficiency, a tiny quad-core CPU + a GPU.

A quad-core improved Xenon would be tiny, and that would leave room for the GPU (enough that it could have power left over for GPGPU calculations).
Not sure about the eDRAM, as fast RAM could be cheap.
I agree that under 300mm² the CPU & GPU could be on the same chip, Fusion-like.

The other thing is that GPGPU is here and growing; the software will be there, everybody is currently jumping in.
 
I can see Nintendo going back to cartridges. Flash only gets cheaper; look at the price of a 2GB USB drive, outrageously low! And it's plenty of space to store a game already. You can have 1GB games, 8GB games, etc.
At 8 GB they are still looking at a couple of bucks off their margins in three years, compared to pennies for discs. A couple of bucks is too much. The 50 GB of a BR disc is completely out of reach. Unless flash density (on top of process shrinks) or production cost per mm² changes by an order of magnitude, this is just a pipe dream for the near future.

As a storage device it's great, I don't think we will be seeing HDs anymore, but as a distribution device it sucks. Of course physical distribution in general will go the way of the dodo, but not this soon.
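Just to put rough numbers on that (both prices are assumptions for illustration, not quotes): if NAND gets down to around $0.25-0.50 per GB within three years while a pressed disc stays somewhere under a dollar, then per unit

$$
8\ \text{GB} \times \$0.25\text{--}0.50/\text{GB} \approx \$2\text{--}4
\qquad \text{vs.} \qquad
\lesssim \$1\ \text{per pressed disc},
$$

which is the "couple of bucks" of margin per unit mentioned above, multiplied across tens of millions of units.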
 
The other thing is that GPGPU is here and growing; the software will be there, everybody is currently jumping in.

Out of curiosity..

Who is everybody & what signs do we have currently that it's an area games developers will be looking towards..?

IMO with games nowadays having the vast majority of their technical demands being purely rendering & graphics-based, I *really* don't see why any developer would consider using any of their GPU silicon for anything other than graphics/rendering-related tasks (including animation & physics)..?
 
At 8 GB they are still looking at a couple of bucks off their margins in three years, compared to pennies for discs. A couple of bucks is too much. The 50 GB of a BR disc is completely out of reach. Unless flash density (on top of process shrinks) or production cost per mm² changes by an order of magnitude, this is just a pipe dream for the near future.

As a storage device it's great, I don't think we will be seeing HDs anymore, but as a distribution device it sucks. Of course physical distribution in general will go the way of the dodo, but not this soon.

Also, if Nintendo are hoping to provide hardware more in line with what the current-gen performance systems are offering in terms of HD graphics, I fail to see how even 8GB of game storage will ever be sufficient going forward..
 
Out of curiosity..

Who is everybody & what signs do we have currently that it's an area games developers will be looking towards..?

IMO with games nowadays having the vast majority of their technical demands being purely rendering & graphics-based, I *really* don't see why any developer would consider using any of their GPU silicon for anything other than graphics/rendering-related tasks (including animation & physics)..?
Everybody; I think that a lot of people are pushing the software for super-parallel architectures.

Intel and Apple have just entered the game, MS will obviously follow, plus obviously both ATI/AMD and Nvidia and a lot of smaller players.

It's pretty clear that GPGPU is happening now and will only get bigger.
Higher-level languages, libraries, etc. are on their way.

For games? See PhysX and Nvidia's push, Havok with ATI and Intel, and there will be more as the tools make programming easier.
I'm sure AI will benefit too.

I would really like to see Insomniac talk about their engine and how it could work on current or upcoming GPUs (there should be ~two generations of GPUs before the next-generation systems).

It's the beginning and it's now. (I'm slightly enthusiastic in this regard :LOL: , but on the other hand I see no reason to feel any other way)
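For a concrete picture of the kind of non-graphics work that maps well onto these architectures, here's a simplified particle-physics update in plain C++ (just the data layout and loop shape; not PhysX, Havok, or any specific GPGPU API):

```cpp
// Structure-of-arrays particle state: contiguous streams of floats are the
// kind of data both GPUs and SPUs chew through efficiently.
#include <cstddef>
#include <vector>

struct Particles {
    std::vector<float> px, py, pz;   // positions
    std::vector<float> vx, vy, vz;   // velocities
};

// One "kernel": every particle is independent, so this loop body could be
// launched as thousands of GPU threads or split across SPUs just as easily.
void integrate(Particles& p, float dt, float gravity)
{
    for (size_t i = 0; i < p.px.size(); ++i) {
        p.vy[i] -= gravity * dt;     // apply gravity to the vertical velocity
        p.px[i] += p.vx[i] * dt;     // advance positions
        p.py[i] += p.vy[i] * dt;
        p.pz[i] += p.vz[i] * dt;
    }
}
```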
 
It's the beginning and it's now. (I'm slightly enthusiastic in this regard :LOL: , but on the other hand I see no reason to feel any other way)

I understand your enthusiasm, as GPGPU does offer a lot..

I am however concerned as to how it can benefit games development (on consoles more specifically)..

Granted, I'm sure processing such as AI, physics & animation could probably benefit from a speed boost over traditional CPUs; however even the CPUs of today are hardly "traditional" (CELL in particular) & if, going forward, we're going to see the next generation of CPUs advancing in terms of parallel processing capacity (8+ cores, tens of threads, wider SIMD & greater floating point perf etc..) then I STILL don't see why any console developer would have any reason to move all these non-graphics-centric tasks onto (what is, in the console space, essentially) an already overloaded GPU..

So far I've only seen arguments in the vein of "whether" these tasks can be done on the GPU not necessarily "why" you'd want to do it..

Besides that, do modern GPGPU solutions provide the means to execute both core graphics & GP code at the same time..?

I dunno but if anyone else does then it would be great to get some insight into this..

Overall it seems to me that GPGPU's main benefits currently lie in the HPC & scientific computing spaces, & possibly also in the PC space where, for example, an SLI rig can utilise one of the cards as a GPGPU processor to help with the GPU/CPU workload; however there really doesn't seem to be any demand for this in the console space & I'd be pretty surprised to see it make any kind of dent unless for some reason GPGPU programming becomes so pervasive that the switch is made for reasons other than any kind of technical or performance benefit..
 
It seems that both Nvidia and ATI are making an effort to make switching between tasks on the "multiprocessor/whatever" easier (more memory/registers).

In regard to load balancing between CPU and GPU, I guess it's a matter of choice ;)
If you want the CPU to do a lot of the work, it's likely that the GPU won't have enough resources to do anything outside of its graphics jobs.

The real question (which I can't answer) is: if manufacturers go with a tight silicon budget, where do you have to run the different parts of the code to achieve your performance goals within your budget limitations?
Like the SPEs, GPU "multiprocessors" offer a lot of power.

By the way, in regard to the earlier speculation about transistor density at 32nm, I'm amazed that ATI packs almost 1 billion transistors into 256mm² at 55nm :oops:
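To put numbers on that (the RV770 figures are the published ones; the 32nm part is just idealized area scaling, so treat it as an upper bound):

$$
\frac{956\ \text{M transistors}}{256\ \text{mm}^2} \approx 3.7\ \text{M/mm}^2,
\qquad
3.7 \times \left(\tfrac{55}{32}\right)^2 \approx 11\ \text{M/mm}^2\ \text{at 32nm (ideal)},
$$

so a ~300mm² die at 32nm could in theory hold on the order of 3 billion transistors, though real scaling never hits the ideal.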
 
Maybe it doesn't need to be.
If we take Larrabee, its cores are specced roughly on par with current console PPCs (L1, shared L2, in-order), so we're probably looking at per-clock parity in GP work (but really, it could easily get better), and that would already be a step up (this generation was a step down in per-clock performance relative to last-gen CPUs - even including the R5900 and SH4).
Right, but we have no idea how Larrabee will compare to chips from ATI and NVidia in sustained performance for graphical workloads, particularly immediate mode rendering.

I'm not sure Intel is going to even try achieving the amazing latency hiding and thread interleaving that ATI and NVidia do with their shader engines. The way the register file, texture units, thread scheduler, etc. are organized just seems too specialized for graphical workloads.
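A back-of-the-envelope way to see why that specialization matters (the numbers are purely illustrative, not any particular chip): if memory latency is a few hundred cycles and a shader does only a handful of ALU cycles between fetches, each lane needs roughly

$$
\text{threads in flight} \approx \frac{\text{memory latency}}{\text{ALU cycles between fetches}} \approx \frac{400}{8} = 50
$$

threads to stay busy, which is what the huge register files and hardware thread schedulers in current GPUs are sized to sustain.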
 