Predict: The Next Generation Console Tech

Commodore managed to get a real-time, multi-user, multitasking OS kernel into just 64 KB, so it should be pretty easy with 256 KB!

Exec was never multi-user, but your point is otherwise well taken. The memory doesn't rule out a hypothetical microkernel of some kind (though it does rule out Linux).

It's the SPEs' inability to process external interrupts that would rule out almost anything we'd recognize as an operating system.
 
So according to that article, 3rd-party devs are hoping PS4 doesn't use Cell, while 1st-party devs are fine with it?

Makes for interesting speculation.

Regards,
SB
 
Wouldn't they be better off sticking with a Cell-based system, though? If 3rd parties have their Cell coding practices down by the end of this gen and those can carry over to PS4, wouldn't they be in a better position than if PS4 switches to a different architecture and they have to go back to square one again?

I thought that was kinda the idea ... you know, build a knowledge base this gen that can carry over to the next gen.
 
That plan's viability seemed to hinge on mature frameworks making it easy to demonstrate an advantage for the Cell architecture in games, fueling momentum for the platform and chip that could be tapped going forward.

As far as I can tell, none of that has happened. PC-oriented development still gives a greater head start to a greater number of companies at this point than a Cell derivative does. Unless the Cell model somehow takes off in the near future as the best way to program for parallelism (and the next generation actually requires that kind of computational leap), it may be easiest for Sony to just eat the R&D loss on Cell and drop it.
 
Where does Microsoft's best interest lie with regard to processor architecture?

I wonder if development of their next-generation console platform will coincide with a cheaper and more 'deployable' Surface computer. I don't see why they would waste effort developing two separate systems when a single system would work better to their advantage. They have to get the cost down on their Surface computers, and using the same hardware architecture to drive economies of scale for both platforms seems to make a lot of sense, since both systems can be assumed to use next-generation interface technologies and therefore need a lot of floating-point performance.

Does that make sense or have I lost everyone?
 
IMHO having a CELLv2-based CPU (slightly upgraded PPE and SPEs, not that many more of them compared to PS3's CPU... and a bump in clock frequency) coupled with a DX11-generation GPU, as well as a good increase in RAM, would be best.

You could have a very mature OS (carried over from the PS3 days), libraries, and compilers, and have it in a quite stable state very early on, getting 1st, 2nd, and 3rd parties comfortable with PS4's SDK long before launch (giving another reason to develop for PS3 as well as PS4 [CELL skills would be very valuable]).

Also, as the GPU would jump ahead quite a bit in functionality and versatility (DX9 generation --> DX11 generation), the CELL-based CPU could be freed from some of the graphics processing tasks it currently does to help RSX in PS3.

One more thing: a unified OpenCL+OpenGL environment to access both CPU and GPU would be very cool (especially if they allowed it for homebrew programmers under Linux + Hypervisor 2.0... you could not run PS4 games in such an environment [as they would not be developed under those two libraries exclusively, but would make use of lower-level components], but you could do some really great 3D and 2D anyway :D).
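To illustrate what such a unified environment could look like (purely a sketch against the standard OpenCL C API; the device setup, kernel, and names are made up and nothing here is an actual PS4 interface), the same kernel could run on either the CPU or the GPU just by asking for a different device type:

Code:
#include <CL/cl.h>
#include <stdio.h>

/* A trivial kernel; OpenCL compiles it at runtime for whichever
   device we picked, CPU or GPU. */
static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= k;"
    "}";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* CL_DEVICE_TYPE_DEFAULT could be the CPU (the SPEs, in this
       daydream) or the GPU; the host code below does not change. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    float data[4] = { 1, 2, 3, 4 };
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof factor, &factor);

    size_t global = 4;                 /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);
    printf("%g %g %g %g\n", data[0], data[1], data[2], data[3]);
    return 0;
}

(Error handling omitted for brevity.)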

Also, BC will be important IMHO... PS3 games running on a PS4-era TV will look much better than PS1 games running on a 40'' 1080p TV... that is what hurts PS1/PS2 titles running on PS3 and/or PS3-era TVs.
 
That plan's viability seemed to hinge on mature frameworks making it easy to demonstrate an advantage for the Cell architecture in games, fueling momentum for the platform and chip that could be tapped going forward.

As far as I can tell, none of that has happened. PC-oriented development still gives a greater head start to a greater number of companies at this point than a Cell derivative does. Unless the Cell model somehow takes off in the near future as the best way to program for parallelism (and the next generation actually requires that kind of computational leap), it may be easiest for Sony to just eat the R&D loss on Cell and drop it.

Exactly.

The bottom line is that this generation is maturing, yet current programming practices are still producing substandard PlayStation 3 multi-platform releases.

PC and 360 form one closely intertwined branch of development, and we are still seeing PS3 handled as a separate branch with fewer resources, often resulting in sub-par conversions. Need for Speed Undercover is a top-tier release from EA, and it's shockingly poor on PS3. It's like time-warping back to 2007. While multi-format releases have clearly improved since then, 360 still has the technical edge in the majority of new releases.

PS4 needs to align itself back into that main branch of development. It's a clear win for Sony if Microsoft's advantage in cross-platform development is nullified, and there's nothing to stop first parties still getting their extra performance out of the hardware.
 
PS4 needs to align itself back into that main branch of development. It's a clear win for Sony if Microsoft's advantage in cross-platform development is nullified, and there's nothing to stop first parties still getting their extra performance out of the hardware.

Aside from future Intel and AMD processors having caches and HW prefetch instead of manual Local Store handling (also, I said CELLv2, not an overclocked CELLv1... IBM mentioned PPE and SPE improvements in their CELL roadmap), do you think that developing for a many-core PC will be that different from targeting a bunch of SPEs?

Say that SPEv2 has more LS, and compiler-assisted software caches let programmers treat the SPE more efficiently as a cache-based processor... then what's the BIG difference in programming style between a six-core Intel chip and a CPU with, say, six SPEs?
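To make the comparison concrete, here is a rough sketch (buffer sizes and names made up; the MFC calls are the spu_mfcio.h intrinsics from the Cell SDK) of the 'manual' Local Store version next to the plain cache-based version:

Code:
#include <spu_mfcio.h>   /* Cell SDK intrinsics: mfc_get() etc. */
#include <stdint.h>

/* 16 KB per transfer (the MFC's per-DMA maximum), comfortably
   inside the 256 KB Local Store. Assumes 'bytes' is a multiple of 16. */
#define CHUNK 16384
static float ls_buf[CHUNK / sizeof(float)] __attribute__((aligned(128)));

float sum_spe(uint64_t ea, uint32_t bytes)
{
    float sum = 0.0f;
    const uint32_t tag = 0;
    for (uint32_t off = 0; off < bytes; off += CHUNK) {
        uint32_t n = (bytes - off < CHUNK) ? bytes - off : CHUNK;
        mfc_get(ls_buf, ea + off, n, tag, 0, 0);  /* main memory -> LS */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();                /* wait for the DMA */
        for (uint32_t i = 0; i < n / sizeof(float); i++)
            sum += ls_buf[i];
    }
    return sum;
}

/* The cache-based equivalent: the hardware moves the data for you. */
float sum_cached(const float *p, uint32_t count)
{
    float sum = 0.0f;
    for (uint32_t i = 0; i < count; i++)
        sum += p[i];                 /* HW prefetch hides the latency */
    return sum;
}

In practice you would double-buffer (kick off the next mfc_get while summing the current chunk) to hide the latency, which is exactly what the cache and prefetcher do automatically in the second version, at the cost of determinism.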

CELL is not the Achilles' heel of PS3, nor the major complaint developers have with the console, AFAIK.

Let the OS, compilers, and libraries mature... give the system a GPU that needs much less help with graphics tasks and has a saner connection to the CPU (maybe a UMA design for the console too... if they can provide enough bandwidth and make sure the GPU does not eat it all up, leaving the CPU with nothing to feed its units), and developers will be much happier.

The Xbox 360 is great because they stabilized the software environment early on, and HW-wise it is a great "transitional" console between the PS2 approach to graphics and the many-core, real-men SIMD (vertical/SIMT/scalar) future, IMHO.
 
PC and 360 form one closely intertwined branch of development, and we are still seeing PS3 handled as a separate branch with fewer resources, often resulting in sub-par conversions.

PS4 needs to align itself back into that main branch of development.
But is that main branch still going to exist in its current easy form? Looking at the alternatives, Larrabee in PS4 makes less sense than Cell2, as it's an unknown, whereas Cell2 is 'more of the same'. That leaves standard multicore as the only option, and it is processing-bottlenecked. Unless games really don't progress much and next gen is nothing more than a graphical facelift, fast processing throughput will be the order of the day, needing lots of vector units. The PC space is talking about getting that performance from GPUs and CUDA etc., which AFAIK is a bigger pain in the butt than Cell development!

I think next gen, a Cell2 system will be the easiest to transition to, comparing the new gen of console to the previous gen. Going from XB360 to XB4000 will possibly be as problematic as PS3 development is now. Lots of us were saying this gen that Sony were making the difficult change now, but that it'd pay off next gen. What are PC+XB360 devs who shy away from Cell going to have to work with next gen? Will they be safe and comfortable in their development environments, able to achieve good results thanks to automagical parallelisation tools that Cell can't use, or will those environments have the same problems that Cell development has now, so the devs will have to learn new skills anyway?

That's what it all comes down to. Conventional processing cannot provide the throughput needed for data-heavy 'media' tasks. Fat, parallel float processing is necessary. The only way to get that is lots of small, simple cores, which needs new programming paradigms, which needs new skills to be learnt. Reticent developers may be able to hold onto simplified systems, but not forever, unless technology is going to hit a brick wall and get nowhere. And if you're going to learn new development paradigms for Cell, transitioning them to Cell2 should be as easy as following x86 development from its early days to the P4.
 
I think next gen, a Cell2 system will be the easiest to transition to, comparing the new gen of console to the previous gen. Going from XB360 to XB4000 will possibly be as problematic as PS3 development is now. Lots of us were saying this gen that Sony were making the difficult change now, but that it'd pay off next gen. What are PC+XB360 devs who shy away from Cell going to have to work with next gen? Will they be safe and comfortable in their development environments, able to achieve good results thanks to automagical parallelisation tools that Cell can't use, or will those environments have the same problems that Cell development has now, so the devs will have to learn new skills anyway?

That's what it all comes down to. Conventional processing cannot provide the throughput needed for data-heavy 'media' tasks. Fat, parallel float processing is necessary. The only way to get that is lots of small, simple cores, which needs new programming paradigms, which needs new skills to be learnt. Reticent developers may be able to hold onto simplified systems, but not forever, unless technology is going to hit a brick wall and get nowhere. And if you're going to learn new development paradigms for Cell, transitioning them to Cell2 should be as easy as following x86 development from its early days to the P4.

A DX11 GPU + smallish dual- or quad-core OoO CPU will be the main dev stream.

Cell2 will be a sideline development, the same as Cell1 is today for third parties. That seems pretty obvious to me. Of course, who knows what will happen in five years' time.
 
You always have the hardwired option. It is not as "pretty", but AMD is betting everything on the concept of accelerators. Things like the 130nm PPU still beat the GTX280 (although the latter has 10x or more the transistors and much higher clock speeds), and that AIPU still seems the best thing in pathfinding. MS has had R&D on voice-recognition HW...

As AMD said, it will give the performance without the problems of many-core parallelism, but you sacrifice flexibility.

I guess many devs would like that, but some would probably feel caged.

Anyway, if you got something like a custom X3 (2MB) + PPU + AIPU + InterfacePU + whateverPUs, it would probably be smaller than an X4 Phenom II or i7; with a 4670-level GPU (for 2011), wouldn't that be a good piece of HW for a 2011 console?

I also wonder if something like an SSD for fast loading couldn't be as important as RAM and processing power themselves; after all, they start at 200 MB/s (right?).
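For rough scale: at 200 MB/s, streaming in even 512 MB worth of assets from an SSD would take about 2.5 seconds, while a 2x Blu-ray drive like PS3's (around 9 MB/s) would need close to a minute for the same data.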
 
So we can look forward to minimal progression of games and nothing but fancy graphics? Someone wake me up when next-gen is over...

I don't really see why progression in the gaming experience will be held back by the hardware anymore; surely it's the software design and interface that will be key?

What are you expecting or hoping will happen in game progression that can't be done on next year's standard high-end DX11 GPU and quad-core CPU?
 
Do you think that pushing a quad-core CPU (with SMT, so we are talking about 8 HW threads) will be trivial?

I do not think Xbox 360 programmers are still stuck with the basic threaded engine they could get away with in 1st-generation Xbox 360 games. And while platform-specific optimizations and optimal practices might vary between Xbox 360 and PS3, leading with PS3, and understanding how to push the SPEs correctly and partition work between them in a non-trivial way, benefits coding for Xenon's tri-core, VMX128-enhanced set-up.

Do you think that adding another core (or more, as PCs will have 6-8 cores by then) and two more HW threads will simplify developers' work in designing a properly multi-threaded engine?

What's so different about making sure your code does not touch more than 256 KB at a time per thread, compared to doing the same in terms of cache lines and cache working sets (with their non-deterministic latency) on the PC?
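To put the parallel in code, a contrived sketch: the same blocking discipline, where each tile's working set stays well below 256 KB, is exactly what you want on a cached PC core too (TILE and the per-tile normalization are made up for illustration):

Code:
#include <stddef.h>
#include <math.h>

/* 32 K floats = 128 KB per tile: fits an SPE's Local Store with room
   to spare, and also sits comfortably in a PC core's L2 cache. */
#define TILE (32 * 1024)

void normalize_tiles(float *data, size_t count)
{
    for (size_t base = 0; base < count; base += TILE) {
        size_t n = (count - base < TILE) ? count - base : TILE;
        float *t = data + base;

        /* pass 1: find the tile's peak; this pulls the tile in */
        float peak = 0.0f;
        for (size_t i = 0; i < n; i++)
            peak = fmaxf(peak, fabsf(t[i]));

        /* pass 2: rescale; the tile is still hot in cache (or LS) */
        if (peak > 0.0f)
            for (size_t i = 0; i < n; i++)
                t[i] /= peak;
    }
}

Written this way, the second pass is nearly free on both architectures; written as two full passes over the whole array, it thrashes a cache just as surely as it would overflow a Local Store.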
 
What are you expecting or hoping will happen in game progression that can't be done on next year's standard high-end DX11 GPU and quad-core CPU?
All the stuff that we were supposed to expect this gen (and last gen...) but that never happened. Procedural synthesis, AI behavioural physics, 'living cities', yada yada. We have a bit of physics and not much else that's staggeringly wonderful. A lot of CPU power isn't manifesting in anything more than graphical improvements.

And as Panajev and others are saying, how is a quad-core PC CPU any easier to work with than Cell, especially on the back of previous Cell development, if your Cell code is directly portable? It's not like the multicore tools let you write a monolithic piece of code and have it magically converted into an efficient, scalable multicore application. In fact, Cell is gaining some nice features in that regard, because STI are tackling the issue now. It seems to me everyone's kinda expecting Intel to offer a magic bullet: Cell's peak performance with a super-easy codebase. That'd be quite some achievement if so! I think, bang for buck, Cell2 would give the most processing grunt without needing to re-engineer everything for Larrabee or GPGPU. It was a very smart idea back at Cell's conception, and it remains a very smart idea when all anyone else is offering is promises. Anyone who knows Cell and has accumulated 5/6/7 years of experience prior to next gen should see Cell2 as a good move if they honestly evaluate the situation. I dare say those who don't like the idea are in for a rude awakening when the alternatives are presented.
 
And as Panajev and others are saying, how is a quad-core PC CPU any easier to work with than Cell, especially on the back of previous Cell development, if your Cell code is directly portable?

Well, if it's hard to work with an OoO quad-core, then what's the point in considering an in-order 16+ asymmetric-core design? ... unless it's for highly parallel tasks... graphics.

EDIT: And besides, not everyone has worked with Cell and has existing codebases to build on, and most third-party devs haven't come close to trying to max out Cell. So in effect it doesn't change the fact that Cell is not the primary development stream (except for a handful of Sony first-party devs).

Most of the other stuff you mentioned would be done on a next-gen GPU, and complex AI will run quicker on a traditional CPU, no?

So where does that leave us?
 
Well, if it's hard to work with an OoO quad-core, then what's the point in considering an in-order 16+ asymmetric-core design?

Is there another way?

This makes me think about when they asked the devs of COD4 how they made it run at 60 fps on PS3... how they were able to manage it (as opposed to having the game perform well on Xbox 360 only)...

The answer: "we programmed it...".

The golden age of programming, in which the statement "if your code is slow, wait a few months" held true, is basically over... at least until practices for designing software that scales well with an increasing number of cores become more and more common...

Still, it is not going to be easy; single-threaded performance is hitting a wall right now, and multi-core is the answer for bringing system performance up.

Think about LRB... if you want to go beyond the SW-based DX/OGL interface, you will have to deal with a lot of cores with HUGE vector units attached to them.
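As a taste of what 'dealing with' those units means, a made-up example using today's 128-bit SSE intrinsics as a stand-in (LRB's vectors would be four times wider, but the style of code is the same):

Code:
#include <xmmintrin.h>   /* SSE intrinsics */

/* y = a*x + y, four floats per iteration.
   Assumes n is a multiple of 4. */
void saxpy(float *y, const float *x, float a, int n)
{
    __m128 va = _mm_set1_ps(a);                  /* broadcast a to all lanes */
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx)); /* 4 multiply-adds at once */
        _mm_storeu_ps(y + i, vy);
    }
}

Either auto-vectorizers get much better or programmers keep writing this by hand; the wide units do not fill themselves.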

If we could get all the performance we need from a single-core, single-threaded CPU, we would be crazy to go many-core.
 
What's so different about making sure your code does not touch more than 256 KB at a time per thread, compared to doing the same in terms of cache lines and cache working sets (with their non-deterministic latency) on the PC?

I agree in principle that writing parallel code for problems that aren't embarrassingly parallel (as graphics is) can be hard, but let's be serious... writing for SPUs is not like writing threads. Threads all occupy the same address space, and you don't have to DMA strings between cores just to do println debugging!
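A trivial pthreads sketch of that point (the message and names are made up): the main thread reads what the worker wrote simply by looking at the same memory, with no mailbox and no DMA:

Code:
#include <pthread.h>
#include <stdio.h>

static char message[64];   /* visible to every thread: one address space */

static void *producer(void *arg)
{
    (void)arg;
    snprintf(message, sizeof message, "hello from the producer thread");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_join(&t, NULL);   /* the join is all the synchronization needed */
    printf("%s\n", message);  /* direct read; nothing was copied anywhere */
    return 0;
}

On an SPU, that string would live in the producer's Local Store, and you would have to DMA it out to main memory (or push it through a mailbox) before the other side could print it.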

If you think it is, then put out two job reqs -- one requiring SPU experience, the other asking for experience with pthreads. See how many resumes you get back for each.
 