Predict: The Next Generation Console Tech

Status
Not open for further replies.
What do you think about big.LITTLE setups where two sets of CPU cores never collaborate on tasks? i.e. one CPU for the OS and apps, and the other strictly for games.

Why not just have little.little.little.little (x2)? Or in the case of Durango/Orbis, jaguar.jaguar.jaguar.jaguar (x2) ;-)

That way you save on HW system complexity. Just have a single pool of memory with a bit reserved for the OS, and a many-core CPU with a single core reserved for the same, like this gen. Simpler and more elegant.
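The "single core reserved for the OS" idea above can be sketched in userland terms with CPU affinity. A minimal Linux-only sketch (the choice of core 0 as the OS core is an assumption for illustration):

```python
import os

# Sketch: reserve core 0 for the OS by pinning this (game) process
# to the remaining cores. Linux-only; core numbering is illustrative.
all_cores = os.sched_getaffinity(0)   # set of cores we may currently run on
game_cores = all_cores - {0}          # leave core 0 to the OS
if game_cores:                        # don't strand ourselves on a 1-core box
    os.sched_setaffinity(0, game_cores)

print(sorted(os.sched_getaffinity(0)))
```

On a real console the reservation would be enforced by the system software rather than by the game, but the effect on the game-visible core count is the same.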

big.LITTLE is only useful in portable devices where you have major power concerns. Not a problem for consoles.
 
PlayStation 1, 2 and 3 were some kind of trilogy of custom hardware! Now PlayStation 4 is kind of a reboot. :cool:

The PSP also had custom hardware, and now the Vita literally has off-the-shelf hardware!
 
PlayStation 1, 2 and 3 were some kind of trilogy of custom hardware! Now PlayStation 4 is kind of a reboot. :cool:

The PSP also had custom hardware, and now the Vita literally has off-the-shelf hardware!

It is a good thing that Sony is no longer using custom chips for its next-gen console. I will take AMD's GPU over anything that Sony can come up with.
 
Single-threaded performance.

But Jaguar's single-threaded performance will still be leagues ahead of the current-gen consoles' (which also had very poor single-threaded performance for their time, yet compensated somewhat with more cores).

Sure, Jaguars aren't going to be competitive with Intel's latest and greatest PC CPU cores, but if current consoles can run the same games as PCs do with seven-year-old, shitty single-threaded CPU performance, then one needs to ask the question whether Core i7-level CPU performance is even necessary in a console, and whether video game CPU workloads (i.e. predominantly game code, AI, animation, etc.) are embarrassingly parallel enough that a CPU with many smaller cores plus GPGPU would suffice.

Suffice to say that I believe several small, relatively low-throughput CPU cores would be more beneficial in a game console, provided a significant portion of the system TDP and transistor budgets was allocated to a monster GPU.

If either next-gen console gets kitted out with a custom many-core Jaguar-based CPU (think 8-16 real cores), clocked high (say 2-3 GHz), and a beastly GPU, then I think it would prove a much better system than one with the same GPU and, say, 1-3 phat, fast Steamroller cores.
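The many-small-cores-vs-few-big-cores trade-off above can be sketched with Amdahl's law. All the numbers here are made up for illustration (8 small cores vs 3 cores with twice the per-core speed); the point is that the winner flips with the parallelizable fraction of the workload:

```python
def amdahl_throughput(cores, per_core_speed, parallel_fraction):
    """Relative throughput under Amdahl's law: the serial part of the
    workload runs on one core, the parallel part scales across all of them."""
    serial_time = (1 - parallel_fraction) / per_core_speed
    parallel_time = parallel_fraction / (per_core_speed * cores)
    return 1.0 / (serial_time + parallel_time)

# Hypothetical: 8 small cores vs 3 big cores with 2x the per-core speed.
# At 95% parallel, the small cores win (~5.93 vs ~5.45);
# at 90% parallel, the big cores win (~4.71 vs 5.00).
print(amdahl_throughput(8, 1.0, 0.95), amdahl_throughput(3, 2.0, 0.95))
print(amdahl_throughput(8, 1.0, 0.90), amdahl_throughput(3, 2.0, 0.90))
```

Which is exactly why the "embarrassingly parallel enough?" question in the post matters: the many-core design only pays off if game code actually reaches that parallel fraction.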
 
Last edited by a moderator:
Why is that? Isn't an APU just a CPU+GPU on a single chip? Why does adding another dedicated GPU make little sense?
For the same reason we don't put dual GPUs in any device. If the GPU part of the APU is dedicated to GPGPU functions, then you artificially limit the system's flexibility, while running dual GPUs adds complexity where you could instead spend the same silicon budget on a single larger GPU.
 
Why is that? Isn't an APU just a CPU+GPU on a single chip? Why does adding another dedicated GPU make little sense?

The two "GPUs" would compete for memory, for one, and a lot of the well-written, interesting GPGPU code and shaders today are memory-limited, not ALU-limited.
If you're putting a separate GPU in the box anyway, why not put the ALUs there, so you can use them for rendering or computation?
The only reasons I can think of to do that are either that you can't, because it's too expensive or hot to have them in that package, or that the ALUs on the APU are different from the ALUs on the discrete chip.
Then there's the question of what your cost-reduction plan going forward is.
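The memory-limited vs ALU-limited distinction above is the roofline model in miniature: attainable throughput is the lesser of the compute peak and bandwidth times arithmetic intensity. The peak figures below are hypothetical, chosen only to show that a low-intensity shader gains nothing from extra ALUs:

```python
def roofline_gflops(arith_intensity, peak_gflops, peak_bw_gbs):
    """Attainable GFLOPS = min(compute roof, bandwidth * FLOPs-per-byte)."""
    return min(peak_gflops, peak_bw_gbs * arith_intensity)

# Hypothetical GPU: 1800 GFLOPS peak ALU, 170 GB/s memory bandwidth.
peak_flops, peak_bw = 1800.0, 170.0

# A shader doing ~2 FLOPs per byte fetched is bandwidth-bound:
print(roofline_gflops(2.0, peak_flops, peak_bw))   # 340.0 - more ALUs won't help
# One doing ~16 FLOPs per byte hits the compute roof:
print(roofline_gflops(16.0, peak_flops, peak_bw))  # 1800.0 - ALU-bound
```

So a second GPU sharing the same memory pool mostly just splits the bandwidth that the memory-bound code was already starved for.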
 
For the same reason we don't put dual GPUs in any device. If the GPU part of the APU is dedicated to GPGPU functions, then you artificially limit the system's flexibility, while running dual GPUs adds complexity where you could instead spend the same silicon budget on a single larger GPU.
What if it's not dedicated to GPGPU functions? What if it's flexible, like the Cell processor (switching between functions at any given moment)?
 
It is good thing that Sony is no longer using custom chips for its next-gen console. I will take AMD's gpu over anything that Sony can come up with.

To be fair, Sony did use an off-the-shelf GPU last time.

If the two consoles are going to be so well matched (same-ish APU?), I hope that Sony goes out of its way to give us some option for backwards compatibility. Maybe make those stacked DSPs be SPUs, or have that plug-in BC module they patented available.

If I have my choice of two identical systems next gen, and the 720 does backwards compatibility and PS4 does not, I've got nothing keeping me in Sony's corner.
 
The two "GPUs" would compete for memory, for one, and a lot of the well-written, interesting GPGPU code and shaders today are memory-limited, not ALU-limited.
If you're putting a separate GPU in the box anyway, why not put the ALUs there, so you can use them for rendering or computation?
The only reasons I can think of to do that are either that you can't, because it's too expensive or hot to have them in that package, or that the ALUs on the APU are different from the ALUs on the discrete chip.
Then there's the question of what your cost-reduction plan going forward is.

Have an APU+GPU with the goal of putting the GPU entirely into a later rev of the APU?
 
To be fair, Sony did use an off-the-shelf GPU last time.

If the two consoles are going to be so well matched (same-ish APU?), I hope that Sony goes out of its way to give us some option for backwards compatibility. Maybe make those stacked DSPs be SPUs, or have that plug-in BC module they patented available.

If I have my choice of two identical systems next gen, and the 720 does backwards compatibility and PS4 does not, I've got nothing keeping me in Sony's corner.

But they've got complete backwards compatibility with Gaikai. Why bother with the expense of that? Put the money towards a more powerful GPU or smarter HCI instead.
 
Have an APU+GPU with the goal of putting the GPU entirely into a later rev of the APU?

But you could never unify the two GPUs, so you'd end up with a single chip containing a CPU and two GPUs, which is never going to be as efficient as a CPU plus a single GPU.
 
But you could never unify the two GPUs, so you'd end up with a single chip containing a CPU and two GPUs, which is never going to be as efficient as a CPU plus a single GPU.

Because the developers would need to design around the split configuration of an initial implementation, I take it? Wouldn't it be possible to put a thin virtualization layer on top of the APU/GPU pair that would collapse down into a later unified chip?
 
But you could never unify the two GPUs, so you'd end up with a single chip containing a CPU and two GPUs, which is never going to be as efficient as a CPU plus a single GPU.

Developers got Cell's SPEs and RSX synced pretty well, so I don't see why they couldn't sync a couple of GPUs.
 
But they've got complete backwards compatibility with Gaikai. Why bother with the expense of that? Put the money towards a more powerful GPU or smarter HCI instead.

Well, that could do the trick, I suppose, at least for PSN software. Presumably they could do the same for on-disc software if Sony has a copy of everything published in their cloud and the PS4 IDs the disc.

That implies that Sony would have to build up a cloud datacenter of custom PS3-compatible hardware, though. I'd rather they just sell me a little box with a Cell and RSX in it to plug into the back of the PS4 if they're going to go that way.

Gaikai could still be useful to Sony for instant demos and such.
 
The only reasons I can think to do that are either you can't because it's too expensive or hot to have them in that package, or because the ALU's you have on the APU are different than the ALU's on the discrete chip.

And if both of these things are the case?
 
Enough with the off-topic questions that lead down a longer thread of off-topic posts in this thread. Yes, I'm looking at you arijoytunir.
 