Predict: The Next Generation Console Tech

Nope. The PS3 has an asymmetric multiprocessor CPU. The SPEs are complete processors in their own right, not limited like acceleration/maths units.
Let's agree to disagree: SPUs are VPUs; they can't run an OS by themselves or boot the system, for example.
Anyway, it's mostly semantics and irrelevant in the grand scheme of things.
 
Let's agree to disagree: SPUs are VPUs; they can't run an OS by themselves, for example
Why not? They can run any type of code or algorithm at all. They may not be very efficient at some workloads, but they are complete cores, not just vector processing units. That sort of thinking should be dead by now, given how thoroughly this topic has been covered. Sure, they depend on the PPE to kickstart them, but that's part of the asymmetric design: the CPU is the whole chip, not any part in isolation.
 
Full cooperation for big.LITTLE in the same sense that ARM is initially providing it requires full ISA compatibility. You can't migrate a thread to a core that blows up on instructions it can't handle.
Jaguar is not fully compatible with any Bulldozer core.
How difficult would it be to stop the system from transferring to the Jaguar cores when it is running code that makes use of XOP or FMA? It is only games and other high-performance applications that are likely to use these, so there shouldn't be a significant loss of power savings in doing so.
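A minimal sketch of what that could look like in user space, under my own assumptions rather than anything from AMD or this thread: a Linux-style affinity API, and made-up core numbering with the big BD-class cores as CPUs 0-3 and the Jaguar cores as 4-7. Before running a code path built with XOP/FMA4, the thread restricts its affinity so the scheduler simply cannot move it onto a small core.

```c
/* Sketch: keep a thread that will run XOP/FMA4 code off the Jaguar cores.
 * Assumptions (mine, not from the thread): Linux-style affinity API and
 * hypothetical numbering with big cores 0-3, Jaguar cores 4-7. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

#define BIG_CORE_FIRST 0   /* hypothetical big-core IDs */
#define BIG_CORE_LAST  3

static void pin_current_thread_to_big_cores(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = BIG_CORE_FIRST; cpu <= BIG_CORE_LAST; cpu++)
        CPU_SET(cpu, &mask);
    /* pid 0 = calling thread; it is moved off any disallowed core right away */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");
}

int main(void)
{
#if defined(__FMA4__) || defined(__XOP__)
    /* This translation unit was built with -mfma4/-mxop, so its hot loops
     * may emit instructions the small cores would fault on. */
    pin_current_thread_to_big_cores();
#endif
    /* ... run the FMA4/XOP-heavy game or physics code here ... */
    return 0;
}
```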
 
It's not. Not even the fastest consumer CPU on the horizon in the next year can boast teraflop performance; do you really think they would put one into a budget-priced console? The i7-3960X, a $1K CPU, performs at about 140 gigaflops.

I liked how you refuted the teraflop CPU rumour but not the HMC rumour. :cool:
 
How difficult would it be to stop the system from transferring to the Jaguar cores when it is running code that makes use of XOP or FMA? It is only games and other high-performance applications that are likely to use these, so there shouldn't be a significant loss of power savings in doing so.
I think if the Jaguar core throws an exception upon encountering an instruction that only a BD core implements, it could be handled.
If not, it crashes the program.
The performance penalty could become severe if it happens often enough. Software can try to provide hints or force affinity, and the OS could try to keep track of faulting threads for as long as it can.

That's not what big.LITTLE is about though.
The conservative solution would be to have threads favor the core with the broadest ISA support, which would mean the bigger cores stay up longer than necessary.
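For what it's worth, here is a minimal sketch of the reactive exception-and-migrate idea described above, again under my own assumptions (a Linux-style kernel delivering SIGILL for the unknown opcode, and the same made-up numbering with big cores 0-3). It is an illustration of the mechanism, not how any actual console OS handles it.

```c
/* Sketch: let the small core fault on the unknown opcode, migrate, and retry.
 * Assumptions (mine, not from the thread): Linux-style kernel, SIGILL delivered
 * for the unimplemented instruction, hypothetical big cores 0-3. Returning from
 * the handler without touching the context re-executes the faulting
 * instruction, this time on a core that implements it. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <string.h>

static void on_sigill(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)info; (void)ctx;
    cpu_set_t big;
    CPU_ZERO(&big);
    for (int cpu = 0; cpu <= 3; cpu++)        /* hypothetical big-core IDs */
        CPU_SET(cpu, &big);
    /* The calling thread is moved onto an allowed core before this returns.
     * A real handler would first check whether we are already on a big core
     * (a genuine illegal instruction) and re-raise SIGILL in that case,
     * otherwise this retries forever. */
    sched_setaffinity(0, sizeof(big), &big);
}

void install_isa_fault_migration(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);
}
```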
 
It's not. Not even the fastest consumer CPU on the horizon in the next year can boast teraflop performance; do you really think they would put one into a budget-priced console? The i7-3960X, a $1K CPU, performs at about 140 gigaflops.

If Microsoft is paying IBM for a custom CPU, a teraflop-rated design seems plausible to me. Put in 16 in-order cores with 512-bit vector processing, each core supporting 4 threads, like Larrabee. The IBM A2 has 18 cores running at over 2 GHz.
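For what it's worth, the napkin math works out under those assumptions (mine: 16 single-precision lanes per 512-bit vector unit, with FMA counted as two flops): 16 cores × 32 flops per cycle × 2 GHz ≈ 1 TFLOPS peak.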

If Microsoft is switching to x86 it'll be something off the AMD roadmap most likely.
 
If Microsoft is paying IBM for a custom CPU, a teraflop-rated design seems plausible to me. Put in 16 in-order cores with 512-bit vector processing, each core supporting 4 threads, like Larrabee. The IBM A2 has 18 cores running at over 2 GHz.

If Microsoft is switching to x86 it'll be something off the AMD roadmap most likely.
General-purpose code on a Larrabee sucks. There is a reason Intel sells normal CPUs to run the box and the Phis are used as accelerators only for the appropriate parts of an algorithm.
 
It's not. Not even the fastest consumer CPU on the horizon in the next year can boast teraflop performance; do you really think they would put one into a budget-priced console? The i7-3960X, a $1K CPU, performs at about 140 gigaflops.

They only do 140 GFLOPS when overclocked to around 4.8 GHz, and they're pulling a crap ton of power at that clock too.
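Rough numbers, under my own assumptions rather than any benchmark: peak throughput is roughly cores × clock × flops per cycle, and Sandy Bridge-E with AVX can do 8 double-precision flops per cycle per core, so 6 × 3.3 GHz × 8 ≈ 158 GFLOPS theoretical peak at stock clocks, with sustained results landing below that. Either way, it's nowhere near a teraflop.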
 
The Durango / Orbis leak from last year put the Durango CPU at 1.2 teraflops. Assuming that source was legit, I am guessing that's the total for the whole console at the time. It may have been improved, considering the leaked photos of the alpha kit showed a 6870 / 6950.

Everything I've heard points to the console being modest when it comes to processing power.
 
If Microsoft is paying IBM for a custom CPU, a teraflop-rated design seems plausible to me.
It's also daft. It'd be Cell all over again, creating unnecessary problems for developers. If the peak flops come from the GPU, it's all good and devs can use cross-platform skills and know-how.

I place the possibility of a 1 TF CPU at half as likely as seeing a new, unannounced Cell2 as PS4's CPU. Any 1 TF CPU rumour is either complete crock or someone calling an APU a CPU and counting GPU flops.
 
With the possibility of Kinect 2.0, and games like Madden Football with its physics engine, a teraflop-class CPU could be useful to game developers.
 
I think if the Jaguar core throws an exception upon encountering an instruction that only a BD core implements, it could be handled.
If not, it crashes the program.
The performance penalty could become severe if it happens often enough. Software can try to provide hints or force affinity, and the OS could try to keep track of faulting threads for as long as it can.

That's not what big.LITTLE is about though.
The conservative solution would be to have threads favor the core with the broadest ISA support, which would mean the bigger cores stay up longer than necessary.

I don't see the appeal to Microsoft or Sony of big.LITTLE or a similar system in a non-portable console.

big.LITTLE sacrifices performance/mm2 for improved idle power consumption, a trade-off which doesn't make sense for those console makers.

If AMD could get Jaguar cores and Piledriver modules working together on the same chip, that could be worthwhile, but it is much more difficult to do (although if anyone could do it, I'd imagine it would be AMD).
 
If AMD could get Jaguar cores and Piledriver modules working together on the same chip, that could be worthwhile, but it is much more difficult to do (although if anyone could do it, I'd imagine it would be AMD).

I don't actually think it would be that hard. The coherency protocol between nodes is probably the same, or at least very similar, in Jaguar and BD, so the hardware complexity could be pretty minimal. All the complexity would end up on the OS/scheduling/software side -- you'd have to deal with having many different types of threads on the system.

I don't really think there is much sense in it, though. BD would be less than 3 times faster than Jaguar at single-threaded stuff, and Jaguar would probably be less than 3 times better in throughput per watt or per die area. If the difference were larger, there would be sense in shipping a mixture of them, but as it stands, a big pile of Jaguars or a smaller pile of BD cores would probably do almost everything almost as well, and be a lot simpler to code for.
 
[attached image]

https://twitter.com/marcan42

marcan is a Wii hacker! This seems to be exactly the leak from vgleaks.com: http://www.vgleaks.com/world-exclusive-wii-u-final-specs/

So after all, vgleaks has a genuine source. Therefore the PS4 alpha dev kit leaks also seem to be true.
 