Predict: The Next Generation Console Tech

The info I've posted from the Spanish forumer, who claimed to have info from lherre, is FALSE. lherre has sent me a PM on another forum and told me he has nothing to do with this info.

It's sad there are people out there who have fun with these kinds of stupid things.

My apologies to lherre and the other B3D forumers who have read it.
 
Nvidia is gunning for the supercomputer market, and Microsoft's OS support for ARM is useful in that effort... at least that's how I understand it. ARM for mobile devices and tablets is another aspect of the Nvidia strategy.

Windows is basically useless in HPC. The level of customer flexibility it provides is not appropriate for the market. In addition, the support infrastructure for HPC is not designed around Windows; it is designed around Linux. Trying to build a modern HPC system around Windows is a lost battle.
 
Linux and x86 are heavily entrenched.
Huh? By compute power, over a third of those Linux supercomputers are not x86. Linux is *the* operating system for the non-x86 world -- there are vastly more non-x86 cores running Linux than there are running some other OS (counting only things capable of pre-emption).
 
Hot Chips is coming next month. IIRC, AMD will be showing Jaguar cores as well as some more info on their 7970 GPU, IBM will be showing some Power7+ stuff, etc....
 
More info on the 7970 GPU? I thought we pretty much knew everything there is to know about Tahiti. You mean 8970?
 
More info on the 7970 GPU? I thought we pretty much knew everything there is to know about Tahiti. You mean 8970?

Tomshardware:

AMD will be discussing its current Trinity APU as well as Radeon HD 7970 architecture. Intel will be elaborating on the power management in Ivy Bridge as well as its Medfield Atom Z2460 chip. Much more interesting, however, should be AMD's talks about the Bobcat successor Jaguar as well as die-stacking technology.

Intel has announced a talk about its Knights Corner many-core architecture as well as additional information about the 280 mV IA-32 processor in CMOS that was previously announced at ISSCC 2012. There should also be a talk about Intel's vision of a near-threshold processor, as first discussed during IDF Fall 2011.

...

Other news will include the Power7+ chip from IBM, Oracle's 8-socket 16-core Sparc T5 CPU, as well as Fujitsu's 16-core Sparc64 X chip.

Hot Chips is 8/27-29.
 
That's further down the line than I thought... Damned summer is such a slow season for news overall...
It's like the internet is "half off".

Anyway I'm not holding my breath on the Jaguar cores, I mean from a performance POV.
I read they aim as low as 4.5 W for the TDP, so I would not bet on super potent SIMD either.
I was speculating earlier on the possibility of Jaguar cores sharing a potent 128-bit SIMD unit between two cores; actually I can't find my own post :LOL: I wonder if I posted it somewhere else or discarded it before posting as something over my head that I should not talk about.
The idea is/was that as AMD doesn't benefit from Intel's advantage as far as lithography is concerned, it has to make more trade-offs in order to compete in that restricted power envelope. Also AMD seems bent on not using SMT, ever...

So the idea would be to design Jaguar cores in pairs but without going as far as the Bulldozer approach: both cores would have their own front end and caches, but the FP resources would be shared.
The idea would be to keep the size of the paired cores pretty low.

I was also wondering about AMD copying Intel's approach with the Atom, where a lot of the work is done by the FP/SIMD units. If I got it right during my readings about the architecture, the ALUs in Atom can't do complex operations (division and some others, but I would not trust my memory on that one) and all those operations are handled by the FP/SIMD unit (even for integers).
I wonder if that approach could help AMD design a better/more efficient scalar pipeline. I mean it may remove some complexity from the design, and even with more limited means than Intel they could come up with something pretty efficient (power and perfs).
As for the SIMD, they may recycle the work they did for Bulldozer.

Overall the idea would be to have something better than Bobcat but with almost the same transistor count / footprint. So if 2 Bobcats are ~40 million transistors (without the L2), the Jaguar pair would be in the same ballpark while offering higher performance (I assume that SIMD utilization is not that high on average, so corner cases aside it would not be a significant bottleneck).


EDIT
More speculation: I also wonder if Jaguar cores could be Bulldozer light. I mean, AMD believed in the idea behind Bulldozer, and SMP is everywhere now, even in the mobile space; is it that crazy to speculate that they may have followed the same approach for their low power cores?
 
Just had a short discussion about those crazy Xeon Phis. I am interested in those bastards because of HPC, but would this be a good candidate for a console CPU (probably too late to get considered this gen?).
 
Just had a short discussion about those crazy Xeon Phis. I am interested in those bastards because of HPC, but would this be a good candidate for a console CPU (probably too late to get considered this gen?).
A single Xeon Phi board consumes 300 watts. Also, Xeon Phi has 50+ cores (62 by some documentation, but Intel hasn't yet spilled the beans; they are likely waiting to see how good the yields are).

Xeon Phi has four-way HT, so it has 200+ logical cores (hardware threads). Programming that kind of massively parallel CPU would be a much bigger step than transitioning from single core consoles to multicore ones (6 logical cores in 360 and 6+2 cores in PS3). Everybody would be forced to move to fine grained task/job/data-driven engines, and basically rewrite all their existing code bases... except for those few lucky ones that have done that already.
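For illustration, here's a minimal sketch of the kind of fine-grained parallel-for such an engine is built around (plain C++11; the names and the grain size are my own for the example, not from any actual engine, and a real job system would add lock-free queues, persistent workers and job dependencies):

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Split [0, count) into small batches that worker threads pull from a
// shared atomic counter until the whole range is consumed.
void parallel_for(std::size_t count, std::size_t grain,
                  const std::function<void(std::size_t, std::size_t)>& body)
{
    std::atomic<std::size_t> next{0};
    auto worker = [&] {
        for (;;) {
            std::size_t begin = next.fetch_add(grain);
            if (begin >= count) return;  // no batches left
            body(begin, std::min(begin + grain, count));
        }
    };
    std::size_t n = std::max<std::size_t>(1, std::thread::hardware_concurrency());
    std::vector<std::thread> pool(n);
    for (auto& t : pool) t = std::thread(worker);
    for (auto& t : pool) t.join();
}

// Usage: the same call scales from 6 hardware threads to 200+.
// parallel_for(entities.size(), 64, [&](std::size_t b, std::size_t e) {
//     for (std::size_t i = b; i < e; ++i) entities[i].update(dt);
// });

The point is that code written this way doesn't care how many hardware threads exist; code written as a handful of dedicated threads (the typical 360/PS3 pattern) has to be rewritten.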

Surprisingly, Xeon Phi and PowerPC A2 are very much alike. Both are simple in-order cores, have four-way SMT to hide memory latency (and stalls), have wide vector units to reach high flops (256 bit in A2, 512 bit in Phi), and use exotic memory technology to boost bandwidth (GDDR5 for Phi and EDRAM for A2). A2 has 16 cores = 64 threads per CPU, Phi has 50+ cores = 200+ threads. A single Phi chip has around 5x the peak flops, but also around 5x the TDP. There are 16384 A2 cores in a Blue Gene rack. Let's see how many Phis Intel is going to cram in a similar space. It's going to be interesting to see... But no, I am not expecting to see either one in consumer products.
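To put rough numbers on that 5x (back-of-envelope, assuming the ~61-core / ~1.1 GHz Knights Corner figures floating around and the published Blue Gene/Q specs):

Phi: 61 cores x 1.1 GHz x 16 DP flops/clock (512-bit FMA) ~ 1.07 TFLOPS at ~300 W (board)
A2: 16 cores x 1.6 GHz x 8 DP flops/clock (256-bit FMA) ~ 205 GFLOPS at ~55 W (chip)

That's roughly 5.2x the flops for roughly 5.5x the power, so perf/watt is pretty much a wash between them.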
 
A single Xeon Phi board consumes 300 watts. Also, Xeon Phi has 50+ cores (62 by some documentation, but Intel hasn't yet spilled the beans; they are likely waiting to see how good the yields are).

Xeon Phi has four-way HT, so it has 200+ logical cores (hardware threads). Programming that kind of massively parallel CPU would be a much bigger step than transitioning from single core consoles to multicore ones (6 logical cores in 360 and 6+2 cores in PS3). Everybody would be forced to move to fine grained task/job/data-driven engines, and basically rewrite all their existing code bases... except for those few lucky ones that have done that already.

Surprisingly, Xeon Phi and PowerPC A2 are very much alike. Both are simple in-order cores, have four-way SMT to hide memory latency (and stalls), have wide vector units to reach high flops (256 bit in A2, 512 bit in Phi), and use exotic memory technology to boost bandwidth (GDDR5 for Phi and EDRAM for A2). A2 has 16 cores = 64 threads per CPU, Phi has 50+ cores = 200+ threads. A single Phi chip has around 5x the peak flops, but also around 5x the TDP. There are 16384 A2 cores in a Blue Gene rack. Let's see how many Phis Intel is going to cram in a similar space. It's going to be interesting to see... But no, I am not expecting to see either one in consumer products.

Thanks for the detailed response! I did not think about the power consumption and the impact on massively parallel program execution. I heard rumors that they have over 50 cores, but a big unknown is the amount of RAM they put on the card (which is pretty important for HPC) and whether the memory bandwidth is really high enough to get the performance.
 
Huh? By compute power, over a third of those Linux supercomputers are not x86. Linux is *the* operating system for the non-x86 world -- there are vastly more non-x86 cores running Linux than there are running some other OS (counting only things capable of pre-emption).
Companies like Dell and HP are getting better and better at delivering clusters, and these days a big selling point with clusters is reliability; those guys pretty much just deal in x86. I couldn't tell you a whole lot more about the purchasing process than that, since I'm not that involved in it. My university pretty much only even talks to HP, Dell, and IBM now. And I don't think we're ever going to go with POWER again. But I think it may be moot soon: cloud computing keeps falling in price, and local clusters just have a lot of fixed overhead.

On the Linux thing, it has as much to do with the software as with the OS. A lot of the important HPC software, especially in my field (CFD is a huge chunk of HPC), has been in sort of continuous development since the 70s, when the "supercomputers" all had their own versions of Unix. Linux took over because it did an end run around the whole "one Unix to rule them all" problem. So even if MS gets Windows HPC to the point where it works as speedily as Linux, like, that's nice, but NASA isn't going to port OVERFLOW or WIND-US to it. Rolls-Royce, GE, and Pratt aren't going to convert the internal codes they've been developing for 15 years (I know RR has their proprietary tool so integrated into their toolchain that there's zero chance of them dropping it). Procter & Gamble just switched to OpenFOAM, and those guys sure as hell aren't going to port their stuff to Windows.

The only way MS has a shot in the HPC space is to pull an Apple, dump their kernel in the toilet, and rebuild on a *nix foundation.
 
Assuming they want to go the SoC approach (if not at the beginning then after a shrink), then hell no.

Also, what Intel 8-core CPU are they going to use? Haswell EP??? Yeah, really likely. 8 threads is far more believable, but again I just don't see it. Why would Intel want to sell a CPU to Microsoft that they could likely sell for more to an OEM? Why would Microsoft want to buy an Intel x86 chip when they could get an AMD one for less, especially if 128-bit SIMD units are key to game CPU throughput, as AMD matches Intel's performance unit for unit?

I could keep listing reasons why it's really unlikely, but the F1 is about to begin.
 
The devkit is probably legit, if Digital Foundry managed to confirm it with several devs. But it would only give a rough indication of final hardware, as the 360 devkits were basically Macs. Still, it's interesting that there's seemingly an Nvidia card in there, and it would give an indication of the amount of RAM that Microsoft is shooting for (at the moment, that would seem to be 6GB).
 
Assuming they want to go the SoC approach (if not at the beginning then after a shrink), then hell no.

Also, what Intel 8-core CPU are they going to use? Haswell EP??? Yeah, really likely. 8 threads is far more believable, but again I just don't see it. Why would Intel want to sell a CPU to Microsoft that they could likely sell for more to an OEM? Why would Microsoft want to buy an Intel x86 chip when they could get an AMD one for less, especially if 128-bit SIMD units are key to game CPU throughput, as AMD matches Intel's performance unit for unit?

I could keep listing reasons why it's really unlikely, but the F1 is about to begin.
Why are you assuming it's a CPU that we know about? Why wouldn't they make a CPU for MS considering the returns they could get? How do you know what they can get from Intel and how much they would charge?

You haven't listed a legit reason yet, just assumptions based on IDK what...
 
MS probably thought Intel/Nvidia would be a safe bet given that all the next-gen engines are currently running on that exact combo. That would mean engine ports with zero problems (like DF's sources said) and developer initiative to push things to Nvidia's side of the fence when developing the game.

It would also be logical for MS to get hardware from vendors that don't have anything to do with what Sony is going for, because otherwise we would get pretty much the same consoles next time around.
 
http://www.neogaf.com/forum/showthread.php?t=484451

Could this be real guys?

I think it was Rangers who said he contacted the dev kit guy and it was all fake.

The guy definitely said some fishy things, which combined with his fishy Twitter led me to conclude he was probably full of it, but if Eurogamer says devs say the kit looks real then obviously I'll reevaluate. Some of the things he said had the ring of truth; others seemed to stretch credibility greatly.

I combed his post history at Assembler forums and if anybody would come up with a legit Durango dev kit, it'd probably be somebody like him. He was definitely legit Xbox dev kit knowledge-wise.

Probably the point where I decided he was fake was when he first told me it was an Intel/Nvidia combo, which already had me incredulous (I just assumed AMD because of all the discussion here); then later he reneged and said it was an AMD 7XXX, but he still maintained it was an Intel CPU. The way he flip-flopped on the GPU ruined his cred with me, but who knows, maybe he was just screwing with me. I do believe he could have a dev kit. It almost seemed to me like maybe he just doesn't know very much about hardware.

Basically what comes out of this to me now is the possibility of Intel and/or Nvidia in the next box. We'll see.

Here were the 3 posts summing up my "interview" with DaE: http://forum.beyond3d.com/showpost.php?p=1655531&postcount=13379 http://forum.beyond3d.com/showpost.php?p=1655535&postcount=13381 http://forum.beyond3d.com/showpost.php?p=1655601&postcount=13394

If it adds any rounding out to the DaE story, he did tell me he's starting an indie studio, though the Durango kit had nothing to do with it. Also that he's worked on big-name Xbox titles, but for some reason I was skeptical of that.
 
I think Intel would have to take a big hit to their typical margins to provide another console CPU. Maybe it's possible. Maybe Intel's foundry capability outweighs all that.

A really mature 22nm process would do wonders for the CPU capabilities. An 8-thread i7-3770T (2.5 GHz) has a TDP of only 45W. Mobile i7s drop down to 35W. My guess is MS could get a custom part below that from Intel; the question is the price.
 
An Intel CPU would be awesome... Haswell i5 on 22nm and AVX2, wow! AMD's Jaguar cores ain't got nothing on this... assuming the Haswell CPU is 65W and the GPU an Nvidia 660 Ti class at 120W, that's only 185W, with some leftover for a Xenos/Xenon SoC for BC and some background tasks? ...remember that slide about dual CPUs... I hope Sony is as aggressive as this...
 