Predict: The Next Generation Console Tech

Is it within the realm of possibility that IBM and AMD would work together on an APU, sharing their newest designs? It's a rather different situation from the partnership for the Xbox 360 SoC, where both the AMD graphics architecture and the IBM core architecture were already obsolete.

Putting two rumors together: six cores are heavily modified Power7/Power8 cores, and two cores are high-throughput SIMD units designed by IBM, with no ROPs and no TMUs, but with OpenCL/DirectCompute capability.

O RLY????? That's not the way I remember it (I have a pretty damn good memory).
 
You misunderstand me. I'm referring to more recent times. In 2010, IBM designed an SoC with both Xenos and Xenon on die. To do this they needed the IP of the graphics chip. No problem for AMD, since the Xenos design was obsolete. But now, if they worked together on an APU, AMD would have complete access to IBM's most recent technologies, architectures, and designs, and vice versa. Would IBM allow one of their competitors in the server market to do this?

By the way, six "Power7-like" cores plus a teraflop-class GPU would still be a fairly big SoC at 32nm.
 
Every generation is different. There are always different pressure points and inflection points.

Just noting that because "it didn't go over well last gen" isn't always a good metric for a new platform. Have the pressures and market realities changed?

Not saying they have, but I think you could at least toss out one theoretical way where a split memory architecture, à la PC, could trump a UMA. UMA means all your memory is the same; in most cases that means you either need really expensive memory or you're going to have poor performance. But what we see on the PC is that the CPU and GPU have different tolerances, and there is a cost spread in memory. So using the "PC" model you could go with 6GB of DDR3(4) for the CPU and 2GB of fast GDDR5 (or a specialized architecture), versus the UMA, which would be a crazy-expensive 4GB of GDDR5 or, gasp, slower GDDR5 -- and the kicker is that the 4GB is going to cost as much as the 8GB. And while it would be nice to have one memory pool, having more stable memory performance patterns is a plus. The question is: will a console GPU actually be "using" 4GB of memory? Not whether the system can, but the GPU specifically? Would it perform better with 2GB of fast memory plus 4GB of cached video data sitting in main memory?
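To put rough numbers on that, here's a back-of-envelope sketch in C; every price and bandwidth below is a made-up placeholder chosen only to show the shape of the tradeoff, not vendor data:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures -- illustrative placeholders, not vendor data. */
    const double ddr3_cost_per_gb  = 5.0;    /* cheap commodity DRAM      */
    const double gddr5_cost_per_gb = 15.0;   /* fast graphics DRAM        */
    const double ddr3_bw_gbs  = 25.0;        /* e.g. 128-bit DDR3-class   */
    const double gddr5_bw_gbs = 120.0;       /* e.g. 128-bit GDDR5-class  */

    /* Split "PC-style" pools: 6GB DDR3 for the CPU + 2GB GDDR5 for the GPU. */
    double split_cost = 6 * ddr3_cost_per_gb + 2 * gddr5_cost_per_gb;

    /* UMA: one 4GB GDDR5 pool shared by CPU and GPU. */
    double uma_cost = 4 * gddr5_cost_per_gb;

    printf("split: 8GB total, %.0f + %.0f GB/s, cost %.0f\n",
           ddr3_bw_gbs, gddr5_bw_gbs, split_cost);
    printf("uma:   4GB total, %.0f GB/s shared,  cost %.0f\n",
           gddr5_bw_gbs, uma_cost);
    return 0;
}
```

With those (made-up) prices the two configurations come out at the same cost -- the "4GB costs as much as the 8GB" scenario above.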

I have also always wondered about the logic that in VRAM you may, for example, at times have only 10-20% of your data being actively rendered, yet that data is soaking up 90% of your VRAM bandwidth -- basically a small-footprint client sucking up the entire resource. Throw that into a UMA and you have an even smaller percentage of assets sucking up all system resources. This is one area where the eDRAM made sense: it offloaded an expensive part of the graphics budget and made for a more regular bandwidth consumption pattern in the UMA.
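A quick worked example of that "small footprint, huge bandwidth share" point, with every figure a guess for illustration: a 720p color+Z target is only about 7MB resident, yet at a plausible number of per-pixel touches it can eat more than half of a 360-class ~22GB/s bus:

```c
#include <stdio.h>

int main(void) {
    /* All figures are illustrative guesses. */
    const double pixels   = 1280.0 * 720.0;  /* 720p render target       */
    const double bytes_pp = 4.0 + 4.0;       /* 32-bit color + 32-bit Z  */
    const double touches  = 30.0;            /* reads+writes per pixel per
                                                frame: overdraw, blending,
                                                multiple passes (guess)  */
    const double fps      = 60.0;

    double resident_mb = pixels * bytes_pp / 1e6;
    double traffic_gbs = pixels * bytes_pp * touches * fps / 1e9;

    printf("framebuffer: ~%.0f MB resident, ~%.1f GB/s of traffic\n",
           resident_mb, traffic_gbs);  /* ~7 MB, ~13 GB/s */
    return 0;
}
```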

Again, I am not shooting down the prospects of a UMA, only noting that things change. If for the same cost as 4GB of GDDR5 you can get 6GB of DDR3 and 2GB of GDDR5, you create a very interesting scenario.

True, but what's the need for so much RAM on the CPU side? The VRAM has to hold the framebuffer, textures, and models. It's also true that with a fast link between the VRAM and the RAM, the latter could be used for caching. And the latency for the CPU would be considerably lower.

What about voxel rendering and raytracing? I know they are really memory-intensive, but on the GPU side or the CPU side? Considering that many developers expect to be doing hybrid rendering with both by the end of the next generation, it may give us insight into what developers want.
 
Gamesindustry said:
"There aren't as many shaders, it's not as capable. Sure, some things are better, mostly as a result of it being a more modern design. But overall the Wii U just can't quite keep up"
Why are any of you even debating whether this comment is made up or a hallucination? There is no way the Wii U GPU could have fewer shaders than the PS3/XB360 GPUs. No way at all; even some random integrated GPUs have more shaders, and a modern design with fewer shaders just isn't happening. Too many paradoxes for this to be real.

About MS also using an x86 CPU, I don't believe it.
x86 should make it easier to develop games across PC and Xbox, yes?
 
It would, but it's not as if the 360 development environment isn't mature by now, or sucks. The 360 was/is a symmetric multiprocessor design, nothing fancy; BC should be easily achievable for the relevant titles.

I see little incentive for MS to move to x86 at this point. For Sony, the Cell design meant that BC would be hell going forward, so they may have given up on it altogether much more easily.
 
Non-technical, non-hardware next-gen discussion has been moved here. Anyone posting OT posts here now is likely to see a week or two's console-forum ban. A little meandering is fine, but know that talk about sales, ranting about the specs, complaints about second-hand sales, etc. is just noise. If you can't tell the difference, then you shouldn't be posting in the tech forum.
 
Well, you misread my post; I'm speaking of a coherent memory space ;)
It would be pretty much like a bi-processor setup, or a Cell blade for that matter: both physical partitions of the RAM are accessible to both processors.

All you need is a fast link (not even that fast) between the two processors that supports coherency traffic (on top of the data traffic).

You can have that on the cheap, too: use existing HyperTransport, or PCIe with coherency extensions (I believe AMD has been working on that and wants to put it on future PC motherboards).
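As a rough illustration of the idea -- one coherent address space spanning two physical pools -- here's a sketch using Linux's libnuma on a two-socket machine as a stand-in for the console case; it assumes at least two memory nodes exist and is linked with -lnuma:

```c
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }
    /* Two buffers, each pinned to a different physical memory node,
       yet both living in the same coherent virtual address space. */
    int *near = numa_alloc_onnode(4096, 0);   /* "CPU-side" pool */
    int *far  = numa_alloc_onnode(4096, 1);   /* "GPU-side" pool */
    if (!near || !far) return 1;

    /* Either processor may touch either buffer; the interconnect
       (QPI/HyperTransport here, a coherent link in the console case)
       carries the coherency traffic transparently. */
    near[0] = 1;
    far[0]  = near[0] + 1;
    printf("%d %d\n", near[0], far[0]);

    numa_free(near, 4096);
    numa_free(far, 4096);
    return 0;
}
```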
 
Microsoft moving to x86 could mean parity/unification of Xbox and Windows. They could both be made into a service rather than hardware. Such an idea does lend credence to the earlier claims that Windows will be able to play Xbox games... Imagine accessing certain Windows applications from your Xbox dashboard, or accessing your Xbox dashboard from your phone and being able to play "real games" on your phone in a PSV/PS3-esque partnership.

Hardware: it wouldn't be unreasonable to see a dual-module (quad-core) CPU and a Pitcairn-class GPU. You get low power and the coveted "generational gap."
 
It would, but it's not as if the 360 development environment isn't mature by now, or sucks. The 360 was/is a symmetric multiprocessor design, nothing fancy; BC should be easily achievable for the relevant titles.

I see little incentive for MS to move to x86 at this point. For Sony, the Cell design meant that BC would be hell going forward, so they may have given up on it altogether much more easily.

Am I wrong, or does the Power ISA also have less overhead than x86?
I also think that IBM can offer better performance per watt than AMD (and Intel is out of the question), has much more experience than AMD with custom designs, and some of their new technologies are quite interesting (e.g. hardware transactional memory).
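For the curious, here's what IBM-style hardware transactional memory ended up looking like from C once it shipped: a sketch using GCC's POWER8-era -mhtm builtins, so this is later tooling for illustration, not anything confirmed for a console chip:

```c
/* Transactional increment with a lock fallback; compile with gcc -mhtm
   on a POWER8-class machine. Sketch only. */
static volatile int fallback_lock;      /* simple flag spinlock         */
static long counter;

void increment(void)
{
    if (__builtin_tbegin(0)) {          /* transaction started          */
        if (fallback_lock)              /* lock holder active?          */
            __builtin_tabort(0);        /* conflict: take the slow path */
        counter++;                      /* speculative, tracked by HW   */
        __builtin_tend(0);              /* commit atomically            */
    } else {                            /* aborted or failed to start   */
        while (__sync_lock_test_and_set(&fallback_lock, 1))
            ;                           /* spin on the fallback lock    */
        counter++;
        __sync_lock_release(&fallback_lock);
    }
}
```

Reading the fallback lock inside the transaction puts it in the hardware's conflict-detection set, so a thread taking the slow path safely aborts any concurrent transactions.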
 
Am I wrong, or does the Power ISA also have less overhead than x86?
I also think that IBM can offer better performance per watt than AMD (and Intel is out of the question), has much more experience than AMD with custom designs, and some of their new technologies are quite interesting (e.g. hardware transactional memory).
I've never dealt with any of them personally :LOL:
But, pretty much like you, I've read that PowerPC, while it's nowhere near clear-cut, does indeed have less overhead than x86.

I don't expect MS to move from PowerPC.
 
I wonder which is the most powerful CPU in the world right now, POWER7 or Sandy Bridge-E? (There's a 150-watt eight-core Xeon version, I believe.)

Both Intel and IBM seem capable of the same technical prowess. IBM does more "slow" CPUs (Blue Gene, PowerPC A2, embedded PowerPC) while Intel only has the crappy Atom; this is maybe where a smaller-footprint instruction set is useful. It was also useful for Xenon, where the aim was to cram in as many flops and threads as possible in 2005.

It seems to me that AMD is able to do custom designs now, as long as you want Bulldozer-ish or Bobcat-ish cores. That cost them a lot (Bulldozer is crappy).
 
I wonder which is the most powerful CPU in the world right now, POWER7 or Sandy Bridge-E? (There's a 150-watt eight-core Xeon version, I believe.)

Both Intel and IBM seem capable of the same technical prowess. IBM does more "slow" CPUs (Blue Gene, PowerPC A2, embedded PowerPC) while Intel only has the crappy Atom.

It seems to me that AMD is able to do custom designs now, as long as you want Bulldozer-ish or Bobcat-ish cores. That cost them a lot (Bulldozer is crappy).
Clearly Intel has the best CPU in the world overall. IBM has more room to fine-tune its designs for specific uses.

Clearly Intel is awesome; at this point I'm confident that if Intel were to custom-craft something (including the GPU), it would beat the crap out of the competition by a significant margin: they have the awesome caches, the memory controllers, and access to denser, less power-hungry lithography. Blend in their experience from Larrabee, and the fact that their GPUs are starting to no longer suck (at least in perf/mm2 or per watt).

Well, sadly... Intel sells pretty much everything it produces and has no reason to let any of its designs out into the wild for a bargain :(
 
Clearly Intel has the best CPU in the world overall. IBM has more room to fine-tune its designs for specific uses.

Clearly Intel is awesome; at this point I'm confident that if Intel were to custom-craft something (including the GPU), it would beat the crap out of the competition by a significant margin: they have the awesome caches, the memory controllers, and access to denser, less power-hungry lithography. Blend in their experience from Larrabee, and the fact that their GPUs are starting to no longer suck (at least in perf/mm2 or per watt).

Well, sadly... Intel sells pretty much everything it produces and has no reason to let any of its designs out into the wild for a bargain :(

Intel and IBM are aimed at different markets. Intel is oriented toward the consumer and workstation markets, IBM toward the high-performance workstation and high-performance computing markets. Intel makes CPUs and motherboards; IBM makes everything from the CPU to the entire server facility. Of course, a Power7 won't be faster than an i7 at HandBrake and other consumer tasks, but it will definitely be faster when it comes to powering large databases and large-scale simulations. On the gaming side, Intel can offer higher single-thread performance, while IBM can offer more on heavily multithreaded workloads. It's up to the devs to say which they would rather have. :D
AMD and Nvidia have much more know-how than Intel on the GPU side, as we can see with their integrated solutions and Intel's inability to launch Larrabee.
 
Am I wrong, or does the Power ISA also have less overhead than x86?
I also think that IBM can offer better performance per watt than AMD (and Intel is out of the question), has much more experience than AMD with custom designs, and some of their new technologies are quite interesting (e.g. hardware transactional memory).

Yes, I thought that (and the IP-ownership issue) was the reason Microsoft went with IBM instead of Intel for the 360.

I remember reading a thread on here with devs saying that Xenon was the best CPU choice for the cost/power consumption at the time, and that if MS had gone with x86, the 360 wouldn't have held up as well as it has.

So given that MS is going with Power7 and Sony with AMD x86, is it still true that IBM offers better performance per watt/die size?
 
Yes, I thought that (and the IP-ownership issue) was the reason Microsoft went with IBM instead of Intel for the 360.

I remember reading a thread on here with devs saying that Xenon was the best CPU choice for the cost/power consumption at the time, and that if MS had gone with x86, the 360 wouldn't have held up as well as it has.

So given that MS is going with Power7 and Sony with AMD x86, is it still true that IBM offers better performance per watt/die size?

Are we sure that's a given?
 
Intel and IBM are aimed at different markets. Intel is oriented toward the consumer and workstation markets, IBM toward the high-performance workstation and high-performance computing markets. Intel makes CPUs and motherboards; IBM makes everything from the CPU to the entire server facility. Of course, a Power7 won't be faster than an i7 at HandBrake and other consumer tasks, but it will definitely be faster when it comes to powering large databases and large-scale simulations. On the gaming side, Intel can offer higher single-thread performance, while IBM can offer more on heavily multithreaded workloads. It's up to the devs to say which they would rather have. :D
AMD and Nvidia have much more know-how than Intel on the GPU side, as we can see with their integrated solutions and Intel's inability to launch Larrabee.

Well, I would still rank Intel's engineering a tad higher. Intel is overwhelming in the x86 space, but let's not forget their other experiments: Itanium, the Larrabee(s), the SCC.

Intel may catch up with IBM sooner rather than later in the multi-processor realm; I believe that for now they just don't have much incentive to do so. Versus IBM, they lack the software part of the equation that allows IBM to sell its hardware at such a premium.

But I would not be dismissive of Intel's experience with multithreading; even though Larrabee was never released, the mere fact that they managed to create such a chip is impressive: 30+ cores, 120+ threads, a complete vector ISA.

It looks like they are also well advanced in experimenting with grid processors, e.g. the SCC.

Then there are GPUs. They still don't compete in graphics but are gaining ground in compute (see some Llano vs. Core iX comparisons). Their drivers are still not there, that's for sure.

Overall, it looks to me like Intel has amassed a lot of experience across a lot of computing fields in the last few years. It may not translate into commercial products for now, but I can definitely see these efforts paying for themselves in the near future.

Then there is Intel's manpower; the number of projects and products they are working on in parallel is impressive too.

So, coming back to our subject, which is console hardware: my belief is that if a premium Intel team were to leverage all this research and experience, without the burden of strict x86 compliance, they would pull ahead of everybody for a given silicon and power budget, and by a significant margin. To me, whereas a lot of so-called journalists describe Intel's lead as challenged, it is in fact growing. ARM will discover the mountain they have to climb if they are ever to come close in performance.

Worst-case scenario is that even the lithography advantage doesn't make them competitive in the mobile realm. In that case, nothing prevents Intel from developing ARM or MIPS CPUs and most likely crushing the competing companies.
 
So given that MS is going with Power7
I would very much expect MS to *not* go with Power7. At the moment IBM has four separate, recently developed CPU lines, of which Power7 would probably be the *worst* choice in a console environment. It spends quite a lot of area and power on things no console dev will ever care about, while not spending on the things they do. I think the most likely choice is something PPC 470-based (or, given the timeframe, more likely PPC 480-based, as a direct development of the 470 design), or even PowerPC A2-based, over the Power7 line.

A PPC 470 would buy them a little less than half the single-threaded brunt in dramatically (think a tenth, not a half) less space and heat, which they could spend on more cores or on beefing up the GPU.
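To spell out the arithmetic behind that tradeoff, here's a toy sketch in C; the 0.45x and 10x figures simply restate the post's "little less than half the brunt in a tenth of the space" and are not measured numbers:

```c
#include <stdio.h>

int main(void) {
    /* Restating the post's guesses, not measured data. */
    const double rel_single_thread = 0.45; /* PPC 470 vs. POWER7, per core */
    const double cores_per_area    = 10.0; /* PPC 470 cores in one POWER7
                                              core's area/power budget    */

    /* If the workload scales across threads, aggregate throughput per
       unit of silicon comes out well ahead for the small cores. */
    printf("throughput per POWER7-core of area: %.1fx\n",
           rel_single_thread * cores_per_area);  /* ~4.5x */
    return 0;
}
```

The catch, of course, is the assumption that games actually scale across that many threads.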
 
Well, the IBM "smart" CPU lines do a great job on performance per watt, but they aren't the greatest for single-thread performance. Some of us expect transistors to be spent on single-thread performance this time; otherwise you're building a Xenon 2.0 :p

IBM could be willing to do a custom design based on POWER7, and that would mean getting rid of the XML accelerators, BCD, etc., and cutting back on the L3 and/or the number of cores.
 