Predict: The Next Generation Console Tech

Status
Not open for further replies.
Well, we know esoteric hardware is pretty much off the cards. We know they'll be conventional boxes, and there are unlikely to be any major surprises. We also have some weight to the idea that neither box will offer cutting-edge performance.

Other than that, we shouldn't be surprised at a lack of progress. We can only speculate, and I'm sure no-one could have predicted any of the current consoles. Who'd have given XB2 a US eDRAM part back in a 2000 prediction, or predicted that Sony would change from the straightforward design of PS1 to the freaky hardware of PS2 to the mishmash hardware of PS3? Or that Nintendo would overclock their PS2 competitor to make a PS3 competitor?

The value of this thread probably only really becomes apparent when the console specs are released, and we can reflect on who guessed right and how.

We only know the things that are completely obvious to anyone with a decent understanding of PC hardware progression and console form-factor/power limitations. We have no answers to any of the specific questions regarding any of the next gen hardware.
Besides that, we have some people spewing nonsense on here like next-gen consoles (not Wuu) having DX10 GPUs and Cell2... I am disappointed with this thread.

We know that this thread is the spawn point for "expert prediction" noobs to join the forum, post crap for 20-30 posts, then get banned :D:LOL:

This :D
 
Well, it seems that we are headed for two x86-powered systems, and I have to say that it's a bit disheartening to me.
I can't help but think that even if AMD offers a neat package, performance is sadly not what sealed the deal (wrt the CPU).
I can see the benefits of x86: tools, environment (Win8 is portable, Android, etc.), and the CPUs themselves are mostly off-the-shelf parts (less R&D). But looking at AMD CPU performance, well, I feel like IBM could have beaten what AMD is most likely to offer.
I wish we would have seen something akin to what Aaron hinted at a few pages ago (or in another thread), like 2 big cores and 6/8 throughput cores.
A geek's wet dream would have been IBM getting both the upcoming POWER8 and the cores set to replace the PowerA2 (I would be surprised if IBM isn't already working on their replacement) on feature parity with the best x86 offerings (things like 8-wide SIMD units and gather support).
I can't see MS supporting the financial investment for such a project, but IBM could have been interesting if somebody were to take charge of part of the investment.

I feel like it's something that AMD can't touch: even if they improve the BD cores, CPU performance may not be that great, especially with Haswell set to launch in the same year as the next-generation consoles. I don't have high expectations for the Jaguar cores, to say the least.

Overall, it's an economic choice and that makes sense, but especially if MS doesn't use a SoC, I'm going to feel... well, a bit underwhelmed. Intel-based PCs should continue to fly past consoles as far as CPU performance is concerned.
 
What's wrong with AMD's CPU performance? Everyone is talking about SIMD FPU performance for gaming, and in that department, per AVX/SSE SIMD unit, they are just as strong if not stronger (FMA) than Intel.

Piledriver shows really solid gains over Bulldozer in INT performance per clock, with minimal architectural core changes, reduced power consumption, and higher clocks. Then remember that a console SoC will have the core after that, Steamroller (I don't buy the Jaguar rumours yet), which will be the equivalent of Intel's tock.

You then have to consider that THE MOST IMPORTANT parameter in a modern system is movement of data. How is IBM going to design an integrated SoC that maximizes flexibility and memory access across both CPU and GPU, aka HSA?

Just a question before you knock a core you know nothing about yet (Jaguar): have you ever looked at Bobcat's raw performance numbers? In most integer workloads, per clock, it's on par with or bests K10. To me that was the biggest disappointment of Bulldozer. If they put 128-bit AVX units on Jaguar and AMD has the interconnect for many cores + HSA, then how the hell is IBM going to get close to that kind of SoC for a console?

Overall, I just don't think you understand the balancing game that has to be played. When CPUs and GPUs can start playing in each other's backyard, having a massive amount of flops on the end of a very high-latency bus isn't going to be as great an advantage as it currently is.
 
Lulz. What's wrong with AMD CPU performance?
Well, there are quite a few threads and comments on the matter here.

As for the SoC, first, I said "especially if it's not a SoC, etc."
But if it were, it might be doable for IBM; they'd just need a proper interconnect. Not to mention that if it were a SoC, AMD would obviously be working on the project.
 
Lulz. What's wrong with AMD CPU performance?
Well, there are quite a few threads and comments on the matter here.
I see, so you admit you don't know what you're talking about :LOL: (PS: I'm quite aware of where Bulldozer is performance-deficient. Why it is doesn't seem to have that many concrete answers.) What you completely ignored is that, for what console game developers want (good SIMD and lots of threads), even Bulldozer, let alone Piledriver or Steamroller, is a great CPU core for the task.


As for the SoC, first, I said "especially if it's not a SoC, etc."
Two points here,
1. power budget
2. transistor count

One chip or two, a console has a target power budget, and the power budget is a far greater limiting factor in this instance than transistor count. Not to go with a SoC, especially with the drive towards a single address space, would be a big mistake.

But if it were, it might be doable for IBM; they'd just need a proper interconnect. Not to mention that if it were a SoC, AMD would obviously be working on the project.

Why would AMD hand over all the HSA work to both MS and IBM? Unless they're both giving AMD 10% of their respective companies, I don't see what is in it for them :?:
 
I see, so you admit you don't know what you're talking about :LOL: (PS: I'm quite aware of where Bulldozer is performance-deficient. Why it is doesn't seem to have that many concrete answers.) What you completely ignored is that, for what console game developers want (good SIMD and lots of threads), even Bulldozer, let alone Piledriver or Steamroller, is a great CPU core for the task.
Sorry, but I don't see how BD provides good SIMD and lots of threads.
Developers and people in the business have stated that it's tough to get good perf out of the BD SIMD units. The cores are big, and for their size they don't provide that many threads (if that even makes sense...).
Two points here,
1. power budget
2. transistor count
BD is far from the reference for either.
One chip or two, a console has a target power budget, and the power budget is a far greater limiting factor in this instance than transistor count. Not to go with a SoC, especially with the drive towards a single address space, would be a big mistake.
Well, that's BS: there are laptops with a plain CPU and a discrete GPU that do better wrt power consumption than APUs, so what is your point?
There are many factors; power consumption is one, but you also have die size, clock speed, and the performance targets for the project.

A unified address space is possible even outside of a SoC. How does that happen in multi-processor set-ups?


Why would AMD hand over all the HSA work to both MS and IBM? Unless they're both giving AMD 10% of their respective companies, I don't see what is in it for them :?:
Do you expect, for example, MS to use HSA for games? It's a software stack, as far as I understand, and there are others. Not to mention that MS, for example, is never gonna use that for the next box.


I answered you briefly as I want to go to bed, but it's pretty obvious that you are more than heavily biased toward AMD products. FYI, my favourite bet for next gen would indeed be a SoC, quite big (~300 mm² or a bit more) with a 256-bit bus (close to what we hear from Sony's side).
The point is, no matter the advantage x86 CPUs have on the software side (and I don't buy into the Intel noise), I was just pointing out that for the CPU, IBM can do better than AMD (I even pointed out that it's most likely a bigger R&D effort). For economic reasons... which are good imo (better than firing tens of thousands of people like we've seen this gen...), it's been decided otherwise.

If you can't deal with people criticizing AMD CPUs, well, you should work on that issue, as it's gonna get worse... sadly.


/ Typing from my all AMD HP Pavilion dv6... :rolleyes:
 
Sorry, but I don't see how BD provides good SIMD and lots of threads.
Developers and people in the business have stated that it's tough to get good perf out of the BD SIMD units. The cores are big, and for their size they don't provide that many threads (if that even makes sense...).
What are you talking about? Seriously, a simple Google search, first two results:

With 1/2 the width of a SB core @ 256-bit AVX, it has over 50% of the peak performance:
http://www.tomshardware.com/reviews/fx-8150-zambezi-bulldozer-990fx,3043-5.html

In less synthetic tests, with 1/2 the width, it's still neck and neck:
http://www.lostcircuits.com/mambo//...k=view&id=102&Itemid=42&limit=1&limitstart=13

Look at things like h264, and it's identical if not a little ahead clock for clock:
http://www.techarp.com/showarticle.aspx?artno=669&pgno=5

Then there is the fact that the FMA and INT SIMD units, which are sitting idle in most if not all of these benchmarks, can be put to use by devs on a console.

As stated by developers on this forum, it's harder to get 8 floats in a lot of code while quite easy to get 4, which shows up in quite a few AVX-256 benchmarks where SB should just run away from Bulldozer but can't.

The Bulldozer FPU can schedule 4 128-bit ops a cycle; a console SoC could easily have 4 AVX units instead of 2 AVX and 2 INT SIMD units.
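For reference, the peak-rate arithmetic behind this comparison can be sketched as follows (a back-of-envelope sketch; the per-pipe widths are the commonly cited figures for these architectures, not taken from this thread):

```python
# Peak single-precision FLOPs per cycle, back of the envelope.

# A Bulldozer module's shared FPU: 2 x 128-bit FMA pipes.
# Each pipe: 4 floats x 2 ops (fused multiply + add) = 8 FLOPs/cycle.
bd_module = 2 * 4 * 2        # 16 FLOPs/cycle per module

# A Sandy Bridge core: one 256-bit AVX add port + one 256-bit AVX
# multiply port, no FMA: 8 + 8 = 16 FLOPs/cycle per core.
sb_core = 8 + 8              # 16 FLOPs/cycle per core

# At equal clocks the peaks match per module vs per core; per BD
# integer core (two share one FPU) it is half, which is the
# "half the width, over 50% of the peak" point made above.
print(bd_module, sb_core)    # 16 16
```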

I would debunk the rest of what can only be called a rant, but I'm at work and have wasted enough time, so that will wait until I get home.
 
Debunk what? Seriously, man...
You think BD looks good against Intel's and IBM's offerings. That's your problem. Not to mention your crazy attitude...

You can't read what I said properly in the first place, only because I dared to criticize AMD CPUs. Please ignore me and move on.
 
I can't read what you said either, because none of it seems to make any sense. Sorry, but your last few posts wrt BD just look like nonsensical ranting, with no sources to back up your muddled claims.
 
Debunk what? Seriously, man...
You think BD looks good against Intel's and IBM's offerings. That's your problem. Not to mention your crazy attitude...

You can't read what I said properly in the first place, only because I dared to criticize AMD CPUs. Please ignore me and move on.

Excellent :!::!: Throw around a few nonsensical claims, have a sook at the first set of data points, then take your bat and ball and go home.

There's no question that Orochi as a product isn't good. But that's rather irrelevant when looking at the individual aspects, the performance of those aspects, and how you would develop a SoC for gaming.

In the aspects being talked about, there is nothing wrong with the Bulldozer core.

Here is the 65-watt Trinity SoC:
http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/APU/Pages/APU.aspx#4

Let's say the target is 150 watts for everything. Double that SoC (130 watts), add about 10% perf for the new core, and assume the same SIMD width and process node for this comparison:

3.4 GHz base, up to 4 GHz, for 8 INT cores, 8 128-bit FMA units, 8 128-bit SIMD units
768 shaders @ 760 MHz
256-bit memory bus
unified memory space

1.4 TF GPU
220 GFLOPS of FMA SIMD

What do you think should be done within that power budget?
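A quick sanity check of the peak figures quoted above (a sketch using the clocks from the spec list; the comment on the 1.4 TF figure is my own arithmetic, not from the post):

```python
# CPU peak: 8 x 128-bit FMA units, 4 floats x 2 ops each, at 3.4 GHz base.
cpu_gflops = 8 * (4 * 2) * 3.4        # ~217.6 -> the ~220 GFLOPS quoted

# GPU peak: 768 shaders x 2 ops (multiply-add) per clock at 760 MHz.
gpu_gflops = 768 * 2 * 0.760          # ~1167 GFLOPS (~1.17 TF)

# Note: reaching 1.4 TF with 768 shaders would imply roughly a
# 910 MHz shader clock rather than 760 MHz.
print(round(cpu_gflops, 1), round(gpu_gflops, 1))
```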
 
What's up with the AMD-fanboyism here? :LOL:

Relevant performance: http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/8

Power consumption: http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/9

That's the worst thing you could put in a console in terms of performance and perf/watt.

Who said to put Orochi, with its 4 HT links, 8 MB of unwanted L3, its soft-edge flip-flops, more limited AGLUs, non-resonant clock mesh, and its non-functioning INT DIV, in a console? Look at Trinity as a base, especially in the mid-3GHz range; its perf per watt is quite respectable there. The point about Bulldozer was that despite all its badness, its FP capabilities aren't one of its problems.

I love how someone who simply tries to understand why something is "bad", not just that it is bad, gets labelled as a fanboy when pointing out the positives.

edit:

The FX-4100 has a TDP of 95 watts for a 3.6/3.7/3.8 GHz quad-core CPU.
The A300 has a TDP of 65 watts for its 3.4/4.0/4.2 GHz quad core plus 384 VLIW4 shaders @ 760 MHz. So at all-core turbo you're looking at around a 40% power saving over the FX-4100; linearly extrapolate that out (yes, I know that isn't accurate) to an 8-core and it's in the same ballpark as the 2500K/2600K in perf per watt.
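That extrapolation can be written out explicitly (a sketch only; as the post itself notes, TDP doesn't actually scale linearly with core count, and the A300 figure includes the GPU):

```python
fx4100_tdp = 95          # W, quad-core FX-4100
a300_tdp = 65            # W, quad-core Trinity A300 including 384 shaders

# Saving at the package level (the ~40% figure above assumes backing
# the GPU's share out of the A300's TDP, which isn't public).
package_saving = 1 - a300_tdp / fx4100_tdp   # ~32%

# Naive linear doubling to an 8-core part.
eight_core_estimate = 2 * a300_tdp           # 130 W

print(round(package_saving * 100), eight_core_estimate)   # 32 130
```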
 
What's up with the AMD-fanboyism here? :LOL:

Relevant performance: http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/8

Power consumption: http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/9

That's the worst thing you could put in a console in terms of performance and perf/watt.

You do realize that anything BD-related that would be put into a console wouldn't be BD- but PD- (Piledriver) or even SR- (Steamroller) based, whose perf/watt figures are still a bit unknown, as only a few laptop Piledriver Trinity figures are available atm.
 
3.4 GHz base, up to 4 GHz, for 8 INT cores, 8 128-bit FMA units, 8 128-bit SIMD units
768 shaders @ 760 MHz
256-bit memory bus
unified memory space

1.4 TF GPU
220 GFLOPS of FMA SIMD

What do you think should be done within that power budget?

I think that's a very reasonable expectation. Personally, I think we'll see lower clock rates (~2 GHz) and an 8C/4M chip, but with 256-bit AVX (maybe even AVX2 improvements).

My only concern is that using something like DDR3/4, even on a 256-bit bus, the BW will be relatively low, especially when it has to be shared between a powerful CPU and a powerful GPU. I think we may see something exotic (like 128+ MB of eDRAM) here to help with that, but other than that I think the CPU and GPU will be fairly traditional.
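To put rough numbers on that concern, here is a peak-bandwidth sketch for a 256-bit bus (the DDR3-1600/2133 speed grades are illustrative assumptions, not figures from the post):

```python
bus_bytes = 256 // 8                 # 32 bytes per transfer on a 256-bit bus

for mt_per_s in (1600, 2133):        # common DDR3 transfer rates, MT/s
    gb_per_s = bus_bytes * mt_per_s / 1000
    print(mt_per_s, round(gb_per_s, 1))
# ~51 GB/s and ~68 GB/s of peak -- modest once a strong CPU and a
# >1 TF GPU have to share it, hence the appeal of a large eDRAM pool.
```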
 
I can't read what you said either, because none of it seems to make any sense. Sorry, but your last few posts wrt BD just look like nonsensical ranting, with no sources to back up your muddled claims.
Issues faced by the BD architecture have been raised multiple times by members who are definitely worth listening to. I won't go and lose I-don't-know-how-long tracking down and gathering those posts for you.
Those posts are spread across the forum.

As for stating that, for performance, I would have preferred IBM over AMD for the CPU, I don't see what you don't get or what is so nonsensical about it.

Isn't that called, you know, the PS3? ;)
Well, I won't track down Aaron's pink posts, but he pointed out that it would be an interesting option (and that's not in the thread about Cell).
Usually when he states something it's relevant, not that it will happen though. It would be different from Cell, as all the cores would look the same from a software POV.

Anyway won't happen :)
 
Issues faced by the BD architecture have been raised multiple times by members who are definitely worth listening to. I won't go and lose I-don't-know-how-long tracking down and gathering those posts for you.
Those posts are spread across the forum.

You're not taking the time to go into any specifics whatsoever. You're just laying down a blanket statement that everyone says BD is "bad" and then concluding that, furthermore, it can't be fixed.

The biggest problems with BD IMO are related to scheduling and cache. The FPU is not one of the problems and is actually superior to Intel's current offerings; the only catch is that there is only one unit for every two integer cores. Personally, I think that with HSA, and being able to offload much of the traditional FPU work to the GPU, this approach will make sense in the future and certainly in a console APU.

We know Piledriver performs 15% better than Orochi per clock; we know the A300 APU is only 65 W with a GPU as part of the package and with very decent clock speeds. As has been mentioned above, shedding much of Orochi's unneeded extra baggage, coupled with these improvements alone, would make a pretty decent console CPU. Steamroller will likely improve performance/watt even more; hell, they may even fix the cache properly for once.

As for stating that, for performance, I would have preferred IBM over AMD for the CPU, I don't see what you don't get or what is so nonsensical about it.
Nothing nonsensical about that, but stating over and over that BD is "bad" and that "everyone says so" isn't exactly an argument as far as I'm concerned.
 
Does anyone know which buildings on the MS campus would be developing next-gen games?
Yes. Hilariously, the main MS campus buildings don't contain any game studios (Phone is in some of them, hardware in some, and Xbox in a couple). All the game studio teams are in Redmond Town Center or one of the downtown Bellevue buildings.

Why?
 