Predict: The Next Generation Console Tech

Status
Not open for further replies.
How have we seen that in the 360? It has no dedicated audio hardware beyond an XMA decoder.
English is not my native language, so I wrote the exact opposite of what I meant. I wanted to say that, following the 360's case of not having a dedicated audio unit, and thus sometimes having to waste a full core on audio processing, Nintendo probably went with an "enhanced" version of their old CPU, since they have dedicated audio hardware in the Wii U.

BTW bkilian, I imagine you guys will probably be proud if the 360 version of some big 3rd-party game (ACIII, for example) performs best on your console, considering it's a console that came out in 2005 and is still holding up admirably. Especially since it was touted as a "half-gen" console by some :)
 
You need to stop living in the past and wake up to the reality that is today. In the past, AMD had advantages in areas such as GDDR5 performance, but NVIDIA has caught up and arguably even surpassed AMD with respect to GDDR5 performance (and even with a 27% memory bandwidth deficit in total, GTX 680 still manages to perform similarly to HD 7970 in most games).
Because memory bandwidth is the only factor in game performance, right?
In June 2008, AMD shipped a GPU with GDDR5 as fast as anything nVidia managed to introduce for roughly two years afterwards; it wasn't until the GTX 600 series that nVidia finally achieved higher GDDR5 clocks than AMD had reached in 2009.

And even though nVidia is now shipping cards with higher memory clocks than AMD, it still doesn't change the fact that AMD/ATI has designed and/or co-designed every GDDR generation since GDDR3, including the GDDR6 coming next. Nor does it erase the eDRAM experience they have, or their experience with a shared pool of memory between GPU and CPU, NUMA, etc.
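For what it's worth, the ~27% deficit quoted above is easy to sanity-check from the launch specs of the two cards (bus width and effective data rate below are the commonly cited launch figures, stated here as assumptions):

```python
# Sanity-check of the ~27% memory bandwidth deficit mentioned above,
# using assumed launch specs: GTX 680 = 256-bit bus at 6.0 GT/s effective,
# HD 7970 = 384-bit bus at 5.5 GT/s effective.

def bandwidth_gbs(bus_width_bits: int, data_rate_gts: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * effective data rate."""
    return bus_width_bits / 8 * data_rate_gts

gtx_680 = bandwidth_gbs(256, 6.0)   # 192 GB/s
hd_7970 = bandwidth_gbs(384, 5.5)   # 264 GB/s

deficit = 1 - gtx_680 / hd_7970
print(f"GTX 680: {gtx_680:.0f} GB/s, HD 7970: {hd_7970:.0f} GB/s, deficit: {deficit:.0%}")
```

So the 27% figure is just the gap between 192 GB/s and 264 GB/s, which is why memory clock alone doesn't settle the comparison: the 7970's wider bus dominates despite its lower clock.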

Knowing the past and acknowledging the past doesn't mean living in the past.

If your argument is that AMD is the only company in the world that has the skill and expertise to create a class leading high end console, then that is just naive, and you are promoting FUD.
No one is claiming that; they just have advantages over nVidia on several fronts.
If there is a concern about a "complete package", then Project Denver will address those concerns next year (you do realize that project has been in the works for more than four years, right?).
How do you know how Project Denver will turn out?

One should never blindly assume that other reputable companies do not have the resources, skill, and fortitude to create a groundbreaking new product.
Of course not, but one shouldn't blindly ignore the known factors, past experience etc either.
Xenos was designed from the ground up by AMD for use in Xbox 360. Why can't another good company design something else from the ground up for use in a next gen console?
Xenos wasn't designed from the ground up by AMD for use in the Xbox 360; it was based on the R400 base from 2003.
 
Knowing the past and acknowledging the past doesn't mean living in the past.

You mean just like the design of 5800 Ultra clearly foreshadowed 6800 Ultra, the design of 7800 GTX clearly foreshadowed 8800 GTX, and the design of 580 GTX clearly foreshadowed 680 GTX? It is not always easy to predict the future based on the past, as you can see from these examples.

How do you know how Project Denver will turn out?

We will have to wait and see how it turns out. But if AMD and Intel can successfully integrate a CPU and GPU (which is one of the main goals of Project Denver), then why can't NVIDIA do it too?

Xenos wasn't designed from the ground up by AMD for use in the Xbox 360; it was based on the R400 base from 2003.

From what I can gather on wikipedia, Xenos was a custom GPU that shared some design features with R520/400, but also shared some design features with more forward looking products such as R600, in addition to having two pieces of silicon with the eDRAM. So some of the R&D effort was shared elsewhere, but it was certainly highly customized for the application.
 
A 476FP could easily be considered an "enhanced Broadway" and would achieve the clock speed you and others have mentioned in the past.

That's really stretching the usage of word 'enhanced' though, isn't it?
It would be akin to calling Sandy Bridge an enhanced Pentium Pro.
 
From what I can gather on wikipedia, Xenos was a custom GPU that shared some design features with R520/400, but also shared some design features with more forward looking products such as R600, in addition to having two pieces of silicon with the eDRAM. So some of the R&D effort was shared elsewhere, but it was certainly highly customized for the application.
R400, not R420 or R520.
ATI's original plan was R300 > R400. That didn't happen, and they went with R420 and eventually R520, both of which were on the "R300 base".
R400 was supposed to have unified shaders and whatnot back in those days, but it got cancelled for one reason or another.
 
That's really stretching the usage of word 'enhanced' though, isn't it?
It would be akin to calling Sandy Bridge an enhanced Pentium Pro.

This is what IBM has to say about the WiiU CPU:
The all-new, Power-based microprocessor will pack some of IBM's most advanced technology into an energy-saving silicon package that will power Nintendo's brand new entertainment experience for consumers worldwide. IBM's unique embedded DRAM, for example, is capable of feeding the multi-core processor large chunks of data to make for a smooth entertainment experience.
I think you might want to consider what the company actually making the CPU says about it in their official press release.
 
I'm guessing the new CPU is code compatible with Wii for full BC, making it an 'enhanced Broadway' in the most naive layman interpretation. That'd make it a new tri-core PPC CPU, maybe 1.6 GHz, and yet be much faster than an overclocked 3*Broadway.
 
In addition to those, IBMWatson (one of the official IBM Twitter accounts) said that it's Power7
http://forum.beyond3d.com/showpost.php?p=1663142&postcount=1939
@IBMWATSON is it true that they're sticking you inside of #WiiU?

@TheDavidHansen #WiiU uses same #power7 chips.

https://twitter.com/IBMWatson/status/78473693843562498

edit:
and

IBM Watson‏@IBMWatson

@DeSero "WiiU has 45nm custom chip ... a SOI design & packs same processor tech found in #IBMWatson" http://engt.co/l9uQLv - @engadget
 
This is what IBM has to say about the WiiU CPU:

I think you might want to consider what the company actually making the CPU says about it in their official press release.

In which they call it 'all-new'. It is seemingly contradictory to describe it as 'all-new' as well as an enhanced Broadway/Gecko.
Although I wouldn't read too much into the marketing terms used in press releases.

Personally I think a 476fp derivative would be the most sensible option for Nintendo to take, but based on Nintendo's past decisions I am prepared to be disappointed.
 
They tweeted almost the same thing as in the original PR. The "same processor technology as in IBM Watson" doesn't say much.
Other than that, I don't expect that official IBM account to state anything new wrt the WiiU tech that would break their agreements with Nintendo.
 
I see little advantage in ARM... who would have the expertise to design a high-performance SOC? NVIDIA is new to CPUs... I guess Samsung, maybe, but they don't quite have the reputation of IBM.
I think it will be an Intel COTS CPU or an IBM-designed SOC again, in which case ARM makes no sense. Cross-platform ISA compatibility just isn't relevant IMO.
Well, no matter the actual rumors (Nv is off the radar for now), the leaked (early) documents about Durango design goals pointed to either ARM or x86. They also pointed to an always-on device.
They also spoke about some "upgradability" in the design, if memory serves right.
Say (for the sake of the discussion; I would not put the odds high... at all) you have two SOCs, a low-power one and a high-power one. The low-power one is a Tegra something (3+ or better), with its own RAM, etc. It runs Windows 8 RT (with extra support for Kinect). It can be upgraded at any time. For MS it's straightforward: apps run on any Windows 8 RT device. No extra support needed (if Win8 RT is successful); plenty of apps and casual/social games would be available on the system "for free" from MSFT's POV (I mean, they still make cash while selling apps, but that's all).

Now there is the high-performance part. Indeed, Nvidia is still new to CPUs and they don't have high-performance CPUs (or even mid-performance ones), but those early docs, as well as the current rumors, point to a healthy number of CPU cores (8 comes up quite often).
Honestly, I doubt it's workable to have 4 modules / 8 cores in the device (going by the all-AMD rumors). It's big and it's hot. To me, if there really are 8 cores, they have to be pretty low-power cores. Hence, in the case of an AMD-based system, I would be close to discarding BD (or its heir) cores. That leaves Jaguar cores. We should learn more about those really soon now (today?).
Anyway, I don't have high expectations for those; I hope they are good, though.
The question is how 8 of those would compare to, say, 8 ARM Cortex-A9s within the same power envelope. I'm sure it's not easy to answer: whereas Bobcat was better per clock (I hope Jaguar is even better), those A9s can most likely be clocked higher within the same power budget.
Then there is the silicon cost, and the A9 may compare nicely to Jaguar cores; at least they do well vs Bobcat.

The whole question is what kind of CPU power the high-performance SOC provides. If AMD is the provider (and going by 8 for the number of CPU cores), I would say that high-performance CPUs are out of the question.
Nvidia may use the ARM A15 hard design, but packing 8 of those and clocking them nicely may be costly in both power and silicon.

Either way, if we are speaking of 8 threads with SMT, that means at most four cores, which leaves both AMD- and ARM-based solutions out of the picture. In that case it's either an Intel-based or an IBM-based solution.

Intel can provide both a low-power SOC for non-gaming operation (Atom) and high-performance CPUs, but they won't make a SOC blending anything but their own IP. IBM is different, but would provide only the latter.

Overall, whereas I would not bet on Nvidia, I think that they could deliver a solution every bit as good as, if not better than, an AMD-only platform (going by the early Yukon document requirements).
 
They tweeted almost the same thing as in the original PR. The "same processor technology as in IBM Watson" doesn't say much.
Other than that, I don't expect that official IBM account to state anything new wrt the WiiU tech that would break their agreements with Nintendo.

They also earlier tweeted clearly that it's Power7-family
 
Couldn't "same processor tech" be anything ranging from the manufacturing process to the use of eDRAM?

I'm not passing judgement on the Wii-U CPU, but I have a very hard time believing those are Power7 chips in that system.

The "same processor tech" statement is ambiguous, but the "same #power7 chips" one is not.
 
The "same processor tech" statement is ambiguous, but the "same #power7 chips" one is not.

True, which is why I was only commenting on the one quote. :p

It would be nice to have some clarification on the chips. I'm not even asking for exact specs, but something a bit more concrete would be nice. We don't even know how many threads the CPU has, do we?
 
They also earlier tweeted clearly that it's Power7-family
Well I missed that :oops: Flew through the last posts too quickly.

I'm still not sold; I find it odd that IBM would give more information than Nintendo itself.
Were they allowed by Nintendo to release more info by proxy because enthusiasm about the WiiU is low right now?
It's not the most successful way to do it, imo. Either way, somebody is trying to make fools of us.


It also doesn't match what we've heard so far. The Power7 is "big": it needs an L3 to function (like Intel's recent architectures), and it supports 4 threads per core.
So, with supposedly two other tiny cores, the system would support 6 hardware threads (pretty much like the 360).
Then, do the two other cores support the exact same instruction set? If not, the system will no longer be an SMP set-up, and it won't be straightforward to move a thread from one core to another.
If they do, it looks like a lot of work. IBM has some soft-core designs (the 470S) that they could tailor for the WiiU, but making modifications to the Power7 itself also sounds like a lot of work (especially to critical parts like the front end and the cache hierarchy, not to mention that if you change those, can you still call the CPU a Power7?).

So are we left with:
3 custom CPUs based on the 470S
3 lightly modified Power7 cores
1 Power7 + 2 custom CPUs, not exactly an SMP set-up
1 Power7 + 2 custom CPUs, SMP, with both core types supporting exactly the same instruction set (though performance varies depending on where you execute your code)

I would discard the 3 Power7 cores on power dissipation alone, even if they are clocked pretty low.
I would also be surprised to hear what we're hearing wrt performance if that were the case.

The last option would be interesting, but that's a lot of work (modifying two different cores, especially the super/crazy-complex Power7).

The "not exactly SMP set-up" could be an interesting option: the Power7 is off the shelf (not worth the money to modify it) and the other cores are built to Nintendo's specs. They would be the ones running when BC is involved (for the specific SIMD instructions, for example). I would almost regret that Nintendo didn't push further and add more tiny cores (like four instead of two).
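The thread arithmetic behind those options can be sketched quickly. The SMT widths here are assumptions on my part: Power7 at SMT4 (as discussed above), the 470-class/custom tiny cores assumed single-threaded, and Xenon at 2 threads per core for comparison:

```python
# Hardware-thread tally for the candidate WiiU CPU configs discussed above.
# Assumed SMT widths: Power7 = 4 threads/core, 470-class/custom tiny cores
# = 1 thread/core, Xenon (360) = 2 threads/core.

def hw_threads(cores):
    """cores: list of (core_count, smt_width) tuples; returns total hardware threads."""
    return sum(n * smt for n, smt in cores)

xbox360       = hw_threads([(3, 2)])          # 3 Xenon cores, SMT2 -> 6
three_470s    = hw_threads([(3, 1)])          # 3 custom 470-based  -> 3
three_power7  = hw_threads([(3, 4)])          # 3 Power7 cores      -> 12
power7_plus_2 = hw_threads([(1, 4), (2, 1)])  # 1 Power7 + 2 tiny   -> 6

print(xbox360, three_470s, three_power7, power7_plus_2)
```

Which is why the "1 Power7 + 2 tiny cores" configuration lands on the same 6 hardware threads as the 360, while 3 Power7 cores would overshoot at 12.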

Either way, they are playing with words, and my guess is we have the first option.
 