Broadway specs

If it is, it doesn't show it in the datasheet. I'm not seeing any additional FUs here, or extra scheduling logic, pipeline stages, etc., that might account for a difference. Since "50% more logic" is your theory, one would think you might do the legwork and read the datasheets for both the 750CL and 750CXe to compare -- since the answer is literally right in front of you. I think all of the reasons I've given are far more valid (less than 50% reduction in die size from the new process, fan-out on such a small core), not to mention the fact that this 11 sqmm figure is a made-up number and in all likelihood inaccurate.

It's not as easy as looking through the datasheets for the 750CXe and 750CL to find the differences, because Gekko was already different from the 750CXe, and it's never easy to spot the differences between two different datasheets to begin with. By the way, 11mm^2 is in no way a made-up number.

It will be close to that, but you are talking about the average feature size, so it's not going to be exact depending upon the chip. I'll be honest and say that I can't even believe people are disputing IBM's datasheet because someone's die estimate was off by 5 sqmm.

You keep talking about a 5mm^2 difference as if it's so small as to be insignificant. If you were talking about CELL then it would be, but when you're talking about a CPU which should only be 11mm^2 in the first place, it's anything but.
 
While that is true (Gekko's extended FP ISA is fairly primitive compared to AltiVec, even before we consider half the throughput, etc.), the dual-FP nature of Gekko's SIMD means free/direct component access and simple access to unaligned data.
So a 4-way SIMD with a similar kind of programmer access (well, throw in some enhancements to the ISA and the mapping of the register set) would be considerably more efficient in FP SIMD (as far as attaining peak performance goes) than AltiVec can ever be. The real-world example of which we got circa 2 years ago - just not in an IBM CPU (obviously). :p
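
To make the component-access point concrete, here is a rough sketch (an illustration only, not Gekko or production AltiVec code: the first half is the well-known lvsl/vec_perm unaligned-load idiom and assumes a PowerPC compiler built with AltiVec support, while the "paired single" half is just plain scalar C++ standing in for what a 2-wide FP unit lets the compiler do with direct component access):

#include <altivec.h>   // PowerPC AltiVec intrinsics; build with -maltivec

// 4-way SIMD (AltiVec): vector loads are 16-byte aligned, so pulling a
// vector from an arbitrary float* takes two aligned loads plus a permute.
// This is the classic lvsl/vec_perm idiom.
vector float load4_unaligned(const float *p)
{
    vector float lo            = vec_ld(0, p);    // aligned quadword containing p
    vector float hi            = vec_ld(15, p);   // the following quadword
    vector unsigned char shift = vec_lvsl(0, p);  // byte-select mask derived from p's misalignment
    return vec_perm(lo, hi, shift);               // splice out the 16 bytes actually wanted
}

// 2-wide "paired single"-style access: plain scalar code standing in for what
// Gekko's 2-wide FP unit gives the compiler -- each component is addressed
// directly, and a misaligned pair is just two ordinary float loads.
struct ps2 { float a, b; };

ps2 load2_unaligned(const float *p)
{
    ps2 r;
    r.a = p[0];   // "free" component access, no extract/permute step
    r.b = p[1];
    return r;
}

The permute dance in the first function is exactly the kind of overhead a wider SIMD with a friendlier access model would let you skip.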

*if that's what i'm thinking, then..*

hmm, i wonder how well-templatized vector c-code would run on that 'real world example'...

*gives half a kingdom for a _good_ vectorizing compiler for the 'real world example'*
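
For what it's worth, 'templatized vector c-code' here means roughly this sort of thing -- a hypothetical toy example, with made-up names and a fixed 4-wide layout chosen purely for illustration; whether a vectorising compiler actually turns the inner loops into SIMD instructions is entirely up to the compiler:

#include <cstddef>

// Toy templatized vector type: fixed-size component loops that an
// auto-vectorising compiler could, in principle, map onto a SIMD unit.
template <typename T, std::size_t N>
struct vec {
    T v[N];

    vec operator+(const vec &o) const {
        vec r;
        for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] + o.v[i];
        return r;
    }
    vec operator*(T s) const {
        vec r;
        for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] * s;
        return r;
    }
};

typedef vec<float, 4> vec4f;

// A kernel that looks trivially vectorisable on paper -- fixed trip counts,
// simple arithmetic -- yet getting real vector instructions out of it still
// depends entirely on how good the compiler is, hence the half-kingdom offer.
void scale_and_add(vec4f *out, const vec4f *x, const vec4f *y, float a, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = x[i] * a + y[i];
}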
 
There is an interesting developer diary on IGN Wii regarding Elebits and physics on Broadway:
http://wii.ign.com/articles/737/737553p1.html

We had another interesting experience with the Wii hardware when we started to implement our physics system. The strength of the Wii is controls rather than crunching computations, but we wanted to use a new physics engine to enhance the sense of realism in the game. It was challenging, but we pulled it off and our physics system is now a great showcase for the game. Every object has an appropriate weight and interacts realistically with the environment.

So it seems implementing complex physics on Broadway is doable, but far from easy... OTOH, Elebits is a launch title.
 
By the way, 11mm^2 is in no way a made-up number.
Then source it.


You keep talking about a 5mm^2 difference as if it's so small as to be insignificant. If you were talking about CELL then it would be, but when you're talking about a CPU which should only be 11mm^2 in the first place, it's anything but.
IBM might not be able to fit the I/O pads onto an 11mm^2 die, so they had to up it to 16mm^2.

Cheers
 
I thought generally that's what happened.

For some parts yes, for others no.

Wires don't shrink much (if at all) and some types of logic (IIRC ROMs) don't shrink very well.
 
Then source it.

Source what? The fact that a chip of Gekko's size should logically shrink to 11mm^2 on a 90nm process? Surely such a basic chip shouldn't be an exception?
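
(For reference, the arithmetic behind that figure: if Gekko is taken to be roughly 43-44mm^2 on the 180nm process -- an assumed figure, not an official one -- then a perfect shrink to 90nm scales area by (90/180)^2 = 0.25, i.e. about 44 x 0.25 ~ 11mm^2. Whether a real design ever shrinks that ideally is exactly what's in dispute here.)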

IBM might not be able to fit the I/O pads onto an 11mm^2 die, so they had to up it to 16mm^2.

So exactly what do we think IBM did with this chip to justify any time and money whatsoever? They took a chip that wasn't even close to being complex 6 years ago and didn't improve it in the slightest. They couldn't manage to fit this extremely simple chip in the amount of space a chip of its size should fit in. Then finally, they couldn't manage to produce an extremely simple 6-year-old chip they did absolutely nothing with until a couple of months ago. Anyone actually believe that?...
 
So exactly what do we think IBM did with this chip to justify any time and money whatsoever? If some are to be believed, they didn't improve the chip in the slightest. But not only that, they couldn't even manage to fit a 6-year-old chip design in the amount of die it normally would fit in. Then finally, they couldn't manage to produce a chip they did absolutely nothing with until a couple of months ago. Anyone actually believe that?...
You know, that argument makes a lot of sense. What have IBM been doing? But I see parallels with RSX too. After a couple of years-ish and a few dozen engineers, what exactly did nVidia do with RSX? It's a G7n with a few ROPs disabled and a different clockspeed. How much effort is it to add FlexIO support?

Generally, correlating human work-hours with end results seems an unknown. I don't think we can ever say 'this company worked n years on this project, so should have achieved x amount of progress.' Maybe both RSX and Broadway have differences from the source designs that account for the time taken to produce them. Until we know that, it seems, no matter how implausible to common sense, that companies can spend lots of effort doing not-a-great-deal.
 
You know, that argument makes a lot of sense. What have IBM been doing? But I see parallels with RSX too. After a couple of years-ish and a few dozen engineers, what exactly did nVidia do with RSX? It's a G7n with a few ROPs disabled and a different clockspeed. How much effort is it to add FlexIO support?

Generally, correlating human work-hours with end results seems an unknown. I don't think we can ever say 'this company worked n years on this project, so should have achieved x amount of progress.' Maybe both RSX and Broadway have differences from the source designs that account for the time taken to produce them. Until we know that, it seems, no matter how implausible to common sense, that companies can spend lots of effort doing not-a-great-deal.

There are some parallels, but only loose ones IMO. G70 to RSX may not be much of a difference, but at least Nvidia were actually developing the G7x architecture during PS3's development time. The technology only first arrived in finished silicon in the PC space a year ago. Apparently Broadway was basically finished a few years before Wii was even a twinkle in Nintendo's eye.
 
Apparently Broadway was basically finished a few years before Wii was even a twinkle in Nintendo's eye.

I'm still thinking the Wii we are getting is not what was originally planned... I think Nintendo initially had a much more ambitious design in mind, and that a combination of factors (relatively disappointing NGC sales, DS success, 360 early launch, desire to produce a very small and quiet console with full NGC BC...) made them change their mind.
 
Source what? The fact that a chip of Gekko's size should logically shrink to 11mm^2 on a 90nm process? Surely such a basic chip shouldn't be an exception?

Ah. So you're relying on your experience as a process engineer to determine the die size of the 750CL.
 
Source what? The fact that a chip of Gekko's size should logically shrink to 11mm^2 on a 90nm process? Surely such a basic chip shouldn't be an exception?

Ah, so you made it up.

You presume that all metal layers have been scaled down perfectly. You also presume that no modification was made to the transistors to lower leakage (the shorter the distance between source and drain, the higher the leakage, so you only make your critical-path transistors super short). You also presume that IBM isn't pad limited at 90nm - you have to fit the same number of pads to the substrate onto the much smaller die.

Look at other processors: AMD's Athlon 64 shrunk from 144mm^2 to 84mm^2, not 69mm^2
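
(For reference: those are the 130nm and 90nm parts, if memory serves. An ideal shrink would scale area by (90/130)^2 ~ 0.48, i.e. 144mm^2 x 0.48 ~ 69mm^2 -- which is where the 69mm^2 comes from, and the actual 84mm^2 shows how far real designs fall short of perfect scaling.)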

Cheers

*edit: removed Northwood vs Prescott comparison
 
Well it's nice to see they are keeping compatibility with the GC, much like an upgraded PC keeps compatibility. AFAIK the Wii is not emulating GC games, and if it weren't for the new interface it would play GC games out of the box. This to me makes much more sense than just throwing out your architecture every 5 years and starting from scratch. Must be a pain in the ass for console devs to have to learn an all-new architecture every five years, and it must drive costs up.
Could we see a future where you no longer buy specific console "Wii" or "Wii2" games, but simply a "Nintendo" game that works on any new Nintendo system, like PC?
 
I really believe that Nintendo intends to go portable with this design, rather soon. Why else stress such a low power design?

Since nobody else mentioned it, I'll spout the Nintendo R&D explanation. Paraphrased, taken from the "Iwata Interviews" here.

Wii's efficient power usage enables it to be on 24/7 in WiiConnect24 without damaging the hardware or using too much energy

Adapting it for portable use is a great idea, too. It's really a matter of when the DS begins its decline.
 
This to me makes much more sense than just throwing out your architecture every 5 years and starting from scratch.
Generally I like the new architecture. Backwards compatibility has been a stone around PCs' necks, holding them back and severely limiting how they can develop. The fact that a new design has to support old methods totally limits the direction PC designs can go.

Looking at Cell, we see a great example of where no regard for BC or other compatibilities has created an innovative, powerful and flexible architecture.

Could we see a future where you no longer buy specific console "Wii" or "Wii2" games, but simply a "Nintendo" game that works on any new Nintendo system, like PC?
That would be diabolical. Devs would have to target multiple systems with the same software, which'll lead to underused hardware and gimp'd higher level games (just like PC). If they're going to do that, Nintendo may as well become PC software+hardware developers!
 
That would be diabolical. Devs would have to target multiple systems with the same software, which'll lead to underused hardware and gimp'd higher level games (just like PC). If they're going to do that, Nintendo may as well become PC software+hardware developers!

I was thinking more like Wii2 would simply be an upgrade of Wii with more power and the same controller, not multiple new systems existing at once. So Nintendo systems would resemble PCs in how they evolve from one generation to the next, retaining complete compatibility in every way but simply increasing processing power.
I know I didn't explain that very well. Each new system would be completely compatible with the last, so at some point, once the old incompatible systems were off the market (GC), it would just be a "Nintendo" system and "Nintendo games" instead of Wii or GC or N64, and all new games would be compatible.
 
Generally I like the new architecture. Backwards compatibility has been a stone around PCs' necks, holding them back and severely limiting how they can develop. The fact that a new design has to support old methods totally limits the direction PC designs can go.

But that stone has helped increase the popularity of the PC by allowing older software to continue to work. Obviously from a very hardcore, cutting-edge standpoint you want new stuff, but trying to appeal to a wider audience needs a more measured approach, no?
 
That would be diabolical. Devs would have to target multiple systems with the same software, which'll lead to underused hardware and gimp'd higher level games (just like PC). If they're going to do that, Nintendo may as well become PC software+hardware developers!

Shifty, the issue with PC games is not so much in the machines' BC capabilities as it is in the fact that there's no such thing as a well-defined target hw platform on the PC front. and if you decide to target a more-or-less well-defined high-end-ish PC configuration you immediately obliterate your audience. IOW the problem with PC game development is that you have to deliberately produce software in BC mode to maximise your base. the situation with consoles is a bit different - if you code for the ps2 you don't have to have the ps1 in mind. same with wii - a wii title does not need to know of GC hw. so the fact that your target platform can also run legacy code is totally transparent to your code.

of course, with all that said, hw BC in consoles does not come for free. yes, tradeoffs get to be made, and yes, something gets sacrificed along the way - something which could have contributed to the new generation of software. but the question that stands then is: what is more important to you - a couple more advancements in the new titles, or the ability to run your favourite title from last gen past the lifespan of your old console?
 
There is an interesting developer diary on IGN Wii regarding Elebits and physics on Broadway:
http://wii.ign.com/articles/737/737553p1.html



So it seems implementing complex physics on Broadway is doable, but far from easy... OTOH, Elebits is a launch title.

Although improbable, I wonder if they did get the proper speed and sense of weight to the objects.

Although many may not think so, I personally think this game is a nice showcase for the HW. It has more and better physics than any of the last-gen games, complex AI (they describe it briefly in an IGN article) with a lot of AI agents, complex scenarios and, from what I have seen, nice animation. It only seems to lack particle FX (IIRC there aren't many, if any at all). I even like the gfx they are presenting now, although it needs some AA (still, I would like to see something more, like shadows and self-shadowing).

This game alone seems to bring more than we saw in all of last gen; it just needs AA, more accurate physics and good shadowing to be perfect (I guess particle FX don't fit in this game).
 
I'm still thinking the Wii we are getting is not what was originally planned... I think Nintendo initially had a much more ambitious design in mind, and that a combination of factors (relatively disappointing NGC sales, DS success, 360 early launch, desire to produce a very small and quiet console with full NGC BC...) made them change their mind.

being somewhat of a proponent of this theory myself, let me bring to the inquiring minds the following Globe and Mail interview with Ron Bertram, general manager of Nintendo of Canada Ltd.

Nintendo decided after it began developing the Wii that it was time to shift course and exit this arms race. If there was a single breakthrough moment in the company's strategy, it was probably in late 2003 when president Satoru Iwata told middle-aged board members like himself that Nintendo didn't make anything he could play.

Mr. Iwata had been struggling to come to terms with a decline in interest in gaming in Japan, and he kept returning to the same conclusion: Nintendo had to expand the market beyond hard-core players.
 
"We started developing Wii right after Nintendo launched the GameCube. You know, as soon as we complete one system, we start thinking about the next one. Needless to say, we don’t design new components or technologies from scratch. Rather, we have to base our designs on existing technologies. In the world of technology, there are so-called Roadmaps (overviews of proposed technologies/products) that are used by each industry in order to make general forecasts about where semiconductor technology is heading, as well as the evolution of disc and wireless technologies. Engineers and developers normally refer to these Roadmaps while developing hardware that they plan to release in the future. Looking again at the completed Wii, I feel that it has turned out to be something completely different from what was predicted in the mainstream technology Roadmaps.
Iwata: What gives you that impression?


Takeda: This may sound paradoxical, but if we had followed the existing Roadmaps we would have aimed to make it “faster and flashier.” In other words, we would have tried to improve the speed at which it displays stunning graphics. But we could not help but ask ourselves, “How big an impact would that direction really have on our customers?” During development, we came to realise the sheer inefficiency of this path when we compared the hardships and costs of development against any new experiences that might be had by our customers.
Iwata: When did you start feeling that way?
Takeda: It must have been about a year after we started developing Wii. After speaking with Nintendo's development partners, I became keenly aware of the fact that there is no end to the desire of those who just want more. Give them one, they ask for two. Give them two, and next time they will ask for five instead of three. Then they want ten, thirty, a hundred, their desire growing exponentially. Giving in to this will lead us nowhere in the end. I started to feel unsure about following that path about a year into development.
 