Wii: More Than Meets the Eye *Spin-off*

Well, now we know that:

If Napa is the GPU and has 24MBytes of eDRAM as cache, then there would be about 47.76mm2 left for pipelines, texture units, etc., and we would have 72mm2 of eDRAM in Vegas (something I forgot to calculate in my previous post).

vegas 72mm2 = 40.9MBytes

Now, if Vegas is the GPU, then we would end up with 50MBytes in Napa plus the multimedia application processor (i.MX27).

I have heard many people say that Vegas has 3MBytes of eDRAM.

3MBytes of UX6D = 3 MBytes * 1mm2 / 0.568181818MBytes
3MBytes of UX6D = 5.28mm2

leaving 72mm2 - 5.28mm2 for the GPU space
But since Nintendo has stated that the Wii can be upgraded for HD, and that they haven't done it yet only because they haven't found enough reason to do so, maybe Vegas has 5 or 6 MBytes of eDRAM cache so it could handle that without trouble:

5MBytes = 8.80 mm2
6MBytes = 10.56 mm2
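These conversions can be checked with a few lines of Python (a sketch; the density figure is this thread's own estimate, not an official NEC number):

```python
# Back-of-the-envelope check of the eDRAM area figures above, using the
# UX6D density assumed in this thread: 1 mm^2 ~= 0.568181818 MBytes.
UX6D_MB_PER_MM2 = 0.568181818

def edram_area_mm2(mbytes):
    """Die area needed for the given amount of UX6D eDRAM."""
    return mbytes / UX6D_MB_PER_MM2

for mb in (3, 5, 6):
    print(f"{mb} MBytes -> {edram_area_mm2(mb):.2f} mm^2")
# 3 -> 5.28, 5 -> 8.80, 6 -> 10.56, matching the figures above
```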

Well, this last part is just an assumption; maybe Vegas could use part of the eDRAM on Napa to achieve 720p, since the Xbox 360 does it that way, but employing NEC UX6 rather than the UX6D used in Hollywood.


Well, you might ask by now: why is it important to know the die size of the GPU?
Because this way we can compare it with a Flipper made at 90nm instead of 180nm and see the difference in die space.

Flipper had 3MBytes of 1T-SRAM (manufactured by NEC) and a die size of 110mm2 at 180nm, but others say that Flipper measured 120mm2, so we will take both possibilities into account.

http://techon.nikkeibp.co.jp/english/NEWS_EN/20061127/124495/

http://www.segatech.com/gamecube/overview/

Flipper 180nm = 110mm2
Flipper 90nm approx = 110/4
Flipper 90nm approx = 27.5mm2

Another thing we have to take into account is that the embedded memory on the GameCube's Flipper made up almost half of the die, approximately 3/8 according to the image displayed in the last link.

Flipper 90nm without embedded memory
27.5mm2 * 3 /8 = 10.3125mm2
27.5mm2 - 10.3125mm2 = 17.1875mm2

So, if Vegas were the GPU with 3MBytes of eDRAM (72mm2 - 5.28mm2 = 66.72mm2), then:
Vegas approx = 66.72mm2 / 17.1875mm2 = 3.88189 times larger

Vegas with 5MBytes of eDRAM (72mm2 - 8.8mm2 = 63.2mm2):
Vegas approx = 63.2mm2 / 17.1875mm2 = 3.67709 times larger

If Napa were the GPU (47.76mm2 for the GPU, considering 24MBytes of embedded eDRAM as cache):

Napa GPU approx = 47.76mm2 / 17.1875mm2 = 2.77876 times larger
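The whole comparison can be reproduced in a short script. Everything here follows this thread's own assumptions (ideal 4x area shrink from 180nm to 90nm, embedded memory at 3/8 of Flipper's 110mm2 die), so treat it as back-of-the-envelope only:

```python
# Die-area comparison under the thread's assumptions:
# - ideal scaling: 180nm -> 90nm quarters the area
# - embedded memory occupied ~3/8 of Flipper's die
flipper_180nm = 110.0
flipper_90nm = flipper_180nm / 4              # 27.5 mm^2
flipper_logic = flipper_90nm * (1 - 3 / 8)    # 17.1875 mm^2 without eDRAM

vegas_3mb = 72.0 - 5.28    # Vegas minus 3 MBytes of UX6D
vegas_5mb = 72.0 - 8.80    # Vegas minus 5 MBytes of UX6D
napa_gpu = 47.76           # Napa logic if it holds 24 MBytes of eDRAM

for name, area in [("Vegas, 3MB eDRAM", vegas_3mb),
                   ("Vegas, 5MB eDRAM", vegas_5mb),
                   ("Napa as GPU", napa_gpu)]:
    print(f"{name}: {area / flipper_logic:.2f}x Flipper logic")
```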

Hahaha, just as Satoru Iwata said: Wii will be 2 to 3 times more powerful than GameCube.
 
What's the desired end result anyway? We've seen what the hardware can do. It's certainly not much beyond Gamecube regardless of the numbers.

Even 360 isn't particularly impressive at this point in time and it is clearly superior in every way compared to Wii.
 
Right now there is only one company I trust the most, and that's Factor 5.

Factor 5 Wii engine 'does everything the PS3 did, and then some'
http://www.joystiq.com/2008/02/12/factor-5-wii-engine-does-everything-the-ps3-did-and-then-some/

Of course, to achieve that they would have to make proper use of Hollywood's GPGPU, but that's something I will cover later, maybe tonight or maybe tomorrow.

Here I will leave you a good reason why even those developers who know the Wii well haven't made proper use of it yet.

Tim Sweeney(from Epic Games): GPGPU Too Costly to Develop
http://www.tomshardware.com/news/Sweeney-Epic-GPU-GPGPU,8461.html


But that could be solved with tools and SDKs that would ease the work, of course; I just don't know if that's going to happen right now.

For now, just consider the Wii's Hollywood to be based on the ATI R520 (X1000 family). I cannot say which of those cards it is based on, but my guess is that the Wii's GPGPU integrates the most convenient features of all of them.
 
GPGPU? Do you want to run Folding @ Home on your Wii or something?

Are you referring to GPU physics processing? You are way overestimating the hardware inside Wii. Who cares if it's theoretically capable of running general purpose code. It can barely handle modern graphics code.

And realize that companies will say just about anything short of a complete lie to hype their products. Let me know when there's a game on Wii that matches anything on PS3 or 360. Dead Space Extraction and The Conduit are pretty decent and the best I've seen on it, but they are definitely reminiscent of the old Xbox more than anything from the newer machines.
 
Precisely. As long as most of the Wii community underestimates the system, studios won't feel the need to deliver good games for the hardcore audience. This is precisely why I started my research.

To make everyone know the true specs of the Wii, so that people would pressure Nintendo and studios into releasing more hardcore, graphically appealing games.

Sincerely, I think Nintendo didn't want to give away the details of the Wii and decided to concentrate at the beginning on just the Wiimote. That way it wouldn't have to compete with either Sony or Microsoft, since GPGPU technology was too new back then and few tools were available to make proper use of it; but that's changing.

Just remember what happened to systems like the Atari Jaguar, a good example that horsepower is nothing if the hardware is difficult to use and expensive.

We could say that Nintendo has followed the Atari Jaguar's example but with a different approach. Nintendo had the Wiimote to make people concentrate on just gameplay, and secretly hid Hollywood's true nature so that customers wouldn't complain, like we do now, about Nintendo not releasing games like those on the 360 and PS3.

That's the conclusion I have ended up with.

You just have to think as an industry and everything will make sense.
 
The GPGPU technology is going to be used for displacement mapping.

But something has bothered me since I read those patents. If you read them carefully, they indirectly claim that the Wii's Hollywood has a GPGPU. Just pay close attention to things like dot products and floating-point units in the rendering pipeline; then check the definition of GPGPU on Wikipedia and you will understand.

Of course, the displacement mapping patents are just the beginning of the other proofs I have.

http://nintendo-revolution.blogspot....st-secret.html

While the poly count is significantly lower, there is some strain on the CPU. Johannes Hirche writes:

Rendering displacement mapped surfaces is a process that involves a significant number of geometric and arithmetic operations. When applied to a triangle mesh, it involves prior retessellation of the base domain surface and transformation of the vertices and normals. Even on fast CPUs, it is a time consuming operation, wasting bandwidth and processing power.

This is why displacement mapping has not been widely used in real-time graphics. However, new and refined techniques allow for displacement mapping to be implemented in real-time. Again, Johannes Hirche writes:

The main focus was to explore new techniques suitable for hardware implementation in order to reduce the bandwidth strain on the system bus by moving the tessellation process onto the graphics subsystem. (...) A possibility to overcome these problems is to tessellate the individual triangles sequentially and to adaptively add triangles where necessary, until a desired level of accuracy is reached. (...) With only minor user interaction or conservatively predefined input parameters the sampling schemes produce adaptive tessellations with very low error measures.

This is why the Wii has a GPGPU, since GPGPUs can theoretically perform arithmetic and floating-point calculations 100 to 250 times faster than high-end CPUs.

And if you consider displacement mapping an implementation problem, do not worry: it has been available since the ATI 9700, just not for gaming or movies. And:

According to ATI, the GameCube's Flipper could perform displacement mapping with tricks on the hardware; of course, the GameCube would only produce a displacement-mapped image, and no games could actually make use of it.

ATI: GameCube capable of displacement mapping, but not for gaming
http://ati.amd.com/developer/gdc/GDC...ppingNotes.pdf


As you can see, the problem with displacement mapping is that it would require a lot of power for real-time graphics. This is why displacement mapping has not been widely used in real time: it would require a lot of bandwidth (a powerful CPU, a lot of main memory, fast memory, wide buses, etc.).

I believe displacement mapping would be feasible on the PS3 thanks to Cell and its fast main RAM, but on the Xbox 360 I have my doubts, since it has GDDR3 as main RAM; regardless of it being 512MBytes, the key for displacement mapping is performance. Not to mention that I am not sure whether one or even two cores would be enough for the arithmetic and floating-point workload of this technique.

On the Wii, the only problem is that the GPGPU language is different from the conventional GPU one, so if developers want to make use of displacement mapping, Nintendo would have to provide very sophisticated tools (expensive too, though I don't know how much) and SDKs to ease the work and make proper use of it without headaches.
 
For 24MBytes of 1T-SRAM at 90nm you would require 211mm2,

and neither Vegas nor Napa has that much room.

Not even 1T-SRAM-Q at 90nm would be enough, since you would still need more than 100mm2 of room for 24MBytes.



And let's not forget what the official MoSys homepage says about their available macros:

only 1T-MIM is available at 90nm and below.

If you check this document here: http://www.tsmc.com/download/english/a05_literature/Emb_HDM.pdf

You will find that cell density is much higher than the figures you're using (I'm assuming wikipedia).

"TSMC has partnered with experienced 1T IP providers including MoSys and MOSAID"

"The cell size is ~0.21-0.23-micron^2 for 90nm 1T-MiM", which is also ~0.21-0.23 mm^2 per Mbit -> 40-44 mm^2 for 24MB.


This is clearly quite a bit smaller than either chip, but you also have to consider pad limits and wiring interface between the two chips. The size of a chip is not just the manufacturing process.

Of course, it should also be clear that different manufacturing plants will have different results with process technology, but even if you include some mythical "overhead", it would be less than double. If you use the wiki table where 1.1/0.61 (overhead/no overhead), that would lead you to 72.7mm^2 - 79mm^2.

-----> 72.7mm^2 to 79mm^2 with "overhead" and TSMC's figures for density for 24MB 1T-SRAM MiM.
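For what it's worth, the raw 40-44mm^2 figure is easy to reproduce (using decimal Mbit, consistent with the per-Mbit figure quoted above):

```python
# 24 MBytes = 192 Mbit; TSMC quotes ~0.21-0.23 micron^2 per 90nm 1T-MiM
# bit cell, i.e. ~0.21-0.23 mm^2 per (decimal) Mbit.
MBIT = 24 * 8  # 192 Mbit
for mm2_per_mbit in (0.21, 0.23):
    print(f"{mm2_per_mbit} mm^2/Mbit -> {MBIT * mm2_per_mbit:.1f} mm^2")
# -> 40.3 mm^2 and 44.2 mm^2, i.e. the 40-44 mm^2 range, before overhead
```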



Here is a link to show that TSMC is at least one of the major producers of Wii chips:

http://news.cnet.com/8301-13924_3-9934049-64.html?tag=mncol;title


Then again, why would Nintendo, if it had the choice, employ standard 1T-SRAM when it was more expensive and required more manufacturing steps than the newest implementations of the MoSys macros?

Each new technology MoSys comes up with is cheaper, so why use standard 1T-SRAM?

This is not a given, because new manufacturing processes are by nature and definition not a mature technology. It is very much an early-adopter approach, so costs can still remain high. It will be cheaper to use a well-known manufacturing process.



Hahaha, man, why are you making examples with games that were originally destined for the GameCube and then just optimized a little for the Wii?

Mario Galaxy is nothing more than a GameCube port that was optimized a little for Wii, just like Zelda: Twilight Princess and even Super Smash Bros. Brawl.

The "lazy devs" excuse is hardly valid, especially for first party development teams. If the resources were there, they would have used it.

Are you saying that a 2008 game (SSBB), released one and a half years after the Wii's release, was just some shoddy development?
As a first party team, Mario Galaxy's developers would also have had ample time for their Holiday 2007 release to have the majority of development on Wii.


Plus, if Nintendo has not yet upgraded the resolution, it is because it's convenient for them: this way they earn more profit and can sell the games at a cheaper price.
Setting up the back buffer to use a higher rendering resolution does not cost them anything more. Since they already have a component video peripheral (at a nice high price tag), they could output high definition signals and use that as incentive for people to buy such an overpriced accessory if they so chose.



However, as far as the Wii is concerned, we have not found a significant reason to make it HD-compatible at this time.
There is a world of difference between HD-compatible and HD-compliant. Compatible would mean upscaling the front buffer that goes to the output.
 
Hello mr. tapionvslink. I think you've got yourself all muddled up.

The GPGPU technology is going to be used for displacement mapping.

Generally, an application of GPGPU programming is something that isn't a graphical effect. Especially not using a fixed function capability like this form of displacement mapping.

But something has bothered me since I read those patents. If you read them carefully, they indirectly claim that the Wii's Hollywood has a GPGPU. Just pay close attention to things like dot products and floating-point units in the rendering pipeline; then check the definition of GPGPU on Wikipedia and you will understand.

Well I'm not sure what patents you mention, but dot-products are a fundamental part of any 3D graphics accelerator, going way way back to the early days of openGL and before. The most fundamental of these is a matrix multiplication, which is just 4 vector dot products. Absolutely basic stuff.
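To make that concrete, here is a trivial sketch (plain Python, purely illustrative) of a 4x4 matrix transform expressed as four dot products:

```python
def dot(a, b):
    """4-component dot product - the primitive every 3D accelerator has."""
    return sum(x * y for x, y in zip(a, b))

def transform(matrix_rows, v):
    """Transform a homogeneous vector by a 4x4 matrix: four dot products."""
    return [dot(row, v) for row in matrix_rows]

# Translating the point (1, 2, 3) by (10, 20, 30):
m = [[1, 0, 0, 10],
     [0, 1, 0, 20],
     [0, 0, 1, 30],
     [0, 0, 0, 1]]
print(transform(m, [1, 2, 3, 1]))  # [11, 22, 33, 1]
```

Fixed-function T&L hardware has done exactly this since long before anyone coined the term GPGPU.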

Of course, the displacement mapping patents are just the beginning of the other proofs I have.

Patents are not proof. But I'm still curious to see what you have that proves the GPU in the Wii can perform a large portion of the tasks its CPU was designed to perform. (I apologize if you have already posted this; I'm a busy man.)

Johannes Hirche writes:

Rendering displacement mapped surfaces is a process that involves a significant number of geometric and arithmetic operations. When applied to a triangle mesh, it involves prior retessellation of the base domain surface and transformation of the vertices and normals. Even on fast CPUs, it is a time consuming operation, wasting bandwidth and processing power.

However, those operations are very similar to other operations in a very simple 3D pipeline like the Wii's. The tricky bit is subdividing the geometry. However, when a chip is designed it's often the case that small modifications - hacks if you will - can be made to exploit the existing capabilities of a chip to perform an otherwise complex operation. The limitation is that such a capability was usually very limited in its flexibility, which is why the various 3D tessellation and displacement mapping standards have pretty much all died.

This is why displacement mapping has not been widely used in real-time graphics. However, new and refined techniques allow for displacement mapping to be implemented in real-time.

Not the case. Displacement mapping hasn't been used in games because it's an absolute nightmare to author and tweak content for it. You are authoring textures instead of raw geometry. Also, generally, it's *far* less efficient use of resources than an artist optimised poly mesh. (speaking in the context of ancient graphics hardware, like that in the wii)

Johannes Hirche writes:
The main focus was to explore new techniques suitable for hardware implementation in order to reduce the bandwidth strain on the system bus by moving the tessellation process onto the graphics subsystem. (...) A possibility to overcome these problems is to tessellate the individual triangles sequentially and to adaptively add triangles where necessary, until a desired level of accuracy is reached. (...)

In other words, hack the hardware to make it do something 'cool', but of little practical value, because:

With only minor user interaction or conservatively predefined input parameters the sampling schemes produce adaptive tessellations with very low error measures.

It's really inflexible!

This is why the Wii has a GPGPU, since GPGPUs can theoretically perform arithmetic and floating-point calculations 100 to 250 times faster than high-end CPUs.

No.

And if you consider displacement mapping an implementation problem, do not worry: it has been available since the ATI 9700, just not for gaming or movies. And:

What?

According to ATI, the GameCube's Flipper could perform displacement mapping with tricks on the hardware; of course, the GameCube would only produce a displacement-mapped image, and no games could actually make use of it.

ATI: GameCube capable of displacement mapping, but not for gaming
http://ati.amd.com/developer/gdc/GDC...ppingNotes.pdf

'tricks on the hardware'. Exactly.

As you can see, the problem with displacement mapping is that it would require a lot of power for real-time graphics. This is why displacement mapping has not been widely used in real time: it would require a lot of bandwidth (a powerful CPU, a lot of main memory, fast memory, wide buses, etc.).

No. It requires a lot of flexibility to be of any use; it's just that fixed-function hardware was never very flexible. It's only now, with DirectX 11, that tessellation is becoming flexible enough that it may see some use in a selection of titles.

I believe displacement mapping would be feasible on the PS3 thanks to Cell and its fast main RAM, but on the Xbox 360 I have my doubts, since it has GDDR3 as main RAM; regardless of it being 512MBytes, the key for displacement mapping is performance. Not to mention that I am not sure whether one or even two cores would be enough for the arithmetic and floating-point workload of this technique.

They both can do displacement mapping. Displacement mapping in its simplest form is just geometry tessellation, with a texture lookup per generated vertex to sample the displacement value from a height map. Both machines have vertex texture fetch, and Xenos (the Xbox GPU) even has hardware tessellation - it's just that very few developers use it, because it's not really a useful thing to do.
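That "simplest form" can be sketched in a few lines of Python (the height map and scale here are made-up values for illustration): each height-map texel becomes a vertex, displaced along the up axis.

```python
def displace_grid(heightmap, scale=1.0):
    """Simplest-form displacement mapping: a regular grid of vertices,
    each displaced along Y by the sampled height value."""
    rows, cols = len(heightmap), len(heightmap[0])
    return [(x, heightmap[z][x] * scale, z)
            for z in range(rows) for x in range(cols)]

# A tiny 2x3 height map turns a flat grid into a displaced mesh.
hm = [[0.0, 0.5, 1.0],
      [0.2, 0.8, 0.1]]
verts = displace_grid(hm, scale=2.0)
print(verts[1])  # (1, 1.0, 0): the second vertex, lifted by 0.5 * 2.0
```

The hard parts in practice - adaptive tessellation and content authoring - are exactly what this sketch leaves out.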

On the Wii, the only problem is that the GPGPU language is different from the conventional GPU one, so if developers want to make use of displacement mapping, Nintendo would have to provide very sophisticated tools (expensive too, though I don't know how much) and SDKs to ease the work and make proper use of it without headaches.

There is no GPGPU language on the wii.
 
It's an illusion that Nintendo sells games at a cheaper price. A Nintendo-developed game that launches at $50 will remain at $50 for several years. Typically, first party Microsoft and Sony games drop in price at least 50% within 2 years, making them even cheaper than Wii games. And if you want to talk about ported game development costs reflecting on sales price, why not mention games that we know for a fact were ported, like Pikmin, Donkey Kong Jungle Beat, Mario Tennis, or Metroid Prime 2 & 3. They sold at truly discounted prices. There isn't even any evidence of Mario Galaxy having had any deep development on GC.

If you're really embarking on this quest for phantom hardware based on something you believe Factor 5 said, your trust is misplaced. Not even Factor 5 could trust Factor 5.
 
You saw what Factor 5 did with PS3, right? As in, all their amazing visual promises and PR material, only for the final game to drop well short. So why trust their promises regards a Wii engine, when all they're really trying to do isn't inform buyers, but to generate hype and interest?

Why don't you trust the hundreds of games already made? Why do you think that Nintendo, who design hardware to meet their own exacting requirements for their own software studios, aren't going to make use of the hardware they have put in their box? :oops: Nintendo know exactly what they've got, and they use it. The best they can manage is Mario Galaxies etc. Why don't you trust High Voltage Software, who set out to create the best possible Wii engine? You think in making the Conduit, they thought they'd leave out half the eDRAM and the displacement tech, just for the fun of it?

Your rationale makes no sense in light of other evidence. You're basically following a Dan Brown paper trail of loose connections and suppositions.
 
Question: Displacement mapping won't flatten even at grazing angles compared to all other forms of bump mapping, or did I read it wrong?
 
Yes. Displacement mapping actually displaces the surface of the object. Ordinarily this would be achieved by displacing vertices. I don't yet know of a system that can do per-pixel displacement in a conventional rasteriser, which seemed to be what Nintendo's early patents were for. eg. You can't have a flat 4-vertex quad ground and displace a terrain from it. You need lots of vertices to form a mesh and displace the vertices to make the terrain.
 


I saw those TSMC 1T-MIM documents a long time ago, and it is for sure 50% smaller than 1T-SRAM-Q; just check on Wikipedia the die size of the 1T-SRAM-Q (full macro) and take half of its area.

Besides, we are talking about NEC here; or do you see TSMC's name on the Hollywood package?

NEC is using MIM2 technology, not MIM; if you check their website I am sure you will see what I am saying.

http://www.am.necel.com/process/edramprocess.html

See, there is MIM and there is also MIM2.

But here is more proof from NEC that my calculations are right:
http://www.necel.com/magazine/en/vol_0039/vol_0039.pdf

Just read the part where NEC says that the GameCube had 24Mbits (3MBytes) of embedded DRAM and that the new UX6D can put as much as 256Mbits in half the size of the typical embedded-DRAM SoC.

256MBits = 32MBytes

Flipper =110mm2
half of Flipper = 55mm2

55mm2 = 32MBytes
1mm2 = 0.58181818MBytes


And do you remember that UX6D could place 1Mbit in 0.22mm2?

0.22mm2 = 0.125MBytes
1mm2 = 0.568181818MBytes

So isn't 0.568181818MBytes/mm2 approximately equal to 0.581818182MBytes/mm2?
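A quick sketch reproducing that cross-check (these are this thread's derived figures, not official NEC densities):

```python
# Two independent estimates of UX6D density, as derived above.
density_soc = 32 / 55        # 256Mbit (32MBytes) in half a 110mm2 Flipper
density_cell = 0.125 / 0.22  # 1Mbit (0.125MBytes) per 0.22mm2
print(f"{density_soc:.7f} vs {density_cell:.7f} MBytes/mm^2")
# ~0.582 vs ~0.568 MBytes/mm^2 - close, though not identical
```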
 


Do you remember Mario 128?

It started as a GameCube game, and then most of the work that had been done was transferred to the Wii and became Mario Galaxy.

And if Wii games sell at that price without HD, it only means more profit; adding HD would just reduce the profit.
 


Of course I trust games that are already made, like Resident Evil: Darkside Chronicles, Death Mountain, Gladiator, The Grinder, Silent Hill, Dead Space Extraction, etc.

Tell me, do you really believe you can achieve those graphics with just a 1.5x overclocked GameCube?


What's going on is obvious: right now they are using the GPGPU to take over the physics work normally done on the CPU. This way they can do the job not just faster, but they also save bandwidth and reduce strain on the CPU, and with the saved resources they can push the graphics even more. That's why Nintendo has acquired SDKs like Nvidia PhysX.
 


Mario Galaxy? Hahaha. Even ATI treated what Mario Galaxy showed at E3 2006 as just the tip of the iceberg, remember:

http://www.gamedaily.com/articles/features/ati-wii-graphics-at-e3-tip-of-the-iceberg/?biz=1#

Read:
ATI is also responsible for providing the custom GPU for Microsoft's Xbox 360, so we tried to find out how the "Hollywood" chip compares to what's in the 360. Once again, however, Swinimer sidestepped the question. "They're different chips for different platforms and different uses. I don't think it's a fair comparison to put them on a chart [to analyze]. That's not what it's all about... I think if you focus on the capabilities that the chip will have for the average consumer, with the amazement and wow factor, I think that's the value that we bring."

So is one GPU (Wii) different from another GPU (360)?
Or is a GPGPU (Wii) different from a GPU (360)?

Different chips for different uses.

So does a GPU have a different use from another, more powerful GPU?
Or does a GPGPU have different uses from a GPU with great power?


I would admit that Mario Galaxy manages good HDR, but HDR is one thing and overall graphics are another; there is no question that the most recent games like Resident Evil: Darkside Chronicles outperform Mario Galaxy in graphics.



I am telling you, Nintendo is starting to become a second Capcom: they squeeze something to the limit, and when the franchise cannot draw enough attention, they come up with something else.

Now, do you remember this?
http://www.videogamesblogger.com/20...10-could-it-have-holographic-data-storage.htm

Read this part:

“With Nintendo, developers like [Shigeru] Miyamoto decide. As long as they are comfortable with the current technology’s ability to deliver meaningful surprises to the users, we don’t need new hardware. However, when they start demanding something new, when they see the existing hardware can’t provide what they need, then that is when we decide to launch the new hardware. As for timing, it may be three years from now, five years from now or eight years from now.” That’s between 2012 and 2017, people.

Do you really think people will stick with a console like the Wii for eight more years, relying on just the Wiimote and maybe the Vitality Sensor?

That is the reasoning of an ordinary person, but we are talking about a company: there is no way people will keep playing the Wii for eight more years just for gameplay, with graphics at the level that has been shown so far.
 
Hello mr. tapionvslink. I think you've got yourself all muddled up.



Generally, an application of GPGPU programming is something that isn't a graphical effect. Especially not using a fixed function capability like this form of displacement mapping.



Well I'm not sure what patents you mention, but dot-products are a fundamental part of any 3D graphics accelerator, going way way back to the early days of openGL and before. The most fundamental of these is a matrix multiplication, which is just 4 vector dot products. Absolutely basic stuff.



Patents are not proof. But I'm still curious to see what you have that proves the GPU in the wii can perform a large portion of the tasks it's CPU was designed to perform. (I apologize if you have already posted this, I'm a busy man)...



However those operations are very similar to other operations in a very simple 3D pipeline like the wii's. The tricky bit is subdividing the geometry. However, when a chip is designed it's often the case that small modifications - hacks if you will - can be made to exploit the existing capabilities of a chip to perform an otherwise complex operation. The limitation is that capability was usually very limited in it's flexibility, which is why all the various 3D tessellation and displacement mapping standards have pretty much all died.



Not the case. Displacement mapping hasn't been used in games because it's an absolute nightmare to author and tweak content for it. You are authoring textures instead of raw geometry. Also, generally, it's *far* less efficient use of resources than an artist optimised poly mesh. (speaking in the context of ancient graphics hardware, like that in the wii)



In other words, hack the hardware to make it do something 'cool', but of little practical value, because:



It's really inflexible!



No.



What?



'tricks on the hardware'. Exactly.



No. It requires a lot of flexibility to be of any use; it's just that fixed-function hardware was never very flexible. Only now, with DirectX 11, is tessellation becoming flexible enough that it may see some use in a selection of titles.



They both can do displacement mapping. Displacement mapping in its simplest form is just geometry tessellation, with a texture lookup per generated vertex to sample the displacement value from a height map. Both machines have vertex texture fetch, and Xenos (the Xbox 360 GPU) even has a hardware tessellator - it's just that very few developers use it, because it's not really a useful thing to do.
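A rough CPU-side sketch of what that description amounts to (illustration only, with made-up data; real hardware does the per-vertex texture fetch in the vertex stage): tessellate an edge, sample a height map at each generated vertex, and push the vertex out along the normal.

```python
# Displacement mapping in miniature: tessellate a line segment, then
# for each generated vertex sample a height "texture" and displace the
# vertex along its normal by that height.

def sample_height(heightmap, u):
    # nearest-neighbour lookup into a 1D height map, u in [0, 1]
    i = min(int(u * len(heightmap)), len(heightmap) - 1)
    return heightmap[i]

def displace(p0, p1, normal, heightmap, subdivisions, scale=1.0):
    verts = []
    for s in range(subdivisions + 1):
        t = s / subdivisions
        # tessellation step: interpolate a new vertex along the edge
        x = p0[0] + (p1[0] - p0[0]) * t
        y = p0[1] + (p1[1] - p0[1]) * t
        # displacement step: push the vertex along the normal
        h = sample_height(heightmap, t) * scale
        verts.append((x + normal[0] * h, y + normal[1] * h))
    return verts

bumps = [0.0, 1.0, 0.0, 1.0]
print(displace((0.0, 0.0), (3.0, 0.0), (0.0, 1.0), bumps, 3))
# [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0)]
```

Nothing in this loop needs more than interpolation and a texture fetch, which is why simple displacement mapping maps onto old fixed-function-style hardware at all.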



There is no GPGPU language on the wii.

GPGPU, from Wikipedia: http://en.wikipedia.org/wiki/GPGPU
General-purpose computing on graphics processing units (GPGPU, also referred to as GPGP and to a lesser extent GP²) is the technique of using a GPU, which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the CPU. It is made possible by the addition of programmable stages and higher precision arithmetic to the rendering pipelines, which allows software developers to use stream processing on non-graphics data.



GPU functionality has, traditionally, been very limited. In fact, for many years the GPU was only used to accelerate certain parts of the graphics pipeline. Some improvements were needed before GPGPU became feasible.
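The "stream processing on non-graphics data" idea in that definition is simple to sketch (pure Python, illustration only): one small kernel is applied to every element of a data stream independently, the way a fragment shader treats pixels, except the data isn't colours.

```python
# GPGPU's stream-processing model in miniature: one per-element kernel
# applied independently across a whole stream of non-graphics data.

def kernel(x):
    # per-element program: no knowledge of its neighbours, which is
    # what lets a GPU run thousands of copies in parallel
    return x * 2.0 + 1.0

def run_stream(stream):
    return [kernel(x) for x in stream]

print(run_stream([0.0, 1.0, 2.0]))  # [1.0, 3.0, 5.0]
```

The catch, as the article notes, is that this only became practical once the pipeline stages were programmable and had enough arithmetic precision - which is exactly what's in question for a fixed-function design like Hollywood's.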

-------

Read the patents that talk about the arrangement of the rendering pipeline, and the part that says there would be a reduction of strain on the CPU.



2. Why could displacement mapping be Nintendo related?

Firstly, there is a Nintendo patent that has caused this topic to crop up in this community before. It is entitled 'Method and apparatus for efficient generation of texture coordinate displacements for implementing emboss-style bump mapping in a graphics rendering system'. Its abstract is a bit of a mouthful, unfortunately. Read my highlights, though:
A graphics system including a custom graphics and audio processor produces exciting 2D and 3D graphics and surround sound. The system includes a graphics and audio processor including a 3D graphics pipeline and an audio digital signal processor. Emboss style effects are created using fully pipelined hardware including two distinct dot-product computation units that perform a scaled model view matrix multiply without requiring the Normal input vector and which also compute dot-products between the Binormal and Tangent vectors and a light direction vector in parallel. The resulting texture coordinate displacements are provided to texture mapping hardware that performs a texture mapping operation providing texture combining in one pass. The disclosed pipelined arrangement efficiently provides interesting embossed style image effects such as raised and lowered patterns on surfaces.
This proves that Nintendo has not only been interested in this technique but is a patent holder. The section entitled 'cross-reference to related applications' references 25 separate provisional patent applications that are thereby incorporated into the patent. Almost all of them date back to 2000. This would suggest that it is an important patent that has kept Nintendo busy but doesn't date back too far to be cutting edge.

Secondly, relating back to making the process of displacement mapping more efficient and less of a strain on the CPU, one way of achieving adaptive tessellation might actually be the last Nintendo patent I talked about in great detail, called Three-dimensional image generating apparatus, storage medium storing a three-dimensional image generating program, and three-dimensional image generating method. A number of readers pointed out that the patent had nothing to do with actually visualising graphics in 3D, but rather with optimizing a 3D world to be viewed on a 2D display. At the time, that patent made little sense to me. But in the context of trying to reduce the computational strain that displacement mapping puts on the CPU, it may make perfect sense.

Lastly, whether the Revolution's graphics chip turns out to be based on the R520 or R530, it will be Radeon technology. And its manufacturer ATI has the following advice for developers on their Designing for Radeon development support page:
Use multi-texturing effects for realistic low polygon primitives. For example, you can use emboss style bump mapping to achieve the illusion of a bumpy surface that would take a lot more polygons to approximate otherwise. Similarly, other intelligent use of texture maps can reduce the polygon count of your mesh designs.​
This may not be unusual, since nVidia will undoubtedly have similar advice on their development support pages, but at least it shows that ATI is also very concerned with this technique. In fact, ATI seems to have supported this technology earlier than nVidia: while the Radeon 9500/9700 was capable of displacement mapping, the GeForce FX was only partly so, and the Radeon 9700 Pro already supported adaptive tessellation. ATI also has an exclusive technology called 'TruForm 2.0', which is a kind of tessellation.
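The emboss-style bump mapping that the quoted patent abstract describes can be sketched very simply. The abstract says two dot-product units compute, in parallel, the Tangent and Binormal vectors dotted with the light direction; those two results become small (du, dv) texture coordinate displacements. The vectors and scale factor below are made-up illustration values, not anything from the patent.

```python
# Emboss-style bump mapping as the patent abstract describes it: dot
# the per-vertex Tangent and Binormal with the light direction, and use
# the two results as (du, dv) texture coordinate displacements. The
# texture is then sampled at both the original and offset coordinates
# and the samples combined to get the raised/lowered look.

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def emboss_offsets(tangent, binormal, light, scale=0.05):
    # the two dot products the patent computes in parallel hardware
    du = dot3(tangent, light) * scale
    dv = dot3(binormal, light) * scale
    return du, dv

tangent  = (1.0, 0.0, 0.0)
binormal = (0.0, 1.0, 0.0)
light    = (0.6, 0.8, 0.0)   # unit light direction in tangent space
du, dv = emboss_offsets(tangent, binormal, light)
print(du, dv)  # roughly 0.03 and 0.04
```

Note how little arithmetic is involved: two dot products and a texture lookup, which is why this fits fully pipelined fixed-function hardware.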

Now, there have been numerous rumours about Nintendo having discovered some kind of secret development technique. This may not be secret per se, but it would make sense if Nintendo had discovered a way of implementing displacement mapping efficiently. They have a patent relating to this technology and they have a strong ally who has some expertise in this field.

It may explain why Nintendo has not yet talked about the graphics chip or shown any real game footage. It would also explain why the basic hardware features that have been suggested seem underpowered at face value, yet Nintendo maintains that their graphics will be on par. This may yet turn out to be the Revolution's last secret.


---

So, aren't those things related to GPGPU?
Remember that displacement mapping is different from other techniques, since it requires an enormous amount of floating-point arithmetic to be achieved in real-time graphics.

A GPU used for GPGPU can do that kind of work between 100 and 250 times faster.

Besides, the technology was there while Hollywood was being developed: the first ATI Radeon video cards that included it were the R520-based X1000 family, launched in the second half of 2005, and Hollywood was neither done at E3 2006 nor finished when NEC announced in June 2006 that they would provide eDRAM for the Wii.

I told you before, the only inconvenience of the technology is that the software for taking advantage of its capabilities was only just starting to appear, and that software was also buggy.

And besides, GPGPU language is different.

Also, I think I found an example of how to perform displacement mapping using GPGPU language:
http://users.design.ucla.edu/~acolubri/processing/glgraphics/home/gpgpu.html
 