Wii U hardware discussion and investigation *rename

Status
Not open for further replies.
I would rather ask the question: why wouldn't the Wii U have a DirectX 11-capable graphics card? But as others have said, it's relatively meaningless in the console world, where various features exposed through DirectX 11 had been done long before DirectX 11 came out, by virtue of being able to freely mix and match CPU/GPU resources and a far more open memory model.
 
The places where AMD's DX11-class GPUs do have a large edge over DX10.1 are GPGPU and tessellation. With the former, I remember reading that a large reason for the gap is memory, which could easily be fixed, since the Wii U more than likely uses a completely different memory structure. I really don't see a reason to use R700 in the final box unless it was highly customized, and then who knows what it would look like, but likely not a whole lot like the R700 as we know it today.
 
The problem with tessellation is that you need to tessellate your base model more to be able to tessellate it further :(
(It's not saving memory like some of us would have liked, modeling in High Order Surfaces is also quite inconvenient for artists :( )

I think those Wii U specs are very close to the truth. As for memory, it's 1.5 GiB minimum; maybe Nintendo went up to 2 GiB since the early devkits.
 
Well, I'm not completely sure the Wii U is using "Diffuse DoF" effects, but that is what it looks like. I could certainly be wrong about that, and hopefully someone on here knows for certain, because that should tell us whether the console does in fact have "DX11" capabilities, as developers have said.

Don't DX9-capable cards use something called "Poisson disk blur"? Both are talked about inside the PDF. I'm not coming from a large background in graphics programming, so I could be very wrong about all of this, but from what I've seen and read, the Wii U's DoF looks much more advanced than what we have seen on DX9 cards, and DX10.1 cards used the same effect.
The DOF method in the quoted article is actually also a post-process effect, just like all other DOF effects used in real-time games. It's not efficient for an immediate-mode rasterizer to produce DOF during rendering (it would require a huge number of geometry passes).

There are many games with (varying quality) DOF effects on current generation consoles. I remember coding my first post-process DOF effect on DX8 hardware. These new compute-shader-based DOF effects are of course better looking and more efficient than older methods. However, there's nothing radically new in the new effects. You could implement a (DX9) pixel shader version of these effects, and it would look (almost) identical, but of course perform somewhat worse (a prefix sum requires log n passes). High-quality DOF in a console game would not in any way prove it has compute-shader-capable (DX11) hardware in it.
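
For anyone wondering about the "log n passes" remark: GPU prefix sums (used, for example, to build summed-area tables for blur-based DOF) are typically done Hillis-Steele style, doubling the stride every pass. A minimal CPU sketch, purely illustrative and not from any actual engine:

```python
def inclusive_scan(values):
    """Hillis-Steele inclusive prefix sum, mirroring the GPU version:
    each pass adds in the element 'offset' slots to the left, with the
    offset doubling every pass, so n elements need ceil(log2(n)) passes
    instead of n sequential additions."""
    data = list(values)
    passes = 0
    offset = 1
    while offset < len(data):
        # One simulated GPU dispatch: every element updates "in parallel".
        data = [v + (data[i - offset] if i >= offset else 0)
                for i, v in enumerate(data)]
        offset *= 2
        passes += 1
    return data, passes
```

For a 2048-element row that is 11 passes rather than 2048 sequential additions, which is why the multi-pass pixel shader version works but performs somewhat worse than a single compute dispatch.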

There are actually many other graphics processing tasks that benefit even more from compute shaders than DOF does. Many dynamic lighting algorithms (with huge numbers of local light sources) and many global illumination algorithms (for example, light propagation volumes) benefit nicely from compute shaders. But this of course depends on the algorithm. Some algorithms gain more than others.
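
To make the "huge numbers of local light sources" point concrete, here is a minimal CPU sketch of the tile-based light binning that compute shaders make cheap (all names and numbers are illustrative assumptions, not any shipping renderer):

```python
def bin_lights_to_tiles(lights, screen_w, screen_h, tile=16):
    """Assign each light (x, y, radius in pixels) to the screen tiles its
    bounding box overlaps. A compute shader builds these per-tile lists in
    parallel; shading then reads only the short list for its own tile
    instead of iterating over every light in the scene per pixel."""
    tiles_x = (screen_w + tile - 1) // tile
    tiles_y = (screen_h + tile - 1) // tile
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for idx, (x, y, r) in enumerate(lights):
        # Conservative screen-space bounding box of the light, in tiles.
        x0 = max(0, int(x - r) // tile)
        x1 = min(tiles_x - 1, int(x + r) // tile)
        y0 = max(0, int(y - r) // tile)
        y1 = min(tiles_y - 1, int(y + r) // tile)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(idx)
    return bins
```

The win is that per-pixel work drops from "all lights" to "lights touching this tile", which is what makes hundreds of local lights practical.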
Don't DX9-capable cards use something called "Poisson disk blur"?
No DirectX version has any built-in support for DOF. You have to program your own algorithm for it. There are dozens of different algorithms that developers use, each with different trade-offs. We do not use Poisson disk blurring, but some developers do.
Wii U's DoF looks much more advanced than we have seen in DX9 cards, and DX10.1 cards used the same effect.
I doubt two Wii U games from different developers use the same DOF algorithm unless they are using the same middleware. There might be similarities between the algorithms that different developers choose, but implying that all Wii U games use the same DOF algorithm is likely not true. Some games might have extra GPU power to spend on high-quality DOF, while others might prefer to use those cycles for other purposes (and settle for a simpler DOF algorithm).
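
For reference, a gather-style Poisson-disk DOF blur like the one mentioned above can be sketched as follows (the tap pattern and image format are simplified assumptions, not any shipping implementation):

```python
# A small precomputed Poisson-disk-style pattern (offsets within the unit
# disk); real implementations use 8-64 carefully distributed taps.
TAPS = [(0.0, 0.0), (0.7, 0.2), (-0.5, 0.6), (0.3, -0.7),
        (-0.6, -0.4), (0.1, 0.8), (-0.8, 0.1), (0.5, 0.5)]

def dof_blur_pixel(image, x, y, coc_radius):
    """Gather-style DOF for one pixel: average a fixed disk of taps scaled
    by the circle-of-confusion radius. 'image' is a 2D list of grayscale
    floats; taps falling outside the image are clamped to the edge."""
    h, w = len(image), len(image[0])
    total = 0.0
    for dx, dy in TAPS:
        sx = min(w - 1, max(0, int(round(x + dx * coc_radius))))
        sy = min(h - 1, max(0, int(round(y + dy * coc_radius))))
        total += image[sy][sx]
    return total / len(TAPS)
```

A per-pixel CoC radius (derived from depth) drives the blur, so in-focus pixels (radius near zero) sample themselves and stay sharp. This runs fine as a DX9-era pixel shader, which is the point: seeing it in a game proves nothing about DX11 hardware.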
The problem with tessellation is that you need to tessellate your base model more to be able to tessellate it further :(
(It's not saving memory like some of us would have liked, modeling in High Order Surfaces is also quite inconvenient for artists :( )
Agreed on both points :(
 
Hey, thanks for the reply. I was thinking the different DX APIs might have used specific functions for DoF. It's really hard for me to tell DX11 apart from DX10.1, especially from the end results. I'm glad to know that DoF is a dead end when it comes to investigating DX11 capabilities for the Wii U. Even if I didn't get the results I wanted, at least I can stop reading about it and looking for a hidden silver bullet that could prove it one way or the other.

I'm also worried about tessellation, but given what some devs were able to do on the 360 with its limited resources, and with some of those resources being three times bigger on the Wii U, I'm going to assume moderate tessellation will be possible in the majority of games, especially as more and more performance is drawn out of the box.
 
The problem with tessellation is that you need to tessellate your base model more to be able to tessellate it further :(
(It's not saving memory like some of us would have liked, modeling in High Order Surfaces is also quite inconvenient for artists :( )
Can you explain what you mean? Are you just saying it's a negative that tessellation isn't infinite? And why don't you think it saves memory?
 
Can you explain what you mean? Are you just saying it's a negative that tessellation isn't infinite? And why don't you think it saves memory?

I mean that for a model to tessellate well, it needs to be rather well tessellated to begin with, which means it takes more memory than a "standard" model. Also, you don't tessellate just for smoothness, so you usually rely on a displacement map, which takes space.
Ideally, tessellation would have helped compress meshes by storing only high-order surfaces (NURBS...), but artists can't really model anything complex with those.
(There are continuity problems, it's far less flexible than triangles...)

So tessellation adds details, but is not exactly the big step forward we(?) anticipated.
(You also have to make sure you don't generate triangles that are too small, since hardware still uses pixel quads, and you have to cull displaced triangles that end up facing backward...)

I'm almost tempted to say it's like SSAO: it looks good, but it's not really worth it. (And SSAO always breaks one way or another.)
 
Is there any progress regarding silhouette mapping? nAo linked to a paper on that some years ago and it looked promising.
 
Can you explain what you mean? Are you just saying it's a negative that tessellation isn't infinite? And why don't you think it saves memory?
Tessellation doesn't help much if you have objects with lots of high-frequency (sharp) details or discontinuities. You have to include all these sharp details and discontinuities in the pre-tessellated mesh (bloating its size considerably), or there will be lots of visible surface crawling (continuous tessellation moves vertices along surfaces). Pixel shader techniques (parallax occlusion mapping, quadtree displacement mapping, etc.) can be used on surfaces with high-frequency details (such as cobblestone roads and shipping container sides) without any crawling issues. This is of course true for vertex-based tessellation/displacement techniques as well, as long as triangles are always tessellated down to one-pixel size. But pixel-sized triangles are not efficient to render with current quad-based rasterizers, and current hardware tessellation doesn't support that fine-grained a tessellation (64 splits is the maximum).

A mesh that is going to be used with pixel-shader-based displacement techniques tends to require far fewer extra polygons than a mesh that's going to be used with vertex-based tessellation/displacement. Of course, if we compare the tessellated mesh to a high-polygon (baked) mesh, tessellation does save memory (but the exact amount highly depends on object shape and surface).
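
Some back-of-the-envelope arithmetic behind the trade-off described above (all sizes, triangle counts, and the vertex-sharing ratio are illustrative assumptions, not figures for any real game):

```python
def tessellated_triangles(base_triangles, factor):
    """Uniform tessellation at integer factor f splits each triangle into
    roughly f*f sub-triangles; DX11 hardware caps the factor at 64."""
    f = min(factor, 64)
    return base_triangles * f * f

def memory_bytes(triangles, bytes_per_vertex=32,
                 displacement_texels=0, bytes_per_texel=1):
    """Very rough memory estimate: assumes a well-indexed mesh shares
    vertices, giving about 0.5 unique vertices per triangle, plus an
    optional 8-bit displacement map."""
    vertices = triangles // 2
    return vertices * bytes_per_vertex + displacement_texels * bytes_per_texel

# 10k-triangle base mesh + 1024x1024 8-bit displacement map ...
with_tess = memory_bytes(10_000, displacement_texels=1024 * 1024)
# ... versus the equivalent detail baked out at tessellation factor 8.
baked = memory_bytes(tessellated_triangles(10_000, 8))
```

Under these assumptions the base-plus-map version comes out several times smaller than the baked high-poly mesh, which is the saving tessellation does deliver; the catch from the posts above is that the 10k "base" mesh is already fatter than a mesh authored without tessellation in mind.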
 
DoF is not a DX11 feature... it can be done on just about any hardware. How do we know it's free just because a couple of games are using it? On GAF, posters were saying that doesn't mean anything, yet you post it again...

Do you have anything to back up what you say? It seems to be a copy-paste from GAF. Why would that GPU design be cheaper when it's the latest design? I got my info from everything we know: early specs and the leaked SDK. This DX11, or latest-and-greatest AMD GPU, is like the new "gpgpu"; too funny.

Funny you keep saying things about the X360 when no one else in this thread has said anything. Idea man on GAF called it an Xbox 360 plus right before E3. Looks like he was right on the money. Idea man has been right on every leak on GAF, btw...

I said "unlikely", not "impossible", about that GPU in my last post. We just don't have one piece of proof that they ever moved on from the R700. The thing that always stuck out to me was that the R800 was out when they started working with the R700. If they wanted a newer feature set, why start at the R700 when the R800 was already out and supported all of it?...

Please point out to me the r700 card that has "Eyefinity" - I've looked and can't seem to find one.
 
Indeed. I don't understand why everybody keeps talking about the R700 when one of the first rumors we got was that Japanese website where it was specifically stated that there was Eyefinity support.

I'm sure AMD could add Eyefinity to the R700, but I really don't see the point in that.
 
Indeed. I don't understand why everybody keeps talking about the R700 when one of the first rumors we got was that Japanese website where it was specifically stated that there was Eyefinity support.

I'm sure AMD could add Eyefinity to the R700, but I really don't see the point in that.

The leaked spec sheet (and my own personal third-hand knowledge) both point to the R700 as the base GPU. The features listed (texture size limits, etc.) are near-identical. There were also many rumours that the first kits used R700s in some fashion, but as ERP has mentioned and Microsoft has demonstrated, kits aren't usually an indication of final hardware.

With that said, I have yet to see an r700 with Eyefinity. And if ATI have added Eyefinity support to an r700 then they can add whatever.
 
Indeed. I don't understand why everybody keeps talking about the R700 when one of the first rumors we got was that Japanese website where it was specifically stated that there was Eyefinity support.

I'm sure AMD could add Eyefinity to the R700, but I really don't see the point in that.

This obviously begs the question why you think rumors from "that Japanese website" are somehow the gold standard versus other rumors.
 
This obviously begs the question why you think rumors from "that Japanese website" are somehow the gold standard versus other rumors.

Probably the same way anonymous dev quotes saying the Wii U is weak seem to be the gold standard versus actual developers saying positive things about it.
 
Didn't they assume Eyefinity because of the two screens?

Yes, my 6800 supported dual screens back in 2005....

In this press release, AMD mentions that the custom Radeon HD GPU made for the Wii U has "...high-definition graphics support; rich multimedia acceleration and playback; and multiple display support."
"Multiple display support" is the direct quote from AMD.
 
Probably the same way anonymous dev quotes saying the Wii U is weak seem to be the gold standard versus actual developers saying positive things about it.

You mean like the actual Tekken developer speaking out negatively about the Wii U's CPU?

I mean yes... it's quite nebulous at the moment, on all fronts, but if Nintendo really were "sure" of their hardware, they would've talked about it or put out a press release... but other than "IBM" and "AMD", there's not much known, which, again, I find quite suspicious.
 
Probably the same way anonymous dev quotes saying the Wii U is weak seem to be the gold standard versus actual developers saying positive things about it.

I'm not trying to talk about "power", but about why you think it's not an R700. Which could be quite powerful (such as a 4850), if that's how you wish to go.

Hmm, so since it must support two displays, the thinking is that it might be an Eyefinity GPU. Interesting; I can't comment on that due to lack of knowledge (whether an R700 could easily support two displays, or easily be made to).
 
I'm not trying to talk about "power", but about why you think it's not an R700. Which could be quite powerful (such as a 4850), if that's how you wish to go.

There's no way it'll be that powerful. That's not Nintendo's modus operandi. It'll probably be similar to a low-end, low-heat graphics card like you'd find in a laptop.
 
You mean like the actual Tekken developer speaking out negatively about the Wii U's CPU?

I mean yes... it's quite nebulous at the moment, on all fronts, but if Nintendo really were "sure" of their hardware, they would've talked about it or put out a press release... but other than "IBM" and "AMD", there's not much known, which, again, I find quite suspicious.

They did talk a little about it last week...


Staying with graphics but going back to the idea of getting third parties involved, have you approached Epic with the specs of the Wii U to try to make sure that third-parties using Unreal Engine 4 can easily port their games to Wii U?

I think that the Wii U will be powerful enough to run very high spec games but the architecture is obviously different than other consoles so there is a need to do some tuning if you really want to max out the performance.

We’re not going to deliver a system that has so much horsepower that no matter what you put on there it will run beautifully, and also, because we’re selling the system with the GamePad – which adds extra cost to the package – we don’t want to inflate the cost of each unit by putting in excessive CPU power.
Now, what do you consider to be "excessive CPU power"?

http://www.independent.co.uk/life-s...-now-on-to-compete-over-graphics-7936301.html
 