Does TruForm have a future?

Poll: Does TruForm have a future?
  • No, it will not appear in any new games (Votes: 0, 0.0%)
  • Total voters: 211
The "No it will not appear in any new games" does seem a little bit out there. I mean really I do think that Microsoft intends on pushing this sort of technology more.

DX9 will have displacement mapping and depth-adaptive tessellation (fancy N-patches, from what I can see), and ATI's hardware supports it. All ATI is doing with TruForm is supporting a feature of DX8; DX9 will have TruForm 2 with full support in the API.

Now, just because NVIDIA does not support this technology YET does not write N-patches off; that's silly. From what I understand, NVIDIA did have support for HOS in the form of RT-patches, but the implementation was so poor that they had to disable the feature in the drivers. NVIDIA will support N-patches in DX9 on the NV30. Surprised? I'm not; it is a slick technology, straight up.

Developers are already creating games with the technology in mind. It is not going to fade away simply because NVIDIA does not support it YET. I suppose that when NVIDIA does start to support it, N-patches will be the next best thing since sliced bread?
 
That is interesting, meaning what? That TruForm can be turned on in any game by the drivers, without the adverse effects that were seen before?

Oh, heavens no. :)

It's been pretty clear that TruForm (1.0/2.0) requires developer support in order to function and that it isn't some "AA/AF" style feature that can be turned on for existing games.

The quote from ATI concerning TruForm 2.0 simply alludes to the possibility that supporting TruForm will be a code change only, or the unleashed-PR-rep theory.. heh.

From what I understand, NVIDIA did have support for HOS in the form of RT-patches, but the implementation was so poor that they had to disable the feature in the drivers. NVIDIA will support N-patches in DX9 on the NV30. Surprised? I'm not; it is a slick technology, straight up.

My understanding on the disabling of HOS for NVIDIA cards was that it concerned software emulation of HOS, which would yield poor benchmarks.

When API detection yields HOS support but the IHV doesn't have hardware support for it, it will be emulated with RT-patches/software. This would yield low performance on NV20/NV25 in games with default support (similar to how an older Radeon runs with TruForm), so it was deemed "best by the powers known as NVIDIA" to just disable it entirely in order to avoid bad press concerning performance.
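
For what it's worth, here's a minimal sketch (assuming D3D8 and an already-created device; the caps bits and render state are standard D3D8, the rest is made up for illustration) of how a game would check the caps and only enable N-patch tessellation when there's real hardware support, so the runtime never falls back to the slow RT-patch/software emulation described above:

#include <d3d8.h>

// Enable TruForm-style N-patch tessellation only if the hardware reports it.
void EnableNPatchesIfSupported(IDirect3DDevice8* device)
{
    D3DCAPS8 caps;
    device->GetDeviceCaps(&caps);

    // 4 segments per edge if the card can do it, otherwise 1 (effectively off),
    // so the runtime never emulates N-patches with RT-patches in software.
    float segments = (caps.DevCaps & D3DDEVCAPS_NPATCHES) ? 4.0f : 1.0f;
    device->SetRenderState(D3DRS_PATCHSEGMENTS, *(DWORD*)&segments);
}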
 
My understanding on the disabling of HOS for NVIDIA cards was that it concerned software emulation of HOS, which would yield poor benchmarks.

I seriously doubt that is the real reason. The emulation could always have been disabled.

Here's what I believe the real reason is: game developers didn't like it. One of the major problems with RT-patches/NV_evaluators was that they weren't trivial to implement in games while also considering hardware that didn't support them. I thought this problem would delay their acceptance until DX8 hardware was the minspec. But now that nVidia has just plain discontinued support, this doesn't appear to be the case.

The way I see it, there were two possibilities as to why RT-patches/NV_evaluators were discontinued:

1. Developers didn't consider them flexible/usable enough. That is, it would have taken just too much work to put them to good use.

2. They just weren't good enough at geometry compression to make any realistic difference in performance.

Regardless of the reasons, here's what I see in the meantime:

First, we will have support for backwards-compatible HOS techniques, such as Truform. Hopefully we'll see some implementations that do a much better job, however. Additionally, for the needs of the future, we really need a robust HOS technique that game developers will actually use, even if it isn't quite so friendly on older hardware.

Secondly, this "supported but not used" technique should come into being. I don't believe that many game developers would bother to write the multiple models required for proper use of HOS, but the support of robust HOS that directly correlates with the HOS used in 3D modelling programs is an absolute must. There really is going to be no other way to fill the geometry pipelines of future video cards.

In sum, an ideal HOS implementation would support:

1. Ease of design. That is, the artists shouldn't have to necessarily learn anything new to use these HOS: they should just use the ones they've been using in design work, only now they'll be exported to the model used in the game, not turned into triangles.

2. Performance. There must be adequate compression of the geometry for any HOS implementation to make sense. That is, a good HOS implementation would be useful for a wide variety of surfaces (i.e. not just pillars), or it just won't be worth the programming to implement them.

The way I see it, a primitive processor is most definitely the way to go, with adequate tools to work with the major 3D modelling programs, of course.
 
Chalnoth said:
First, we will have support for backwards-compatible HOS techniques, such as Truform. Hopefully we'll see some implementations that do a much better job, however.

Could you please explain what was "wrong" with the implementation of Truform? The fact that developers actually had to spend a little time supporting the feature, and couldn't just get it to work by flipping a switch??? I don't like to sling mud, but it seems to me your only problem with Truform is that ATi implemented it first. If Nvidia had implemented it, I can't help but think that you'd be saying it was the greatest thing since sliced bread. And, no, I don't even think Truform is that great, but please provide some explanation as to why you consider it "poor" (in not so many words).

Also, I can't help but mention your reply to the ATi PR statement that you'd "believe it when you see it". That's fair, and in fact the PR statement was likely full of shit. But you've been jumping on people for weeks and calling people biased whenever they question the truth of statements by Nvidia's CEO. I don't mind you questioning ATi's word, but please don't hold a double standard when people are skeptical of some of Nvidia's claims. The door swings both ways.
 
Here's three reasons for you:

1. Just turning it on results in terrible image quality problems in some situations (meaning no tweaking).

2. Adjusting vertex normals to fix the problems can create other ones, although minor (specifically: vertex lighting won't be "correct"). It will also almost certainly require certain vertices to be sent through more than once (you'd often need to set the normals differently on the two different edges for a proper look...otherwise you'd just move the "ballooning" effect around, not solve it), which could definitely reduce performance.

3. Adjusting vertex normals is certainly non-trivial in many situations. How would you like to play around with the vertex normals in a 3000+ poly model? Even if only 10% of those need adjustment, it would still be a hassle. As far as I've seen, all that ATI does is offer a plugin for previewing N-patches, not one for auto-adjustments or the like.

What I see as ideal is an HOS implementation that requires little to no work on the part of the artist, and little work for the programmer. There really isn't any need for that kind of hand-tweaking.

I've seen some subdivision surface demos that actually looked pretty good for pretty much any surface. It's certainly possible to do it...but Truform doesn't. Speaking of which, where are the demos/screenshots of Truform 2.0?

Regardless, subdivision surfaces are only a temporary measure. Optimally, pure HOS will eventually come into use.
 
Also, I can't help but mention your reply to the ATi PR statement that you'd "believe it when you see it". That's fair, and in fact the PR statement was likely full of shit.

And WRT the other point he mentioned, I don't think ATI ever said that they would enable it globally, and I certainly don't remember it on their site. There were interviews with ATI people discussing this as a possibility, but they wouldn't commit to it because of the issues with incorrectly resolved normals, which cause odd effects.
 
DaveBaumann said:
And WRT the other point he mentioned, I don't think ATI ever said that they would enable it globally, and I certainly don't remember it on their site. There were interviews with ATI people discussing this as a possibility, but they wouldn't commit to it because of the issues with incorrectly resolved normals, which cause odd effects.

I'd love to see what "forced Truform" would do to menu-type screens that use polys for display :)
 
RT-patches (NV_evaluators) and N-patches (TruForm) are two different stories.
N-patches operate at the triangle level, while RT-patches operate at the "patch" level (a grid of control points/vertices). Now take a cube, for example: it has 8 vertices, 6 faces and 6 face normals. Normals have to be included in vertices in the D3D or OpenGL world, so games will usually just average all the normals of the faces that share a vertex. This will make your cube look like an air-bag if you enable TruForm. To solve the problem you have to build the cube with 24 vertices (multiple vertices will have the same position but different normals, and this increases T&L time).
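
(A minimal data-layout sketch of that cube example, in plain C++ with hypothetical struct names and no API calls:)

struct Vertex { float px, py, pz;    // position
                float nx, ny, nz; }; // per-vertex normal

// 8 shared vertices: each corner normal is the average of the three face
// normals meeting there, e.g. (1,1,1)/sqrt(3) at the (+x,+y,+z) corner.
// N-patches bend every triangle toward these averaged normals, which is
// exactly the "air-bag" inflation described above.
Vertex sharedCube[8];

// 24 vertices (4 per face): positions are duplicated, but each copy keeps its
// own flat face normal, so the faces stay planar under TruForm. The price is
// three times the vertex data and T&L work for the same cube.
Vertex perFaceCube[24];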
RT-patches don't have this problem, but they need models stored as sets of control points, and that is totally incompatible with hardware that does not support RT-patches.
Displacement mapping works much better and is much easier. It just tessellates geometry based on a texture you provide. If you look at the models in Doom 3 you'll see that they are created with very high triangle counts. However, these high-res models are downscaled and a normal map is made from them. A tool like that could just as easily output a displacement map instead of a normal map.
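(A rough sketch of that last idea; all the helper names here are hypothetical, not from any real tool. The same bake step that produces a normal map from the high-poly model could store a scalar displacement instead:)

struct Vec3 { float x, y, z; };

// Assumed helpers provided by the baking tool:
Vec3  LowPolySurfacePoint(int u, int v);              // low-poly position for this texel
Vec3  LowPolySurfaceNormal(int u, int v);             // interpolated low-poly normal
float DistanceToHighPolyAlong(Vec3 origin, Vec3 dir); // ray-cast against the high-poly mesh

void BakeDisplacementMap(float* displacementMap, int width, int height)
{
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u)
        {
            Vec3 p = LowPolySurfacePoint(u, v);
            Vec3 n = LowPolySurfaceNormal(u, v);
            // A normal-map bake would store the hit point's normal here;
            // a displacement-map bake just stores how far along the normal
            // the high-poly surface lies.
            displacementMap[v * width + u] = DistanceToHighPolyAlong(p, n);
        }
}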

The problem with RT-patches on the GeForce 3 & 4 was not actually caused by NVIDIA. It's the Direct3D runtime that causes problems here, because if a card does not support N-patches the D3D runtime will emulate them with RT-patches. This means that all these patches will be created when DrawPrimitive is called and then never used again (well, they will be used again next frame, but there is no way the driver could cache this). OpenGL support for RT-patches is still here (for now) and it works OK (there were some samples that demonstrated this just recently).
That "primitive processor" is the way this will go. As far as I have talked with NVidia it will still operate on triangles, but will have information about neighbors, will be allowed to create/discard vertices and triangles,...
 
Chalnoth-
I seriously doubt that is the real reason.

You "seriously doubt" anything that contains any factual basis.. but seem to only applaud and support baseless, fictional-at-best, theorizations. :)


http://www.theinquirer.net/?article=1792

"GEFORCE 3 and its derivatives included a nice feature called HOS (Higher Order Surface) implemented by a polynomial surfaces technique, and a few days back we asked Nvidia if this HOS had gone to the knacker's yard. It has been disabled, Nvidia confirmed."


http://www.digit-life.com/articles/gf4/index1.html

"The drivers of the NV20 and NV25 do not support hardware tessellation of smooth surfaces (HOS based on RT-Patches). When a card doesn't support N-Patches on a hardware level the API tries to emulate them using RT-Patches. It makes operation of N-Patches very slow. NVIDIA thus had to disable the RT-Patches so that games supporting N-Patches won't be too slow."


There were also some posts on the DX boards where an NVIDIA rep confirmed the performance/disable reasoning, but I don't want to bother spending more time looking for the URL, as no matter if there are 20 URLs, three NVIDIA reps declaring this, and Jehovah himself stating something as fact, you will still "seriously doubt" it if it has any negative connotation towards your favorite IHV.

Cheers,
-Shark
 
Chalnoth said:
Here's three reasons for you:

1. Just turning it on results in terrible image quality problems in some situations (meaning no tweaking).

Oh ... I guess we should all complain that dot3 bumpmapping doesn't work in our old games either. We can't replace the modulation operation in current games with a dot3 operation and get per-pixel lighting, thus the dot3 implementation is faulty/sux. :rolleyes:
The fact is that Truform does an incredible job in comparison to other new technologies in terms of how easily support can be added while keeping backwards compatibility. Plus, many models will actually just work, no tweaking needed.
 
Sharkfood said:
There were also some posts on the DX boards where an NVIDIA rep confirmed the performance/disable reasoning, but I don't want to bother spending more time looking for the URL, as no matter if there are 20 URLs, three NVIDIA reps declaring this, and Jehovah himself stating something as fact, you will still "seriously doubt" it if it has any negative connotation towards your favorite IHV.

Cheers,
-Shark

That fails to explain why the NV_evaluators extension has been discontinued for OpenGL (it is no longer supported by the 40.41 drivers).
 
That fails to explain why the NV_evaluators extension has been discontinued for OpenGL

Don't see how it doesn't... but then again, most folks don't understand how you continually fail to see the obvious.

Why would a company retain/support extensions for a disabled feature? I'd think this would be pretty self-explanatory.
 
Sharkfood said:
Don't see how it doesn't... but then again, most folks don't understand how you continually fail to see the obvious.

Why would a company retain/support extensions for a disabled feature? I'd think this would be pretty self-explanatory.

Um, the NV_evaluators extension wasn't disabled until the 40.41 drivers. Direct3D isn't everything.
 
up said:
If you look at these Doom 3 screenshots, one could cry for Truform (or something better 8) )
http://www.shacknews.com/screens.x/doom3/DOOM+3/1/073002/shot0233.jpg

Any form of HOS will break self-shadowing in DOOM3.

Regardless, it should have been obvious for a long time that due to the shadows, the triangle counts in DOOM3 were going to be much lower than those in other games released at a similar time.

Please see the game in motion before you think the lack of high-poly models is a problem. The low-poly models are a tradeoff that has allowed the use of much more sophisticated effects (i.e. bump mapping, shadows).
 
It sounds to me like both N-patches and RT-patches have problems... N-patches just seem like kind of an easy "hack" implementation. I know, probably a lot of thought really went into it, but it's just trying to increase model quality on cards that support it while still not breaking older cards. RT-patches not being backwards compatible pretty much means they need broad support from all card manufacturers. Now with even NVIDIA dropping support, it doesn't look like they're going to go anywhere. So obviously we'll need some other HOS implementation.

Or maybe not...won't developers just resort to the "brute force" approach by using much higher poly models? Is HOS even necessary considering the speed at which graphics cards are advancing?
 
HOS are most certainly necessary, precisely because of the speed at which graphics cards are advancing.

That is, the #1 benefit of HOS is geometry compression. HOS enable more triangles to fit in less space in memory, as well as use less memory bandwidth (and, particularly, less AGP bandwidth).
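
A back-of-the-envelope illustration of that compression argument (the mesh size, vertex format and tessellation level below are made-up assumptions, not measurements):

#include <cstdio>

int main()
{
    const int bytesPerVertex = 24;   // float3 position + float3 normal
    const int baseVertices   = 2000; // coarse control mesh sent over AGP
    const int segments       = 4;    // N-patch tessellation level

    // Each triangle expands to roughly segments^2 triangles, so the vertex
    // count grows by about the same factor.
    const int expandedVertices = baseVertices * segments * segments;

    printf("coarse mesh sent to the card: %d KB\n", baseVertices     * bytesPerVertex / 1024);
    printf("equivalent tessellated mesh:  %d KB\n", expandedVertices * bytesPerVertex / 1024);
    // The card generates the extra ~16x geometry itself, so only the small
    // coarse mesh has to sit in memory and cross the AGP bus each frame.
    return 0;
}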

I really believe that triangle rates are going to continue to outstrip pixel fillrates, as they have in the past (e.g. we've gone from 4 pixels:1 triangle in the original GeForce to 8 pixels and 4 triangles in the Radeon 9700, meaning triangle rate has grown twice as quickly as fillrate, something I fully expect to continue).

A secondary benefit of HOS is the ability to make full use of the most powerful vertex processors. With enough usage of HOS, you could very easily scale geometry across a gigantic range of different vertex processing capabilities. Of course, for this to look good, the HOS would need to be applied to nearly every surface, requiring a paradigm shift in how artists design models (this needn't necessarily be more challenging... just different...).
 
Sharkfood said:
That fails to explain why the NV_evaluators extension has been discontinued for OpenGL

Don't see how it doesn't... but then again, most folks don't understand how you continually fail to see the obvious.

Why would a company retain/support extensions for a disabled feature? I'd think this would be pretty self-explanatory.

In this case Chalnoth is actually right: it does not explain why they dropped OpenGL support. Lack of developer interest is much more likely. Even though NV_evaluators were talked about a lot for a while on opengl.org, it seems they were soon forgotten too. They never came close to being supported by any game; only a handful of demo coders used them.
Still, I can see them being of good use in professional applications, but I guess the interest has been cold there too if nVidia decides to drop them.
 