A break from the R9700 NV30 quarrel

Seeing that the specs for the two leading cards with DX9 support have been announced (for the NV30 more or less), I was wondering how the four vertex shading engines of the Parhelia compare to those of the above-mentioned cards. Strictly in terms of flexibility: can they loop? What is the maximum possible vertex program length? :-?
 
Just for the sake of argument: those parts are still at least 3 weeks in the future! :LOL: Seriously, I'm curious as to what Matrox has done RIGHT on the Parhelia. A good vertex shading engine could be that kind of thing :)
 
What did Matrox do right? Very little. Though the part will probably prove to have the best 2D display and the only one in the near future to support triple-head, it doesn't have much else going for it.

The quad vertex-shaders really don't get used in most situations, and most of the memory bandwidth is wasted.

Additionally, the FAA algorithm, while an excellent idea, does not properly detect all edges last I saw.

If they can fix FAA, that will be the one shining light for Matrox, in terms of 3D technology.
 
Whether it can detect all edges or not is pretty academic. This is the first card on the market that effectively removes edge aliasing.

Running it at 1024 * 768 with FAA should be similar to running a GF at 2048 * 1536 with 4x MSAA, except with better image quality and a higher monitor refresh rate. Switch on AF and, well, you get the idea.
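The sample-budget arithmetic behind that comparison can be sketched roughly as follows. This is a back-of-the-envelope model, assuming 16x FAA stores 16 coverage samples per edge pixel (the resolutions are from the post above; everything else is illustration):

```python
# Rough sample-count comparison: 16x FAA at 1024x768 vs 4x MSAA at
# 2048x1536. Not Matrox's published numbers, just counting samples.

base = (1024, 768)
hi = (2048, 1536)

# Each base-resolution pixel maps to (2048/1024) * (1536/768) = 4
# hi-res pixels; with 4 MSAA samples each, that is 16 samples total.
pixels_per_base_pixel = (hi[0] // base[0]) * (hi[1] // base[1])
samples_per_base_pixel = pixels_per_base_pixel * 4

print(samples_per_base_pixel)  # 16, matching FAA's 16 edge samples
```

The difference, of course, is that FAA only spends those samples on detected edges, while the 4x MSAA case pays for them across the whole frame.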
 
I don't think it's only an academic question; take a look at screenshots and you will see many cases of non-FAA'd edges. Typically, aliasing gets worse in motion, so those edges can stand out more negatively than overall worse AA on the rest of the scene, depending on the case. Don't get me wrong, 16x FAA truly looks stunning when it works, but it appears to work a bit less often than I had hoped for right now. Like I said a few weeks ago, it's hard to judge the issue without first-hand experience with the Parhelia, so I won't make a judgement on just how good or bad the issues with FAA really are...
 
Jerry Cornelius said:
Whether it can detect all edges or not is pretty academic. This is the first card on the market that effectively removes edge aliasing.

I don't think it's academic at all. While it does do excellent AA on the edges it does detect, it does absolutely none on the edges it fails to detect.

I'm of the mind that the reason to continue improving things like FSAA, anisotropic filtering, and other rendering accuracy-type features is not to increase the image quality in most situations, but to decrease the number of circumstances where you notice significant problems.

For example, with FAA enabled in a game where not all edges are properly detected, most edges will look great, but then one edge will show up in a scene looking terrible...especially in contrast to the properly-AA'd edges.

So, I'd much rather have an AA technique that makes all edges look a bit worse than one that leaves a few edges looking really bad.

Of course, it is conceivable that Matrox will fully solve this problem, but I somehow doubt it.
 
Actually, I don't think it's a problem with Matrox detecting edges. I'm still working on figuring out the entire algorithm because they won't tell much about it, but from what I can tell so far, there seems to be a temporary cache on the chip holding fragment information, and it is overflowing. When the buffer overflows it drops pixels that are detected along edges, leaving those edges non-anti-aliased. Once I get the entire algorithm figured out I plan on doing an article on this implementation and others (including one you guys haven't heard of yet) that are all based on fragment data.

Note: in speaking of this, I'm not talking about surfaces that are alpha-blended or drawn against something using alpha.
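To illustrate the suspected failure mode, here's a toy sketch: a fixed-size fragment buffer that silently drops edge fragments once it fills up. The buffer size, function name, and behaviour are pure guesses for illustration; nobody outside Matrox knows the real algorithm.

```python
# Toy model of the suspected FAA failure mode: a fixed-size on-chip
# fragment cache that overflows and leaves the excess edges aliased.

FRAGMENT_BUFFER_SIZE = 4  # hypothetical capacity, in edge fragments

def antialias_edges(edge_fragments):
    """Return (antialiased, dropped) for a list of edge fragment ids."""
    kept = edge_fragments[:FRAGMENT_BUFFER_SIZE]     # fits in the cache
    dropped = edge_fragments[FRAGMENT_BUFFER_SIZE:]  # overflow: no AA
    return kept, dropped

aa, aliased = antialias_edges(["e1", "e2", "e3", "e4", "e5", "e6"])
print(aa)       # ['e1', 'e2', 'e3', 'e4'] -> smoothed
print(aliased)  # ['e5', 'e6'] -> rendered without AA
```

If something like this is what's happening, it would explain why busy scenes show more untouched edges than simple ones.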
 
Yes, that is a very good possibility, but I'm pretty sure I remember seeing edges that weren't properly-AA'd even when there were very few edges on-screen.

Also, for at least some of these edges it was very easy to see why they might not be detected. It's hard to explain right now, though, I'm tired :p
 
Guys, you are dancing around the main question. About FAA, I think it is a step in the right direction. There is a chance the edge detection algorithm will get better with drivers, but that's just wishful thinking for now. I am not interested in the performance of the vertex shaders; I simply want to know how they stack up against the competition in terms of flexibility, and whether they can do about the same things as their counterparts in ATI's and nVidia's next-gen cards. :cry:
 
Though the part will probably prove to have the best 2D display and the only one in the near future to support triple-head, it doesn't have much else going for it.

Does anyone know if it supports HeadCasting(tm)? That would at least make it three things, for certain, going for it. :)
 
The one thing I like about FAA, and I'm not sure whether it's true for RGMS, is the lack of text/HUD blurring. In all SS forms of AA I've seen (although it varies by title), HUD/text box/console text blurring can make a particular level of AA unusable IMO, even though performance is adequate.
 
Testiculus Giganticus, the Parhelia's vertex shaders are technically quite well featured! They fully conform to DX9's minimum standard for VS 2.0. Whether they might actually exceed this spec in some cases I can't say; it's been a while since I read the whitepapers. Unfortunately, the pixel shaders are only on the same level as nVidia's NV25 series, offering PS 1.3 compliance. Since that level seems to be more or less the standard for most DX8 developers, it should not be much of a problem for the next generation of games, though.

The ironic part appears to be that, judging from some theoretical benchmarks and tests conducted, the advanced programmability of the vertex shaders (compared to other DX8-generation boards) doesn't mean a whole lot. They appear to have a significantly lower triangle throughput than an equally low-clocked GF4 Ti, even though the Parhelia has 4 VS units and the GF4 only 2. This goes to show that the number of VS or PS units on a chip has little to do with performance; the design of the units is vastly more important than their pure number. Maybe that "problem" is due to not fully functional drivers, but it also might be that their vertex shaders are simply slower by design than the competitors' solutions (Matrox does have far more limited manpower, after all), while probably still being sufficiently powerful for pretty much all upcoming games...
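As a back-of-the-envelope illustration of why unit count alone says little: peak vertex throughput is roughly units x vertices-per-clock-per-unit x clock, so per-unit efficiency can easily dominate. The per-unit rates below are made-up values for illustration, not measured Parhelia or GF4 numbers.

```python
# Peak vertex throughput depends on per-unit efficiency and clock,
# not just the number of units. All rates below are hypothetical.

def peak_verts_per_sec(units, verts_per_clock_per_unit, clock_mhz):
    return units * verts_per_clock_per_unit * clock_mhz * 1_000_000

# 4 wide-but-slow units vs 2 narrow-but-efficient ones, same clock:
slow_quad = peak_verts_per_sec(4, 0.05, 220)  # hypothetical rates
fast_pair = peak_verts_per_sec(2, 0.15, 220)

print(slow_quad < fast_pair)  # True: fewer units can still win
```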

About FAA, an article about the probable algorithms used by Matrox would be very cool, and I agree that FAA is a very interesting and forward-looking technology that shouldn't be ignored due to some early problems. Yet, while nothing has been proven beyond doubt, a number of tests indicate that one reproducible flaw of FAA concerns edges created by intersecting polygons. I'm not sure if that is fixable through a driver update; it may well be by design that FAA only anti-aliases true polygon edges. We simply don't know yet. Will be interesting to find out... :)
 
Randell said:
The one thing I like about FAA, and I'm not sure whether it's true for RGMS, is the lack of text/HUD blurring. In all SS forms of AA I've seen (although it varies by title), HUD/text box/console text blurring can make a particular level of AA unusable IMO, even though performance is adequate.

Any MSAA algorithm that doesn't use a blur filter (i.e. NOT nVidia's Quincunx or 4x9 mode) shouldn't affect text at all.
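A 1-D toy example of why a blur-filter resolve softens 2D text while a plain box resolve doesn't. The 0.5/0.25 tap weights merely approximate a Quincunx-style neighbour-sharing filter and are assumptions, not nVidia's exact kernel:

```python
# Plain MSAA resolve vs a blur-style resolve on a row of text pixels.

text_row = [0, 0, 255, 255, 0, 0]  # a sharp glyph edge, one value/pixel

# Plain MSAA: interior pixels carry identical samples, so the resolve
# (an average of equal values) returns each pixel unchanged.
msaa_resolved = [sum([p] * 4) // 4 for p in text_row]

# Blur-style resolve: each output pixel mixes in its neighbours,
# smearing the glyph edge across pixels.
def blur(row, i):
    left = row[max(i - 1, 0)]
    right = row[min(i + 1, len(row) - 1)]
    return int(0.5 * row[i] + 0.25 * left + 0.25 * right)

blurred = [blur(text_row, i) for i in range(len(text_row))]

print(msaa_resolved)  # [0, 0, 255, 255, 0, 0] -> text stays sharp
print(blurred)        # [0, 63, 191, 191, 63, 0] -> edge smeared
```

Since 2D text contains no geometry edges, a pure multisample resolve never touches it; only a filter that reaches across pixel centres can blur it.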

They appear to have a significantly lower triangle throughput than an equally low-clocked GF4 Ti, even though the Parhelia has 4 VS units and the GF4 only 2, which goes to show that the number of VS or PS units on a chip has little to do with performance; the design of the units is vastly more important than their pure number.

It is possible that it has something to do with the memory architecture, as four vertex shaders might need to each access different data. Of course, the shark demo may seem to contradict that...
 
Chalnoth said:
Any MSAA algorithm that doesn't use a blur filter (i.e. NOT nVidia's Quincunx or 4x9 mode) shouldn't affect text at all

Ok then, so in theory and in practice SS is an overall superior form of AA to MS except for performance. That one extra area of 'practice' over theory tilts me towards MSAA then. I've generally subscribed to 'SS is better than MS', but damn it, I hate the effect of AA on the CS console or the chat box in DAoC in anything other than 2x quality mode on my 8500.
 
I'm still a bit confused as to why so much attention is put on the vertex shaders only; they're far from being the most important part of current and upcoming products. Vertex shaders are good: a high polygon count with flexible programmability for those polygons is an important base to have, and vertex shaders can be useful even beyond that. But IMHO, pixel shaders will do even more for improving the visual immersiveness of games in the long run...

I also have my doubts that VS 2.0 support will do the Parhelia much good. The performance gap to upcoming DX9 products is huge, and by the time games requiring VS 2.0 come out, the Parhelia will probably be regarded the same way a GF1 SDR is today. I have high hopes that Matrox has continued to work on the Parhelia and will have a new, fully DX9 part for us next spring... :)
 
I'm a bit fearful of Matrox's financials, personally.

The chip they produced took quite a bit of investment, and now it seems it won't be competitive.

Hopefully their business sales are enough to keep them alive...but will it be enough for them to think that high end 3d graphics are worthy of pouring more money into?
 
RussSchultz said:
I'm a bit fearful of Matrox's financials, personally.

The chip they produced took quite a bit of investment, and now it seems it won't be competitive.

Hopefully their business sales are enough to keep them alive...but will it be enough for them to think that high end 3d graphics are worthy of pouring more money into?

It seems to me that if they are financially stable, it would be relatively simple to refine the technology they've invested in the Parhelia into something more competitive. The chief hurdle would be support for floating point. With ATi and nVidia paving the way to 0.13, it seems likely they could close enough of the performance delta with a 0.13 part to remain competitive in their specific markets, and possibly even be considered by some gamers.

This would depend on ATi delaying a higher-performance 9700 and the NV30 stumbling in some regard (performance/release date), however, so I don't see them being competitive for gamers unless they do both 0.13 and significant driver performance enhancements. I really wasn't expecting the 9700 to match up so strongly with the Parhelia's image quality standard, by all indications at least.
 
It seems to me that if they are financially stable it would be relatively simple to refine their invested technology in the Parhelia into something more competitive. The chief hurdle would be support for floating point.

It may not necessarily be the R&D and personnel cost that would stop them but the wafer costs these days - they may need to make some kind of a return on Parhelia before they are able to make another outlay like that.
 
I'm not sure if you're putting mask costs into the wafer costs or not.

Generally, a company won't build wafers willy-nilly without orders.
 