Matrox meeting, no benchmarks but :)

While I appreciate that the thread has gravitated towards the new 3DLabs GPU, I thought I'd respond to an earlier post. I see so many Q3 references here, but they seem to be from people who don't play the game much. Or at all.
I thought I'd give the perspective of someone who, in spite of his awfully advanced age, really does (did) play Q3 seriously.

Hellbinder[CE] said:
My Radeon 8500 plays Q3 with Smoothvision and 16x Aniso like a bat out of hell. GF4 Ti's can crush Q3 with FSAA. What is the big hoopla? If it *should be playable* with these settings, with a little nicer quality... that tells me this card is a DOG. It is going to get crushed by the next wave of games, and it is going to get EMBARRASSED by the NV30 and R300...

As far as the graphics card is concerned, Q3 is a solved problem now. But not in the way that is implied above.

Most league players I know play with the highest resolution possible while still being totally "CPU" limited. For a Radeon 8500, slightly overclocked on a tuned Athlon 1900XP DDR system, that works out to 800x600 (or 1024x768, which is where I find the slowdown to be small enough to be negligible).

Most league players I know play with vertex lighting because it enhances player model visibility. (Similarly, many force both player model and player colour.)

Most or all league players use very bright settings, totally losing all atmosphere but enhancing visibility in dark areas.

AA is worse than useless, both due to the performance hit and the fact that it reduces the edge shimmering that helps with enemy detection. Totally counterproductive.

All league players cap their fps at a number that divides 1000 evenly, for example 100 (1000/100=10) or 125 (1000/125=8), due to peculiarities in Q3's physics calculation. However, this fps should be constant. (!!!) At this point in time, with top level gfx cards, the problem lies with the host processor. Not even the Athlon system described above can keep 125 fps at all times. When it doesn't, I notice it clearly, but it happens seldom enough that I haven't wanted to drop my maxfps to 100, because that would feel slightly less fluid all the time. Others prefer to go the other way. No one in the scene cares one whit about either max or average fps. Minimum fps is the only thing that's relevant at all.
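To make the arithmetic concrete, a trivial Python sketch of the rule of thumb (com_maxfps is the actual Q3 cvar; the list of caps is just my example, the point being that Q3 works in whole milliseconds per frame):

for cap in (60, 90, 100, 120, 125, 200):
    ms = 1000.0 / cap
    tag = "clean cap" if 1000 % cap == 0 else "uneven"
    print("com_maxfps %3d -> %6.2f ms/frame (%s)" % (cap, ms, tag))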

Generally speaking, competitive players care only about responsiveness and enemy detection. Anything that distracts or detracts from these goals is avoided.
Under those conditions, any last generation gfx-card performs adequately. But conversely, no gfx-card today can do Q3 at 1024x768 4xAA without the framerate dropping more than any serious Q3 player would accept. Not even the GF4.

What you consider "playable" in Q3 depends on who you are.
Once you have gotten used to a highly responsive system, and are skilled enough to benefit from that increased responsiveness, you're not likely to appreciate less responsive gameplay. (It is interesting to note that the new DOOM is a single player game. This makes sense, since you are not likely to convert any competitive players to a game that plays at 60 fps, nor would they particularly appreciate the sophisticated lighting; in fact, they would do everything they could to get rid of any dark areas for the light to play against.)


Addendum: Just because Q3 is a solved problem for league and tournament players doesn't mean it isn't as useful as (or more useful than) any other application benchmark, since benchmarking is about the information gathered rather than the number produced. The opinion that the high fps numbers Q3 produces these days make it a useless benchmark is not founded in logic.

Entropy
 
DaveBaumann said:
The elements listed there, such as displacement mapping or VS2 – are they unique? VS2 will be adopted by all DX9 cards coming out by the end of the year, as, we presume, will Displacement mapping.

I think he’s correct that other architectures will follow suit sooner or later – they may not have got all the elements right, but this is probably the general vicinity others will go towards. Now, is it really innovative to ‘hardwire’ these features, or to offer enough flexibility to be able to provide these and others?

Well, not to nitpick, but in that regard nothing in 3D has really been unique or innovative, as competitors have usually matched each other. Take AA or Aniso, TnL, programmable TnL (pixel/vertex shaders), etc... And not that I want to disagree with you Dave, but in some cases unique or innovative is all a matter of timing these days. If they have it first, then it's unique or innovative, even though better ways are always sure to come. Thus IMHO, the first card to market that brings these features will be the unique or innovative one. And we all know that if you really have something that is unique, then getting wide developer support is very tough.

Sorry, but to me his/her post was nothing more than drivel. Once again, I really enjoy how people can condemn a card without even seeing what it can do.....
 
I don't want them to tell you how they do it; I want them to tell you exactly how programmable their VS is :) From that we can draw our own conclusions.

If they have enough programmability to, say, sample a texture (which I greatly doubt) and autonomously create vertices (for which I have my reservations), an important question becomes... if OpenGL 2.0 is so great, why is their hardware already exceeding its ability to abstract it?
 
Well, not to nitpick, but in that regard nothing in 3D has really been unique or innovative, as competitors have usually matched each other. Take AA or Aniso, TnL, programmable TnL (pixel/vertex shaders), etc...

I think that Pixel Shaders are a true innovation (well, bump mapping – PS are a development of that) – the others you list have been with us a long time, and they have been here through SGI and their high end renderers. Pixel Shaders were developed from the consumer space to answer the particular need of simulating high detail without the geometric complexity. In that sense I class pixel shaders (bump mapping in its various forms) as a true recent development from the consumer space, rather than just the consumer space adopting more and more features from high end systems as process tech gets better.

If they have it first, then it's unique or innovative, even though better ways are always sure to come. Thus IMHO, the first card to market that brings these features will be the unique or innovative one. And we all know that if you really have something that is unique, then getting wide developer support is very tough.

Actually, I would say that’s also a lot to do with your market position – it’s difficult for Matrox to push a new feature because they have no gaming presence right now and they are starting from fresh, so getting real developer support for it is going to take a long time. With ATi and NVIDIA on the other hand (especially NVIDIA), if they give developers access to a new feature you can bet the developers will be more willing to develop for it, since they are nigh on guaranteed to have a reasonable number of cards that support that feature in not too great a time; they also know that feature will probably trickle down to the low end products as well.

To a certain extent this highlights the need even further not to go the fixed function route; as I said before, on a more flexible design the feature can just be programmed – if it is used then great, and if not then the hardware will just as happily be doing something else (faster), with only the code for that feature going to waste. So, why not be innovative and build a unit that can do more than just its one task? It reduces the risk for the smaller vendor but enables the developer to do more if they want to.

Once again, I really enjoy how people can condemn a card without even seeing what it can do.....

Please note that I am not condemning Matrox’s part here; I’m eager to see it in action, and I believe it will perform pretty handily. I do have concerns over its timing relative to what’s been announced by 3Dlabs and what I suspect will be occurring only a few months down the line.
 
I don't want them to tell you how they do it; I want them to tell you exactly how programmable their VS is :) From that we can draw our own conclusions.

Well, apparently some people are already using it to do Wavelet pre-processing at this stage of their pipeline. I'll admit it means nothing to me, but it may to you!
 
Some clarifications on my earlier comments

About the Parhelia: it'll be unique for a few weeks/months, until the others release products with the same features.


About the P10: since it is for the workstation market, I don't think they're aiming for the performance market; they want to accelerate everything that was done in software before.
 
DaveBaumann said:
Please note that I am not condemning Matrox’s part here; I’m eager to see it in action, and I believe it will perform pretty handily. I do have concerns over its timing relative to what’s been announced by 3Dlabs and what I suspect will be occurring only a few months down the line.

Was not implying you at all. I know you're very objective and level headed. Not so for others :)
 
DaveB,

Sorry, I should have read your posts more closely; I didn't see you had already mentioned the P10 performance issue.

Re:p10 filtering:
They support (bi/tri)linear, and anything else needs to be programmed, right? Now, I remember reading recently on b3d something about filtering methods superior to anisotropic (it was either FAST filtering or SYNC filtering or something like that). Since this is such a basic need in 3D, I really think a high quality filtering algorithm (at least aniso) should be hardwired and optimized for maximum speed... of course that's just IMHO.

Perhaps P10's programmable support for aniso or other filtering algorithms will be fast enough...

Regards,
Serge
 
Since this is such a basic need in 3D, I really think a high quality filtering algorithm (at least aniso) should be hardwired and optimized for maximum speed... of course that's just IMHO

I don't know what the standard filtering options are, to tell you the truth; I'll have to find that one out. However, I believe I talked about aniso with them – basically it can be programmed to support whatever level of filtering is required, since this will essentially just be a loop in the texture program dependent on the number of samples required beyond what their texture sampling units can sample.

Do you not think that hardwired support for bi/tri would be enough? The texture sample program could then just have a parameter for the degree of anisotropy required (1 for standard bi/tri). Dunno.
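Something like this rough Python sketch is how I picture the loop idea – purely my own illustration, not anything 3Dlabs described; the tap placement and plain averaging are guesses:

def sample_bilinear(texture, u, v):
    # Stand-in for the hardwired bi/tri sampler; nearest texel here for brevity.
    h, w = len(texture), len(texture[0])
    x = min(max(int(u * w), 0), w - 1)
    y = min(max(int(v * h), 0), h - 1)
    return texture[y][x]

def sample_aniso(texture, u, v, du, dv, degree):
    # degree=1 is plain bi/tri; higher degrees just add taps along (du, dv),
    # i.e. each extra sample is one more trip round the loop (extra clocks).
    total = 0.0
    for i in range(degree):
        t = (i / (degree - 1.0) - 0.5) if degree > 1 else 0.0
        total += sample_bilinear(texture, u + t * du, v + t * dv)
    return total / degree

# Tiny 4x4 greyscale "texture" just to make it runnable.
tex = [[(x + y) / 6.0 for x in range(4)] for y in range(4)]
print(sample_aniso(tex, 0.5, 0.5, 0.25, 0.0, degree=4))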
 
Well, I'd really need to see how it is programmable, but here are the issues AFAICS: 1) you have to write code to determine where the samples are taken, and 2) you need to write code to figure out how many samples to take (the degree of anisotropy)... this is all eating into the available pixel processing power, program length, etc...

Again, unless we have details it's really hard to say. It looks like they have 16 fp coordinate processing units per pixel pipe (enough to handle 4 vec4 texture coordinates). Depending on how many bi/tri samples you can obtain per pipe per clock, it might be competitive. Then again it might not, so...

My other concern is that every time you want to use this kind of filtering on a texture, you would need to include filtering code in the P10 shader program. Basically, every combination of rendering options possible could require a separate shader program (unless their unit supports branching, subroutines and the like).
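Just to put a rough number on the combination problem, a toy Python sketch (the option names are made up, purely for illustration):

from itertools import product

# Without branching, every combination of options would need its own
# precompiled shader program rather than one program with an if.
filters = ["bilinear", "trilinear", "aniso2x", "aniso8x"]
blends = ["opaque", "alpha", "additive"]
fogs = ["off", "linear", "exp"]

programs = {}
for combo in product(filters, blends, fogs):
    programs[combo] = "program<%s>" % "+".join(combo)  # placeholder for real code
print(len(programs), "separate programs")  # 4 * 3 * 3 = 36 already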

just a few thoughts,
Serge
 
DaveBaumann said:
Performance, I agree, will be a big factor though.

... and marketing! If you compare the P10 launch with the Parhelia launch, there's a world of difference between them. 3D Labs really needs to catch up (maybe with the help of Creative) to meet today's level of marketing.

Matrox really know how to impress with their illustrations (the Surround Gaming stuff, cool HOS/DM videos, etc.). 3dlabs just published some boring architecture presentations, nothing that could impress the media or gamers that much.
 
Maybe 3DLabs are holding back on the hyping-front till the consumer versions are drawing close? Have Creative Labs help them with some nice models for a few tech demos? I dunno.

I've been thinking a bit about how they'll dumb the boards down for the consumer market... I'm thinking they'll go for something like nVidia do: one line of cards that are really dumbed down (some pipelines removed / SIMD arrays decreased) for value cards, like the MX series; then a high-end gaming line like the Ti series; and then a professional line like the Quadros, which are basically the same as the high end gaming ones but with drivers optimized for Maya rather than Quake III (and probably a lot more memory).

The very programmable architecture of the P10 could allow for much more differentiation than GF4 Ti -> Quadro4 drivers do. As lots of filtering, AA and such are software defined, some quality features could be in just the Oxygen drivers. In other words, the drivers can be more capable, rather than just unlocking some of the features of the hardware. And with the flexibility of the P10 they could really extend the program-specific driver optimizations that already exist in their Oxygen line. Or am I way off?

Regards / ushac
 
I think high end 3d graphics chips are headed to the same destiny as high end graphics workstations.

The gradual evaporation of the high end market is happening because consumer 3d demand and commercial 3d demand are focused on the same goal: high performance, photorealistic, 3d graphics. What's more, the requirements of the consumer market are actually more demanding than the high end due to its real-time nature. As a result, you are now seeing the innovative 3d hardware features appearing first in consumer cards (programmable vertex shaders, programmable pixel shaders, etc.) and migrating their way up. The result is that consumer cards are beginning to meet the needs of the professional market and the differentiation between high end desktop 3d and professional 3d is disappearing.

I think 3Dlabs made a smart move hooking up with Creative and moving into consumer 3d. It's where the high quality 3d graphics market is heading.
 
Well, the requirements of the professional and high end consumers are very similar, but not identical (as shown by the Wildcat III article). But you're right, 3D graphics is such a large business that the consumer segment is now driving the progress. There'll still be more expensive cards for the professionals and the super large graphics clusters by SGI and the like, though. Hmm... I'm wondering what future Wildcats will be like. Will they be doing special versions of the P10 chip for that? And/or boards with dual chips? Or more?

Regards / ushac
 
P10 is now the primary development platform across all of 3Dlabs. Future Wildcats will be based on that architecture, but they will be enhancing it and going extreme on the multiprocessing elements. Wildcats will still be the very, very high end boards.
 
I’m gradually leaking out more and more of the article here…!

psurge

Well, I'd really need to see how it is programmable, but here are the issues AFAICS

Well, these are the types of things we talked about – all you are doing if you want more texture samples (for a higher degree of anisotropy) is creating more loops in the texture processors, so you are just adding clock cycles. A similar thing can be done to extend the colour range – if you want the pipes to handle 64-bit blending then it can chunk it up over two passes (internal passes, not geometry).
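As a rough illustration of how I read the chunking idea (my own Python sketch – splitting the 64-bit pixel into two channels per internal pass is my assumption, not something they spelled out):

def blend16(src, dst, alpha):
    # One 16-bit channel: src*alpha + dst*(1-alpha), clamped to the 16-bit range.
    return min(65535, int(round(src * alpha + dst * (1 - alpha))))

def blend_rgba64(src, dst, alpha):
    # src/dst are (R, G, B, A) tuples of 16-bit values, i.e. 64 bits per pixel.
    first = tuple(blend16(s, d, alpha) for s, d in zip(src[:2], dst[:2]))   # internal pass 1: R, G
    second = tuple(blend16(s, d, alpha) for s, d in zip(src[2:], dst[2:]))  # internal pass 2: B, A
    return first + second

print(blend_rgba64((65535, 0, 32768, 65535), (0, 65535, 0, 0), 0.5))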

Basically, every combination of rendering options possible could require a separate shader program (unless their unit supports branching, subroutines and the like).

I don’t think they will be making this level of functionality available to the developer; or at least, not to game developers. I think you’ll find that they will compile their own little shader routines and then just expose different levels of functionality through the drivers.

Mephisto

... and marketing! If you compare the P10 launch with the Parhelia launch, there's a world of difference between them.

Dunno actually – what did they do differently? They both got sites/press in for meetings and handed them a bunch of guff for tech previews to be published; there was little difference there. I think it was more the ‘community’ hype that took off with Matrox because it’s a part that has been threatening to be released for so long; P10 kinda came out of the blue for many. The other problem P10 has, as I’ve said before, is the fact that P10 is inherently difficult to market – it’s devoid of lots of interesting marketing buzzwords because it doesn’t need them. Buzzwords usually indicate a fixed function area of the pipeline, which doesn’t exist in P10; it’s flexible, so it can probably do many of the things others apply buzzwords to.

Perhaps the P10 material was a little bland; however, you have to bear in mind that there are two distinct things at work with P10 – 3Dlabs are still a workstation company, so their PR is still very firmly aimed at that market; Creative are fully responsible for the consumer part, so marketing that will be their task. I don’t think we’ve heard it all from the marketing side of the P10 architecture.
 
OK, I'm a little late here, but anyways.

ben: (Parhelia)
11: Are we supposed to read between the lines here? :)

13: Are/will those pixel programs be available for download? It sure would be interesting to see what they do. But even if it looks great, I wouldn't say that the pixel shaders seem powerful if they need 8 pixel shaders with 100 pixel shader instructions to render that bass. I hope there is some miscommunication here about that.

14: As also pointed out by ram: does "in other words" mean your interpretation, or was the guy you met saying these "other words"? Because while the PR talks about 64 dynamically allocated samples, and mentions some ways to allocate them, they don't mention 1 pixel with 64 samples.

DaveBaumann: (P10)
Higher bit depth by dual internal pass (add+adc?). I like it. And I'm assuming that the internal format is 10:10:10:10.

Please say that there are efficient ways to have more than 4 "color" components in one pixel/texel. :) And please say that they can do "swizzles" in the pixel programs just as in DX8 VS.
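(In case it's unclear what I mean by swizzles, a tiny Python illustration of the DX8 VS-style source swizzle I'm after – reorder or replicate components on read; the helper name is just mine:)

def swizzle(vec, pattern):
    # e.g. swizzle(r0, "zyxw") or swizzle(c0, "xxxx") in DX8 VS terms
    idx = {"x": 0, "y": 1, "z": 2, "w": 3}
    return tuple(vec[idx[c]] for c in pattern)

r0 = (1.0, 2.0, 3.0, 4.0)
print(swizzle(r0, "zyxw"))  # (3.0, 2.0, 1.0, 4.0)
print(swizzle(r0, "xxxx"))  # broadcast one component to all four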

If somebody wonders, I absolutely love to (mis)use hardware in ways it wasn't initially meant for. And I see some interesting image filtering to do here. OK, they have mentioned that themselves, but it would be interesting to see how flexible it is about the "image" formats it can handle.
 
Over in the Rage3D forums, Hellbinder wrote:

Hellbinder said:
Parhelia is a laughable product that offers NOTHING but surround gaming. It can't even outperform the GF4 Ti 4600... Go read the results of the interview Ben6 did with Matrox this week (www.beyond3d.com). They are very clear, as is Ben6, that it is not nearly as fast as a GF4.
hmm..... Ben, comment?

Entropy
 
I and Matrox don't want to give anyone the wrong idea. Without AA and anisotropic filtering, the Parhelia is likely not going to win on "simple" games like Quake3 against a GeForce4 Ti4600, as the clockspeed isn't going to be 300MHz. Once you go beyond 2 textures in a game and add FAA and anisotropic filtering, however, the Parhelia shouldn't be touched by the GeForce4 Ti4600. As to whether that's enough this summer/fall? I really can't say, except to look at the Microsoft Meltdown and GDC presentations for what an accelerator for DX9.0 PS 2.0 and VS 2.0 would require. Also note the Displacement Mapping and Surround Gaming features, and 10-bit color. And Matrox's history.

I will say, Nvidia and ATI were NOT showing next gen hardware at E3, except for Doom3's demonstration, which was apparently a last minute decision.
 
Personally, I think Matrox has been surprisingly (surprising because certain IHVs so often lie about their products' performance) honest about Parhelia's performance sans AA and aniso for today's games.
 