Matrox meeting, no benchmarks but :)

HellBinder: I find your last post a bit aggressive.
(Besides, it seems to have little basis in fact, if any.)

But I am not here to start a flame war. (I'll leave that to others. I am 100% sure that some long-time Matrox user will find that post more than just a bit aggressive.)

I just wanted to warn you. You are stepping onto a path with no way back to objectivity.

And why on earth write so much? You could easily have put that reply in three words: "IMO, Parhelia sucks." :)
 
"Nvblur Glide API "True Time" "

"Anti Aliasing: Anisotropic filtering "

Sounds like they've got the inside sources, alright.
 
Hellbinder[CE] said:
My Radeon 8500 plays Q3 with Smoothvision and 16x Aniso like a bat out of hell. GF4 Ti's can crush Q3 with FSAA. What is the big hoopla? If it *should be playable* with these settings, with a little nicer quality.. that tells me this card is a DOG. It is going to get crushed by the next wave of games, and is going to get EMBARRASSED by the NV30 and R300...


I have an 8500 and like it. It works well and I use it in my primary PC for UT work. However, when I do play games, turning on Smoothvision with all the other eye candy enabled, it starts to slow down.

You also forgot that included in that quote was Surround Gaming, which no other card out there can do. Not only that, but it puts a big strain on the card (remember, it is drawing a lot more pixels in these cases).
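
Just to put rough numbers on that extra load (the resolutions here are my own assumed examples, not anything Matrox quoted), a quick sketch:

```cpp
#include <cstdio>

int main() {
    // Assumed example resolutions, purely to show the scale of the difference;
    // these are not figures quoted by Matrox.
    const long single_head = 1280L * 1024;        // one ordinary display
    const long surround    = 3 * single_head;     // three displays side by side

    std::printf("single head: %ld pixels per frame\n", single_head);
    std::printf("surround   : %ld pixels per frame (%.1fx the fill load)\n",
                surround, static_cast<double>(surround) / single_head);
    return 0;
}
```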

Also, there is always something bigger, faster, and better coming "soon". Why should I get the NV30 when I know the NV35 (or something faster) will be out 6 months later?

Again, I find it silly for anyone to say anything too positive or too negative about this card, as we have no idea what it can do. Why don't we wait until we have some benchmarks...
 
I think Hellbinder is definitely on the right track here, reading Ben's report with the relevant information..

That this card isn't going to be faster than the GF4 Ti at this stage in the game, when the next chip series from Nvidia is going to destroy the GF4 series.. doesn't bode well. Add on that boards are going to start around the $400 price point.. and Parhelia loses a lot of luster.

I thought it was fairly suspicious that Matrox wasn't trumpeting the highest 3DMark score ever (tm)! at any point.. or mentioning speed at all. In a world where speed is king.. Parhelia is too little, too late (much like Matrox's previous 3D offerings).

I found the amount of praise being heaped upon this chip pretty humorous, all of it derived from Matrox's press material. When the reviews start pouring in.. I don't expect them to be as gratuitous or gushing as anything we've seen on this board.

Something that troubles me about Parhelia personally is that there is nothing remotely unique about it. It offers a lot of gimmicks, but there is nothing in its feature set that can't be done/hasn't been done better already. Even the super-gimmicky "Surround Gaming" option (which will be far more difficult to support than TruForm, and we see how widespread the support for that is [sic]) isn't something that's impossible to do on other boards on the market.. it's something that no one thought was a good idea. The desk real estate needed to configure even 3 plasma monitors for that type of gaming is going to be huge, and the costs associated are going to be huge as well. For a professional workstation, I could see something like this being extremely useful.. but for gamers?

It's sad that the Parhelia hype has overshadowed the one chip that IS actually bringing something new to the table, and that's 3Dlabs' offering. That so many people jumped on the Parhelia bandwagon but missed the P10 is pretty confusing to me. Here is an architecture that, while not the fastest piece of silicon on the planet, offers a glimpse of the shape graphics processors will take in the next few years. Even if it turns out to be a total dog performance-wise, it at least has something new to bring to the table, something that will eventually push the technology in the industry forward, rather than a handful of gimmicks on a painfully late part.

Of course, this is all just my opinion.

(P.S. It's good you found another chip to champion, Nappe; you took a large bath on the whole Bitboys thing, eh?)
 
Something that troubles me about Parhelia personally is that there is nothing remotely unique about it. It offers a lot of gimmicks, but there is nothing in its feature set that can't be done/hasn't been done better already

What about FAA, displacement mapping, depth-adaptive tessellation, VS 2.0, Surround Gaming?


I'd like to see what's so unique about NVIDIA's next refresh... err, chip.

I have faith ATI will bring something new and tasty.

And apart from the chip architecture and the rumors flying around, you've never seen the performance first hand, and I don't think I ever saw any pre-release 3DMark2001 score from NVIDIA itself.

What if Quake 3 performance is the same as on a GF4 but looks 10x better?
I'd like to see the GF4 run Morrowind faster than the Parhelia (or even UT2k3).

Parhelia is, a Lot, at the Right time..
 
muted said:
Something that troubles me about Parhelia personally is that there is nothing remotely unique about it. It offers a lot of gimmicks, but there is nothing in its feature set that can't be done/hasn't been done better already

What about FAA, displacement mapping, depth-adaptive tessellation, VS 2.0, Surround Gaming?


I'd like to see what's so unique about NVIDIA's next refresh... err, chip.

I have faith ATI will bring something new and tasty.

And apart from the chip architecture and the rumors flying around, you've never seen the performance first hand, and I don't think I ever saw any pre-release 3DMark2001 score from NVIDIA itself.

What if Quake 3 performance is the same as on a GF4 but looks 10x better?
I'd like to see the GF4 run Morrowind faster than the Parhelia (or even UT2k3).

Parhelia is, a Lot, at the Right time..

Thanks for answering this guy :)

I have been reading B3D for two years, every day if not several times a day, but because my English is really bad I never join the discussions. When I saw text like username wrote, though, I really wished I could.

I have one question; I hope you understand it. Could someone explain to me why so many people seem to 'hate' Parhelia? Did Matrox do something I don't know about?

thx :)
 
Well, there's the simple answer:

they're jealous

And the not-so-simple answer:

Back when the G200 was released, it had decent drivers, except its OpenGL drivers; the same thing happened with the G400. Anyone who spent enough time with these cards can agree that they were good (they could have been better, but).

But to innocent bystanders hearing that these cards were slower than their NVIDIA counterparts, and about the gimmicky DualHead feature (which spread nicely over to every other company), they were inferior in every way.

I'm generally happy with the card I have, when it works... and this goes for NVIDIA, ATI, and Matrox (it did for 3dfx too).
 
What about FAA, displacement mapping, depth-adaptive tessellation, VS 2.0, Surround Gaming?

As much as I don’t agree with the form Username’s post took, some of the content does strike a chord with me.

The elements listed there, such as displacement mapping or VS2 – are they unique? VS2 will be adopted by all DX9 cards coming out by the end of the year, as, we presume, will displacement mapping.

However, compare those to the P10, also just announced – which of these do you suppose can’t be achieved with its architecture? Supposedly not displacement mapping, according to the long conversation I’ve had with 3Dlabs, and I’d assume VS2.0 functionality could be programmed (at a guess).

I think he’s correct that other architectures will follow suit sooner or later – they may not have got all the elements right, but this is probably the general vicinity others will go towards. Now, is it really innovative to ‘hardwire’ these features, or to offer enough flexibility to be able to provide these and others?

Performance, I agree, will be a big factor though.

I am, though, very keen to see what FAA can do.
 
I read somewhere that Matrox licensed DM to Microsoft for DirectX 9.

*edit*
Will HDM and LOD also be a feature of OpenGL?

I don't know if we will be allowed to submit any functions to the OGL consortium since we licensed HDM to MS to implement into DX9.

That's from matrox.com's forums.
 
I read somewhere that Matrox licensed DM to Microsoft for DirectX 9.

So it's free for any other vendor to implement in hardware if they have a DX hardware license.

However, it seems to me that displacement mapping as a concept has been around far longer than Matrox's license - perhaps that means they licensed an implementation to MS, which means alternative implementations may be used elsewhere...?
 
DaveBaumann said:
The elements listed there, such as displacement mapping or VS2 – are they unique? VS2 will be adopted by all DX9 cards coming out by the end of the year, as, we presume, will displacement mapping.

Is this a bad thing? If these features were unique to Matrox, hardly anyone would ever use them. Displacement mapping is cool, but it wouldn't stand a chance if ATI and NVIDIA didn't plan to support it. At least I assume they will support it. Licensing it to Microsoft makes it more likely to find support. If a feature isn't used, who cares how innovative it is?

P10 definitely sounds interesting and innovative, but until we see how it performs we won't know if that innovation was actually useful. People always criticize products for not being totally different (innovative?), but it is not practical for companies to completely change their minds every few years about what architecture is best.
 
Is this a bad thing?

I’m not saying it’s a bad thing; I’m possibly suggesting it doesn’t look too great that their competition is likely to have that feature in their hardware only a few months after Matrox’s, despite how long Matrox have been touting the feature!

If a feature isn't used, who cares how innovative it is?

And this is the point, isn’t it – currently you have hardware dedicated to that feature, and if it isn’t used then it’s a waste of hardware. On a more flexible design the feature can just be programmed – if it is used then great, if not then the hardware will just as happily be doing something else (faster), with only the code for that feature going to waste. So, why not be innovative and build a unit that can do more than just one task?
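
To illustrate the idea only – a toy CPU-side sketch, nothing like how any of these chips actually works internally – the same flexible unit simply runs whatever per-vertex routine it is handed, so a feature costs nothing when a title doesn’t use it:

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct Vertex { float x, y, z; };

// Stand-in for one flexible processing unit: it runs whatever per-vertex
// program it is handed, instead of having a single wired-in job.
void run_vertex_program(std::vector<Vertex>& verts,
                        const std::function<void(Vertex&)>& program) {
    for (Vertex& v : verts) program(v);
}

int main() {
    std::vector<Vertex> mesh(1024, Vertex{0.0f, 0.0f, 0.0f});

    // Title A wants "displacement": push each vertex along z by a height value.
    run_vertex_program(mesh, [](Vertex& v) { v.z += std::sin(v.x) * 0.1f; });

    // Title B never uses displacement; the same unit happily does something else.
    run_vertex_program(mesh, [](Vertex& v) { v.x *= 2.0f; v.y *= 2.0f; });
    return 0;
}
```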

but it is not practical for companies to completely change their minds every few years about what architecture is best.

And this is the point. In which direction are future generations more likely to head – more ‘hardwired’ features (making for an increasingly complex chip for the sake of processing speed) or more flexible pipelines that can be programmed with new features?

Now, if you think the former, then Matrox’s part makes sense; however, if you think the latter, then does the P10 not make more sense?

If we really are going down a more programmable route then P10 hits the ground running – they will probably only have to make incremental changes going forward, as the significant part of their architecture change has already occurred. Matrox’s part, on the other hand, is still very fixed-function, meaning that if this is the route to be taken they will have to make another architecture change to come in line. Given how long it’s taken Matrox to produce this, I have concerns over their ability to do it again, which is why I wonder if Parhelia goes far enough – had P10 not been here then I doubt I would have these concerns, but I think it’s interesting to see what 3Dlabs have done in comparison to Matrox on roughly the same transistor count on the same process.

3Dlabs certainly feels that this is the path that is going to be trodden in future; NVIDIA have already said similar at the GF4 launch in relation to NV30, and it’s what Carmack has been wanting for a long time.
 
DaveBaumann, I pretty much agree with everything you said. What remains to be seen is whether the programmable approach taken by the P10 is competitive (speed-wise) when programmed to do what Matrox has hardwired.

The two chips seem roughly equivalent in raw vertex/pixel horsepower, but given the P10's flexible approach I suspect it will be slower much of the time (I wonder how much occlusion culling and virtual memory management will mitigate this)...

--

I very much hope that R300 and NV30 go in the same direction as 3Dlabs (it's so much more interesting from a programmer's point of view).

Areas for improvement over the P10 are FP pixel pipes, better occlusion culling support, and better texture filtering. Hopefully someone will implement Z3 AA or some form of higher-quality smart FSAA...

Regards,
Serge
 
Do we know if 3Dlabs' hardware is flexible enough to create vertices through programmable tessellation? I know a limitation of the DX8 pipeline is that the vertex shader cannot create or destroy vertices, hence it is a separate feature for ATI and Matrox.

Also, I don't exactly remember the Z3 algorithm. Did they store fragments in a separate buffer similar to Matrox? If so, how would this work with ATI's hierarchical Z buffer? A separate fragment buffer might make it harder to maintain a hierarchy.
 
UserName:
I am still primarily following Bitboys. Matrox is just a halfway stop for me.

And I found Matrox back when the G200 was new, so no, nothing new there. After the G400 I changed to a Radeon, but I cannot say that I have been very satisfied with it. (The AIW Radeon was the most expensive model of their Radeon series, and still it took over a year to get the drivers to the level my G400 had from the beginning.)

Besides, you are talking about how Parhelia will perform and how it will be crushed by DX9 hardware, and we haven't even seen Parhelia tests yet, let alone R300/NV30 tests. So don't jump the gun before we have some benches.

I agree with Ben that there will be only a few DX9 games during the next 6 or 9 months.
 
Psurge,

What remains to be seen is whether the programmable approach taken by the P10 is competitive (speed-wise) when programmed to do what Matrox has hardwired.

Oh, if you look through my posts about the P10 then you’ll see that performance is the biggest caveat – is P10 too early? In a few respects I think it is, but again they won't have to make a large architecture overhaul for future generations, so it's easier for them to make it faster. I do have a small heads-up on its performance though.

Areas for improvement over the P10 are FP pixel pipes, better occlusion culling support, and better texture filtering.

What do you think is deficient in its filtering support?

3dcgi,

Do we know if 3Dlabs' hardware is flexible enough to create vertices through programmable tessellation?

Yes, we do know. I think I’m going to have to hurry up with the P10 tech preview…!
 
Will that article describe exactly how they do it too? Because it's easy for them to say they support it, and in the meantime emulate it by feeding the VS, say, a stream of software-generated vertices containing only a UV coordinate pair (which is neither a big task nor a huge load on the AGP bus... but still not ideal) and letting it calculate the surface from that.

There are a million things like this which would make a general statement that it's flexible enough to incorporate feature X not a lie, but not totally upfront either... unless they expose the ISA (most importantly how it deals with memory access and its latency... OpenGL 2.0 ain't enough, certainly not for displacement mapping) so we can make some reasonable assumptions about the performance, I'll find it hard to take anything on faith.
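
Something along these lines is all the host would have to do for that emulation path (the names and grid size here are made up, just to show how little data is actually involved per patch):

```cpp
#include <cstdio>
#include <vector>

struct UV { float u, v; };

// Build an n x n grid of (u,v)-only "vertices" for one patch.  A vertex
// shader would then compute the displaced surface position from these, so
// the CPU and AGP bus only have to move two floats per vertex.
std::vector<UV> make_uv_grid(int n) {
    std::vector<UV> grid;
    grid.reserve(static_cast<size_t>(n) * n);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            grid.push_back(UV{ i / float(n - 1), j / float(n - 1) });
    return grid;
}

int main() {
    const int n = 33;  // assumed tessellation level, purely for illustration
    std::vector<UV> grid = make_uv_grid(n);
    std::printf("%zu vertices, %zu bytes per patch\n",
                grid.size(), grid.size() * sizeof(UV));
    return 0;
}
```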
 
It's really funny how everyone bashes Parhelia for being hyped based only on released chip specs. It's also hypocritical. :devilish: Name one other recent video chipset that has done differently. I've owned a GF2, GF3, and now a GF4, and all of these were released AND hyped the same way. The Radeon chips were the same way too. Anything else I don't even consider worth discussing, since other video chips aren't in the same league as those just mentioned (the V4/5/6 had the potential, but thanks to their lateness to market..)

As far as performance goes, do you really think that the ability to get 300 frames per second in Quake 3 Arena is special? What's really special is setting some ungodly resolution, turning on or up all the pretty features (highest aniso, some form of nice antialiasing, highest color depth, etc., etc.) in the game, and STILL having the minimum framerate be something playable. I think this might be what Ben and others are trying to point out to the more brain-dead amongst us. Sure, the GF4 runs Q3A great (I love this card too), but even if Matrox can't outperform it at 1600x1200 with 32-bit color, I have a feeling that as you pile on more and more graphical goodness, to the point that the poor little GeForce card starts getting tired, the Matrox'll hang in there. :D

That's the main point of this new chipset - adequate performance with the best eye candy (aka IQ). It's not meant to blast out mega framerates, only to keep the framerate from dropping into the soup when the going gets really rough. Up until now, the only reason we really worried about maximum framerates was that, in general, the higher the max, the higher the minimum framerate. :)

Dunno how many others here had the opportunity to play with a G400 Max in its prime, but I did. I also had a TNT1 and a TNT2 Ultra. Sure, the TNT2U achieved higher max framerates in many of my games, and likewise a higher average. The interesting part was that the G400 had a higher minimum in most of those games. So, since it had a higher minimum, lower maximum, and slightly lower average, there was a much narrower spread between its peaks and valleys, which translates into a much smoother gameplay experience. I think that's the real pot at the end of the 3D gaming rainbow - the smoothest possible gameplay in our games. ;)
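
To put some made-up numbers on what I mean by spread (these frame times are invented, purely to show how a narrower min/max gap reads as smoother play even with a slightly lower average):

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Summarise a run of per-frame times (in milliseconds) as min/avg/max fps.
void report(const char* card, const std::vector<double>& frame_ms) {
    double worst = *std::max_element(frame_ms.begin(), frame_ms.end());
    double best  = *std::min_element(frame_ms.begin(), frame_ms.end());
    double avg   = std::accumulate(frame_ms.begin(), frame_ms.end(), 0.0)
                   / frame_ms.size();
    std::printf("%s: min %.0f / avg %.0f / max %.0f fps\n",
                card, 1000.0 / worst, 1000.0 / avg, 1000.0 / best);
}

int main() {
    // Invented frame times: card A peaks higher but stalls harder than card B,
    // so card B feels smoother despite the lower average and maximum.
    report("card A", {8.0, 9.0, 10.0, 40.0, 9.0});
    report("card B", {14.0, 15.0, 16.0, 20.0, 15.0});
    return 0;
}
```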

So, don't knock this chip unless it really falls down. And not beating a GF4 in Q3A's max framerate farce isn't falling down. Not maintaining smooth gameplay in all present titles, and in most upcoming titles with most or all graphical goodies on, maybe. And most of all, try to be a bit objective about the chip. Stop the moronic bashing of the chip because the hype's based on tech specs. We get the same thing with every new chip that comes out now, and we all know that nothing's determined besides features until we have actual hardware in our hands to mess with. :p
 