Matrox meeting, no benchmarks, but :)

John, yep. But I think they don't want to get people's hopes too high and set them up for a letdown. It'll be interesting to see whether anyone besides Matrox supports displacement mapping the same way Matrox does in their fall DX9 products. I hope they do, but I won't be surprised if they don't.

It's going to be the most exciting fall in 3D graphics in a long time, with Parhelia, R300, NV30, and P10. Choice is good. Competition is good. One company (Nvidia) holding 68% marketshare, whether I like that company or not, is not good; it's virtually a monopoly (1st quarter 2002 marketshare numbers).

Hopefully we'll see a bunch of DX8.1 parts around the $100 MSRP range this year too, e.g. the Xabre, Ti4200, R8500...
 
I will say, Nvidia and ATI were NOT showing next-gen hardware at E3 except for the Doom 3 demonstration, which was apparently a last-minute decision.

You know... one could look at this statement and conclude that both ATI and nVidia came to E3 armed with their respective next-generation boards, not to show to the public, but rather as a setup for Doom III.

And in the end, id Software made a last-minute decision to use ATI's R300.

How far off the mark am I?
 
typed, I'm not going to comment on that... I just find it strange that the press release was made on the very last day of E3 (Thursday) instead of on the press-conference days of Monday/Tuesday. Kyle over at HardOCP also talked to id Software last weekend, and they hadn't made up their mind on what hardware to show Doom 3 on, which would seem to support the supposition that it was a last-minute decision.

Basic, whoops, number 11 should have been what was number 12, lol. Number 13, hrm, let me get back to you on that. However, the pond demo with the bass, a shark, and a school of little fish just as detailed as that bass ran in real time. 14: I didn't paraphrase the statement about 64 samples on 1 pixel; that's what I was told. However, to be fair, the person talking was a PR person, not necessarily a hardware, driver, or devrel guy.
 
OK, so now it's clear the ATI R300 demoed Doom 3 at E3...

Do you think it was the R300 running what we saw in the Doom 3: Legacy movie?

Because I haven't seen the E3 presentation, I can't compare.
 
Got confirmation from a friend that the statement, as written, re 64 samples on 1 pixel in a clock is correct...
 
The gradual evaporation of the high-end market is happening.

Huh?! When did this start happening? Someone better get on the horn and let ATI, NV, etc. know this. As far as I can tell, the only new trend in the market is better segmentation of products; the high-end market has been pretty stable, judging from the quarterlies of the graphics companies.

Personally, I think Matrox has been surprisingly honest (surprising because certain IHVs so often lie about their products' performance) about Parhelia's performance sans AA and aniso for today's games.


Another... huh?! Matrox has been vague to the point of deception regarding the Parhelia's performance. Can anyone give me anything close to a benchmark, or a definitive performance statement from Matrox, other than "It'll be fast enough"?

My sense of Parhelia is this: it's a product that is so late to market that it appears to be next-gen because it shares some of the features the next-gen chips will have. It also carries a whiz-bang new feature that ultimately ended up being too slow to use anyway. Displacement mapping, like environmental bump mapping, is useless if your hardware doesn't have the speed to support it in game situations.
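(For anyone fuzzy on the distinction being drawn here: displacement mapping actually moves geometry, whereas bump mapping only perturbs the shading and leaves silhouettes flat. A toy, CPU-side sketch of the idea, purely illustrative and saying nothing about how Parhelia implements it in hardware:)

```python
# Toy illustration of displacement mapping (not Parhelia's hardware path):
# each vertex is pushed along its normal by a height sampled from a texture.
# Bump mapping, by contrast, only tweaks the shading normal and never moves
# the actual geometry, so silhouettes stay flat.

def sample_height(height_map, u, v):
    # Nearest-neighbour lookup into a 2D list of heights in [0, 1].
    rows, cols = len(height_map), len(height_map[0])
    return height_map[int(v * (rows - 1))][int(u * (cols - 1))]

def displace(vertices, normals, uvs, height_map, scale=0.1):
    """vertices/normals are lists of (x, y, z); uvs are (u, v) pairs in [0, 1]."""
    displaced = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        d = sample_height(height_map, u, v) * scale
        displaced.append((x + nx * d, y + ny * d, z + nz * d))
    return displaced

# Minimal example: one vertex on a flat patch raised by the height map.
print(displace([(0.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)], [(1.0, 1.0)],
               [[0.0, 1.0], [1.0, 1.0]]))
```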

Looking at Parhelia just leaves me befuddled... here is a chip that will be pegged to the mid-end market because of its lackluster performance (if it's not beating the GeForce 4 Ti in today's benchmarkable apps, then it's instant mid-end fodder), at double the price of anything in the mid-end. What is there to get excited about?

I understand wanting competition in the marketplace, but at least temper it against realistic market conditions.
 
Huh?! When did this start happening? Someone better get on the horn and let ATI, NV, etc. know this. As far as I can tell, the only new trend in the market is better segmentation of products; the high-end market has been pretty stable, judging from the quarterlies of the graphics companies.

He's talking about the high end as in workstation, not consumer, because with the likes of the NV25/R200 and onwards, high-end consumer parts are pushing more and more into the workstation space. And NVIDIA and ATi are already very aware of this, with NVIDIA making strides into that market and ATi purchasing FireGL.
 
Another... huh?! Matrox has been vague to the point of deception regarding the Parhelia's performance. Can anyone give me anything close to a benchmark, or a definitive performance statement from Matrox, other than "It'll be fast enough"?

I don't think benchmarking alpha/beta hardware with alpha/beta drivers is considered an indicator of performance.

From some of the developer quotes, like Tim's, saying that it ran Unreal 2k3 in surround gaming surprisingly fast.

If a GF4 runs Quake 3 at 250 fps and the Parhelia does it at 180,
but the Parhelia runs Unreal 2k3 at 100 and the GF4 runs it at 24,

I don't care if it's faster in Quake 3... it's plenty fast enough to play.
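To put those hypothetical numbers in perspective, it helps to convert them into per-frame times; a quick sketch using the made-up figures from above (not measurements of anything):

```python
# Convert the hypothetical frame rates above into per-frame times.
# These are the poster's illustrative numbers, not benchmark results.
scenarios = {
    "Quake 3 on GF4":         250,
    "Quake 3 on Parhelia":    180,
    "Unreal 2k3 on Parhelia": 100,
    "Unreal 2k3 on GF4":       24,
}

for name, fps in scenarios.items():
    frame_time_ms = 1000.0 / fps
    verdict = "plenty fast" if fps >= 60 else "hard to play"
    print(f"{name}: {fps} fps = {frame_time_ms:.1f} ms/frame ({verdict})")
```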
 
Username said:
Another... huh?! Matrox has been vague to the point of deception regarding the Parhelia's performance. Can anyone give me anything close to a benchmark, or a definitive performance statement from Matrox, other than "It'll be fast enough"?

Would you prefer grandiose claims, such as Parhelia's a V2 SLI-killer? As it is, Matrox is going to be facing an uphill image battle due to certain reviewers not benchmarking with AA or anisotropic filtering enabled, so premature claims could certainly backfire on them. Let's face it: only under certain conditions will Parhelia shine.

My sense of Parhelia is this: it's a product that is so late to market that it appears to be next-gen because it shares some of the features the next-gen chips will have. It also carries a whiz-bang new feature that ultimately ended up being too slow to use anyway. Displacement mapping, like environmental bump mapping, is useless if your hardware doesn't have the speed to support it in game situations.

I find those comments peculiar. It's interesting to me what a difference a few months can make in the perception/reception by the masses. The GF4 just came out 2-3 months ago, and if Parhelia were to ship by late June or early July, why is this part, which might cost what the Ti4600 did initially and yet is somewhat more advanced, suddenly so dated by the passage of a mere 4 months? Did game development suddenly take such a quantum leap forward that your average gamer should be obsessed with new hardware being DX9 rather than DX8.1 compliant? And are we guaranteed that both the R300 and NV30 will be fully DX9 compliant, faster than their respective predecessors in this year's games, and less than $400 USD? I'm not trying to be a Matrox apologist... just trying to understand how a few months can so radically date this product.

Looking at Parhelia just leaves me befuddled... here is a chip that will be pegged to the mid-end market because of its lackluster performance (if it's not beating the GeForce 4 Ti in today's benchmarkable apps, then it's instant mid-end fodder), at double the price of anything in the mid-end. What is there to get excited about?

Let's wait for the benchmarks, eh? Personally, I would be surprised to see a GF4 Ti outperforming Parhelia with its 4x AA (or even 4xS, since this setting might be required to achieve any kind of IQ proximity with FAA) and 8x aniso enabled since we all know just how much of a performance loss those settings, especially combined, incur on GF4 boards. And, as a gamer, those two settings are of keen interest to me since I don't like playing without both enabled (for example, I've been taking the performance hit on my GF3 Ti500 playing Morrowind at 11x8 with 2x AA and 8x aniso enabled).
 
Don't feed the trolls?

Another... huh?! Matrox has been vague to the point of deception regarding the Parhelia's performance. Can anyone give me anything close to a benchmark, or a definitive performance statement from Matrox, other than "It'll be fast enough"?
and
It also carries a whiz-bang new feature that ultimately ended up being too slow to use anyway. Displacement mapping, like environmental bump mapping, is useless if your hardware doesn't have the speed to support it in game situations.
Is it only me, or are these two quotes from Username's 2nd post exceedingly funny when combined?

@John,

full ACK. Very well said.

ta,
.rb
 
I saw one surround-gaming benchmark of 45 fps. Not sure what resolution, 3072x768 or 3840x1024, but it was on MatroxUsers. As to its authenticity? I dunno; Matrox wouldn't let me run a timedemo because the hardware on display had at least two different revisions and wasn't final hardware.
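For a rough idea of how heavy those surround modes are, here's the pixel math (assuming they're simply three 1024x768 or three 1280x1024 panels side by side; which mode the 45 fps figure was actually run at, I don't know):

```python
# Pixel counts for the two surround-gaming resolutions mentioned above,
# compared against a single 1024x768 display, plus rough fill throughput
# at the reported 45 fps. Illustrative arithmetic only.
single_w, single_h = 1024, 768
surround_modes = {
    "3072x768 (3x 1024x768)":   (3072, 768),
    "3840x1024 (3x 1280x1024)": (3840, 1024),
}

fps = 45
base_pixels = single_w * single_h
for name, (w, h) in surround_modes.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels/frame, "
          f"{pixels / base_pixels:.1f}x a single 1024x768 screen, "
          f"~{pixels * fps / 1e6:.0f} Mpixels/s at {fps} fps")
```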
 
John makes perfect sense. Speculation can be interesting, but in order to gain much further insight we simply need data from retail hardware.

BTW, Morrowind is almost totally limited by the host system. Pity they didn't make liberal use of bump-mapping in order to improve looks further, since the GPU just twiddles its thumbs anyway unless you crank the settings hugely. I get the feeling they are feeding lots of polygons through a somewhat dated engine. Artists did a nice job though.

Entropy
 
After reading this: http://www.rage3d.com/board/showthread.php?threadid=33618033

I don't think I can see HellBinder as more than a fanboy, really... (or should I say hype boy, because earlier posts on the very same board would indicate that he was beating the Parhelia drum a bit too eagerly... the usual mistake: first over-hype, then under-hype. And, oh dear... he has already started the same with the NV30 and R300... :rolleyes: )

*sigh* When will people notice that everything in the PC industry changes as fast as lightning?

The only place where you'll see the new DX9 features in action will be the newest 3DMark, which is in the works right now. Hopefully it will get released before December.
 
John Reynolds said:
Personally, I would be surprised to see a GF4 Ti outperforming Parhelia with its 4x AA (or even 4xS, since this setting might be required to achieve any kind of IQ proximity with FAA) and 8x aniso enabled since we all know just how much of a performance loss those settings, especially combined, incur on GF4 boards.
It is going to be very difficult to get meaningful comparisons between the upcoming boards, and I expect a lot of fragmentation of opinion. Trying to achieve IQ parity using fundamentally different AA methods is very tricky. And since FAA is certain to have issues with certain games in certain situations, Parhelia's AA performance will always come with qualifications. And how much anisotropic filtering should be applied?

Ideally I'd like to see massive, digit-life-style comparisons with many AA and filtering modes at various resolutions, but it isn't realistic to expect so much information from most reviewers.
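To illustrate why that's asking a lot of reviewers, here's a toy sketch of how fast such a test matrix balloons (the particular mode names and resolutions are my own assumptions, not anyone's actual review methodology):

```python
# Enumerate a digit-life-style test matrix of resolution x AA mode x aniso level.
# The specific modes are illustrative (and deliberately mix modes from different
# vendors); a fair cross-vendor comparison would need a comparable set per board.
from itertools import product

resolutions = ["1024x768", "1280x1024", "1600x1200"]
aa_modes    = ["no AA", "2x", "4x", "4xS", "FAA-16x"]
aniso       = ["no aniso", "2x", "8x"]

matrix = list(product(resolutions, aa_modes, aniso))
print(f"{len(matrix)} configurations per game, per board; e.g.:")
for res, aa, af in matrix[:3]:
    print(f"  {res}, {aa}, {af}")
```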

The good news, I suppose, is that even the worst of the upcoming boards will be very powerful.
 
I'm not trying to be a Matrox apologist... just trying to understand how a few months can so radically date this product.

Process technology can make a lot of difference, John; it can dictate a lot with regard to price and performance.

I remember the endless arguments over 3dfx sticking with tried-and-tested 'cheap' process technology while NVIDIA was busy adopting the bleeding edge, with its high mask costs and supposedly low yields. I think it's been proven which is the best route to go in the fast-paced 3D marketplace. So much so that it would appear ATi has also cottoned on to this and seems to be targeting releases closer to when smaller process technologies become available as well.

NVIDIA has shown that .15um has been viable in the consumer space for in excess of twelve months, and it would seem that certainly NVIDIA, and probably ATi, feel that .13um is viable to introduce this fall, only 3 to 6 months after Matrox's release.
 
By the way... am I the only one who noticed that we got a new chip fab with Parhelia too?

While nVidia, ATi, and 3DLabs are using TSMC, Matrox is using UMC...


Anyone dare to comment on that? How about yields, on-time shipments, workload on manufacturing lines, etc.?

UMC's press release says that Matrox is very happy with the yield numbers given the complexity of their chip, but as always, it's marketing...

And btw, UMC is developing lines fast... it looks like they already have a 0.13µm eDRAM line up and running too...
 
ATi uses UMC too, for their PC parts. I'm not sure which lines recently, but they have relations with both fabs.

NVIDIA also uses UMC's .15um process as a backup supply of NV2A's for Microsoft.
 
DaveBaumann: pretty many contacts ATi has... (okay, okay, it was a bad Star Wars imitation... ;) )

So, ATi has good relations with at least UMC, TSMC, and NEC.
At least they have a few backup plans if some fab goes nuts. :)
 
So, ATi has good relations with at least UMC, TSMC, and NEC.

I assume by NEC you mean the Flipper connection? Because if so, I'm not actually sure that ATi has any direct relations over the fabbing of that chip.

My understanding is that Nintendo has licensed the design from ATi (ArtX) and is responsible for actually producing the chip themselves, hence the NEC connection. I think this is a fundamental difference between the ATi/Nintendo and NVIDIA/MS deals: Nintendo can do as they please with the design, so if they feel it's viable to move to a smaller process they can just do it on their own (albeit with NEC's input) and hence get the cost benefits from that; MS, on the other hand, has paid NVIDIA for a chip, not a licensed design, so NVIDIA can do as they please with the chip, and they will be the ones that reap the benefits of it.

I think that's how it works.
 