Codename 'Parhelia'...

Well, they are straight out saying that it's just a rumour.

I have to say that the many rumours surrounding Nvidia and its offerings often contain some truth, but in my experience, rumours concerning Matrox generally aren't worth much. They could just as well originate from the Matrox fansite.
 
ChrisK: you mean MURC?? :smile: well, I picked that link from there and nothing indicates this originates from there.

but as said, take it with a grain of salt...
 
Yes, I mean http://www.matroxusers.com .

I lurk on their forums from time to time. Weren't there multiple occasions where idle speculation from their forums ended up on the news pages of many websites? I think so.

I remember reading about an imminent release of the famous G800 there in late 2000.

By now it seems to me that Matrox rumours have a certain annoying Bitboys "quality" to them - there's never anything coming out for real.
 
Probably the most interesting rumour is displacement mapping; however, I feel these are probably based on Matrox's presentation at Meltdown. Not everything presented at Meltdown goes into hardware — it could in fact have been something they were prototyping in a simulator. We all remember Talisman. :smile:

I have a feeling the first implementations of displacement mapping in hardware are going to be horribly broken (lots of visual artifacts, discontinuities, etc) and not very flexible. DX9 looks like a good start, but I bet they won't be usable until DX10.
 
First of all, I don't think everyone remembers Talisman :) Secondly, are you sure Talisman never made it to silicon?
 
heh...I remember Talisman!

I can still remember, back in the nVidia NV1, 3D Labs GiGi "Gaming Glint" (remember that one?) and S3 Virge days, people saying "wait until we have Talisman cards... and they'll also support that new thing from Intel called 'AGP' that will eliminate the need for on-board memory!" ;)
 
Displacement Mapping. Hmmm.
I don't know the Meltdown presentation, but unless there are heavy optimisations running in the background, I cannot see this being anywhere near possible (read: in any usable form) with today's polygon counts. Or it will look horrible with a low poly count, as DemoCoder already said.

Maybe a map for terrain deformations — say, craters or something like that — but what about a brick wall or even more detailed objects?
 
On 2002-02-11 20:09, DemoCoder wrote:

I have a feeling the first implementations of displacement mapping in hardware are going to be horribly broken (lots of visual artifacts, discontinuities, etc) and not very flexible. DX9 looks like a good start, but I bet they won't be usable until DX10.

Do you think that ATI's first N-patch implementation is "horribly broken"? Because if misused it can create discontinuities?

As it turns out, N-patches require some attention from the artist who builds the model. The same is trivially true for displacement mapping. Like N-patches, DM cracks where there is a normal discontinuity (hard edges). It can also crack where the mapping changes at a polygon edge and the maps don't fit.
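A toy sketch of why a normal discontinuity opens a crack (my own illustration, not any vendor's actual algorithm): if the two faces meeting at an edge carry different normals, displacing the shared vertex along each face's normal produces two copies that no longer coincide.

```python
# Toy illustration (not any vendor's actual algorithm): displacing a shared
# vertex along two different per-face normals opens a crack at the edge.

def displace(vertex, normal, height):
    """Move a vertex along the given normal by the sampled displacement height."""
    return tuple(v + height * n for v, n in zip(vertex, normal))

shared_vertex = (1.0, 0.0, 0.0)   # lies on the edge between two faces
normal_a = (0.0, 0.0, 1.0)        # face A's normal
normal_b = (0.0, 1.0, 0.0)        # face B's normal (hard edge: normals disagree)
height = 0.5                      # same displacement-map sample on both sides

pa = displace(shared_vertex, normal_a, height)
pb = displace(shared_vertex, normal_b, height)
print(pa, pb)  # the two displaced copies of the "same" vertex differ -> crack
```

Same vertex, same map sample, different result on each side — exactly the artist-visible crack described above.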

As for DX9, why do people imply that Microsoft invents new features and new cards then implement them? It's actually the other way around: hardware manufacturers create the new features in their new cards, and Microsoft exposes them through new APIs. So DX9 or DX10 has nothing to do with whether the Parhelia implements this feature buggily or correctly.
 
Talisman:

Yes, I remember that very well, in early 1997 I fully expected it to be released at some point, with Microsoft throwing their weight behind it and all...

BTW, :smile: Your mentioning of it has inspired me to take a look at my old CD-Rs and I indeed found an old document (from early 1997) detailing what Talisman was supposed to be. If you're interested, I've uploaded it to the webspace that comes with my internet account: http://chklassen.bei.t-online.de/Talisman.zip (it's a 316 KByte zipped Word document)

Interesting and... somehow familiar ("chunking"), isn't it? ;)

Displacement mapping: I have a little trouble understanding how they want to do that. The methods of displacement mapping that I know require the geometry that is to be displaced to be already there, leading to very complex models with thousands of additional vertices. I somehow don't think this would be a usable solution on consumer hardware today. So how do they plan to solve this problem, generating the necessary geometry on-chip?
 
On 2002-02-11 20:46, Bentarr wrote:
i cannot see this anywhere near possible with today polygon counts.

May I ask what those poly counts are?
Because I think today's games have hopelessly low polygon counts.

Even a GF2MX can push 300,000 polys well above 30 fps. Yet most current titles use far fewer than 50,000 polys. Quake 3 has something like 10,000 (according to JC). When Epic said their demanding test scene uses as many as 100,000 polys, many people said "Wow, that many? That will surely be geometry limited!" Guess what? It's fillrate/bandwidth limited even at 640x480! :smile:
 
So how do they plan to solve this problem, generating the necessary geometry on-chip?

I don't know for sure. I believe they are going to create new geometry until the screen-projected edges are below a fixed length.
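A rough sketch of that idea (my own guess at the scheme, not Matrox's actual hardware): keep splitting an edge until its screen-projected length drops below a threshold, which automatically gives nearby geometry far more vertices than distant geometry.

```python
# Rough sketch (a guess at the idea, not Matrox's actual scheme):
# subdivide an edge until its screen-projected length is below a threshold.

def project_length(p0, p1, focal=1.0):
    """Projected length of edge p0-p1; points are (x, z) with z = view depth."""
    x0 = focal * p0[0] / p0[1]
    x1 = focal * p1[0] / p1[1]
    return abs(x1 - x0)

def subdivide(p0, p1, max_len=0.05):
    """Recursively split the edge until its projected length is under max_len."""
    if project_length(p0, p1) <= max_len:
        return [p0, p1]
    mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    left = subdivide(p0, mid, max_len)
    return left[:-1] + subdivide(mid, p1, max_len)  # drop duplicated midpoint

near = subdivide((0.0, 1.0), (1.0, 1.0))    # edge close to the camera
far = subdivide((0.0, 10.0), (1.0, 10.0))   # same edge, ten times farther away
print(len(near), len(far))  # the near edge ends up with many more vertices
```

The fixed screen-space threshold is what makes the tessellation view-dependent: the far edge stops after one split while the near one keeps going.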

ciao,
Marco
 
ChrisK,

Wow...now that's a blast from the past! Thanks for the post. I was searching the 'net looking for that very same article. (I remember reading it years ago). I remember seeing the "required bandwidth" graphs and wanted to see how things panned out.

This is great:

A full up SGI RE2—a truly impressive machine—boasts a memory bandwidth of well over 10,000 MB per second. It's quite clear that SGI has nothing to fear from evolving PC 3D accelerators, which utilize traditional 3D pipelines, for some time to come.

Heh...looks like "Some time to come" ended up being 2002, with GeForce4 being the first to break the 10.0 GB/sec bandwidth barrier. ;)
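For reference, the arithmetic behind that figure (assuming the Ti 4600's published specs — a 128-bit bus with 325 MHz DDR memory, i.e. 650 MT/s):

```python
# Back-of-the-envelope check (assuming the GeForce4 Ti 4600's published
# specs: 128-bit memory bus, 325 MHz DDR, i.e. 650 megatransfers/sec):
bus_width_bytes = 128 // 8          # 16 bytes moved per transfer
transfers_per_sec = 325e6 * 2       # DDR: two transfers per memory clock
bandwidth_gb = bus_width_bytes * transfers_per_sec / 1e9
print(bandwidth_gb)  # 10.4 GB/s -- just past the old RE2 ballpark
```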
 
That's interesting...

I can see another case here that shows ATI and Matrox "supporting" one another. (A previous case being ATI and Matrox promoting a single pixel shader model for OpenGL.) If I understand this correctly, one way for developers to make their models / geometry "displacement map friendly" is to make them N-Patch (TruForm) friendly.



<font size=-1>[ This Message was edited by: Joe DeFuria on 2002-02-11 21:46 ]</font>
 
I don't know if I fully understand your question.
What I reckoned was the following:
If you load up a Q3 map, for example one with a bit heavier overdraw, so that you've got ~35,000 tris on screen, a GF3 slows down to, I presume, 50 fps. Be it because of fillrate or geometry, performance is limited even at these low poly counts.

What would happen if you threw an 800,000-tri scene (4 walls of corroded brick, every wall 200,000 faces, ceiling and floor bumped) at an NV30-like GPU? I don't know for sure, but I presume the performance would drop like a stone.

Thanks for the link, btw.
 
I guess I'm just a RenderMan <bleep>, but I dream of a day when hardware tessellates to micropolygons and the shading language is powerful enough to allow procedural displacement mapping.

Moreover, I dream that the hardware is efficient and tessellates adaptively, so that more vertices are created nearby than far away, and that it all runs so blazingly fast that you can use it all over the place.
 
On 2002-02-11 21:47, Bentarr wrote:
What I reckoned was the following:
If you load up a Q3 map, for example one with a bit heavier overdraw, so that you've got ~35,000 tris on screen, a GF3 slows down to, I presume, 50 fps. Be it because of fillrate or geometry, performance is limited even at these low poly counts.

Q3 is fillrate/bandwidth limited, so the overdraw is the most likely problem in this case. That's why the Kyro II got such high scores.

What would happen if you threw an 800,000-tri scene (4 walls of corroded brick, every wall 200,000 faces, ceiling and floor bumped) at an NV30-like GPU? I don't know for sure, but I presume the performance would drop like a stone.

On Q3? Well, did you see the nv15 map?
I stopped in one of the very low poly corridors and loaded a bot. I was always able to tell where the bot was just by looking at the framerate. :smile: Read: Q3 gets CPU limited with high polycounts.

That's because the high-poly detail was built into the BSP, so the game likely dies in routines like collision detection. If the high-poly detail were added just as extra "eye candy" models (not affecting gameplay directly), then no, it wouldn't affect the framerate much.
 