Parhelia memory interface

pascal said:
jb

I understand, but I am talking about the engine, not the game.

The original Unreal engine was at the same level of graphics technology as Quake 3, and sometimes more advanced (remember the detail textures and compressed textures). But this time id's decision (probably a hard one) was to make everything DOT-3, and it should pay off.

Pascal

sorry to nitpick here :( BUT IIRC, the Unreal engine was designed way back in the Voodoo 1/2 time frame. Tim S. wrote the game with a software renderer in mind. It had pretty good HSR built into the game, but it lacked support for any decent poly loads. Even with today's GF4 cards, once you get over 400 world polys in a scene you start to slow down. Worse yet, throw in any dynamic lighting effects and watch the frame rate tank. Unreal was also locked to 256-color textures when you were making things. I don't know how many times I cringed when I saw how horrible my textures looked after converting them to 256 colors and importing them into the game. There were other minor limitations of the engine as well. Again, sorry to nitpick and pull this thread off topic.....
 
We are all very far from the original topic (the Parhelia memory interface).
I am sure Unreal 2 will have great gameplay. Any idea when we will see UT2003?

Mfa:
If middleware 3D engines were co-developed with the hardware, we'd have a lot prettier games :)
Who will do it? M$? Please no thanks.
edited: it is an interesting idea, but we would need an extremely flexible and versatile middleware above the hardware API.

If the API is DX9, then I see only Microsoft doing it (I have personal reservations). If the API is some to-the-metal API, then it is the chip manufacturer, but it will work only with that specific chip.
 
Got the link from MURC :D
ut2003_large.jpg
 
Someone said that Parhelia is like an Aston Martin: expensive, an individual choice, with the best "Image Quality" and performance that the usual luxury competitors don't have. Plus, excellent technical support and warranty.

I have nothing more to say really. :)
 
It's still missing F"S"AA, though. I personally think the Radeon 8500's FSAA has the best image quality, better than the V5500's, IMO.
 
Someone said that Parhelia is like an Aston Martin: expensive, an individual choice, with the best "Image Quality" and performance that the usual luxury competitors don't have. Plus, excellent technical support and warranty.

Unfortunately, not everyone can afford an Aston Martin, and the need for technology in a car reached the point where even researching something as relatively commonplace as an airbag mechanism would have put Aston Martin out of business; they needed the R&D dollars that only the purchase by Ford could provide.

If you wish to continue the analogy, then draw your own conclusions… ;)
 
DaveBaumann: that "someone" wasn't me, so I'll stand by my statement: "I have nothing more to say really."
 
muted said:
well, it has FSAA 4x...

but with FAA 16x working on most games, why would you want FSAA 4x?
That is MOST games, not ALL games, big difference. ;)


A more interesting question is what sort of supersampling method they have chosen to fall back on: ordered-grid or rotated-grid?
 
Ascended Saiyan said:
muted said:
well, it has FSAA 4x...

but with FAA 16x working on most games, why would you want FSAA 4x?
That is MOST games, not ALL games, big difference. ;)


A more interesting question is what sort of supersampling method they have chosen to fall back on: ordered-grid or rotated-grid?

I included that one in B3D's list of questions to Matrox.
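
While we wait on an answer, here's a quick sketch of why the fallback pattern matters. The offsets below are generic textbook ones, not anything Matrox has confirmed; sweeping an edge across a pixel shows the rotated grid producing more coverage steps than the ordered grid at the same sample count:

```c
/* Illustrative only: generic 4x sample offsets in [0,1) pixel space,
 * NOT Matrox's actual pattern. An ordered grid has only 2 distinct x
 * (and y) positions; a rotated grid has 4 distinct positions on each
 * axis, so near-vertical/horizontal edges get finer gradation. */
#include <stdio.h>

typedef struct { float x, y; } Sample;

static const Sample ogss4[4] = {             /* 2x2 ordered grid */
    {0.25f, 0.25f}, {0.75f, 0.25f},
    {0.25f, 0.75f}, {0.75f, 0.75f},
};

static const Sample rgss4[4] = {             /* rotated grid (V5-style) */
    {0.375f, 0.125f}, {0.875f, 0.375f},
    {0.625f, 0.875f}, {0.125f, 0.625f},
};

/* Fraction of samples covered by a vertical edge at x = e. */
static float coverage(const Sample s[4], float e)
{
    int hit = 0;
    for (int i = 0; i < 4; i++)
        if (s[i].x < e) hit++;
    return hit / 4.0f;
}

int main(void)
{
    /* Sweep an edge across the pixel: the rotated grid yields 5
     * distinct coverage levels, the ordered grid only 3. */
    for (float e = 0.0625f; e < 1.0f; e += 0.125f)
        printf("edge at %.4f: ordered %.2f, rotated %.2f\n",
               e, coverage(ogss4, e), coverage(rgss4, e));
    return 0;
}
```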
 
Pshaw.

Personally, I WANT more games to come out aimed at the high end, because as it is, I can only think of two games o_O that have come out in the past, oh, five years (?) that were aimed directly at the high end of PC gaming, those being Unreal and Morrowind. People complain about how Morrowind runs so slowly, and I love it! :devilish: Morrowind blows my mind, and though it bogs down on most systems, I have to say that I'm quite pleased with its performance.

Granted, I might say this because I'm a spoiled little brat :p , but it truly does irritate me how games coming out even now are optimized for cards with less than half the raw capabilities (fillrate, bandwidth, and the like) of what's available now.

And yes, I'm aware that they do that so as to ensure a wider audience, but really, who cares? I would confidently assert that a good 70% or so of gamers have mid-to-high-end systems; the people with the GeForce2 MXes and Radeon 7000s aren't gamers, they're just average users who happen to game. I really would like to see some numbers on just how much of PC game sales goes to these people who 'happen to game'; I'm willing to bet it's a lot smaller than people think.

I suppose it could also have something to do with the fact that I'm tired of having to convince ignorant people that high-end PCs are, on the whole, more capable than consoles.

Just the unsolicited .02 from a not-fully-here lurker. :oops:
 
On the original topic, specifically concerning the Matrox memory controller, I'm going to take a guess (after reading around the web, seeing the diagrams, and doing some general guessing... ;)):

The chip has a dual-channel bus between itself and its memory (much like what the G400 series had, but with wider buses).

Internally, there are several distinct controllers for the various parts of the chip that can request reads or writes from/to memory as needed. This is coordinated using registers for caching and the sophisticated controller Matrox mentions briefly. That controller queues up reads and writes and then performs burst reads/writes against the memory itself. Ideally, the reads and writes would make the fullest use of the 512 bits of total width (a 256-bit read and a 256-bit write, both capable of running simultaneously, just like the G400 was able to do).
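
To make that hand-waving a bit more concrete, here's a toy model of the sort of arbitration I have in mind. Every name and queue size in it is invented; it's a sketch of the speculation, not Matrox's actual design:

```c
/* Toy arbiter: several on-chip clients queue read/write requests,
 * and a central controller drains at most one burst per channel per
 * cycle, with the read and write channels running in parallel.
 * Purely illustrative; all names and sizes are invented. */
#include <stdio.h>

#define QLEN 8

typedef enum { REQ_READ, REQ_WRITE } Kind;
typedef struct { Kind kind; unsigned addr; } Request;

typedef struct {
    const char *name;            /* e.g. texture unit, display FIFO */
    Request q[QLEN];
    int head, count;
} Client;

static void push(Client *c, Kind k, unsigned addr)
{
    if (c->count == QLEN) return;             /* queue full: client stalls */
    c->q[(c->head + c->count) % QLEN] = (Request){ k, addr };
    c->count++;
}

static void pop(Client *c)
{
    c->head = (c->head + 1) % QLEN;
    c->count--;
}

/* One cycle: the read channel and the write channel each accept at
 * most one burst, but both can fire in the same cycle -- the
 * "256-bit read plus 256-bit write simultaneously" idea. */
static void arbitrate(Client *cl, int n)
{
    int read_busy = 0, write_busy = 0;
    for (int i = 0; i < n; i++) {
        if (cl[i].count == 0) continue;
        Request *r = &cl[i].q[cl[i].head];    /* peek oldest request */
        if (r->kind == REQ_READ && !read_busy) {
            printf("  read  burst @%03x for %s\n", r->addr, cl[i].name);
            read_busy = 1;
            pop(&cl[i]);
        } else if (r->kind == REQ_WRITE && !write_busy) {
            printf("  write burst @%03x for %s\n", r->addr, cl[i].name);
            write_busy = 1;
            pop(&cl[i]);
        }
    }
}

int main(void)
{
    Client cl[3] = { {"texture"}, {"z/stencil"}, {"display"} };
    push(&cl[0], REQ_READ,  0x100);
    push(&cl[1], REQ_WRITE, 0x200);
    push(&cl[1], REQ_READ,  0x210);
    push(&cl[2], REQ_READ,  0x300);

    for (int cycle = 0; cycle < 3; cycle++) {
        printf("cycle %d:\n", cycle);
        arbitrate(cl, 3);
    }
    return 0;
}
```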

This is sort of a hybrid of the nVidia crossbar idea, except that the actual crossbar is internal and makes use of the larger dual bus to memory as needed.

Of course, since I am guessing, I could be completely wrong. (This type of idea would require speculative caching of read data too, right? Not to mention very good management to ensure that nothing stalls out waiting for data when the read-cache data is out of date.)
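
To illustrate that last worry with a fragment: if read data is cached speculatively, every write has to invalidate any overlapping cached line, or a later read will hand back stale data. The line size and layout here are made up:

```c
/* Sketch of the staleness problem: writes must knock out any
 * speculatively cached line covering the same addresses.
 * LINE_WORDS and the structure layout are invented for illustration. */
#define LINE_WORDS 8

typedef struct {
    unsigned tag;                 /* addr / LINE_WORDS */
    int      valid;
    unsigned data[LINE_WORDS];
} CacheLine;

static void on_write(CacheLine *cache, int nlines, unsigned addr)
{
    unsigned tag = addr / LINE_WORDS;
    for (int i = 0; i < nlines; i++)
        if (cache[i].valid && cache[i].tag == tag)
            cache[i].valid = 0;   /* drop the stale prefetch */
}
```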
 