Parhelia memory interface

Would like your opinion:

  • Does anyone think the 512-bit memory interface offers a significant speed increase over 256-bit? As in, should they have bothered with that wide an interface?
  • Will 20GB/s be enough, taking into account there's neither a crossbar nor Z-optimisations?
  • Looking at the Parhelia block diagram, there seems to be no hardwired DX7 T&L engine. What's your opinion on the vertex engine? We don't seem to have any specs on triangles/sec.
Thanks.
 
About the 20GB/s:

-Crossbar is old technology. My guess is Parhelia has a 2-way memory controller and Nvidia has a 4-way, so the advantage is not that big.

-At 350MHz DDR that means 22.4GB/s of bandwidth, which is a lot of bandwidth (quick sanity check below).

-One side effect of 512 bits with a 2-way memory controller will be lower latency in some cases.
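
A quick sanity check of that 22.4GB/s figure (a rough sketch in Python; the 256-bit external bus width and 350MHz DDR clock are this thread's assumptions, not confirmed Parhelia specs):

bus_width_bits = 256            # assumed external DDR bus width
clock_hz = 350e6                # assumed memory clock
transfers_per_clock = 2         # DDR: two transfers per clock

bytes_per_sec = (bus_width_bits / 8) * clock_hz * transfers_per_clock
print(bytes_per_sec / 1e9)      # -> 22.4 (GB/s)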
 
Tim Sweeney, Epic's chief 3D guru working on next-generation Unreal engine technology, had this to say about it: "We've had our hands on a Parhelia for the past few days, and have Unreal Tournament 2003 up and running in triple-monitor mode -- it's very immersive, and surprisingly fast (rendering at 1280*3 x 1024)." The UT engine has had adjustable FOV for quite some time now, and UT 2003 obviously does too, so when that title ships this summer, it will in all likelihood support Surround Gaming.
It looks like the memory interface is fast enough to handle 1280*3x1024. Can you do that with 128 bits?
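Some rough back-of-the-envelope numbers, just to frame the question (a sketch; the refresh rate, overdraw and per-pixel costs below are illustrative assumptions, not figures from the article):

width, height = 1280 * 3, 1024   # triple-head Surround Gaming resolution
fps = 60                         # assumed frame rate
overdraw = 2.5                   # assumed average overdraw
bytes_per_pixel = 4 + 4 + 4      # colour write + Z read + Z write, 32-bit each

fb_traffic_gb = width * height * fps * overdraw * bytes_per_pixel / 1e9
print(fb_traffic_gb)             # ~7.1 GB/s for the framebuffer alone

Add texture fetches, FAA and the display refresh for three heads on top of that, and a 128-bit bus (roughly half the bandwidth at the same clock) starts to look tight.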
http://www.extremetech.com/article/0,3396,apn=15&s=1017&a=26865&app=13&ap=14,00.asp

Got the link from NVnews.net
 
-Crossbar is old technology. My guess is Parhelia has a 2-way memory controller and Nvidia has a 4-way, so the advantage is not that big.

My hunch is that they're using a setup very similar to their G200 and G400 series. So they probably have an elaborate internal connection to the memory interface and a simple memory interface, which is likely lighter on the traces.
 
Hey Saem, where have you been :)

My hunch is that they're using a setup very similar to their G200 and G400 series. So they probably have an elaborate internal connection to the memory interface and a simple memory interface, which is likely lighter on the traces.
I agree, but maybe something like two 256-bit buses internally (512 bits total) and two 128-bit DDR buses externally. See the Parhelia diagram: it has two connections between the internal bus and the memory controller.
 
Hey Saem, where have you been

School, Mon -> Wed I get skooled from 830 till about 2030. So I'm pretty busy then, I hop on the site every now and then.

I agree, but maybe something like two 256-bit buses internally (512 bits total) and two 128-bit DDR buses externally. See the Parhelia diagram: it has two connections between the internal bus and the memory controller.

After examining the diagram, I think their memory controller is a really complex beast. They probably rely far more on deep buffering to extract as much performance as possible from their bus.

Though I really don't care either way, they should have some occlusion capabilities. I'm guessing this is one of the big reasons why they have a half-assed FSAA (sorry, FAA), besides real estate -- and yes, I realize occlusion circuitry doesn't come free.

Oh well, I'm leaning towards 3D Labs, unless ATI impresses me yet again by putting out cards with more features than you can shake a stick at. ;) Yeah, I know, they have to be present in both hardware and software by way of drivers.
 
Here's a quote from The Tech Report's preview -


TR: Does the Parhelia chip have any provisions for memory bandwidth conservation? If so, which techniques are implemented—Z compression, occlusion culling, fast Z clear? I see it has a "depth acceleration unit for advanced Z processing," but I'm looking for more detail.

DW: The Depth Acceleration unit and Depth Cache deals with the Z-buffer and managing access to the Z-buffer in an efficient way. This area includes logic to perform fast Z clears and also sophisticated logic to queue up Z-reads and Z-writes so that they are always done in burst access. And more generally, while Parhelia-512 has a great deal of raw memory bandwidth, it is an intelligent memory controller whose architecture allows granular access of data and also optimizes the access from the intensity, depth, fragment and texture buffers through multiple independent sub-controllers.
The overall architecture of the entire chip is extremely complex with various optimization techniques. Some topline optimizations are the inclusion of fast Z clears and multiple large caches to hide page breaks and to maximize burst efficiency. If you look on the chip block diagram you will see that the depth unit, Fragment AA unit, pixel unit, texture units and the display units all interact with the 512-bit Memory controller array. Each of these sub-units has specific logic to optimize memory efficiency, and the memory controller array itself then arbitrates between all of the different requests sent by these different units. There are multiple independent controllers in this array and they can access different information simultaneously.

http://www.tech-report.com/etc/2002q2/parhelia/index.x?pg=2
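
The "queue up Z-reads and Z-writes so that they are always done in burst access" part, with multiple independent sub-controllers arbitrated by the controller array, maps onto something like this toy model (purely an illustration of the concept, not Matrox's actual design):

from collections import deque

class SubController:
    # One of several independent sub-controllers (depth, fragment, texture, display).
    def __init__(self, name, burst_len=4):
        self.name = name
        self.burst_len = burst_len
        self.pending = deque()           # queued read/write requests for this buffer

    def enqueue(self, request):
        self.pending.append(request)

    def burst_ready(self):
        return len(self.pending) >= self.burst_len   # only issue whole bursts

    def drain_burst(self):
        return [self.pending.popleft() for _ in range(self.burst_len)]

def arbitrate(subs):
    # The controller array walks the sub-controllers and issues whatever full
    # bursts are queued, so the external bus only sees efficient burst accesses.
    for sub in subs:
        while sub.burst_ready():
            yield sub.name, sub.drain_burst()

subs = [SubController(n) for n in ("depth", "fragment", "texture", "display")]
for i in range(8):
    subs[0].enqueue(("z_read", i))       # Z requests pile up...
subs[2].enqueue(("tex_read", 0))         # ...a lone texture request stays queued

for name, burst in arbitrate(subs):
    print(name, burst)                   # the Z requests go out as two full bursts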
 
School, Mon -> Wed I get skooled from 830 till about 2030. So I'm pretty busy then, I hop on the site every now and then.
Pretty busy you are.
Usually I get home by 10:20pm from Monday to Friday, and sometimes I have meetings on Saturday. I need a vacation, and I will have four days off at the end of the month.

Oh well, I'm leaning towards 3D Labs, unless ATI impresses me yet again by putting out cards with more features than you can shake a stick at. Yeah, I know, they have to be present in both hardware and software by way of drivers.
This Xmas will be great, especially because we will have games with really new technology. I hope my GF3 Ti200 will not disappoint me. I am excited about all this new technology, but when I have to buy something I am very conservative and like cheap things.

Back to the topic: in some ways this brute-force approach is elegant, because it probably has some good reasoning behind it. It does not look like brute force done by a brute mind.
 
What an inferior memory controller design.....

EVERYONE knows it sux because it is not a *Crossbar* controller like Nvidia's :rolleyes:
 
...Each of these sub-units has specific logic to optimize memory efficiency, and the memory controller array itself then arbitrates between all of the different requests sent by these different units. There are multiple independent controllers in this array and they can access different information simultaneously.
Interesting, McElvis, this thing is more complex than I was thinking.
 
Lay off, Hellbinder, pretty please, crossbars are kewl. But Matrox seems to have way dandier memory control than generally assumed here and elsewhere. Why haven't they advertised it more? Good quote and linkage, McElvis!
 
Good point. Why didn't they originally explain the Z-culling and memory controllers? It's not what you say, it's what you don't say that's important.
 
The more I think about it, all I really seem to care about right now is raw texturing power and the bandwidth to support it. I want high res and the rest can go to hell. I'm still of the mind that though the special effects -- pixel shaders and all -- are kewl, the current resolutions at which games are playable make them hard to appreciate. Just my take on things, at this time of day -- Matrox seems to deliver. Then again, after playing a fair number of games that "take advantage" of T&L, I don't feel impressed in the slightest; mind you, most of that is DX7 static transforms.
 
True, 3D hardware is light years ahead of game engines.
The only game promising full per-pixel lighting is Doom3, after almost 4 years :(
Unreal 2 will be good, but not at the same graphics level.
Imagine how much technology/programming will be needed for a true, fully pixel-shaded/programmed DX9/OpenGL 2.0 game engine.
 
The only game promising full per-pixel lighting is Doom3, after almost 4 years.

Yeah, but that's the IHVs' fault; we should have had far more general-purpose GPUs by now. Right now we have zero, until 3DLabs hits the scene.

Unreal 2 will be good, but not at the same graphics level.

Looking at the engine test figures that Anand has been putting out, and at what I've been hearing from a source close to Epic, I have a bad feeling about it. It'll be another Unreal: it'll run like ass on the current generation unless you have the latest and greatest.

Imagine how much technology/programming will be needed for a true, fully pixel-shaded/programmed DX9/OpenGL 2.0 game engine.

For sure, I think this is what will bring HOS (higher-order surface) support to games, besides TruForm. Besides a larger installed base of hardware support, we need to reach a point where artists can't generate the content fast enough and content delivery systems (CDs) start to become claustrophobic. Personally, I think we're already at the CD-cramp phase; I don't like more than one CD, it's annoying.

Definitely. I think that's expected though; I mean, regardless of what people might think, every id engine is a BIG jump from the previous one, IMO.

Your last comment got me thinking: it would require an incredible amount of programming, and think about how much time it'll take to make the content. I think things like this will make dynamic content creation and rendering even more prevalent than they are now.
 
Unreal 2 will be good, but not at the same graphics level.

I will tend to disagree in part. Yes, Doom will offer more capability, but that does not mean it will look better than UT2003 or U2. Why? It really depends on how well those effects are used in the maps the game comes with. No offense, but the maps in Q3 were not as good as the ones in UT. They were too bland, with the same style of textures used over and over again. Thus even though Q3 had a "better" engine, a majority of people enjoyed the variety that UT offered vs. the same style repeated in the standard Q3 maps. Again, not trying to nitpick, as I know Q3 was more advanced in many areas. Just saying it will be up to the designers to make it look better than UT2003. I think UT2003 is really going to provide a nice leap from what we have today. As a member of a UT mod team I cannot wait to play around with it...
 
Humpf. I liked Unreal Tournament, but I loved Unreal. I won't judge anything by UT2 -- it will be Unreal 2 that decides for me. (And don't try to bring that one sorry ass Irish band into this.)

Saem, what? Do you want a game that surpasses anything your hardware can do, or do you want a game that your hardware can do at 60 fps? You don't make that clear.
 
jb:

I understand, but I am talking about the engine, not the game.

The original Unreal engine was at the same level of graphics technology as Quake3, and sometimes more advanced (remember the detail textures and compressed textures). But this time id's decision (probably a hard one) was to go everything DOT3, and it will pay off well.

Unreal 2 will have a lot of polys with only 2 textures. Yeah, maybe good textures, but the graphics level will not be the same.

Maybe the next Unreal Warfare.

Saem:
Your last comment got me thinking: it would require an incredible amount of programming, and think about how much time it'll take to make the content. I think things like this will make dynamic content creation and rendering even more prevalent than they are now.

Let's say id decides to do everything with programmable pixel shaders (DX9 level) and many multiple passes for the next engine; I don't see it done in less than 3 years (end of 2005).

On one side you have large companies with hundreds of hardware and software developers (the 3D chip companies); on the other side you have these small teams of 4 or 5 programmers developing a new game engine. This thing is getting out of proportion.
 