ATI's Technology Marketing Manager @ TechReport

Lots of great info for ATI.

Summary:
1. Catalyst drivers will be updated monthly.
2. Support for FP32 requires 33% more transistors.
3. No F-buffer in SM2.0b (DirectX 9).
4. Shaders in the Ruby demo's range are what you can expect to see running at decent frame rates in real time on an X800.
5. SM3.0 is of little use (in ATI's view).
6. Temporal AA will be exposed in future drivers for the older Radeons.
7. The adaptive trilinear filtering on the X800 gives the same or better IQ than R3xx full trilinear filtering.
8. ATI is satisfied with its OpenGL performance.
 
On OGL I would disagree, and on the F-buffer, it's not exposed... Otherwise, I need to read what he said in more detail, but he tried to show the advantages of the X800s, which is normal.
 
I found this interesting
As far as I know, there's no purpose that's served by trilinear filtering beyond eliminating the particular artifact that results from bilinear filtering where you have a boundary between mip map levels.

So basically that is saying brilinear is fine.
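
For illustration only, here's a minimal CPU-side sketch of the artifact in question (the toy mip chain and function names are mine, not from the interview): bilinear snaps to a single mip level, so the sampled value jumps where the rounded level changes; trilinear blends the two adjacent levels, and the jump disappears.

#include <cmath>
#include <cstdio>

// Toy 1-D "mip chain": each level returns a constant shade for illustration.
static float sample_level(int level) { return 1.0f / (1 << level); }

// Bilinear-style mip selection: snap to the nearest level -> a visible
// step wherever the rounded level changes (the mip boundary artifact).
float filter_bilinear(float lod) { return sample_level((int)std::round(lod)); }

// Trilinear: blend the two adjacent levels by the fractional LOD,
// which removes the step at the mip boundary.
float filter_trilinear(float lod) {
    int lo = (int)std::floor(lod);
    float f = lod - lo;
    return sample_level(lo) * (1.0f - f) + sample_level(lo + 1) * f;
}

int main() {
    for (float lod = 0.0f; lod <= 2.0f; lod += 0.25f)
        std::printf("lod %.2f  bilinear %.3f  trilinear %.3f\n",
                    lod, filter_bilinear(lod), filter_trilinear(lod));
}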
 
At the bottom of page 5 it implies that the R300 derived chips also support two pixels per clock provided that MSAA is enabled. Does this mean that the R3x0 based chips can do a maximum of 16 Z/Stencil ops per clock (even when color writes are enabled) when AA is turned on? Is this a correct assumption?
 
bloodbob said:
I found this interesting
As far as I know, there's no purpose that's served by trilinear filtering beyond eliminating the particular artifact that results from bilinear filtering where you have a boundary between mip map levels.

So basically that is saying brilinear is fine.
Sorry? That's not how I understand this quote.
 
CJ said:
At the bottom of page 5 it implies that the R300 derived chips also support two pixels per clock provided that MSAA is enabled. Does this mean that the R3x0 based chips can do a maximum of 16 Z/Stencil ops per clock (even when color writes are enabled) when AA is turned on? Is this a correct assumption?
Two samples per clock. Not two pixels. But otherwise, it's a correct assumption.



One advantage, I think, that we have in this capability over the GeForce 6800 series is that we can expose this capability even when color writes are enabled, so it's not limited to situations where you're doing Z-only or stencil-only reads and writes.
This part is clearly misleading. GeForce 6800 can also output 32 samples per clock with color data. But only 32 pixels without color data.
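
To keep the pixels-versus-samples arithmetic straight, here's the back-of-the-envelope math as I read the thread (the pipe counts are the public specs; the tabulation is my own, so correct me if it's off):

#include <cstdio>

// Rough per-clock arithmetic as discussed above. Pipe counts are the
// public specs; the 2x sample/Z rate is the capability under discussion.
int main() {
    const int r3x0_pipes = 8, r420_pipes = 16, nv40_pipes = 16;

    // With MSAA, each pipe can emit 2 samples per clock:
    std::printf("R3x0: %d pipes x 2 samples = %d samples/clock\n",
                r3x0_pipes, r3x0_pipes * 2);
    std::printf("R420: %d pipes x 2 samples = %d samples/clock\n",
                r420_pipes, r420_pipes * 2);
    std::printf("NV40: %d pipes x 2 Z/stencil = %d ops/clock\n",
                nv40_pipes, nv40_pipes * 2);

    // Distinct *pixels* written per clock is still bounded by pipe count:
    std::printf("Pixels/clock on a 16-pipe part: %d\n", nv40_pipes);
}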
 
This part was pretty interesting:

TR: We've heard that ATI started work on a chip code-named R400, and then decided to change direction to develop the R420. Why the mid-course correction, and what will become of the remains of the R400 project?

Nalasco: When we generate our roadmaps, we're always looking multiple years ahead, and, you know, circumstances obviously are going to change over that course of time. If you look at the development cycle for a new architecture, you're talking in the vicinity of a couple of years. One of the things that happened in our case is that we had these additional design wins or partnerships that we've developed with Nintendo and Microsoft, and that obviously requires some re-thinking of how the resources in the company are allocated to address that. So I think that's what you're really kind of seeing is that we had to make sure that we were able to continue with the roadmap that we had promised to keep producing for our desktop chips while also meeting these new demands, and we're confident that we're going to be able to do that.
 
Evildeus said:
On OGL I would disagree, and on the F-buffer, it's not exposed... Otherwise, I need to read what he said in more detail, but he tried to show the advantages of the X800s, which is normal.

I got the impression here:

In OpenGL, it's easy for us to address that, because in OpenGL, we basically write the compiler for GLSL, which breaks it down into our hardware-level shading language. That's one of the reasons it's targeted at OpenGL.

The other thing is that the F-buffer is really of most benefit when you're running very long shaders. Shorter shaders shouldn't have to multipass on this hardware. These long shaders generally don't run in real time, and are therefore much more applicable to a workstation or digital content creation type market, where real time is not as big of a concern as just getting a very good quality image.

that it's available, or will be available, in the GLSL compiler.
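
The way I picture it (purely illustrative; nothing here is ATI's actual compiler): the GLSL compiler can transparently chop a too-long shader into hardware-sized passes, and the F-buffer is what holds each fragment's intermediate values between passes. Something like:

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative only: how a driver compiler might multipass a long
// shader. "Instruction" and the per-pass limit are stand-ins, not
// ATI's real internals.
struct Instruction { int opcode; };

constexpr std::size_t kMaxInstrsPerPass = 160;  // hypothetical hardware limit

// Split one long program into hardware-sized passes. Between passes,
// each fragment's live intermediate values would be spilled to the
// F-buffer (a FIFO of in-flight fragment state) and reloaded by the
// next pass; a real compiler would split where the live set fits.
std::vector<std::vector<Instruction>> split_into_passes(
        const std::vector<Instruction>& program) {
    std::vector<std::vector<Instruction>> passes;
    for (std::size_t i = 0; i < program.size(); i += kMaxInstrsPerPass) {
        std::size_t end = std::min(i + kMaxInstrsPerPass, program.size());
        passes.emplace_back(program.begin() + i, program.begin() + end);
    }
    return passes;
}

int main() {
    std::vector<Instruction> longShader(1000, Instruction{0});
    std::printf("1000-instruction shader -> %zu passes\n",
                split_into_passes(longShader).size());
}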
 
bloodbob said:
So basically that is saying brilinear is fine.
No. "Brilinear", à la Nvidia, is applied across the board, regardless of whether it causes visible boundaries or not. And it does.

Not the same as a dynamic, adaptive algorithm.
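
The distinction in toy form, as I understand it (the thresholds are completely made up; neither vendor has published theirs): brilinear narrows the blend band by a fixed amount everywhere, while an adaptive scheme widens it back to full trilinear wherever the adjacent mip levels actually differ enough to show a seam.

#include <algorithm>
#include <cstdio>

// Toy contrast between a fixed "brilinear" band and an adaptive one.
// The band is the fraction of the LOD range over which two mip levels
// are blended (1.0 = full trilinear, 0.0 = pure bilinear). All numbers
// here are invented for illustration.

float brilinear_band(float /*mip_difference*/) {
    return 0.5f;  // fixed narrowed band, applied regardless of content
}

float adaptive_band(float mip_difference) {
    // If adjacent mip levels are nearly identical, a narrow band is
    // invisible; if they differ a lot, widen back to full trilinear.
    return std::clamp(mip_difference * 4.0f, 0.25f, 1.0f);
}

int main() {
    for (float d : {0.0f, 0.1f, 0.3f})
        std::printf("mip diff %.1f: brilinear band %.2f, adaptive band %.2f\n",
                    d, brilinear_band(d), adaptive_band(d));
}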
 
Well, since Wavey has told us they are rewriting their OGL drivers from the ground up, it would seem they can't be too satisfied.
 
Xmas said:
This part is clearly misleading. GeForce 6800 can also output 32 samples per clock with color data. But only 32 pixels without color data.

I still don't think it can do anything other than 32 Z or stencil *ops* per clock, while still having a maximum of 16 *pixels* per clock rendered to screen. As far as pixels themselves go--pixels rendered to screen--black & white pixels are as much "color" pixels as any shade of red, green, or blue. To get a maximum of 32 pixels per clock rendered to screen you'd need 32 pixel pipelines operating in parallel. But you've only got 16. Ops are not pixels--huge difference--you can't see an "op" on screen, as pixels are the smallest rendered screen elements there are.
 
madshi said:
bloodbob said:
I found this interesting
As far as I know, there's no purpose that's served by trilinear filtering beyond eliminating the particular artifact that results from bilinear filtering where you have a boundary between mip map levels.

So basically that is saying brilinear is fine.
Sorry? That's not how I understand this quote.

Brilinear removes the "boundary between mip map levels" artifact.

So therefore trilinear serves no greater purpose than brilinear.
 
Bolloxoid said:
bloodbob said:
So basically that is saying brilinear is fine.
No. "Brilinear", à la Nvidia, is applied across the board, regardless of whether it causes visible boundaries or not. And it does.

Not the same as a dynamic, adaptive algorithm.

Your point is? What's that got to do with the time of day or my quote?
Okay, I'll rephrase: "So basically that is saying brilinear is fine as a trilinear replacement."

I think you'll find that Nvidia's doesn't create visible boundaries as such; it creates a uniform LOD band, but that same thing happens with bilinear away from the mipmap transition.
 
geo said:
Well, since Wavey has told us they are rewriting their OGL drivers from the ground up it would seem they can't be too satisfied.

Actually they are satisfied, currently. The X800 can handle all current OGL games at perfectly reasonable framerates. The 6800 may run them faster, but 180 fps in Call of Duty doesn't really matter if the X800 can run it at 150 fps. As was mentioned in the interview, Doom 3 will be the first truly demanding OGL game to come out in a while, and I'm betting the driver set with the updated OGL code will soon follow once the game is released (if not before or at the same time).
 
Evildeus said:
That will be a wait & see, as it has been for a long time with ATI & OGL :(

ATI still might work something out with GLSlang support. They have some supposedly talented guys working on the compiler, and if they can get it to work well it will be great. The thing is, the dependent texture reads might end up killing the performance, as they can only handle 4 per pass by the looks of the DX9 2.0b shader specs.
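
For anyone who wants a concrete picture of a dependent texture read (this is just a CPU-side toy, not shader code): the result of one fetch becomes the coordinate of the next, and per the 2.0b spec limit mentioned above, that chain can only go so deep in a single pass.

#include <cstdio>

// Stand-in for a texture fetch: index a 1-D table, return a value.
float fetch(const float* tex, int size, float coord) {
    int i = (int)(coord * (size - 1));
    if (i < 0) i = 0;
    if (i >= size) i = size - 1;
    return tex[i];
}

int main() {
    float indirection[4] = {0.9f, 0.1f, 0.5f, 0.3f};
    float colors[4]      = {0.0f, 0.25f, 0.5f, 1.0f};

    // Dependent read: the first fetch's *result* becomes the second
    // fetch's *coordinate*. The post above says ps_2_b tops out at
    // four such dependency levels per pass; deeper chains would have
    // to multipass.
    float uv    = fetch(indirection, 4, 0.2f);  // level 1
    float shade = fetch(colors, 4, uv);         // level 2 (dependent)
    std::printf("shade = %.2f\n", shade);
}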
 