Interview with Epic's Tim Sweeney

Nvidia has never allowed its demos to run on competitors' hardware. And as far as I know, ATI hasn't since the X800 came out. R300 demos still work. Unless there's a workaround that I'm unaware of.

Chris
 
Well, except for Cascades, AFAIR all nV demos have been OpenGL with proprietary extensions, so it makes sense for them not to run on other HW. But Cascades is DX10, so the limitation in this case is wholly...umm...unwarranted :) I'd have loved to mess with it on a 2900.
 

Wasn't there some chameleon demo, with benchmark built in too, that worked on ATI (and others?) cards too?
 
I wish they had asked about the performance of DX10 vs. DX9; this is still an open question, I think. I.e., is UE3 in DX10 faster than in DX9? And if not, is that just a driver optimization issue, or is there something more to it?
 
MrBelmontvedere, the fourth reply in tEd's post indicates higher rendering efficiency and thus higher framerates: "DX10 has a great impact on performance."
 

Epic said:
Unreal Tournament 3 will ship with full DX10 support, with multi-sampling being the biggest visible benefit of the new graphics interface. Additionally, with DX10 under Vista we have the possibility to use video memory more efficiently, and so to display textures at a higher level of detail than would be possible with the DX9 path under Vista. Most effects in UT3 are bound more by fillrate than by basic features like geometry processing. That's why DX10 has a great impact on performance, while we mostly forgo the integration of new features.
 
That sentence was kind of confusing, regarding the "higher level of detail than would be possible with the DX9 path under Vista". Is he referring to the newer driver model by mentioning Vista? Or is he saying that DX10 gives more detail than DX9 on Vista, where mentioning Vista is just for an apples-to-apples comparison?
 
What I don't understand is why this accumulation buffer can't be the regular hardware-supported MSAA buffer.

This is simple. I'm going to over-simplify it, because if I try too hard I'll say something stupid. So I'm going to make it sound really stupid anyway.

Look at it this way: MSAA makes lines look pretty. The accumulation buffer doesn't actually know where the lines (triangle edges) are, though; it only sees texels from the G-buffer.

That's just my understanding. I still can't sample my multisampled VSM, though, and I imagine this is much harder.

Wow, MSAA will really smack UT3's performance if it has to multisample each pass of the G-buffer. Good thing G80 is really good at this.
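To put some rough numbers on that (every figure below is a guess for illustration; this is not UT3's actual G-buffer layout or resolution):

Code:
// Back-of-envelope G-buffer size with and without 4x MSAA.
// All numbers are assumptions, purely for illustration.
#include <cstdio>

int main() {
    const long long width = 1600, height = 1200; // assumed resolution
    const long long targets = 3;                 // assumed number of MRTs in the G-buffer
    const long long bytesPerTexel = 8;           // e.g. RGBA16F per target
    const long long samples = 4;                 // 4x MSAA

    long long plain = width * height * targets * bytesPerTexel;
    long long msaa  = plain * samples;

    printf("G-buffer, no MSAA : %.1f MB\n", plain / (1024.0 * 1024.0));
    printf("G-buffer, 4x MSAA : %.1f MB\n", msaa  / (1024.0 * 1024.0));
    return 0;
}

That works out to roughly 44 MB without MSAA versus about 176 MB with 4x, and every one of those samples has to be written and later read back in the lighting pass, so the bandwidth cost scales the same way.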
 
Lol, thanks for the attempt jbizzler, but I hope my understanding of hardware AA is a tad above the "makes lines pretty" level. My earlier confusion stemmed from not realizing that all subpixel geometry information is lost during the initial render to a non-MSAA (DX9) G-buffer.
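For anyone else following along, here is roughly what the DX10 path buys you. This is a sketch from memory, not UE3's actual code; the format, names and sample count are assumptions:

Code:
// Under D3D10 a multisampled render target can also be bound as a shader
// resource, so a later pass can read each sub-sample individually instead of
// losing them to a resolve. D3D9 has no equivalent: an MSAA surface can only
// be resolved (StretchRect) down to one sample per pixel before it is read.
#include <d3d10.h>

ID3D10Texture2D* CreateMsaaGBufferTarget(ID3D10Device* device, UINT width, UINT height)
{
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;                              // multisampled textures have a single mip
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // assumed G-buffer format
    desc.SampleDesc.Count = 4;                              // 4x MSAA
    desc.Usage            = D3D10_USAGE_DEFAULT;
    desc.BindFlags        = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

    ID3D10Texture2D* texture = NULL;
    device->CreateTexture2D(&desc, NULL, &texture);
    return texture;
}

// The lighting pass HLSL would then declare something like
//     Texture2DMS<float4, 4> GBufferNormal;
// and fetch a specific sub-sample with
//     GBufferNormal.Load(pixelCoord, sampleIndex);
// which is exactly the per-sample information a non-MSAA DX9 G-buffer has already thrown away.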
 
Hey guys - thanks for all the attention.

To clear up any double-translation issues, we decided to publish the whole interview in English on our website. You can find it in a few minutes at:
http://www.pcgameshardware.de/?article_id=602522

Please be aware that the "leaked" interview was done in English, had to be translated for our German-language magazine and website, and was then translated back into English in yesterday's copy.
 
If the below translates to most UE3 games as well, it might be a rough couple of months for AMD in the GPU benchmarking wars.

PCGH said:
What is your experience with Nvidia's and Ati's next generation graphics hardware? Could you already make a statement which card will be better for UT 3, the 8800 GTX or the Radeon 2900 XTX?

Tim Sweeney said:
The relative performance scores between NVidia's and ATI's best cards vary from day to day as we implement new optimizations. But, for the past year, NVidia hardware has been ahead fairly consistently, and a few months ago we standardized on Dell XPS machines with GeForce 8800 GTX's for all of our development machines at Epic.
 
It was also the best card available at the time (and still is), and you are right, ATI can't do anything about this ;)

What you write sounds like a good reason, but when the R300 was released, TWIMTBP developers didn't choose the R300 route; they chose to slow down game development, which is why we saw no real DX9 games for years.

They could do something by starting the same kind of money-hungry program, oops, I forgot, AMD has no money left. Then the only hope left is their own driver team (and the R650, if it's not just a shrink), but driver optimization needs time, so it's a lose-lose situation.
 

You have to be kidding me...this cannot be true...not on this board-COME ON!
 