Talking nasty with Tim Sweeney

CosmoKramer said:
Yes of course SS is costly, primarily in fill-rate (including shader fill-rate) but it gets the job done while reducing all types of aliasing mentioned in this thread.
No, it doesn't. Supersampling doesn't even solve the aliasing problem with alpha textures as well as changing to an alpha blending algorithm (though this does require depth-sorting all alpha-blended surfaces).
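
To make the depth-sorting caveat concrete, here's a minimal C++ sketch (the Surface struct and viewDepth field are illustrative names, not from any particular engine): alpha-blended surfaces get sorted back-to-front before submission, because blending is order-dependent in a way that opaque, Z-buffered geometry is not.

    #include <algorithm>
    #include <vector>

    struct Surface {
        float viewDepth;   // distance from the camera along the view axis
        // ... vertex data, texture handles, etc.
    };

    // Opaque surfaces can go in any order (the Z-buffer resolves them),
    // but alpha-blended surfaces must be composited back-to-front.
    void drawTransparent(std::vector<Surface>& surfaces) {
        std::sort(surfaces.begin(), surfaces.end(),
                  [](const Surface& a, const Surface& b) {
                      return a.viewDepth > b.viewDepth;  // farthest first
                  });
        for (const Surface& s : surfaces) {
            (void)s;  // submit s here with blending enabled and depth writes off
        }
    }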

Put simply, supersampling is a dumb, brute force method. There are pretty much always going to be more efficient methods to antialias surfaces than supersampling.

Now, it would be nice to allow a per-surface supersampling/multisampling toggle, as this would be much less performance-intensive if a game doesn't have too many surfaces that need the extra anti-aliasing to look good. This would probably be the best solution for most shaders that, like alpha textures, would require depth sorting for antialiasing in the shader.

But global supersampling? No, we definitely don't want that.
 
Chalnoth said:
But global supersampling? No, we definitely don't want that.

Who are "we"? Nvidia and ATi? Sure, I know they want to cheat as much as possible in order to look fast, but the fact is that SS looks much better than MS. And it doesn't have all the annoying exceptions and limitations that MS has. It just works.

Say what you want, but the only antialiasing that has actually shipped in a consumer 3D card and that reduces aliasing in alpha textures is SS.

You see, with MS you really need a high resolution to begin with in order not to puke over all the aliasing within alpha textures. But using very high resolutions negates the purpose of anti-aliasing in the first place! With SS, even 800x600 and 4x RGSS looked good. Not so with MS.

And last but not least - aliasing within shaders. This is also reduced by SS.

Supersampling? We definitely want that.
 
CosmoKramer said:
Who are "we"? Nvidia and ATi? Sure, I know they want to cheat as much as possible in order to look fast, but the fact is that SS looks much better than MS. And it doesn't have all the annoying exceptions and limitations that MS has. It just works.
No, by we, I mean people who play games. Supersampling will always lower performance much more for the increase in visual quality than multisampling + antialiasing in the shader.

Say what you want, but the only antialiasing that has actually shipped in a consumer 3D card and that reduces aliasing in alpha textures is SS.
Actually, I made a very slightly modified renderer for the original Unreal Tournament that basically changed the alpha test to an alpha blend. Due to the way UT renders, this just worked automatically for pretty much every in-game case. And it dealt with the aliasing much better than supersampling.
 
Chalnoth said:
No, by we, I mean people who play games. Supersampling will always lower performance much more for the increase in visual quality than multisampling + antialiasing in the shader.

That's nice and all, but for those (not so few) games that don't work with MS or that don't perform AA in the shader code (= nearly all of them), it would be nice to have the option of SS. It doesn't have to be an either/or situation.

Actually, I made a very slightly modified renderer for the original Unreal Tournament that basically changed the alpha test to an alpha blend. Due to the way UT renders, this just worked automatically for pretty much every in-game case. And it dealt with the aliasing much better than supersampling.

I'm sure you are very, very smart, but unfortunately most developers don't pay the same attention to this problem. Of course, the optimal solution would be if developers actually started seriously using the geometry units all the way and stopped using alpha textures altogether.
 
CosmoKramer said:
I'm sure you are very, very smart, but unfortunately most developers don't pay the same attention to this problem. Of course, the optimal solution would be if developers actually started seriously using the geometry units all the way and stopped using alpha textures altogether.
Heh. It required changing like three lines of code (alpha test off, alpha blend on, set the alpha blend function). And going all geometry doesn't fix the problem, either. Once geometry gets too small on the screen for your favorite antialiasing algorithm, you get significant aliasing. City of Heroes was the most recent game I've played that really pushes the limits of antialiasing algorithms.
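
In OpenGL terms, the change would look something along these lines; this is a hedged sketch of the idea, not the actual UT renderer source.

    #include <GL/gl.h>   // legacy fixed-function OpenGL

    // A sketch of the described tweak for masked (alpha-tested) textures,
    // called wherever the renderer previously set up alpha testing.
    void setupMaskedTextureState()
    {
        // Before: glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5f);
        glDisable(GL_ALPHA_TEST);                           // alpha test off
        glEnable(GL_BLEND);                                 // alpha blend on
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // set the blend function
    }

The catch is the one noted earlier in the thread: without the alpha test, blended surfaces have to be drawn back-to-front to composite correctly, which UT's rendering order apparently already handled.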

You're right, though, it would be nice to have the option for supersampling, for old games. For new games the performance hit will pretty much never be worth it. That's a much better stance to take than the one you implied in your first post here. For the most part, multisampling is a much better antialiasing implementation for the reason that it has much better performance characteristics, and typically has comparable image quality to supersampling, particularly when anisotropic filtering is used.

This is becoming less and less the case as more complex shaders are used, but it's really up to game developers to start picking up the slack here. But, I suppose IHV's could aid game developers to some extent for antialiasing within the shader. One possible option would be to designate a block of code in the shader as being executed per FSAA sample.
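
For what it's worth, something in this spirit did show up later: ARB_sample_shading (OpenGL 4.0-era, so not anything the IHVs offered at the time of this thread) lets an application force the fragment shader to run once per FSAA sample rather than once per pixel, although it applies to the whole shader rather than a designated block of it.

    // Requires an OpenGL 4.0 / ARB_sample_shading context, obtained via a
    // loader such as GLEW or glad.
    void enablePerSampleShading()
    {
        glEnable(GL_SAMPLE_SHADING);   // run the fragment shader per sample
        glMinSampleShading(1.0f);      // 1.0 = every covered sample gets its own invocation
    }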
 
geo said:
Yikes! Life without AA would be pretty ugly. Somebody fix it, quick!

Here's an interesting back-down.

In an interview in 1999 (http://archive.gamespy.com/legacy/interviews/sweeney.shtm), I predicted that CPU's and 3D accelerators were on a collision course that would be apparent by 2007.

But that was before programmable shaders, high-level shading languages, and floating-point pixel processing existed! So, I don't think many people would take that prediction seriously today. But from time to time, developers do need to evaluate the question of whether to implement a given algorithm on either a CPU or GPU. Because as GPU's increase in generality, they are more capable of doing things beyond ordinary texture mapping, while CPU's have unparalleled performance for random-access, branch-intensive operations.

Btw, could someone explain to me why greater programmability, robust programming tools, and floating-point processing for GPUs make this *less* likely? The intuitive conclusion would be just the opposite --that these things are proof of the convergence.

Not that I ever thought there would be a collision --it always seemed to me that special purpose hardware would always be better/quicker at what it does than general purpose hardware, and a design that lets me throw more hardware resources at a problem (i.e. CPU *and* GPU) will give better results. Maybe waaaaay down the road, after we've reached some nirvana of near-perfect image quality with the combination, and CPUs continue to get orders of magnitude more powerful beyond that point, you could think of something like this. But it doesn't seem to me that we are all that close to that point yet.

So I respect and believe he's changed his mind --I just don't understand how the evidence he presents for why he changed his mind would lead him to do so. . .
 
Daliden said:
Which anti-aliasing system was it, again, that blurs on-screen text in MMORPGs, for example?
I believe any SSAA method can do this without proper support (or am I just thinking of the Voodoo5's AA?), but you're probably thinking of nVidia's qrazy Quincunx.
 
geo said:
Btw, could someone explain to me why greater programmability, robust programming tools, and floating-point processing for GPUs make this *less* likely? The intuitive conclusion would be just the opposite --that these things are proof of the convergence.
Because GPU's are designed for parallel processing, where you run the exact same program multiple times on independent elements whose only differences are their initial conditions.

CPU's are designed for serial processing, and are much better-suited to data that requires lots of branching and has a high degree of interdependence.
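
A toy C++ illustration of the split (names are made up for the example): the first loop is the GPU-shaped workload, where every element is independent and runs the same program, while the second carries a serial, branchy dependence from one iteration to the next, which is the kind of work CPUs are built for.

    #include <cstddef>
    #include <vector>

    // GPU-shaped: every iteration is independent, identical work.
    // A GPU runs this kind of "shader" on many elements in parallel.
    void shadeAll(std::vector<float>& pixels) {
        for (std::size_t i = 0; i < pixels.size(); ++i)
            pixels[i] = pixels[i] * 0.5f + 0.25f;   // same program, different data
    }

    // CPU-shaped: each step depends on the previous result and branches on it,
    // so the work is inherently serial and cannot simply be spread across
    // thousands of identical units.
    float accumulate(const std::vector<float>& values) {
        float state = 0.0f;
        for (std::size_t i = 0; i < values.size(); ++i)
            state = (state > 1.0f) ? state * 0.5f : state + values[i];
        return state;
    }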
 
Chalnoth said:
geo said:
Btw, could someone explain to me why greater programmability, robust programming tools, and floating-point processing for GPUs make this *less* likely? The intuitive conclusion would be just the opposite --that these things are proof of the convergence.
Because GPU's are designed for parallel processing, where you run the exact same program multiple times on independent elements whose only differences are their initial conditions.

CPU's are designed for serial processing, and are much better-suited to data that requires lots of branching and has a high degree of interdependence.

Either you answered the question I didn't ask (which might have been, "What would have been a better answer for Tim to give on why he's changed his mind?") or I'm too thick to get why your answer answers my question. But thanks for trying. . . :)
 
Well, I'll try to answer it a bit more simply: CPU's and GPU's aren't going to converge because they're designed for different tasks.
 
Reverend said:
Humus said:
I don't get the reasoning. Just because edge aliasing isn't the only kind of aliasing, does that make multisampling useless?
Tim said multisampling is useless?

Well, at least he's reasoning quite backwards. Multisampling is still doing its job just fine, but if he's having internal aliasing, it's his app that's not doing its job.
Edge antialiasing is a problem that's suitable for IHVs to solve. Internal aliasing is something for the ISV to solve. It's that simple.
 
CosmoKramer said:
Yes of course SS is costly, primarily in fill-rate (including shader fill-rate) but it gets the job done while reducing all types of aliasing mentioned in this thread.

The problem is that supersampling does NOT get the job done. Multisampling does exactly what it's supposed to do, no more, no less. Supersampling either does a helluva lot more than it needs to do (which sometimes leads to visual flaws such as blurry text) or it does too little, leaving plenty of aliasing in place. Take, for instance, the problem of sparkling per-pixel specular with normal maps. Applying supersampling will not solve it. Yes, the artifacts will be pushed a tiny bit further away, but you'd probably need something like 64x supersampling or even more to make them really go away.

Supersampling simply isn't a general solution. There is no general, situation-independent solution for internal aliasing at all, unlike edge aliasing, which can be handled gracefully and automatically. The only solution in the general case is for the developer, who knows all the facts about the shader's behavior, to deal himself with any aliasing it may create.
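
As one concrete (and hedged) example of what such a developer-side fix can look like for the sparkling-specular case: a Toksvig-style adjustment lowers the specular exponent wherever the mipmapped normal map averages down to a shortened normal, i.e. wherever many differently oriented normals fall under one pixel. The function below is an illustrative C++ transcription of the shader math, not something proposed in this thread.

    #include <algorithm>

    // normalLen is the length of the filtered, un-normalized normal fetched
    // from a mipmapped normal map; it drops below 1 where normals vary a lot.
    // The shorter the normal, the more the highlight is broadened to hide sparkle.
    float antialiasedSpecularPower(float specPower, float normalLen)
    {
        normalLen = std::min(normalLen, 1.0f);
        float variance = (1.0f - normalLen) / std::max(normalLen, 1e-4f);
        return specPower / (1.0f + specPower * variance);  // Toksvig-style falloff
    }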
 
CosmoKramer said:
Say what you want, but the only antialiasing that has actually shipped in a consumer 3D card and that reduces aliasing in alpha textures is SS.

The hack here is the use of alpha testing for transparency, not the use of multisampling.
 
Chalnoth said:
Well, I'll try to answer it a bit more simply: CPU's and GPU's aren't going to converge because they're designed for different tasks.

Well, yes, I got that. I said that in my opinion special purpose hardware would be better/quicker at what it does than general purpose hardware.

So let me try one more time at what I'm pointing at before amiably agreeing to miss each other.

Sweeney offers as a reason he changed his mind the development of high-level shading languages.

Well, in my view the development of high-level shading languages is --in the long run-- an enabler of convergence rather than a disabler. That's what high-level languages do that low-level languages can't. The more distance you put between the programmer and the back-end, the more backend agnostic both the programmer and the application become.

Now, you might say that an HLSL backend that generates x86 code would have atrocious performance (compared to NV/ATI's current pride-and-joys) right now. And I would agree. Nevertheless, in the *long run*, high-level languages make convergence *more* likely, not *less* likely. So it is a puzzlement to me that Sweeney offers it as a reason to back away from his previous stance.
 
Except it doesn't matter, because HLSL's are nothing like the languages designed for CPU's. They may share similar syntax, but that's where the similarities end. The programming paradigm is utterly different.

I believe the reason he felt CPU's and GPU's would converge so long ago was that he felt that at some point in the future, CPU's would be fast enough that people would feel that the limitations of specialized hardware would be more debilitating than the speed drop from moving to a CPU.
 
Well, at least I got it that time. :LOL: At this point the old COBOL guy will retire from the field, muttering about newfangled stuff. ;)
 
Chalnoth said:
Except it doesn't matter, because HLSL's are nothing like the languages designed for CPU's. They may share similar syntax, but that's where the similarities end. The programming paradigm is utterly different.

I believe the reason he felt CPU's and GPU's would converge so long ago was that he felt that at some point in the future, CPU's would be fast enough that people would feel that the limitations of specialized hardware would be more debilitating than the speed drop from moving to a CPU.

At the same time Sweeney is saying that GPU programming languages need to become general purpose (Turing complete).

Personally I think he's off his head.

Jawed
 
Jawed said:
At the same time Sweeney is saying that GPU programming languages need to become general purpose (Turing complete).
It doesn't matter how general purpose GPU's become: they're still designed with a stream programming paradigm in mind. CPU's aren't.
 
Chalnoth said:
Jawed said:
At the same time Sweeney is saying that GPU programming languages need to become general purpose (Turing complete).
It doesn't matter how general purpose GPU's become: they're still designed with a stream programming paradigm in mind. CPU's aren't.

That's a problem for the guy who writes the compiler. And if he has enough multi-core backend brute force available to him, at some point it disappears as a problem. We're just not that close to that theoretical compiler writer having that kind of brute force available to him. But 20 years from now? Mebbee. In my view it depends on when we start to plateau on new features on the GPU side. . .some years after that the brute force might be there on the CPU side.
 