nVidia going for 16 sample FSAA?

Well, hopefully after NVIDIA perfects their 16-sample AA, they might actually then focus on delivering trilinear in games like UT2003 and others. :)
 
Sharkfood said:
Well, hopefully after NVIDIA perfects their 16-sample AA, they might actually then focus on delivering trilinear in games like UT2003 and others. :)

320x240 will look great with 16xAA and 8xAF. ;)
 
DiGuru said:
akira888 said:
However, over the years I've come to realize that there are hundreds of men at each of the firms in the industry who have had years of training in this field and do this all day long. Whatever idea I could ever have is sure to have been either thought out already or (more likely) dismissed outright already. But that's life for you... :cry:

Don't think like that! Those trained and hired people work under different constraints than you. All great things started out as 'some whacky idea I had'!
Yes, definitely don't let it worry you. For example, I once came up with a 'great' new way of doing AA which worked wonderfully well..... except for one little case where it all just fell to pieces <sigh>

Sage said:
what we REALLY need is 32x schotastic (sp?) sampling.
It's "Stochastic."

Why 32x? 16x with a jittered grid looks pretty damned good - almost indistinguishable from 10k samples per pixel. (There's a simple example image on my home page).
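For anyone wondering what a "jittered grid" actually is: here's a minimal Python sketch (purely illustrative, no claim that any hardware builds its pattern this way) of a 4x4 jittered grid, i.e. one randomly offset sample per grid cell, 16 per pixel:

import random

def jittered_grid(n, rng):
    # One sample per cell of an n x n grid, each randomly offset inside its cell.
    cell = 1.0 / n
    return [((i + rng.random()) * cell, (j + rng.random()) * cell)
            for j in range(n) for i in range(n)]

samples = jittered_grid(4, random.Random(0))  # 16 (x, y) offsets within the pixel

The point of the jitter is that it trades the regular grid's structured aliasing for noise, which the eye is far more forgiving of.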
 
Well, I once tried Motocross Madness 2 at 4x4 supersampling on my GeForce SDR 32MB @ 512x384x16 :LOL:


It was quite playable, actually.
 
pcchen said:
K.I.L.E.R said:
320x240 will look great with 16xAA and 8xAF. ;)

On your PDA or cellphone, of course :)
The funny thing is, millions of people watch 768x576 or so resolution moving pictures every day on 21"+ displays, and they aren't complaining. Perceived image quality still beats any realtime CG that a PC can display by a huge margin.
Is it a lack of resolution or framerate? Nah..

Of course, the lighting models, both temporal and spatial antialiasing methods, content detail, animation and other aspects of graphics are leaps and bounds beyond what is currently doable on a PC. Yes, raw rendering power can help some of those aspects, but I think raw power has to climb an exponential curve to achieve a linear improvement in perceived quality.
 
davepermen said:
i like flipquad and fliptri samplings.. :D
I like them for the cleverness of the idea, but they cause too much blurring IMO. The sample pictures I've seen look "ok" until you take a closer look and realize that the blur is overdone.
 
no_way said:
Of course, the lighting models, both temporal and spatial antialiasing methods, content detail, animation and other aspects of graphics are leaps and bounds beyond what is currently doable on a PC. Yes, raw rendering power can help some of those aspects, but I think raw power has to climb an exponential curve to achieve a linear improvement in perceived quality.

Carmack was talking about this a few years back. He was saying something along the lines of: lower resolutions and framerates (like TV) would give you better images if you could have high levels of AA and texture/lighting/shadow quality, etc.

Of course, increasing raw power is the way to go, but that still leaves a very high target to reach for realistic rendering. For instance, a couple of months ago in the "Wired" article on the rendering techniques used in "Matrix Reloaded", they talked about the capture technology used to model the facial textures. They used several digital cameras with the error correction disabled in order to capture 1 gig of visual data off each camera *per second*. And that was not for realtime use.

We're going to need a *massive* jump in power to match that kind of thing in realtime, and then the sheer overhead of generating the kind of content (textures, models, etc.) to use in a game that detailed will be insane. I think we're going to need a real paradigm shift in computing and some help from AI/expert-type systems in order to be able to pull off truly realistic, realtime rendered games.
 
I don't think that low resolution with lots of FSAA is the way to go. When I look at objects and see all the fine details even at quite big distances, I just begin to wonder what resolution a 21" monitor would need to provide the same resolution my eyes can see. No game shows me the texture detail at a distance that I see in real life. No, let's not go to lower resolutions in the future. ;)
 
sonix666 said:
I don't think that low resolution with lots of FSAA is the way to go. When I look at objects and see all the fine details even at quite big distances, I just begin to wonder what resolution a 21" monitor would need to provide the same resolution my eyes can see. No game shows me the texture detail at a distance that I see in real life. No, let's not go to lower resolutions in the future. ;)
I think the point was that even with lower resolutions, good AA and a good lighting model look better than high resolutions with semi-decent AA and a poor lighting model. As I've said before, and as others said before me: just a few minutes ago I was watching typical analog broadcast TV with the coax literally trying to fall out of the TV (so bad that if you move the TV a few millimeters the whole thing turns to static, or the cable simply falls out), and it looked a hell of a lot more realistic than those Half-Life 2 videos! Humans identify objects mostly by how light interacts with them.
 
no_way said:
The funny thing is, millions of people watch 768x576 or so resolution moving pictures every day on 21"+ displays, and they aren't complaining. Perceived image quality still beats any realtime CG that a PC can display by a huge margin.
Is it a lack of resolution or framerate? Nah..
People complain after getting used to HDTV, but the main difference is people don't generally sit 2 feet away from their TVs.
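A rough back-of-the-envelope sketch of the viewing-distance point, in Python, using the 21" / 768x576 numbers from earlier in the thread; the 8 ft couch distance and the ~1 arcminute acuity figure are my own assumptions:

import math

width_in = 21 * 0.8            # 4:3 set: width is 4/5 of the diagonal
pixel_in = width_in / 768      # horizontal pixel pitch in inches
for dist_ft in (2, 8):         # "2 feet away" vs an assumed couch distance
    arcmin = math.degrees(math.atan2(pixel_in, dist_ft * 12)) * 60
    print(f"{dist_ft} ft: ~{arcmin:.1f} arcmin per pixel")

That works out to roughly 3 arcminutes per pixel at 2 ft but under 1 arcminute at 8 ft, and normal visual acuity is about 1 arcminute, so from the couch the pixel structure sits right at the limit of what you can resolve.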
 
3dcgi said:
People complain after getting used to HDTV, but the main difference is people don't generally sit 2 feet away from their TVs.

I have difficulty watching digital cable and/or normal DirecTV channels anymore. Too much compression.
 
Why 32x? 16x with a jittered grid looks pretty damned good - almost indistinguishable from 10k samples per pixel. (There's a simple example image on my home page).

I still prefer the "many samples" part of that pic (presupposing that it's actually usable) :D
 
sonix666 said:
I don't think that low resolution with lots of FSAA is the way to go. When I look at objects and see all the fine details even at quite big distances, I just begin to wonder what resolution a 21" monitor would need to provide the same resolution my eyes can see. No game shows me the texture detail at a distance that I see in real life. No, let's not go to lower resolutions in the future. ;)

1600x1200 with a good jittered or sparse-sampled 16xAA algorithm would be plenty for the foreseeable future.
 
Actually, after further reflection, I realize that high resolutions are necessary if you are going to be doing stereoscopy, because when you get depth perception the edges are MUCH more noticeable. Still, I think more accurate lighting models are much more important than stereoscopy and high resolutions.
 
no_way said:
The funny thing is, millions of people watch 768x576 or so resolution moving pictures every day on 21"+ displays, and they aren't complaining. Perceived image quality still beats any realtime CG that a PC can display by a huge margin.

720x480 has 4.5 times the pixels of 320x240. Don't you think DVD is much better than VCD? :)

By the way, I saw BS-1 (Japanese satellite TV) on an HDTV once (720p), and it beats every DVD I've seen.
 
I think the Nvidia presentation is not arguing what is needed for CG quality, but is arguing what sample resolution is needed to match up with the human visual system's limits and come close to reality. This is a tractable problem that can be readily calculated in a PowerPoint.

On the flip side, the amount of power needed for each sample's lighting to approach reality (let alone the physics of movement) is not easy to calculate. That is, we don't have a "theory of The Matrix" yet on which to base solid calculations.

Now, as I have argued in the past, I can readily discern the "real" from the "rendered" even on NTSC video, so sample resolution alone is not the primary component we use to detect fake imagery. Lighting is much more important (e.g. better pixels).

On the other hand, if you hooked up two NTSC-resolution displays to my eyes via a VR headset, tied into a camera attached to a robot head (telepresence) that moved as I did, I still would not be fooled. I think it's clear that in order to give me the same experience I get in the real world, I need a wide FOV, a stereoscopic display, high foveal density, and a lack of spatial aliasing, coupled with low latency (if I move my head, there had better not be a perceived lag in the screen update).

The Nvidia PowerPoint is more or less like a Feynman-style nanotech proclamation: he's saying "look how much farther we have to go on resolution alone" and "if we use brute force, and assume exponential growth in circuit density, here's how many years it would take to achieve it".

I wouldn't take it as any more than an Nvidia employee waxing philosophical about what can be done in 10 years based on certain assumptions. No different than other manifestos on how much room is left in traditional semiconductors, or wishlists for quantum computing, or RSFQ.

As for why they didn't include any estimates for what's needed to do physically correct illumination for each pixel: it's a much harder and more debatable problem (just what is the "correct" lighting equation?). And you can take this further: even with correct lighting, sample resolution and frame rate, humans won't be fooled by inaccurate physical simulation. Just look at the Matrix Reloaded agent-car-jump highway sequence, which, despite a year of postproduction and probably lots of tweaking, still looked "fake".

A 36,000x gap is just the lower bound of the improvement needed. This presentation really doesn't say anything about what future plans the company has. It's just an observation of how much further we have to go, and what is possible within the bounds of Moore's-Law-like growth.
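As a quick back-of-the-envelope illustration of that last point, in Python (the 18- and 24-month doubling periods are my assumptions, not figures from the presentation):

import math

gap = 36000
doublings = math.log2(gap)                    # ~15.1 doublings of raw power
for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling}-month doubling: ~{years:.0f} years")

That comes out to roughly 23 years at an 18-month doubling and 30 years at a 24-month doubling, which is the sort of "here's how many years it would take" arithmetic the slides are doing.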
 
DemoCoder: Well spoken.

I'm also a bit surprised that 16xFSAA was taken from that PDF as something nVidia will implement (soonish?), even though it stood right next to things like multi-focal displays.
 