NVIDIA Are the Industry Standard Shader Driving Force?

This thread hurts my brain

Same here but it's improved now. I managed to force myself to stop banging my head on my computer desk.


("What is great with IHV X's Anisotropic Antialiasing is the T&L Shader Precision Mapping implementation which brings 4 times the performance for the same quality, making real-time Wireframe Shadowing through Per-pixel Stencil Occlusion a possibility for the first time")

I'll keep my eyes open for that one.

someone with a clue (even a very small one)

& now I'll return to my Lurking & learning mode. ;)
 
What is great with IHV X's Anisotropic Antialiasing is the T&L Shader Precision Mapping implementation which brings 4 times the performance for the same quality, making real-time Wireframe Shadowing through Per-pixel Stencil Occlusion a possibility for the first time

I thought that was the R400? ;)

...ok I'll shut up now.
 
CorwinB said:
I'll second that. I especially loved the part about doing AF through MSAA...

While learning a few techno-babble buzzwords and randomly throwing them around to look like you know what you are talking about ("What is great with IHV X's Anisotropic Antialiasing is the T&L Shader Precision Mapping implementation which brings 4 times the performance for the same quality, making real-time Wireframe Shadowing through Per-pixel Stencil Occlusion a possibility for the first time") may bring you a long way in many forums, someone with a clue (even a very small one) would have guessed by now that it's not possible in the B3D forums...


Well, he's just stuck in that awkward stage of learning how to separate the technical wheat from the marketing chaff...;) Not knowing how to separate technical jargon and marketing spin from real technical information is a malady usually cured by experience with the products in question. (Hopefully...;))
 
ROFLMMFAO~~~~

Doomtrooper said:
This thread hurts my brain, it is bad enough when arm chair experts troll the fan sites, but when they try to talk 'techno' on probably the most advanced forum for 3D graphics on the net, it is down right laughable :LOL:
That sums it up perfectly for me too! 8)

That's one thing I really like about the "lurk & learn at B3D" school of continuing education; you got some folks here who when they make comments you KNOW that you can pretty much treat it as straight from the textbook, 'specially considering a number of the members here pretty much write those textbooks. :LOL:

Please never change folks, this forum really is top-notch in every way I can think of. :)
 
radar1200gs said:
I'd certainly like to see some examples of anybody at nVidia, Kirk included saying anything negative about ATi since the launch of NV30.

nVidia has talked about and hyped their own products, not ATi's.
Jen-Hsun's "hallucinogenic" comment wasn't particularly positive, though I don't remember when it was printed (in that Wired article, IIRC).

As for nV making it a practice of disparaging other cards, take a trip in the history machine back to their Kyro whitepaper.

You are wrong. NV3x is capable of both adaptive and full blown anisotropic filtering.
I'm sorry, since when did bilinear = "adaptive" and trilinear = "full-blown?"

radar, are you a game developer or a hobbyist? If so, are you a recent one (meaning, post-3dfx)?
 
radar - I've got both a 5800 Ultra and a GF4. I can tell you that I can't shut off the 5800's "adaptive" aniso. I don't know if the 5900 Ultra is any better, but I can say this: the GF4 looks much better than the FX card with AF and/or AA enabled. The FX textures look blurry (as if AF is not working).

If you run those filtering tests too, the GF4 patterns look like the old "application" mode while the FX's don't. I don't know if this means anything, but there is lower filtering going on with the FX even in normal modes (though again, when playing without AA or AF I can't really tell).

If you really like NVIDIA so much I'd wait for the NV4x series to see if things improve.

-lar2r
 
nVidia, ATI, Kyle, life, the universe, and everything.

Whew!

Reading through this thread reminds me of why they are called threads...they weave in and out of the fabric, continuously changing direction.

HardOCP. I emailed them asking why they had not reported on the nVidia cheat thing (before they said it was not a cheat). I was nice and respectful. At the time I honestly thought it was a good site and I based many decisions off their positions. The reply I got was a flame: cussing, name-calling, and many derogatory remarks. I started looking around at other sites, and at their own forum, and found that [H] is actually only good for the links to OTHER people's reviews. Good thing to learn!

nVidia, ATI. I used to like them. I recommended them (as I am sure most people did). I used ATI because I have bad eyesight, and needed the crisper images. I put up with bad drivers and slower performance due to my need for good images. Now, ATI has surpassed nVidia in every department except cheating. They used to be ahead in that one, but as nVidia has tried to dominate everything, they definitely dominated the cheating department, and ATI is slowly but surely becoming honest. Wow. And the drivers are stable and good now too! (cat 3.5 released today, folks).

Does it matter how well ATI runs Dawn? Not until the naked version is released. ;)

What was my point? I am not sure anymore, I forgot. Hmmm....maybe it was, "So long, and thanks for all the fish."
 
Shoot, it just occurred to me that Nvidia could compete with the r420 (playfully referred to as Snoop Dogg by the engineers) by overclocking their next core and putting a BongFX(tm) water cooler on it.
 
Re: ROFLMMFAO~~~~

digitalwanderer said:
That's one thing I really like about the "lurk & learn at B3D" school of continuing education; you got some folks here who when they make comments you KNOW that you can pretty much treat it as straight from the textbook, 'specially considering a number of the members here pretty much write those textbooks. :LOL:

Please never change folks, this forum really is top-notch in every way I can think of. :)
Yep, more power to b3d! 8)
I learned more about 3D tech in the last 4 months than in the last 4 years.
Ironically, I guess I have to thank nVidia and their itches for that.. ;)

Cheers,
Mac
 
RussSchultz said:
Yeah, was talking about the off angle thingie dingie.

Off topic, a bit: I think if you've got a wide angle of view (think fisheye), you could have a plane perpendicular to the Z axis in front of you that requires more sampling on the outer reaches of the screen, even though the depth never changes.

I think anyways. ;)

Not sure how either hardware would deal with such a situation.

russ, you've let your real-world experience with optical phenomena smear your otherwise firm perception of the math fundamentals of the synthetic world that surrounds us. spit out that red pill you've taken today :)

perspective transformation, of the type we commonly use today, does not account for real lens effects such as fish-eye, simply because with the computer graphics projection you're dealing with projection planes, and there's no real refraction going on (which would otherwise be caused by the lens). IOW the perspective projection of a long planar object which is parallel to the view plane in camera space yields a still-parallel-to-the-viewplane object in screen space, i.e. no change in the depth values of the object's vertices relative to each other occurs. thus nothing to bring surface anisotropy into the picture. see?
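That claim is easy to check with a few lines of arithmetic. Below is just an illustrative sketch (the pinhole model, focal length, and depth values are my own assumptions, not anything from the thread): project evenly spaced points on a sheet parallel to the image plane, and the screen-space spacing comes out uniform, because every point shares the same view-space Z.

```python
# Sketch (assumed setup): pinhole perspective projection of a plane that is
# parallel to the image plane. All points share the same z, so projection
# reduces to a uniform scale -- no per-pixel anisotropy can arise.
f = 1.0   # focal length (image plane at z = f); illustrative value
z = 5.0   # depth of the paper, constant across the whole sheet

def project(x, y, z, f=f):
    """Standard perspective divide: screen = f * (x, y) / z."""
    return (f * x / z, f * y / z)

# Evenly spaced world-space points along the sheet, center to edge.
xs = [i * 1.0 for i in range(5)]          # world-space x: 0..4
sx = [project(x, 0.0, z)[0] for x in xs]  # screen-space x

# Screen spacing between consecutive samples is constant:
gaps = [round(b - a, 6) for a, b in zip(sx, sx[1:])]
print(gaps)  # every gap equals f/z = 0.2
```

Constant screen spacing for constant world spacing is exactly "no change in the depth values relative to each other", stated numerically.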
 
Hrm.

So, if I'm 1 "cm" from a piece of paper (and it's perpendicular to my eye), and my viewport is such that I can see the entire piece of paper, wouldn't there be a need to take more samples at the edge? (completely ignoring lenses and refraction and such)

Code:
|    
|    
|.    
|    
| .
|    
|..<---
|    
| .
|    
|.    
|    
|
assume the eye is the arrow. Wouldn't you need to take more samples at the top and bottom of the plane, than the middle in order to avoid texture aliasing?
 
RussSchultz said:
Hrm.

So, if I'm 1 "cm" from a piece of paper (and it's perpendicular to my eye), and my viewport is such that I can see the entire piece of paper, wouldn't there be a need to take more samples at the edge? (completely ignoring lenses and refraction and such)

assume the eye is the arrow. Wouldn't you need to take more samples at the top and bottom of the plane, than the middle in order to avoid texture aliasing?

Not as long as you are looking straight ahead, because the view-space Z of all the edge vertices of the sheet of paper is the same as long as you are looking straight at the paper. As soon as you alter your eye direction and look at any angle across the paper, the view-space Zs of the vertices are no longer equal and you will introduce a perspective effect. So in the diagram you show, with the eye looking straight ahead, the number of samples must remain the same over the whole surface.

To quote Foley and Van Dam on perspective projections:

"The perspective projections of any set of parallel lines that are not parallel to the projection plane converge to a vanishing point."

Lines that are parallel to the projection plane do not converge, so the ratio of sampling must remain constant.
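The Foley and Van Dam quote can also be seen numerically. Here's a minimal sketch (pinhole projection with made-up focal length and depths, purely illustrative): equally spaced points on a sheet parallel to the projection plane project to equally spaced screen positions, while the same points on a tilted sheet land at shrinking intervals as they converge toward a vanishing point.

```python
# Sketch (assumed pinhole model): tilt the sheet so its vertices no longer
# share one view-space z, and equally spaced world points land at *unequal*
# screen positions -- the perspective effect that makes anisotropic
# sampling necessary.
f = 1.0  # focal length, illustrative

def project_x(x, z, f=f):
    return f * x / z

# Sheet parallel to the projection plane: constant z everywhere.
parallel = [(x, 5.0) for x in range(5)]
# Sheet tilted away from the viewer: z grows with x.
tilted = [(x, 5.0 + x) for x in range(5)]

def gaps(points):
    sx = [project_x(x, z) for x, z in points]
    return [round(b - a, 4) for a, b in zip(sx, sx[1:])]

print(gaps(parallel))  # uniform: [0.2, 0.2, 0.2, 0.2]
print(gaps(tilted))    # strictly shrinking toward the vanishing point
```

The parallel case gives a constant sampling ratio; only the tilt (i.e., non-constant view-space Z) breaks it.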
 
RussSchultz said:
Hrm.

So, if I'm 1 "cm" from a piece of paper (and it's perpendicular to my eye), and my viewport is such that I can see the entire piece of paper, wouldn't there be a need to take more samples at the edge? (completely ignoring lenses and refraction and such)
<snip>
assume the eye is the arrow. Wouldn't you need to take more samples at the top and bottom of the plane, than the middle in order to avoid texture aliasing?

you cannot ignore lenses and refraction. imagine for a moment that your eye in that picture, instead of being spherical-lens-based, was a flat (say, CCD-based) projection panel (w/o a lens in front of it, that is). now, what would you see? /* pause */ you'd see exactly that area of the paper sheet which corresponds to the footprint of your 'planar' eye on it (for simplicity we assume there's only directional light and the reflective properties of the sheet of paper are perfect). something like this:

Code:
| <- sheet of paper    
|    
|    
|    
|.......| <- your eye
|       |
|       |
|       |
|.......|
|
|    
|    
|

as you can guess, you'd get the image sans a single bit of anisotropy on the paper surface. now, that would correspond to 'parallel projection', which is nicely modelled by the, well, parallel projection in computer graphics. now enter the perspective projection: some smart guys decided they could account for the non-planar nature of human vision and still keep the flat projection plane. so they got things to the following state:

Code:
|.    
|    
| .    
|    
|   .|
|    |  
|....|
|    |  
|   .|
|    
| .   
|    
|.

what the above means is that the sheet of paper is shrunk so it fits your planar eye now (sort of as if you had a lens eye w/ peripheral vision), but the image is still undistorted, i.e. you still get a constant ratio of paper_area / sensor_panel_area throughout the image you receive.

now, to actually get your fisheye effect but keep your planar eye, it would take a transformation which would distort the paper sheet as this:

Code:
\...................|
  \                 |
   |                |
    |               |
     |              |
     |              |
    |               |
   |                |
  /                 | <- this being your planar eye
/...................|

see now?
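For what it's worth, the diagrams above correspond to simple 1-D mapping rules: parallel projection leaves x unchanged, planar perspective maps an angle theta to f*tan(theta), and a fisheye compresses the periphery. This sketch contrasts the latter two (the equidistant fisheye model x' = f*theta is my own assumption for "fisheye", and f is an arbitrary illustrative value):

```python
# Sketch: planar perspective vs. an (assumed) equidistant fisheye, in 1D.
# Planar perspective grows like tan(theta) toward the edge of the view;
# the fisheye is linear in angle, so it compresses the periphery relative
# to the planar projection -- hence the bowed outline in the last diagram.
import math

f = 1.0  # focal length, illustrative
for deg in (10, 40, 70):
    th = math.radians(deg)
    planar = f * math.tan(th)   # perspective: accelerates toward the edge
    fisheye = f * th            # equidistant fisheye: linear in angle
    print(deg, round(planar, 3), round(fisheye, 3))
```

At 70 degrees off-axis the planar projection has pushed the point more than twice as far out as the fisheye, which is the distortion the last ASCII diagram is gesturing at.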
 
For someone who's usually quite spatially apt, I'm having difficulty wrapping my head around this.

It would seem that as the 'planar eye' moves closer to the surface, and you're consequently enlarging the field of vision, you're going to have to sample the surface differently--simple geometry should tell us this:

A 1-degree arc in your vision will cover more surface area of the plane at the outer edges than in the center.

Or am I just bumping up against the edges of the approximation where the mathematical model ceases to be adequate?
 