Tony Tamasi Interview

About this gamma-corrected AA debate:

Please correct me if I am wrong, but don't you need more than 8-bit precision per component in the first place to make gamma-corrected AA possible? I mean, isn't the idea of gamma-corrected AA to bring out the detail in the darker portions of the image by slightly amplifying the color in that region using a gamma-curve function? So if the image is only available at 8-bit precision per component, the detail in these dark regions is already lost and it is not possible to enhance it. Of course, NVidia can do it with FP16 buffers, but I am not sure they'd risk the performance hit they would take by rendering to a 2x-size buffer plus another shader pass.

I guess ATI manages to do that without a performance hit by doing it at FP24 just before converting everything down to 32-bit (8 bits per component) and writing to the framebuffer.
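To make the mechanics concrete, here is a minimal sketch of a gamma-correct resolve in C, assuming a display gamma of 2.2; the double stands in for the hardware's wider internal precision, and the function names are illustrative, not any vendor's API.

Code:
#include <math.h>
#include <stdint.h>

#define GAMMA 2.2 /* assumed display gamma, for illustration only */

/* Decode an 8-bit gamma-encoded sample back to linear light. */
static double to_linear(uint8_t c) {
    return pow(c / 255.0, GAMMA);
}

/* Encode linear light back to an 8-bit gamma-space value. */
static uint8_t to_gamma(double lin) {
    double g = pow(lin, 1.0 / GAMMA) * 255.0 + 0.5;
    return (uint8_t)(g > 255.0 ? 255.0 : g);
}

/* Gamma-correct resolve of n MSAA samples for one channel: average in
 * linear space, then re-encode. A naive resolve would average the
 * gamma-encoded bytes directly, which darkens edge pixels. */
uint8_t resolve_gamma_correct(const uint8_t *samples, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += to_linear(samples[i]);
    return to_gamma(sum / n);
}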
 
Rugor said:
I love the whole partial precision bit in the talk. Doesn't he understand that for ATI there is no such thing as "partial precision." R3xx cards run everything at FP24 because it's a single precision architecture. You hand it a shader (with or without a PP hint) and it runs it at FP24. I don't think he understands what single-precision means.
Oh, Tamasi understands perfectly well what single precision is. The exact quote is:
Tamasi said:
for Shader Model 3, the required precision is FP32, so you don't get any artifacts that might have been due to partial precision. You can still get access to partial precision, but now anything less than FP32 becomes partial precision. Essentially, the required precision for Shader Model 3 is FP32.
Couple that with
Tamasi said:
And I think lastly, the big issue is that there is no standard for FP24, quite honestly. There is a standard for FP32. It's been around for about 20 years. It's IEEE 754. People, when they write particularly a sophisticated program, they kind of expect it to produce precision that you're somewhat familiar with
and what you have is a very clever attempt on TT's part to paint ATI's product with the bad results that nV hardware generated (i.e. FP16).

Think about it. The market has a bad taste in its mouth about bad FP precision, a result of what nV did, and now TT is using that bad result on ATI. It's really quite good. TT missed his calling - he should be a politician.

P.S. And as far as I know, there was no provision in SM2 for FP16; the baseline was FP24, and FP32 was acceptable. Is that the case?
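For reference, the formats being argued over differ mostly in mantissa width. A small sketch, assuming the commonly reported s16e7 layout for R300's FP24 (FP16 and FP32 are the standard half and IEEE 754 single layouts):

Code:
#include <stdio.h>

/* Bit layouts under discussion (sign/exponent/mantissa):
 *   FP16: 1/5/10   FP24 (R300, as commonly reported): 1/7/16
 *   FP32 (IEEE 754 single): 1/8/23
 * The gap between 1.0 and the next representable value is ~2^-mantissa. */
int main(void) {
    printf("FP16 step near 1.0: %g\n", 1.0 / (1 << 10)); /* ~9.8e-4 */
    printf("FP24 step near 1.0: %g\n", 1.0 / (1 << 16)); /* ~1.5e-5 */
    printf("FP32 step near 1.0: %g\n", 1.0 / (1 << 23)); /* ~1.2e-7 */
    return 0;
}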
 
silhouette said:
About this gamma-corrected AA debate:

Please correct me if I am wrong, but don't you need more than 8-bit precision per component in the first place to make gamma-corrected AA possible? I mean, isn't the idea of gamma-corrected AA to bring out the detail in the darker portions of the image by slightly amplifying the color in that region using a gamma-curve function? So if the image is only available at 8-bit precision per component, the detail in these dark regions is already lost and it is not possible to enhance it. Of course, NVidia can do it with FP16 buffers, but I am not sure they'd risk the performance hit they would take by rendering to a 2x-size buffer plus another shader pass.

I guess ATI manages to do that without a performance hit by doing it at FP24 just before converting everything down to 32-bit (8 bits per component) and writing to the framebuffer.
You're partially correct, but I think you're missing something. You probably need at least 10 bits of internal precision to make a gamma-corrected frame buffer worthwhile. Both Nvidia and ATI have more internal precision than this. So when these 10+ bits need to be mapped/compressed to an 8-bit frame buffer, the gamma correction algorithm does the mapping and chooses to keep the bits that will yield the best image.
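A small worked example of that point, again assuming gamma 2.2: the first few 8-bit gamma codes land on linear values far below one linear 8-bit step, which is exactly the dark detail at stake.

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Linear-light value of the darkest 8-bit gamma codes, and how
     * many linear 8-bit steps (1/255) each one amounts to. */
    for (int code = 1; code <= 4; ++code) {
        double lin = pow(code / 255.0, 2.2);
        printf("gamma code %d -> linear %.2e (%.4f linear 8-bit steps)\n",
               code, lin, lin * 255.0);
    }
    return 0;
}
/* Gamma code 1 is ~5e-6 in linear light, about 1/775 of a linear
 * 8-bit step: a linear buffer would flatten it to black, hence the
 * need for 10+ internal bits before the final mapping down to 8. */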
 
3dcgi said:
You're partially correct, but I think you're missing something. You probably need at least 10 bits of internal precision to make a gamma-corrected frame buffer worthwhile. Both Nvidia and ATI have more internal precision than this. So when these 10+ bits need to be mapped/compressed to an 8-bit frame buffer, the gamma correction algorithm does the mapping and chooses to keep the bits that will yield the best image.

That's what I am trying to say. Even if you do not have the hardware to do it before writing the pixels to the MSAA buffer, you can still emulate it with another shader pass after rendering everything to the MSAA buffer and before decimating to the framebuffer, but you need that much precision (10+ bits). An FP16 buffer will give you this, but "is it worth that much of a performance loss?" is another question.

Alternatively, for each shader that writes values to the framebuffer, they could add a small piece of extra code at the end of the main shader just for this purpose. However, it has to be done carefully, because if the color values are read back for multi-pass shaders, the correction has to be undone before the colors are used again... Seems doable to me... 8)
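A sketch of that idea, written in plain C standing in for the shader tail; gamma 2.2 is assumed and the function names are illustrative, not a real shader API.

Code:
#include <math.h>

/* Last operation of any pass that writes to the framebuffer. */
static float encode_for_framebuffer(float linear_color) {
    return powf(linear_color, 1.0f / 2.2f);
}

/* First operation of any later pass that reads the value back,
 * so all arithmetic stays in linear space. */
static float decode_on_readback(float stored_color) {
    return powf(stored_color, 2.2f);
}

/* Multi-pass example: blend a read-back value with fresh linear
 * shading, then re-encode for the final write. Skipping the decode
 * here is exactly the "corrected twice" bug to be careful about. */
float next_pass(float stored, float new_linear) {
    float lin = decode_on_readback(stored);
    return encode_for_framebuffer(0.5f * (lin + new_linear));
}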
 
scarlet said:
P.S. And yeah, XP does suxor compared to 98SE. I spent the last two days trying to break XP out of an install loop when it decided it didn't like something. I finally had to wipe the partition and all the data stored therein. 98SE never did that to me.

That's Microsoft's anti-piracy silliness for you. But in all fairness XP is superior to 98SE in every way once you do a bit of tweaking.
 
Scarlet said:
...
Tamasi said:
And I think lastly, the big issue is that there is no standard for FP24, quite honestly. There is a standard for FP32. It's been around for about 20 years. It's IEEE 754.
This is clever FUD no matter how you look at it. The issue isn't whether or not you have a standard but rather whether or not you consistently handle precision in a way that is logically coherent and reasonably representative of infinite precision within a limited number of bits. Anyone who has ever designed floating point ALUs knows there is more than one way to design them, and generally speaking they are all good. What we have as an IEEE standard is what was convenient for Intel to do way back when. :)

I won't belabor how Tamasi used one argument just one month ago for FP16 and uses a completely different argument today (it seems like several others have already done that). Folks should take that into account every time Tamasi opens his mouth.

What I really resent about the "fp32 is IEEE754 standard" comments is their inherent dishonesty. IEEE754 may have been around in cpus for "20 years" as he states, but fp color precision in current 3d gpus, as found in their pixel pipelines, is not even remotely comparable, and of course that's why nVidia (or anyone else) hasn't been making fp32 gpus for the past ten years...;)

The type and kind of "fp32" instructions differ greatly between the two, and IEEE754 cpus don't have pixel pipes and aren't concerned with calculating the color of pixels to fp levels of precision.

And as you say, even if one didn't understand this, how would Tamasi's "fp32" remarks have applied to fp16 in nV3x? Does Tamasi suggest that developers ask, "Whose fp16 is this?" since fp16, like fp24, has no "32" in its description?....;)

I don't think these guys really understand how this kind of nonsense affects the perception of their products in a negative sense. nVidia's got enough of a credibility problem as it is, and things like this, along with inaccurately and dishonestly representing ps2.0 rendering quality as ps3.0-specific, are not a good way of getting the ball rolling with nV40. Such tactics do not inspire confidence that nVidia is wholly confident here.

In relation to fp32, why is it not sufficient to simply state that fp32 renders to a higher level of color precision than fp24? I can't see how the IEEE754 marketing-guy gibberish about fp32 is anything but far less effective on several levels. Tamasi doesn't seem to be doing his job very well in these respects.
 
Unfortunately, the majority of graphics card consumers (at a retail level) do not come to these boards, do not have the understanding, nor do they do the homework that you and I do. A lot of them will see a cool dragon or a cool mermaid on the box, and things like 16 pipelines, 32-bit precision and 256MB of memory will drive their decision.

But I do agree with you that this type of BS does not sit well with the educated consumers or the resellers, which is who Tamasi should really be focusing on.
 
Scarlet, why don't you respond to OpenGL guy's point? Or would you rather just continue your crusade, oblivious to reality?
 
Sxotty said:
Scarlet, why don't you respond to OpenGL guy's point? Or would you rather just continue your crusade, oblivious to reality?

Based on OpenGLGuy's post:

OpenGL guy said:
"Given that the GeForce HW has built-in MSAA downsampling in the RAMDAC, you'd have to do the gamma correction before then if the hardwired circuitry didn't do it for you....

I believe Scarlet did indirectly address it:

Scarlet said:
In the context of what nV HW will do, I don't doubt his accuracy. However, it is the way Tamasi states it as an absolute, implying there is no other solution, that I take issue with. If he had said, "On nV hardware, blah-blah-blah," I would have had no objection.
 
Stryyder said:
Unfortunately, the majority of graphics card consumers (at a retail level) do not come to these boards, do not have the understanding, nor do they do the homework that you and I do. A lot of them will see a cool dragon or a cool mermaid on the box, and things like 16 pipelines, 32-bit precision and 256MB of memory will drive their decision.

I would say that the majority is FORTUNATE not to come here. There's more ego boosting and fanboyism in here than anything else. But of course, every so often we get a nice tidbit of insightful and useful info. :)

But I do agree with you that this type of BS does not sit well with the educated consumers or the resellers, which is who Tamasi should really be focusing on.

Not sure about the resellers....don't they care more about Nvidia's ability to get Joe Blow interested in their product than the nitpicking nonsense that goes on on these boards?
 
trinibwoy said:
...
Not sure about the resellers....don't they care more about Nvidia's ability to get Joe Blow interested in their product than the nitpicking nonsense that goes on on these boards?

Joe Blow is typically going to pick up a $100-$200 3d card, most of the time, precisely because price is the one thing about it all that he can readily understand. A person contemplating a $500US expenditure on a 3d-card, however, is apt to derive far more from technical "nitpicking" than you might imagine...;) I mean everybody who spends $500 on a 3d card isn't a dunce, right?...;) (Although I will stipulate that there are a few "Joe Blows" around who do indeed have more money than brains, I think the truth is that one man's nitpicking is another man's treasure.)
 
WaltC said:
trinibwoy said:
...
Not sure about the resellers....don't they care more about Nvidia's ability to get Joe Blow interested in their product than the nitpicking nonsense that goes on on these boards?

Joe Blow is typically going to pick up a $100-$200 3d card, most of the time, precisely because price is the one thing about it all that he can readily understand. A person contemplating a $500US expenditure on a 3d-card, however, is apt to derive far more from technical "nitpicking" than you might imagine...;) I mean everybody who spends $500 on a 3d card isn't a dunce, right?...;) (Although I will stipulate that there are a few "Joe Blows" around who do indeed have more money than brains, I think the truth is that one man's nitpicking is another man's treasure.)

Hehehe. How many cards are sold at Best Buy at full price, instead of someone doing some research and buying them for less at Newegg.com or somewhere else? This should give you an idea of how many dunces are buying $500.00 cards.
 
Randell said:
trinibwoy said:
I would say that the majority is FORTUNATE not to come here. There's more ego boosting and fanboyism in here than anything else..

Bye then :devilish:

:devilish: :devilish:

WaltC said:
Joe Blow is typically going to pick up a $100-$200 3d card, most of the time, precisely because price is the one thing about it all that he can readily understand. A person contemplating a $500US expenditure on a 3d-card, however, is apt to derive far more from technical "nitpicking" than you might imagine...;) I mean everybody who spends $500 on a 3d card isn't a dunce, right?...;) (Although I will stipulate that there are a few "Joe Blows" around who do indeed have more money than brains, I think the truth is that one man's nitpicking is another man's treasure.)

Hey I nitpick from time to time too but less than some here since I wasn't burned by the FX fiasco :) My last NV card was a 4200 that I loved so dearly and really wish I hadn't sold the bugger for a measly 60 bucks.

I was just trying to put the discussions on these boards in the context of Nvidia's perspective. I think someone implied that they should care about what we think of their marketing practices, but should they really?
 
Stryyder said:
Hehehe. How many cards are sold at Best Buy at full price, instead of someone doing some research and buying them for less at Newegg.com or somewhere else? This should give you an idea of how many dunces are buying $500.00 cards.

Actually...I like buying from my local Best Buy because I like the 30-day, no-questions-asked, satisfaction-guaranteed policy--a lot. I've bought from NewEgg as well, but what I like about Best Buy is that if I want to return a product for an exchange against another like product, an exchange against different products, a full refund (no restocking fee), or some combination thereof, I've got 30 days from my local BB in which to do so, and I can do it instantly without phone calls, shipping delays, or other red tape. I've returned unsatisfactory products to online vendors in the past, and while I was eventually satisfied, it was nowhere near as simple and fast a process. Generally that kind of convenience is worth a few extra bucks to me; while I seldom use it, I have used it on more than one occasion, and consider it well worth it.
 
trinibwoy said:
....I was just trying to put the discussions on these boards in the context of Nvidia's perspective. I think someone implied that they should care about what we think of their marketing practices, but should they really?

You might want to try the shoe on a different corporate foot, then. Do you think that companies like General Motors or Intel or Merrill Lynch would be completely unconcerned if they perceived that a substantial number of their potential customers were passing over their products because they perceived the company's PR was dishonest in some fashion? I think it would be a major issue for them, and something they'd want to correct immediately. IMO, the most valuable commodity a company has is its veracity, or at least its perceived veracity, in the public eye. Lose that and you've lost it all, as it's an issue which affects every facet of your business, from your potential investors to your potential customers, to your suppliers and resellers.

Is it really rational to expect nVidia (or any other company) to be unconcerned with how it and its products are perceived in the markets in which those products are sold? I can't see how. Rather, I think it would be irrational to ignore such perceptions when it becomes apparent that they exist.
 
WaltC said:
trinibwoy said:
....I was just trying to put the discussions on these boards in the context of Nvidia's perspective. I think someone implied that they should care about what we think of their marketing practices, but should they really?

You might want to try the shoe on a different corporate foot, then. Do you think that companies like General Motors or Intel or Merrill Lynch would be completely unconcerned if they perceived that a substantial number of their potential customers were passing over their products because they perceived the company's PR was dishonest in some fashion? I think it would be a major issue for them, and something they'd want to correct immediately. IMO, the most valuable commodity a company has is its veracity, or at least its perceived veracity, in the public eye. Lose that and you've lost it all, as it's an issue which affects every facet of your business, from your potential investors to your potential customers, to your suppliers and resellers.

Is it really rational to expect nVidia (or any other company) to be unconcerned with how it and its products are perceived in the markets in which those products are sold? I can't see how. Rather, I think it would be irrational to ignore such perceptions when it becomes apparent that they exist.

Well, there are differences between a company like GM and a company like nvidia. I think image is a lot more important to a company like GM, where people are making long-term commitments to a relatively expensive product. I think the enthusiast market is much more interested in performance and bang for the buck than in image.

That said, I don't doubt that nvidia's image is having a somewhat negative impact on their sales, but I doubt it will really hurt them in the long run, provided they return to producing quality/competitive products. If they have another bad generation or two, however, it will probably be what people cite as their downfall.
 
AlphaWolf said:
That said, I don't doubt that nvidia's image is having a somewhat negative impact on their sales, but I doubt it will really hurt them in the long run, provided they return to producing quality/competitive products. If they have another bad generation or two, however, it will probably be what people cite as their downfall.

I tend to agree with this point from my personal experience. Not everyone was impacted by the failure of Nvidia's FX line. I know people who still have GF2s and GF4s, and if they upgrade in the next few months they won't have a clue about the FX failure.

I think this horrible image Nvidia has is being overblown. Yeah, people who follow the 3D scene (a minuscule minority) know the deal, but everyone else cares much less, I must say. A friend of mine who has gotten wind of the FX fiasco and knows about ATI's current performance lead still went out and bought a 5900XT. His previous card was a GF2 which treated him well, and the 5900 reviews have all been positive, and that was good enough for him. When I tease him about AA quality, he says he can't miss what he's never seen :LOL:

Basically, I think the majority of video card purchases go along these lines. It will take a string of failures by Nvidia for the general public to acquire the distaste for them that permeates some forums online.
 