State of 3D Editorial

stevem said:
JoshMST said:
...but rather because now that DX9 VS/PS 3.0 specifications are well known, both companies are playing on a much more level field

I'm sick of this red herring! I originally heard it from an OEM source as postulated by a particular sales rep when a 2nd tier VAR was about to pull an account...

What all people who quote this "level playing field" seem to forget is that ATi will also be developing new hardware. They've proved they can build a better mousetrap than Nvidia for the last year, so it's more likely that ATi will continue to do this than that Nvidia will magically come up with something better.

There is some sort of implication that Nvidia will automatically leapfrog ATI, that Nvidia were simply "not trying", and that ATI only beat Nvidia by accident, or because Nvidia let ATI into the lead. Fact is, we've heard this same song ever since the R300: "Nvidia will beat them next time, next drivers, next chip". Yet Nvidia have not been able to do so, and can barely draw level in most cases even with cheats and reductions in IQ.

This idea that Nvidia will fix everything "next time" is just more FUD spread by Nvidia, and picked up on by the PR-extension websites and fan boys. There's no reason to think Nvidia will not have just as much of a hard time competing against R420 as they did against R3x0, mainly because Nvidia still have this attitude of thinking they are always in the right, rather than fixing the problems in their company that put them in the position they are in now.
 
Well, if HALO is anything to go by, then the R300 is not going to run DX9 applications very well at all (of course, Halo isn't really DX9). Once real DX9 games come out, not ones with a few DX9 features thrown on top, the R300 will look pathetic. That is how it always is. However, if you want to say that the R300 schooled the FX line, then you would be correct :).
 
Sxotty said:
Well, if HALO is anything to go by, then the R300 is not going to run DX9 applications very well at all (of course, Halo isn't really DX9). Once real DX9 games come out, not ones with a few DX9 features thrown on top, the R300 will look pathetic. That is how it always is. However, if you want to say that the R300 schooled the FX line, then you would be correct :).

What are you talking about? Halo runs very well on the R300. How do you define a real DX9 game?
 
Josh, your article really just showed your lack of understanding of the current state of viddy cards.
He at least argued with the backing of a reasonable degree of technical knowledge, unlike those who argue on the basis of simple bias and received ideas.
 
Zod said:
Josh, your article really just showed your lack of understanding of the current state of viddy cards.
He at least argued with the backing of a reasonable degree of technical knowledge
No, he didn't. He put together some half-baked assumptions that sound like they came directly from nVidia's PR dept...I don't see your basis for saying it was backed with a reasonable degree of technical knowledge 'cause all I read in that article is a lack of said above. :(
 
I define a real DX9 game as one that is designed to run on DX9 as a minimum, not one that is designed to run on DX8 with a few DX9 features added on top.

Note this does not mean that only DX9 cards can handle playing it; it means that on lesser hardware integral parts will be missing, not just a bump-mapped floor or two. For example, Morrowind is a DX8 game, but it can run on DX7 video cards, although not so great :).

And you should not take it badly that current cards will play games released in a year worse than the ones that were out when they were developed; this is hardly unusual.
 
digitalwanderer said:
Zod said:
Josh, your article really just showed your lack of understanding of the current state of viddy cards.
He at least argued with the backing of a reasonable degree of technical knowledge
No, he didn't. He put together some half-baked assumptions that sound like they came directly from nVidia's PR dept...I don't see your basis for saying it was backed with a reasonable degree of technical knowledge 'cause all I read in that article is a lack of said above. :(

Agreed.

I've quit reading articles @ PenStar because he's as bad as Anand and Tom's when it comes to nVIDIA.
 
Ok, here goes...

First off, let me say thanks to those in this forum that have lent a hand in helping me understand what is going on behind the scenes in both the PR world and hardware. I have been a big fan of ATI products since the Radeon 9700 Pro, as it truly was (and is) a marvelous piece of hardware. If there is any telling fact about myself and ATI, it is that I run a 9800 Pro in my main/working machine. Why? Because it works really well in all situations as compared to the FX 5900 Ultra that sits in the lab machine.

I think my term "brute force" architecture should have been referred to as "focused". Yes, the R3x0 series is a smart design; it works very well with PS/VS 2.0 code, as well as all of the shader versions before it. It does what it does very well, and doesn't waste transistor space on features and things that it doesn't need. I do not think that the R3x0 series is perfect, but I do believe that it is honestly the best product out there right now.

NVIDIA has made some very interesting tradeoffs with their architecture, and I am very confused as to some of the hardware decisions that they have made. I do not think the engineers at NVIDIA are in any way second class. I think this group has proven themselves again and again in the past, and I do not think that they will fall down again like they did with the NV3x series of products. Now, whether the NV4x series of chips will overcome the R4x0 series, that is a different story. We all know that the engineering ability at ATI is also excellent, and they do not underestimate what NVIDIA will do. My overall feeling (and hope) is that there will be a lot more parity between the companies this next spring in terms of features and performance. This parity will certainly help out consumers, developers, and the different vid card manufacturers.

Now, my question to you guys is whether we will truly see FP32 implemented across the board in the next generation of products. From reading here and elsewhere, it would seem that NVIDIA should just concentrate on FP32 and drop FP16 altogether. It was nice of MS to add the partial precision hints for NVIDIA, though FP24 is designated as the minimum. But will we see NV go total FP32? Will ATI also decide to go with FP32? If that is the case, will there be much of a performance hit? Or will any kind of hit be offset by the fact that it will be a next generation part running faster?
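For readers less steeped in the precision debate: the three formats in question differ only in how many bits they spend on exponent versus mantissa. FP16 is 1 sign / 5 exponent / 10 mantissa bits, ATI's FP24 is (as far as public information goes) 1/7/16, and FP32 is the IEEE 754 single format, 1/8/23. A quick illustrative sketch that prints the IEEE single-precision layout; the smaller formats follow the same pattern with narrower fields:

```python
import struct

# Field widths (sign, exponent, mantissa) of the formats under discussion.
# FP24's 1/7/16 split is ATI's commonly cited figure; treat it as an assumption.
FORMATS = {"FP16": (1, 5, 10), "FP24": (1, 7, 16), "FP32": (1, 8, 23)}

def fp32_bits(x: float) -> str:
    """Show an IEEE 754 single-precision value as 'sign exponent mantissa'."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret as uint32
    s = f"{raw:032b}"
    return f"{s[0]} {s[1:9]} {s[9:]}"

print(fp32_bits(1.0))    # sign 0, biased exponent 127, all-zero mantissa
print(fp32_bits(-2.0))   # sign 1, biased exponent 128, all-zero mantissa
```

The point of the exercise: FP32 spends one more exponent bit and seven more mantissa bits than FP24, which is exactly where the transistor-budget argument in the thread comes from.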

Thanks for all the input guys, this is certainly making things a LOT more clear. Too bad there aren't any NV engineers hanging out here to help clarify some of the things with their hardware.

Again, I have great respect for what ATI has done, and continues to do. They have the strongest DX9 offerings at the different price points (except value). When will we see R3x0 parts hit the under $100 level?
 
JoshMST said:
Now, my question to you guys is whether we will truly see FP32 implemented across the board in the next generation of products. From reading here and elsewhere, it would seem that NVIDIA should just concentrate on FP32 and drop FP16 altogether. It was nice of MS to add the partial precision hints for NVIDIA, though FP24 is designated as the minimum. But will we see NV go total FP32? Will ATI also decide to go with FP32? If that is the case, will there be much of a performance hit? Or will any kind of hit be offset by the fact that it will be a next generation part running faster?
What does that matter? It isn't going to happen during the productive life of the FX nV3x product, so it's kind of totally irrelevant.

Thanks for all the input guys, this is certainly making things a LOT more clear. Too bad there aren't any NV engineers hanging out here to help clarify some of the things with their hardware.
Yeah, you didn't mention in your article how reticent nVidia is with the info while ATi communicates with the community and answers questions...BIG difference.

Again, I have great respect for what ATI has done, and continues to do.
You sure don't seem to show it in your article. :(

They have the strongest DX9 offerings at the different price points (except value). When will we see R3x0 parts hit the under $100 level?
A couple of weeks. When will we see a dx9 offering from nVidia below $100 that can actually run dx9 stuff?

I'm glad you're taking the time to address this Josh, that article is a bit of a travesty the way it stands. :(
 
DW, the guy went to a lot of effort to write an article with a new slant. You, on the other hand, just trash it with cheap shots.
 
Zod said:
DW, the guy went to a lot of effort to write an article with a new slant. You, on the other hand, just trash it with cheap shots.
No, he wrote a badly researched infomercial heralding the "advantages" of nVidia which are non-existent.

I really am not trying to slam him, I'm trying to point out his errors...if I was trying for a slam I'd be saying "sucks". ;)
 
Well, I must be hitting some kind of middle ground with the article, as I have been accused by NVIDIA fans of having an ATI slant with the article.

As for the FP32 comment I was asking about, I wasn't talking about the current generation of products, so I am unsure what you mean with the nv3x comment. Yes, it does support FP32, but it also supports FP16. The question I was trying to ask is whether the next-gen parts from ATI and NV will only support FP32 (and not FP24 and FP16). Now, I know that DX9 doesn't really have an FP32 spec, but it is supported (as full precision). FP32 is the IEEE standard, so I was just wondering if anyone in the know has anything to comment about the next-gen parts transitioning to full FP32 precision and no longer supporting FP16 and FP24.
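To put a rough number on what separates the formats in practice: the useful difference is how many significant bits survive a computation. Here is a toy model of my own (an illustration only; it ignores exponent range, rounding modes, and denormals) that rounds a value to a given number of significant binary digits, counting the implicit leading 1:

```python
import math

def quantize(x: float, sig_bits: int) -> float:
    """Round x to sig_bits significant binary digits.
    Toy model of a floating-point format: ignores exponent range and denormals."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x == m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** sig_bits
    return round(m * scale) / scale * (2 ** e)

x = 1 / 3
for name, bits in [("FP16", 11), ("FP24", 17), ("FP32", 24)]:
    # bits = stored mantissa width + the implicit leading 1
    err = abs(quantize(x, bits) - x) / x
    print(f"{name}: relative error ~ {err:.1e}")
```

Each step from FP16 to FP24 to FP32 buys roughly two extra decimal digits of precision per operation, which is why long shader programs (where rounding error accumulates) are where the precision choice actually shows.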

My communication with NVIDIA is really good. I write the guys there, they write back. I write the PR dept. at ATI and I hear nothing. I really have only met the ATI guys here in the past couple of days (though I did converse a bit with OpenGL Guy last year, but I never kept in touch and I didn't stick around B3D). My basic point is that when one side communicates with you while the other side doesn't give you the time of day, what information a person has is most likely colored by the side that actually talks to you. Now, I know that ATI can't address every single hardware site around, but dammit, I have been doing this since 1999 at PenStar, and since 1997 at other sites. And due to outside pressures of real life, I haven't had the chance in the past year and a half to really concentrate on 3D graphics and the new technology that we are seeing. I used to be pretty keen on this stuff, but when you don't have the time to get into the new generation as fully as you'd like, you will make mistakes, as I have. So basically it is lovely to be slammed down when all I am trying to do is get good information.

As for the FX 5200... it can certainly run DX9 stuff better than the Radeon 9000/9200. I am not trying to apologize for the performance of the FX 5200 in DX9 apps, but at least the option is there to view the slideshow.
 
JoshMST said:
Now, my question to you guys is whether we will truly see FP32 implemented across the board in the next generation of products.

Actually, that's not a particularly exciting question, nor would knowing the answer give us any idea of which future part might be "better", imo.

From reading here and elsewhere, it would seem that NVIDIA should just concentrate on FP32 and drop FP16 altogether.

Yes and no. nVidia, imo, should concentrate on getting a part out that runs well while meeting high-end API specs. Period. Whether nVidia accomplishes this by dropping FP16 and throwing more transistors at their current FP32 pipes, or trashes it all and goes with a more ATI-like (all FP24) approach...I don't really care.

But will we see NV go total FP32? Will ATI also decide to go with FP32?

I would guess no on both accounts.

I think nVidia's NV4x would have too much borrowed from the NV3x to be a drastically different approach. This doesn't mean there can't be significant improvements in FP32 performance, but I don't see the FP32/FP16 duality leaving nVidia until the DX10 era. (Likely the NV50 generation.)

Similarly, I don't see ATI moving up to full support of FP32 pixel shaders until the same time. (The DX10 era / their R500 generation.) AFAIK, PS 3.0 still has FP24 set as "the spec", and ATI stands to gain virtually nothing by committing resources to FP32. ATI would be better off committing resources to just making their current core faster, and supporting whatever new PS/VS 3.0 functionality is required.

To be clear, I really don't think ATI will have any significant challenge moving up to FP32. It's just that in the list of priorities for ATI, I don't see FP32 coming near the top of the list of "improvements to the R300 core needed for the next generation part."

If that is the case, will there be much of a performance hit? Or will any kind of hit be offset by the fact that it will be a next generation part running faster?

I don't think that when ATI implements FP32 there will be any "performance hit per pipe" simply due to precision support. (Just like how the R300 has "no performance hit" for running FP24 vs. integer.) Performance hits are going to come from shader execution in general...scheduling, dealing with flow control, etc.

Again, I have great respect for what ATI has done, and continues to do. They have the strongest DX9 offerings at the different price points (except value). When will we see R3x0 parts hit the under $100 level?

I think spring '04 will see the first DX9 parts from ATI really targeted at the sub-$100 market. Coming soon we'll have some 64-bit Radeon 9600 boards that will probably sell in that market, though.

By fall '04 at the latest, ATI should also have DX9 in their integrated chipsets....making DX9 support really cheap.
 
JoshMST said:
Well, I must be hitting some kind of middle ground with the article, as I have been accused by NVIDIA fans of having an ATI slant with the article.

Out of curiosity, can you give some examples of where you're accused of having an ATI slant? ;)

My communication with NVIDIA is really good. I write the guys there, they write back. I write PR dept. at ATI and I hear nothing.

And that explains the nVidia slant. ;) Not that it's entirely your fault...ATI does need better PR communication.
 
Dave could tell you with perfect accuracy, I guess, but I'll try to share what I have learned here.

From my readings I think the next ATI cards will still be FP24 and will be more focused on the PS/VS part of the chip. But as PS 3.0 and VS 3.0 are already exposed in DX9.0, I guess they are fine with FP24.

I read some time ago one ATI guy saying that they will move to FP32 only when needed, and I don't see why it would be needed before the R500.

About the NV40, no idea, but unless they make a totally new architecture with a lot more power, it could be hard to go full FP32 with no fallback and still be competitive in the performance area.
 
JoshMST said:
As for the FP32 comment I was asking about, I wasn't talking about the current generation of products, so I am unsure what you mean with the nv3x comment. Yes, it does support FP32, but it also supports FP16. The question I was trying to ask is whether the next-gen parts from ATI and NV will only support FP32 (and not FP24 and FP16). Now, I know that DX9 doesn't really have an FP32 spec, but it is supported (as full precision). FP32 is the IEEE standard, so I was just wondering if anyone in the know has anything to comment about the next-gen parts transitioning to full FP32 precision and no longer supporting FP16 and FP24.
I'm really confused by your fixation on FP32, although upon reading that you mainly talk to nVidia folk, I guess it makes sense. ;)

Lemme try and explain this as simply as possible.

FP32 is a moot point right now, period. Nothing needs it, nothing asks for it, it's unnecessary. nVidia took a big gamble with their "cinematic computing" bid and lost; anything you hear from them otherwise is pure FUD.

The big question is how well the current crop of cards perform FP24, since that is what all the games will be using. It wasn't "nice" of M$ to give nVidia the partial_precision hint option in DX9...it was absolutely and totally life-saving for nVidia to have them do that.

See, the FX cards can't run FP32 well at all. Yes they can run it, but not well. :)

The whole FP32 decision of nVidia's is the root of most of their current woes, honest.
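One concrete (and hypothetical, simplified) illustration of where partial precision bites: a dependent texture read. A texture coordinate computed at FP16 carries only 11 significant bits (counting the implicit leading one), so addressing a 2048-texel texture can already be off by a sizeable fraction of a texel, while FP24 still has plenty of sub-texel headroom. A sketch using a toy rounding model (my own; it ignores exponent range, filtering, and real hardware behavior):

```python
import math

def quantize(x: float, sig_bits: int) -> float:
    """Round x to sig_bits significant binary digits (toy precision model)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x == m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** sig_bits
    return round(m * scale) / scale * (2 ** e)

TEXELS = 2048                      # a large but ordinary texture size
coord = 1000.3 / TEXELS            # aim 0.3 texels into texel 1000

err16 = abs(quantize(coord, 11) - coord) * TEXELS   # error in texel units
err24 = abs(quantize(coord, 17) - coord) * TEXELS

print(f"FP16 coordinate error: {err16:.3f} texels")
print(f"FP24 coordinate error: {err24:.4f} texels")
```

Under this model the FP16 coordinate lands a noticeable fraction of a texel away from where it should, which is the kind of error that shows up as shimmering or banding, while the FP24 error stays well below a hundredth of a texel.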
 
JoshMST said:
Now, my question to you guys is whether we will truly see FP32 implemented across the board in the next generation of products. From reading here and elsewhere, it would seem that NVIDIA should just concentrate on FP32 and drop FP16 altogether. It was nice of MS to add the partial precision hints for NVIDIA, though FP24 is designated as the minimum. But will we see NV go total FP32? Will ATI also decide to go with FP32? If that is the case, will there be much of a performance hit? Or will any kind of hit be offset by the fact that it will be a next generation part running faster?

Hi Josh,

Glad you visited again.

Want to make a couple comments about the FP32 route, that you are "touting".

1. You say in your article that PS 3.0 will be in DX9.1, when it's more likely to be in DX10. Now...whether DX9.1 PS 2.0+ (2.1?) is FP32 or not, I don't know, but I highly doubt it.

2. EVEN IF DX9.1 with PS2.1 uses FP32, there are NO cards at present that can handle it. NONE of the NV3x line is fast enough to DRIVE FP32. That is NOT future-proof. So everyone that bought FX5200-FX5950s is SCREWED, because they'll still run slow in FP32.

3. DX9 is going to be around for QUITE a while. The difference between PS2.0 and possibly PS2.1 would be the same as a GeForce 3 using PS 1.1 and a GeForce 4 using PS 1.3, in terms of eye candy. There's not going to be much difference...meaning current FP24 cards have a LONG life for the DX9 era, just as my GeForce 3 Ti500 lasted all the way through DX8 and a year of DX9 (before DX9 games started coming out). TRULY future-proof. :p

My point is...IF DX9.1 supports FP32 somehow, it's NOT going to make any difference to the CURRENT ATi line...it's not going to help the CURRENT nVidia line...and until both companies can actually "drive FP32 at a decent speed", it's not going to help the NEAR-FUTURE cards either.

(Now, whether the NV40 and R420 can drive FP32 well enough, I don't know.)

Regards,

Taz
 
digitalwanderer said:
JoshMST said:
Now, my question to you guys is whether we will truly see FP32 implemented across the board in the next generation of products. From reading here and elsewhere, it would seem that NVIDIA should just concentrate on FP32 and drop FP16 altogether. It was nice of MS to add the partial precision hints for NVIDIA, though FP24 is designated as the minimum. But will we see NV go total FP32? Will ATI also decide to go with FP32? If that is the case, will there be much of a performance hit? Or will any kind of hit be offset by the fact that it will be a next generation part running faster?
What does that matter? It isn't going to happen during the productive life of the FX nV3x product, so it's kind of totally irrelevant.
Perhaps he's just curious and would like to know what people think. I think that's allowed, isn't it? He is just asking a question, and that question can be treated on its merits rather than being tied into some other subject and labeled irrelevant.

Thanks for all the input guys, this is certainly making things a LOT more clear. Too bad there aren't any NV engineers hanging out here to help clarify some of the things with their hardware.
Yeah, you didn't mention in your article how reticent nVidia is with the info while ATi communicates with the community and answers questions...BIG difference.
If you're not a regular on Beyond3D or one of the other message boards that we frequent then you wouldn't necessarily know that this is the case. I'm sure that he understands more about our involvement with the community now than when he penned the article.

I'm glad you're taking the time to address this Josh, that article is a bit of a travesty the way it stands. :(
This being the case perhaps witch-hunts with respect to people who are trying to improve things could be viewed as a bit anti-productive? Constructive criticism and allowing people a chance to address the issues that we raise seems a more appropriate approach.

As an aside - sorry if this seems like an attack on your post DW, but you just happened to be the most convenient example at the bottom when I chose to reply, and I really mean my comments above as a more general impression on the way things seem to be going on these boards at the moment. I don't particularly like it when new posters turn up from other sites, apparently interested in taking the time to get feedback and information, and find themselves dragged continually over the coals for perceived mistakes.

It's happened at least twice recently - crazipper got a similar reception when he turned up here after his recent article. I would hope that Beyond3D can be a place where enthusiastic people can turn up, debate and learn about 3D graphics without coming under continual attack. I personally post here because this is a site with highly intelligent and informed people, but at the moment knee-jerk criticism seems to come too easily to people, and consistently polite and constructive criticism is becoming much more difficult to find. I'm finding it a bit tiring, to be honest.

Because moderation is relaxed here it is up to us to provide a friendly environment for discussion if that's what we want.

Is that what we want or not?

I'll get off my soap box now. o_O
 