State of 3D Editorial

DaveBaumann said:
Uttar said:
Of course, NVIDIA has other minor advantages in niche markets requiring extreme flexibility, such as their dynamic branching support in the Vertex Shader. All of these are excellent reasons why NVIDIA is still the uncontested leader of the workstation market. ( Although ATI's design win with SGI does seem to be a move in the right direction for them. )

Personally I would contest that. IMO, the workstation market is even slower to wake up to shaders than the game market, due to its reliance on OpenGL. Even though Cg has been around to some extent, and OGL1.5 has now ratified shader extensions and a shading language, I suspect it won't be until OpenGL 2.0 is finalised and we see widespread hardware adoption of it that the workstation market will evolve a little more rapidly.

Yep, I've got to agree with you that the workstation market is extremely slow. Actually, I wanted to say that in my original post, but perhaps I wasn't sufficiently clear:
I said "niche market" most of the time, and not "workstation market", because I'd guesstimate that perhaps 5% of the 3D workstation market currently benefits from this flexibility, and possibly less.

But if you look at workstation reviews, what they'll often do is compare DX7-era performance, while noting that NVIDIA is more flexible, with no mention of NVIDIA's awful performance in anything beyond DX7/DX8-era technology.

So I'd say that, due to these often very poor and misleading reviews, NVIDIA's flexibility and the NV3x's excellent DX7 performance certainly play in their favor in the overall market, which is a much bigger one than the "niche" market this flexibility is really useful for.

Plus, even developers currently doing DX7-era work might think having a "future-proof" card would be useful. Those same developers are unlikely to look much at DX9/OGL2.0-era benchmarks, assuming the results would look similar to the DX7 ones.

All IMO there, of course, mostly speculation based on fact :)


Uttar
 
arken420 said:
Hey this is also my first post on this site.
<snip>
Anyways, I'm tired and want to go to bed, but I hope I pissed a bunch of people off with this post and that it sparks some real, technically backed-up responses instead of the drivel I've been seeing posted in this forum.

Laterz

I will give you a technical response. That was your second post, not your first. ;)

Welcome
 
nelg said:
arken420 said:
Hey this is also my first post on this site.
<snip>
Anyways, I'm tired and want to go to bed, but I hope I pissed a bunch of people off with this post and that it sparks some real, technically backed-up responses instead of the drivel I've been seeing posted in this forum.

Laterz

I will give you a technical response. That was your second post, not your first. ;)

Welcome

No, it wasn't: he replied to our comments later in the thread, so that *really* was his first post ;)


Uttar
 
Something I was wondering lately is whether the "time to market" impact for the R300 is not underestimated.

Imagine if the NV30 had hit the market first and developers had had to discover DX9 with that architecture and its limitations. Don't you think that, if DX9 software had primarily been developed on the NV30, the R300 would not seem so good, since I guess the software would just not use all its potential?



Edit: typos
 
PatrickL said:
Something I was wondering lately is whether the "time to market" impact for the R300 is not underestimated.

Imagine if the NV30 had hit the market first and developers had had to discover DX9 with that architecture and its limitations. Don't you think that, if DX9 software had primarily been developed on the NV30, the R300 would not seem so good, since I guess the software would just not use all its potential?

:oops: CATCH HIM! It's an nVidia employee in disguise! :devilish:


If I'm saying that, it's because when I talked a bit with Brian Burke a few weeks ago, he said pretty much the exact same thing: "If only our card had been out before the R300, developers would have based their work on our hardware instead, and we'd have the lead." ( paraphrase )

Worst part is, he seems to sincerely believe that sort of stuff. Err, I'm sorry guys, but that doesn't matter. The DX9 standard is FP24, and developers can do whatever they want; they won't be able to make the NV30 beat the R300 with FP32 or even FP16.

Unless they use more cos/sin/rcp/rsq than MADs :LOL:
"Introducing: Wind World!
It's all flat shaded, but we swear the position of ALL objects and the related lighting is physically correct, thanks to NVIDIA's next-generation special-purpose FP32 COS/SIN engine*.
Such an amazing design could never have been possible on pre-Cinematic hardware!!!

*: Calculations may be inaccurate due to the usage of FP16 registers, needed to run above 25 FPS."


No, but seriously, if the "current" NV30 had come out first, the difference might be smaller, due to more aggressive usage of FP16 hints by programmers. However, the only way for it to run really fast would have been to use FX12 - and then developers would have been forced to use PS1.3, reducing the quality of today's games. Pretty lame IMO.
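
To make the FP16-hint point concrete for the non-shader-writers here, a minimal, purely hypothetical sketch of what such a hint looks like in DX9 HLSL ( the shader and texture names are made up for illustration, not taken from any real game ). Declaring values as 'half' instead of 'float' tells the compiler they only need partial precision; it then emits instructions with the _pp modifier, which the NV3x can run through its faster FP16 path, while the R300 ignores the hint and executes everything at FP24 regardless:

// Hypothetical ps_2_0 pixel shader, for illustration only
sampler2D baseMap;                       // assumed diffuse texture

half4 main( float2 uv : TEXCOORD0 ) : COLOR
{
    // 'half' = partial-precision hint: FP16 on NV3x, still FP24 on R300
    half4 base = tex2D( baseMap, uv );
    half3 lit  = base.rgb * 0.5h;        // color math rarely needs FP32
    return half4( lit, base.a );
}

That's why "more aggressive usage of FP16 hints" could narrow the gap but never close it: the hint only helps hardware that actually has a faster reduced-precision path.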

But then again, that was never expected IMO; when the NV30 was still scheduled for Spring 2002, the R300 also was, and when the NV30 was scheduled for SIGGRAPH, the R300 was scheduled for late July, eh.


Uttar
 
Lol Uttar, no, I am not working for NVIDIA :p

In fact, I am still angry about the lithography talks that happened some time ago, and I would like to emphasize that I think people underestimated what a smart move it was from ATI to launch a DX9 product even before the DX9 launch, and that not being a 0.13 product was in no way a proof of weakness.

I bet that, with what happened with DX9 in mind, we will see DX10 cards ready way before the DX10 launch from both IHVs.
 
Uttar said:
PatrickL said:
Something I was wondering lately is whether the "time to market" impact for the R300 is not underestimated.

Imagine if the NV30 had hit the market first and developers had had to discover DX9 with that architecture and its limitations. Don't you think that, if DX9 software had primarily been developed on the NV30, the R300 would not seem so good, since I guess the software would just not use all its potential?

:oops: CATCH HIM! It's an nVidia employee in disguise! :devilish:


If I'm saying that, it's because when I talked a bit with Brian Burke a few weeks ago, he said pretty much the exact same thing: "If only our card had been out before the R300, developers would have based their work on our hardware instead, and we'd have the lead." ( paraphrase )

Worst part is, he seems to sincerely believe that sort of stuff. Err, I'm sorry guys, but that doesn't matter. The DX9 standard is FP24, and developers can do whatever they want; they won't be able to make the NV30 beat the R300 with FP32 or even FP16.
:LOL:

Thanks for posting up my sentiments; I hadn't heard about the BB story.

Is that article done yet? Did I mention I'm really looking forward to it? 8)
 
I'm telling you, Uttar, you can really cash in on Dig. Just look at him, all sweating and shaking there! It's like he's going through withdrawal symptoms... ;)
 
PatrickL said:
Something I was wondering lately is whether the "time to market" impact for the R300 is not underestimated.

Imagine if the NV30 had hit the market first and developers had had to discover DX9 with that architecture and its limitations. Don't you think that, if DX9 software had primarily been developed on the NV30, the R300 would not seem so good, since I guess the software would just not use all its potential?

Edit: typos

Being first gives you advantages. If ATI had screwed up instead of Nvidia, and the NV3x had had a lead of almost a year, that would have been a massive advantage.

However, being far and away better than anyone else in the market also has advantages. Even if Nvidia had been first to market, they still would have lost out, because the R300 was so much better in every way. Anything the NV3x can run, the R300 can run better.

Unfortunately for Nvidia, the R300 was not only better, but first to market by a long, long way in the fast-moving graphics card market. Suggesting that getting to market on time would have solved all of Nvidia's problems is wishful thinking at best - it still wouldn't have hidden the NV3x's lackluster performance compared to the R300.
 
PatrickL said:
Lol Uttar, no, I am not working for NVIDIA :p

In fact, I am still angry about the lithography talks that happened some time ago, and I would like to emphasize that I think people underestimated what a smart move it was from ATI to launch a DX9 product even before the DX9 launch, and that not being a 0.13 product was in no way a proof of weakness.

DX9 was supposed to launch at the same time as the R300, but at the last minute Microsoft delayed it for several months. Obviously, by then ATI was committed to launching, and had to go to the trouble of writing DX8 drivers for a DX9 card.
 
cthellis42 said:
Just look at him, all sweating and shaking there! It's like he's going through withdrawal symptoms... ;)
No, no...that's just from doing trick-or-treating with the kids by myself yesterday. It'll clear up in a bit. ;)
 
Uttar has mentioned that the NV30's original target was Spring 2002. I actually think that was never realistic, but I wouldn't have been surprised to see it in Fall 2002. I think Uttar's right that that was the target, though. It's likely that the R300 was late as well. Execution is a very important component of having a successful product, and it's been my experience that most products are not out on time, because it is very hard to estimate how long complicated tasks will take.

I agree with Dave that Nvidia's strong OpenGL performance is likely due to the minimal use of shaders in OpenGL benchmarks. I still wonder, though, whether there are certain OpenGL features used in workstation apps that Nvidia handles in hardware and that the R300 falls back to software for. I only say this because the last time I looked at workstation benchmarks, the NV30 was way ahead of the R300. That was a while ago, though.
 
Uttar said:
If I'm saying that, it's because when I talked a bit with Brian Burke a few weeks ago, he said pretty much the exact same thing: "If only our card had been out before the R300, developers would have based their work on our hardware instead, and we'd have the lead." ( paraphrase )

Worst part is, he seems to sincerely believe that sort of stuff. Err, I'm sorry guys, but that doesn't matter. The DX9 standard is FP24, and developers can do whatever they want; they won't be able to make the NV30 beat the R300 with FP32 or even FP16.

...

No, but seriously, if the "current" NV30 had come out first, the difference might be smaller, due to more aggressive usage of FP16 hints by programmers. However, the only way for it to run really fast would have been to use FX12 - and then developers would have been forced to use PS1.3, reducing the quality of today's games. Pretty lame IMO.

Question is: did you actually set him straight? It always comes across as though the people who talk to the likes of BB or Derek are fed all this crap but don't actually try (pointless as that may be) to correct them, instead preferring to laugh at them behind their backs on sites like this. Not saying that's the reality, just the impression I've gotten.
 
3dcgi said:
Uttar has mentioned that the NV30's original target was Spring 2002. I actually think that was never realistic, but I wouldn't have been surprised to see it in Fall 2002.

I'd say it was realistic before the 3DFX acquisition. But the 3DFX influence ( 4x2+4x0 being part of it ) delayed the whole thing quite a bit, with NVIDIA not expecting them to delay anything, AFAIK.
Remember, the NV20's target was H2 2000. On an 18-month cycle, that means the NV30 was to be launched in H1 2002, and the NV40 in H2 2003.

That's where the Comdex 2003 target for the NV40 came from, and NVIDIA, in their typical "we're god and can manage absolutely any deadline, even when it's not physically possible" stance, hadn't originally accepted delaying either the NV30 or the NV40.
The result for the NV30 was complete panic, and for the NV40, well, I don't know :D

Although, looking at the little NV40 tape-out info we got, two very reliable tape-out dates exist, and from that I can only conclude that the first tape-out failed.
That's still better than the NV30's hundreds of millions of failed tape-outs ;) :p

Dig: For the last time, ULE has been complete for over a week now. I'm STILL waiting for the edited version, which I assume I will receive either in 24 hours or in over 72 hours.


Uttar


---
EDIT
---

Paul, FYI, I did try to tell him he was being overly optimistic, several times. I wish I still had the logs; they all went down the trash can in my recent Windows XP reinstallation. Eh.
During the last amazing times I've spent talking online with BB, I've been called a "chickenshit", accused of "unethical behaviour" and of quoting "off the record stuff", all while he was "not giving a shit".

While my relations with him were pretty good for a while, I once said, "Okay, so I'm warning you in advance: I'm quoting some of what you said in ULE." ( paraphrase )
He was SO pissed off at me for thinking ( and quoting ) that he should know FP32 performance is not 50% of FP16 performance that I don't expect to ever be able to talk with him again.

His conclusion?
"I don't give a shit. I'm great at my job."
( I don't have the exact logs anymore, but I'm pretty sure that was what he said, maybe with one word of difference )

Welcome, friends, to the wonderful world of NVIDIA.


( He was so angry I decided not to quote him in ULE - now, I'm just paraphrasing it :LOL: )
 
Uttar--

I may be out of line here. . .but I've read what you've written publicly about your reasons for bailing out of GPU: Rumor Watch. Then I read your posts here. I get a different impression of what might be driving you to give it up: getting sick of wading thru the muck, and possibly concern that if you wade thru the muck long enuf you might end up. . .well, mucky, yourself.
 
bloodbob said:
Saist said:
bloodbob said:
It wouldn't have to have FP16, only FP32.

Nvidia are trying to raise the minimum precision requirement to FP32 because it is VERY likely that the R420 will still only have FP24 precision, and therefore Nvidia can advertise that they are the only DX9.1-compliant product.

Aside from the fact that there is yet to be any proof of any kind that there will be a Microsoft DirectX 9.1, and counting the fact that Microsoft DirectX 9.0b already covers Pixel and Vertex Shaders 3.0, as well as covering 32-bit precision (although it is not a required part of the spec, while 24-bit is), and tossing onto that the fact that the Microsoft DirectX team has stated publicly that there will be no updates to the DirectX standard until the time of Longhorn...


Makes me wonder there, Bloodbob: where exactly are you pulling this from?

Well, first things first: I never said there was gonna be a DX9.1. I was replying to someone ( I can't find the post now ) who said that DX9.1 / PS3.0 was gonna require FP32. I said that the obvious reason behind this, if it was true, is that, as we know, Nvidia has been pushing for FP32 all along.

Now, of course a company says there will be no more updates, but what happens if they find something that REALLY stuffs up the standard at a late stage? Would they leave the standard completely broken? I doubt it. And at least on a binary level, DX9 is gonna be updated, because DX9.0b does not yet support PS3.0 with HLSL.

Don't confuse the DX9 runtime with the SDK. Microsoft have stated their intention to release updated versions of the DX9 SDK over time, specifically to include updated versions of the HLSL compiler. The runtime is not expected to be updated in terms of functionality (i.e. no DX9.1), but future revisions for bug fixes etc. (9.0c) can never be ruled out.

GP.
 
geo said:
Uttar--

I may be out of line here. . .but I've read what you've written publicly about your reasons for bailing out of GPU: Rumor Watch. Then I read your posts here. I get a different impression of what might be driving you to give it up: getting sick of wading thru the muck, and possibly concern that if you wade thru the muck long enuf you might end up. . .well, mucky, yourself.

Well, some stuff did annoy me: I must admit the ONE thing that disgusted me the most was something a reviewer told me...
"Really, you should continue working on this stuff - it's the greatest industry in the world. You can make millionaires fear you." ( paraphrase )

The last sentence just completely, utterly disgusted me.
Plus, by the time I stopped GPU:RW ( and most of my dealings with sources; what I'm giving you here is mostly a rethinking of old info, or things which were FYI and are now okay to post ), I had begun realizing just how seriously NVIDIA and ATI take rumors.

I had known about ATI's misinformation campaign for a while ( not that they seem to take it very seriously *grins* ), and around the 15th of September, certain NVIDIA positions on rumors were communicated to me.


While ATI employees post freely on Beyond3D, NVIDIA considers forum posts a crime. Their information security department ( and no, that's not just a myth ) never hesitates to use them against targeted employees.


Uttar

P.S.: Once ULE is published and I've responded to all the feedback, expect to see me visiting 3D forums between 75 and 90% less.
 
"Information security department"; that has a rather Orwellian taste to it.

Ever get any heavy breathing from anyone about lawyers, personal liability, etc?
 
geo said:
"Information security department"; that has a rather Orwellian taste to it.

Ever get any heavy breathing from anyone about lawyers, personal liability, etc?
"Vee are vith nVidia ISD and vee vould like you to cease ansvering any qvestions..." :oops:
 