Asking Tim Sweeney about NVIDIA and more

Discussion in 'Beyond3D News' started by Reverend, Sep 29, 2003.

  1. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    When looking at the NV3x chips the following ideas leap to mind.

    1) Nvidia expected to release them much earlier than they actually did, with the plan being to have them supplant the Gf4 Ti series while ATI was still fielding R200 based products.

    2) Nvidia expected that the majority of games during NV3x's lifespan would be DX8.1 based, with only a few DX9 games coming out as they got ready to transition to NV4x based products. Good DX8.1 performance with basic DX9 support was therefore far more important than raw DX9 performance. Poor PS2.0 shader performance wouldn't have been important if we'd seen GfFX cards in quantity in mid-2002.

    3) Nvidia expected ATI's R300 to also be a transitional part, released later than and with performance inferior to the GfFX series in current games. The R100 had not been able to outperform the Gf2, and the R200 ended up coming in between the Gf3 Ti500 and the Gf4 Ti4200. ATI's track record suggested its most likely new part would perform about like a Ti4600, carry a more advanced featureset, and arrive at about the same time as or later than the NV3x-based cards. NV3x would have been able to handle that. Instead they got hit with a strong DX9 part that came out well before NV3x, and the rest is history.

    Unfortunately they guessed wrong on all counts.
     
  2. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Walt, at some point, both companies were making their 'vision' of DX9.

    At some point, Microsoft took all the IHV inputs and came up with the final API.

    At that point, one was closer to what was chosen than the other.

    And that would put one company at a disadvantage. You discount the time it takes to make changes to ASIC designs, particularly architectural ones. It's not a few months, it's many months.


    It's not like it's a novel concept. Parhelia suffered somewhat the same fate, as did the VSA100.

    There's no conspiracy involved, no diabolical switch-a-roo, just facts of doing business where multiple competitive parties are operating in parallel attempting to influence a standard.
     
  3. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    Re: Microsoft/Nvidia Fallout

    What if the "choosing" was based on capability?

    Which NV3x strengths has MS failed to expose in the SDK? SINCOS isn't an R3xx strength, nor are ddx/ddy, nor FP16.

    What R3xx strengths did MS "unfairly" exploit?

    Not slowing down when using significantly more registers than PS 1.3? Why doesn't the ability to handle more registers simply demonstrate that the R3xx is more capable in that regard?

    Using FP24? Why does the R3xx still compare favorably to FP16, and why can so much still be done with FP24...why can't it be that FP24 really is good enough for the...err...things it is being used for? And when FP16 fails to be useful, why can't that be the fault of FP16?
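    As a concrete illustration of that precision argument, here is a minimal sketch in plain C++ (not shader code; the mantissa widths are simply the published s10e5, s16e7 and s23e8 layouts) that finds where each format stops representing whole numbers exactly:

        // Hypothetical illustration: find the largest whole number each pixel format's
        // mantissa can still hold exactly. Stored mantissa bits: FP16 = 10, FP24 = 16,
        // FP32 = 23 (each with one extra implicit leading bit).
        #include <cmath>
        #include <cstdio>

        // Round x to 'storedBits' stored mantissa bits (storedBits + 1 significant bits).
        double quantize(double x, int storedBits) {
            if (x == 0.0) return 0.0;
            int exp;
            double m = std::frexp(x, &exp);                   // x = m * 2^exp, 0.5 <= m < 1
            double scale = std::ldexp(1.0, storedBits + 1);   // one extra bit: the implicit 1
            return std::ldexp(std::round(m * scale) / scale, exp);
        }

        int main() {
            const struct { const char* name; int bits; } fmt[] = {
                { "FP16 (s10e5)", 10 }, { "FP24 (s16e7)", 16 }, { "FP32 (s23e8)", 23 }
            };
            for (const auto& f : fmt) {
                double n = 1.0;
                while (quantize(n + 1.0, f.bits) == n + 1.0)
                    n += 1.0;                                  // stop at the first gap
                std::printf("%s holds integers exactly up to %.0f\n", f.name, n);
            }
            // Prints 2048, 131072 and 16777216 respectively: FP16 runs out of exact
            // texel addresses on large lookup textures, while FP24 has ample headroom
            // for anything DX9-era content actually computes per pixel.
            return 0;
        }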

    Being able to do texture ops independently of arithmetic processing? Why isn't that just plain "being able to do more instructions per clock per pipe"? How is that unfair?

    Vec3/scalar co-execution? Wasn't explicit coissue removed in the move to PS 2.0?

    ...

    That seems to leave one item:

    Floating point textures as render targets? Well, there are Direct3D's apparent expectations for what can be done with textures, which preclude simple exposure of what the NV3x is capable of: a whole class of capabilities is associated with textures as part of the API expectation. I do think it is unfair that what the NV3x can do isn't exposed at all; I'm just not so sure that this is inherently the fault of MS, rather than a result of other hardware being capable enough to allow this expectation to continue to make sense (i.e., ATI took this expectation into account when designing, and nVidia did not).
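    For reference, a hedged sketch of the API side of this (it assumes the DirectX 9 SDK headers and is not anyone's shipping code): Direct3D 9 lets an application ask, per floating-point format, whether the installed driver exposes it as a render target and whether it can also be filtered when later read as a texture -- which is exactly the kind of "texture expectation" being discussed.

        // Sketch, not production code: assumes the DirectX 9 SDK (d3d9.h, d3d9.lib).
        // It asks the runtime, per floating-point format, whether the installed driver
        // exposes it as a render target and whether it can also be filtered as a texture.
        #include <d3d9.h>
        #include <cstdio>

        int main() {
            IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
            if (!d3d) return 1;

            const struct { const char* name; D3DFORMAT fmt; } fmts[] = {
                { "A16B16G16R16F", D3DFMT_A16B16G16R16F },
                { "R32F",          D3DFMT_R32F },
                { "A32B32G32R32F", D3DFMT_A32B32G32R32F },
            };
            for (const auto& f : fmts) {
                // Usable as a render target at all?
                HRESULT asTarget = d3d->CheckDeviceFormat(
                    D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
                    D3DUSAGE_RENDERTARGET, D3DRTYPE_TEXTURE, f.fmt);
                // Usable with texture filtering when later sampled?
                HRESULT filtered = d3d->CheckDeviceFormat(
                    D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
                    D3DUSAGE_QUERY_FILTER, D3DRTYPE_TEXTURE, f.fmt);
                std::printf("%-14s render target: %-3s filtering: %s\n", f.name,
                            SUCCEEDED(asTarget) ? "yes" : "no",
                            SUCCEEDED(filtered) ? "yes" : "no");
            }
            d3d->Release();
            return 0;
        }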

    Is this what you were thinking about? What about the other factors?

    Is it MS's fault for not expending even more effort specifically for the NV3x, and altering API expectations to add the complexity for special casing based on texture type (which, IIRC, is one aspect of why the solution isn't simple)? Is it nVidia's fault for simply not making their hardware flexible enough for Direct3D's expectation on floating point textures? One factor is whether nVidia had reason to be surprised on the API behavior in this regard...did something change for DX 9? Or, could it be, that nVidia failed to execute properly, and this "unfairness" is an unfortunate effect of that being not good enough?

    Considering nVidia's efforts at promoting Cg, their other performance issues, and their delay in releasing the NV3x, all while MS was working on (and has now shipped) the ps_2_a profile (along with exposing the other points above), it seems that MS has demonstrated more dedication to getting the API working for NV3x than nVidia has. If you have some reasons for disagreeing, please share, but it doesn't seem to make sense to discount that effort and say it came down solely to MS choosing one IHV over another. :shock:

    Overall, it looks to me like the only NV3x weaknesses MS "chose" to leave out of the spec are the ones that simply were not "good enough". I think register limitations and computational deficiencies fit this. I also think it is evident that NV3x strengths are indeed exposed. As for the floating point texture behavior: was the choice for universal texture capabilities made at the last minute, or not? That it is the result of deficiencies and nVidia's mistakes seems clear; whether MS shares the blame is less clear.

    This doesn't seem to make sense as a comparison at all. The ps_2_a HLSL profile and the extended PS 2.0 spec are still alive, and growing, right now. FP32 isn't summarily dismissed as an option just because it is a "bit more" than is generally required. The NV3x had, and continues to have, more effort directed towards its architecture by developers, including MS.

    How more effort being directed at a particular hardware is unfair to that IHV, I do not see.
    It seems fairly simple: the NV3x's woes follow from its deficiencies, and its strengths, such as they are, are exposed. That it still compares unfavorably is because, fairly evaluated, it should. :?:
     
  4. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    I never said "unfairly". Well, OK, I didn't say "well, it's not an unfair burden, but it is a burden", but I certainly didn't mean to suggest that "woe is NVIDIA, they've been unfairly treated".

    It's a fact of life that MS made choices toward the end of the product development cycle as to what it thought needed to be in the API, and all IHVs had to gamble that their little 'twist' would be "in" so that the other companies would be put "out".

    NVIDIA's product is obviously inferior in processing power, though it does seem to excel in some other features (longer shaders, higher precision, branching).

    Had ATI's product been required to implement some of those features late in the game, we'd likely be looking at a different landscape. Not completely different, of course, but different enough. (As with the VSA100, for example: its pixel shaders ended up entirely unused because they didn't meet the generally accepted minimum spec.)

    I think it's telling that the ATI product implements the spec nearly perfectly, while there are aspects of the NVIDIA product that go beyond it (or to the left or right, or below, if you prefer), which suggests to me that either:

    a) Both companies knew what it'd be all along, but only ATI was smart enough to actually do it
    b) ATI's product was (somewhat) ex post facto made the golden rule, and NVIDIA's product had to have engineering resources redirected to meet that golden rule as best they could late in the game. (The FP16 vs. FP24 thing is along those lines)

    But you're right, there's no excuse for the lower performance of the NV3x parts other than misjudging where the bar would be.
     
  5. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Russ, the whole point of the API standard is that it's a standard of *co-operation* not competition. nV30 was late, Parhelia was late, and so was the VSA-100, but that's neither here nor there.

    Perhaps, though, that characterized nVidia's mistake--seeing the standard as a competitive lever it might use to influence the market in its own favor, instead of a co-operative effort to advance the API and support it. I mean, I'll wager that nVidia knew everything ATi knew about DX9 support, and learned it at the same time as ATi. Really, I just don't see how we can get past the point that ATi simply designed and shipped a much better DX9-supporting chip than nVidia did in exactly the same time frame (only nVidia shipped much later than ATi). ATi's pixel shading performance, fp performance, and pixel-per-clock performance are all much better than nV3x's, even before counting all of the other DX9 features the R3x0 either has that nV3x doesn't, or implements better than nV3x does. Those are matters of *architecture*, Russ, not matters of DX9 compliance. ATi simply built a much better chip around a much better architecture.

    You know, if nVidia's ASIC development program is so stilted and inflexible that it ships its DX9 chip 7-8 *months* after ATi shipped its own DX9 chip (longer if you don't count the aborted nV30 as "shipping") and yet is still so far behind the curve, that's a testament to some kind of internal failure in nVidia's gpu design process, wouldn't you say? I don't think the ASIC line of reasoning is much of an excuse, accordingly. Even if the design was stuck in the ASIC process, it's still an inferior architecture.

    Basically, I think there's no conclusion but that jumping to nV3x from nV2x has caused nVidia lots of serious problems, not the least of which is inferior DX9 support for nV3x. The other problem the company has had all year is .13 micron yields for its chips. Neither architecture nor yields are covered by the M$ DX9 spec. The spec simply says, "Here's what we're supporting for DX9." It didn't tell ATi or nVidia how to go about designing a chip to support those specs. That's a different process entirely.
     
  6. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    I think a big part of the problem is that there are some people to whom it is utterly inconceivable that Nvidia could have an inferior part. They cannot believe that on this product cycle, for whatever reason, ATI produced a generally superior part. They believe Nvidia is simply better than any competition, and any evidence that disputes this must be wrong.

    They explain the 9700 Pro's defeat of the Ti4600 by shifting all the blame to TSMC and its difficulty transitioning to .13 micron. Had the NV30 come out on time, the United ATI World Conspiracy would have been unable to bury it and their plans would have been foiled.

    It's that mindset which is the problem, because that is what has allowed Nvidia to get away with as many tricks and false claims as they have.
     
  7. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Nvidia the next 3DFX?

    Interesting as this all is I find fault in none of the players:

    1.) Nvidia felt they had the market cornered and could do what they wanted. They created the FX code path as a way to do real-time CG, set up the "Way It's Meant to be Played" program to encourage developers to use their code path, and created FX demos to show off what developers could do if they used their code paths.

    2.) ATI did a great job of showing up when Nvidia didn't need them, and also produced a card that followed all the rules of DirectX 9. They certainly played their best card at a time when Nvidia was just playing the market and hoping they had time to make money off of their offering. Of course ATI didn't let off on the pressure, and Nvidia played catch-up.

    3.) MS wasn't going to let Nvidia push them around. (Of course, what if they had? Would things be different, better?...hmmmm) As for OpenGL, from what I hear those games that are working closely with the FX code path perform pretty well (Doom3).

    4.) Game developers (Valve) are not going to change so late in the game; they focused on DirectX 9 as a goal long before Nvidia wanted to push everyone to FX. With a game already in development for 5 years, of course developers will scramble to make the Nvidia card at least function as well as possible, but it's already too late to use the card the way Nvidia had intended.

    5.) Consumers, led to believe anything and everything that every reviewer, newscaster, and company says, get confused. Everyone wants to know how Nvidia could lose to ATI. I don't want to blame anyone, since to me it looks as if things just played out poorly. Who knows what would have happened had the FX path become a standard? Does anyone care? Of course not; now it's all about playing those games, and if those games need DirectX 9, that's what we want.

    We shall see if Nvidia can save face with all this. ATI is certainly loving the hype, and at least we have competition where it was needed in the first place. Nvidia, being the larger player, may just pull out the big guns like Intel does to AMD.

    I'm on the fence at this point.
     
  8. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    Oh goody, you brought up Doom.

    Yes, the NV3x cards do perform very well in that game using their special code path. However, this path is game-specific rather than engine-specific, and the performance gain does not necessarily transfer to other games built on the same engine. It also uses a lot of partial-precision shaders and Nvidia-specific OpenGL extensions.

    Unfortunately, when you run Doom3 on an NV3x card using the standard OpenGL ARB2 rendering path with no IHV-specific extensions, it runs at about half the speed of the R300.

    So just like in Half Life 2, the NV3x is significantly slower on the API-standard path than its competitor.
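    As a rough sketch of what choosing between those paths looks like in practice (this is not id Software's actual code; it assumes an OpenGL context is already current), a renderer can simply check which fragment-program extensions the driver advertises and prefer the vendor-specific one when present:

        // Sketch only: GL_NV_fragment_program is the NVIDIA-specific route;
        // GL_ARB_fragment_program is the vendor-neutral path that NV3x and R3xx expose.
        #ifdef _WIN32
        #include <windows.h>       // required before GL/gl.h on Windows
        #endif
        #include <GL/gl.h>
        #include <cstring>
        #include <cstdio>

        enum RenderPath { PATH_NV30, PATH_ARB2, PATH_NONE };

        RenderPath pickFragmentPath() {
            const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            if (!ext) return PATH_NONE;                        // no context / query failed
            if (std::strstr(ext, "GL_NV_fragment_program"))    // vendor-specific path first
                return PATH_NV30;
            if (std::strstr(ext, "GL_ARB_fragment_program"))   // standard ARB2-style path
                return PATH_ARB2;
            return PATH_NONE;
        }

        int main() {
            const char* names[] = { "NV30 path", "ARB2 path", "no fragment programs" };
            std::printf("Selected: %s\n", names[pickFragmentPath()]);
            return 0;
        }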
     
  9. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    I like the IDEA of a community-run benchmark, but I think it misses the point. First of all, and slightly irrelevant, RightMark doesn't work at ALL for me yet; even the config tool crashes so far. But I'll pore over the readme more to see what's up with that.

    More importantly, what the past year has proven is that industry-standard benchmarks are passé. What I am enjoying seeing is all the review sites using different benchmarking tools from all over the place. I've seen demoscene stuff, sellout demoscene stuff, games, betas, and some healthy application of FRAPS, and I'm loving it. Now, it makes life very difficult for the average gamer who isn't going to read several sources for their info, but screw them. They usually don't even read ONE source first-hand. Their info can't GET any worse.

    So for us, and I use "us" liberally since I am just a lurker/casual gamer, I say we continue to use publicly available benchmarks, compare and contrast, and USE OUR BRAINS. If the whole reviewing world used "OpenSourceMark", for example, then you'd just see IHVs writing adaptive drivers for that 'mark. Plus, I don't give a rat's ass how GenericMark2k5 runs on my radeon. I want to see games, and maybe some 'licious demoscene demos ;9~

    FRAPS, FRAPS, FRAPS, I say. Just FRAPS everything. I want to see how every game this year performs, if I can. Just my relatively uninformed pennies.

    PS I guess we'll need some OGL benchmarking, too. But you know what I mean?? Keep THEM on THEIR toes. Why should we pick ONE program to let them optimize for? If B3D writes some shader code, I trust them somewhat, but it honestly won't carry as much weight in my book as FRAPSing all of this year's game demos. Maybe someday FRAPS will be unreliable, too, or maybe it already is and I don't know it, but the point is, I want reviewers to benchmark whatever we can get our hands on, not whatever is THEORETICALLY the best test of one possible game feature.
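    In the spirit of that "FRAPS everything" suggestion, a bare-bones frame-time logger is easy to sketch in plain C++ (renderFrame() below is a placeholder; FRAPS itself hooks a real game externally, which this does not attempt):

        // Sketch of FRAPS-style measurement: time every frame, then report an average
        // and a worst-case percentile rather than a single synthetic score.
        #include <algorithm>
        #include <chrono>
        #include <cstdio>
        #include <vector>

        void renderFrame() { /* placeholder for the application's real frame */ }

        int main() {
            using clock = std::chrono::steady_clock;
            std::vector<double> frameMs;

            for (int i = 0; i < 1000; ++i) {
                auto t0 = clock::now();
                renderFrame();
                auto t1 = clock::now();
                frameMs.push_back(std::chrono::duration<double, std::milli>(t1 - t0).count());
            }

            std::sort(frameMs.begin(), frameMs.end());
            double total = 0.0;
            for (double ms : frameMs) total += ms;
            const double avg = total / frameMs.size();
            const double p99 = frameMs[frameMs.size() * 99 / 100];   // worst-case feel
            std::printf("avg %.3f ms (%.1f fps), 99th percentile %.3f ms\n",
                        avg, avg > 0.0 ? 1000.0 / avg : 0.0, p99);
            return 0;
        }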
     
  10. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    Then why did you respond at all to WaltC asking about an "unfair burden"? Things like this are confusing...did WaltC dispute that nVidia was at a disadvantage? It seems clear that whether the burden was "unfair" was the entire point, not whether there was a "burden" at all.

    Remaining points of disagreement besides the above confusion:

    Both IHVs seem to be "in" to a reasonable degree, as my discussion of exposed strengths covered. The link you (seem to) propose, where one IHV being "in" intrinsically requires that another IHV be "out", appears fallacious. It would seem nVidia is "out" solely because, and only as far as, it failed to perform. In any context that doesn't consider only nVidia's interests, this outcome is fair, as we've covered.

    You are listing things based on an IHV having them, not on how well the IHV delivered them, or how much of a necessity they are. For real-time usage (gaming):

    • Longer shaders: don't make much sense for this generation's performance capabilities.
      In any case, nVidia isn't prevented from exposing this.
    • Higher precision: a nice possible advantage, but not a necessity for this generation of functionality and performance. As I discussed, FP24 is logically shown to be "good enough", where FP16 might not be.
      In any case, nVidia isn't prevented from exposing FP16 and FP32.
    • Inferior processing power: for the above to matter, you have to offer enough processing power to take advantage of them. Being deficient pretty much precludes making them part of the minimum requirement. This is aside from the "shorter" shader length and "lesser" precision having been demonstrated to be quite sufficient for DX 9.
    • Branching: in terms of the pixel shader branching support the NV3x has, what is the application of it that simply cannot be done by the R3xx? We aren't prevented from finding out (except maybe by the hardware problems of the NV3x), as branching is exposed in the API (see the caps sketch just after this list).
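    To make "exposed in the API" concrete, here is a small sketch (DirectX 9 SDK assumed; the numbers in the comments are the base ps_2_0 minimums) of the caps a game queries to see the extended-2.0 items above -- instruction slots, temp registers, flow control and predication:

        // Sketch: query the pixel shader version and the ps_2_x extensions that go
        // beyond the base ps_2_0 spec.
        #include <d3d9.h>
        #include <cstdio>

        int main() {
            IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
            if (!d3d) return 1;

            D3DCAPS9 caps;
            if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
                std::printf("pixel shader version : %u.%u\n",
                            (unsigned)((caps.PixelShaderVersion >> 8) & 0xFF),
                            (unsigned)( caps.PixelShaderVersion       & 0xFF));
                std::printf("instruction slots    : %d (base ps_2_0: 96)\n",
                            caps.PS20Caps.NumInstructionSlots);
                std::printf("temp registers       : %d (base ps_2_0: 12)\n",
                            caps.PS20Caps.NumTemps);
                std::printf("static flow control  : depth %d\n",
                            caps.PS20Caps.StaticFlowControlDepth);
                std::printf("predication          : %s\n",
                            (caps.PS20Caps.Caps & D3DPS20CAPS_PREDICATION) ? "yes" : "no");
            }
            d3d->Release();
            return 0;
        }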

    Again: All of the items you list are exposed in the DX 9 API. I don't understand how they relate to nVidia's burden at this point, but I think the next statement is the key to that:

    Do I understand you correctly as maintaining that the NV3x's "not unfair burden" is due to ATI's capabilities not being prevented from being exposed? I wouldn't disagree; I'd just wonder what the point was in mentioning it in a conversation about "fairness" without some clarifications that seem absent.

    This seems to make it clear: you are proposing that nVidia is under the burden you mentioned you meant to communicate, due to ATI not suffering the fate of the "VSA 100 pixel shaders".

    Well, I'd agree that competition is a burden if you fail to successfully compete. :?: This seems to be the gist of the conversation? If this was your entire point, no clarifications are necessary.
     
  11. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Being picked the golden example makes "getting it right" that much easier. You presumably have expended exactly as much effort as required (by definition). If you exceed the standard, or do things in an alternate manner, you spend more energy.

    If you disagree that being the "winner" of the standards war has benefits and by definition puts your competitors at a disadvantage, then there's not much to discuss.

    Actually, there's not much to discuss anyway. I've laid out what I think. You're free to disagree.

    edit: The more I think about this, I'm pretty sure PS 1.0 was a Rampage thing. I wonder if that isn't some of what killed Rampage and, in some way, helped kill 3dfx.

    The engineers propose their texture computer, and Microsoft likes it enough to call it PS 1.0, only at some later date to decide that PS 1.0 wasn't "good enough" and that PS 1.1, as offered by some other IHV, was required. Back to the drawing board for some major revisions.
     
  12. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    :?: Huh? What on earth have I posted that got this response?

    (1) I have to post my entire "long-winding, blabbering" email to Tim because the fact that Tim didn't answer all those questions of mine should give us hints... and probably tells us just exactly how part of this 3D and gaming industry is shaping up.

    (2) Telling folks that I correspond regularly with Tim (or anyone else, for that matter) is important info, IMO. I don't always "advertise" the fact that Tim and I trade emails regularly, but when it comes to what I personally regard as an important matter (like the point of this news post... btw, did you get why I posted this as news?), I see no harm in telling folks Tim and I correspond on a regular basis. It lends credence to opinions I express on the matter. It does not mean I think I'm somewhere up in the clouds and need to show it to everyone.
     
  13. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    Russ, reading your post raised some questions for me. Do you think MS had enough information to estimate the level of DX9 performance of the nV3x? If the info they had suggested it would in fact have had good performance, then they should have chosen that implementation for the added benefits you outlined. If the information MS had suggested its DX9 performance would not be good, then they made the only decision they could. Why mess up ATI's design when all it would do is give us two poor-performing DX9-level architectures? Of course this brings up the question: has nV known all along what the performance level of its nV3x line would be?
     
  14. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    This statement takes as a given that "getting it right" followed specifically from "being picked the golden example".

    I disagree with this, and propose that "being picked the golden example" followed specifically from "getting it right".

    This is not just switching word order in a sentence, this changes the meaning of the sentence completely.

    What definition? :shock: The (useful) effort expended is in how well you executed your goals...being "picked the golden standard" (fairly) just means where you aimed your efforts was the most useful.

    By this logic, the 5200 or 5600 was more "effort" than the R300...at least, if you ignore the capabilities the R300 has that the NV3x does not. Besides that, what about things relating to performance?

    For the NV30 or NV35 versus R300...what about 8 pipelines, vec3/scalar, texture ops, the now-revealed additional processing units per pipe? Did those take no effort? How did nVidia fail to execute them, then?

    I don't disagree that the IHV who happens to be the one you propose "won a standards war" has their current main competitor at a disadvantage.
    I disagree that such benefits, and having a competitor at a disadvantage, are "by definition" the result of being declared a "winner" of a "standards war". The latter is what you seem to be proposing, as per my "in" and "out" discussion.

    It seems to me that:

    • There isn't just one "winner" of the standards war (see my discussion about nVidia's strengths being exposed).
    • The "winner" (not of a standards war, but of a "good implementation war") that there is at this time "won" by delivering hardware that put the competitor at a disadvantage. Please note the causality..."hardware->winner", not "standards excluding competitor->winner".
    • It is fair that the "winner" receive benefits from this accomplishment.

    Your phrasing seems to disagree with this.

    If you don't think it disagrees with any part, please reflect on my first three sentences again. I can provide a further example if necessary, preferably via PM if no one else finds it necessary.
    If you don't disagree, you simply didn't get that across to me...you can correct that or not as you see fit.

    As it stands, I've given my reasons why I think your phrasing seems to be in error.

    OKs.
     
  15. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Who the heck are you calling a troll? Tim specifically hinted that game developers are backed by different IHVs. HL2 is as optimised for ATI as Unreal is for Nvidia.

    The so-called "superior" hardware has more to do with financial powers moving in the dark. We shall see once FX-optimised games are out. ATI fanboys are in for a rude shock. :wink:
     
  16. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Transition to 0.11 micron technology will probably start in the second half of 2004, about a year and a half after the first 0.13 micron chips left the TSMC foundry. Hence, all of ATI’s next-generation VPUs will be made using 0.13 micron technology, while future-generation graphics products, such as the code-named ATI R500, will be manufactured using 0.11 micron technology. It is also possible that ATI will make a less complex graphics processor for the mainstream or value market segment using 0.11 micron technology to evaluate the process in the second half of 2004.

    http://www.xbitlabs.com/news/video/display/20030929004713.html

    The already well-known ATI R420 graphics processor, probably boasting a lot of new innovations, such as Shaders 3.0 and DirectX 9.1 support, will be announced at Comdex. ATI will try to ramp it up as quickly as possible in order to deliver the graphics cards for the pre-Christmas season; however, there are currently no indications of whether the company has a chance to succeed.

    Moreover, there will be a mainstream graphics processor, currently known as ATI RV420, intended for the performance-mainstream market. It will make its appearance after the R420 and will inherit its features and capabilities.

    In general, expect the ATI R360 to come in August or early September, the R420 to appear late this year or early next year, and the ATI RV420 to come in the first quarter of 2004. The RV420's arrival probably depends on the launch of the higher-end R420 product.

    http://www.xbitlabs.com/news/video/display/20030710162332.html

    Just like NVIDIA’s representatives during the most recent conference call, ATI’s VP of the Desktop division also did not say anything about the timeframe of the code-named R400/R420 next-generation architecture product, but said that it contains hundreds of millions of transistors and will be made using the [now] mature 0.13 micron fabrication technology. We should note that the next high-end VPU from ATI, expected this Fall and known as R380, will still be made using 0.15 micron technology.

    http://www.xbitlabs.com/news/video/display/20030812141326.html

    According to the information from the same web-site, which watches all the rumours from all around the web all the time, the NV40 will have an 8x2 architecture, include 175 million transistors, and be equipped with 1500MHz memory. The target clock of the chip is 600MHz. The part has not been taped out yet, according to the source, despite earlier claims.

    ATI’s R420 VPU is expected to have a 12x1 architecture, blazing core speeds, and skyrocketing memory. There is no information about its tape-out, but I would doubt its successful silicon implementation at this point.

    The fourth generation of graphics processors will support the additional functionality of Pixel and Vertex Shaders 3.0, pushing the limits of 3D even further. But this will only happen next year, we should add...

    http://www.xbitlabs.com/news/video/display/20030911145704.html

    As we revealed some time earlier, ATI Technologies will have a top-to-bottom lineup of PCI Express visual processing units next year, including the brand-new code-named R423 designed for enthusiasts and the RV380 as well as the RV381 for the mainstream and entry-level market segments respectively. The R420/423 utilise some new architectural achievements, whereas the RV380 and RV381 will still be based on the “good-old” R300 design principles.

    http://www.xbitlabs.com/news/video/display/20030916161930.html

    The Inquirer web-site claims that this is the well-known code-named ATI R420 Visual Processing Unit coming out next year. Some sources confirmed the claim, even though the project name for the R420/R400 was “Loki”, not “Viper”. The chip is installed on a graphics card for a PCI Express x16 slot, and currently there is no cooler on the graphics card.

    Other unofficial sources said that a powerful graphics processor from ATI was taped out very recently.

    It is rumored that graphics cards powered by ATI R420 “Viper” graphics processors will utilize GDDR3 memory from Micron at speeds of up to 1600MHz. This may be a reason why Micron’s logotype is on ATI’s upcoming VPU.

    http://www.xbitlabs.com/news/video/display/20030925033540.html

    The highly anticipated Half-Life 2 game will have a major bug with current DirectX 9.0 hardware, making it impossible to enable Full-Scene Anti-Aliasing, a popular feature that dramatically improves image quality in games. Apparently, there is a limitation in DirectX 9.0 and/or DirectX 9.0-compliant hardware that will not allow the feature to be enabled on certain graphics cards if a workaround is not found.


    According to Valve officials quoted in the forums at the HalfLife2.net web-site, there are problems with the way that current hardware implements FSAA. If you enable it, you will see a lot of artifacts on polygon boundaries due to the way that current graphics processors sample textures with FSAA enabled.

    Valve continued that this is a problem for any application that packs small textures into larger textures. The small textures will bleed into each other if you have multi-sample FSAA enabled.

    Currently both leading graphics chip designers use multi-sampling or hybrid multi-sampling + super-sampling methods for FSAA.

    The developers of the legendary Half-Life game said that drivers are not likely to solve the problem; however, it can still be solved for graphics cards based on VPUs from ATI Technologies, such as the RADEON 9500, 9600, 9700 and 9800 series. As for the NVIDIA GeForce and GeForce FX series, there is practically no chance of finding a workaround, according to Valve.

    Some industry sources indicated that this FSAA problem is a known one and is to be addressed in DirectX 9.1 and next-generation graphics processors with Pixel Shaders 3.0 and Vertex Shaders 3.0, such as ATI Technologies’ code-named R420 and NVIDIA’s code-named NV40 VPUs and their derivatives. Both next-generation products will come later than Half-Life 2, which is expected to be available by October.

    http://www.xbitlabs.com/news/video/display/20030718155730.html
     
  17. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    That's rubbish...:) You obviously didn't read Valve's copious and well-detailed presentation as reproduced on several sites--Valve in fact stated the opposite of what you assert. They stated that *no optimization* was required for the DX9 path on R3x0, and that coding the DX9 path for R3x0 (which needed no specific code path) took 20% of the time it took to code an optimized, mixed-mode path for nV3x. Further, they stated that the benefit from doing an optimized nV3x path was so scant they wished they'd simply set up the nV3x to run the standard DX8.x code path from the beginning, as it would have saved them a lot of wasted time. Valve's results simply mirror those seen in every other DX9-specific benchmark and game released to date--so they are not unusual in any regard.

    Also, the way I read Sweeney's remarks, he was saying, simply, that if the Rev thought developers with IHV deals were less than truthful or forthcoming as a result of those deals, probably the best way for him to test things was to write his own bench. I certainly did not see any implication whatsoever that he, or Valve, was being "paid off" by IHVs to tamper with their code to favor specific IHVs.

    Don't be a dunce...:) Valve enumerated very well the reason most developers will not be doing the same type of nV3x code path they did--at considerable cost in resources and time--which is that most developers don't have the resources and time to spare. Valve's advice to smaller developers was to avoid making their mistake and just use the DX8.x code path for nV3x support instead. I'm quite sure this is what we will see...:wink:
     
  18. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    First off: HL2 is not optimized for ATI. It is written to the DX9 spec, and since ATI's DX9 hardware adheres more closely to the DX9 spec, Half-Life 2 performs better on ATI. It is true that different developers have signed deals with different IHVs, but that doesn't mean one IHV can't have superior hardware to another. What Tim's hints meant was that while the deal with Nvidia means he shouldn't say anything negative about them, that doesn't mean he is blind to the limitations of their hardware. Every DX9 shader test yet devised indicates the Radeons have stronger shaders than the GfFX cards. Tim Sweeney knows that, and since he is telling us to write to the API (not FX-optimized code), he is tacitly saying ATI has the better product without coming out and saying it in as many words.

    FX-optimized games are either already out or on the near horizon, and it's not the ATI users who are experiencing a rude shock. Valve spent five times as long optimizing for Nvidia's architecture as they did writing the standard DX9 codepath. ATI uses the standard codepath just fine, but even with heavy optimizations Nvidia can't keep up.

    Why can't you admit that this time Nvidia hasn't won the round? Your posts read like the work of one of those people I described in a previous post who don't seem able to conceive of the possibility that ATI could have superior hardware to Nvidia.
     
  19. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Gabe can state all he wants, but IHVs and big marketing tactics are the order of the day.
     
  20. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    What's better, 12x1 or 8x2???
     