GeForce FX: 8x1 or 4x2?

Discussion in 'General 3D Technology' started by Dave Baumann, Feb 10, 2003.

  1. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I think it's rather funny that you put "Dropping support for rt-patches" in there. That was certainly dropped because developers decided not to use it (for whatever reason...but most likely performance).

    As for the performance of PS 1.4 and ARB2, the cards still aren't widely available (there are a few FX Ultras that have made it out, I guess?), and the drivers just aren't optimized yet. It just seems to make sense that it's easier to optimize for the shaders that are closest to the hardware.
     
  2. Mariner

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,288
    Likes Received:
    1,055
    Cripes - 500 replies! :shock:

    Is this the longest thread ever?
     
  3. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    I think it is the longest thread not predominantly consisting of flaming, at least, if not the longest thread ever.
     
  4. Sharkfood

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    702
    Likes Received:
    11
    Location:
    Bay Area, California
    This has to be the most "fanboi"-ish thing I have read in a long time. Talk about taking the Apple standpoint on things ("You don't want to know. You don't have to know.").

    So what you are saying is consumers shouldn't care how many cylinders the engine in their car has, just as long as the manufacturer puts a little check-box on the window that says "Can pull trailers". I can't think of a more absurd and delusional standpoint concerning consumer products.

    How the product behaves with odd numbers of texture layers and in heavily texture-layered games (not to mention numerous shader conditions) cannot be anticipated without knowing the architecture of the product. Obviously, having sites like Beyond3D do exhaustive testing to uncover behaviors without the cold, hard facts can generally suffice for the consumer (i.e. they are actually attaching trailers and pulling various loads behind the closed-hood vehicles here), but the consumer shouldn't have to rely on Dave + Beyond3D to explain how the product will deliver performance and under what conditions it excels or deteriorates.

    It's especially telling when other IHVs/products freely and openly provide no-nonsense, unflubbered hardware specifications. If NVIDIA has nothing to hide or possibly over-sell through exaggerated and false marketing, then what is the real reason? Not a single logical or reasonable explanation exists, unfortunately. They like to seem "mysterious"? And if the details are so superfluous and unimportant, then why are they so closely and secretly guarded? Sorry, it makes absolutely no logical sense at all.
     
  5. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    I agree totally. It is just nice to know for B3D-types. Of course this is under the proviso that they disclose the capabilities of the architecture accurately, which they haven't - there would have been quite a lot less fuss thus far had nV marketed the FX in a "fudge-free" manner right from the very start. It's not as if the pixel/zixel relationship is conceptually out of reach for the average consumer - in fact, I'm sure they could have spun the idea to their advantage.

    Only a *very* small % of people in nVidia know the full architectural details, BTW - most board level guys don't have a clue, they just get the register spec etc.

    MuFu.
     
  6. jpaana

    Newcomer

    Joined:
    Jul 31, 2002
    Messages:
    154
    Likes Received:
    2
    Location:
    Tampere, Finland
    And what you left out:
    Which to me looks like it could do 4 component FMAD and an exotic scalar function in parallel.
     
  7. MDolenc

    Regular

    Joined:
    May 26, 2002
    Messages:
    696
    Likes Received:
    446
    Location:
    Slovenia
    Sireric also wrote on the next page in that thread:
     
  8. jpprod

    Newcomer

    Joined:
    Jun 2, 2002
    Messages:
    9
    Likes Received:
    0
    Location:
    Finland
    So far we've had very limited information on how NV30 really performs. All of the tests have been done with hardware that's not available to the general public, and AFAIK same goes with drivers. The only nVidia chip supporting PS1.4 is the NV30, so the same argument holds there.

    You're also implying that having standard extensions and shaders perform worse than optimized ones is a conscious decision by nVidia? I'd say it's only natural that native extensions perform better.

    This could very well be a hardware-related issue. They chose not to support it in the GF1 probably because the chip was too far along in the design pipeline already. The GF2 did not add any features to the GF1, but was more of a tweak (trilinear pipeline -> dual-texturing bilinear capable) and a process shrink. The next nVidia chip, the GF3, supported EMBM.

    How does not supporting a very uncommon feature like this contribute to forcing something down developers' throats?

    Agreed here, I see no valid reason as to why they dropped RT-patches. Displacement mapping is a feature I see little use for, until it can be done fully via shaders. Matrox is the only one currently supporting it.

    Still, neither of these points seem to relate to your original argument.

    I don't see a problem, as nVidia supports both. CG can be used on any hardware, it's just a higher-level tool for creating shaders. Agreed though that nVidia would not mind seeing CG replacing either DX9 or OGL2 HLSL, but that's something that just won't happen.

    Strongly disagreed. Historically nVidia has embraced standards.
    - They were among the first to release a consumer video card with no proprietary API, only D3D drivers.
    - They were among the first with a full OpenGL driver.
    - They were the first to support fixed-function hardware T&L in consumer graphics, a commonplace feature in professional OpenGL hardware at that time.
    - They were the first on the market with a DX8 shader specification compliant chip.
     
  9. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    I think it was probably due to performance. According to their paper, the Nvidia system used forward differencing - this is fast once you get going, but requires substantial processing to compute new control values.

    These new values have to change whenever you move a control point or vary the level of tessellation so, given that it appears it was the CPU/Driver that did these calculations, that might account for the performance issues.
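    The trade-off Simon F describes can be sketched in a few lines. This is an illustrative example of forward differencing for a single cubic, not code from any actual driver or from Nvidia's paper; the function name and structure are my own. The setup step computes the initial difference table (the expensive part that must be redone whenever a control value or the step size changes), after which each new sample costs only three additions.

```python
def forward_difference_samples(a, b, c, d, h, n):
    """Return f(0), f(h), ..., f((n-1)*h) for f(t) = a*t^3 + b*t^2 + c*t + d,
    using forward differencing: cheap per-sample stepping after a costly setup."""
    # Setup: initial forward differences at t = 0. This must be recomputed
    # whenever the coefficients (control values) or the step h (tessellation
    # level) change -- the source of the performance issue described above.
    f = d
    d1 = a * h**3 + b * h**2 + c * h   # first difference
    d2 = 6 * a * h**3 + 2 * b * h**2   # second difference
    d3 = 6 * a * h**3                  # third difference (constant for a cubic)

    samples = []
    for _ in range(n):
        samples.append(f)
        # Inner loop: three additions per sample, no multiplies.
        f += d1
        d1 += d2
        d2 += d3
    return samples
```

    The inner loop is why forward differencing is "fast once you get going"; the setup is why moving a control point or changing the tessellation level is expensive, especially if the CPU/driver has to do it.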
     
  10. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Look a bit beyond the specifics:

    What does it say to you if a company doesn't support n-patches (the more widely accepted form of hardware patching), slowly stops supporting rt-patches, and follows that with such limited support for displacement mapping? It's obviously an attempt at a gradual phase-out of higher-order surfaces. Granted, HOS haven't always been very good in the past, but there is still a lot of really cool stuff that could be done if only enough hardware supported them.

    It may perhaps be a bit too early to dismiss their poor implementation of PS 1.4 as an attempt to silently kill off or downplay the competition's tech, I agree.



    No, I'm not. I'm implying that putting all your focus into proprietary extensions and doing the standards half-assed is a way of forcing developers to use your proprietary paths. Virtually all companies are capable of making their own extensions run well; the good ones make the standard paths run well. The end result, from the application's point of view, is that almost all cards except nVidia's will run on the standard path, while nVidia cards require a vendor-specific, optimized path.


    EMBM was a bad example, I agree.


    It's less of a problem now that they are finally supporting non-proprietary extensions, but PS 1.4 support is still completely absent. And Cg-compiled code always seems to perform significantly better than HLSL-compiled code, but only on nVidia cards. Yet another example of misplaced focus.


    Notice I said near past. I originally had a comment in about this that the only way nVidia overcame 3dfx was by adopting standards when 3dfx wouldn't, and now nVidia is the one not adopting standards. I left it out because it sounded too much like 'doomsaying' or implying nVidia is going to die out like 3dfx. They won't, but if they keep going against the standards perhaps they should (then again someone going against the current might ultimately prove to be healthy).

    Yes, I know I sound a bit fanboyish, but I assure you I am not. I don't even remotely care which company does best, or which company has the best products. All I care about is The Right Thing to do, and lately nVidia hasn't been doing that.
     
  11. jb

    jb
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,636
    Likes Received:
    7
    Whoaaa. Sorry, good sir, but I have to disagree on this. The issue is that nV did not disclose this to us. While you're correct that average Joe Sixpack won't care, it's still misleading. Some of the previews we saw suggested driver problems could be the reason why the scores were low. Now that we know that part of the time it's a 4x2 instead of a TRUE 8x1, we know that no driver update can ever give us all of the performance of a true 8x1 card in those cases. Thus some folks out there were led to believe that miracle drivers would cure all. Again, we know that driver updates will help.

    The point is, we as consumers need to make nV, and any other company that tries this type of stuff, realize they have to be held accountable for their claims. By saying it's no big deal you are almost saying that it's OK for them to mislead you so long as the performance is there.
     
  12. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    I really don't think it is a big deal if a company doesn't "tell the whole story" regarding ASIC layout, pipeline & TMU organisation/associativity etc. That's their prerogative.

    Of course it would be nice - but half the fun is figuring it all out, huh? ;)

    MuFu.
     
  13. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Agree and disagree. Not communicating that info at all is one thing. Not telling the "whole story" is another thing....but communicating something that is borderline simply not true, is entirely different.

    In other words, I would consider nVidia communicating "8 pipeline architecture" as not telling the whole story, and just woefully vague. However, communicating "8 pixel pipeline" architecture is, to me, misleading at best and not truthful at worst.
     
  14. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    Yep - sorry, I isolated that one point instead of addressing the whole of Chalnoth's post which was a bit misleading.

    One more time for the road:

    What I mean is they could say "our GPU is a small, boiled potato, sitting in a bucket of warm spit..."

    "...it is fully DX9-compliant, supports 4 pixel/8 zixel output per clock etc etc" and most people wouldn't give two hoots about the slightly soggy tuber at the centre of it all.

    I am crazy, BTW.

    MuFu.
     
  15. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    People, they won't disclose everything about their architecture. They may disclose 99.99% but not 100%. Will they ever disclose all the info 100%?
    Have they disclosed all the info about the Riva128 or TNT 100%?
     
  16. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
    I think the point is that we have come to expect them to be considerably more forthcoming and forthright than this latest trend suggests. Hence, it seems highly unlikely that things are going to improve much in that respect.
     
  17. Ollo

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    129
    Likes Received:
    1
    Look, it's fine if they choose not to disclose some facts. But it's not fine to claim things which are simply not true. As simple as that.
     
  18. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I don't blame nVidia at all for not supporting n-patches. They suck.

    As for stopping support of rt-patches, they were never used. That's why support stopped. It has nothing to do with not wanting to make use of higher-order surfaces.

    I still think nVidia wanted to create some sort of programmable primitive pipeline, but it just didn't make it into the NV30. Regardless, there will be programmable displacement mapping once we get chips that support PS 3.0. I don't buy for a moment that they're trying to get rid of higher-order surfaces. They just didn't feel it was the right time (for whatever reason).

    And PS 1.4 is still completely proprietary (only the Radeon 8500 and derivatives support it as the highest-supported shader level). As for Cg performance, where have you seen the figures? I haven't seen any.

    I have problems with anybody who claims to be unbiased. So many people claim it. The truth is that you are always going to be biased, for one reason or another. Your decisions may force you to think a certain feature is very important, or a certain bit of company behavior is very despicable, while another person thinks the opposite. Your experiences and particular situation will always make you biased.

    For me, right now the biggest reason I'm still sticking with nVidia products is Linux support. Right now I cannot use a video card that doesn't work under Linux, and I can't get the 9700 Pro to work under Linux. Granted, it is because there is no agpgart for the nForce2 chipset outside of nVidia's display drivers, but all I need is for ATI to support a PCI-66 mode, and I'd probably be using the 9700 Pro right now (the main reason I still blame ATI is that if I was having the same problem, no agpgart, with nVidia's drivers, they would still work with AGP disabled).
     
  19. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    "Bias sees only bias"..
     
  20. Ostsol

    Veteran

    Joined:
    Nov 19, 2002
    Messages:
    1,765
    Likes Received:
    0
    Location:
    Edmonton, Alberta, Canada
    N-Patches do not suck -- the 3D artists who are too lazy to orient vertex normals properly suck. If that weren't the case, every TruFormed model you see would look great. It's not even a TruForm-specific issue, either: when the normals aren't oriented properly, lighting is inaccurate too.
    Look up the word proprietary. ATI certainly does not own the rights to PS 1.4. They are currently the only ones who "support it as the highest-supported shader level", but anyone who wants to incorporate it into their hardware may.
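    The dependence on vertex normals is direct in the published curved PN triangles construction (the basis of TruForm): each edge control point is the one-third point along the edge, pushed into the tangent plane defined by the nearest vertex normal. A minimal sketch, with my own function and variable names:

```python
def edge_control_point(p1, n1, p2):
    """PN-triangle edge control point near p1 on edge p1->p2,
    given the unit vertex normal n1 at p1 (Vlachos et al. construction):
    b = (2*p1 + p2 - w*n1) / 3, where w = (p2 - p1) . n1."""
    # Component of the edge vector along the normal at p1.
    w = sum((p2[i] - p1[i]) * n1[i] for i in range(3))
    # Project the 1/3 point into the tangent plane at p1.
    return tuple((2 * p1[i] + p2[i] - w * n1[i]) / 3 for i in range(3))
```

    With a correctly oriented normal perpendicular to a flat edge, w is zero and the control point stays on the edge; tilt or flip the normal and the same geometry bulges, which is exactly why carelessly oriented normals make TruFormed models look wrong.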
     