crappy 3dmark score on NV31/34?

Discussion in 'Architecture and Products' started by DOOM III, Feb 28, 2003.

  1. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    NV31 has *some* software-based shader functionality, of that I am 100% certain. It was one of the main stumbling points during R&D.

    NV34 is "DX9-compatible" - it quite blatantly does not have a full DX9 H/W featureset.

    MuFu.
     
  2. PanzaMan

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    6
    Likes Received:
    0
    ...do you want to bet that the NV34 is a GeForce3 in disguise? AKA GeForce4 MX = GeForce2??
     
  3. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    I bet you will turn out to be more right than wrong....

    That would also mean we can throw out any assumptions of NV chip "code names" and level of DX support....
     
  4. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    I think it'll be transparent to the developer though, i.e. the entire NV3x line will support all DX9 functionality; it's just that some of it will be dead slow because of software hacks. Sure - official compliance demands that certain things be run natively in hardware, but perhaps in nV's eyes they are bending the rules slightly for the better. Surely it's a good thing for game development that the mainstream sector will be flooded with cards that officially "support" DX9, despite being unable to execute even moderately complex shaders at any kind of usable rate.

    I doubt the NV34 has much in common with previous generations apart from its memory controller and perhaps TMDS/RAMDAC stages. It is definitely a derivative of the NV30 core.

    MuFu.
     
  5. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    I hope there's sarcasm in there? ;)

    In any case, we could be getting ready to witness an interesting battle at the low end. NV34 might basically be a horrendous performer, though "supporting" DX9.

    RV280 doesn't support DX9, but it might be a better performer in general?

    If those two characteristics pan out....which part will be the OEM's "darling?"

    My guess....whichever one is cheaper. ;)
     
  6. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
    It's hardly transparent to the developer if CPU-assist means shaders run an order of magnitude slower--it basically rules DX9 out.

    Everything we've heard about the NV3x so far (with the significant exception of the extended shader capabilities) seems like a disaster for developers: Doom III needs to use NV30 extensions to run well; 3DMark03 needs a hacked driver to run well; Microsoft and NVidia are in a dispute about precision requirements for PS2.0; and now (if this story is to be believed) NVidia's mainstream "CineFX" part will not run shaders very well at all.
     
  7. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
    Since we're talking about OEMs, my guess would be the one with DX9 stickers all over it. At least if it's somewhat close in price and performance.
     
  8. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Hmmm....did DX8 stickers all over the Radeon 9000 make that the darling of OEMs compared to the GeForce4MX? (Honest question....)

    The key is "somewhat close" in price and performance. Down in this segment, a matter of a couple bucks is probably beyond the "somewhat close" barrier. :)

    We also have to consider overall branding: that is, the "Radeon" vs. "GeForce" brand. "GeForce4" was a great brand for nVidia. "GeForceFX......." ??
     
  9. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
    Maybe not, but Nvidia was the market leader at that time with the GF4, and you don't win over the OEMs with one generation of cards. ("Non 3D hardware freak" friends of mine, 2-3 months after the R9700 was released, were still asking me what type of GF4 they should buy for their high-end gaming machines. All they knew of was "GeForce", much like "Voodoo" a couple of years back :)).

    You're right about the price. All in all, it will be a very interesting "battle" though. As for the brand, it still says GeForce :)
     
  10. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Well, now you're mixing the retail consumers with OEMs...we're talking about OEMs. Out of curiosity, which card did you recommend to your friends? ;)

    As for winning over OEMs with "one generation"....see MSI news.

    Could very well be correct. In fact, I believe nVidia was planning on using a different brand, but OEMs pressured them into keeping it.

    That may turn out to be the smartest thing nVidia did. Without the GeForce brand, the FX may really be in trouble.

    On the other hand, it's also possible for it to go the other way.....the FX line may be the first nail in the coffin for the GeForce brand....and that could be a really bad thing for nVidia....
     
  11. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
    I was guessing that the OEMs were aware of how the people that they sold computers to were thinking :)

    As for my recommendations, let's just say that it wasn't a GF4 :)

    You're right about that. I don't think it's going to happen, though, but who knows. If the NV31 and 34 are poor performers then things might be tough for Nvidia for quite some time. Unless they can haul out the NV35 really soon, as some rumours are suggesting.

    This of course assuming that the NV35 is at least as "good" as the R350.
     
  12. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    Interesting. Compared to our best guess for NV34, your GF3 has the following disadvantages:
    • Inability to run GT4
    • Inability to use PS 1.4 to single-pass GT2 and GT3 (although unclear if that's actually an advantage for NV3x)
    • 20 MHz slower core clock
    • 2400+ instead of 2700+ CPU

    On the other hand, it has the following advantages:
    • 4x2 instead of...4x1? 2x2??
    • 960 MB/s more theoretical bandwidth
    • Probably more memory/fillrate optimizations in hardware (GF3 has HierZ, Z compression and fast clears; NV34 has ??)
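    That "960 MB/s more theoretical bandwidth" figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming a GF3 at 460 MHz effective DDR and a guessed NV34 memory clock of 400 MHz effective, both on a 128-bit bus - the NV34 number is purely an assumption chosen to make the gap line up, not a confirmed spec:

    ```python
    # Back-of-envelope check of the "960 MB/s more theoretical bandwidth"
    # claim above. The NV34 memory clock here is an assumption, not a spec.

    def mem_bandwidth_mbs(effective_mhz: int, bus_bits: int) -> int:
        """Theoretical memory bandwidth in MB/s: clock x bus width in bytes."""
        return effective_mhz * bus_bits // 8

    gf3 = mem_bandwidth_mbs(460, 128)   # GeForce3: 230 MHz DDR -> 7360 MB/s
    nv34 = mem_bandwidth_mbs(400, 128)  # hypothetical NV34 -> 6400 MB/s
    print(gf3 - nv34)                   # 960
    ```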

    All in all, the 879 score is looking sadly plausible, IMO. Of course these scores could be on drivers that don't "optimize" for 3DMark03.

    I always thought the official (marketing) definition of "DXn-compatible" was just that the part can run the DXn runtime. In many ways this is a silly definition, as it really means only that the IHV is still developing drivers for the part (i.e. a TNT is DX9-compatible under this definition), but it does make a certain degree of sense.

    "DXn-compliant", then, means that the part enables substantially all of the DXn featureset. Thus (assuming they offer >=FP24 support), NV31 and NV34 should be DX9-compliant, albeit not in hardware.
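    The compatible/compliant distinction above can be sketched as a pair of predicates. This is purely illustrative - the function names and caps fields are hypothetical, not a real Direct3D API; actual DX9 code would inspect the `D3DCAPS9` structure returned by `GetDeviceCaps()`:

    ```python
    # Hypothetical model of "DXn-compatible" vs. "DXn-compliant" as
    # discussed above. Not a real Direct3D interface.

    def is_dx9_compatible(has_dx9_driver: bool) -> bool:
        # "Compatible" only means the IHV ships a driver that runs under
        # the DX9 runtime - even a TNT qualifies under this definition.
        return has_dx9_driver

    def is_dx9_compliant(ps_version: tuple, vs_version: tuple,
                         max_fp_precision: int) -> bool:
        # "Compliant" means substantially the full DX9 featureset is
        # exposed: PS 2.0, VS 2.0, and >= FP24 pixel shader precision
        # (whether implemented in hardware or software).
        return (ps_version >= (2, 0) and vs_version >= (2, 0)
                and max_fp_precision >= 24)

    # A TNT-class part: compatible (driver exists), nowhere near compliant.
    print(is_dx9_compatible(True), is_dx9_compliant((1, 1), (1, 1), 9))
    # An NV31/NV34 as speculated here: compliant, even if partly in software.
    print(is_dx9_compliant((2, 0), (2, 0), 32))
    ```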

    Of course figuring out exactly what shader bits are to be executed in hardware and what in software is becoming an intriguing problem. The "obvious" answer is that NV34 does all vertex shading in software and that both do all pixel shading in hardware. But you're asserting that NV31 has some apparently new form of software-assisted shading, and it seems a possibility that NV34 might not even do full pixel shading in hardware.

    Except that it would seem very, very difficult to efficiently do any sort of pixel shading in software; I'm certainly at a loss to explain how that might be done. Of course, Nvidia should certainly be able to find whatever clever solutions to this issue might exist.

    As for NV31's lesser software shading...perhaps some mechanism to offload *part* of the vertex shading load to the CPU while still doing *part* on the GPU? That would be an interesting idea, and might conceivably cause problems during R&D.
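    The partial-offload idea can be sketched in a few lines. This is a purely illustrative partitioning of a vertex batch by a load ratio; it says nothing about how NVIDIA's driver actually works:

    ```python
    # Illustrative sketch of splitting a vertex-shading workload between
    # GPU and CPU, as speculated above. Not NVIDIA's actual mechanism.

    def split_vertex_batch(num_vertices: int, gpu_share: float):
        """Partition a vertex batch between GPU and CPU by a load ratio."""
        gpu_count = int(num_vertices * gpu_share)
        cpu_count = num_vertices - gpu_count  # remainder falls to the CPU
        return gpu_count, cpu_count

    # e.g. a 10,000-vertex batch with the GPU taking 75% of the work:
    gpu, cpu = split_vertex_batch(10_000, 0.75)
    print(gpu, cpu)  # 7500 2500
    ```

    The interesting (and R&D-hostile) part would of course be choosing `gpu_share` dynamically and keeping the two halves synchronized, which a static split like this glosses over.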

    Hmm...
     
  13. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    They'd still likely have to develop a dumbed-down path for the NV34 then, because it's so slow, so I don't really see how that will help much.
     
  14. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    It seems to me the most likely description of the NV34 is PS 2.0-alike non-fp16/fp32 capable fragment shaders (i.e., NV30 with the fp fragment processing units removed, among other things...though I still remain very curious about my vertex/fragment processing unit sharing tests).
    This would be more functional than PS 1.4, but slower than PS 1.4 hardware when acting like it, i.e., as discussed about the nv30 being fast for 1 texture load (0 phase levels, PS 1.1-alike) but being equally slow for each additional dependent texture load as a tradeoff for the "Cinematic shading" (i.e., being faster for shaders too complex for gaming...though it would seem to behoove them to focus differently for the mainstream part).
    This is something that Cg could expose and that DX 9 HLSL could technically expose as well, but it seems it won't because the intermediate LLSLs do not expose the possibility (PS 2.0 includes the expectation of floating point support AFAIK).

    As we've mentioned before, this situation would also be reflected somewhat in the NV31 if it only supports fp16.

    IMO, barring any nasty surprises, this doesn't make them bad parts (in contrast to, for example, my opinion of the GF 4 MX in the context of shader adoption), though the NV34 seems likely to suffer from what I'll call "Radeon 9000 disease" :p.


    Based on these theories:

    Things should still be fine for nvidia in OpenGL, regardless of Cg adoption...in the HLSL at least. I also have some thoughts floating around about how Microsoft's IP declarations may relate to nvidia's efforts to establish Cg independently.

    Nvidia's root problem, IMO, was in the apparent arrogance of their stance on Cg...the success of their design (the theoretical NV34, and maybe now NV31, design descriptions above) outside of OpenGL seems to depend on gaining popular support for Cg, and they seem to have assumed that their position as market leader at the time would allow them to mandate its acceptance by themselves. This would make their delay of the NV30 to boost clock speeds seem even more disastrous to me, as this would seem to have impacted their market share position even more negatively than being perceived to have lost the performance leadership would have (though OEMs, game developers, and game publishers who try to influence developer baseline targeting may each have unique perspectives on the issue).

    This type of arrogance may also be reflected in their decision to design their hardware regardless of standardization (the ever-popular Glide comparison, except that there are standards now, unlike when Glide was first offered), but maybe it was just engineering ambition, and arrogance only came into the picture in their approach to fitting the resultant design into the marketplace (as they seem to have tried instead to fit the marketplace to their design). At this time, my own opinion of their past behavior, and comments such as those Richard Huddy has made, make me think the former.

    Also, their "pride" (or, less emotionally, their marketing dependence) in "performance leadership" seems to have been a big factor in their problems (though the proportional impact in the marketplace isn't necessarily accurately represented in a forum like this one), though maybe the NV35 and rumored R400 delay will change the picture later this year.
     
  15. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    More sadness than sarcasm, really. :lol:

    MuFu.
     
  16. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
     
  17. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    ~double post~
     
  18. MuFu

    MuFu Chief Spastic Baboon
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,258
    Likes Received:
    51
    Location:
    Location, Location with Kirstie Allsopp
    NV34 is referred to as "DX9-compatible" internally. The phrase "low-end POS" seems to crop up quite a bit as well, so it will be interesting to see how they market it! It is capable of 4 ops/clk, so I guess 4 zixel/2 pixel output per clock. With clock speeds of only 250-300 MHz you can see how it will quite possibly suck hairy donkey balls. No colour compression in NV34, no... one of the cost-saving differences between it and NV31.
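    Those numbers work out to a rather anaemic peak colour fillrate. A quick sketch, taking the 2 pixels/clock and 250-300 MHz figures above at face value (they are speculation, not confirmed NV34 specs):

    ```python
    # Back-of-envelope fillrate from the speculated NV34 figures above.

    def theoretical_fillrate_mpix(core_mhz: int, pixels_per_clock: int) -> int:
        """Peak colour fillrate in megapixels per second."""
        return core_mhz * pixels_per_clock

    low = theoretical_fillrate_mpix(250, 2)   # 500 Mpixel/s
    high = theoretical_fillrate_mpix(300, 2)  # 600 Mpixel/s
    print(low, high)
    ```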

    Yeah - that's a requirement of DX9, right? Is it the SiS Xabre that does the same? I can't remember...

    Yes. I believe the problems mainly involved getting the speed of "patched" functionality (i.e. things stripped from NV30) up when running them on CPU time. Dave B was able to help me out regarding this a while ago; I wasn't sure what they meant by "fastex" shader hacks. Pretty obvious now.

    I am quite hesitant to talk about it though, because it will inevitably seem as if I made a big fuss about something totally unapparent to the consumer! I'm not sure this will ever be proved/disproved, although I suspect we'll see some suspiciously crap shader performance out of NV31, given its hardware specs.

    The other big problem was the fact that many of the functional blocks inherited from NV30 were buggy. I wonder if we'll see the hardware fog bug rear its ugly head in NV31/NV34. :?

    MuFu.
     
  19. MrNiceGuy

    Newcomer

    Joined:
    Jun 21, 2002
    Messages:
    13
    Likes Received:
    0
    Mufu is off base...

    My developer friends tell me that NV31 & NV34 are full DX9. No more 'MX' nonsense - only pipelines & performance features removed from NV30.

    I guess we will find out on Thursday when they launch it...
     
  20. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    That doesn't differ from what MuFu's been saying: "full" DX9, but partially in software, so as to render it useless b/c of unacceptably slow "performance."
     