FCAT nonsense...& just plain dumb 3d-card reviews

Discussion in '3D Hardware, Software & Output Devices' started by WaltC, May 14, 2013.

  1. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
Hey, man! Heh...Nothing is the same...Those were the days...! I had gobs more time then, and the industry was flying by the seat of its pants, making it much more fun. nVidia was caught cheating, then denying, then caught again, then denying, then caught--until finally, after nVidia's infamous, self-serving, "We don't agree with where Microsoft is taking 3d gaming" spiels--the company finally fessed up to the 3dMark camera-on-rails driver optimization!...cancelled nV30 after touting it to the heavens...and, oh, yeah, you remember how vigorously nVidia's "Chief Scientist" (snicker) fought against FSAA? "It's a scientific fact that high resolution precludes the need for FSAA. nVidia's position is that high resolution is where we intend to go," he said, even as nVidia was feverishly working on its own competent FSAA. And then his mumbo-jumbo about "the power of three" nonsense! (nVidia claimed that doing things "in threes"--as ATi was doing some things--wasn't "scientific" or "mathematical"...;)) Listening to nVidia's "Chief Propagandist" in those days was like listening to Baghdad Bob. What fun...

    I've always liked AMD (I'm a sucker for underdogs) and so I haven't had any problem with ATi being engulfed by AMD (I mean, after it was done, what's to do?) Besides, many of the old gang @ ATi are still there, and I've read that some have recently come back (or maybe that was AMD's K7 architects returning to the fold--I can't recall.)

    Truth be known, I just don't care for nVidia as a company--kind of like and for the same reasons I've never cared for Apple--even way back when it was "Apple Computer" and made and sold nothing except Macs. I am deeply skeptical of nVidia for I clearly recall nasty little things the company did back in the 3dfx days. I suppose that's why I always look and then look again, maybe more than once, anytime a situation involves nVidia.

    Take care, man...;)
     
  2. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
OK, I see...of course you are right--and actually, I am a happy person...;) Read my post to Sxotty, as it sums me up pretty well these days--I wish my memory weren't as clear and vivid as it is--lots of that stuff seems like it happened yesterday! Thanks much for the clarification--I still do not believe Hardware Canucks' "reported versus displayed" conclusions--and I am mortified by how easily some "pundits" allow themselves to be misled these days. To be accurate, though, that's always been the case for people who would be pundits, I suppose.

Remember the Larrabee debacle? Intel never worked on anything close to what the pundits dreamed up, and I have to admit to feeling vindicated the day Intel announced the project was cancelled (prior to production). I don't know where some of these ideas come from...
     
  3. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
Well, both cards are 2GB...;) I'd already decided to go that route. I'm leaning towards the 7850, though, because, at the moment at least, that HIS 2GB 7850 is priced at a bargain $170.
     
  4. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,707
    Likes Received:
    345
    Location:
    The colonies
    Ah yes, no WaltC topic is complete without an nv30 detour.

None of this gibberish has any bearing on the fact that AMD has a well-known issue with stunted frames.

    Your original wall of text is barking up the wrong tree, and none of this idle speculation about Nvidia, HC and Skymtl does anything to change that.
     
  5. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
Why don't you simply separate measured facts from opinion? It makes things much easier that way. It's fine not to like HC for being biased one way or the other (I've never visited their site, and you may well be right), but the measurement method of FCAT is hard to dispute.

These kinds of runt frames should never, ever happen in a working system. It's really a case of extreme malfunction in their video source switcher. Remember: in some cases it switches from frame N to frame N+1, back to frame N, and then back to N+1!

There's nothing wrong with pointing this out. It's a bug. AMD will fix it.
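For readers unfamiliar with the method, the runt-frame classification can be sketched in a few lines. This is an illustrative sketch, not NVIDIA's actual extractor: FCAT's overlay gives each rendered frame a distinct color bar, the capture card records which frame's color each scanline shows, and frames occupying fewer scanlines than some cutoff are flagged as runts. The threshold value and data layout here are assumptions.

```python
# Hypothetical sketch of FCAT-style runt-frame detection (not the real tool):
# each rendered frame carries a unique color bar, and the capture records
# which frame's color each scanline of the output shows.

RUNT_THRESHOLD = 21  # scanlines; the "right" cutoff is debatable (assumed)

def classify_frames(scanline_colors):
    """Group consecutive scanlines by frame ID and flag runts.

    scanline_colors: list of frame IDs, one per captured scanline, in
    top-to-bottom display order (possibly spanning several refreshes).
    Returns (frame_id, scanline_count, is_runt) tuples.
    """
    segments = []
    for color in scanline_colors:
        if segments and segments[-1][0] == color:
            segments[-1][1] += 1
        else:
            segments.append([color, 1])
    return [(fid, n, n < RUNT_THRESHOLD) for fid, n in segments]

# A refresh where frame 2 shows for only 5 scanlines gets flagged as a runt:
capture = [1] * 500 + [2] * 5 + [3] * 575
print(classify_frames(capture))  # [(1, 500, False), (2, 5, True), (3, 575, False)]
```

The same segment list would also expose the N → N+1 → N → N+1 switching described above, since frame N would simply appear as two separate segments.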
     
  6. boxleitnerb

    Regular

    Joined:
    Aug 27, 2004
    Messages:
    407
    Likes Received:
    0
    And obviously it works as seen in the comparison video of Crossfire running Crysis 3.
     
  7. tmavr

    Newcomer

    Joined:
    Sep 2, 2010
    Messages:
    10
    Likes Received:
    0
So has anyone measured the actual maximum lag that comes from implementing smoothing algorithms?
Is it in the range of one or two frames?
A simple 3D program with balls flying from left to right at different predefined speeds, with timestamps, should do the trick, I guess.

That way we would see the actual time difference between two displayed frames, and use the distance traveled by the flying balls to help recover the timeline data. Perhaps we could also gain some insight into frames dropped entirely by the frame-smoothing algorithm.
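A sketch of how that recovery could work (hypothetical code; the speed constant and captured positions are made-up illustrations): with the ball's speed known in advance, the ball's x-position in each captured frame maps directly to the time at which that frame was rendered, and the deltas between consecutive displayed frames fall out.

```python
# Hypothetical sketch of the flying-ball experiment (illustrative names and
# numbers): the ball moves at a known constant speed, so its x-position in
# each captured frame encodes when that frame was rendered.

BALL_SPEED = 1200.0  # pixels per second, known in advance (assumed value)

def render_times_from_positions(x_positions, speed=BALL_SPEED):
    """Convert captured ball x-positions (pixels) into render timestamps (s)."""
    return [x / speed for x in x_positions]

def displayed_frame_deltas(x_positions, speed=BALL_SPEED):
    """Time between consecutive displayed frames; a delta of ~2x the average
    hints at a frame dropped by the smoothing algorithm."""
    t = render_times_from_positions(x_positions, speed)
    return [round(b - a, 6) for a, b in zip(t, t[1:])]

# Evenly paced capture vs. a capture with one missing frame:
print(displayed_frame_deltas([0, 20, 40, 60]))  # steady ~16.7 ms pacing
print(displayed_frame_deltas([0, 20, 60, 80]))  # ~33 ms gap: a dropped frame
```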

Lag is important for gamers--more important in some simulation software, I guess.

And anyway, I would be glad to find out more about the effects of such a smoothing algorithm, either nVidia's implementation or AMD's.

Second, I am not deep into DVI (I am not even sure that all DVI implementations are the same as the one used in FCAT), but wouldn't DP show different results?
     
  8. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
Nobody has measured it. But earlier discussions here show that you don't need to add lag at all times to smooth things out. Just adding a one-time single delay to a steady-state situation can fix the imbalance. In practice, nothing is steady state, and you'll need to add a little bit of delay to keep things balanced, but my belief is that it can be done with a very minor delay spread over multiple frames--e.g. 1/10th of a frame of lag. Definitely not multiple frames. How to measure this is something else entirely...

See here and follow-ups.

I wouldn't worry about that part: GPUs don't have large on-chip memories (ignoring L2 cache), which means that pixels are requested on demand to be sent to the output. I'm sure there's some amount of buffering to make things easier for the MC, and probably a couple of line buffers for output filtering, but that'd be a delay of microseconds, not milliseconds.

So, given that, there should be virtually no difference in delay between DVI and DP. Both send pixels digitally; just the encoding is different.
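The small, spread-out delay described above could look something like this sketch (purely illustrative; not AMD's or nVidia's actual frame-pacing code, and the smoothing factor and timings are made up): present each finished frame no earlier than the previous presentation plus a smoothed estimate of the frame interval, so only the "fast" frame of each uneven pair gets held back.

```python
# Minimal, hypothetical frame-pacing sketch (not any vendor's real algorithm):
# present each finished frame no earlier than the previous presentation time
# plus an exponentially smoothed estimate of the frame interval.

def pace_frames(finish_times, alpha=0.2):
    """finish_times: times (s) at which the GPU finished each frame.
    Returns the paced presentation times."""
    present = [finish_times[0]]
    interval = finish_times[1] - finish_times[0]  # initial interval estimate
    for prev_finish, finish in zip(finish_times, finish_times[1:]):
        interval = (1 - alpha) * interval + alpha * (finish - prev_finish)
        # Present at the smoothed cadence, but never before the frame exists.
        present.append(max(finish, present[-1] + interval))
    return present

# AFR-style alternating 5 ms / 28 ms finish times (made-up numbers): the fast
# frame in each pair is held back a few milliseconds, i.e. a sub-frame cost.
finishes = [0.000, 0.005, 0.033, 0.038, 0.066]
print([round(t, 3) for t in pace_frames(finishes)])
```

The point of the sketch is the latency accounting: only the early frame of each pair is delayed, and only by a fraction of a frame time, matching the "1/10th of a frame, definitely not multiple frames" estimate above.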
     
  9. Sxotty

    Legend

    Joined:
    Dec 11, 2002
    Messages:
    5,496
    Likes Received:
    866
    Location:
    PA USA
My opinion is multi-GPU setups are not worth the heartburn. :)

It was much more fun then. All the 9500 Pros you could turn into 9700s, and the same on the other side--that stuff was fun. It seemed the cadence of progress was quicker then as well. I was an AMD fan myself, since they were the underdog and Intel was the evil empire back then. Now I wish we could just have some decent competition in all aspects. Intel is dominating AMD in CPUs, and while AMD and Nvidia are close on GPUs, I worry that it is basically a losing proposition for them as tablets and smartphones etc. are adopted. I was bummed about the merger, and I am bummed that AMD has the console wins, because I like at least one to go to each.

As far as NV30 goes, that was a hilarious time, but not something I got upset about. It was entertaining to me. You take care too--it is good to see people back.
     
  10. tmavr

    Newcomer

    Joined:
    Sep 2, 2010
    Messages:
    10
    Likes Received:
    0
    Thank you for your answer.

    I guess someone will eventually set up an experiment to measure all the above.
     
  11. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
Looks like some devs are taking notice of microstuttering
[image]

ps: I don't understand the logic of this
[image]
     
  12. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
That's probably not really microstuttering, but a simple multi-GPU optimization. In this instance I would guess they have some frame-to-frame storage of the foliage details that would degrade the mGPU rendering performance, so they cut those down to speed the operation up.

    What title is that?
     
  13. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
  14. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
Maybe it's just the language barrier, but are you suggesting in the first part that a reviewer should cherry-pick their tests in advance just to let a product shine? If so, where's your rant about all the incompetent GTX Titan reviews, where no one really seemed to show how utterly important the addition of an extra 3 GiB of framebuffer was for that product?

    Regarding 1 vs. 2 GiB HD 7850's I'm sorry to disappoint you (if no one has done so yet), but 2 GiB of framebuffer memory was the standard configuration with which HD 7850 launched:
    http://www.pcgameshardware.de/Grafi...m-Test-Schnell-und-sparsam-dank-28-nm-870528/
     
  15. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
Do you have a link for this? (Not that I wouldn't believe there are reviewers who are self-admitted fans of company X, but a link would still make the claims relevant.)
     
  16. Thanny

    Newcomer

    Joined:
    Jun 1, 2013
    Messages:
    2
    Likes Received:
    0
    I had to register just to add my two cents to this.

    This recent FCAT craze is definitely a net loss to review usefulness, but not because the data is in any way rigged by the nVidia tool. It's absolutely true that nVidia has a habit of playing dirty (such as disabling PhysX when a non-nVidia card is present, or crippling CPU-based PhysX), but it's just paranoia to claim that the FCAT data is tainted.

    The problem is that the reviewers are completely wrong-headed in how they're interpreting the data.

    The concept of a "runt frame" not counting towards the frame rate is nothing short of idiotic. When you don't sync updates to the screen refresh, *ALL* frames are "runt frames". No screen is ever going to be one distinct rendered frame. Every screen update will be a combination of several different rendered frames, which is why you get tearing when you radically change the view without vsync - the seams between different frames don't mesh.

The simple fact is, the absolute maximum "practical" or "actual" frame rate is the refresh rate of the monitor. You will always be rendering frames that don't make it to the screen, to some extent or other, if your frame time is lower than the refresh interval, because the frame buffer is overwritten with different frames during the screen draw.

    The only difference between the even frame time spacing of nVidia and uneven spacing of AMD is where the seams in screen tearing will appear. So if you have three sequential frames - A, B, and C - generated inside one refresh interval, the difference is in what portion of each frame actually makes it to the screen. With even spacing, it'd be 33% A, 33% B, and 33% C. Mix it up, so it's 30% A, 10% B, and 60% C, and what do you get? Just different seam positions. And what if poor frame B is dropped entirely? Then the screen will be some percentage of A and some percentage of C instead, and you lose absolutely nothing of consequence.
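The seam arithmetic in that example can be written down directly (an illustrative sketch; the flip times are made up): with vsync off, the share of the screen each frame gets is just the fraction of the scanout during which it was the newest completed frame.

```python
# Illustrative sketch of the seam arithmetic above: with vsync off, each
# frame occupies the slice of the screen scanned out while it was the
# newest completed frame. Numbers are made up for illustration.

REFRESH = 1.0  # one refresh interval, normalized

def screen_shares(flip_times):
    """flip_times: times within [0, REFRESH) at which frames B, C, ...
    replace the frame being scanned out (frame A is current at t=0).
    Returns the fraction of the screen each frame occupies, top to bottom."""
    edges = [0.0] + sorted(flip_times) + [REFRESH]
    return [round((b - a) / REFRESH, 2) for a, b in zip(edges, edges[1:])]

print(screen_shares([1/3, 2/3]))    # even pacing  -> [0.33, 0.33, 0.33]
print(screen_shares([0.30, 0.40]))  # uneven       -> [0.3, 0.1, 0.6]
```

Either way the screen is a patchwork of A, B, and C; pacing only moves the seams, which is exactly the 33/33/33 versus 30/10/60 comparison above.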

    As soon as you turn on vsync, the entire picture changes. Every single frame will be complete, and if the frame rate is at least as high as the refresh rate, time between frames will be identical at all times. And contrary to popular opinion, you don't get "input lag", unless you happen to be playing Dead Space.
     
  17. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
    I do think they should stop testing with vsync off. Who the hell plays with vsync off?
     
  18. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    All competition players?

    If people are willing to suffer TN LCDs for a few ms of latency reduction, there's no point in enabling VSYNC, which will, on average, guarantee a latency increase of 8ms.
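The ~8 ms figure is just the expected wait for the next vsync at 60 Hz. A quick numerical check, under the simplifying assumption that frame completions land uniformly within the refresh interval:

```python
# Quick numerical check of the "~8 ms on average" figure: at 60 Hz a finished
# frame waits for the next vsync, and if completion times land uniformly
# across the 16.7 ms refresh interval, the mean wait is about half of it.
# (Simplified model: one wait per frame, no queuing effects.)

REFRESH_HZ = 60
interval_ms = 1000 / REFRESH_HZ  # ~16.67 ms per refresh

samples = [i / 1000 * interval_ms for i in range(1000)]  # uniform finish times
waits = [interval_ms - t for t in samples]               # wait until next vsync
print(round(sum(waits) / len(waits), 2))  # ~8.3 ms average added latency
```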
     
  19. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
You're right in a way, yes. But taken to the extreme, you definitely lose smoothness with rendered frames that are not displayed, or displayed for only a few lines on the screen. That's why the effect in multi-GPU setups is called microstuttering; it can get really annoying, reducing the value of the higher fps rate.

Now, to what degree this value is reduced, and below what number of lines a frame should be called a runt frame and not counted toward "perceived smoothness", is surely debatable and will certainly differ from one person to the next.

Me personally, I'm a vsync-off gamer, but I can't stand texture shimmering or edge aliasing, so I turn on supersample AA whenever I get the chance. Other people don't care about anti-aliasing, but always play vsynced.
     
  20. tuna

    Veteran

    Joined:
    Mar 10, 2002
    Messages:
    3,550
    Likes Received:
    590
    I do!
     