NVIDIA Fermi: Architecture discussion

Discussion in 'Architecture and Products' started by Rys, Sep 30, 2009.

  1. ChrisRay

    ChrisRay R.I.P. 1983- Veteran

    If it's in fact SLI rendering. SLI has to make special profiles for specific styles of rendering. Turning off tessellation, for instance, could have a dramatic impact on the way the engine works and how the profile is set up.

    Chris
     
  2. willardjuice

    willardjuice super willyjuice Moderator Veteran Alpha

    I think having just one IHV is great for competition.
     
  3. ChrisRay

    ChrisRay R.I.P. 1983- Veteran

    It's not that big of a deal to say that editors are getting more information than is being shown at CES on a specific date. So I would not be surprised to start seeing leaks shortly after. Hey, I could be wrong.
     
  4. spigzone

    spigzone Banned

    That Fermi contains ~3 billion transistors is not a 'pure' assumption. That Fermi is 500+ sq mm is not a 'pure' assumption. That Nvidia is months late is not a 'pure' assumption. That they have still not released any hard numbers on Fermi performance is not a 'pure' assumption. That Nvidia is primarily focused on Fermi as a GPGPU is not a 'pure' assumption. That a large chunk of those 3 billion transistors is allocated primarily to the GPGPU function is not a 'pure' assumption. That Fermi is on its third respin is not a 'pure' assumption. That Fermi's design is simultaneously the most complex and massive GPU ever attempted, much less on a brand new process node, is not a 'pure' assumption. ALL of these FACTS are pertinent to Fermi's competitiveness against Cypress.

    What the heck kind of mind/thinking process comes up with a statement like 'It is all assumptions, good ones I'll give you that, but nothing more than pure assumptions just the same' in the face of copious KNOWN facts to the contrary? How the heck do you GET there?
     
  5. trinibwoy

    trinibwoy Meh Legend

    Yes it is, as pointed out several times to you already in this thread.

    Yeah, Cypress is only 40% faster than GT200 and Fermi is over 2x the size of that chip => Fermi is much faster than Cypress. See, just like you I can make arbitrary assumptions based on little data.
     
  6. XMAN26

    XMAN26 Banned

    And all of that has zero, zip, zilch info as to how many watts the card will consume, what the TDP of the card will be, or how well or poorly it will perform. Things you have been making monster assumptions about now for several pages, again with NO hard facts to back up your assertions, like power draw and heat dissipation. Just because it has a 6-pin and an 8-pin connector does not automatically mean it draws and uses 300 W of power.
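    The connector argument above comes down to simple spec arithmetic: the connectors only set an upper bound, not the actual draw. A minimal sketch of that budget (wattage figures are the PCI Express spec ratings; the function name is mine):

    ```python
    # PCI Express power-delivery ceilings, in watts (per the PCIe spec):
    PCIE_SLOT = 75   # delivered through the x16 slot itself
    PIN6 = 75        # 6-pin auxiliary connector
    PIN8 = 150       # 8-pin auxiliary connector

    def max_board_power(aux_connectors):
        """Spec ceiling on board power for a given set of auxiliary
        connectors. This is a maximum allowed, not a measured draw."""
        return PCIE_SLOT + sum(aux_connectors)

    # A card wired for one 6-pin and one 8-pin *may* draw up to 300 W,
    # but the connectors alone say nothing about what it actually uses.
    print(max_board_power([PIN6, PIN8]))  # 300
    ```

    So a 6+8-pin layout caps the card at 300 W; whether it comes anywhere near that is exactly the open question being argued here.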
     
  7. Ninjaprime

    Ninjaprime Regular

    So, you admit that it does consume more than a GTX 280, the hottest/highest-power-consumption single-chip GPU ever? I know I had a BFG GTX 280 about a month after launch and it would overheat if I didn't manually adjust the fan to max, and they said nothing was wrong with it, it just runs that hot.
     
  8. Lonbjerg

    Lonbjerg Newcomer

    You just made me register.
    I have a BFG GTX 280 OC and have never had an issue.
    Sure you don't have a bad card (and thus should RMA it)?

    Don't make absolute statements based on a single experience.

    It's almost as bad as some of the fearmongering in this thread, but that I can ignore... but stating something as a "fact" when it's clearly not, I cannot overlook.
     
  9. FrameBuffer

    FrameBuffer Banned

    The R600 itself didn't, no. However, from there on out ATI really seemed to change how they approached many aspects, from development (of future chips) with an emphasis on smaller chips (was not the R600 the last of the behemoth GPUs for ATI?), to adopting more input from developers as to what they (developers) wanted to see. The 3870/50 to me was the first product of that shift (though admittedly it was already in the works when the HD2000XT finally arrived). The HD2600, iirc, was among ATI's first endeavours in using "lesser" GPUs on a more advanced process (I think the X700 - RV410 was the first) to "test the waters", and this seems to have worked for them ever since (RV610 55nm -> RV740 40nm). I'll repeat myself in that I don't think how well a company does defines a company's ability to compete; it's when things don't go right, and that company's ability to recover, that makes for competition. Look at the R300-R400 vs NV30: ATI clearly outclassed nV and (imo) ATI sat on their laurels, milked the R300 architecture all the way through the R400, and while the R500 wasn't a flop, ATI hardly stood a chance when nV (after fighting back from the dismal FX5000 series with the anemic 6000 and improved 7000) launched the 8000 series (G80). The HD3000 was ATI fighting back, but with the HD4000 they (again IMO) really struck back against the aging G80-92 architecture. Thus displaying the constant back and forth... maybe it's my current red bias that says the NV30 isn't comparable on the level of success that helped ATI spawn the HD series.

    /ALL IMHO
     
  10. Pete

    Pete Moderate Nuisance Moderator Legend

    Much of the last six pages looks like a weird typo. Sigh. :sad:

    Are a bunch of you just hastily finishing off a gross excess of eggnog? Is someone pumping testosterone into the water? Why has this thread degenerated into 70% impassioned limb flailing, 20% snark, and 10% legitimate attempt at discovery?
     
  11. trinibwoy

    trinibwoy Meh Legend

    Right, but the "back and forth" doesn't hinge on one party failing to execute. You can get the same effect from them continually besting each other which would result in faster innovation in the long run. Sometimes it's hard to tell whether one guy did poorly or the other just exceeded expectations but that's why it's useful to compare against each company's prior generation as well.
     
  12. A.L.M.

    A.L.M. Newcomer

    Given that we are here to speculate, how many cables do you see in those pictures? :lol:
    'Cause to me they seem like more than 14... It looks like there are 2 cables per pin, at least on the two pins at the top.

    [images of the card's power connectors]
     
  13. ShaidarHaran

    ShaidarHaran hardware monkey Veteran

    The dual 2x3 solder point arrangement on the back of the card implies the use of dual 6-pin PCI-e power connectors to me.
     
  14. neliz

    neliz GIGABYTE Man Veteran

    How does one connect the pass-through audio cables when the cooler is covering them?

    Must be you; I see 6 and 8 solder points there.
     
  15. fellix

    fellix Veteran

    It's probably a cable branch connector that's being plugged into the 8-pin contact.
     
  16. dnavas

    dnavas Regular

    The last two solder points look glossy rather than milky, making a 2x3 and a 2x4 pattern, no?
     
  17. ShaidarHaran

    ShaidarHaran hardware monkey Veteran

    Wow, yeah, I totally missed the last 2 solder points on the right-most connector. I think it's because the top one looks rather dim for some reason, almost black. My mind must've just glossed right over it.
     
  18. FrameBuffer

    FrameBuffer Banned

    oh noes it's the 448 Cuda Core Tesla..

    /runs
     
  19. air_ii

    air_ii Newcomer

    +1.
     
  20. Sinistar

    Sinistar I LIVE Regular Subscriber

    Is it me, or are there 3 sets of power connections going to that board?
     