NVIDIA Fermi: Architecture discussion

What does that mean? :asd:
I mean, why are you showing something which is not optimized? It also seems that the demo crashed at the end of the video...
I am wondering if this is in some way linked to clocks set far too high for the GPU... Maybe to counterbalance the fact that the GPUs shown are salvage parts (fewer CUDA cores active)...

If it's in fact SLI rendering, that matters: SLI requires special profiles for specific styles of rendering. Turning off tessellation, for instance, could have a dramatic impact on the way the engine works and how the profile is set up.

Chris
 
Since neliz is hinting that Fermi power consumption and/or performance might be known on the 14th (in his usual "I know stuff" manner :)) I'm assuming there are NDA slides floating around with that date on them?

Based on other hints people are dropping, said slides explain features and architecture but give no specifics on final clocks? Although it baffles me how they can have performance numbers before they know what the clocks are going to be.

It's not that big of a deal to say that editors are getting more information on a specific date than is being shown at CES. So I would not be surprised to start seeing leaks shortly after. Hey, I could be wrong.
 
Which is why you continue to post doom and despair for Nvidia with no hard facts to support what you're saying, based on no real information about the chips themselves for power draw or TDP. It is all assumptions, good ones I'll give you that, but nothing more than pure assumptions just the same.

That Fermi contains ~3 billion transistors is not a 'pure' assumption. That Fermi is 500+ sq mm is not a 'pure' assumption. That Nvidia is months late is not a 'pure' assumption. That they have still not released any hard numbers on Fermi performance is not a 'pure' assumption. That Nvidia is primarily focused on Fermi as a GPGPU is not a 'pure' assumption. That a large chunk of those 3 billion transistors is allocated primarily to the GPGPU function is not a 'pure' assumption. That Fermi is on its third respin is not a 'pure' assumption. That Fermi's design is simultaneously the most complex and massive GPU ever attempted, much less on a brand new process node, is not a 'pure' assumption. ALL of these FACTS are pertinent to Fermi's competitiveness against Cypress.

What the heck kind of mind/thinking process comes up with a statement like 'It is all assumptions, good ones I'll give you that, but nothing more than pure assumptions just the same' in the face of copious KNOWN facts to the contrary? How the heck do you GET there?
 
That a large chunk of those 3 billion transistors is allocated primarily to the GPGPU function is not a 'pure' assumption.

Yes it is, as pointed out several times to you already in this thread.

Let's try this... can you give me one logical, physics-based cost/performance advantage Fermi is known to have over Cypress?

Yeah, Cypress is only 40% faster than GT200 and Fermi is over 2x the size of that chip => Fermi is much faster than Cypress. See, just like you, I can make arbitrary assumptions based on little data.
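(To spell out that arithmetic, purely as an illustration of how arbitrary it is: the 40% and 2x figures are the ones quoted above, and "performance scales with die size" is exactly the kind of hand-waving assumption being mocked. A quick Python sketch:)

gt200 = 1.0
cypress = gt200 * 1.4   # "Cypress is only 40% faster than GT200"
fermi = gt200 * 2.0     # "Fermi is over 2x the size", assuming perf scales with size
print("Fermi vs Cypress:", round(fermi / cypress, 2))  # ~1.43x => "much faster"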
 
That Fermi contains ~3 billion transistors is not a 'pure' assumption. That Fermi is 500+ sq mm is not a 'pure' assumption. That Nvidia is months late is not a 'pure' assumption. That they have still not released any hard numbers on Fermi performance is not a 'pure' assumption. That Nvidia is primarily focused on Fermi as a GPGPU is not a 'pure' assumption. That a large chunk of those 3 billion transistors is allocated primarily to the GPGPU function is not a 'pure' assumption. That Fermi is on its third respin is not a 'pure' assumption. That Fermi's design is simultaneously the most complex and massive GPU ever attempted, much less on a brand new process node, is not a 'pure' assumption. ALL of these FACTS are pertinent to Fermi's competitiveness against Cypress.

What the heck kind of mind/thinking process comes up with a statement like 'It is all assumptions, good ones I'll give you that, but nothing more than pure assumptions just the same' in the face of copious KNOWN facts to the contrary? How the heck do you GET there?

And all of that has zero, zip, zilch info as to how many watts the card will consume, what the TDP of the card will be, or how well or poorly it will perform. Things you have been making monster assumptions on now for several pages, again with NO hard facts like power draw or heat dissipation to back up your assertions. Just because it has a 6-pin and an 8-pin connector does not automatically mean it draws and uses 300 W of power.
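(For what it's worth, here is where that 300 W figure comes from. A quick Python sketch using only the standard PCI Express limits, roughly 75 W from the slot, 75 W per 6-pin and 150 W per 8-pin connector; it's a ceiling on what the board is allowed to draw, not a measurement of what it actually draws:)

# Standard PCI Express power limits (spec ceilings, not measured consumption).
PCIE_SLOT_W = 75    # deliverable through the x16 slot
SIX_PIN_W   = 75    # 6-pin auxiliary connector
EIGHT_PIN_W = 150   # 8-pin auxiliary connector

ceiling_w = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print("6-pin + 8-pin board power ceiling:", ceiling_w, "W")   # 300 W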
 
there are a lot of things missing from your logic :oops:

power consumption isn't much more than a GTX 280's.

So, you admit that it does consume more than a GTX 280, the hottest/highest-power-consumption single-chip GPU ever? I know I had a BFG GTX 280 about a month after launch and it would overheat if I didn't manually adjust the fan to max, and they said nothing was wrong with it, it just runs that hot.
 
So, you admit that it does consume more than a GTX 280, the hottest/highest-power-consumption single-chip GPU ever? I know I had a BFG GTX 280 about a month after launch and it would overheat if I didn't manually adjust the fan to max, and they said nothing was wrong with it, it just runs that hot.

You just made me register.
I have a BFG GTX 280 OC and have never had an issue.
Sure you don't have a bad card (and thus should RMA it)?

Don't make absolute statements based on a single experience.

It's almost as bad as some of the fearmongering in this thread, but that I can ignore... but stating something as a "fact" when it's clearly not, I cannot overlook.
 
I honestly 100% believe that IF Fermi became to nVidia what the R600 was to ATI, in the long run it would only benefit nearly ALL of us; it would (hopefully) foster changes at nV that would only increase competition, just as the R600 did for ATI.

That's an interesting position but I disagree wholeheartedly. Nvidia is MUCH friendlier all around when things are going well. Their shenanigans seem to emerge only when backed into a corner. I also don't see how R600 increased competition; its primary effect apparently was to send Nvidia to sleep. :)

The R600 itself didn't, no. However, from there on out ATI really seemed to change how they approached many aspects, from development (of future chips) with an emphasis on smaller chips (was not the R600 the last of the behemoth GPUs for ATI?), to adopting more input from developers as to what they (the developers) wanted to see. The 3870/50 to me was the 1st product of that shift (though admittedly it was already in the works when the HD 2900 XT finally arrived). The HD2600, iirc, was among ATI's 1st endeavours in using "lesser" GPUs on a more advanced process (I think the X700 - RV410 was the 1st) to "test the waters", and this seems to have worked for them ever since (RV610 55nm -> RV740 40nm).

I'll repeat myself in that I don't think how well a company does defines that company's ability to compete; it's when things don't go right, and that company's ability to recover, that makes for competition. Look at R300-R400 vs NV30: ATI clearly outclassed nV and (imo) ATI sat on their laurels, milking the R300 architecture all the way through the R400, and while the R500 wasn't a flop, ATI hardly stood a chance when nV (after fighting back from the dismal FX5000 series with the anemic 6000 and the improved 7000) launched the 8000 series (G80). The HD3000 was ATI fighting back, but with the HD4000 they (again IMO) really struck back at the aging G80/G92 architecture. Thus displaying the constant back and forth... maybe it's my current red bias that says the NV30 isn't comparable on a level of success that helped ATI spawn the HD series.

/ALL IMHO
 
Is it me or is this all about some weird typo?
Much of the last six pages looks like a weird typo. Sigh. :cry:

Are a bunch of you just hastily finishing off a gross excess of eggnog? Is someone pumping testosterone into the water? Why has this thread degenerated into 70% impassioned limb flailing, 20% snark, and 10% legitimate attempt at discovery?
 
Thus displaying the constant back and forth..

Right, but the "back and forth" doesn't hinge on one party failing to execute. You can get the same effect from them continually besting each other, which would result in faster innovation in the long run. Sometimes it's hard to tell whether one guy did poorly or the other just exceeded expectations, but that's why it's useful to compare against each company's prior generation as well.
 
Given that we are here to speculate, how many cables do you see in those pictures? :LOL:
'Cause to me there seem to be more than 14... It looks like there are 2 cables per pin, at least for the two pins at the top.

nvidia_geforce_100_2.jpg
S10523937.jpg
S10523912.jpg
S10523925.jpg
 
How does one connect the pass-through audio cables when the cooler is covering them?

The dual 2x3 solder point arrangement on the back of the card implies the use of dual 6-pin PCI-e power connectors to me.

Must be you, I see 6 and 8 solder points there...
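(Spelling out the inference being made here, as a minimal Python sketch: the pad counts are just what people are reading off the photos, and the wattages are the usual PCI-e auxiliary connector ratings:)

# Solder-pad count on the back of the PCB -> likely aux connector and its rating.
PAD_TO_CONNECTOR = {
    6: ("6-pin PCI-e", 75),    # 2x3 pad arrangement
    8: ("8-pin PCI-e", 150),   # 2x4 pad arrangement
}

for pads in (6, 8):  # the "6 and 8 solder points" reading of the photo
    name, watts = PAD_TO_CONNECTOR[pads]
    print(pads, "pads ->", name, "- rated for up to", watts, "W")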
 
How does one connect the pass-through audio cables when the cooler is covering them?



Must be you, I see 6 and 8 solder points there...

Wow, yeah, I totally missed the last 2 solder points on the right-most connector. I think it's because the top one looks rather dim for some reason, almost black. My mind must've just glossed right over it.
 