NVIDIA Fermi: Architecture discussion

Does GPUBench count?
http://graphics.stanford.edu/projects/gpubench/results/

Compare this to the 8800 GTX results there.

--------------------------------------------------------------------------
Video Driver Information
--------------------------------------------------------------------------
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce GTX 280/PCI/SSE2
GL_VERSION: 2.1.2
Driver Version: 7772

[...]

--------------------------------------------------------------------------
Instruction Issue
--------------------------------------------------------------------------
512 70.1201 ADD 4 64
512 70.1401 SUB 4 64
512 94.9697 MUL 4 64
512 68.8565 MAD 4 64
512 143.0448 EX2 4 64
512 72.1732 LG2 4 64
512 68.0586 POW 4 64
512 136.2859 FLR 4 64
512 68.6366 FRC 4 64
512 70.7210 RSQ 4 64
512 143.7839 RCP 4 64
512 143.0575 SIN 4 64
512 141.5776 COS 4 64
512 137.1915 SCS 4 64
512 228.6909 DP3 4 64
512 158.8046 DP4 4 64
512 89.4942 XPD 4 64
512 69.8814 CMP 4 64

--------------------------------------------------------------------------
Scalar vs Vector Instruction Issue
--------------------------------------------------------------------------
512 289.1751 ADD 1 40
512 70.5597 ADD 4 40
512 282.9698 SUB 1 40
512 69.9241 SUB 4 40
512 385.1988 MUL 1 40
512 95.8897 MUL 4 40
512 279.8798 MAD 1 40
512 68.0741 MAD 4 40
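The scalar-vs-vector numbers above can be sanity-checked directly: on a scalar architecture like GT200, a 4-component vector instruction is executed as roughly four scalar operations, so the scalar issue rate should come out at about 4x the vec4 rate for the same instruction. A minimal sketch using the figures from the run above (the ~4x reading is my interpretation of the data, not something GPUBench itself reports):

```python
# Ratio of scalar to vec4 issue rates from the GPUBench run above.
# On a scalar architecture, a vec4 op issues as ~4 scalar ops, so the
# ratio should hover around 4.

rates = {  # instruction: (scalar Minstr/sec, vec4 Minstr/sec)
    "ADD": (289.1751, 70.5597),
    "SUB": (282.9698, 69.9241),
    "MUL": (385.1988, 95.8897),
    "MAD": (279.8798, 68.0741),
}

for instr, (scalar, vec4) in rates.items():
    print(f"{instr}: scalar/vec4 ratio = {scalar / vec4:.2f}")
```

All four ratios land between roughly 4.0 and 4.1, consistent with a scalar pipeline issuing vec4 work as four serial scalar ops.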







Hope you don't mind, but I'm gonna borrow the info you posted.
 
Pics? pah-leeeease. :) The last "SLI-Fermi" picture seems not to prove your point (albeit not denying it either)

[Image: 30k4pzs.jpg]
2x DVI, 2x HDMI
 
Scratching at the die-size dilemma again, I took the liberty of doing another GT215-to-Fermi die-photo comparison, much more precise this time, and it came out at almost exactly 550 mm² for the Fermi die, which works out to roughly 23.4 mm per side. To my surprise, both the GDDR data pads and the PCIe transmitters matched each other in size and dimensions between the two chips.
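The side-length figure follows directly from the area estimate, assuming a roughly square die. A back-of-the-envelope sketch (the 550 mm² figure is the post's own die-photo estimate, not a confirmed spec):

```python
# Side length of a square die from its estimated area.
import math

area_mm2 = 550.0                 # estimated Fermi die area from the photo comparison
side_mm = math.sqrt(area_mm2)    # assumes a square die
print(f"side ≈ {side_mm:.2f} mm")  # ≈ 23.45 mm, matching the ~23.4 mm quoted
```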
 
Looks like a Quadro card. It's not a GeForce GF100 card.

It's actually a Tesla board, which uses the same PCB as the Quadro (and GeForce) boards, since it also has (unused) traces for the SLI goldfingers, which haven't been cut and polished.
 
So who wants to stick four FX4800's in a chassis and call it S2070?
So we have three different PCBs:
- 9" Tesla C PCB (1xDVI, 1x SLI): http://www.computerbase.de/bildstrecke/27497/3/
- 10.5" Tesla S and probably Quadro PCB (2xDVI+2xDP, 1x SLI): http://www.computerbase.de/bildstrecke/27497/14/
- 10.5" GeForce PCB (2xDVI+1xDP?/HDMI?, 2x SLI): http://tweakers.net/ext/f/SokYR0qKsefO05zmtLteDmDQ/full.jpg

A 9" "Geforce 360" would be nice and would be an advantage over 11" HD 5870 (maybe partners will make it shorter with design kit).
 
You sure about that? I read Kyle's post to mean that they're going to keep things closer to home this time around and that AIBs will be stuck with putting stickers on pre-built boards.

GT285s are almost all from NV directly, I haven't seen a price list in a while, but the top end chips are almost never available as a kit from either side. NV is pretty strict about that, but they do occasionally make exceptions. I was told things are the other way around this time, NV is going to try and force the AIBs to differentiate, which may be why BFG is having such headaches.

Ok, but what leads you to believe so strongly that A3 will do the trick and/or that clocks will get a boost? According to silent-guy a respin shouldn't impact clocks much. Is it possible that the metal spins also addressed variables that would make the chip play nicer in TSMC's current environment or would that all be at the silicon level too?

I don't know if A3 will do the trick, and given the problems with A1 and A2, I don't know if NV knows either. We are ~2 weeks from A3 silicon, so that is when we will know. :)

I read what SG said, and I do agree with it. That said, if whatever NV is trying to change in the A2 -> A3 stepping was not possible, then we would be seeing either production A2s at low bin splits, or a B1.

B1s would take MUCH more time than A3s, if for no other reason than they couldn't use partially processed A1/A2 wafers to cut fabbing time. If you look at what happened, A1 took about 6-7 weeks, and A2 about 4. That tells me that hot lots of all the metal layers take about 4 weeks to push through the system.

If it was a full silicon respin, you would be looking at mid-January at the earliest, quite possibly later. I haven't heard that a Bx line will be needed, but then again, wait for A3.... If the clocks don't go up significantly for A3, things are looking mighty uncomfortable in NV land.
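The timeline argument above can be sketched with the post's own figures: a metal-only respin (like A2, and presumably A3) reuses wafers held before the metal layers and took ~4 weeks, while a full base-layer respin (like A1, or a hypothetical B1) starts wafers from scratch and took ~6-7 weeks. The start date below is an illustrative assumption, not a reported fact:

```python
# Rough respin-timeline sketch using the durations quoted in the post.
# The tapeout date is a hypothetical placeholder for illustration only.
from datetime import date, timedelta

a3_start = date(2009, 11, 20)       # assumed A3 hot-lot start (hypothetical)
metal_respin = timedelta(weeks=4)   # A2 took ~4 weeks (metal layers only)
full_respin = timedelta(weeks=7)    # A1 took ~6-7 weeks (full silicon)

print("A3 silicon back ~", a3_start + metal_respin)
print("hypothetical B1 back ~", a3_start + full_respin)
```

Under those assumptions a base-layer respin started in late November wouldn't come back until early-to-mid January, which is the post's "mid-January at the earliest" reasoning.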

-Charlie
 
I'll take my sources' accuracy over their guesstimates any day.


That wasn't directed at you, but at the people who have spent the last half year claiming the weirdest things: that we'd have pictures in May, that it would be with us "soon", and that for some magical reason shrinking from 55 nm to 40 nm would double performance while also yielding a smaller die. It does not compute.


That was laid to rest a few days later. But I'll give it to you as soon as you show me your GT300 pics from back in May.

In the meantime, shouldn't we start to get worried about heat, now that the display connector setup is the same as on the RV8x0 cards?

They can't show pictures of the chip until the drivers are ready, how can you take a picture of a chip without drivers? That is the reason for the delay, they had to wait to get silicon back to start work on the drivers. Don't you read BS News Hardware.Info, and ATIiForum?

-Charlie
 