NVIDIA Fermi: Architecture discussion

Unless you literally have cash to burn, I don't see how 3D Vision Surround is even remotely feasible. 3D Vision requires each frame to be rendered twice, once for each eye. So you've already cut your framerates in half when only using one monitor. And now do that on three displays? So you've now got 1/6 the framerate you had when playing the game in non-3D on a single monitor.

S-L-I-D-E-S-H-O-W....
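To put rough numbers on the 1/6 claim, here's a quick Python sketch. It assumes the naive worst case where framerate scales inversely with (eyes rendered) × (monitors driven); real games are rarely purely pixel-bound, so actual scaling would be somewhat better.

```python
# Back-of-the-envelope check of the 1/6 claim: stereo 3D renders each
# frame twice (one per eye), and spanning three monitors triples the
# pixels, so throughput drops by roughly 2 * 3 = 6x in the worst case.

def effective_fps(base_fps, stereo=False, monitors=1):
    """Naive model: framerate scales inversely with eyes * monitors."""
    eyes = 2 if stereo else 1
    return base_fps / (eyes * monitors)

fps = effective_fps(120, stereo=True, monitors=3)
print(fps)  # 20.0 -- a 120 fps game drops to ~20 fps in this model
```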

Uh, you're not saying that multi-monitor configs render multiple displays in series, are you?
I always assumed the rendering was simultaneous and can't imagine it being otherwise.
 
Err, I wouldn't go that far, and I wrote the article. Can the MUL be sort of, kind of seen? Yeah, it can (do MUL MUL MUL MUL MUL ad infinitum ad nauseam and it'll probably show up, albeit only marginally). Can it be leveraged in practical workloads? Not quite. In this case, I'm not entirely sure it's not an artefact of how the test was set up, and I should have mentioned that in writing. What I am 99.9% sure of is that the missing MUL remains quite missing outside of toy scenarios like the one outlined above.

What difference between Int32 and float, outside of this supplemental MUL, could explain why the peak is higher in float?

Bandwidth doesn't change, and the instruction issue rate doesn't change either, so it has to be the MUL ALU offloading the regular ALU. And even if it only shows up in this particular synthetic workload, it will have an impact on real workloads, however weak.
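For context on why the "missing MUL" matters on paper, here's a quick sketch using G92-era figures (an 8800 GT has 112 shader ALUs at a ~1.5 GHz shader clock; the MAD unit counts as 2 flops/clock, and the co-issued MUL would add a third). These numbers are illustrative theoretical peaks, not measurements.

```python
# Theoretical peak throughput with and without the co-issued MUL,
# using 8800 GT (G92) figures: 112 ALUs at a 1.5 GHz shader clock.

def peak_gflops(alus, shader_clock_ghz, flops_per_clock):
    # GFLOPS = ALUs * clock (GHz) * flops retired per ALU per clock
    return alus * shader_clock_ghz * flops_per_clock

mad_only = peak_gflops(112, 1.5, 2)   # MAD counted as mul + add
with_mul = peak_gflops(112, 1.5, 3)   # MAD plus the co-issued MUL

print(mad_only, with_mul)  # 336.0 504.0
```

So the disputed MUL is worth a 50% gap between the two quoted peaks, which is why whether it's actually issuable matters so much in these debates.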
 
Creig said:
So you've now got 1/6 the framerate you had when playing the game in non-3D on a single monitor.
You could also buy a GPU that's 6x faster. I'm sure NVIDIA will be more than happy to sell you one. Or three. It's a small investment after getting 3 monitors that can do 120 Hz.
 
What difference between Int32 and float, outside of this supplemental MUL, could explain why the peak is higher in float?

Because the Int test doesn't include transcendentals, so the SFU is left completely idle. The issue rate goes up in the float test because the transcendentals can be issued on the SFU. You can't draw any conclusions about MUL issue on the SFU from that article.

I don't know why AlexV claims the 8800 GT is above peak, though. It seems he was just using the MAD pipeline for that calculation, but that would be inconsistent with his decision to include transcendental instructions in the test.
 
You could also buy a GPU that's 6x faster. I'm sure NVIDIA will be more than happy to sell you one. Or three. It's a small investment after getting 3 monitors that can do 120 Hz.
Have to admit, I'm intrigued to know if NVidia has a new SLI mode that is designed for multi-monitor and/or 3D...

Jawed
 
Nvidia to join Eyefinity bandwagon:

Wow! 3D + Eyefinity? I would say not in this generation of hardware. Even in SLI, I just don't see anything pushing that framerate at those resolutions for even reasonably recent games, let alone the new stuff.
 
Not sure if it works Eyefinity-style when you're not gaming in 3D? The option would be nice, even if the user doesn't want the added expense.
 
Have to admit, I'm intrigued to know if NVidia has a new SLI mode that is designed for multi-monitor and/or 3D...

Jawed

In the thread mentioned he claims that: "GF100 and GF104 feature full SLi capability and a new scaling method for rendering!"
 
Oh my goodness. To be perfectly honest, I had better chances of figuring out something useful analyzing those NV30 'Are You Ready' videos than you have reading that.
 
The supposed Nvidia engineer who was recently laid off and has been posting information about GF100 posted some follow-up information on this forum:

http://www.overclock.net/nvidia/641156-guru-legit-nvidia-specs-benchmarks.html

FYI for those interested.

Apparently GF104 = 2x Fermi, but it's the same length as an HD 5870????

But remember, this is the same source that came out with the 4xSSAA figures. How exactly is that comparable when ATI doesn't run SSAA in DX10? And even in DX9 it's not exactly a mode you'd compare internally as an engineer, since you'd be more interested in actual performance. Unless his job was to cook up figures for one of their famous PowerPoint presentations, it simply doesn't make sense, and honestly I'd sooner believe that someone is trying to punk us than that this is legitimate.

Edit: Rahja the Thief lists his location as Indiana, but I tried googling for an Indiana Nvidia campus and came up with zip.
 
Apparently GF104 = 2x Fermi, but it's the same length as an HD 5870????

But remember, this is the same source that came out with the 4xSSAA figures. How exactly is that comparable when ATI doesn't run SSAA in DX10? And even in DX9 it's not exactly a mode you'd compare internally as an engineer, since you'd be more interested in actual performance. Unless his job was to cook up figures for one of their famous PowerPoint presentations, it simply doesn't make sense, and honestly I'd sooner believe that someone is trying to punk us than that this is legitimate.
He/she did retract the SSAA statement on some of the benchies, if I remember correctly.
He/she also goes into what their job was at NV later in the thread.
Oh, and NV released the GF100 card's length a while ago on Twitter, and it is actually a little shorter than what Rahja has stated for GF100.

Edit: Rahja the Thief lists his location as Indiana, but I tried googling for an Indiana Nvidia campus and came up with zip.
I have the impression that he/she was laid off at some point during last year... and is talking now because his/her NDA expired.
 
Nvidia to join Eyefinity bandwagon:

This is directly from CES:
http://www.hardforum.com/showthread.php?p=1035151113

Are we sure this is a Fermi-based initiative? Or is this across all GeForce lines (9800/200 series as well)? I have doubts that the previous gen would be able to push the frames required for smooth immersion without resorting to SLI (just speculating, after all). Also, if it's just speculation, what's the chance of it being called nFinity (and Beyond) ;-) ??
 
I've cleaned up a few posts on this thread that were decidedly OT. I understand that huge threads are bound to get loose after a certain point, but if we've got another month or three to wait for consumer Fermi cards, I thought I'd get a head start on housekeeping.

And this remains a technical forum, so please try to hew to facts and figures rather than hemming and hawing about motives and memories. Or something. ;P I know it's hard, but be strong.
 
Just teasing to get some order in advance, I guess.
The release is December for Crysis 2, says PC Gamer.
http://en.wikipedia.org/wiki/Crysis_2

Fun Fact: Our new 3D Vision Surround lets you span your gaming desktop across 3 monitors! Even better, 3D Vision Surround will work with GF100 and GT200 (and yes, it works in 2D mode too)! More info here:

3 Monitors even with GT200? How?!
 