NVIDIA GF100 & Friends speculation

According to the GF100 whitepaper, gaming across multiple monitors in 2D (i.e. non-stereoscopic 3D) will be supported at resolutions up to 2560x1600 on displays that share a common resolution. The whitepaper says that 3D Vision Surround is supported across three identical 3D Vision-capable LCDs or projectors at resolutions up to 1920x1080.

This is what I could find as well.

It also requires an SLI setup to run either of them, not just for performance reasons but because of the number of display outputs. So the NVIDIA option will sadly be much more expensive than the ATI version. A 5770 isn't the greatest for Eyefinity, but it only costs $180, so if you play older titles and want the enhanced viewing it's pretty cheap. NVIDIA's solution will most likely start at $800 (assuming $400 for the base GF100 card).

Hopefully NVIDIA will set it up to work on a single card in its next generation.
 
Yes, that is true. Most likely NVIDIA figured that gaming on three monitors caters to a very niche market right now, and this type of gamer would be a very likely candidate for SLI anyway due to the need/desire for more graphics horsepower, so they didn't sweat it too much for the first gen of GF100-based cards.
 
Sadly for NVIDIA, there isn't much a 5870 can't play with Eyefinity on 1920x1200 monitors or lower. The cost of higher-resolution monitors will most likely keep higher resolutions away from mainstream multi-monitor users.

I plan on going Eyefinity with just a 5870, or whatever replaces it this fall.


But personally, I've tried SLI, CrossFire, and dual-GPU cards, and I'm done with all that.
 
Some FC2 numbers from a slightly overclocked Core i7-920 setup with a stock Radeon HD 5870 board, tested with the same benchmark settings as seen from the video leak:
Average Framerate - 73.92 vs. 84.18, Fermi wins by 12.2%
Maximum Framerate - 111.01 vs. 125.84, Fermi wins by 12.8%
Minimum Framerate - 54.55 vs. 65.21, Fermi wins by 16.4%
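
For anyone who wants to redo the comparison with their own results, here is a quick sketch of the percentage arithmetic. The framerates are the ones quoted above; note that the deltas here are computed relative to the HD 5870 baseline, so they come out a touch higher than the quoted figures (which look like they were taken relative to the GF100 numbers instead).

[CODE]
# FC2 framerates quoted above: (HD 5870, GF100 from the leaked video).
results = {
    "Average": (73.92, 84.18),
    "Maximum": (111.01, 125.84),
    "Minimum": (54.55, 65.21),
}

for name, (hd5870, gf100) in results.items():
    # Percentage lead of the GF100 over the HD 5870 baseline.
    delta = (gf100 - hd5870) / hd5870 * 100.0
    print(f"{name} framerate: {hd5870:.2f} vs {gf100:.2f} "
          f"-> GF100 ahead by {delta:.1f}%")
[/CODE]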
Kyle@HardOCP said:
"Rumored Fermi Videos" - Yes, these are in fact authentic videos of a GF100 triple card setup in action. 3 X GPGPU for Ray Tracing Demo, Single GPU for all others. The systems are all 3.2GHz Intel Core i7 boxes and the comparison box being used in the FarCry 2 benchmark shots are of a GTX 285. The videos were leaked accidentally by PCPer.com.

So a 3.2 GHz i7 (and probably not with the slowest possible memory/uncore settings) - felix, how far did you clock yours? FC2 is somewhat CPU-dependent after all.
 
...which I think we concluded is running a different test (another reason could be older drivers, of course).
 
So, this is the second NV release about GF100 and we still don't know anything. I don't know, but if I had a killer product, I'd spread benchmarks left and right to let people know what they should be waiting for... unless the product itself (as opposed to the architecture) is a bit underwhelming. It kinda reminds me of R600, when people were fed info on its horsepower (in terms of GFLOPS) and how it stacked up against G80, with some benchmarks saying how wonderfully it did against G80, and then we got a flop (performance-wise)...

I'm not a fanboy (although I have owned more ATI than NVIDIA cards) and I do hope that Fermi is a success, as it promises some fancy features and performance; however, all this does not quite add up to me...
 
So, this is the second NV release about GF100 and we still don't know anything. I don't know, but if I had a killer product, I'd spread benchmarks left and right to let people know what they should be waiting for...
ATi had a killer product with the HD 5870 and I didn't see any benches two months before launch.
I think they are still finalizing clocks and drivers.

There are some benches at Hardware Canucks:
http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-13.html


http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html
 
http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html

Naturally the cards we benchmarked weren’t equipped with anything above 512SPs since that is the maximum layout this architecture will allow. If we assume the performance we saw was coming out of the beta, underclocked version of a 512SP GF100 running alpha-stage drivers, this is going to be one hell of a graphics card. On the other hand, if NVIDIA was using 448SP equipped cards for these tests, the true potential of the GF100 is simply mind-boggling. Coupled with the compute power and architecture specifically designed for the rigors of a DX11 environment, it could be a gamer’s wet dream come true.
 
So a 3.2 GHz i7 (and probably not with the slowest possible memory/uncore settings) - felix, how far did you clock yours? FC2 is somewhat CPU-dependent after all.
I asked a friend of mine to do the benchmark on his rig, as I don't have an i7 setup to match my HD5870 with. ;)
 
Even an overclocked refresh from AMD will have little chance against this card, it appears.

Looks like the battle will be fought on price again, where AMD should be in a good position to compete. The GF100's performance seems great, but a price of, say, $520 will put it in a small niche, and the 5870 can still thrive; AMD has tons of room to cut the 5870 price by massive amounts. Sub-$300 5870s would probably be no problem for AMD.

So I guess a lot comes down to what the 3/4 Fermi looks like, die size and so on (if there is to be a 3/4 Fermi die), and the half-die Fermi as well. It seems that with tweaks, maybe a 1/2-die Fermi could give the 5870 some performance problems (then again, surely AMD can tweak back, i.e. a 5890 etc.). Then again, with all of NVIDIA's issues getting anything out the door, we're a long way from that. I don't know; if a 1/2-die Fermi doesn't compete with the 5870 straight up, and it kind of seems like it wouldn't judging by the available benches, NVIDIA has left a massive hole in its lineup below the high end.
 
Doesn't make sense. It hasn't been done so far by any IHV and there is no mention of it in their graphics whitepaper.
I think Anand misread that. It specifically says a texture unit can do one texture address and fetch four texture samples. But you need four texture samples for one bilinearly filtered texture fetch, hence that's really the same rate as they always had. The difference now is that they can take four individual samples for gather4 and return them, something older chips couldn't do (from NVIDIA, that is; AMD has been able to do that for a long time already). It is also possible efficiency was boosted in other ways; IIRC all NVIDIA chips (G80 and up) pretty much failed to reach their peak texture rate. Still, 64 units doesn't look like a lot, although if you put that in terms of ALU:TEX ratio it is quite comparable to what AMD has.
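
To make that distinction concrete, here is a small CPU-side sketch (my own illustration, not vendor code; the texture layout and component ordering are simplified) of why a bilinear fetch and a gather4 both touch the same 2x2 texel footprint, but the former returns one filtered value while the latter returns the four raw samples:

[CODE]
def bilinear_fetch(tex, u, v):
    """Classic bilinear filter: read the 2x2 texel footprint and return ONE
    weighted average. Four samples in, one value out."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    t00 = tex[y0][x0]
    t10 = tex[y0][x0 + 1]
    t01 = tex[y0 + 1][x0]
    t11 = tex[y0 + 1][x0 + 1]
    top = t00 * (1 - fx) + t10 * fx
    bot = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bot * fy

def gather4(tex, u, v):
    """Gather4-style fetch: read the same 2x2 footprint but return all FOUR
    raw texels (one channel each) so the shader can filter them itself.
    Real hardware returns them in an API-defined component order."""
    x0, y0 = int(u), int(v)
    return (tex[y0][x0], tex[y0][x0 + 1],
            tex[y0 + 1][x0], tex[y0 + 1][x0 + 1])

# Tiny 2x2 "texture" just to show both paths touch the same data.
tex = [[0.0, 1.0],
       [2.0, 3.0]]
print(bilinear_fetch(tex, 0.5, 0.5))  # one filtered value: 1.5
print(gather4(tex, 0.5, 0.5))         # four raw texels: (0.0, 1.0, 2.0, 3.0)
[/CODE]

Either way it is four texel reads per fetch; what changed is only what gets returned to the shader.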
 
ATi had a killer product with the HD 5870 and I didn't see any benches two months before launch.
I think they are still finalizing clocks and drivers.

There are some benches at Hardware Canucks:
http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-13.html


http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html

The situation was a bit different, as ATI was not pressured by NV's recent product launch.

As to the benchmarks, I had similar results (67 avg) on a 3.2 GHz quad core, although I doubt an i7 would add very much to it. These results are indeed promising, but let me remain sceptical for the moment ;).
 
ATi had a killer product with the HD 5870 and I didn't see any benches two months before launch.
They were not in as bad a situation as NV is right now, with highly underperforming products all over the place.

The "conclusion" slide NV just released is simply ridiculous with its "up to 2 times GT200 performance at 8xAA, high res", quite the best-case scenario since GT200 is nowhere near Cypress at those settings, settings which many NV lovers here were calling totally irrelevant a few hours ago.

I'm quite interested in the way they implemented triangle setup, but I'm skeptical it will be a determining factor performance-wise, even with quite heavy tessellation, unless the engine is a plain pile of shit.

Btw, it seems tessellation needed to run at the lower GPC clock rather than the ROP domain clock for some reason, as they will probably be quite similar.
 
They were not in as bad a situation as NV is right now, with highly underperforming products all over the place.

The "conclusion" slide NV just released is simply ridiculous with its "up to 2 times GT200 performance at 8xAA, high res", quite the best-case scenario since GT200 is nowhere near Cypress at those settings, settings which many NV lovers here were calling totally irrelevant a few hours ago.

I'm quite interested in the way they implemented triangle setup, but I'm skeptical it will be a determining factor performance-wise, even with quite heavy tessellation, unless the engine is a plain pile of shit.


Triangle setup is pretty important once we look at anything past, oh, around 700k polys per screen on the cards out right now; it actually starts to become a bottleneck, and when you get to 1 million+ per screen it becomes the predominant bottleneck. This is why the HD 5xxx series takes a 20%-35% performance penalty with tessellation, regardless of resolution and settings.
 
It kinda reminds me of R600, when people were fed info on its horsepower (in terms of GFLOPS) and how it stacked up against G80, with some benchmarks saying how wonderfully it did against G80, and then we got a flop (performance-wise)...
It kinda reminds me of Conroe, when people were fed info on its horsepower (in terms of GFLOPS) and how it stacked up against NetBurst, with some benchmarks saying how wonderfully it did against NetBurst, and then AMD had a flop (performance-wise)...
You're trying too hard, really.
 
Triangle setup is pretty important once we look at anything past, oh, around 700k polys per screen on the cards out right now; it actually starts to become a bottleneck, and when you get to 1 million+ per screen it becomes the predominant bottleneck. This is why the HD 5xxx series takes a 20%-35% performance penalty with tessellation, regardless of resolution and settings.
That's the tessellation units themselves, but what about a real-world scenario?

Pure tessellation is useless; what makes it powerful are the domain and hull shaders, which are not tied to setup rate. On top of that, all the other shaders have to work with the data thrown at them by the tessellation stage, so the higher the tessellation factor, the higher the pressure on the ALUs.
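
As a rough illustration of that last point (my own back-of-envelope model with made-up numbers, not anything from NVIDIA or AMD material): the triangle count a tessellator emits per patch grows roughly with the square of the tessellation factor, and every new vertex costs a domain-shader invocation, so setup load and ALU load rise together.

[CODE]
def approx_tris_per_patch(tess_factor: int) -> int:
    # A uniformly tessellated triangular patch emits roughly tess_factor^2
    # triangles (exact D3D11 partitioning rules are ignored here).
    return tess_factor * tess_factor

BASE_PATCHES = 10_000  # assumed patch count for a hypothetical scene

for factor in (1, 4, 8, 16, 32):
    tris = BASE_PATCHES * approx_tris_per_patch(factor)
    print(f"tess factor {factor:2d}: ~{tris / 1e6:6.2f}M triangles per frame, "
          f"each new vertex costing one domain-shader invocation")
[/CODE]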

I really think something went wrong here.
 
http://www.hardwarecanucks.com/foru...idia-s-geforce-gf100-under-microscope-14.html

Naturally the cards we benchmarked weren’t equipped with anything above 512SPs since that is the maximum layout this architecture will allow. If we assume the performance we saw was coming out of the beta, underclocked version of a 512SP GF100 running alpha-stage drivers, this is going to be one hell of a graphics card. On the other hand, if NVIDIA was using 448SP equipped cards for these tests, the true potential of the GF100 is simply mind-boggling. Coupled with the compute power and architecture specifically designed for the rigors of a DX11 environment, it could be a gamer’s wet dream come true.

Also from there,

First and foremost of these has to do with exactly what type of card was within the test systems since when asked, NVIDIA merely stated that it was the card that would be shipping on launch day.

If they were going to ship a full part, why not say so? However, the card certainly looks impressive vis-à-vis the 5870. How it matches up to its true competition, the 5970, remains to be seen.

Anyone willing to run the benches for a 5970 with a stock Intel i7 960, 6GB of 1600MHz memory, and an ASUS Rampage II Extreme motherboard running Windows 7? :smile:
 
So according to the Hardware Canucks benchmarks, this GF100 is 24%-28% faster than an HD 5870 in Far Cry 2?

If we take that as a best-case scenario, it doesn't look all that great; actually, it's last year all over again. Is there any reason to assume this isn't a best-case scenario, i.e. is NVIDIA known for demonstrating underpowered parts and choosing unflattering benchmarks?
 