NVIDIA GF100 & Friends speculation

Nothing will happen. Really, you can't compare AFR mGPU systems with single GPUs. You need profiles, you have microstuttering, and you effectively have only half of the memory. I hope that even the US magazines will criticize single-card AFR systems.

Just a quick note before I run out to dinner with the wifey... you need profiles and you have microstuttering with single GPUs as well.
 
Let's all make a deal, fellow posters in this thread! When the first GF100 GeForce is finally released and tested properly and real data is out there, I'll delete this thread and we'll never speak of it again. 'k?

Feck no! We should learn from our mistakes!

And I need a point of reference for my "64 TMUs enabled, 64 disabled" told-you-so PM in H2 of this year.
Degustator said:
For me, getting GTX 285 + 130% with 8x MSAA in HAWX is already enough not to be disappointed with the provided info. As for the rest of the performance numbers -- well, it's just a matter of time now.

But why look at the canned benchmarks? Everyone knows that 8xMSAA was hardly one of GT200's strong points.
Saying "it's twice as fast at 8xMSAA @ 2560x1600" may sound impressive, but you've upgraded from lackluster to Cypress levels. If that's all, then it isn't much of an upgrade now, is it?
That's the whole problem with "technology previews" like this. For Fermi it was the 8x DP performance... sounds impressive, but isn't much of a jump over currently available parts. This time it's the amazing 8xMSAA scores, which, quite frankly, were one of the worst attributes of GT200.
 

With a 2.33x performance increase in HAWX, the GF100 card would be a little bit slower than Hemlock:
http://www.hardware.fr/articles/777-13/dossier-amd-radeon-hd-5970.html :LOL:
 
Hm. To me it's obvious Nvidia has made quite a chip design, and has innovated very well with their approach to tessellation. Truly, I only have two questions: how well will this architecture scale (down), and is it actually manufacturable at the top end?
Yes, I know they can make engineering samples, but can they make the top-end chips in any meaningful quantities? I'm keeping in mind that I had to wait 5+ weeks in line, after having paid for my 5870, before I got it, and those had a comparatively 'good' production ramp...
(Power draw means nothing to me, neither does price, but there is a limit to how long I can be arsed to wait for it :/)
 
I don't know that GT200 is a DX10.1 card. ;)
And that's the point. Comparing GT200 DX10 results against cards that can do DX10.1? Was the GF100 running in DX10 when they got the 2.33x, or in DX10.1?


I'm so glad you got the CB benches there.

Take Page 18: http://www.computerbase.de/artikel/hardware/grafikkarten/2009/test_grafikkarten_2009/18/#

Scroll down and unfold the 2560x1600 8xMSAA table (if you're not already seeing it).

Overall, the HD 5870 is almost 100% faster than GT200 at 2560x1600 with 8xMSAA, and the HD 5970 is 200% faster. So saying that your card is up to twice as fast (100%) isn't really awe-inspiring in that sense.

The warning message at the top indicates that at those settings games are generally unplayable or crash, and that you shouldn't base any assumptions on those numbers... makes sense, no?
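To put those claims on one scale -- purely an illustrative sketch, taking the numbers above at face value and treating the GTX 285 as the 1.0 baseline (the function and labels below are just mine):

```python
# Rough normalization of the "% faster" claims to a GTX 285 = 1.0 baseline.
# Purely illustrative: the 2.33x figure is NVIDIA's HAWX 8xMSAA claim, the
# HD 5870/5970 deltas are the ComputerBase 2560x1600 8xMSAA numbers above.

def faster_to_multiplier(percent_faster):
    """Convert 'X% faster than baseline' into a multiplier of the baseline."""
    return 1.0 + percent_faster / 100.0

claims = {
    "HD 5870 (~100% faster)": faster_to_multiplier(100),
    "HD 5970 (200% faster)": faster_to_multiplier(200),
    "GF100 claim (2.33x)": 2.33,
}

for name, mult in sorted(claims.items(), key=lambda kv: kv[1]):
    print(f"{name:24} ~{mult:.2f}x GTX 285")
# The claimed GF100 figure lands between the HD 5870 (2.0x) and HD 5970 (3.0x).
```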
 
So the 2.33x is probably coming from having enough memory(*), the improved 8xMSAA, DX10.1 and the geometry setup -- in that order.
(*) I still haven't seen precisely why the GeForces run out of memory (at least at 1 GB) before the Radeons. Compression, or just better management from the driver?
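For scale, a rough back-of-envelope sketch (my own assumptions: plain uncompressed 32-bit colour and 32-bit depth/stencil per sample, ignoring colour/Z compression, textures and driver overhead) shows why any difference in memory handling surfaces at exactly these settings:

```python
# Back-of-envelope framebuffer footprint at 2560x1600 with 8xMSAA.
# Assumes uncompressed 32-bit colour (RGBA8) and 32-bit depth/stencil (D24S8)
# per sample, and ignores colour/Z compression, textures, geometry and driver
# overhead -- so this only shows the order of magnitude, not real usage.

WIDTH, HEIGHT, SAMPLES = 2560, 1600, 8
BYTES_COLOR = 4   # RGBA8 per sample
BYTES_DEPTH = 4   # D24S8 per sample

pixels = WIDTH * HEIGHT
msaa_bytes = pixels * SAMPLES * (BYTES_COLOR + BYTES_DEPTH)
resolved_bytes = pixels * BYTES_COLOR  # resolved back buffer, per buffer

print(f"8xMSAA colour + Z: {msaa_bytes / 2**20:.0f} MiB")     # ~250 MiB
print(f"Resolved buffer:   {resolved_bytes / 2**20:.0f} MiB")  # ~16 MiB each
# Roughly a quarter of a 1 GB card goes to the multisampled surfaces alone,
# before textures or any game-specific render targets are counted.
```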
 
And that's the point. Comparing GT200 DX10 results against cards that can do DX10.1? Was the GF100 running in DX10 when they got the 2.33x, or in DX10.1?

It has nothing to do with AA.

I'm so glad you got the CB benches there.

Take Page 18: http://www.computerbase.de/artikel/hardware/grafikkarten/2009/test_grafikkarten_2009/18/#

Scroll down and unfold the 2560x1600 8xMSAA table (if you're not already seeing it).

Overall, the HD 5870 is almost 100% faster than GT200 at 2560x1600 with 8xMSAA, and the HD 5970 is 200% faster. So saying that your card is up to twice as fast (100%) isn't really awe-inspiring in that sense.

The warning message at the top indicates that at those settings games are generally unplayable or crash, and that you shouldn't base any assumptions on those numbers... makes sense, no?
Two things: they use two DX11 games, and we don't know how GF100 will perform in those. The second thing is that the GT200 cards are running out of memory; that's the next question we can't answer. You want to compare something? Use the 1920x1200 setting.

So the 2.33x is probably coming from having enough memory(*), the improved 8xMSAA, DX10.1 and the geometry setup -- in that order.

Not really, because it would be a lot faster.
 
I think DX10.1 has a lot to do with AA (performance).

No, DX10.1 increases the performance because of Gather4. It has nothing to do with AA. If they wanted such a benchmark, they would have put their hands on BattleForge or Stalker with DX11.
 
Not really, because it would be a lot faster.

Right, I remembered it as 2.33x at the 2560 setting; it probably isn't. http://hardocp.com/images/articles/1263608214xxTstzDnsd_1_25_l.gif
So let's stick with the 1920x1200 numbers.

Doesn't GT200 (GTX 285) have 1 GB of memory? That's the same as 5870.

Sure, but you often see it drop off at the highest resolutions, so it must be using it less efficiently in some way. With lots of settings, like at cb.de, you'll often see the 295 drop off first, and then the 285, while you rarely see the drop for the 1 GB Radeons.
 
:cry:

Noooooo. We're just getting to the good parts. We don't always get entertaining fights like this.

Oh and call me outdated, but I was under the impression Fermi was coming out within the next few weeks. Turns out it was March. Goddammit.

March? HAHAHA, that's what they WANT you to think. In Feb it will be April, in March it will be May, in April - June, in May - July, in June - Aug, in July - Sep, in Aug - Oct, in Sep - Nov, and then they will finally release it in time for Xmas and their master plan will be fulfilled.

See, it's all really a bar bet that Jen-Hsun made, that the power of Nvidia was so strong they didn't even need to release a product to prevent ATI's success, just endless reveals.
 
OK, I'm sorry, it looks like my memory was wrong and you weren't negative about G80 and GT200 before their release.

Apology accepted.

Price and performance have nothing in common. The 5670 costs $100 and the 5970 costs $700 -- is it 7x faster? No. So does that mean that everyone should go and buy a 5670? Nope. Price is what you're ready to pay for a product, and with graphics cards, performance in today's games is not the only factor in pricing. So if the 5870 ends up with 75-80% of a GF380's performance at 60% of its price, that's because the GF380 has some other benefits to a buyer beyond performance alone. I've already described some of these benefits. Surely, if you don't think they're important, then you're better off buying a 5870 -- IF you're OK with its performance, because deltas aren't absolute numbers. If enough people think the same, NV will be forced to drop its prices. So that'll be solved one way or another, and I don't see any reason to talk much about it.

It's all relative to individual needs. Personally, I'm looking for a fast single-GPU solution with DX11 support, so my main consideration is the price-performance ratio between the two product lines. Everything after that is very secondary for me.

As I've said it's better to sell at a loss than not to sell at all. The pricing will be competitive or the products won't be on the market at all.

I wasn't trying to suggest that NV won't compete; they'll certainly do their best. But I don't think it unfair to also suggest that AMD is in the driver's seat at this point (a 6-month market lead, a much smaller chip, more time to get yields up) and has more room to maneuver on pricing. That's not a bad position to be in. The danger for them, though, is NV leveraging the devrel advantage they clearly have to get some big-name games out in the near future that push geometry loads heavier than we've seen before. IMO, Cypress strikes me as a very evolutionary part, enabling AMD to hit that Win 7 release window with DX11 support, but they can't sit on this design for too long if geometry usage scales up the way that, say, fill rate demands did in the early 2000s.

The 5870 isn't fast enough for me on my 24" 1920x1200 display, so I don't really understand how it is fast enough for you on a 30" display.

My GTX 285 is fast enough for everything I've played over the last year. Dragon Age at 4x MSAA and 2x SSAA with 16xAF at 25x16 runs just fine. Dirt 2 demo, Batman, Call of Duty, etc. But I've always been fine with 30-35 fps for most games.

Fermi's key points are not only performance but features as well. So it's really a question of whether you care about those features (PhysX, CUDA, 3D Vision, etc.). If you do, then you don't really have a choice. If you don't, then, well, you need to judge from a performance POV. For me, PhysX is more of a killer feature than DX11 at the moment, so I don't really have much choice (well, I could wait for a mid-range Fermi GPU and use it as a dedicated PhysX accelerator, but why would I want to do that instead of simply buying a GF100 card?).

Absolutely. I just replayed Batman: AA and turned off PhysX. Having played the game once already, I didn't consider the extra visuals from PhysX worth the frame rate loss. This is where Fermi could be compelling, it should be able to let users 'enable and forget', at least for these initial PhysX games.

Like I said upstream, as a consumer I'm waiting to learn more on board configs, pricing, clock speeds, relative performance, etc., before deciding on my next GPU upgrade. As a hardware geek, however, there's no doubt that Fermi is far more interesting as an architecture. NV has done a lot of heavy lifting early in the DX11 life cycle that will most likely serve them well, especially as they transition the design to smaller fab processes over the next few years.
 
Is that not worth noting?

Sure, but I'm just pointing out that those particular numbers might not be relevant to Nvidia's Fermi/GT200 8xAA comparison, i.e. that when they claim Fermi is 2.33x the speed of GT200 with 8xAA, they weren't referring to situations where the latter ran out of memory.
 
March? HAHAHA, that's what they WANT you to think. In Feb it will be April, in March it will be May, in April - June, in May - July, in June - Aug, in July - Sep, in Aug - Oct, in Sep - Nov, and then they will finally release it in time for Xmas and their master plan will be fulfilled.

See, it's all really a bar bet that Jen-Hsun made, that the power of Nvidia was so strong they didn't even need to release a product to prevent ATI's success, just endless reveals.


I might need a really big bucket of popcorn after all. ;)

It's kind of obvious why 2560x1600 + 8x was chosen: GT200 crashes and burns due to memory management issues with AA, I think. 1.5 GB cards should help - until you try to up the texture resolution and bam! OOM again.
Ninjaedit: @Trini, all the other performance aspects bar memory would be just as valid below 2560 as well, so why use those settings?
 