NVIDIA GF100 & Friends speculation

Nvidia!
No! ATI!
No! Islam!
No! Jesus!
No! Capitalism!
No! Socialism!

Man this thread has degenerated into idiocy.

Sure, Fermi may have some problems such as power, scalability, yields, whatever, but Nvidia has done something that's fundamentally creative and extended the concept of GPUs in so doing. It may take them another generation to perfect it, but, without efforts like this, neither they nor ATI nor Intel would be as good tomorrow as they will be now.

Now back to our infinitely scheduled debate on religio...uh...economic syste...uh techno toy technology cults?
 
Umm, you don't honestly believe people upgrade their graphics cards every six months, do you?

No, but like I said, someone who bought a 5870 at launch isn't going to be pissed that something faster is released 6 months later. They've been enjoying it and will continue to enjoy it till their next card is purchased.

Do you think someone who buys a Fermi will be pissed if 6 months later ATI releases a new GPU that is much faster than the Fermi? Or do you think they will be happy with the time they have had with their card and its performance?
 
Sure, they're idiots over there at NV, always removing irrelevant bottlenecks. I mean, it's clear that parallelizing something serial is a bad idea -- no one in the industry is doing such things at the moment!
In fact, it could very well be the case.

Add to that the possibly unbalanced derivatives of the architecture.

How do these setup engines/rasterizers work when you disable half a GPC's SIMD units or a quarter of 2 of them?

That raises at least as many questions as it provides answers to "will it be faster?".
 
You have a benchmark result -- Unigine. With Cypress you get 27 fps, with GF100 -- 43. Is that overkill? Is 27 fps enough for you?
Can I play the benchmark or otherwise gain enjoyment from it in any way? Does it push the hardware to its absolute peak, giving me insight into how my games will run in some way? Or is it just a pretty tech demo with a framerate counter and very little secondary meaning?

Yeah, and I've been lurking long enough to know that most of the people saying "you're biased" are quite biased themselves to begin with. You're not a judge here, you're as subjective as everyone else.
OK, I'll play. I think you're (sometimes hilariously) biased :devilish:

Well, let's say that I have a GTX285. Why would I buy a 5870 for ~+30% performance if I know that in a couple of months I may get a GF100 with +60-150% performance? I see no reason to do this, so for me as a GTX285 owner, 5800 availability isn't a reason not to wait for GF100 (and everything less is simply slower than my GTX285, which I bought a year ago; no, I'm not a fan of AFR cards for $700, thanks).
Did you get those performance deltas for GF100 from a comprehensive review using a multitude of theoretical and game benchmarks, ideally from games you're interested in, at the resolutions you want to run at, using the IQ settings that you need as a minimum to enjoy the graphics fidelity, from an outlet that you can trust as much as is humanly possible to give you an accurate view of real-world performance? Or did they come from NVIDIA?

Technical details can't have zero effect on performance.
The little information we have clearly says that GF100 will be faster than Cypress. The only question right now is how much faster. Those fancy "4 times or even more faster" charts mean that under some conditions it may be 4 times or even more faster than Cypress. As simple as that.
Cypress has GF100 licked in some non-subtle ways, by big margins. You might not enjoy a modern Radeon architecturally (I struggle sometimes, so it's cool, you're in good company) but it's hard to argue with Cypress's raw single precision numbers, big ROP performance and that large dollop of sampling and filtering.
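
Just for reference, a quick back-of-the-envelope sketch of those raw Cypress (HD 5870) peaks, using the publicly quoted 850 MHz clock, 1600 ALU lanes, 80 samplers and 32 ROPs; theoretical ceilings only, not delivered performance:

```python
# Back-of-envelope peak figures for Cypress (HD 5870), from public specs.
# Theoretical ceilings only; real workloads land well below them.

core_clock_ghz = 0.850   # engine clock
alu_lanes      = 1600    # 320 VLIW5 units x 5 lanes
tex_units      = 80
rops           = 32

sp_gflops   = alu_lanes * 2 * core_clock_ghz   # FMA counts as 2 flops/lane/clock
bilinear_gt = tex_units * core_clock_ghz       # bilinear texels per second (G/s)
pixel_gp    = rops * core_clock_ghz            # colour writes per second (G/s)

print(f"Peak SP throughput : {sp_gflops:.0f} GFLOPS")     # ~2720
print(f"Peak bilinear rate : {bilinear_gt:.0f} GTexel/s") # ~68
print(f"Peak pixel fill    : {pixel_gp:.1f} GPixel/s")    # ~27.2
```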

They've disclosed quite enough to make informed guesses about performance.
No they haven't. Where are my clocks?!

As far as I remember, you weren't too fond of G80 either, so why should I bother?
Yeah, why did you bother?

Let's all make a deal, fellow posters in this thread! When the first GF100 GeForce is finally released and tested properly and real data is out there, I'll delete this thread and we'll never speak of it again. 'k?
 
People like you?

http://forum.beyond3d.com/showpost.php?p=1339955&postcount=156



From the Radeon 5800 review thread. My, the double standards sure are flying there...

So if it's Nvidia, then it's fine to compare a dual GPU to a single one. But if it's AMD, then it's certainly not OK. My, how your comment on IQ must hurt.

Regards,
SB



LOL, I love it when something like this comes back to bite you in the ass :LOL:!

Right now no one can buy a GF100. So you call "faith" something that doesn't exist? OK then.

You know it's hard to have a conversation with someone who twists your responses as they please...

You have a benchmark result -- Unigine. With Cypress you get 27 fps, with GF100 -- 43. Is that overkill? Is 27 fps enough for you?

No, last time I checked 43 was more than 27. Again, if Fermi is what it's pictured to be, I will buy one. But until I have enough proof that it's worthwhile, I'm gonna remain sceptical. And by enough proof I mean thorough benchmarks done by independent reviewers (plural on purpose).

Yeah, and I've been lurking long enough to know that most of the people saying "you're biased" are quite biased themselves to begin with. You're not a judge here, you're as subjective as everyone else.

Of course I am. But I don't go around attacking everyone who dares to say something sceptical in the light of these overwhelming and hard facts that were presented today! And my bias meter does sway towards ATI, as I don't really approve of NV's general conduct. However, it doesn't cloud my judgment.


Well, let's say that I have a GTX285. Why would I buy a 5870 for ~+30% performance if I know that in a couple of months I may get a GF100 with +60-150% performance? I see no reason to do this, so for me as a GTX285 owner, 5800 availability isn't a reason not to wait for GF100 (and everything less is simply slower than my GTX285, which I bought a year ago; no, I'm not a fan of AFR cards for $700, thanks).

You see, it's the +60-150% performance that I'm wary about. You should know that I own both a GTX285 and an HD5870. I tend to go for whichever GPU is on top at the moment.


Technical details can't have zero effect on performance.
The little information we have clearly says that GF100 will be faster than Cypress. The only question right now is how much faster. Those fancy "4 times or even more faster" charts mean that under some conditions it may be 4 times or even more faster than Cypress. As simple as that.

You know, that "how much faster" is quite important now, ain't it? "4 times or even more faster" means squat if your bottleneck is somewhere else. So it's not as simple as you're painting it.
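
To put a number on that, here's a minimal Amdahl-style sketch with purely made-up geometry fractions: a 4x speedup in setup/tessellation only buys you as much as the share of frame time that's actually geometry-limited.

```python
# Amdahl-style sketch: overall frame speedup when only the geometry/setup
# portion of the frame gets faster. The fractions below are purely made up.

def frame_speedup(geometry_fraction, geometry_speedup):
    """Overall speedup when only the geometry-limited share of a frame scales."""
    return 1.0 / ((1.0 - geometry_fraction) + geometry_fraction / geometry_speedup)

for frac in (0.1, 0.2, 0.5, 0.9):
    print(f"geometry = {frac:.0%} of frame -> {frame_speedup(frac, 4.0):.2f}x overall")
# 10% -> 1.08x, 20% -> 1.18x, 50% -> 1.60x, 90% -> 3.08x
```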

Sure, they're idiots over there at NV, always removing irrelevant bottlenecks. I mean, it's clear that parallelizing something serial is a bad idea -- no one in the industry is doing such things at the moment! [/end sarcasm]
It's more a question of you grasping at straws than of NV not disclosing any information. They've disclosed quite enough to make informed guesses about performance.

As far as I remember, you weren't too fond of G80 either, so why should I bother?

Your memory serves you badly, I must say. I was impressed by G80 (feel free to find my posts stating otherwise - anyone sensing another XMAN26 action ;)?). And guesses are not facts. As I said, the preliminary numbers are indeed impressive. However, there are some things in the whole story that don't add up for me. Feel free to influence, not force, my opinion.

Umm, you don't honestly believe people upgrade their graphics cards every six months, do you?

I do that sometimes ;)

EDIT: Sorry, Deg, that bit about being fond of G80 was clearly not aimed at me...
 
In the worst case (not setup-limited, not raster-limited), GF100 still compares favorably to Cypress. In raster-limited scenarios, they've got 4x parallel raster generation vs 2x for Cypress, and for setup-limited scenarios they'll have 4x more headroom (I'm not including tessellation, just bog-standard workload-supplied tris). So you have a card which, in the worst-case workload (not geometry limited), is on par with Cypress, but in the best-case scenario (geometry limited) will have fewer bottlenecks. That gives greater workload flexibility and is potentially more future-proof.

For tessellation, if amplification is 64x, then there are 0.5 CUDA cores per (u,v) output for domain shading. Let's just pull a number out of my butt and say a domain shader takes 32 cycles. That means it will take 64 cycles, using all available cores in the SM, to DS all of the TS's output. So in effect the tessellator delivers 64 amplified vertices in 64 cycles, for an effective tessellation rate of 1 vertex per clock. But there are 16 SMs, so you get 16 tessellated vertices per clock of throughput; only 4 of these can be set up per clock, though, so the actual delivery rate will be 4 tessellated vertices per clock.

But this assumes that the CUDA cores are 100% tasked to DS, which, given the setup bottleneck, doesn't seem likely. At some point you will have domain shaded enough vertices and filled up some buffer, waiting for setup.
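
Here's that napkin math spelled out, assuming the figures above (32 cores per SM, 16 SMs, 4 triangles/clock of setup, and the made-up 32-cycle domain shader) and the idealised 100%-DS case:

```python
# Idealised GF100 tessellation throughput, using the assumptions above:
# 64x amplification, 32 CUDA cores per SM, a made-up 32-cycle domain shader,
# 16 SMs and 4 triangles/clock of setup. Cores are assumed 100% tasked to DS.

amplification   = 64   # output vertices per patch
cores_per_sm    = 32
ds_cycles       = 32   # hypothetical domain-shader cost per vertex
num_sms         = 16
setup_per_clock = 4    # parallel setup/raster engines

cycles_per_batch   = amplification / cores_per_sm * ds_cycles  # 64 cycles per 64 verts
verts_per_clock_sm = amplification / cycles_per_batch          # 1 vertex/clock per SM
chip_ds_rate       = verts_per_clock_sm * num_sms              # 16 vertices/clock
delivered          = min(chip_ds_rate, setup_per_clock)        # setup-limited: 4/clock

print(f"DS output per SM     : {verts_per_clock_sm:.0f} vert/clock")
print(f"DS output, whole chip: {chip_ds_rate:.0f} verts/clock")
print(f"Delivered after setup: {delivered:.0f} tris/clock")
```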
 
No, but like I said, someone who bought a 5870 at launch isn't going to be pissed that something faster is released 6 months later. They've been enjoying it and will continue to enjoy it till their next card is purchased.

Not at all. That's why I responded to your comment that by the time DX11 games are more plentiful, people will have upgraded from the 5870s they just bought. If I'm not mistaken, lots of folks are still using hardware they bought 3 years ago.
 
People like you?

http://forum.beyond3d.com/showpost.php?p=1339955&postcount=156



From the Radeon 5800 review thread. My, the double standards sure are flying there...

So if it's Nvidia, then it's fine to compare a dual GPU to a single one. But if it's AMD, then it's certainly not OK. My, how your comment on IQ must hurt.

Regards,
SB

In case you didn't gather, I was arguing against the use of the X2s/dual GPUs in the initial reviews, not arguing for their use. Comparing a single to a dual-GPU setup is, and always will be, a stupid freakin' idea. If one wants to put one in there so people can see performance against a dual setup, then do so, so long as you make it known it is done for the purpose of performance analysis, not for the sake of comparison. Also, my argument then was the fact that the 5870 compared to the 4870 is WOW, a monster increase, but compared to the GTX285 it's meh.
 
No, last time I checked 43 was more than 27. Again, if Fermi is what it's pictured to be, I will buy one. But until I have enough proof that it's worthwhile, I'm gonna remain sceptical. And by enough proof I mean thorough benchmarks done by independent reviewers (plural on purpose).

That makes no sense. You can't buy a GF100 card today. Either you wait for the launch or you buy something else right now.
I saw a nice architecture and a few numbers, so I decided to wait for GF100.
 
That makes no sense. You can't buy a GF100 card today. Either you wait for the launch or you buy something else right now.
I saw a nice architecture and a few numbers, so I decided to wait for GF100.

What makes no sense? I said I would buy one (future tense). I own both a 285 and a 5870, so waiting is all that's left for me. I guess that makes it easier for me to be cautious about the whole secrecy / partial-availability-of-information thing...

I found some amazing Fermi benchmarks. They clearly show the power of 4X in action.

lol
 
Let's all make a deal, fellow posters in this thread! When the first GF100 GeForce is finally released and tested properly and real data is out there, I'll delete this thread and we'll never speak of it again. 'k?

:cry:

Noooooo. We're just getting to the good parts. We don't always get entertaining fights like this.

Oh and call me outdated, but I was under the impression Fermi was coming out within the next few weeks. Turns out it was March. Goddammit.
 
What makes no sense?

The whole discussion about buying a GF100 card right now.
Everybody has two options: waiting for GF100 or buying a new card in the next few days. After the announcement of the "graphics side" of GF100, I will wait. But that's only my opinion.
 
I'm amazed time and again. :) You sound a bit like Charlie, only more techie and educated in your ability to turn almost everything against Nvidia (I don't mean that in a bad way - I'm absolutely fine with different perspectives).
I thought it'd be fun to get people asking more questions about this - since there's a fairly lame assumption going on that setup rate is the bottleneck.

But what if cause and effect the way you take it are reversed?
Truthfully, I think it's a symbiotic thing: NVidia is rasterising small and large triangles efficiently by using a multiple setup configuration. This is a sort of multi-GPU rendering, but all in one GPU. I wonder if that will be relevant to SLI...

As an aside, I've been wondering for years why ATI's 1 triangle per hardware thread isn't a disaster of epic proportions - imagine a 1 fragment triangle leaving 15 quads out of the 16 in a hardware thread doing nothing. Maybe it is an epic disaster, and we're now seeing that in tessellated scenes. It'd be nice to find out, for sure, exactly what ATI's doing here - since I'm not 100% on the 1 triangle per hardware thread thing.
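
For what it's worth, a tiny sketch of that worst case, assuming a 64-pixel hardware thread (16 quads), best-case quad packing and the unconfirmed one-triangle-per-thread rule:

```python
import math

# Hypothetical quad occupancy if a rasteriser packs only one triangle per
# 64-pixel hardware thread (16 quads). Both the thread width and the
# one-triangle-per-thread rule are assumptions, per the caveat above.

WAVE_PIXELS    = 64                 # assumed hardware-thread width
QUADS_PER_WAVE = WAVE_PIXELS // 4   # 2x2 pixel quads per thread

def quad_occupancy(covered_pixels):
    """Fraction of the thread's quads kept busy by one triangle (best-case packing)."""
    quads_touched = min(math.ceil(covered_pixels / 4), QUADS_PER_WAVE)
    return quads_touched / QUADS_PER_WAVE

for px in (1, 4, 16, 64):
    print(f"{px:2d}-pixel triangle -> {quad_occupancy(px):.0%} of quads busy")
# 1 px -> 6%, 4 px -> 6%, 16 px -> 25%, 64 px -> 100%
```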

Jawed
 
The whole discussion about buying a GF100 card right now.
Everybody has two options: waiting for GF100 or buying a new card in the next few days. After the announcement of the "graphics side" of GF100, I will wait. But that's only my opinion.

I didn't say anything about buying it right now (let alone starting that discussion). But it's getting pointless anyway (or was from the start). If you wanna know, trace back the conversation. I just voiced my opinion about how the whole situation looked to me, that's all (I mean, that's when DegustatoR jumped on me ;) ).
 