Sorry for this but...

How fast will NVIDIA's next-generation GPU be?

  • It beats the R300 in some tests and loses in others.

  • It wins hands down in almost all situations.


  • Total voters: 91

nAo

Nutella Nutellae
Veteran
I posted the first poll ever (hi Boddo!) on this board, and it was about DX9 GPUs; now I propose a new (and boring) poll.
The first time I declared my vote and it ended pretty badly for me (I predicted NV30 would have been on the market before R300); now it's time to cast a new vote! Obviously you should cast your vote assuming NV30 and R300 are compared in a non-CPU-limited situation.
Even this time I'll make my bet: I say NV30 will be faster than R300 in most cases. For NVIDIA's sake I hope I'm not wrong this time too ;)

ciao,
Marco

P.S. You have 3 days to cast your vote, so... some of you may want to wait for some highly reliable, last-minute rumour to hit the net before the NVIDIA NV30 presentation, which should be set for next Monday.
 
Even this time I'll make my bet: I say NV30 will be faster than R300 in most cases. For NVIDIA's sake I hope I'm not wrong this time too

Apart from probable chip speed what, may I ask, gives you this impression?
 
DaveBaumann said:
Apart from probable chip speed what, may I ask, gives you this impression?
I have feelings... 8)
I expect it to be faster because NVIDIA put a lot of R&D into it, and it seems they have really stressed rendering efficiency this time.
What do you expect, Dave, if I may ask..?

ciao,
Marco
 
nAo said:
What do you expect, Dave, if I may ask..?

I'm not going to say what my impression will be, given what I know.

I will say that IMO from a political point of view, for NVIDIA, it has to be faster.

I expect it to be faster because NVIDIA put a lot of R&D into it, and it seems they have really stressed rendering efficiency this time.

And what areas of 'efficiency' do you think have been addressed?

alexsok said:
What gives you the impression that chip speed would be the only thing NV30 will have over R300?

Ummm, where did I put my thoughts, Alex? I was asking someone else why they felt it would be faster - that doesn't indicate what my thoughts are.
 
Alex, I thought you were past this kind of behavior... :)

NV30 lacks floating-point texturing for 1D-3D and cubic environment texture maps; R300's got that. AFAIK the jury's still out on its displacement mapping features.

So obviously it's not more powerful in every way conceivable. Stop bullshitting please.


*G*
 
I voted 2; pretty sure it'll be an even split, or close to it. But I am hoping it will crush the R300. I want one of those at a cheapo price :)
 
Well, it's pretty simple: ATi was trying to make a badass card and get it out ASAP; nVidia were trying to make a true crusher. With FSAA and AF maxed out, I have no doubt nVidia will be fastest, as they have had the time to implement 3Dfx tech, and we all know how much work 3Dfx put into IQ as far as the Rampage was concerned. nVidia can afford better engineers, has had a lot more time to work on their chip, and has .13u. So how the hell could the R300 match it? Of course, as for the R350... ;)
 
I would vote for the "How the hell should I know" option if there was one. The feeling floating in the air says that NV30 will be faster than the Radeon 9700, but as to how much, the rumors are mixed.
I wouldn't be shocked if it turned out either slower or faster.
 
Personally, if it was available, I would have voted between "wins some and loses others" and "wins almost all."

The idea that it will win every benchmark is, well, silly. It will most definitely lose some. But I think it will win the majority of the benchmarks against the R300. Anisotropic filtering performance should definitely be on par; FSAA performance is the only unknown.

That said, the smaller die process and nVidia's performance track record lead to the obvious conclusion that it will beat out the Radeon 9700 in most performance tests.
 
This poll was the perfect bait for the fish species called "Common forum f@nboy". It is just irresistible, like a minnow to a sea bass.
:D
 
For the record, I voted for "win some, lose some". I am sort of the opposite of what Chalnoth thinks, though. I think in most fillrate / bandwidth limited game situations, the R-300 will win more tests than it loses. I think the NV30 will be faster in "synthetic" pixel shader tests. (Or faster in "cinematic rendering" situations, if you will.)

That said, the smaller die process and nVidia's performance track record lead to the obvious conclusion that it will beat out the Radeon 9700 in most performance tests.

??

You always have to shudder whenever Chalnoth uses words like "obvious" or "most definitely". ;)

1) I don't get the focus on die process. We already know the two parts have similar transistor counts. Beyond that, we can only speculate that 0.13 might give NV30 a core clock (theoretical fill rate) edge, and possibly lower cost.

2) Everyone expects the "raw" bandwidth of NV30 to be significantly lower than the R-300's. "Effective bandwidth" is a complete unknown.

Considering 2, I don't see how it's "obvious" at all that NV30 would win in most performance tests.
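
To put some rough numbers on point 2 (the NV30 figure here is only the circulating rumour of 500 MHz DDR-II on a 128-bit bus, not a confirmed spec; the 9700 Pro figure is its shipping spec):

NV30 (rumoured): 128 bits x 2 (DDR) x 500 MHz / 8 = ~16.0 GB/s
R-300 (9700 Pro): 256 bits x 2 (DDR) x 310 MHz / 8 = ~19.8 GB/s

So even with an aggressive memory clock, the raw number stays behind the R-300's, and any win there would have to come from "effective bandwidth" tricks that nobody outside nVidia can measure yet.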

Also, what "track record" of nVidia are you speaking of? Another nonsensical argument. nVidia has never produced a part that is as far ahead of its last-generation part as the R-300 currently is over NV28.

To be clear, this is not to say that nVidia can't do it, and Chalnoth's predictions won't turn out correct. But to call them "obvious conclusions"..... :rolleyes:
 
I don't think NV30's 128-bit bus will hurt it that much. I think the importance of high-speed memory will get lower in the future. For today's games memory bandwidth is very important, but in the future the calculation power in the fragment pipelines will take over as the main performance-determining factor. In my phong demo, enabling anisotropic filtering, using fatter textures etc. doesn't give or take a single fps. For the very same reason I don't think any future design from now on will have more than one texture unit per pipeline; maybe they'll have even fewer, like a small set of independent texture units.
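
A crude back-of-the-envelope shows why (these numbers are purely illustrative, not any real chip's specs):

per-pixel ALU work: ~40 cycles of fragment math
per-pixel memory traffic: 1 texel fetch = 4 bytes
shaded pixel rate: 8 pipes x 500 MHz / 40 cycles = 100 Mpixels/s
texture bandwidth used: 100 Mpixels/s x 4 bytes = 0.4 GB/s

0.4 GB/s is a rounding error next to even a 128-bit bus, so such a chip is limited by fragment math, not by memory, and faster RAM buys nothing.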
 
Humus...

You do realize we are saying pretty much the same thing? I don't expect NV30's bandwidth to "hurt" it when doing significant pixel shaders (like your latest demos). I do expect it to hurt when running practically every game out there, though, where pixel shader performance is a non-issue and "pure texturing / filtering" is what is limiting performance.

Doom3 should be interesting....it seems to me it's sort of a cross between shading and more traditional multitexturing....
 
Well, it's pretty simple: ATi was trying to make a badass card and get it out ASAP; nVidia were trying to make a true crusher. With FSAA and AF maxed out, I have no doubt nVidia will be fastest, as they have had the time to implement 3Dfx tech, and we all know how much work 3Dfx put into IQ as far as the Rampage was concerned. nVidia can afford better engineers, has had a lot more time to work on their chip, and has .13u. So how the hell could the R300 match it? Of course, as for the R350...

So Ati offering 2-6x the performance of the Ti 4600 in FSAA+Aniso is not a *true crusher*?? You think that somehow Nvidia's claim that it is *more than 2x* faster with FSAA equates to something *more crushing* than the 9700????

Time to implement 3dfx tech??? Is that what you think has been going on here? They are delayed by 6 months because they are *trying to implement* some 3-year-old 3dfx tech?

Why do you think they have had a lot more time to work on the chip??? They only have a few samples to even use at this point... I just don't even understand the logic.

Better engineers? Based on what... ATi got their product up and out the door months ago... and are already working on the release of the next one...

.13u is being treated like the golden fleece or something. ATi already did what Nvidia is claiming (2x performance) with .15u, a 100 MHz SLOWER clock speed, and 200 MHz SLOWER RAM. And yet people are still concluding that Nvidia has better engineers????

Look, the NV30 will obviously be cool. It will obviously be somewhat faster... there is no doubt. But I just don't see how people reach the conclusions they do... like the ones above.
 
I've voted number 3.

Why the hell not?

They've had the R9700 at least since its launch date (and it would not surprise me if they had it in beta form before that). ;)

They should have performed every benchmark with lots of different settings and PC configurations by now. They should really know what the R9700 is capable of in all its current/different situations.

So when they launch, I expect them to say that it stomps the competition.

Anything else puts them as 2nd best, says that the need for the .13 process was unjustified, and that their engineers aren't as good as ATi's.

P.S. I'm not a fanboi, or inclined to support one or the other, but this is a war of minds: who can get the best product and hold the speed crown for the longest period of time.

1st is everything, 2nd is nothing.
 