R300 the fastest for DoomIII, John Carmack Speaks Again

OpenGL guy said:
If performance is all you care about, then why don't you compare ATI's superior mobile products to nvidia's?

Because I will, in all likelihood, never own a laptop. If I ever do purchase one, however, my main concerns will not be 3D gaming, but battery life and base functionality.

OpenGL guy said:
I mean, you can't just say "nvidia has been putting out superior products to ATI's" and not qualify what you are talking about. It's quite clear that it isn't true, even by your standards.

I guess I just put too much faith in people to understand that I was speaking in the context of this thread. Maybe someday I'll learn...
 
Chalnoth said:
Because I will, in all likelihood, never own a laptop. If I ever do purchase one, however, my main concerns will not be 3D gaming, but battery life and base functionality.

And do you know anything about relative power consumption or "base functionality" of either part?

Chalnoth said:
I guess I just put too much faith in people to understand that I was speaking in the context of this thread. Maybe someday I'll learn...

Maybe you should learn to not make such blanket statements...
 
I do know that the Mobile Radeons generally have very slightly better power usage than the GF2/4 Go parts, but I also know that I would rather get an even less power-hungry GPU than go for 3D functionality.

Oh, and by "base functionality," I'm talking about the ability to do some word processing, or maybe watch a DVD movie.

Regardless, I'm probably not ever going to buy a laptop, so it's not really a concern of mine.
 
OpenGL guy said:
you should learn to not make such blanket statements...
Ahem.

Or... perhaps you could try to interpret messages as most humans do? Which single company could be most accurately described as "dominant" in the graphic semiconductor industry for the past two years? nVIDIA. Obviously there are qualifiers, but remember this is a multiple choice question, and that your goal is to choose the "most correct" answer.

I agree, ATI has a better product in the mobile arena. But I'm sure you're aware that the mobile sector is relatively minor, and that for gaming purposes it approaches insignificance. And ATI may well dominate the next generation GPU market. Good for them, if so! But to suggest that nVIDIA isn't the dominant player at the moment is... (/set politeness mode=true) misguided.
 
demalion said:
I'm a bit astounded by the resistance to the possibility of the R300 being clearly better than the NV30 by some.

You shouldn't be. nVidia has been consistently putting out superior products to ATI's.

That statement, made in response to my post and its points, is not equal to:

"Which single company could be most accurately described as "dominant" in the graphic semiconductor industry for the past two years? nVIDIA."

portrayed as a multiple choice question. OpenGL guy was referring to one, not the other.
 
I am also in the dark as to whether he was talking about NV30 or NV25, and can make arguments either way. In another forum, someone brought up a valid point about how NVidia stated that they are shifting philosophy to make better pixels instead of more pixels. If they stuck to this, it seems plausible the R300 is faster than NV30.

In any case, this is definitely a big thing for ATI. You know NVidia would do anything in their power to stop ATI from having this honour. I think we can all agree that we have a great battle brewing for this fall...
 
Hmm, well, I read it again. I figure JC must be referring to an NVidia card to which he has access, because he says "; but in every test we ran, ATI was faster". It doesn't make much sense to compare the NV25 with the R300, so I figure chances are he does have an NV30 board.

NV30 being slower than R300 - I suppose this could be because it's really early (buggy?) silicon, or because of driver problems (=> hence the "NVidia has done everything they possibly could"?). This is looking good for ATI...

IMO it's kind of scary that Doom 3 gets choppy even on ATI's next gen.

Serge
 
It was also my impression that Carmack was comparing the R300 with the NV30. Of course, that was just my IMPRESSION. Carmack's a smart guy ;) I can't see him arguing the merits of one company's current product against another company's next-gen product. So kudos to ATI on at least this one point (though it is still grossly early to make blanket judgements, what with driver quality, widespread performance, features, etc.).

And while the slowdowns were evident in spots, I actually found it somewhat reassuring that the unoptimized Doom engine was running as well and as fast as it was on a 2.2 GHz P4 and a video card that will be available in ~4 months.

Of course, neither the R300 nor the NV30 will power the PC I build to run Doom. That honor ;) will go to the best 3D chipset of spring 2003...
 
Traditionally speaking, Nvidia seems to have the edge for about 3 months out of every 6. Then a product comes along that is at least competitive, if not faster. E.g., the GeForce 1 was undisputed for 5 months before V5 vs. GF2, the GF3 for about 4 months before the R8500 came, etc.

I too think he was talking about early versions of the NV30 vs. the R300, at least in feature specs.

I'm going to take the cautious side and wait and see :p Typically fanboyz get themselves in a rut when they make broad speculation about unreleased products too soon. (We all remember how the R8500 was supposed to be a GF3 killer after a lot of driver updates, which never happened.. They are, at least in speed, still comparable.)

OTOH, I wonder if there is merit to the statement that Nvidia lost a few months in hardware development b/c of the Xbox.
 
Chalnoth said:
demalion said:
I'm a bit astounded by the resistance to the possibility of the R300 being clearly better than the NV30 by some.

You shouldn't be. nVidia has been consistently putting out superior products to ATI's.

That is a complete joke, be it Tom's Hardware or AnandTech, as that is what I would read from that statement: your post is SIMPLY wrong.

To put things in context, just because a card runs 5 fps faster doesn't give it the SUPERIOR PRODUCT title, maybe in your mind but I hope not in the majority of minds.
You have two cards, one runs 140 fps and one runs 125 fps, and web pages like Anand and Tom base the WINNER solely off FPS.. which is seriously flawed. Sure, FPS are important, but both cards deliver more than enough for any game, and most online servers cap the FPS @ 99 anyway.
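To put rough numbers on that point (a trivial sketch; the 140/125 fps cards and the 99 fps server cap are just the hypothetical above, not benchmarks):

```python
# Hypothetical from above: two cards at 140 and 125 fps, with servers capping at 99 fps.
card_a, card_b, server_cap = 140.0, 125.0, 99.0

raw_gap = 100.0 * (card_a - card_b) / card_b
print(f"Raw difference on paper: {raw_gap:.0f}%")        # ~12% faster

# In practice both cards run into the same cap, so the "winner" is a wash.
capped_a, capped_b = min(card_a, server_cap), min(card_b, server_cap)
print(f"With the 99 fps cap: {capped_a:.0f} vs {capped_b:.0f} fps")
```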

The Radeon 8500 is a more complex chip than any Nvidia product on the market, and this is a nine-month-old card.
The Radeon 8500 can do many things a GeForce 3 and 4 can't:

Single-pass texturing with up to 6 textures, 2 textures per clock
TRUFORM technology
DirectX 8.1 pixel shaders up to version 1.4
400MHz RAMDAC


Superior image quality in DVD playback (proven by MANY sites), superior software interface and remote control (AIW). No broken texture compression, and adaptive anisotropic filtering that reduces the performance hit by over 50%. Superior 2D display as mentioned by MANY sites, and superior TV out as mentioned by MANY sites.
Is the 8500 perfect? Nope.. but it is not inferior by any means; it's superior.
 
Has anyone at Beyond3D bothered to email Nvidia about whether or not the NV30 was sent to id Software to run Doom III? Kristof? Dave Wavey? Reverend?
 
Too bad it's:

1) Still slower than a GF4 (e.g. Doom3, GF4 multipass faster than 8500 PS1.4 collapsed passes)

2) Inferior AA performance

3) Crappy RIP-mapped anisotropic

4) Drivers suck (but getting better)


The 8500 took bold steps forward, and ATI did a very good job rescuing themselves from 3dfx's fate, but it's not a GF4 killer. The R300 sounds like the first chip that could seriously challenge NVidia for the performance crown.

Then again, no one has asked Carmack point blank what he was talking about. What does it mean that NVidia tried everything they could? Does that mean they failed to deliver a working sample, working drivers, etc.? Does it mean the sample performed badly or was underclocked? All this is supposition until we hear otherwise.

Hopefully some web site can get him to clarify what he was talking about.
 
DemoCoder said:
Too bad it's:

1) Still slower than a GF4 (e.g. Doom3, GF4 multipass faster than 8500 PS1.4 collapsed passes)

2) Inferior AA performance

3) Crappy RIP-mapped anisotropic

4) Drivers suck (but getting better)


The 8500 took bold steps forward, and ATI did a very good job rescuing themselves from 3dfx's fate, but it's not a GF4 killer. The R300 sounds like the first chip that could seriously challenge NVidia for the performance crown.

Then again, no one has asked Carmack point blank what he was talking about. What does it mean that NVidia tried everything they could? Does that mean they failed to deliver a working sample, working drivers, etc.? Does it mean the sample performed badly or was underclocked? All this is supposition until we hear otherwise.

Hopefully some web site can get him to clarify what he was talking about.


1) Prove it; that's simply not proven, as no GAMES SUPPORT PS 1.4 :p

2) Inferior AA performance, big deal.. it delivers superior IQ and also works on alpha textures.. frames, frames, frames :rolleyes:

3) It's not RIP-mapping, as has been proven many times right here on this forum:

http://www.beyond3d.com/forum/viewtopic.php?t=678&highlight=ripmapping

4) "Drivers suck (but getting better)".. maybe one should OWN one before making such statements... this is getting very old :-?
 
DemoCoder,

Excuse me, but... what the hell are you talking about? What a bunch of trollish filth. Do you feel better now, having posted this crap? Seriously. We are talking about the R300 here.

On top of that.

1. It's apparently only slower than a Ti 4600, and Carmack has never said how much slower; it could be very little.

2. AA looks better and is fast enough compared to a GF3; in fact, with the newest drivers it is consistently FASTER than a standard GF3, even in Quality SV.

3. Not this same BS crap about aniso... please. I would rather have nice fast aniso over what Nvidia offers any day. Who cares what the car looks like if you can't drive it.

4. The drivers simply do not suck. Do you even own one? I do, and it works great. I have NO problems with any of my games. Not that it matters; you people will keep saying the same crap no matter what anyone says.


Since when did you nvidians start openly comparing the 8500 to a GF4 Ti 4600 DIRECTLY? Because Carmack clearly isn't testing performance on a Ti 200, now is he...? EVERYONE knows that the 8500 was meant to compete with the GF3 line in performance. It's frikking LAME that people like you even go there. You want me to start comparing your GF4 to an R300? It's clearly a POS in comparison.. not very fair.. is it...

Good grief.. this kind of #$%& never changes......
 
Excuse me, Fred....

We all remember how the R8500 was supposed to be a GF3 killer after a lot of driver updates, which never happened.. They are, at least in speed, still comparable.

But it HAS happened. Or don't you pay attention? It's faster than a vanilla GF3 and a Ti 200 in most if not all games with the modern drivers. Just look around at some of the recent reviews. It's also faster than all cards, including the Ti 500 (GF4 excluded), in Doom III. In fact, the new 8500s are nearly on par with GF4 Ti 4200s, based on more than one recent review.

I'd say 20+ FPS is a GF3 killer, wouldn't you? I know I have had considerable FPS increases with all the recent drivers. Oh, and feature-wise it is a GF3 MUTILATOR, not just a killer, and HAS been since release.
 
DemoCoder said:
Too bad it's:

1) Still slower than a GF4 (e.g. Doom3, GF4 multipass faster than 8500 PS1.4 collapsed passes)

Uh, you're comparing an older part to a newer part. I would hope that nvidia's newer part could beat one that is older. Otherwise, why would nvidia bother releasing it?

3) Crappy RIP-mapped anisotropic

False.

4) Drivers suck (but getting better)

The drivers don't suck. I've used my R200 for months with very few problems. Want me to go on about how all my friends with nvidia cards were bitching about all the crashes they were having?

The 8500 took bold steps forward, and ATI did a very good job rescuing themselves from 3dfx's fate, but it's not a GF4 killer. The R300 sounds like the first chip that could seriously challenge NVidia for the performance crown.

I really don't think the R200 was meant to be a GF4 killer, but it does pretty well anyway.

Then again, no one has asked Carmack point blank what he was talking about. What does it mean that NVidia tried everything they could? Does that mean they failed to deliver a working sample, working drivers, etc.? Does it mean the sample performed badly or was underclocked? All this is supposition until we hear otherwise.

I see. Supposition is useless. Kinda like your trolling.
 
Hellbinder[CE] said:
3. Not this same BS crap about aniso... please. I would rather have nice fast aniso over what Nvidia offers any day. Who cares what the car looks like if you can't drive it.

How about some scores with aniso in Quake3? After all, aniso performance has improved in later drivers on the GF4...

Here are mine with all details max, compressed textures on (except trilinear...I turned that off to make the comparison a little bit more even):

GeForce4 Ti 4200 64MB (no o/c)
Athlon 933MHz on nForce 415-D

1600x1200x32
No aniso: 107.3 fps
2x: 89.2 fps
4x: 76.2 fps
8x: 68.0 fps

Just fyi, with trilinear enabled, the 8x score is 64.7 fps. Personally, I can't stand to play without trilinear filtering.

Oh, and btw, I always play with max anisotropic, in all my games, and usually with 2x FSAA (I don't generally play Quake3, but in UT I play at 1600x1200x32 w/ 8x aniso and 2x FSAA. In Morrowind, I play at 1024x768x32 w/ 8x aniso and 2x FSAA...I could probably do better, but it works fine and looks good this way).

And why don't we compare FSAA?
Quake3, 1024x768x32, 8x aniso:
no FSAA: 116.9 fps (CPU-limited here, obviously...)
2x FSAA: 108.0 fps
4x FSAA: 76.9 fps

And don't forget, these are all with max aniso, and I decided to turn on Trilinear filtering this time.
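For what it's worth, here is a quick sketch that turns those scores into relative performance hits (the fps figures are just my numbers above; the script itself is plain arithmetic, nothing official):

```python
# Percentage cost of each setting, computed from the Quake3 scores listed above.
def hit(baseline_fps, fps):
    """Performance drop as a percentage of the baseline frame rate."""
    return 100.0 * (baseline_fps - fps) / baseline_fps

aniso_1600x1200 = {"2x": 89.2, "4x": 76.2, "8x": 68.0}   # baseline 107.3 fps, no aniso
fsaa_1024x768   = {"2x": 108.0, "4x": 76.9}              # baseline 116.9 fps, 8x aniso

for level, fps in aniso_1600x1200.items():
    print(f"{level} aniso: {hit(107.3, fps):4.1f}% hit")

for level, fps in fsaa_1024x768.items():
    print(f"{level} FSAA:  {hit(116.9, fps):4.1f}% hit")
```

By that reckoning, 8x aniso costs me a bit over a third of the frame rate at 1600x1200, while 2x FSAA at 1024x768 is nearly free (keeping in mind the no-FSAA score there is CPU-limited).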
 
Look at it this way. The GF4 Ti 4600 gets about 35-40% more fps than an 8500 on average, depending on application, resolution, etc. Now let's take the worst possible interpretation of this for ATI: that this was a comparison between the R300 and the NV25. What that means is that a pre-beta version of the R300 with PRE-beta drivers beats the GF4 4600, with its mature drivers, across the board ("in every test") in a next-generation game. What does that tell us about the speed boost we should expect in going from R200 to R300? Now add to that the fact that, if this interpretation is correct, it also means that more than likely the R300 will go to market before the NV30. It would be even better news for ATI if it is in fact a comparison to the NV30, but it seems to be good news for them either way you look at it...
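A back-of-the-envelope sketch of what that worst case would imply, using only the ~35-40% figure above plus the assumption that the R300 at least ties the Ti 4600 (rough inputs, not measurements):

```python
# Worst-case-for-ATI reading: the pre-beta R300 merely matches a GF4 Ti 4600
# that already averages ~35-40% more fps than an 8500.
r200_fps = 100.0                 # normalize the Radeon 8500 to 100 (arbitrary units)

for gf4_advantage in (0.35, 0.40):           # the ~35-40% range quoted above
    gf4_fps = r200_fps * (1.0 + gf4_advantage)
    min_r300_speedup = gf4_fps / r200_fps    # R300 >= GF4 4600 "in every test"
    print(f"GF4 +{gf4_advantage:.0%} over R200 -> R300 at least {min_r300_speedup:.2f}x the R200")
```

And that is with pre-beta silicon and drivers, so the shipping gap over the R200 should only get larger.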
 
Chalnoth said:
Hellbinder[CE] said:
3. Not this same BS crap about aniso... please. I would rather have nice fast aniso over what Nvidia offers any day. Who cares what the car looks like if you can't drive it.

How about some scores with aniso in Quake3? After all, aniso performance has improved in later drivers on the GF4...

Here are mine with all details max, compressed textures on (except trilinear...I turned that off to make the comparison a little bit more even):

GeForce4 Ti 4200 64MB (no o/c)
Athlon 933MHz on nForce 415-D

1600x1200x32
No aniso: 107.3 fps
2x: 89.2 fps
4x: 76.2 fps
8x: 68.0 fps

Just fyi, with trilinear enabled, the 8x score is 64.7 fps. Personally, I can't stand to play without trilinear filtering.

Oh, and btw, I always play with max anisotropic, in all my games, and usually with 2x FSAA (I don't generally play Quake3, but in UT I play at 1600x1200x32 w/ 8x aniso and 2x FSAA. In Morrowind, I play at 1024x768x32 w/ 8x aniso and 2x FSAA...I could probably do better, but it works fine and looks good this way).

And why don't we compare FSAA?
Quake3, 1024x768x32, 8x aniso:
no FSAA: 116.9 fps (CPU-limited here, obviously...)
2x FSAA: 108.0 fps
4x FSAA: 76.9 fps

And don't forget, these are all with max aniso, and I decided to turn on Trilinear filtering this time.

How about some anisotropic scores on a modern game, not one that is over three years old, like Serious Sam 2 or RTCW??
 
DemoCoder said:
Then again, no one has asked Carmack point blank what he was talking about.

Okay, I did it. Here is his response in its entirety:

It [The ATI card used] was compared against a very high speed GF4. It shouldn't be surprising that a next-generation card is faster than a current generation card. What will be very interesting is comparing the next gen cards (and the supporting drivers) from both vendors head to head when they are both in production.

Everyone working on DOOM still uses GF4-Ti cards at the moment, and if someone needs to buy a new video card today, that is what I tell them to get.

John Carmack

Btw, I also posted this on the front page of http://www.nvnews.net
 