R300 the fastest for DoomIII, John Carmack Speaks Again

Status
Not open for further replies.
John Carmack is up @ 3:28 AM Eastern Time?? Could you take a screen capture of that email? Rev has tried many times to get a reply from JC and nothing ever came of it.
 
Sure :)

[Image attachment: jcemail.jpg (screenshot of the email reply)]
 
OpenGL guy said:
Kristof said:
I have to agree with Wavey... What was he exactly comparing to what ? Did ATI put all its effort into making this one app run (full of driver hacks/whatever) to grab full PR for a product they have not even announced ? And what was it compared to ? The already released GF4 4600 Ti ?

Its IMHO really hard to say what has really happened here. Possibly he is saying : ATI had early silicon that they managed to tweak so it runs my DoomIII quite fast, NVIDIA did not yet have silicon or did not get the tweak hacks running in time. Could even mean that NVIDIA is aiming for .13 and AT for .15 hence ATI was able to get silicon quicker... Who knows ;)

Who said anything was "tweaked"? You think that ATI must be using "driver hacks" because John Carmack says we are faster?

Instead of being negative, maybe you should think about the good things to come.

I am not saying they used hacks to be faster; I said they "probably" used special tweaks/hacks because they are this early in the actual release schedule of this product! R300 is not up and running fully; there are months to go to the final release, and probably months before anyone outside developers gets to see drivers and hardware. What has happened here is that ATI has worked closely with JC on this, JC realised it was going to be the fastest, and ATI knew this was a kick-ass marketing opportunity. I am pretty darn sure that a large portion of the ATI driver team has been hacking and tweaking away to please Carmack (he says jump and they WILL have asked how high), and this means they probably took some shortcuts.

This is how early showings of hardware work: you select a title (or a few titles; nothing else gets shown or tried, since they know it would likely have failed or shown bugs of some kind) that you want to show off (not show bugs or crashes), and you tweak and hack like hell so you can beat the competitor. Anyone who thinks that R300 has been available in silicon at ATI with quality-level drivers for a long time is being overly optimistic. The DX9 side of the drivers will definitely be shaky given that the spec only went final last weekend (OpenGL obviously has a head start there).

So in summary: I am not doubting that R300 will be good, and I am not suggesting it will need hacks. I do believe that, given its early state of development (at least of drivers, and quite possibly even silicon; it might need a respin to work out some last-minute nasty bugs that they botch around in the driver right now), it's probably running over-tweaked, customised drivers (Doom III drivers). All in all, I am just trying to get people's feet back on the floor a bit.

:devilish: 8)

Btw, if this were NVIDIA, or anybody else, I would be thinking the same. When you do a big splash demo, you make sure that what you demo works, even if it requires a special driver release.

K~

P.S. Seems I was right asking the question "what was being compared to what" ...
 
I find this thread exceedingly amusing.

All of you who can't abide by the possibility that in their current states the r300 could be outperforming the nv30 are just sad.

If any of you have paid attention to anything floating around the net, you'll already know that the R300 is further along in development than the NV30 (well, there's a few... well, Nvidiots really is a fitting term here... who just won't ever be able to take their blinders off, I guess). I wouldn't hazard a guess as to which will be faster 6 months after their release... but those of you who think it's not possible for the R300 to end up faster than the NV30, at least in their first incarnations, had better wake up and smell the coffee. Both the R300 and NV30 are vastly superior in every way to the current cards. Performance will be entirely a tossup based on a large number of factors.

And yes, Nvidia has in the past put out product cycles faster than ATI has... but there have been a lot of reasons for that which have very little to do with engineering capability... One of the largest is that 3rd parties really bitch when they finally get a "cutting-edge" card released to market, only to see it relegated to "obsolete" by a new announcement 2 weeks later.

Nvidia made a lot of bad blood with 3rd parties with the release of the GF4 series... A lot of companies' GF3 lines were turned into loss-leaders really quickly because of it. Good for the consumer, but really bad for the companies involved. I think that will become abundantly clear when you see what happens with future announcements of AIB manufacturers continuing to defect from Nvidia and releasing ATI boards in the future.

I have no idea which will be faster between the NV30 and the R300. I have no qualms believing the current state of the R300 is faster than whatever incarnation of the NV30 is currently available. Based on what I know, I certainly think it is *possible* for the first-gen R300 to be faster than the first-gen NV30, but like I said above... there are a lot of factors that will play out in that fight.

Edit: I hadn't seen the JC email when I posted this. But I still think some of you need to get your head out of your arse when it comes to some of this. The outcome of the NV30/R300 battle will be entirely up for grabs this fall.
 
Thx,

Looks like Beyond3D needs to buddy up with JC a little more :p

JC the party guy...3:28 AM EST and still rockin..who would have thought :LOL:


Edit:

11:54 I see, still :LOL:
 
Btw, I went ahead and commented out his e-mail address. I figure it's better for him if it's at least a little bit challenging for the rest of the world to find his e-mail. Hopefully you'll still trust me that this is JC :)
 
Hellbinder[CE] said:
give me a break guys....

He is OBVIOUSLY talking about the NV30. The R300 is not due to be released for several months. It doesn't even make LOGICAL SENSE to assume he is comparing the next generation to this generation. The next generation's superiority is an OBVIOUS GIVEN.

No he did not say it specifically. But the intent is clear, and the reason he said it is clear. He HAD to put to rest all the BS rumors that Nvidiots started spreading.


LOL @ some people.
 
I do have to admit that when I looked back at the quote again (before reading JC's e-mail response), it did look an awful lot like he was talking solely about next-generation hardware.
 
I guess I'm just _real_ good with the crystal ball...This is what I stated on Rage3D...

nVidia doing everything they possibly could....You could go either way on this...But I'm of the opinion that since JC said that nVidia is already behind the curve with NV30...That nVidia was burning some midnight oil leading up to E3 trying to optimize their GF4 (or some super overclocked GF4) to use for E3...and it simply wasn't as fast as the R300 sample.

Having said that...Is there something in particular you guys would like to know about the future? :)
 
Obviously it was very foggy, hence the 3-page discussion. BTW, I don’t want to offend Chalnoth (Who I know all the way from old Sharky’s Delphi forum, if I am not mistaken) or anyone else on Nvnews crew, but the fact that they can get a reply from JC in a matter of hours while Rev and the rest of B3D people can’t get a reply at all is somewhat unsettling.
 
Chalnoth,

I just want to say one thing...

After browsing through your email screenshot...

Congrats on the promotion! (I have no idea what that really means, BTW!)
 
Geeforcer said:
Obviously it was very foggy, hence the 3-page discussion. BTW, I don’t want to offend Chalnoth (Who I know all the way from old Sharky’s Delphi forum, if I am not mistaken) or anyone else on Nvnews crew, but the fact that they can get a reply from JC in a matter of hours while Rev and the rest of B3D people can’t get a reply at all is somewhat unsettling.

That's why I asked for the screen shot... not that I didn't believe him, more that I couldn't believe that Beyond3D couldn't get a simple reply like that one out of him... 11:54 PM, maybe he just got back from the bar and felt generous :p

Edit:


Very High Speed GeForce 4??? Another mystery maybe, huh... he could have said Ti 4600... maybe this is the Refresh card.
 
Geeforcer said:
Obviously it was very foggy, hence the 3-page discussion. BTW, I don’t want to offend Chalnoth (Who I know all the way from old Sharky’s Delphi forum, if I am not mistaken) or anyone else on Nvnews crew, but the fact that they can get a reply from JC in a matter of hours while Rev and the rest of B3D people can’t get a reply at all is somewhat unsettling.

It probably has more to do with the question than who I am.

And yes, I started my online message board browsing days over at Sharky's Delphi forum (Well, at least 3D hardware forums...).
 
We all remember how the R8500 was supposed to be a GF3 killer after a lot of driver updates, which never happened (they are, at least in speed, still comparable).

Now, I have to say even I was surprised at this:

Taken from part 1 of our 8500 review, only 2 months ago, with the 6025/6037 drivers:

[Images: rtcw_fillrate.gif, rtcw_bench.gif]


Tested this month on the 6071 drivers:

[Images: rtcw_fillrate.gif, rtcw_bench.gif]


That’s a healthy increase. If they have similar gains in other titles between the driver sets, then were it NVIDIA we would have had a blaze of marketing, mail-shotting every damned user (and non-user!) out there, saying how fantastic the drivers are. Just because ATi aren’t making the big song and dance NVIDIA do about driver improvements, it would be remiss to assume it’s not happening.
 
Hellbinder[CE] said:
No he did not say it specifically. But the intent is clear, and the reason he said it is clear. He HAD to put to rest all the BS rumors that Nvidiots started spreading.
Yes, I'm sure the reason JC answered was to quell unfounded fanboy ravings. :rolleyes:

Doomtrooper said:
The Radeon 8500 is a more complex chip than any Nvidia product on the market and this is a 9 month old card.
The Radeon 8500 can do many things a Geforce 3 and 4 can't:

Single-pass texturing with up to 6 textures, 2 textures per clock
TRUFORM technology
DirectX 8.1 pixel shaders up to version 1.4
400MHz RAMDAC
Complexity does not automatically a good card make. The superior shaders don't seem to give it much advantage over the GF3/4 series in practice. The 400MHz RAMDAC is the manufacturer's concern, not nVidia's. And ATi's FSAA is slower than the GF4's (the 128MB 8500s are competing with GF4s, like it or not). Still, the 8500 is undoubtedly a better price/performance deal at this point, in light of NewEgg's $99 250/275 64MB card. And it does seem to be a more well-rounded card, multimedia-wise.

John Carmack 8) said:
It was compared against a very high speed GF4. It shouldn't be surprising that a next-generation card is faster than a current generation card. What will be very interesting is comparing the next gen cards (and the supporting drivers) from both vendors head to head when they are both in production.

Everyone working on DOOM still uses GF4-Ti cards at the moment, and if someone needs to buy a new video card today, that is what I tell them to get.
OK, more interesting--perhaps nV supplied a higher-than-normal-clocked GF4 sample. Good to hear that ATi's R300 outperformed it, even at this early stage--if it is the R300, and not the RV250.

Were it an NV30 that ATi's part was competing against, I was prepared to consider that ATi had put more driver writers on Doom 3 as the reason their part performed better. But the best explanation ATM seems to be R300 vs. GF4-Ti. Considering a GF3 was supposed to get 10x7@30fps, I'm curious as to why the R300 stuttered--whether it's lack of optimization on id's or ATi's end. Either way, it seems JC will hit his performance target right on the nose if he releases in 6-12 months.

BTW, I'd love to see someone post aniso and FSAA scores to compare with Chalnoth's, particularly with a level (bilinear) aniso playing field. Someone step up to the plate! :)
 
Doomtrooper said:
Very High Speed GeForce 4??? Another mystery maybe, huh... he could have said Ti 4600... maybe this is the Refresh card.

Yes, I noticed this too.

nVidia did release a GeForce2 Ti video card...why wouldn't they release another GeForce4 Ti card? Of course, it might not have been a card planned for release, but possibly just a hand-picked reference board from nVidia (i.e. hand-picked chip, hand-picked memory...) that was capable of very high clock speeds.

And no, I'm not going to fire off another e-mail, as I don't really care, and I doubt he does either.
 
Complexity does not automatically a good card make

Neither do a few more FPS, like some websites seem to think. IQ, texture compression, VIVO and all the other features are what make a GRAPHICS card, not an FPS card.


As for the anisotropic challenge, I'll take that... but not Quake 3. Unreal Tournament Thunder Demo and RTCW, or even Serious Sam 2.
 
Doomtrooper said:
Very High Speed GeForce 4??? Another mystery maybe, huh... he could have said Ti 4600... maybe this is the Refresh card.

Could that be NV18 we keep hearing about?
 
There's also an NV28 being touted. Reactorcritical mentioned it in relation to AGP 8X; I've heard it said before that NVIDIA may also wish to move the GF4 to .13um for even more speed increases.
 
DaveBaumann said:
There's also an NV28 being touted. Reactorcritical mentioned it in relation to AGP 8X; I've heard it said before that NVIDIA may also wish to move the GF4 to .13um for even more speed increases.

Actually, that would make sense.
 