JVD-
But let's not forget that right now nVidia is the only one buying this RAM, which will slow the rate at which its price drops. Also, in a very short time faster cards from ATi will be out and will force the price of the FX down quickly.
I'm not talking about it in a PC add-in board capacity on that end; I'm talking about the financial end for console usage. And on ATi having a faster board out soon: almost everyone assumed the same about nVidia when the R300 hit.
What I take from this is Carmack couldn't get good performance out of the NV30, so he made it its own path.
The NV30 is forced to run in FP32 using the current ARB path, as it doesn't allow falling back to FP16 if the board supports higher precision. This is something that is being rectified in the final build. As far as actually getting the NV30 up to speed, how long do you think Carmack has actually had an NV30? We know he has had an R300 since last May at least, seeing as he showed D3 at E3 on one.
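For what it's worth, my read of the fix (an assumption on my part, not something nVidia or Carmack has spelled out) is the precision hint option in the finished ARB_fragment_program spec, which lets the driver drop below full precision where it sees fit. That's exactly the out the NV30 needs. A rough sketch of what using it looks like, with a throwaway placeholder program and assuming the extension entry points are already resolved:

[code]
/* Sketch only: GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB and the
 * glBindProgramARB/glProgramStringARB entry points come from the
 * ARB_fragment_program extension; real code fetches them through the
 * platform's extension mechanism (wglGetProcAddress etc.). */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static const char fp_src[] =
    "!!ARBfp1.0\n"
    /* This option tells the driver it may run the program below full  */
    /* precision (e.g. FP16 on NV30) instead of being locked to FP32.  */
    "OPTION ARB_precision_hint_fastest;\n"
    "TEMP col;\n"
    "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, col, fragment.color;\n"
    "END\n";

/* prog is assumed to come from glGenProgramsARB elsewhere. */
void load_fast_fragment_program(GLuint prog)
{
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
}
[/code]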
Tag-
I'll believe Gainward's claims when I see it. They're an exception.
Why would they even claim a noise level that low unless they had something that could deliver? At 20dB it would still be far less noise than any other top-tier card; I would say they certainly have something that removes the noise issue for the FX.
You really think that in interlaced mode the core does half the pixels or something?
The core is still rendering (hopefully) 60 fields per second at 1920x1080.
If you notice, I've been listing 4x AA numbers when possible, and in past conversations we've had I've been quite adamant in stating that I expect 4x AA @1080i to be the norm next gen. I bring this up as they would actually be rendering @3840x1080, versus the PC running at 2048x1536 up through 2560x1920/2048, the resolutions I was using to compare. The actual resolution of a 1080i field is 1920x540.
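To put rough numbers on that (back-of-the-envelope only, and it assumes 4x AA is straight supersampling, so read these as raw sample counts rather than real workloads):

[code]
#include <stdio.h>

int main(void)
{
    const long field_1080i  = 1920L * 540;      /* one 1080i field        */
    const long field_4x_aa  = field_1080i * 4;  /* = 3840x1080 samples    */
    const long pc_2048x1536 = 2048L * 1536;
    const long pc_2560x1920 = 2560L * 1920;
    const long pc_2560x2048 = 2560L * 2048;

    printf("1080i field (1920x540):   %ld\n", field_1080i);   /* 1,036,800 */
    printf("1080i field with 4x AA:   %ld\n", field_4x_aa);   /* 4,147,200 */
    printf("PC at 2048x1536:          %ld\n", pc_2048x1536);  /* 3,145,728 */
    printf("PC at 2560x1920:          %ld\n", pc_2560x1920);  /* 4,915,200 */
    printf("PC at 2560x2048:          %ld\n", pc_2560x2048);  /* 5,242,880 */
    return 0;
}
[/code]

So a 4x AA'd 1080i field lands right between 2048x1536 and 2560x1920 in raw sample count, which is why those were the PC resolutions I was comparing against.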
About the advanced pixel shaders though:
You mean obsolete pixel shaders (certainly not PS 2.0/3.0).
It's a native PS 1.4 test. I am looking forward to seeing the next revision of 3DMark to see where current pixel shader performance is at.
Curiously enough it looks to me like the GF FX rocks the house at fixed-function work but at this point pretty much keels over and dies the moment shaders are enabled... could have something to do with FP32 but I'm not sure.
My assumption would be that they are forced into FP32 by default (although it is possible that this could be altered by a future driver revision), which is another reason to look forward to a DX9-level shader bench that can specify the level of precision used. Pixar uses an FP16 color format for their movies; I don't think we are quite at the point where we need higher precision in real time yet.
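For anyone wondering what the "FP16" being thrown around here actually is on the hardware side: it's the s10e5 half-float layout (1 sign bit, 5 exponent bits, 10 mantissa bits), the same thing Cg exposes as "half". A quick illustrative decoder, nothing NV30-specific, just the format itself:

[code]
#include <math.h>
#include <stdio.h>

/* Decode a 16-bit s10e5 half-float into a regular float. */
float half_to_float(unsigned short h)
{
    int   sign = (h >> 15) & 0x1;
    int   e    = (h >> 10) & 0x1f;
    int   m    =  h        & 0x3ff;
    float s    = sign ? -1.0f : 1.0f;

    if (e == 0)                       /* zero or denormal */
        return s * (float)m * powf(2.0f, -24.0f);
    if (e == 31)                      /* infinity or NaN */
        return m ? NAN : s * INFINITY;
    /* normal number: implicit leading 1, exponent bias of 15 */
    return s * (1.0f + (float)m / 1024.0f) * powf(2.0f, (float)(e - 15));
}

int main(void)
{
    printf("largest FP16 value: %g\n", half_to_float(0x7bff)); /* 65504 */
    printf("FP16 one:           %g\n", half_to_float(0x3c00)); /* 1     */
    return 0;
}
[/code]

A 10-bit mantissa works out to roughly three decimal digits of precision with a maximum value of 65504, which is the ballpark being argued over.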
You cut out the second half of what I said.
I've seen reports saying that the 0.15u R350 will be the only one out in the near future too; those various bits of information floating around are what they are: rumors. It was rumored that the NV30 taped out long before it did. When I see something definite from ATi I'll believe it a lot more than I do now.
Revealing core functions could lend them a contract somehow? Explain?
By allowing vendors low-level access to their hardware, they could ensure that backwards compatibility will only work with their chips.
And I'm sure it's already patented, silly; this is an NV2x core we're talking about. It's still proprietary nVidia tech.
Ever read any of the patent threads that go on here? It regularly takes years between the time a patent is applied for and when it is granted in the tech sector.
JVD-
So, Gubbi, it would be okay for me to create a 3GHz chip that gives the same performance as a 1GHz chip, just because I designed it to clock that high to get that speed? Hell no.
Hell yes. Wait until Hammer-core chips show up this year; in performance terms they will likely be keeping pace with P4s running at 60%-70% higher clock speeds.
Compare the Voodoo3 to the GeForce1. The GF1 was clocked at 120MHz while the V3 had versions clocked at 166MHz, yet the GF1 throttled it completely. Even in the CPU space clock speed is a distant second in any real-world terms outside of marketing; in the GPU space it has never really meant much at all when comparing chips of different architectures.
Let's hope the NV35 does a whole lot of things right and the R350 does a lot of things wrong. And let's hope nVidia can get decent drivers for the card...
Why hope ATi stumbles? It is illogical to want a company to do poorly, particularly when we are looking at two companies that for all practical purposes are within spitting distance of each other and are easily strong enough to keep pressure on one another. I wouldn't worry about the NV30 drivers; Carmack has already stated their OGL offerings are better than the R300's, and that's a pre-production preview piece of hardware versus one that has been out half a year. It is possible that the NV30 will ship with major driver problems, but based on nVidia's history over the last several years I find that extremely unlikely.
They have equaled nVidia on an older process and a slower clock speed. To me that means the R300 is clock-for-clock faster and overall equal to the NV30.
On the flip side, the R300 has quite a bit more bandwidth, so by that logic the NV30 is much more efficient than the R300. We don't know that either of these is true until we can see some good OCing numbers to get them comparable to each other in terms of core clock and memory bandwidth.