(and the Xbox version of DX exposes more of the hardware, so in some respects, it's slightly beyond a GF4 on a PC).
Right now nVidia has proved to be six months late with one product, and people are already counting them out for the XB2? The XB2 won't be using an R350 nor an NV30/NV35. Odds are nearly nil that it will be using an R400 or NV40 either. We are almost certainly looking at R450/NV45 or later parts, so where each company is a year from now will be much more telling than where they are today if you're trying to gauge anything. The NV40 team has been working on their design for a year or so now, same as the R400 team. ATi hit one 'six month' product cycle (~nine months, but close) and nVidia missed one (twelve months, missed big); let's see if ATi hits their next six-month cycle with the R350, and if nV misses theirs with the NV35, before we try to draw these telling trends.
BenSkywalker said:Some of the other factors: cost is a huge one. nV is ramping up their .13u process now; ATi will at some point. When nV has production ramped, their parts are almost certain to be less expensive than ATi's offerings simply due to build process.
That depends, to a major extent, on the actual areas of the chips.
The vibe I get is that nVidia could build a GPU as powerful as the PS 3 for Microsoft, but Huang wants more money to build it than MS has on the table.
BenSkywalker said:In terms of cooling, quieter than a human heartbeat doesn't sound too loud to me, certainly a hell of a lot less noise than the ATi-built 9700Pros I've heard.
First off, consoles playing with old code means jack shit. Find the most demanding situation you can and throw it at them, usually synthetics-
The above is running the Code Creatures bench. The highest TV setting is 1920i, which in terms of pixels falls between 10x7 and 12x9/12x10. In that instance even the non-Ultra FX holds an edge over the 9700Pro (actually, the edge increases as the resolution is upped). Looking at pure synthetics-
The FX is 40%-100%+ faster than the 9700Pro in geometry throughput. Remember the hype around the PS2's 66 million polys per second? Remember the hype around the XBox's 100 million+ polys per second? Anyone remember any hype around the anisotropic filtering performance of either? Raw specs win on the hype end, and they're the ones people can easily recognize. Carmack has also been on record stating that Doom3 runs fastest overall on the FX. None of this is to say the FX is whipping the 9700Pro silly or anything of the sort, simply to point out that if the FX had launched alongside the 9700Pro, who 'won' would be splitting hairs between the two (R300 sometimes, NV30 others).
Some of the other factors: cost is a huge one. nV is ramping up their .13u process now; ATi will at some point. When nV has production ramped, their parts are almost certain to be less expensive than ATi's offerings simply due to build process.
Then there's the other cost factor, the memory bus. ATi is at 256-bit while nV is at 128-bit, without a major performance rift between them. From an overall cost perspective nV would likely be a decent amount cheaper.
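To put rough numbers on that, here's a back-of-the-envelope sketch. The clocks are the launch specs as I remember them (9700Pro at ~310MHz DDR on 256-bit, FX 5800 Ultra at ~500MHz DDR2 on 128-bit), so treat them as assumptions:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
# Clocks are assumptions from memory, not official figures:
#   Radeon 9700 Pro: 256-bit bus, ~310MHz DDR   -> ~620 MT/s effective
#   GeForce FX 5800 Ultra: 128-bit bus, ~500MHz DDR2 -> ~1000 MT/s effective
def peak_bandwidth_gb_s(bus_bits: int, effective_mt_s: float) -> float:
    return (bus_bits / 8) * effective_mt_s * 1e6 / 1e9

print(f"9700 Pro: {peak_bandwidth_gb_s(256, 620):.1f} GB/s")   # ~19.8 GB/s
print(f"GFFX:     {peak_bandwidth_gb_s(128, 1000):.1f} GB/s")  # ~16.0 GB/s
```

So the 128-bit card only stays within ~20% of the 256-bit one because of much faster (and pricier) DDR2, which is exactly the cost trade-off being argued here.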
Backwards compatibility is another. Assuming that all developers stick to high-level API coding, there wouldn't be a problem switching over to ATi, and don't think nVidia isn't aware of this. I wouldn't be in the least bit shocked to see them release tools allowing for lower-level optimizations and more exacting specs on the chip once it becomes reasonably obsolete. If they do this and have developers exploit it, they build themselves in some assurance. ATi can do the exact same thing on the GameCube end to avoid a possible coup by nVidia.
1080i has more pixels than the maximum sane PC resolution. Between 10x7 and 12x10? That's a laugh. Learn to multiply.
How about 3DMark2001's Nature test? The Radeon whoops the FX in that one.
ATi's RV350 is .13u, it's taped out
You forget that even different cores by the same IHV will handle things slightly differently. To-the-metal GeForce3 code probably wouldn't run quite right on a GF4, and definitely wouldn't run right on an FX...
And even when the core is 'reasonably obsolete' there's still nVidia confidential tech in there which they still use and don't want ATi to see.
It's louder than a Delta Black Label fan. Second of all, the GeForce FX uses top-of-the-line DDR2 RAM, and that equals a lot of money, whereas ATi uses older, slower RAM and a 256-bit bus. The question is which one is more expensive.
On Doom 3: Carmack said that the GeForce FX, while using the NV30 path, is faster than the Radeon using the ARB path.
All this is a moot point, though, as the R350 will be out in a month or so, and that is where nVidia is screwed.
Okay, I just looked over your post and see you mention that no one cares about AA and whatnot.
That depends, to a major extent, on the actual areas of the chips.
BenSkywalker said:For noise levels- http://www.gainward.se/news/030130.pdf
Further reference-
Owing to Gainward’s advanced R&D skills, the maximum noise is reduced to only 7dB, the same as a human heartbeat. Competitive products' maximum noise levels can be rated as high as 70dB, the same level as a domestic vacuum cleaner.
http://www.anandtech.com/news/shownews.html?i=18126&t=pn
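An aside to make the scale clear, since dB figures get thrown around loosely: decibels are logarithmic, so 7dB versus 70dB is not a 10x gap. A quick sketch, taking Gainward's numbers at face value:

```python
# Every 10dB step is a 10x change in sound power, so the difference
# between Gainward's claimed 7dB and a 70dB competitor is enormous.
def power_ratio(quiet_db: float, loud_db: float) -> float:
    return 10 ** ((loud_db - quiet_db) / 10)

print(f"{power_ratio(7, 70):.2e}")  # ~2.00e+06: about 2 million times the power
```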
1080i has more pixels than the maximum sane PC resolution. Between 10x7 and 12x10? That's a laugh. Learn to multiply.
I am assuming that is a joke and you are just fooling around; with the number of times the differences between interlaced and progressive modes have been discussed, there is no way in hell you could not be aware of just how wrong your numbers are.
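To spell the arithmetic out (my own numbers; this assumes the renderer draws one 540-line field per refresh, which is the whole point of interlacing):

```python
# Pixels drawn per refresh. 1080i scans out one 540-line field at a time,
# so 1920x540 is the per-field figure to compare against PC modes.
modes = {
    "1024x768 (10x7)":    1024 * 768,   # 786,432
    "1920x1080i (field)": 1920 * 540,   # 1,036,800
    "1280x960 (12x9)":    1280 * 960,   # 1,228,800
    "1280x1024 (12x10)":  1280 * 1024,  # 1,310,720
    "1920x1080p (full)":  1920 * 1080,  # 2,073,600
}
for name, px in sorted(modes.items(), key=lambda kv: kv[1]):
    print(f"{name}: {px:,} pixels")
```

Per field, 1920i lands between 10x7 and 12x9/12x10, which was the original claim; only a full progressive 1080 frame exceeds common PC resolutions.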
How about 3DMark2001's Nature test? The Radeon whoops the FX in that one.
I guess 'whoops' means something different where I come from; at the very least it includes being faster-
http://www.hardocp.com/image.html?image=MTA0MzYyMDg1OTVjVVNkMzFISXhfMl80X2wuZ2lm
The 9700Pro losing at every resolution is whooping on the GFFX? I guess you could say the R300 whoops the GFFX at CodeCreatures too. And the GFFX whooped the R300 hard in terms of getting to market, using your guidelines.
ATi's RV350 is .13u, it's taped out
Heard the same thing about the GFFX months before it was.
...and reportedly it'll be widely available before the .15u R350.
You forget that even different cores by the same IHV will handle things slightly differently. To-the-metal GeForce3 code probably wouldn't run quite right on a GF4, and definitely wouldn't run right on an FX...
If nV went in knowing what they needed to do, it would run right, and they obviously would be going in knowing...
And even when the core is 'reasonably obsolete' there's still nVidia confidential tech in there which they still use and don't want ATi to see.
If it could land them a potential $100Million+ contract, they may be willing to let a few things slide. Not to mention, once the patents come through, admitting to exacting details on certain parts isn't all that important anymore.
zurich said:The vibe I get is that nVidia could build a GPU as powerful as the PS 3 for Microsoft, but Huang wants more money to build it than MS has on the table.
Again, I say that IMO, NVIDIA's the only company that can put MS on an equal (or greater) playing field with Sony. Here's hoping they'll shell out...
Tagrineth said:EMBM, Vertex Shader, and Advanced Shader are the Radeon's domain...
And it isn't far behind at DOT3 and Pixel Shader
V3 said:(and the Xbox version of DX exposes more of the hardware, so in some respects, it's slightly beyond a GF4 on a PC).
The same as those OGL extensions?
Yup, and seeing as the R350 will be out first (and rumors of nVidia pulling the plug on the NV30 are all over right now), it's no contest.
Steve Dave Part Deux said:To be valid, architectural comparisons between two cards must be done clock for clock. Clock for clock R300 is significantly faster.
Gubbi said:
Steve Dave Part Deux said:To be valid, architectural comparisons between two cards must be done clock for clock. Clock for clock R300 is significantly faster.
Bollocks.
Pipeline length influences clock speed. And pipeline length is a (micro)architectural parameter.
Longer pipeline == shorter individual pipestages => higher clock speed.
A valid comparison must be done in the same process and have similar die size.
In real life we'll *always* be comparing apples to oranges. In the end the only valid comparison is that of the market.
Cheers
Gubbi
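A toy model of Gubbi's point, with invented numbers: a fixed amount of logic work gets split across more, shorter pipestages, each paying a latch overhead.

```python
# Toy model, all numbers invented: a fixed amount of combinational work
# is split across N pipestages; each stage also pays a latch overhead.
# More stages -> shorter critical path per stage -> higher clock.
TOTAL_LOGIC_NS = 10.0     # fixed logic work per operation
LATCH_OVERHEAD_NS = 0.2   # flip-flop setup + clock-to-q cost per stage

for stages in (4, 8, 16):
    cycle_ns = TOTAL_LOGIC_NS / stages + LATCH_OVERHEAD_NS
    print(f"{stages:2d} stages -> {1000 / cycle_ns:4.0f} MHz")
# 4 stages -> 370 MHz, 8 stages -> 690 MHz, 16 stages -> 1212 MHz
```

Same process, same logic, wildly different clocks, which is why "clock for clock" isn't an architecture-neutral yardstick.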
jvd said:Well, Gubbi, if chip A does 3 million polygons at 325MHz and chip B does 3 million polygons at 400MHz, then chip A is faster than chip B at doing polygons per clock. Simple logic would tell you that if chip A is upped to 400MHz it would do more than 3 million polygons, and if chip B is reduced to 325MHz it would do less than 3 million polygons.
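jvd's logic, spelled out with his own numbers (note it assumes polygon rate scales linearly with clock, i.e. the chip is purely clock-bound rather than memory- or setup-limited):

```python
# jvd's comparison: equal polygon rates at unequal clocks imply
# unequal per-clock efficiency, plus an assumption of linear scaling.
chip_a_rate, chip_a_mhz = 3_000_000, 325
chip_b_rate, chip_b_mhz = 3_000_000, 400

polys_per_cycle_a = chip_a_rate / (chip_a_mhz * 1e6)
polys_per_cycle_b = chip_b_rate / (chip_b_mhz * 1e6)
print(f"A/B per-clock ratio: {polys_per_cycle_a / polys_per_cycle_b:.2f}")  # ~1.23

# If chip A scaled linearly to 400MHz (a big 'if'):
print(f"A at 400MHz: {polys_per_cycle_a * 400e6:,.0f} polys")  # ~3,692,308
```

That 'if' is exactly where the objection bites: linear scaling only holds when nothing but the clock limits throughput.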
Gubbi said:You compare products on absolute performance, not clock speed