nVidia's Jen-Hsun Huang speaks out on X-Box 2 possibility.

Yeah, imagine all the great jokes we'll have when MS releases an NV35-powered Xbox2 with "Jet Turbine Sound Simulator" in it.
 
(and the Xbox version of DX exposes more of the hardware, so in some respects, it's slightly beyond a GF4 on a PC).

The same as those OGL extensions?

Right now nVidia has proved to be six months late with one product, and people are already counting them out for the XB2?

Yeah, they did a good job with Xbox, so why are people counting them out for Xbox2? It's interesting though, this time around they might get competition. Hopefully that will bring out the best in them, instead of faltering.
 
Right now nVidia has proved to be six months late with one product, and people are already counting them out for the XB2? The XB2 won't be using an R350 nor an NV30/NV35. Odds are nearly nil that it will be using an R400 or NV40 either. We are almost certainly looking at R450/NV45 or later parts, so where each company is a year from now will be much more telling than trying to gauge anything from where they are today. The NV40 team has been working on their design for a year or so now, same as the R400 team. ATi hit one 'six month' product cycle (~nine months, but close) and nVidia missed one (twelve months, missed big); let's see if ATi hits their next six-month cycle with the R350 and if nV misses theirs with the NV35 before we try to draw any telling trends.

The vibe I get is that nVidia could build a GPU as powerful as the PS 3 for Microsoft, but Huang wants more money to build it than MS has on the table.
 
BenSkywalker said:
Some of the other factors: cost is a huge one. nV is ramping up their .13u process now, ATi will at some point. When nV has production ramped, their parts are almost certain to be less expensive than ATi's offerings simply due to the build process.
That depends, to a major extent, on the actual areas of the chips.
 
The vibe I get is that nVidia could build a GPU as powerful as the PS 3 for Microsoft, but Huang wants more money to build it than MS has on the table.

Again, I say that IMO, NVIDIA's the only company that can put MS on an equal (or greater) playing field with Sony. Here's hoping they'll shell out...
 
Oh boy, where to start with this. First off, you should go check out a sound file of what the GeForce FX sounds like. It's louder than a Delta Black Label. Second of all, the GeForce FX uses top-of-the-line DDR2 RAM, and that equals a lot of money, whereas ATi uses older, slower RAM and a 256-bit bus. The question is which one is more expensive. But let's just say they are both cutting edge and probably about the same price. Then there is the 10-layer board for the R300 vs. the 12-layer board for the NV30. Last but not least, that massive cooler.
If you go to Anand's and check out the benchmarks of the two cards with aniso and FSAA, you will see that in 5 out of 6 tests the Radeon smacks down the GeForce FX. Code Creatures is a special case, as it was programmed on a GeForce 3 and has run slower on Radeon cards since the benchmark came out.
On Doom 3: Carmack said that the GeForce FX, while using the NV30 path, is faster than the Radeon using the ARB path (I believe that is the correct name). He also stated that while both are using the standardized extensions (ARB), the R300 is twice as fast as the NV30. The last thing he said is that the R300 is faster when using the R200 path than when using the ARB path. Which basically means there is no optimized path for the R300 to make it look good; he didn't have to go in and program a whole new path just to get decent performance out of it.
All this is a moot point though, as the R350 will be out in a month or so, and that is where nVidia is screwed. The R350 will be faster unless an act of God does something to the chip. The NV30 will have to drop in price, which will hurt margins, and it will not be the highest-performance card out there. That will hurt sales, especially since the FX takes two slots and sounds like a jet plane.
Okay, I just looked over your post and see you mention that no one cares about AA and whatnot. More polygons aren't going to matter anymore. Next gen we are going to be looking at close to a billion polygons, of which probably not even 100 million will be used. Eye candy is going to be important, and if card A can do a game at 640x480 with 4x FSAA and card B can do it with 8x FSAA, you can bet which one will be more sought after.
 
BenSkywalker said:
In terms of cooling, quieter than a human heartbeat doesn't sound too loud to me, certainly a hell of a lot less noise than the ATi-built 9700Pros I've heard.

Quieter than a human heartbeat? Where did you read that? You posted links to AnandTech yourself, and he talks about volume much louder than a human heartbeat. And Tom's Hardware has MP3s of both the 9700 Pro and FX. The FX is WAY louder... and it sounds like it 'whines' rather than hums, which ends up being much more annoying even at the same volume.

First off, consoles playing with old code means jack shit. Take the most demanding situation you can find and throw it at them, usually synthetics-

How about 3DMark2001's Nature test? The Radeon whoops the FX in that one. The Radeon also doubles the FX's Advanced Pixel Shaders speed.

The above is running the Code Creatures bench. The highest TV setting is 1080i, which in terms of pixels falls between 10x7 and 12x9/10. In that instance even the non-Ultra FX holds an edge over the 9700Pro (actually, the edge increases as the resolution is upped). Looking at pure synthetics-

1024 * 768 = 786,432

1280 * 1024 = 1,310,720

1920 * 1080 = 2,073,600

1600 * 1200 = 1,920,000


1080i has more pixels than the maximum sane PC resolution. Between 10x7 and 12x10? That's a laugh. Learn to multiply.

The FX is 40%-100%+ faster than the 9700Pro in geometry throughput. Remember the hype around the PS2's 66 million polys per second? Remember the hype around the XBox's 100 million+ polys per second? Anyone remember any hype around the anisotropic filtering performance of each? Raw, pure specs win on the hype end, and they're ones people can easily recognize. Carmack has also been on record stating that Doom3 runs the fastest overall on the FX. None of this is to say the FX is whipping the 9700Pro silly or anything of the sort, simply to point out that if the FX had launched alongside the 9700Pro, who 'won' would be splitting hairs between the two (R300 sometimes, NV30 others).

The FX's theoretical max at 500MHz is 350MPoly/s; the R300's at 325MHz is 325MPoly/s. Not very far apart. Although yes, with 8 lights the FX is more efficient...
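For reference, here's the per-clock setup rate those theoretical peaks imply - a quick back-of-the-envelope sketch using only the figures quoted above, nothing else:

[code]
# Per-clock rates implied by the quoted theoretical peaks.
# NV30: 350 MPoly/s @ 500 MHz, R300: 325 MPoly/s @ 325 MHz.
chips = {
    "NV30 (GeForce FX)": (350e6, 500e6),
    "R300 (9700 Pro)":   (325e6, 325e6),
}

for name, (polys_per_sec, clock_hz) in chips.items():
    print(f"{name}: {polys_per_sec / clock_hz:.2f} polys per clock")

# Prints ~0.70 polys/clock for the NV30 and 1.00 for the R300 -- the FX
# leans on clock speed to end up roughly even in absolute throughput.
[/code]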

Some of the other factors: cost is a huge one. nV is ramping up their .13u process now, ATi will at some point. When nV has production ramped, their parts are almost certain to be less expensive than ATi's offerings simply due to the build process.

ATi's RV350 is .13u, it's taped out, and reportedly it'll be widely available before the .15u R350.

Then there's the other cost factor, the memory bus. ATi is at 256-bit while nV is at 128-bit without a major performance rift. From an overall cost perspective nV would likely be a decent amount cheaper.

And yet the FX needs a 12 layer PCB while the 9700 line needs 10 layers.

Backwards compatibility is another. Assuming that all developers stick to high-level API coding, there wouldn't be a problem switching over to ATi, and don't think nVidia isn't aware of this. I wouldn't be in the least bit shocked to see them release tools allowing for lower-level optimizations and more exacting specs on the chip once it becomes reasonably obsolete. If they do this and have developers exploit it, they build themselves in some assurance. ATi can do the exact same thing on the GameCube end to avoid a possible coup by nVidia also.

You forget that even different cores by the same IHV will handle things slightly differently. To-the-metal GeForce3 code probably wouldn't run quite right on a GF4, and definitely wouldn't run right on an FX...

And even when the core is 'reasonably obsolete' there's still nVidia confidential tech in there which they still use and don't want ATi to see.

Right now nVidia has proved to be six months late with one product, and people are already counting them out for the XB2? The XB2 won't be using an R350 nor an NV30/NV35. Odds are nearly nil that it will be using an R400 or NV40 either. We are almost certainly looking at R450/NV45 or later parts, so where each company is a year from now will be much more telling than trying to gauge anything from where they are today. The NV40 team has been working on their design for a year or so now, same as the R400 team. ATi hit one 'six month' product cycle (~nine months, but close) and nVidia missed one (twelve months, missed big); let's see if ATi hits their next six-month cycle with the R350 and if nV misses theirs with the NV35 before we try to draw any telling trends.

No argument here.
 
For noise levels- http://www.gainward.se/news/030130.pdf

Further reference-

Owing to Gainward’s advanced R&D skills the maximum noise is reduced to only 7dB, the same as a human heartbeat. Competitive products' maximum noise levels can be rated as high as 70dB, the same level as a domestic vacuum cleaner.

http://www.anandtech.com/news/shownews.html?i=18126&t=pn

Tag-

1080i has more pixels than the maximum sane PC resolution. Between 10x7 and 12x10? That's a laugh. Learn to multiply.

I am assuming that is a joke and you are just fooling around; with the number of times the differences between interlaced and progressive modes have been discussed, there is no way in hell you could not be aware of just how wrong your numbers are.
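To spell the arithmetic out - a rough sketch assuming a 1080i field is scanned out as 1920x540 (two interlaced fields per frame); whether the renderer actually works per field or per full frame is exactly what's being argued here:

[code]
# Pixel counts for the modes being argued about.
# Assumes a single 1080i field is 1920x540; a full 1080 frame is 1920x1080.
modes = {
    "1024x768 (progressive)":  1024 * 768,    # 786,432
    "1080i single field":      1920 * 540,    # 1,036,800
    "1280x1024 (progressive)": 1280 * 1024,   # 1,310,720
    "1600x1200 (progressive)": 1600 * 1200,   # 1,920,000
    "1920x1080 full frame":    1920 * 1080,   # 2,073,600
}

for name, pixels in modes.items():
    print(f"{name}: {pixels:,} pixels")

# Per field, 1080i lands between 10x7 and 12x10; only the combined
# two-field frame exceeds 1600x1200.
[/code]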

How about 3DMark2001's Nature test? The Radeon whoops the FX in that one.

I guess 'whoops' means something different where I come from, at the very least it includes being faster-

http://www.hardocp.com/image.html?image=MTA0MzYyMDg1OTVjVVNkMzFISXhfMl80X2wuZ2lm

9700Pro losing at every resolution is whooping on the GFFX? I guess you could say the R300 whoops the GFFX at CodeCreatures too. And the GFFX whooped the R300 hard in terms of getting to the market using your guidelines :LOL:

ATi's RV350 is .13u, it's taped out

Heard the same thing about the GFFX months before it was.

You forget that even different cores by the same IHV will handle things slightly differently. To-the-metal GeForce3 code probably wouldn't run quite right on a GF4, and definitely wouldn't run right on an FX...

If nV went in knowing what they needed to do, it would run right, which they obviously would be doing...

And even when the core is 'reasonably obsolete' there's still nVidia confidential tech in there which they still use and don't want ATi to see.

If it could land them a potential $100Million+ contract they may be willing to let a few things slide. Not to mention, once the patents come through, admitting to exacting details on certain parts isn't all that important.

JVD-

It's louder than a Delta Black Label. Second of all, the GeForce FX uses top-of-the-line DDR2 RAM, and that equals a lot of money, whereas ATi uses older, slower RAM and a 256-bit bus. The question is which one is more expensive.

RAM drops in price very quickly, added complexity for the memory bus doesn't.

On Doom 3: Carmack said that the GeForce FX, while using the NV30 path, is faster than the Radeon using the ARB path

The board that has been the lead for Doom3 development recently has slower code running on it than one that Carmack just got his hands on. If you want to get into the ARB path, the features it doesn't address that the NV30 has, and how the current, non-final ARB path (which is missing components that the final version will include and that the NV30 path already supports) could favor a board with less flexibility, then that is another matter entirely.

All this is a moot point though, as the R350 will be out in a month or so, and that is where nVidia is screwed.

What exactly is meant by 'be out'? Paper launch or retail availability? Not saying it won't be retail, but I don't recall the last cycle product (top tier) that hit store shelves within a month of the technology announcement (paper launch). This is not criticism; I'm simply stating that all the vendors have been doing a paper launch a month or more in advance of shipping, and we have no paper launch on the R350 yet.

Okay, I just looked over your post and see you mention that no one cares about AA and whatnot.

Could you point that out, anywhere? I said there wasn't much hype around the anisotropic performance characteristics of the consoles. Since you didn't make the connection: the XBox stomps the PS2 in texture filtering by a much larger margin than it does in poly throughput, and yet we still didn't hear any hype about it. You know why that is? 32-tap anisotropic with trilinear filtering at ten times the speed of the competition doesn't quite roll off people's tongues the way 100 million polys/sec does. AA performance is obviously going to be very important next generation, a discussion we have had numerous times here (and in general, the likely narrow gap between any of the platforms in terms of visuals, due to limitations increasingly moving to artists and their budget instead of technology).

Simon-

That depends, to a major extent, on the actual areas of the chips.

That is true, but given the lack of eDRAM or any other details indicating a potential for higher-density areas of transistors on the R300, you could just as easily assume that nVidia has a far larger edge than the finer build process would indicate.
 
Ben - Yes, RAM does drop in price very quickly. But let's not forget that right now nVidia is the only one buying this RAM, which will slow the rate at which its price drops. Also, in a very short time faster cards from ATi will be out and will force the price of the FX down quickly.

On to what Carmack said. Well, that is just what he said: the FX is the fastest while running the NV30 path, with the Radeon 9700 slower in most cases but pulling ahead sometimes while running the ARB path. The Radeon 9700 Pro is twice as fast as the NV30 while both are running the ARB path. The Radeon is faster while running the R200 path than the ARB path. The ARB is the highest-quality path made. What I take from this is that Carmack couldn't get good performance out of the NV30, so he made it its own path. The Radeon could use the best-quality one and ran it fast... almost as fast as the NV30 runs its special code.

Well, what I mean by 'out' is benchmarks on the web, since I was in CompUSA today and saw no GeForce FXs, and the one I ordered on the net hasn't shipped yet. Come on, if ATi released cards to Anand's and they showed it spanking the FX by 30% in every benchmark at the same price, who would buy the FX? It's like buying a 2003 model car for 20 grand when next year's model will be out in a month for 20 grand.
 
BenSkywalker said:
For noise levels- http://www.gainward.se/news/030130.pdf

Further reference-

Owing to Gainward’s advanced R&D skills the maximum noise is reduced to only 7dB, the same as a human heartbeat. Competitive products' maximum noise levels can be rated as high as 70dB, the same level as a domestic vacuum cleaner.

http://www.anandtech.com/news/shownews.html?i=18126&t=pn

I'll believe Gainward's claims when I see them. They're an exception.

1080i has more pixels than the maximum sane PC resolution. Between 10x7 and 12x10? That's a laugh. Learn to multiply.

I am assuming that is a joke and you are just fooling around; with the number of times the differences between interlaced and progressive modes have been discussed, there is no way in hell you could not be aware of just how wrong your numbers are.

You really think that in interlaced mode the core does half the pixels or something?

The core is still rendering (hopefully) 60 fields per second at 1920x1080.

How about 3DMark2001's Nature test? The Radeon whoops the FX in that one.

I guess 'whoops' means something different where I come from, at the very least it includes being faster-

http://www.hardocp.com/image.html?image=MTA0MzYyMDg1OTVjVVNkMzFISXhfMl80X2wuZ2lm

9700Pro losing at every resolution is whooping on the GFFX? I guess you could say the R300 whoops the GFFX at CodeCreatures too. And the GFFX whooped the R300 hard in terms of getting to the market using your guidelines :LOL:

My mistake, I never read H's preview, I thought I saw 3DMark2001 Nature results at AnandTech but I was wrong.

About the advanced pixel shaders though:

http://www.anandtech.com/video/showdoc.html?i=1779&p=6

EMBM, Vertex Shader, and Advanced Shader are the Radeon's domain...

And it isn't far behind at DOT3 and Pixel Shader.

Curiously enough it looks to me like the GF FX rocks the house at fixed-function work but at this point pretty much keels over and dies the moment shaders are enabled... could have something to do with FP32 but I'm not sure.

ATi's RV350 is .13u, it's taped out

Heard the same thing about the GFFX months before it was.

You cut out the second half of what I said.

and reportedly it'll be widely available before the .15u R350.
------------
You forget that even different cores by the same IHV will handle things slightly differently. To-the-metal GeForce3 code probably wouldn't run quite right on a GF4, and definitely wouldn't run right on an FX...

If nV went in knowing what they needed to do, it would run right, which they obviously would be doing...

Well I guess they could pull a GF FX and have a full fixed-function pipeline alongside the shader pipe...

And even when the core is 'reasonably obsolete' there's still nVidia confidential tech in there which they still use and don't want ATi to see.

If it could land them a potential $100Million+ contract they may be willing to let a few things slide. Not to mention, once the patents come through, admitting to exacting details on certain parts isn't all that important.

Revealing core functions could lend them a contract somehow? Explain?

And I'm sure it's already patented, silly, this is an NV2x core we're talking about. It's still proprietary nVidia tech.
 
zurich said:
The vibe I get is that nVidia could build a GPU as powerful as the PS 3 for Microsoft, but Huang wants more money to build it than MS has on the table.

Again, I say that IMO, NVIDIA's the only company that can put MS on an equal (or greater) playing field with Sony. Here's hoping they'll shell out...


Maybe. It comes down to nVidia's engineers selling Microsoft's engineers on the idea that the chip design they can put into the X-Box 2 GPU will be worth the cost.
 
Tagrineth said:
EMBM, Vertex Shader, and Advanced Shader are the Radeon's domain...

And it isn't far behind at DOT3 and Pixel Shader

Wait a second. EMBM is the Radeon's domain (R300 +3fps), but it "isn't far behind" at DOT3 (NV30 +17fps) or Pixel/Frag Shading (NV30 +23fps)?

I also fail to see how the Vertex Shader of the R300 is superior. It might be faster, but architecturally the NV30 owns it in elegance.
 
V3 said:
(and the Xbox version of DX exposes more of the hardware, so in some respects, it's slightly beyond a GF4 on a PC).

The same as those OGL extensions?

Slightly more even, since you can in theory bang on the hardware directly and do things OGL on a PC won't allow.
 
Steve Dave Part Deux said:
To be valid, architectural comparisons between two cards must be done clock for clock. Clock for clock R300 is significantly faster.
Yup, and seeing as the R350 will be out first (and rumors of nVidia pulling out on the NV30 are all over right now), it's no contest.
 
Steve Dave Part Deux said:
To be valid, architectural comparisons between two cards must be done clock for clock. Clock for clock R300 is significantly faster.

Bollocks.

Pipeline length influences clock speed. And pipeline length is a (micro)architectural parameter.

Longer pipeline == shorter individual pipestages => higher clock speed.
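To illustrate that with made-up numbers (purely a sketch, not real NV30/R300 figures):

[code]
# Split a fixed amount of combinational work into more pipeline stages:
# each stage gets shorter, so the achievable clock goes up, even though
# the work done per clock does not change.
total_logic_delay_ns = 10.0  # hypothetical total logic delay per operation
latch_overhead_ns = 0.2      # fixed register/latch overhead per stage

def max_clock_mhz(stages):
    stage_delay_ns = total_logic_delay_ns / stages + latch_overhead_ns
    return 1000.0 / stage_delay_ns

for stages in (5, 10, 20):
    print(f"{stages:2d} stages -> ~{max_clock_mhz(stages):.0f} MHz max clock")

# ~455 MHz at 5 stages, ~833 MHz at 10, ~1429 MHz at 20 -- which is why
# comparing chips 'clock for clock' ignores a deliberate design trade-off.
[/code]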

A valid comparison must be done in the same process and have similar die size.

In real life we'll *always* be comparing apples to oranges. In the end the only valid comparison is that of the market.

Cheers
Gubbi
 
Gubbi said:
Steve Dave Part Deux said:
To be valid, architectural comparisons between two cards must be done clock for clock. Clock for clock R300 is significantly faster.

Bollocks.

Pipeline length influences clock speed. And pipeline length is a (micro)architectural parameter.

Longer pipeline == shorter individual pipestages => higher clock speed.

A valid comparison must be done in the same process and have similar die size.

In real life we'll *always* be comparing apples to oranges. In the end the only valid comparison is that of the market.

Cheers
Gubbi

Well, Gubbi, if chip A does 3 million polygons at 325MHz and chip B does 3 million polygons at 400MHz, then chip A is faster than chip B at doing polygons. Simple logic would tell you that if chip A is upped to 400MHz it would do more than 3 million polygons, and if chip B is reduced to 325MHz it would do less than 3 million polygons.
 
jvd said:
Well, Gubbi, if chip A does 3 million polygons at 325MHz and chip B does 3 million polygons at 400MHz, then chip A is faster than chip B at doing polygons. Simple logic would tell you that if chip A is upped to 400MHz it would do more than 3 million polygons, and if chip B is reduced to 325MHz it would do less than 3 million polygons.

You missed the point completely. Because chip A is made with a shorter pipeline, it can't clock at 400MHz.

Show me a Radeon 9700 clocked at 400MHz

Show me a Radeon 9700 Pro clocked at 500MHz

And overclocks don't count.

This is *exactly* the same discussion the CPU geeks have over Athlon vs. P4

You compare products on absolute performance, not clock speed

Cheers
Gubbi
 
Gubbi said:
You compare products on absolute performance, not clock speed

Exactly, well said.

Or I suppose that when comparing the R300 against the Parhelia, they could be forced to disable 4 of the R300's pipelines, since that skews its per-unit performance, and the two should be benched on equal standings. Except people don't realise that just as the number of computational units in a design contributes to overall performance, so does clock speed.
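To put numbers on why the two camps keep talking past each other, here's a quick sketch using jvd's hypothetical chips from earlier (treating his '3 million polygons' as a per-second figure, which is an assumption):

[code]
# jvd's hypothetical chips: both push 3 Mpolys/s, chip A at 325 MHz,
# chip B at 400 MHz.  Per-clock and absolute metrics rank them differently.
chips = {
    "chip A": (3_000_000, 325e6),
    "chip B": (3_000_000, 400e6),
}

for name, (polys_per_sec, clock_hz) in chips.items():
    per_clock = polys_per_sec / clock_hz
    print(f"{name}: {per_clock:.4f} polys/clock, {polys_per_sec:,.0f} polys/s")

# Per clock, chip A looks 'faster' (~0.0092 vs 0.0075 polys/clock); in
# absolute terms they tie.  And if chip A's shorter pipeline is exactly
# what keeps it from reaching 400 MHz, the per-clock 'win' bought it
# nothing, which is Gubbi's point about comparing absolute performance.
[/code]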
 