NV30 volume possibly earlier...

Naaa, I think 500 MHz is gonna be the limit for now. Not that it can't clock higher, just that 500 MHz is the most we as consumers are gonna get. For now, that is.

750 MHz and over is prolly being reserved for NV35/GeForce FX2 :)
 
V3 said:
but about running Humus's phong shader

Do you think a 500 MHz core is enough? Maybe the Ultra is a higher-clocked version; who knows, they might get it up to 750 MHz, with that ugly gigantic thingy :)

Errr... I don't know about 750 MHz any time soon. It is very possible that the "ultra" NV30 card is 500 MHz and the "non-ultra" less than 500 MHz. I seriously doubt that it would come with a higher stock clock than 500 MHz.
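For reference, the per-pixel work in a Phong shader like the one in Humus's demo boils down to a couple of dot products and a power per pixel. A minimal sketch in plain Python (the function names and constants here are mine for illustration, not taken from the actual demo):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, shininess=32.0):
    """Classic Phong: diffuse term N.L plus specular term (R.V)^shininess."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light vector about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Light and viewer straight along the normal: full diffuse + full specular.
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 2.0
```

The point of the clock-speed question is that this whole evaluation has to run once per pixel, every frame, which is where raw core frequency and per-clock shader throughput come in.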
 
Typedef Enum said:
There's only 1 small problem in stating your case...It's based only on supposition, not fact.


Personally...I strongly believe that there is functionality missing from NV30 that was supposed to be there all along...

Type-- Any sniff that there may be something there that they will "turn on" later? That's another fave of theirs.
 
yeah, and who you callin ugly!?

Anyways, I think a lot of people are forgetting that the NV30 is the ONLY card capable of rendering cinematics. And I don't mean the cinematic-quality games everyone is gabbing about; I mean it's the only card that has the programmability and precision necessary to replace CPU rendering. With a professional workstation flag on the chip, it will sell a LOT and for HIGH prices. Just look at current pricing for Wildcat4's, and then remember that the NV30 not only replaces the gfx card, but also those 4 Xeon 2.0GHz CPUs, AND it does it FASTER! Oh, nVidia has sure got a winner whether it can outperform ATi or not at 3DMark and Doom III. Any speculations as to whether the R350 will be marketed for cinematics too?
 
Geo,

I honestly have no earthly clue...I'm just taking a stab in the dark.

I would have bet $100 on the fact that nVidia was going to offer a newer AA feature with NV30. Heck, who knows...Maybe it will be a little better...

Clearly, we've seen this time and again: certain features weren't considered significant enough to hold up the release of a product...only to resurface later on.

Likewise, there's no denying that .13u was very frustrating for nVidia...As somebody else said, Brian Burke and crew seemed just relieved to finally get this thing moving along. With that said, I might just be inclined to believe that the NV35 refresh is going to contain a healthy number of features that didn't make it in NV30.

Who knows...maybe it will contain another TMU...a nicer AA method...etc.

It's pretty sad that the day they unveil a brand new architecture, we're already looking ahead to the refresh :)

Honestly...look @ the performance numbers we're seeing right now, and then tack on another 10-15% (possibly) by the time this thing ships. Talk about a performance monster.
 
Nvidia is the only card capable of rendering cinematics? Jeez, you eat up the PR. Aren't those the same cinematics that the GeForce4 could do? (Final Fantasy in real time) Didn't ATi demo the R300 running a scene from Lord of the Rings in real time?

All I can say is Nvidia has one heck of a marketing team.
 
I think he means full cinematic rendering in terms of outputting 128 bits all the way to the framebuffer. This isn't necessarily a real-time mode; as long as it does it faster than the hardware they are using for cinematic rendering now, it will be useful.

http://www.pcrave.com/news/week.htm?id=1751#1751

"John Carmack quote in its entirety: Nvidia is the first of the consumer graphics companies to firmly understand what is going to be happening with the convergence of consumer realtime and professional offline rendering. The architectural decision in the NV30 to allow full floating point precision all the way to the framebuffer and texture fetch, instead of just in internal paths, is a good example of far sighted planning. It has been obvious to me for some time how things are going to come together, but Nvidia has made moves on both the technical and company strategic fronts that are going to accelerate my timetable over my original estimations."
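To see why full floating-point precision "all the way to the framebuffer" matters, here is a toy simulation (illustrative numbers, not a model of any specific chip): a multipass render accumulating many small lighting contributions, once through an 8-bit integer framebuffer (values snapped to 1/255 steps after every pass) and once through a float framebuffer.

```python
def quantize8(x):
    """Snap to the nearest 8-bit framebuffer step (n/255)."""
    return round(x * 255) / 255

def accumulate(passes, contribution, store):
    """Blend `passes` additive contributions, storing through `store`
    after each pass, as a multipass renderer would."""
    value = 0.0
    for _ in range(passes):
        value = store(value + contribution)
    return value

passes, contribution = 64, 0.001  # 64 dim additive light passes

fp_result = accumulate(passes, contribution, lambda v: v)   # float store
int_result = accumulate(passes, contribution, quantize8)    # 8-bit store

print(f"float framebuffer: {fp_result:.4f}")  # 0.0640
print(f"8-bit framebuffer: {int_result:.4f}")  # 0.0000 -- contributions vanish
```

Each 0.001 contribution is below the smallest 8-bit step (1/255 ≈ 0.0039), so an integer framebuffer rounds every pass back to zero, while the float framebuffer accumulates them correctly. This is the banding/precision-loss problem the FP framebuffer path is meant to eliminate.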
 
Sage said:
yeah, and who you callin ugly!?

Anyways, I think a lot of people are forgetting that the NV30 is the ONLY card capable of rendering cinematics. And I don't mean the cinematic-quality games everyone is gabbing about; I mean it's the only card that has the programmability and precision necessary to replace CPU rendering. With a professional workstation flag on the chip, it will sell a LOT and for HIGH prices. Just look at current pricing for Wildcat4's, and then remember that the NV30 not only replaces the gfx card, but also those 4 Xeon 2.0GHz CPUs, AND it does it FASTER! Oh, nVidia has sure got a winner whether it can outperform ATi or not at 3DMark and Doom III. Any speculations as to whether the R350 will be marketed for cinematics too?

Put down that goddamn crackpipe, my son! :LOL: :LOL: :LOL:
 
"With that said, I might just be inclined to believe that the NV35 refresh is going to contain a healthy number of features that didn't make it in NV30."

Indeed, I would think so too.

"Who knows...maybe it will contain another TMU...a nicer AA method...etc."

Yeah, an extra TMU like GF2 GTS had over GF1, plus better AA, like GF4 Ti had over GF3.


"It's pretty sad that the day they unveil a brand new architecture, we're already looking ahead to the refresh"

How very true. But it's only to be expected, since we're getting more & more at such a rapid pace, yet we still want even MORE. Especially since some of the things originally planned for the new architecture were probably cut, but will re-appear in the refresh.
as with TNT>TNT2, GF1>GF2, GF3>GF4
 
kid_crisis said:
I think he means full cinematic rendering in terms of outputting 128 bits all the way to the framebuffer. This isn't necessarily a real-time mode; as long as it does it faster than the hardware they are using for cinematic rendering now, it will be useful.

http://www.pcrave.com/news/week.htm?id=1751#1751

"John Carmack quote in its entirety: Nvidia is the first of the consumer graphics companies to firmly understand what is going to be happening with the convergence of consumer realtime and professional offline rendering. The architectural decision in the NV30 to allow full floating point precision all the way to the framebuffer and texture fetch, instead of just in internal paths, is a good example of far sighted planning. It has been obvious to me for some time how things are going to come together, but Nvidia has made moves on both the technical and company strategic fronts that are going to accelerate my timetable over my original estimations."

:rolleyes:

...it's been said several times already... it's one year old, kid. ;)
 
Megadrive1988 said:
"Who knows...maybe it will contain another TMU...a nicer AA method...etc."

Yeah, an extra TMU like GF2 GTS had over GF1, plus better AA, like GF4 Ti had over GF3.

Again: what for? 128 bit?
 
Hi there,
V3 said:
but about running Humus's phong shader

Do you think a 500 MHz core is enough? Maybe the Ultra is a higher-clocked version; who knows, they might get it up to 750 MHz, with that ugly gigantic thingy :)
Well, perhaps not 750MHz, but there's certainly a reason for all these "500+ MHz" (rather than 500 MHz) strings in the slides. ;)

ta,
-Sascha.rb
 
kid_crisis said:
:rolleyes: ...it's been said several times already... it's one year old, kid. ;)

So get it a cake and light it a candle! :D Doesn't make it any less true now, does it?

Yes, it is less true. Carmack said it before he knew the ATI card was going to be out 6 months before the NV30--the ATI card that can also store full FP to the framebuffer, and thus render all cinematic effects.

I'd like to follow up on DemoCoder's point above. The real strength of the NV30 will be how fast it runs its shaders. If it can issue more shader instructions per cycle (this is still up in the air), combined with its higher clock rate, it might be able to run shaders that are 3 or 4 times as long in real time as the R300 can. I think looking at maximum shader length as the NV30's advantage was a mistake; the real advantage might be in usable shader length.

Of course, such an advantage would only show up in games if games decided to provide a code path optimized for NV30-length shaders.
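To put rough numbers on the usable-shader-length argument: sustained real-time shader length is just shader throughput divided by pixel throughput. The issue rates below are purely hypothetical (as noted above, NV30's per-clock issue rate wasn't public), chosen only to show the shape of the calculation.

```python
def usable_shader_length(clock_hz, ops_per_clock, width, height, fps):
    """Shader instructions per pixel sustainable in real time,
    ignoring overdraw, memory stalls, and vertex work."""
    ops_per_second = clock_hz * ops_per_clock
    pixels_per_second = width * height * fps
    return ops_per_second / pixels_per_second

# Hypothetical figures: 500 MHz at 2 ops/clock vs 325 MHz at 1 op/clock,
# both rendering 1024x768 at 60 fps.
nv30 = usable_shader_length(500e6, 2, 1024, 768, 60)
r300 = usable_shader_length(325e6, 1, 1024, 768, 60)

print(f"NV30-ish: {nv30:.0f} ops/pixel, R300-ish: {r300:.0f} ops/pixel")
# e.g. "NV30-ish: 21 ops/pixel, R300-ish: 7 ops/pixel"
```

Under these assumed numbers the ratio works out to about 3x, which is where a "3 or 4 times as long" real-time shader budget would come from; the real ratio depends entirely on the actual issue rates.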
 
Would you guys shut up with your fanboi bickering? It's so tiresome to have to wade through it on every single fricking thread. You don't bring anything new to the discussion, so don't bring it.
 
Well, perhaps not 750MHz, but there's certainly a reason for all these "500+ MHz" (rather than 500 MHz) strings in the slides.

Why not 750 MHz? With that hideous thing, it seems possible.
 

If their fixes require only metalization changes, they could have done this and shaved 2-3 weeks off their production time.


I agree, Russ. Based on DemoCoder's notes, it certainly appears that A01 does not contain any significant bugs that cannot be masked in the drivers. It's unclear what speed they are running core/mem at, though. If they are indeed running near target frequencies, then there's a very good chance that their gamble will actually pay off, with them only requiring metal mask changes for minor bug fixes and perhaps improving yield to some degree. A full-layer spin, if necessary, can always come later once the product is already ramped and shipping. They've done this in the past with NV20, for example.

DemoCoder: those numbers you posted for Q3 and others--were those actual timedemos being demonstrated, or were those the numbers that nVidia gave out?
 
You keep talking about a gamble. How do we know that they started a ton of wafers and held them at contact, in hopes that they would only need metal changes to fix whatever problems they had?
 
How do we know that they started a ton of wafers and held them at contact, in hopes that they would only need metal changes to fix whatever problems they had?

That's the information that I've been told.
 