NV40 Coming @ Comdex Fall / nVIDIA Software Strategies

Kristof: Okay, okay, you're right, the bottleneck will often be memory bandwidth. You darned TBDR guys :D

But I thought we were talking shader-specific here. More like no AA/AF cases, where memory bandwidth is unlikely to be the problem. Of course, yes, memory bandwidth plays a key role in overall performance.

BTW, the NV40 uses GDDR2, probably a second-generation part. This has been confirmed by "never-ever-ever-wrong sources" - so you can hold me accountable if it ain't correct :)

Now, I don't quite understand your "not even leaked AFAIK." ...

From CMKRNL:
http://www.beyond3d.com/forum/viewtopic.php?t=3010&highlight=

I had stated earlier that the programmable tessellation unit was going to be in NV35. It's not clear to me anymore that this is the case. Apparently this will be in a 4Q'03 part, which leads me to believe that it will be NV40, not NV35. This part will also contain a completely revamped unified shading model. This means that both vertex and pixel shaders will share the exact same ISA and constructs. In other words, pixel shaders will also have access to constant based/dynamic branching. What's most interesting is that nVidia is not the only company doing these things in that timeframe.
(he's referring to the R400 here, but we all know it's canned, right? Please, don't make me explain it AGAIN...)

Okay, so this is a year old now, but I'd be surprised if the design changed that much, and CMKRNL has never been wrong before AFAIK (of course, he has leaked things in the past that changed later, but that's another story).

And yes, this quote *could* be referring to not having shared functionality but just the same ISA - but I'd be very surprised by that, even more so considering how close the NV30 already is to that; it's hardly worth a "completely revamped" sticker IMO. And as I said, the ILDP thingy I've been talking about a little lately would seem to indicate nVidia already has experience in the dynamic allocation area thanks to the NV30.
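For anyone wondering what the "constant based/dynamic branching" from the quote actually means for pixel shaders, here's a tiny software-shading sketch in C++. It's purely illustrative with made-up values and has nothing to do with actual NV40 hardware or any real shading language; it only shows the difference between a branch on a per-draw constant and a branch on per-pixel data.

```cpp
// Toy illustration (not NV40 code): constant-based vs dynamic branching,
// written as a software "pixel shader" in C++.
#include <array>
#include <cstdio>

struct Pixel { float light; float base; };

// Constant-based branching: the condition comes from a shader constant,
// so every pixel in the draw call takes the same path.
float shade_constant_branch(const Pixel& p, bool use_specular /* constant */) {
    if (use_specular)              // uniform across all pixels
        return p.base * p.light + 0.2f;
    return p.base * p.light;
}

// Dynamic branching: the condition depends on per-pixel data (e.g. a value
// fetched from a texture), so different pixels can take different paths.
float shade_dynamic_branch(const Pixel& p) {
    if (p.light > 0.5f)            // varies per pixel at runtime
        return p.base * p.light + 0.2f;
    return p.base * 0.1f;          // cheap path for dim pixels
}

int main() {
    std::array<Pixel, 3> pixels{{{0.9f, 0.5f}, {0.2f, 0.5f}, {0.6f, 0.5f}}};
    for (const Pixel& p : pixels)
        std::printf("const: %.2f  dynamic: %.2f\n",
                    shade_constant_branch(p, true), shade_dynamic_branch(p));
}
```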


As for the Comdex dates... Let me be clearer...
It was, about two months ago, and still is AFAIK, nVidia's target to:
- Tape-out the NV40 in July at IBM
- Launch it at Comdex
- Have availability, hopefully at least limited, by year end, obviously for Christmas.


I don't want to overhype this part. Let me restate: it COULD lack true dynamic allocation, it could be delayed, it could be buggy, and so on. nVidia isn't so good at delivering right now, so we have no idea whether it'll go back to its golden days on the delivery front with the NV40. There's no way to tell, and that's why I in fact encourage people to say they don't want to accept this hype.
All I want to say, however, is that from my understanding, the NV40 is *very* ambitious. But then again, so was the NV30, and look what happened...


Uttar
 
Colourless said:
I'll believe hype about the NV40 only after it's arrived and it's proven to actually be 'good'.

Wasn't the NV30 supposed to be the best thing ever, that ATI was supposed to have no chance of defeating? In reality, things turned out quite differently. As such, I have no faith at all in what people say about NV40.

Neh, neh...

Don't be so harsh on the NV30. The architecture is impressive and performant. It's just that they chose a 128-bit bus instead of 256-bit.

I would have fired the guys who came up with this idea, along with the idiots who approved it, if I were Kirk.

The NV35 with its 256-bit bus is what the NV30 "should have been", like you said, and if they had chosen this path from the start it would have been a hit.
 
WaltC said:
Yes, and I would add that folks need to realize that although nv40 might well be "announced" at Comdex, it won't be "coming" for another few *months* after it is announced.

That I know. But *coming* could mean some "extraordinary" preview @ Anand or some other site showing a very fast card, which would be a HUGE PR win.

WaltC said:
If ATi does another high-end product this year based on R350, it will probably "come" about the time nv40 is "announced," I would think.

Yes, that will probably happen.
 
Isn't there good reason to be excited about the NV40? Nvidia has said the NV30 had 'problems' because resources were diverted to the XBox project. That shouldn't be a problem now, so does anyone really expect the NV40 to be as lackluster as the NV30?
 
Why do people think the nv40 is going to go up against the r360? I see no reason for that thinking. Also, right now nvidia has a card set to ship that is faster than the r9800pro, but it's also 200 bucks more. Not very impressive, especially once you factor in the cheats that are being done. nVidia blew it big time this gen, and I won't even give them a second thought until the new card comes out, has good performance, and is proven not to use cheats.
 
WaltC said:
Chalnoth said:
From what I've heard, there's a second iteration of the NV35 due out this fall, followed by the NV40 around Christmas. nVidia shouldn't need the NV40 to compete with an "R360."

Hmmm... here it is June and the first iteration of nv35 is not shipping. From what I hear, the 5900U sent out around the review circuit won't be shipping in any kind of volume until August at the earliest, with the lower-priced nv35 cards only beginning to ship into the market in late June or early July. That's the first iteration of nv35.

So how is it you figure nVidia's going to be into the first iteration of nv35 around August, but will be shipping the second iteration of nv35 AND nv40 prior to Christmas 03? Actually, I wouldn't doubt the revisions of the chips may change for nv35 later this year--but I'd expect improvements to be minor as opposed to fundamental. I heard all of last year that nv30 would "definitely" ship by Christmas '02. My hope is that maybe this year we will have a much more sober picture.

nVidia's got to do something to control its runaway marketing. IMO, it's the single biggest reason nVidia is constantly chasing expectations it can never meet. I hope at some point they'll realize it's hurting them more than it's hurting their competition. They knew full well what a dog nv30 was when they launched their "Dawn of cinematic computing" PR blitz at the time of the product announcement--the thing is, they went ahead and did their PR show anyway. All that dawned was the "failure" of nv30, as the nVidia CEO put it. I wonder what it's going to take before they'll quit chasing their tail in this manner...

In Germany, NV35 cards have already appeared on price lists with target dates sometime in June.
Of course, Germany always gets cards later than the US or Japan, so I am not that optimistic that they will hit that June date in Germany.
But in the other regions mentioned above, I would guess they will.
I know you guys over at Rage3D would love to see the NV35 appearing in August or September, but I am sorry, it's not going to happen :)
In a few weeks Nvidia will be back in business and king of the hill again.

Nvidia said June/July, so until July is over they are still on track. The latest DigiTimes story claimed that cards will appear in the market in June.
Let's see if it turns out that way.

As for the R360... I don't think Nvidia needs the NV40 to beat that product. A speed-bumped NV35 is enough. Then again, if the NV40 does not run into problems, there is no reason for Nvidia to delay it in order to sell a speed-bumped NV35.
But if the NV40 is still some months away on the day the R360 appears in the market, then I guess we will see a speed bump of the NV35.

DB: There's a reason why words are filtered - let's try and steer clear of using it, please
 
Josiah said:
...does anyone really expect NV40 to be as lackluster as NV30?

Heaven forbid that! :p

No, it's just that we are still talking about a .13 process, and although it will be more mature, there are limits to what you can do beyond NV3x if the NV40 is going to have 1) a PPP, 2) VS 3.0/PS 3.0 support, and hopefully 3) fast FP performance.

It's still an open question whether they had to cut corners on the NV30 because of process problems, but at least they should be familiar with the process by now, and they have known for a year that there is still a competitor alive.

So I don't expect it to be lackluster. But you have to ask yourself whether it makes sense for the NV40 to make massive changes to the CineFX architecture just a year after it was launched. :|
 
That raises an interesting question actually - shouldn't we start seeing some developer documentation on the massive changes it's going to bring soon?
 
DaveBaumann said:
That raises an interesting question actually - shouldn't we start seeing some developer documentation on the massive changes it's going to bring soon?
Haven't you mentioned the PS 3.0 etc. writings by Kyro people?

What else do you expect?
 
Wouldn't the PPP unit require some serious documentation to go along with it?
 
I find it really hard to get excited after:

1) all the failed hype and delays over NV30
2) low-k dielectric still not ready
3) continual proprietary "enhancements" to OpenGL
4) lack of clarity over architectural directions of NV3x+
5) games still targeted at DX8 level coding and cards

and last but not least

the time required to propagate "the next new wave" into any useful software before, yet again, the next new generation comes along and, wait, once again the direction changes.

I'd rather see steady (or explosive) progress against a clearly charted 3-5 year plan that everyone is following, versus all the twists, turns and misdirection that seem to be occurring at the moment...
 
David G. said:
Don't be so harsh on the NV30. The architecture is impressive and performant.

...Unless you want to do some pixel shading, that is, which is what the thing was hyped to death to handle so well...

I would have fired the guys who came up with this idea, along with the idiots who approved it, if I were Kirk.

What if it WAS Kirk that approved it, huh? :LOL::LOL::LOL:

It seems to me the bad boss pushes all the blame for a failure onto his subordinates and fires them. Shoots the messenger, so to speak. The good boss carries the responsibility his position brings and accepts the consequences, whatever they might be.

Seems very easy to determine what kind of boss you are. :LOL:

The NV35 with its 256-bit bus is what the NV30 "should have been"

No, the NV30 is what the NV30 should have been. Saying it should have been something else is attempting to rewrite history.

and if they had chosen this path from the start it would have been a hit.

What's the point of this hindsight-gloating you're partaking in? Oh yeah, they'd have r0xx0red if they'd released NV30 in spring 2002 as originally planned, as would 3dfx if rampage had been out in time blah blah etc etc. So what. They didn't. End of story.


*G*
 
Sharing the execution units between pixel and vertex shaders sounds like a very smart move, if they do it. Let's say chip A has 16 execution units for pixel shaders and 16 execution units for vertex shaders, while chip B has 32 execution units that are shared by both.

In the case where chip A only needs 4 units' worth of vertex shading but could use more than its 16 units for pixel shading, chip B would be more efficient: it can put 4 of its 32 units on vertex shading and the remaining 28 on pixel shading. And of course the same holds when the vertex shaders are the bottleneck.

It sounds like a great solution; however, I don't know whether other effects diminish or negate the usefulness/performance of such a solution.
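To make that arithmetic concrete, here's a minimal C++ sketch with purely hypothetical unit counts and workloads (nothing here describes a real chip); it just compares how many units each design keeps busy for the split described above.

```cpp
// Minimal sketch of the arithmetic above (hypothetical numbers, not any real chip):
// chip A has dedicated pools (16 vertex + 16 pixel units), chip B has one
// shared pool of 32 units that can be split to match the workload.
#include <algorithm>
#include <cstdio>

int main() {
    const int vertex_demand = 4;   // units' worth of vertex work this frame
    const int pixel_demand  = 28;  // units' worth of pixel work this frame

    // Chip A: each pool is capped independently, so 12 vertex units sit idle
    // while pixel work is starved at 16 units.
    int a_busy = std::min(vertex_demand, 16) + std::min(pixel_demand, 16);

    // Chip B: the shared pool hands whatever is left after vertex work to
    // pixel work (and vice versa when vertex shading is the bottleneck).
    int b_vertex = std::min(vertex_demand, 32);
    int b_busy   = b_vertex + std::min(pixel_demand, 32 - b_vertex);

    std::printf("chip A: %d/32 units busy\n", a_busy);  // 20/32
    std::printf("chip B: %d/32 units busy\n", b_busy);  // 32/32
}
```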
 
DaveBaumann said:
That raises an interesting question actually - shouldn't we start seeing some developer documentation on the massive changes it's going to bring soon?
The NV30 launch was the first where nVidia publicly released developer documentation in advance. It seems clear now that this release was due to the delay of the NV30 hardware.

If the NV40 is on time, I don't expect any official information until actual release.
 
sonix666 said:
It sounds like a great solution; however, I don't know whether other effects diminish or negate the usefulness/performance of such a solution.
For one, if the processing is done serially in each unit, it should be fairly easy to avoid any load-balancing issues.

That is, if unit A does all of the processing for triangle X, from the vertex shading to the shading of each pixel within that triangle, processing efficiency could be maximized. There may be additional problems with memory bandwidth efficiency, but I don't see how it would be all that much different from current architectures.

If the NV40 does indeed sport unified shading, then it seems logical that the shaders will be optimized for FP32 performance, as FP32 will need to be used almost exclusively when executing a vertex program.
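For the curious, here's a rough C++ sketch of that "one unit owns a triangle end-to-end" idea, with every type, transform, and pixel count made up for illustration; it is not a claim about how the NV40 (or any GPU) actually schedules work.

```cpp
// Rough sketch of the post above: a unified unit runs the vertex stage for
// its triangle, then shades the pixels that triangle covers, so no work has
// to be balanced across separate vertex and pixel pools.
#include <cstdio>
#include <vector>

struct Vertex   { float x, y, colour; };
struct Triangle { Vertex v[3]; };

struct UnifiedUnit {
    void process(const Triangle& tri) {
        // Vertex stage: transform each vertex (FP32 throughout, since the
        // same ALUs would also have to run vertex programs).
        Vertex out[3];
        for (int i = 0; i < 3; ++i)
            out[i] = Vertex{tri.v[i].x * 2.0f, tri.v[i].y * 2.0f, tri.v[i].colour};

        // "Rasterise" and pixel-shade: a real rasteriser would walk the
        // covered pixels; here we just fake a handful per triangle.
        for (int p = 0; p < 4; ++p)
            std::printf("pixel %d shaded with colour %.2f\n", p, out[0].colour);
    }
};

int main() {
    std::vector<Triangle> work = {
        {{{0, 0, 0.3f}, {1, 0, 0.3f}, {0, 1, 0.3f}}},
        {{{2, 2, 0.8f}, {3, 2, 0.8f}, {2, 3, 0.8f}}},
    };
    UnifiedUnit unit;                   // one unit owns each triangle entirely,
    for (const Triangle& t : work)      // so no cross-unit load balancing needed
        unit.process(t);
}
```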
 
Richthofen said:
In Germany, NV35 cards have already appeared on price lists with target dates sometime in June.
Of course, Germany always gets cards later than the US or Japan, so I am not that optimistic that they will hit that June date in Germany.
But in the other regions mentioned above, I would guess they will.
I know you guys over at Rage3D would love to see the NV35 appearing in August or September, but I am sorry, it's not going to happen :)
In a few weeks Nvidia will be back in business and king of the hill again.

I had NV30 on price lists 2 months prior to arrival - so what? OTOH we have been promised deliveries of NV35 product "at the very latest by 1st week of July"... Unfortunately, I have yet to see it conclusively demonstrated that NV35 is a better shader performer than R350. Indeed, the synthetic tests seem to indicate otherwise. Perhaps with better drivers all round...

P.S. You'll tend to find people here interested in tech/industry, so you'll find them scattered throughout Rage3D, nV News, 3DGPU, MF, etc...
 
The NV35 has not even hit the market yet, so why would NV announce the NV40 anytime soon? Unless they want to experience the "Osborne" effect, I don't think we will see any official announcement any time soon.
 
Fuz said:
The NV35 has not even hit the market yet, so why would NV announce the NV40 anytime soon? Unless they want to experience the "Osborne" effect, I don't think we will see any official announcement any time soon.

Fuz, remember the Ti 500 to the GeForce4: that was less than four months. Nvidia doesn't care.
 