NV30 not taped out yet....

Evildeus said:
Nagorak said:
Well...the issue isn't that the taping out itself is going to take them 2-3 months, it's that even if they taped out tomorrow, it's still going to take 2-3 months for them to go into production. It's also safe to say that production is fairly far off since they haven't made any noise yet, and you know they'd have fouled up the R9700 launch if they could have.
Well, I see. But I don't think that's important for the future of the R300. What's important is its price, its availability (we're talking mid-September or later now :(), and, once the NV30 is available, how it compares to the NV30. Till then I think ATI will have a small period of joy ;). Now, saying 3 months, it could still be October (yeah, 2 months and 25 days actually :D)

Well, I also think it's rather premature to assume the end of the world for the R300 when the NV30 is released. I don't know how the R9700 will fare selling in the ultra-high end (read: overpriced and out of reach for 95% of the target audience). We'll just have to wait and see. Either way, the R9700 is a great card and hopefully it will get cheap enough to buy before too long.
 
Nagorak said:
Well, I also think it's rather premature to assume the end of the world for the R300 when the NV30 is released. I don't know how the R9700 will fare selling in the ultra-high end (read: overpriced and out of reach for 95% of the target audience). We'll just have to wait and see. Either way, the R9700 is a great card and hopefully it will get cheap enough to buy before too long.
Did I say anything else?
 
mboeller said:
If you read the news from reactorcritical, they have an odd definition of tape-out. It seems that parts of the NV30 had a tape-out, but not the complete chip. So now, when they tried to merge the parts of the chip into the final design, it did not work, and they had to alter the chip design to make it work. So it seems even reactorcritical says that the NV30 (the complete chip) has not taped out yet. IMHO this means we are at a pre-A01 step. Hopefully, with these sub-parts having been tested and optimised beforehand, they can make it with only one revision and get A02 chips and boards out in Jan/Feb 2003.

Well, NVIDIA may have used the shuttle services at TSMC to do their development, testing out different blocks of the device separately, rather than putting it all together at once and doing verification that way.

The shuttle is essentially buying a small portion of a mask set so that the cost is shared amongst several ASIC vendors. The numbers I've heard for an all-layer mask on the shuttle are about 40k-70k, vs. 800k for the full reticle mask set. One of the problems is that this only happens once a month or so, and if you miss the deadline, you have to wait. Of course, TSMC requires you to buy the full reticle set to go to production, but it's quite a bit cheaper for development if you can afford the schedule hits.
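To put rough numbers on that cost sharing (just a back-of-the-envelope sketch; the 40k-70k and 800k figures are the ones above, the spin count is purely illustrative and not anything about NVIDIA's actual schedule):

```python
# Back-of-the-envelope on shuttle economics, using the figures quoted above
# (40k-70k per shuttle slot, ~800k for a full reticle set). The spin count
# below is purely illustrative, not inside information.
FULL_MASK_SET = 800_000                # dollars, full reticle set
SHUTTLE_SLOTS = (40_000, 70_000)       # dollars, per-vendor shuttle slot

# Implied number of vendors sharing one shuttle reticle
for slot in SHUTTLE_SLOTS:
    print(f"${slot:,} slot -> roughly {FULL_MASK_SET / slot:.0f} vendors sharing the masks")

# Cost of three development spins done each way (mid-range slot price)
spins = 3
print(f"{spins} spins on the shuttle: ~${spins * 55_000:,}")
print(f"{spins} spins with full mask sets: ~${spins * FULL_MASK_SET:,}")
```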

This may be the source of "taped out at .15u", "T&L is a separate chip", and all sorts of other weird, conflicting rumors going around. Maybe the logic was developed at .15u for functional verification purposes, and maybe the vertex block was developed as its own chip, with the intent that all these pieces get integrated into a .13u single-die device in the end.
 
RussSchultz said:
Well, NVIDIA may have used the shuttle services at TSMC to do their development, testing out different blocks of the device separately, rather than putting it all together at once and doing verification that way.

and:

RussSchultz said:
This may be the source of "taped out at .15u", "T&L is a separate chip", and all sorts of other weird, conflicting rumors going around. Maybe the logic was developed at .15u for functional verification purposes, and maybe the vertex block was developed as its own chip, with the intent that all these pieces get integrated into a .13u single-die device in the end.

I just got this crazy idea that they are indeed going full deferred rendering this time. This is the only reason I can see why they would almost need to get silicon back as part of the die verification. I don't know if they would even need to include the full pixel pipeline to check it out, but they might have.

The plot thickens.... :eek:
 
LeStoffer said:
RussSchultz said:
Well, NVIDIA may have used the shuttle services at TSMC to do their development, testing out different blocks of the device separately, rather than putting it all together at once and doing verification that way.

and:

RussSchultz said:
This may be the source of "taped out at .15u", "T&L is a separate chip", and all sorts of other weird, conflicting rumors going around. Maybe the logic was developed at .15u for functional verification purposes, and maybe the vertex block was developed as its own chip, with the intent that all these pieces get integrated into a .13u single-die device in the end.

I just got this crazy idea that they are indeed going full deferred rendering this time. This is the only reason I can see why they would almost need to get silicon back as part of the die verification. I don't know if they would even need to include the full pixel pipeline to check it out, but they might have.

The plot thickens.... :eek:

I don't see how going the gigapixel route would have any different functional verification requirements than any other system, or would lend itself more to it being developed in pieces. (In other words, I'm not grasping how you're making the connection that these facts are pointing toward your conclusion)
 
RussSchultz said:
I don't see how going the gigapixel route would have any different functional verification requirements than any other system, or would lend itself more to it being developed in pieces. (In other words, I'm not grasping how you're making the connection that these facts are pointing toward your conclusion)

No? Well, I was thinking along the lines that it did take PowerVR some time to get their architecture just right, and if you want to test a load of different game engines with all their "hacks" and workarounds, you'll need real silicon, because simulation would be way too slow for that many tests (although they of course did some computer simulation beforehand).

Does it make any sense now?
 
I guess, though I think you might be underestimating the amount of functional testing that goes on with any graphics core, whether it's a tiler or not.
 
If nVidia have got a deferred rendering system in action, does this mean that they will be able to do ray casting in real time, as some of the demo pics needed, and would this substantiate the rumour that they are getting MSAA, essentially, for free?
 
mboeller said:
Evildeus said:
Seems they have taped out the initial part.

http://www.reactorcritical.com/#l1205

The tape-out of this Cinematic Shading GPU was made back in May. But we should understand that developing such complex processors is extremely hard and there may be some errata in the real chip. Modern GPUs consist of a number of blocks that are put together at the final stage of the design. The problem with the final stage is that a lot of applications and emulators are used. Developers have to write some additional scripts so that the different programs can work in collaboration. Sometimes those patches are a bit incorrect and the silicon version does not function the way it should have. It happened with 3dfx's Napalm, and now it seems to have held Nvidia back a bit. We should bear in mind that the latter has a very big design team, and we believe that an ordinary erratum cannot delay the GPU by the three or four months some sources claim.

I will take the official word from the CEO any day over the whispers from reactorcritical. If the NV30 had already taped out, the CEO would have said so, because his "has not taped out yet" comment made a lot of noise, generated bad press, and reduced the value of the company (see the other thread).

Thank God you said that. Also, with the upcoming CEO certification "law" taking effect on Aug 14th, I would wager that Huang (and other CEOs here in the States) will be dropping other little truth bombs.

I saw a couple of pages back (too lazy to go get it) that someone had the audacity to suggest that the CEO of nVidia is not the person we should ask questions of. I would ask then, whom?!? Kenneth Lay?
 
No, nVidia is not going deferred rendering. There have been many comments in the past that pretty much show this without a shadow of a doubt. Unfortunately, I can't seem to quickly find some of the quotes I'm thinking of, but I am certain of this. You'll find out at the release of the NV30 whether or not I'm right.
 
Hasn't nVidia also been quoted many times in the recent past saying that there was no need for a 256-bit memory bus?
 
I only saw that once (it's been requoted by others not from nVidia), about 3-6 months ago.

Additionally, it's looking like the NV30 has moved to DDR2 and a 256-bit bus.
 
Chalnoth said:
No, nVidia is not going deferred rendering. There have been many comments in the past that pretty much show this without a shadow of a doubt. Unfortunately, I can't seem to quickly find some of the quotes I'm thinking of, but I am certain of this.

You don't need to find the quotes, as I remember the statements about not going deferred rendering quite clearly myself. They even hinted that the Gigapixel architecture probably wasn't anything they would use.

But that is not the same as saying they would never come to the conclusion that their LMA architecture just would not be efficient enough as it gets more and more computationally heavy to render each and every pixel (visible or not).
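Just to illustrate that point with some made-up numbers (this ignores early-Z rejection and other occlusion tricks, so it's only a sketch of the argument, not a claim about any real chip):

```python
# Why heavier per-pixel work makes overdraw hurt more: an immediate-mode
# renderer shades every submitted fragment, while an ideal deferred renderer
# shades only the visible ones. All numbers are illustrative guesses.
CORE_RATE      = 2.6e9          # shader cycles per second (assumed)
PIXELS_VISIBLE = 1600 * 1200
OVERDRAW       = 3              # assumed average depth complexity

for cycles_per_pixel in (1, 4, 16):   # ever longer pixel shaders
    imr_ms      = PIXELS_VISIBLE * OVERDRAW * cycles_per_pixel / CORE_RATE * 1e3
    deferred_ms = PIXELS_VISIBLE * cycles_per_pixel / CORE_RATE * 1e3
    print(f"{cycles_per_pixel:2d} cycles/pixel: IMR ~{imr_ms:.1f} ms/frame, "
          f"ideal deferred ~{deferred_ms:.1f} ms/frame")
```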

May I direct you to this line that Dave wrote in his NV30 specs:

New focus on computational efficiency rather than memory efficiency

This basically states that memory efficiency may indeed not be the bottleneck with this architecture. Please also note Dave's mention of reactorcritical missing "a very important word out when talking about that bandwidth number" (which was 48 GB/s of bandwidth).

Chalnoth said:
You'll find out at the release of the NV30 whether or not I'm right.

Yes. I cannot compete with the possible NDA you have signed, but otherwise let us at least have some fun thinking about it, okay? ;)
 
Let me just clarify a little something. I have signed no NDA (I wouldn't be talking about this if I had). Nor am I acting under any sort of insider info here. It's just that previous quotes from nVidia personnel have made me very certain that they are not going for any sort of deferred rendering in the foreseeable future.

Additionally, if you look at the "computational efficiency" quote, that seems to me to be more about packing more processing into every pixel, so that not so much memory bandwidth is needed, but lots of pixel shader power is.
 
There may have been no need pre-R300, but since nVidia seems to have so much time on their hands, I can't imagine why they wouldn't be spending some of it adding a faster bus. With all the delays they're having, it shouldn't come at much of a price penalty either, as we now have three cards using it (Parhelia, P10, R300).

The computational efficiency issue makes sense, as we're seeing fillrate, rather than memory bandwidth, become the limiting factor as we shift to AA+AF+tri. You'll notice many overclocking benchmarks show as much gain from OCing the core as they do from OCing the memory.
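Here's a toy version of that fillrate-vs-bandwidth argument (the clock, pipe count, bandwidth, and per-pixel costs below are all made-up ballpark figures, not real specs for any card):

```python
# Toy model: is a frame limited by core (fill/shading) work or by memory
# bandwidth? All inputs are illustrative guesses for a hypothetical card.
CORE_CLOCK    = 325e6        # Hz
PIXEL_PIPES   = 8
MEM_BANDWIDTH = 19.8e9       # bytes/s

def frame_times(pixels, cycles_per_pixel, bytes_per_pixel):
    core_time = pixels * cycles_per_pixel / (CORE_CLOCK * PIXEL_PIPES)
    mem_time  = pixels * bytes_per_pixel / MEM_BANDWIDTH
    return core_time, mem_time

pixels = 1600 * 1200 * 3     # resolution times an assumed overdraw of 3

# Plain bilinear, no AA: ~1 cycle and ~8 bytes of color+Z traffic per pixel.
# Trilinear + 8x AF + 4x MSAA: ~6 cycles and ~20 bytes per pixel (guesses).
for label, cyc, byt in (("plain", 1, 8), ("AA+AF+tri", 6, 20)):
    core_t, mem_t = frame_times(pixels, cyc, byt)
    bound = "core" if core_t > mem_t else "memory"
    print(f"{label:>10}: core {core_t * 1e3:.1f} ms, "
          f"memory {mem_t * 1e3:.1f} ms -> {bound}-bound")
```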

I agree that 48GB/s is probably missing an "effective." That number just sounds ridiculous. Sure, 4x FSAA would be "for free" with that kind of bandwidth, but I just don't see it being feasible. Would excessive power draw become an issue at that speed?

<tangent>With GPUs becoming as powerful as they are at certain operations, I can see motherboards beginning to give equal thought to both CPU and GPU. I'd love 512MB of 256-bit DDR2 for a P4A 3GHz or Hammer 3000+, that's for sure. Dammit, someone develop a MB with a shared memory architecture and two sockets! :) I don't see Intel doing so, as it would take away from their position as the most important part of the PC, but nVidia should certainly consider it in conjunction with AMD.</tangent>
 
My guess for the missing word was "peak".

What type of design would offer a peak bandwidth that high?

For lack of a ready answer, I reconsidered and started guessing "uncompressed", as in the data being transferred in a compressed state. While not the most elegant phrasing in the context of that quote, it does seem to make sense (i.e., lossless Z-buffer compression).

EDIT: realized that "compressed" fits better and conveys the point accurately. :oops:
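If the missing word really is "compressed"/"effective", the arithmetic could look something like this (the bus width, memory clock, and compression ratios are my own guesses, not confirmed NV30 specs):

```python
# How a 48 GB/s "effective" figure could fall out of a much smaller raw bus
# once lossless color/Z compression is counted. All inputs are assumptions.
def raw_bandwidth_gbs(bus_bits, clock_hz, transfers_per_clock=2):
    """Raw bandwidth in GB/s for a DDR-style bus."""
    return bus_bits / 8 * clock_hz * transfers_per_clock / 1e9

raw = raw_bandwidth_gbs(bus_bits=128, clock_hz=500e6)  # hypothetical 128-bit DDR2 @ 500 MHz
print(f"raw: {raw:.1f} GB/s")

for ratio in (2, 3, 4):
    print(f'with {ratio}:1 lossless compression -> ~{raw * ratio:.0f} GB/s "effective"')
```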
 
Pete, unfortunately, you don't just "add" a 256-bit bus. The external bus is such a crucial part of the chip, I'd wager it's set in stone very early on in the design process (as opposed to other stuff).
 
mboeller said:
Don't know if this was already posted, but it seems NV30 cards are at least 6 months away:

http://www.tomshardware.com/business/02q3/020802/siggraph-03.html

If Tom's Hardware says it, it must be true, because they will do everything to make Nvidia shine bright :LOL:
Why haven't I heard about this before? 15% yields sound pretty awful for what should be a large chip, though I'm not sure if that's "working at top frequency" yields or just "working" yields (which they can shunt to slower, cheaper variants via their vertical chip-family offerings). They're even having trouble with the GF4MX, which I thought was their bread-and-butter card!

Tom's Hardware said:
An analyst source had this to offer in response to our inquiries:

Our contacts indicate that TSMC's average yields at 0.13 micron have been just 15%. So even if the part was ready for volume production later this year, those poor yields are likely to impact the number of parts meeting spec that you can get to market.

NVIDIA took a gamble by acting as one of the guinea pigs for TSMC, and it looks like this gamble isn't working out, right now. ATI's strategy of staying with the proven 0.15-micron process will probably pay dividends in the near-term.

Lastly, NVIDIA has experienced a tough time with its board partners with regard to its NV17 (GeForce4 MX). A number of vendors have experienced tons of trouble building boards based on the chip and NVIDIA's reference design.
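To put that 15% figure in perspective, here's some rough math on what yield does to chip cost (the wafer cost and die count are my own assumptions, not TSMC numbers):

```python
# Rough cost-per-good-die at different yields. Wafer cost and candidate die
# count are illustrative assumptions for a large chip on a leading-edge process.
WAFER_COST     = 4000     # dollars per wafer (assumed)
DIES_PER_WAFER = 100      # candidate dies per wafer for a big GPU (assumed)

def cost_per_good_die(yield_fraction):
    good_dies = DIES_PER_WAFER * yield_fraction
    return WAFER_COST / good_dies

for y in (0.15, 0.40, 0.70):
    print(f"yield {y:.0%}: ~{DIES_PER_WAFER * y:.0f} good dies/wafer, "
          f"~${cost_per_good_die(y):.0f} per good die")
```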
 