Penstarsys Speculates on upcoming nVidia products...

RussSchultz said:
Yeah, the whole plugfest thing blows the NV18 being an integrated chipset out of the water.

Sigh, and I thought for once I was onto something.


Hey, it's a Floor Wax AND a Dessert Topping!

Nvidia seems to reuse their "NVxx" nomenclature if the underlying graphics technology is the same. Example:

NV18 -- base desktop version
NV18m -- mobile (i.e. low power)
NV18gl -- professional version
NV18crush -- integrated chipset (i.e. northbridge)

There IS a "crush" version of the NV18. Check out these links. The page behind the second link contains 4 more links, all of them showing nForce2 motherboards, and they all mention "NV18", "crush 18", or "Nvidia crush 18" at some point. This is probably the version of NV18 that Nvidia showed on their infamous table (the one with 81M transistors).

http://www.warp2search.net/article.php?sid=4924&mode=thread&order=0&thold=0

http://www.warp2search.net/article.php?sid=4959

Just because there is a "crush" version, I don't think that precludes a desktop version of the same base name (with far fewer transistors, of course).
 
Looking at the nForce2 section of nVidia's website, it appears that if there's an integrated version of the NV18, the same may be true of the NV28. If you look at the two product categories, the gamer one suggests an integrated GF4 Ti (unless, of course, they're assuming you add one), while the other two both talk about a GF4 MX...

Hope this hasn't been discussed already, but I didn't see anything on it...
 
Nagorak said:
How many pipes does the GF4 MX have?

2.

Nagorak said:
Considering it is faster than a GF2 Ultra now, it must have 4 pipes?

No, it doesn't; it has the bandwidth optimizations that were developed for the GF4 Ti: Z-compression and Z-culling.

Nagorak said:
Does the GF4 MX have occlusion culling, or is it really totally based on the GF2 architecture?

Yes, it has occlusion culling.

Nagorak said:
I'm guessing it doesn't, because that's the only way it could suck so badly. :p

No, it's because it has 2 pipes. So you could compare it with the R7500, which is the closest architecturally (2 pipes, Hyper-Z). It doesn't suck that badly if you compare it with that.

The R9000 beats the GF4 MX because of its 4 pipes...
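
To put rough numbers on the pipe counts being thrown around, here's a quick sketch (the pipe/TMU configurations and core clocks are approximate retail specs from memory, so treat them as illustrative rather than official):

Code:
chips = {
    # name:            (pixel pipes, TMUs per pipe, approx. core MHz)
    "GeForce4 MX 440": (2, 2, 270),
    "Radeon 9000 Pro": (4, 1, 275),
    "GeForce2 Ultra":  (4, 2, 250),
}

for name, (pipes, tmus, mhz) in chips.items():
    pixel_rate = pipes * mhz         # Mpixels/s, single-textured
    texel_rate = pipes * tmus * mhz  # Mtexels/s, all TMUs busy
    print(f"{name}: {pixel_rate} Mpix/s, {texel_rate} Mtex/s")

Same texel rate as the R9000, but half the pixel rate; that's where the 4-pipe advantage shows up.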
 
and roll out its new-generation NV30 in time for the Christmas season

Well, if they roll it out (and it's still a big if), it won't be on a shelf where you can grab one and buy it... it'll be somewhere in a box on a boat, arriving in the first quarter of 2003 :D
 
Ok, I thought I would chime in on this one since I wrote that little blurb on PenStar.

I remember getting excited about Mr. Huang's statements about the transistor count on the NV-18 and NV-28 being around the 80 million mark. But that really didn't make a whole lot of sense, especially for the NV-18. You do not want to produce a budget chip that has that many transistors! It is just too expensive! Also, the NV-18 will be featured in mobile markets as well as the chipset markets in the future, so having 80+ million trannies just doesn't make a whole lot of sense, and NVIDIA has in fact told me that the NV-18 won't be significantly larger than the NV-17! So I am thinking that Jen-Hsun and gang suffered a TIA (transient ischemic attack, aka a stroke) and those numbers are way, way off base.

NVIDIA is in a position where they have to produce a smaller and more affordable processor while still addressing DX 8.1 compliance at the low end. Taking the current NV-25 and making that a low-end chip would cut the margins to nothing on that particular chip, and NVIDIA is all about margins. To address that area, they have to make a smaller chip with the same functionality and have it be competitive with the Ti 4200 and Radeon 9000 Pro.

Making a 2x2 processor makes quite a bit of sense vs. a 4x1 (such as the Radeon 9000 Pro). My justification is that we are honestly no longer fillrate bound. More pipes means more raw fillrate, but DX 8.1 apps perform more texturing and shading operations, so that is the bottleneck. Also, if they keep the current 1-pixel/2-texture pipeline, it doesn't need to be modified to provide twice the loopback (as the Radeon 9000 Pro has to do compared to the Radeon 8500). With shading operations being the overall bottleneck, keeping a 2x2 pipe makes a lot of sense vs. the way ATI is doing it. And knowing NVIDIA, they would probably optimize the pipelines even more for both the pixel and vertex shaders, so performance most likely would not be much less than a Ti 4200's, at little more than half the cost per chip.
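
To make the loopback argument concrete, here's a minimal sketch. It assumes an idealized pipeline where each batch of textures beyond what a pipe's TMUs can apply in one pass costs a full loopback pass; the 275 MHz clock and the configurations are hypothetical, chosen only for comparison:

Code:
import math

def mpixels_per_sec(pipes, tmus_per_pipe, core_mhz, textures):
    # Each pipe needs one pass per batch of textures its TMUs can apply;
    # loopback for extra textures divides pixel throughput accordingly.
    passes = math.ceil(textures / tmus_per_pipe)
    return pipes * core_mhz / passes

for textures in (1, 2, 4):
    two_by_two = mpixels_per_sec(2, 2, 275, textures)   # 2x2 layout
    four_by_one = mpixels_per_sec(4, 1, 275, textures)  # 4x1 layout
    print(f"{textures} texture(s): 2x2 = {two_by_two:.0f} Mpix/s, "
          f"4x1 = {four_by_one:.0f} Mpix/s")

On single-textured pixels the 4x1 layout is twice as fast, but as soon as two or more textures per pixel become the norm (as in DX8-era titles), the two layouts converge; that is the gist of the argument above.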

Now, this was entirely speculation on my part, with little or no confirmation. So do not take this as the truth, but looking at the way the market is moving, it makes quite a bit of sense. Why would NVIDIA make the NV-28 a super GF4 Ti chip when the NV-30 is their high-end product? It would make much more sense for them to be working on a value DX8.1 part, so they can address the mainstream and counter ATI's budget-minded moves. As for the NV-18, I highly doubt it will have any DX8.1 functions, but it could be the extreme low-end and mobile chip of choice.

As for the .13u speculation, if I remember correctly, Jen-Hsun basically just said that the NV-30 is going to be made on it, and didn't mention that it was going to be the only chip made on that process. NVIDIA has had a long tradition of utilizing advanced process technology across the board for their value products (GeForce 2 MX, GeForce 4 MX), so I honestly can see NVIDIA going ahead full speed on .13u across the board. From all indications, TSMC has finally gotten its act together on the .13u process and is making significant strides in improving yields. I would bet the process will be readily available this month, with mass production starting on that line shortly.

As for NV-30.... I have very little faith that it will show up in mass production before January 30th. We may see some running silicon at Comdex, but it will be very close to first silicon. I have again heard rumors that the NV-30 still hasn't taped out yet (i.e., the mask data hasn't been sent to TSMC). If this is true, NVIDIA is going to be very hard pressed to get it out in a timely manner. To give an indication of where that might be, last year at this time NVIDIA had GeForce 4 Ti samples in their labs for testing and qualification, and I believe it was A1 silicon. The GeForce 4 Ti was released about 7 months after that point. A lot of midnight oil is going to be burnt at NVIDIA over the next few months if anything is to be shown in November.
 
As for the .13u speculation, if I remember correctly, Jen-Hsun basically just said that the NV-30 is going to be made on it, and didn't mention that it was going to be the only chip made on that process.

No. He said at some point that NV30 was the only .13um chip at the moment.

If they are having process-related difficulties with one chip, then they are unlikely to replicate those difficulties across several different designs. It's far more likely they will use NV30 to gain experience with the process and then start subsequent products once they've gone through the .13um pain barrier.
 
"
To give an indication of where that might be, last year at this time NVIDIA had GeForce 4 Ti samples in their labs for testing and qualification, and I believe it was A1 silicon. The GeForce 4 Ti was released about 7 months after that point. A lot of midnight oil is going to be burnt at NVIDIA over the next few months if anything is to be shown in November
"

Pretty bad example. A lot of people know that Nvidia did not want to launch the GF4 Ti that early.
They decided to earn a little more money with the GF3 core, and guess what? It worked. ATI's Radeon 8500 launch was disrupted by the GF3 Ti series of cards plus new drivers. The price of the Radeon 8500 dropped pretty sharply during the first months.
All Nvidia did was delay the GF4 Ti launch. I remember Anand saying that he already had his GF4 Ti in November 2001.
I think they could have launched the GF4 series a lot earlier, but there was really no reason to do that.
Nvidia is all about making money. Not the small cent but the big buck :)
That is what I like about them; pretty similar to Microsoft and Intel.
 
Haha, remember, CEOs are like politicians! Not only is it important to listen to what they say, but also to what they don't say. How many chips have they actually publicly acknowledged at this moment? NV-30, NV-25, etc. They have made very little mention of NV-28 and NV-18, so saying that NV-30 is the only announced chip on the .13u process could well be misleading.

When going to a new process, it is usually much easier to work out the big problems by using a simpler product. Going to a 120 million transistor product and then trying to troubleshoot the process with such a massive chip is bordering on insane. Using an 18 million or 36 million transistor product on such a process would provide much more reproducible results, as well as keeping things a lot simpler. Once that is complete, then move on to the bigger dies and work out the smaller (yet just as important) problems that will pop up with larger chips. Baby steps, Dave, baby steps.
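
To put the die-size argument in rough numbers: under the classic Poisson yield model, yield falls off exponentially with die area. A minimal sketch; the defect density and area-per-transistor figures below are made up purely for illustration, not real TSMC data:

Code:
import math

# Classic Poisson yield model: yield = exp(-defect_density * die_area).
# Both constants below are invented for illustration, not real fab data.
defect_density = 0.01   # defects per mm^2 on an immature process (assumed)
mm2_per_mtrans = 1.0    # die area in mm^2 per million transistors (assumed)

for mtrans in (18, 36, 120):
    area = mtrans * mm2_per_mtrans
    good = math.exp(-defect_density * area)
    print(f"{mtrans}M transistors (~{area:.0f} mm^2): ~{good:.0%} good dice")

Even with made-up constants, the shape of the curve shows why debugging a process on a small die is so much cheaper.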

I of course could be so far off base with this one that it is not even funny, but betting that NVIDIA is not transitioning other smaller and simpler products to the .13u process isn't terribly productive. It will be very interesting to see one way or another. Again, it is Jen-Hsun's job to obfuscate their plans where possible; saying publicly that the NV-30 is the only (semi-announced) chip going onto the .13u process, while hiding the fact that other unannounced chips (and we officially don't know squat about NV-18 and NV-28) will be made on the same process, is probably meant to confuse competitors such as ATI. NVIDIA is not going to announce their secret weapons if they can avoid it.
 
JoshMST said:
Ok, I thought I would chime in on this one since I wrote that little blurb on PenStar.

Thanks; in the past you have seemed to have some decent clue about what was going to happen later on. ;)

JoshMST said:
As for NV-30.... I have very little faith that it will show up in mass production before January 30th.

Why do you mention this particular date? Hmmm....?

JoshMST said:
We may see some running silicon at Comdex, but it will be very close to first silicon. I have again heard rumors that the NV-30 still hasn't taped out yet (i.e., the mask data hasn't been sent to TSMC). If this is true, NVIDIA is going to be very hard pressed to get it out in a timely manner.

Comdex (mid November) sounds about right for some previews; that is, if the damn thing has taped out. These ongoing rumours about the delay are starting to sound a bit odd to me, given nVidia's apparent faith that it'll be here this Christmas. Could you please elaborate a bit on this rumour as to the reason for the delay? If your source doesn't know this, I wouldn't put too much faith in this rumour.

JoshMST said:
To give an indication of where that might be, last year at this time NVIDIA had GeForce 4 Ti samples in their labs for testing and qualification, and I believe it was A1 silicon. The GeForce 4 Ti was released about 7 months after that point. A lot of midnight oil is going to be burnt at NVIDIA over the next few months if anything is to be shown in November.

But as I understand it, nVidia could have pressed for a launch of the GF4 in November but didn't, as the GF3 Ti's were doing a good job? They have stated about 100 days from tape-out to production, so the GF4 isn't anything to judge by.

BTW: Welcome aboard Sir!
 
When going to a new process, it is usually much easier to work out the big problems by using a simpler product. Going to a 120 million transistor product and then trying to troubleshoot the process with such a massive chip is bordering on insane. Using an 18 million or 36 million transistor product on such a process would provide much more reproducible results, as well as keeping things a lot simpler. Once that is complete, then move on to the bigger dies and work out the smaller (yet just as important) problems that will pop up with larger chips. Baby steps, Dave, baby steps.

Errr, Josh, every single new architecture has been on a new process, and it has always been the biggest part. Look at the history.
 
What we see and what actually goes on are usually two different things. Engineering and marketing are definitely two different things, and when NVIDIA releases new products, we usually don't know what has gone before. You are correct about the process deal, but only from what we see. Let's take a look at this:

GeForce 256: supposedly the first product on the .22u process, but very shortly after that we heard about the TnT-2 Pro, which was being made on .22u, and it was out in far greater numbers than the GF 256. Of those two, which was fabricated first? I would guess the TnT-2 Pro: it was a well-known design, any problems with the new process would be exposed by it, and that knowledge could be passed on to the production of the GF 256.

GeForce 2 GTS: again, it was first announced and released in April, and became readily available in late June. The GeForce 2 MX was released in July, and there were products on the shelves instantly featuring this simpler chip. Which of those two probably first tested the .18u waters? Again, I would bet on the MX part, as it is simpler to work with because it has half the transistor count. It also probably went into production well before the GeForce 2 GTS, as many thousands of MX chips were sent to manufacturers in the May timeframe, while only limited quantities of GF2 GTS chips were sent at the same time.

As for the GeForce 3, from my understanding, the .15u process had very few technical changes from the .18u; one of the biggest was the use of copper. So the GeForce 3 does stand alone here, and from what I can tell, no other product at the time was made on that process.

As for the GeForce 4 Ti release, it may or may not be a good example. Whether NVIDIA could have released it that November, I would be hesitant to say. There were some significant changes from the GF3 to the GF4 Ti, and those take time to work out. As for Anand having a board in November, that could very well be so. Then again, the same guys had GeForce 3 boards in hand around the same time the year before, but that chip still wasn't ready for the mass market until two more spins (and was then released in February).

January 30 is just a number I pulled out of my head, but one that makes some kind of sense thinking about the entire tape-out-to-market timeline. NVIDIA says 100 days usually, but for this type of part, I would be very surprised if they could make it. If they were to tape out today, then 6 months from now would be a good estimate of what they could do. Of course, they could be working on it 24 hours a day with 3 hours of sleep for the engineers, and we could see nicely working samples by December 1. I guess what I am trying to say here is that the NV-30 is almost as great a leap over the GeForce 4 Ti as the GeForce 4 Ti is over the original GeForce 256. Not so much in transistor count as in the overall complexity and newness of the design.
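
For what it's worth, the date arithmetic behind that guess (the tape-out date below is hypothetical; the 100 days is just NVIDIA's oft-quoted rule of thumb):

Code:
from datetime import date, timedelta

tapeout = date(2002, 8, 1)                  # hypothetical tape-out "today"
best_case = tapeout + timedelta(days=100)   # NVIDIA's usual 100-day claim
realistic = tapeout + timedelta(days=182)   # ~6 months for a part this new

print(best_case)   # 2002-11-09: barely in time for a Comdex showing
print(realistic)   # 2003-01-30: right at my January 30th guess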
 
I guess what I am trying to say here is that the NV-30 is almost as great a leap over the GeForce 4 Ti as the GeForce 4 Ti is over the original GeForce 256. Not so much in transistor count as in the overall complexity and newness of the design.

Definitely agree with that statement, m8! :D
 
As for the GeForce 3, from my understanding, the .15u process had very few technical changes from the .18u; one of the biggest was the use of copper. So the GeForce 3 does stand alone here, and from what I can tell, no other product at the time was made on that process.

Josh, you are surmising that the GF3 is the one that stands out, when in fact you've guessed the others. Fact is, the larger parts on significant new architectures have always appeared first; you've guessed the rest so that it fits your theory, but the GF3 highlights that your guesswork here is probably wrong.

;)
 
JoshMST said:
January 30 is just a number I pulled out of my head, but one that makes some kind of sense thinking about the entire tape-out-to-market timeline. NVIDIA says 100 days usually, but for this type of part, I would be very surprised if they could make it. If they were to tape out today, then 6 months from now would be a good estimate of what they could do. Of course, they could be working on it 24 hours a day with 3 hours of sleep for the engineers, and we could see nicely working samples by December 1.

I'm as sceptical about this Christmas launch as you are (at least if we're talking about any volume of significance), but 6 months sounds a bit high.

Anyway, back to the rumour about it not having taped out yet: how close is your source to the buzz?

JoshMST said:
I guess what I am trying to say here is that the NV-30 is almost as great a leap over the GeForce 4 Ti as the GeForce 4 Ti is over the original GeForce 256. Not so much in transistor count as in the overall complexity and newness of the design.

Yes, but ATI pulled it off while having to fight their issues with staying at 0.15 um on a 100+M transistor design. nVidia said they had some tests on 0.13 um, which might well have been on smaller parts of the overall NV30 design.

Dunno, but we're talking about a company famous for its ability to execute, and it would be kind of hazardous to damage that image big time by stating over and over again that the NV30 will be their fall product if they know full well they probably won't make it at all. Something just sounds off to me... :(
 
You could very well be correct and I could just be pissing in the wind. That's the way it goes. I guess we will never know until we consult a few NVIDIA engineers, and this could easily be a question they would answer without much arm-twisting. I suggest we give it a shot.

Whether or not it is true, I think using smaller, less complex dies to test the waters is a logical idea. Then again, with the GeForce 3, how much competition at the mid-level and low end did NVIDIA have from ATI? The Radeon was good, but not that good compared to the GeForce 3, the GeForce 2 Pro/Ultra, and the GF2 MX. So there really wasn't much need to introduce a higher-performing, budget-minded piece of technology at the time.

Boy I would love to get into the heads of the product planners at NVIDIA and ATI. We could definitely learn some interesting things.

BTW, why aren't you on IM, Dave? It sure would be nice to discuss this in real time.
 
JoshMST and the GeForce2 MX

That is really so not true; the Radeon is so much better than the GF2 MX.

I'm not gonna state the obvious reasons why...
 
Haha, I guess I didn't write that one very well.

The GeForce 2 GTS/Pro, as well as the GeForce 3, were all in the same area that the Radeon was trying to compete in.

The low end, where the GeForce 2 MX was placed, had the Rage 128 Pro to compete with. I would consider the GF2MX to be quite a bit better than the Rage 128 Pro.

So you are right, the Radeon was quite a bit better than the GeForce 2 MX. But for the low end, the GF2MX was the card of choice vs. what ATI had to offer.
 