Outlandish NV40 specs appear from Russian site

Is there some sort of indicative connection between embedded DRAM and IBM's 0.09 process? Also, what clock frequency limitations, if any, are associated with using eDRAM in the context of such a connection?

Hmm...did some searching to answer my own question, and found a few somewhat related things:
  • This discussion at Ars Technica.
  • This recent mention at EETimes.
  • This announcement about the alternatives nVidia seems to have rejected, which might lend some validity (as much as can be lent to specs that come from apparently nowhere) to the idea that ambition in this regard is related to a (presumed) foundry change for the NV<afterwhatevertheyannouncethisfalliftheydosuch> product.
  • This quite possibly outdated assertion of divergence between NEC and IBM at Silicon Strategies, to address a contrast that you might want to make depending on what you think nVidia's main competitor is doing.

Such figures actually fit with some theories I had about what nVidia would have to do to compete given their design strategy, with some of their cooling requirement talk, and even with the presumption of an increased likelihood of information leakage due to nVidia having to talk more specifically to reassure partners in the face of the NV30's competitiveness issues.

Now there's just the matter of it actually being possible and economical to execute :p. (Oh, and not just being made up, which is hardly worth mentioning for something on the internet :LOL:)
Note that I don't think web sites parroting the same info is any sort of confirmation at all, unless some indication of independent or somewhat trustworthy sources for it is included.
 
Evildeus said:
Well, if I remember correctly, I made this thread on Nvnews in February, and doing some research I found the original thread at that time. So what's new since September 2002? :LOL:

Knowledge of the NV30, its performance, and its transistor count...making this more likely to fool people with/actually happen (which of the two depends on whether the figures are completely made up or actually based on something tangible). :D
 
demalion said:
Knowledge of the NV30, its performance, and its transistor count...making this more likely to fool people with/actually happen (which of the two depends on whether the figures are completely made up or actually based on something tangible). :D
Yeah that's cool 8)
 
micron said:
I thought Australians were descendants of convicts?...no offence, I'm not too bright when it comes to this stuff...

Depends on the person's background. :)
I was born in Australia, but my family line (mum, dad, etc...) were born in Europe. My grandad/great grandfather/great great grandad (not sure which) on my father's side was born in Hungary; my mum's side (AFAIK) was born in Croatia. That's as far as I know. My family comes from all over Europe. My grandmother on my mum's side is Chinese (my grandfather split with my mum's mum and married a Chinese woman).

Family = confusing as hell.

Back on topic: Seriously, can someone estimate the price of a card with such specs?
 
OpenGL guy said:
DaveBaumann said:
42? Nah, too short ;)

350 million transistors, 0.09, 2H 2004 not a chance. Think about all the troubles 0.13 gave nvidia. You think it will get easier with nearly three times as many transistors and a new process? Plus, what team is working on NV50? NV35 is not here yet so you're talking less than two years for a new architecture (assuming the NV30 team is doing NV50). It doesn't add up.

I'm sure there was an article (could've been at AnandTech), one of those 'Inside' pieces that look at how GPUs are made. The one on nVidia said their supercomputers were already performing some task in relation to NV50. So they probably are already working on it. I'll see if I can find the article.

EDIT: http://www.anandtech.com/video/showdoc.html?i=1711&p=6. Look at the bottom of the page. Nothing concrete though.

Here's another - http://www.digit-life.com/articles2/inside-nvidia/index2.html
 
OpenGL guy said:
DaveBaumann said:
42? Nah, too short ;)

350 million transistors, 0.09, 2H 2004 not a chance.

IBM should have 0.09 ready in 1H 04; by H2 the 0.09 process might very well be rather mature.

Think about all the troubles 0.13 gave nvidia. You think it will get easier with nearly three times as many transistors and a new process?

Yes, I think IBM's 0.09 process will be more mature in H2 04 than TSMC's 0.13 was in H2 02, and the Power 5 chip might be larger than 350 million transistors.

Plus, what team is working on NV50? NV35 is not here yet so you're talking less than two years for a new architecture (assuming the NV30 team is doing NV50). It doesn't add up.

That assumes they only have two teams, and that it isn't possible for one team to start the initial work on NV50 before the NV3x are finished.
 
Australia was free-settled in South Australia (the 4th largest state) - that article must have been posted by a Russian Orthodox Greek, of course :rolleyes:

Australia is cosmopolitan, and Sydney, where I live, has to be one of the most beautiful, clean, safe and exciting places in the world. /end_bias

I am a 3rd generation Aussie - my Dad's side is English / French, my Mother's Irish / New Zealand, and my wife is of English / Indian descent - so we're a varied mob down under :)
 
OpenGL guy said:
350 million transistors, 0.09, 2H 2004 not a chance. Think about all the troubles 0.13 gave nvidia. You think it will get easier with nearly three times as many transistors and a new process? Plus, what team is working on NV50? NV35 is not here yet so you're talking less than two years for a new architecture (assuming the NV30 team is doing NV50). It doesn't add up.
Well, that would be a problem for another company if that ever happens, wouldn't it? ;)

I would personally bet on the first half of 2005, making it 15-18 months between the NV40's launch and the NV50's. In fact, Nvidia is quite behind ATI in terms of generation at the moment (3-6 months once the NV35 is out). With the NV40 they could be on par, and with the NV50 take the lead.

350 million sure seems too much, but 250-300 should be reasonable (180 is the rumored figure for the NV40). The 0.09 process should be up this year in IBM's fabs (TSMC's also?) and available to customers early next year, I suppose. So using a 0.09 process at the end of 2004/early 2005 is quite likely.
 
Ok, so that leaves about 200 mill for logic then. If NV40 is going to be 180 mill, and assuming that it doesn't have eDRAM, then I guess the transistor value could be possible.
 
EasyRaider said:
elroy said:
How many transistors would 16 MB of eDRAM take up?

128 million, assuming 1 transistor per bit.

(edit)
134 million actually, forgot 1 kB = 1024 bytes

You're insane, you're lying. :cry:
I knew eDRAM was a waste of space. You could use all those transistors on more useful things, unfortunately. :cry:

(PS: Just kidding about the lying and insane bit ;))
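For anyone who wants to check the arithmetic behind those figures, here's a quick back-of-envelope sketch. It assumes 1 transistor per DRAM bit (ignoring sense amps, decoders and other array overhead) and uses the rumored 350M total only as a hypothetical input:

# Back-of-envelope eDRAM transistor count, assuming 1 transistor per bit
# (ignores sense amps, decoders, redundancy, and other array overhead).
EDRAM_MB = 16
BYTES_PER_MB = 1024 * 1024          # 1 MB = 1024 * 1024 bytes
BITS_PER_BYTE = 8

bits = EDRAM_MB * BYTES_PER_MB * BITS_PER_BYTE
print(f"{EDRAM_MB} MB of eDRAM = {bits:,} bits "
      f"~= {bits / 1e6:.0f} million transistors at 1T/bit")

# Rough logic budget if the rumored 350M total were real:
total_transistors = 350_000_000
logic = total_transistors - bits
print(f"Remaining for logic: ~{logic / 1e6:.0f} million transistors")

That gives 134,217,728 bits (~134 million transistors), matching the corrected figure quoted above, and roughly 216 million transistors left for logic, which lines up with the "about 200 mill for logic" estimate earlier in the thread.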
 
"
I would personally bet on the first half of 2005, making it 15-18 months between the NV40's launch and the NV50's. In fact, Nvidia is quite behind ATI in terms of generation at the moment (3-6 months once the NV35 is out). With the NV40 they could be on par, and with the NV50 take the lead.
"

I don't understand this.
With the NV35 it will be more like 2 months behind in terms of generations; I think they will take the lead already with the NV40.
As you know, the R400 has been cancelled, delayed, or whatever, so that means refresh number 3 of the R300 core in Q4 2003.
 
Richthofen said:
I don't understand this.
With the NV35 it will be more like 2 months behind in terms of generations; I think they will take the lead already with the NV40.
As you know, the R400 has been cancelled, delayed, or whatever, so that means refresh number 3 of the R300 core in Q4 2003.
Well, the first 9800 Pros were available at the end of March, if I remember correctly. The first NV35s should be available at the end of June, so that makes 3 months at the minimum.

Well, for the R390/420/locci, time will tell. We have another 6 months before knowing for sure...
 
If the specs are true, it makes me wonder why Nvidia persists in using discrete vertex and pixel shading units. Why can't they just develop a processor with generalized units that cover all shader computations, as Carmack advised?
 
I actually think they'd advertise such a design by listing the best cases for the specifications that users are familiar with; the spec list reads as a consumer-oriented, rather than technical, listing of specifications (or, of course, something structured to be credible to enthusiasts in general/consumers).
 