Nvidia GT300 core: Speculation

Maybe he went to Munich, too:

http://translate.google.com/translate?u=http%3A%2F%2Fpctuning.tyden.cz%2Fcomponent%2Fcontent%2Farticle%2F1-aktualni-zpravy%2F14552-geforce-g300-jiz-za-par-tydnu&sl=cs&tl=en&hl=en&ie=UTF-8

Amazingly, double the ALUs of GT200, with a texture rate that's basically unchanged. Woah, 6 multiprocessors per cluster.

Jawed
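
For what it's worth, those rumored numbers are internally consistent with GT200's known organization. A back-of-the-envelope sketch in Python (the GT300 figures are just the article's rumor, and the retained cluster and ALU counts are my assumptions, not confirmed specs):

```python
# GT200's known shader organization: 10 clusters (TPCs),
# 3 multiprocessors (SMs) per cluster, 8 scalar ALUs per SM.
gt200_alus = 10 * 3 * 8
print(gt200_alus)  # 240

# Rumored "GT300": 6 multiprocessors per cluster. Keeping 10 clusters
# and 8 ALUs per SM (assumptions, not confirmed) doubles the ALU count.
gt300_alus = 10 * 6 * 8
print(gt300_alus)  # 480 -- double GT200, matching the rumor
```

And since the texture units hang off the clusters, an unchanged cluster count would also leave the texture rate roughly where it was.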

Or maybe we all know Charlie is a major douchebag and know not to believe a word he says about NVIDIA. You know the saying: even the sun shines on a dog's ass from time to time. He has been wrong more times than he has been right. So he got part of bumpgate right; now his batting average is .010 instead of .000.
 

My, my, aren't we a little hostile? Charlie's been right about a lot of things and wrong about quite a few. I don't keep track, but he was right on ATI/AMD well ahead of everyone else, he was right about GlobalFoundries well ahead of everyone else, and he broke the bumpgate story.

DK
 
If there's no such thing as 'GT300', I'm glad I still think of it as NV60, because that doesn't get confusing

G70 / GeForce 7800 = NV47

G80 / GeForce 8800 = "NV50"

G92 / GeForce 9800 = "NV50" or "NV51"

GT200 / GTX 280 = "NV55"

"GT300"/ "GTX 380" = "NV60"


Some might say that GT200 / GTX 280 was NV60 and that the upcoming GT300 is NV70, but I can't see it that way, since GT200/GTX 280 was of the same DirectX and shader generation as G80; the GTX 280 just got more compute and texture resources. The "GT300" will be the first new-generation DirectX and shader architecture since G80 in 2006.
 
He's been right often enough in the past to be taken seriously. So I'll have to go with CJ on this one (and I guess Charlie too). 2010 is looking more and more like a sure thing.
This was pretty much a sure thing after ATI said they would be first; you don't say that if you only think you'll be first by one or two months ... and I am certain they have good sources.

What I'm wondering is if NVIDIA will break NDA early when ATI starts shipping.
 
And performance "around GTX295" levels. the new Radeon announced around the same time and performs about the same as well.

So a $300 GTX380, nice.

Since when is NVIDIA following a midrange X2 strategy? A $300 (or even $380) GTX 380 would certainly mean they'd have to produce an X2 card to compete with AMD's X2 part.
 

Safest bet at the moment, yes; anything earlier than that would just be a pleasant surprise. On a side note: just because M$ isn't aware that I'm about to become a father, it doesn't mean that my wife isn't pregnant :p
 

If odd numbers of clusters work for anyone, be my guest :rolleyes:

Just for the record's sake, pictures borrowed from the 3DCenter fora:

[image: e7ttg4h2.jpg]

[image: g300.png]

Find the differences ;)
 
Except for timing, none of those matter during the PNR process. That sounds blunt, but it's basically true. SI and capacitance are intermediate products on the way to getting your next timing report. Leakage is just there; you can't do much about it at that stage of development anyway, other than being selective about putting fast cells in.

Actually, SI and capacitance can have a large effect on circuits in chips, and leakage can affect a lot of parameters when you move between processes.

"Rework" makes it sound complicated. If the process are sufficiently faster, you'll update the clock parameters in the synthesis script, synthesize, write out new PNR timing constraint files and feed them into the placer. I make it sound easy and it usually is. We're talking a couple of days for the whole chip, a fraction of the overall backend time.

You obviously haven't worked on large-scale designs. Just churning through synthesis can take weeks to months.

For a full node, I'd guess it's still only an annoyance, even if you're bleeding edge: when you're already in PNR, the parameter updates aren't that dramatic anymore. You'll see some disturbance in your timing report, which you fix during the next iteration.
I was more talking about the early stages of new designs: those who want to squeeze maximum performance out of upcoming 32nm processes are probably still seeing fairly large changes, but there's still plenty of time to design around those.

Any full node change is fairly significant. There are tons of things that worked in the prior process that will need reworking to work in the next one if you care at all about frequency.

No, it doesn't work that way. Timing problems are localized and uncorrelated with overall die size. An electron doesn't really care whether it's flying around a die of 50 or 500 mm²: the largest stretches of uninterrupted metal are measured in µm. If your 50 mm² chip has issues with a particular timing path, it will wreck the yield in the slow corner no matter what, and it's going to have to be fixed before production.

Two words:

Intra-die variation.

When a failing timing path is detected, it's a safe bet that it will be due to one of two reasons:
- the timing script had a bug.
- some number was overlooked in a report or incorrectly waived as safe.

Or any number of other issues, ranging from bad parametrization to just the will of the universe. Timing discrepancies have a long list of contributing factors between EDA and silicon.
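
Intra-die variation is also why die size creeps back into the picture despite the "localized timing paths" argument above. A toy Monte Carlo sketch (my own illustration with made-up numbers, not anyone's sign-off model): give every near-critical path a random delay perturbation, and the probability that the worst path on a die violates timing grows with the number of such paths, i.e. with area.

```python
import random

def worst_slack(n_paths: int, nominal_ps: float = 35.0,
                sigma_ps: float = 10.0) -> float:
    """Worst slack on one die: each path starts with the same nominal
    slack and gets Gaussian intra-die variation. Toy numbers only."""
    return min(nominal_ps - random.gauss(0.0, sigma_ps)
               for _ in range(n_paths))

def fail_rate(n_paths: int, dies: int = 500) -> float:
    """Fraction of simulated dies whose worst path has negative slack."""
    return sum(worst_slack(n_paths) < 0.0 for _ in range(dies)) / dies

# More near-critical paths (a crude proxy for a bigger die) -> more
# fallout, even though every path is identical on average.
for n in (100, 1_000, 10_000):
    print(n, fail_rate(n))
```

With these made-up numbers, fallout climbs from a few percent of dies to the large majority as the near-critical path count grows a hundredfold.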
 
Big machines running 24/7 for weeks to months? :oops:
Or mostly just in working hours with pauses for human intervention & then re-running?
What sort of scale of machine do they use, like Top 500 scale or just decently big?
 
Ha, well, it's in NVidia's best interests to make AMD believe it's failing to get GT300 working. Leaks of the kind we saw before GT200 turned up (performance dot on a graph, CUDA future directions) don't seem to have occurred.

Jawed
 
It's like a dart board: if you throw enough darts, one is bound to hit the spot you were aiming for :LOL:
 
Let's put it this way: they can't be worse than degustator's sources


Much worse, actually ;). There has been no scheduled launch that I know of, and so there is no scheduled delay. Simple, isn't it? Charlie is most likely wrong. What happened to the Win 7 parts? Oh, they came out months ago.
 