Interesting info on early NV40 memory

It was first hyped for LMA2 but, like the crossbar memory controller, was present in LMA1.

Don't forget the tech was developed largely because of the X-Box, which is a GF3 derivative.
 
Please show us where auto precharge is shown as being part of LMA I/GeForce3. I did a search on Google and found nothing.
 
It was first hyped for LMA2 but, like the crossbar memory controller, was present in LMA1.

Proof? (in links, and the Inquirer is not a valid source)

Either that, or you are avoiding admitting that you were wrong.
 
The X-Box and nForce1 are the same chipset (except for the CPU interface and a couple of other minor things)...
The GF3 (NV20) and the X-Box GPU (NV2A) are almost the same chip...

It's the exact same situation as the GF256/DDR and GF2: the GF256 supported NSR and FSAA, it was just never exposed.
 
radar1200gs said:
The X-Box and nForce1 are the same chipset (except for the CPU interface and a couple of other minor things)...
The GF3 (NV20) and the X-Box GPU (NV2A) are almost the same chip...

NV2A featured double the vertex shaders of NV20, an extra FX ALU per pipeline and double-pumped Z/Stencil - if that's "almost the same" then I'd like to know what's different!

Pre-Charge was first introduced with NV25, as were a few other elements such as Z compression.

Its the exact same situation as GF256/DDR and GF2 - the GF256 supported NSR and FSAA, it was just never exposed.

The reason "NSR" wasn't advertised was becuase it was buggy. In fact under OpenGL the capabilities were exposed on NV10, because they could restrict the operations to the element that worked, however under DX they couldn't so they held over until NV15 in which the issues were sorted out. FSAA on GTS is largely a software implementation, in that it doesn't take anything to support it, other than a bunch of game compatibility workarounds and enough pixel grunt to make it reasonable performance.
 
It seems to me that when you want Dave to answer a question, you just get radar to post a lot of false information, and bingo, along comes Dave, right on time :)

Has anyone ever seen Dave and radar logged on at the same time? ;)
 
Pre-Charge was first introduced with NV25, as were a few other elements such as Z compression.

That's what I thought...

The reason "NSR" wasn't advertised was becuase it was buggy. In fact under OpenGL the capabilities were exposed on NV10, because they could restrist the operations to the element that worked, however under DX they couldn't so they held over until NV15 in which the issues were sorted out. FSAA on GTS is largely a software implementation, in that it doesn't take anything to support it, other than a bunch of game compatibility workarounds and enough pixel grunt to make it reasonable performance.

That's what I thought as well... (well, the FSAA part anyways)

So... some more vague/incorrect data to show radar?

It's the exact same situation as the GF256/DDR and GF2: the GF256 supported NSR and FSAA, it was just never exposed.

Believe it or not, the GF256 does support FSAA; however, like the GF2, it is a software implementation, not a hardware implementation (as you would like us to believe).

NV2A featured double the vertex shaders of NV20, an extra FX ALU per pipeline and double-pumped Z/Stencil - if that's "almost the same" then I'd like to know what's different!

At the time when the GF3/Xbox was out, I did feel the GF3 was crippled in that aspect; the GF4 was closer to the Xbox equivalent (in my opinion).

The X-Box and nForce1 are the same chipset (except for the CPU interface and a couple of other minor things)...
The GF3 (NV20) and the X-Box GPU (NV2A) are almost the same chip...

Memory precharge on a motherboard is probably different from that done on a video card, but I could be wrong on that. It accomplishes the same thing, though.
 
It seems to me that when you want Dave to answer a question, you just get radar to post a lot of false information, and bingo, along comes Dave, right on time

Kinda amusing ain't it? ;) :LOL:

Too many half-truths or complete BS... makes you wonder if NV PR wanders these forums...
 
digitalwanderer said:
ben6 said:
Or what about OVERCLOCKED memory compared to the CARD SPECIFICATION? Ask me about my Ti500 reference board sometime (240 core/260 memory when the card was specced at 240/250). Err, different topic :)
Hey Ben, what about your Ti500 reference board? :|

Well, the first thing I did was fire up Coolbits, and it registered at 240/260 instead of the 240/250 that the retail cards shipped at. Not really that big a deal (every Ti500 I reviewed OCed to 260 without issue), but interesting nonetheless.

BTW, in the ever-interesting world of review units, for some reason I now have more NV 6800-class cards than I do ATI X800-class cards.
 
I'm the one from the Nvnews thread.

Whether the RAM is overclocked or not I don't know, although I would have been happier if it read GC16 instead of GC20. However, it does run @ 600MHz stable.

Here is the reply email from Nvidia:

"Thank you for your email.

Regarding your card's RAM: The memory has been fully tested and
certified by Samsung to run at 550MHz with sufficient margin.

Early in production, a limited number of memories were incorrectly
marked as 500MHz material.

To be sure, NVIDIA requested that Samsung retest the material and
certify its speed grade per the requirements of NVIDIA.

The RAM is not being overclocked, and has been fully tested and
certified to run at the 6800 Ultra-specified clocks.

We're confident your card carries the same reliability, stability, and
compatibility you expect from an NVIDIA product."

Cheers
Tze Lin

--
Ong Tze Lin
Technical Marketing Manager
Asia-Pacific

NVIDIA Singapore
80 Marine Parade Rd #15-08
Parkway Parade, Singapore 449 269

T +65.6340.6825 F +65.6348.8110 M +65.9832.7125

tzelin@nvidia.com
 
DaveBaumann said:
The reason "NSR" wasn't advertised was becuase it was buggy. In fact under OpenGL the capabilities were exposed on NV10, because they could restrict the operations to the element that worked, however under DX they couldn't so they held over until NV15 in which the issues were sorted out.
:oops:
I wonder who told you that. Certainly not someone who knows DX7 and NV_register_combiners...
 
radar1200gs said:
Dave, first, read what I quoted you as saying in the old thread.

Second, nVidia does not label memory, Samsung does!

Third, as usual, you missed the whole point...;)

Here's what you quoted in your initial post:

According to Radar:
Oh, and regarding the GC20 modules: I e-mailed Nvidia about them being overclocked and they replied that the first few runs of GC16 modules were marked incorrectly by Samsung as GC20 yet were in fact GC16. The Nvidia tech guy who responded to my e-mail said that Nvidia had Samsung retest the mislabeled GC20 modules to confirm that they would run at GC16 specifications... he assured me that the GC20 modules on the first batches of 6800U's were in fact GC16 specced and that I had nothing to worry about.

So, okay, first, your guy is telling you that the info he relates comes from nVidia, not Samsung, and as you said, nVidia does not make the RAM. So the reason Dave mentions nVidia is that the source for the info you posted is nVidia, not Samsung.

Second, your information source was obviously had by the oldest trick in the book, which is: "Dazzle them with BS, or else pass the buck." As such, it's riddled with logical inconsistencies...;) If, as alleged, nVidia "had" Samsung "retest the modules" (which would have been a pretty lengthy and expensive affair, I would imagine), and Samsung "discovered" that they really were GC16 instead of GC20 and had simply marked them wrong from the start, then Samsung most certainly would have *remarked all of them* as GC16, for the very simple reason that it would then be entitled to *charge more money* for the faster RAM.

Third, of course, "running at GC16 specifications" doesn't mean that the RAM doing so is GC16, because any overclocked RAM could be said to be running at a "specification" different from the one it is marked with, and sold at, by the manufacturer.

Basically, the alleged nVidia employee was "dazzling your source with BS" in making his "running at GC16 specifications, never mind how it is marked" comments, and when he invites the person to disbelieve his eyes as to the way the RAM is marked from the Samsung factory, he "passes the buck" to Samsung, as it's Samsung's "fault" that it marked the RAM incorrectly from the start, and only the omniscience of nVidia allowed Samsung to correct its mistake. It would have been nice, wouldn't it, if in addition to "having" Samsung retest all of that RAM (since nVidia had divined telepathically that despite being labelled GC20 it was *really* GC16), nVidia had also "had" Samsung relabel the modules so that their markings would correspond to their actual specifications...?...;)

Heh..:D What I suspect is the case is that the poster you quoted was merely asking nVidia to help him feel good about the GC20 RAM used in his product (good enough to discount what his eyes were telling him), and nVidia was only too happy to oblige...:D
 
If NVIDIA was really omniscient, then they would have known before ordering the RAM that Samsung would label it incorrectly. Not only that, but they would know Samsung would retest it successfully and not relabel it as GC16. They would also know that some kid would ask about the RAM being a lower spec than he had ordered (the card was an Ultra) even though it ran at the right speeds, and would need an excuse to fob him off with.

Therefore this story from the NVIDIA rep is entirely true and NVIDIA are merely playing it out, but we folk who are not omniscient simply don't understand these issues - and never will.
 
I think all the controversy here comes from not understanding what “speed binning” tells you. I’m no expert on speed binning, but it was explained to me once. It’s been a while, so if anyone sees a mistake, please correct me. :)

It goes something like this: 90% of the chips are guaranteed to operate within 10% of the speed grade. The numbers are variable depending on how uniform the chips need to be; the more uniform a speed bin, the more expensive it tends to be.

There will be some chips for a given speed bin that will run faster than what the spec calls for. How many and at what speed depends on how thoroughly the chips are tested and whether they are checked for higher speed ratings.

There is no such thing as a GC16 or a GC20; they are artificial labels we have put on the chips. Some GC20s will outperform GC16s, while most GC16s will outperform GC20s when pushed to their max under testing conditions.

There is no difference between a chip that was labeled GC20 but performs at a higher speed than required and a GC16 chip that performs at that same speed.

So if Nvidia bought GC20 chips and culled out the ones that met the GC16 speed grade, there is nothing wrong with that, although it would be expensive.

The important point is that review cards should be tested with their memories running at the same speed as the memory speed in retail cards.
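
To make the binning idea concrete, here is a very rough Python sketch of the sorting logic described above, assuming GC16 and GC20 denote 1.6 ns and 2.0 ns parts respectively (as mentioned elsewhere in the thread), with nominal clocks of roughly 625 MHz and 500 MHz. The 5% margin and the thresholds are made-up illustration values, not anything from a Samsung data sheet.

```python
# Rough sketch of speed binning - NOT how Samsung actually qualifies parts.
# Assumed nominal ratings: GC16 = 1.6 ns (~625 MHz), GC20 = 2.0 ns (500 MHz).
# Real qualification covers voltage, temperature and timings, not just clock.

SPEED_BINS_MHZ = {
    "GC16": 625.0,  # 1.6 ns
    "GC20": 500.0,  # 2.0 ns
}

def assign_bin(tested_max_mhz: float, margin: float = 0.05) -> str:
    """Return the fastest bin this chip passes with the required margin."""
    for grade, min_mhz in sorted(SPEED_BINS_MHZ.items(),
                                 key=lambda kv: kv[1], reverse=True):
        if tested_max_mhz >= min_mhz * (1.0 + margin):
            return grade
    return "reject"

# A part already marked GC20 that retests to 670 MHz clears the GC16 bin
# (the scenario the NVIDIA email describes); one that only reaches 560 MHz
# stays GC20; one below ~525 MHz fails both bins.
print(assign_bin(670.0))  # GC16
print(assign_bin(560.0))  # GC20
print(assign_bin(510.0))  # reject
```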
 
Maybe NVidia ordered a small batch of 1.6ns chips when Samsung hadn't been testing for 1.6ns yet, and Samsung decided to take some of their already packed and labelled 2.0ns chips and re-test them for 1.6ns. Those that passed the test were delivered to NVidia with the guarantee that they would run as 1.6ns chips.

Or maybe something different happened. Who cares?
What really matters is what speed they run at, and what speed they're guaranteed to run at (to the customer), not what's printed on the package.
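
For reference, a quick back-of-the-envelope check of what those cycle-time grades imply, assuming the simple nominal relationship f(MHz) = 1000 / t(ns); actual data-sheet ratings include margin, so treat the exact figures as illustrative:

```python
# Nominal clock implied by a DRAM cycle-time grade: f(MHz) = 1000 / t(ns).
for grade, t_ns in [("GC20", 2.0), ("GC16", 1.6)]:
    print(f"{grade} ({t_ns} ns): {1000.0 / t_ns:.0f} MHz nominal")

# GC20 (2.0 ns): 500 MHz nominal -> below the 6800 Ultra's 550 MHz memory
#                                   clock, hence the need to re-qualify
# GC16 (1.6 ns): 625 MHz nominal -> comfortably above 550 MHz (and the
#                                   600 MHz the earlier poster runs stable)
```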
 
Why, if nVidia were lying, would they send out an email (a legal document) containing said lie?

If they truly are lying about the memory in the email, that would constitute a criminal offence for which they could potentially be prosecuted.

Do you really believe nVidia is that stupid???
 