Interesting info on early NV40 memory

I have put all the evidence available to me squarely on the table. You may rest assured that the instant I learn more, I'll post that also.
 
radar1200gs said:
I have put all the evidence available to me squarely on the table. You may rest assured that the instant I learn more, I'll post that also.

If it coincides with your views, that is. Because the probability of you providing info which contradicts what you want to prove is slim to none.
 
No, you are thinking of Dave, not me there.

For example, he never responded to me about the specs/status of SM4.0. That's only one incident out of approximately half a dozen I can recall.
 
Greg – responding to every one of your crackpot posts is a thoroughly futile exercise, as many have found out. As for the specific instance you are talking about – SM4.0 isn't an announced spec, so whether or not I know its status, I'm hardly likely to discuss it on public message boards. I have confidences with various people, and you have to know when you can tread the line and when you can't.

As for contacting NVIDIA about this – if I were to respond to every single message board post that poses a query contrary to the official information we've been given, I'd get nothing done. And, whatever your warped impression may be, our relationship is just fine.
 
radar1200gs said:
The implication at the time was that nVidia was trying to cheat and pull the wool over reviewers' eyes by overclocking the memory on reference boards, claiming the parts were one thing when in fact they were another (in other words, people were trying to say that nVidia hadn't changed and were going to continue to lie their ass off with NV40).

The only statement I was making was that, based on the facts at the time, they appeared to be overclocking slower memory parts in order to fill a gap in supplies. That's not cheating by any stretch of the imagination, since the cards are still perfectly stable at those speeds. It was just the result of an unfortunate supply/demand situation.

Dave has already addressed all this, though.
 
It's clearly a non-issue and no one cares...

Some things you don't know, and you shouldn't claim to know; that's why no one will care or listen (heck, claiming ATI/NVidia does or does not do something proves nothing for your case).
 
radar1200gs said:
Do those who claimed nVidia was cheating and overclocking the RAM still stand by their claims in the light of this information?
Interesting post. Remind me again how shipping reviewer boards at 1100MHz and then shipping retail boards (with warranties) at 1100MHz is in any way cheating?

Molehill -> mountain, IMO.
 
radar1200gs said:
Auto-precharge was introduced with DDR-1.

ATi tells us they design for lower power requirements, which is why I assumed they may not use auto-precharge (which would also partly explain why nVidia almost always gets better memory efficiency than ATi).

Do you even understand what auto-precharge is?

The thing that takes power in DRAM is effectively the activate. In the case where auto-precharge is a performance win, in a non-auto-precharge system you would be manually precharging each DRAM bank anyway, which results in a net performance loss and uses roughly the same power.

In the case where auto-precharge uses more power, it is because you didn't need to switch pages within the bank you are automatically precharging. In this case you pay an additional bandwidth and latency cost to do the unneeded PRE-RAS sequence, when without auto-precharge you would just issue a new CAS.

Now, auto-precharge does have an advantage for memory streams with low spatial locality (most commonly seen in high-end server workloads, e.g. databases or anything resembling linked-list traversal), but these memory access patterns also cause the DRAM to operate at a fraction of its rated bandwidth due to all the wasted cycles in PRE-RAS-CAS.

For something like graphics, a lot of time is spent optimizing the pixelation algorithms and the structure/control of the memory controller to enable sequential reads and writes to memory, which results in the highest possible memory bandwidth. In these cases, auto-precharge provides no performance benefit and merely sucks power.
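
To put rough numbers on that trade-off, here is a toy cycle-count sketch. The timing constants, the access traces, the function names, and the assumption that the auto-precharge operation itself is completely hidden behind later traffic are all mine and purely illustrative; nothing here is taken from a real DRAM part or a real memory controller.

# Toy model of open-page vs. auto-precharge row policies.
# tRP = precharge, tRCD = activate-to-CAS delay, CL = CAS latency, in cycles
# (made-up numbers, chosen only to show the shape of the trade-off).
tRP, tRCD, CL = 3, 3, 3

def open_page_cycles(accesses):
    """Keep rows open; precharge only when a bank needs a different row."""
    open_rows, total = {}, 0
    for bank, row in accesses:
        if open_rows.get(bank) == row:
            total += CL                                           # page hit: just a CAS
        else:
            total += (tRP if bank in open_rows else 0) + tRCD + CL  # PRE + ACT + CAS
            open_rows[bank] = row
    return total

def auto_precharge_cycles(accesses):
    """Every access closes its row afterwards; the precharge is assumed hidden,
    but every access must pay a fresh ACT + CAS."""
    return sum(tRCD + CL for _ in accesses)

# A streaming, graphics-like trace (long runs within one row) and a
# low-locality, pointer-chasing-like trace.
streaming = [(0, addr // 256) for addr in range(0, 4096, 4)]
scattered = [(a % 4, (a * 37) % 64) for a in range(1024)]

for name, trace in (("streaming", streaming), ("low locality", scattered)):
    print(f"{name:12s} open-page: {open_page_cycles(trace):5d}  "
          f"auto-precharge: {auto_precharge_cycles(trace):5d}")

On the streaming trace the open-page policy wins outright, which is the graphics case above; auto-precharge only pulls ahead on the scattered trace, and even there both policies burn a large share of cycles on PRE-RAS-CAS overhead.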

Aaron Spink
speaking for myself inc.
 
Or what about OVERCLOCKED memory compared to the CARD SPECIFICATION? Ask me about my Ti500 reference board sometime (240 core / 260 memory when the card was specced at 240/250). Err, different topic :)
 
aaronspink said:
radar1200gs said:
Auto-precharge was introduced with DDR-1.

ATi tells us they design for lower power requirements, which is why I assumed they may not use auto-precharge (which would also partly explain why nVidia almost always gets better memory efficiency than ATi).

Do you even understand what auto-precharge is?

The thing that takes power in DRAM is effectively the activate. In the case where auto-precharge is a performance win, in a non-auto-precharge system you would be manually precharging each DRAM bank anyway, which results in a net performance loss and uses roughly the same power.

In the case where auto-precharge uses more power, it is because you didn't need to switch pages within the bank you are automatically precharging. In this case you pay an additional bandwidth and latency cost to do the unneeded PRE-RAS sequence, when without auto-precharge you would just issue a new CAS.

Now, auto-precharge does have an advantage for memory streams with low spatial locality (most commonly seen in high-end server workloads, e.g. databases or anything resembling linked-list traversal), but these memory access patterns also cause the DRAM to operate at a fraction of its rated bandwidth due to all the wasted cycles in PRE-RAS-CAS.

For something like graphics, a lot of time is spent optimizing the pixelation algorithms and the structure/control of the memory controller to enable sequential reads and writes to memory, which results in the highest possible memory bandwidth. In these cases, auto-precharge provides no performance benefit and merely sucks power.

Aaron Spink
speaking for myself inc.

Well Aaron, nVidia obviously believes Auto-Precharge provides a performance benefit to graphics since they introduced it on the GF3. That was probably the first consumer-space application of it, followed by its appearance on nForce1 boards.

As you explained above, not using Auto-Precharge will result in a performance loss. I'm not sure about using the same amount of power, though. I think nVidia's and ATi's memory controllers go about things quite differently: ATi's seems to focus a lot on power conservation, whereas nVidia says to hell with power/heat concerns; we want raw performance.
 
ben6 said:
Or what about OVERCLOCKED memory compared to the CARD SPECIFICATION? Ask me about my Ti500 reference board sometime (240 core / 260 memory when the card was specced at 240/250). Err, different topic :)
Hey Ben, what about your ti500 reference board? :|
 
DaveBaumann said:
Greg – responding to every one of your crackpot posts is a thoroughly futile exercise, as many have found out. As for the specific instance you are talking about . . .

And then what did you do? Hey, somebody go over to Radar's and get Dave's goat back.

J

Hmm. No, *not* "universe without a j". (for all you Heinleiners).
 
You know, I have never had an nVidia card that OCed poorly on memory. Even my personal 5700NU OCed at least 75 MHz.


And my 6800NU and 5900LX both OCed from 700 to 900.
 
nVidia obviously believes Auto-Precharge provides a performance benefit to graphics since they introduced it on the GF3

WRONG

http://www.nvidia.com/page/pg_20020201750469.html

Lightspeed Memory Architecture II
LMA II boosts effective memory bandwidth by up to 300%. Radical new technologies―including Z-occlusion culling, fast Z-clear, and auto pre-charge―effectively multiply the memory bandwidth to ensure fluid frame rates for the latest 3D and 2D games and applications.

Auto pre-charge was a feature of the GF4. How about doing some RESEARCH into what you are saying, radar?
 