New Nvidia patents

You're going into competition with Demirug now for resident Patent Diving champ? :LOL:

Thanks. . .

Edit: Why is it that I can never see images at that site? All I get is a flash of QuickTime and then a blank page.
 
Well, PCI-Express does allow extension cables and hot-swapping. For some applications an external "graphics card" isn't a bad idea at all.
 
geo said:
You're going into competition with Demirug now for resident Patent Diving champ? :LOL:

Nah, you'll notice all I provide are links - no insightful extrapolation from me :)

To get the images you need a TIFF viewer, click on the images link, then click on help and it'll guide you to the download for the viewer. You'll get a very nice PDF format that you can print out and read at work looking real busy ;)

After reading through the memory controller patent it does seem like it can rival R580's memory controller with lower transistor count. At least it gives the same 32-bit granularity and seems to optimize parallel transfers and compression somewhat as well.

Hopefully one (or more) of the more qualified members can take a gander and let us know what they think.
 
trinibwoy said:
Nah, you'll notice all I provide are links - no insightful extrapolation from me :)

To get the images you need a TIFF viewer, click on the images link, then click on help and it'll guide you to the download for the viewer. You'll get a very nice PDF format that you can print out and read at work looking real busy ;)

Damn, I should have asked this a year ago! :LOL: Thanks, that worked.
 
geo said:
Damn, I should have asked this a year ago! :LOL: Thanks, that worked.

Yeah, used to piss me off till one day I decided to click "Help" and can you believe it, it actually helped! :D

Come on Demirug, Xmas, Jaws, Jawed, neliz and co.....no comments? What do y'all think of the memory controller patent? Is it something worth talking about?
 
I saw this yesterday, too. Primarily it describes a memory controller that can do some “dirty” tricks when two memory chips are attached. The result of these tricks is a virtual double bus width in some special cases like “Fast Clear”.
 
Xmas said:
Well, PCI-Express does allow extension cables and hot-swapping. For some applications an external "graphics card" isn't a bad idea at all.
I think I discussed the possibility of this with NVIDIA over a year ago. If you have a notebook with a docking station then you could have an external PCIe connector to that docking station - remove the notebook from the dock and use integrated graphics, put it on the dock and use higher performance graphics in the docking station. Alternatively, the same could be done with an SLI system, cutting down the number of lanes to the docking station (8 lanes, plus SLI connection).
 
Xmas said:
Well, PCI-Express does allow extension cables and hot-swapping. For some applications an external "graphics card" isn't a bad idea at all.

External gfx card AND with its own PSU :p

Good find, trini!!!
 
Demirug said:
I saw this yesterday, too. Primarily it describes a memory controller that can do some “dirty” tricks when two memory chips are attached. The result of these tricks is a virtual double bus width in some special cases like “Fast Clear”.

Summed up, it reads to me like a way of saving bandwidth.

patent said:
pairing a first set of subpackets having a first set of tile locations with said first subpartition and pairing a second set of subpackets having a second set of tile locations with said second subpartition, wherein tile data may be accessed with a memory transfer data size less than that associated with a partition
 
Demirug said:
I saw this yesterday, too. Primarily it describes a memory controller that can do some “dirty” tricks when two memory chips are attached. The result of these tricks is a virtual double bus width in some special cases like “Fast Clear”.

Seems to be a bit more than that though. Fast Z-Clear acceleration is just one application - simultaneously writing the two flags for different tiles if the flags exist in different subpartitions.

patent said:
The architecture of the present invention provides the memory controller with the capability of tightly interleaving data to memory subpartitions so as to create a unified memory arrangement with improved efficiency in data accesses. That is, even as the bus width for the memory data bus expands, by providing interleaved access at a finer granular level, it is possible to assure that there is a more full or complete use of the overall bus width, that is, that bus width is not wasted when smaller atoms of data need to be accessed, as for instance in connection with tile data processing.

What confuses me is the fact that DRAM reads are in bursts anyway. How would interleaving logically adjacent data across DRAMs help improve read performance?
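
To make the subpartition pairing a bit more concrete, here's a rough C sketch of the kind of tile-to-subpartition mapping the patent seems to be getting at. The two-way split, the subpacket idea and every name in it are my own assumptions for illustration, not anything lifted from the patent text:

[code]
/* Minimal sketch (my assumptions, not the actual NVIDIA design): tiles are
 * interleaved across two subpartitions so that two small "subpacket"
 * accesses hitting different subpartitions can be serviced together,
 * instead of each one burning a full partition-wide transfer. */
#include <stdio.h>

#define SUBPARTITIONS   2    /* hypothetical: one per DRAM chip */

/* Map a tile index to a subpartition: even tiles -> 0, odd tiles -> 1. */
static int tile_to_subpartition(unsigned tile)
{
    return tile % SUBPARTITIONS;
}

/* Two small tile accesses (e.g. fast-clear flag writes) can share one
 * memory transaction only if they land in different subpartitions. */
static int can_pair(unsigned tile_a, unsigned tile_b)
{
    return tile_to_subpartition(tile_a) != tile_to_subpartition(tile_b);
}

int main(void)
{
    printf("tiles 4 and 5 pair: %d\n", can_pair(4, 5)); /* 1: different subpartitions */
    printf("tiles 4 and 6 pair: %d\n", can_pair(4, 6)); /* 0: same subpartition */
    return 0;
}
[/code]

The point is just that a small access on its own would waste half of the (doubled) bus width; pairing two accesses that map to different chips keeps the whole width busy.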
 
Probably not the best answer:
http://www.pcguide.com/ref/ram/timingInterleaving-c.html

It's been around for a while...

More links:
http://www.georgebreese.com/net/software/readmes/venabler_v015_readme.htm
http://www.mycableshop.com/techarticles/interleaving.htm
http://www.overclockers.com/tips105/ - an article on the performance difference
http://devforums.amd.com/index.php?showtopic=378

Google is your friend.

The idea is to improve memory efficiency... I'm pretty sure someone can find better articles...
 
Deathlike2 said:
Lotsa links

Thanks, I actually did Google it a bit and came across several articles on interleaving and why it improves performance but the patent already explained that pretty well. What I'm asking is how interleaving and burst modes go together. Maybe I don't have a good picture of how the data is spread around.

Anyway, it does seem like this has been out there for a while - can we assume that it's already implemented in older designs?
 
http://xtronics.com/memory/how_memory-works.htm

Interleaving is a scheme where the system spreads its memory accesses across two or more banks of memory. While the first bank is in the second half of its read, the second bank is already in the first half of its read cycle. Thus, we have odd and even banks of memory. This cuts the access time in half IF the memory accesses are sequential, as in a burst mode request. If you are just going for a single word you still have to wait for the full access time or more (more due to setup time and the fact that you may have to wait for the right bank of memory to be available). Double data out SDRAM does interleaving on chip.

The whole idea of interleaving is being able to access data more often. When you access memory, you want to read as much data in as little time as possible. Normal memory reading requires that a request is made before the data is sent, which is very inefficient. When interleaving memory, you can send data while the next bank is processing its read request, so the delay before the next piece of data arrives is reduced. This data is generally sent in bursts (because you are accessing multiple chunks of memory). The more bursts you send, the more data. Interleaving allows the bursts to occur much quicker, which in the end allows the memory (and I guess the bus width) to get closer to its theoretical bandwidth.
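
If it helps, here's a toy C timing model of the two-bank case. The cycle counts are made-up numbers just to show the overlap, not anything from a real DRAM datasheet:

[code]
/* Toy timing model (made-up latencies) showing why two interleaved banks
 * help sequential/burst reads: while bank 0 is transferring data,
 * bank 1 is already working through its access latency. */
#include <stdio.h>

#define ACCESS_CYCLES   4   /* hypothetical row access latency */
#define TRANSFER_CYCLES 4   /* hypothetical data burst length  */

static int cycles_single_bank(int reads)
{
    /* every read pays the full latency and then transfers */
    return reads * (ACCESS_CYCLES + TRANSFER_CYCLES);
}

static int cycles_two_banks(int reads)
{
    /* after the first access, each bank's latency hides behind the
     * other bank's transfer (assumes perfectly alternating reads and
     * ACCESS_CYCLES <= TRANSFER_CYCLES) */
    if (reads == 0)
        return 0;
    return ACCESS_CYCLES + reads * TRANSFER_CYCLES;
}

int main(void)
{
    int reads = 8;
    printf("single bank: %d cycles\n", cycles_single_bank(reads)); /* 64 */
    printf("two banks  : %d cycles\n", cycles_two_banks(reads));   /* 36 */
    return 0;
}
[/code]

For a single random word the two-bank version still pays the full latency, which is the "IF the accesses are sequential" caveat above.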

trinibwoy said:
Anyway, it does seem like this has been out there for a while - can we assume that it's already implemented in older designs?

It doesn't sound like it was ever really implemented on video cards... I remember the GeForce4 specifically marketed memory precharging or something like that. This would somehow boost memory efficiency. This technology has mostly been found in the memory controllers on mobos...
 
Xmas said:
Well, PCI-Express does allow extension cables and hot-swapping.

Plain PCI doesn't officially allow it either, but there still is hot-swap PCI, right?

All it would require is a slot that can be powered off (as soon as the bracket is lifted, for instance) and an operating system that supports it (Win XP is already capable of this).
 
Deathlike2 said:

Thanks, I get the interleaving bit, but I think I found the answer to my question. Just like with PCs, burst mode on the GPU acts like a cache warmer for data adjacent to the requested location. I guess even with interleaving this prefetch would still be very useful for subsequent reads. Can't believe there's such a shortage of thorough yet accessible explanations of DRAM technology on the web (according to Google) :???:
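
For what it's worth, here's a tiny C sketch of that cache-warmer effect. The burst length and the bookkeeping are my own assumptions, purely for illustration:

[code]
/* Rough sketch of burst reads acting as a prefetch: one burst returns
 * BURST_WORDS adjacent words, so nearby follow-up reads find the data
 * already on hand instead of paying for another DRAM access. */
#include <stdio.h>

#define BURST_WORDS 8   /* hypothetical burst length in 32-bit words */

static unsigned last_burst_base = ~0u;  /* base word of the previous burst */

/* Returns 1 if the read needs a new DRAM burst, 0 if the previous
 * burst already covers this word. */
static int read_word(unsigned word_addr)
{
    unsigned base = word_addr - (word_addr % BURST_WORDS);
    if (base == last_burst_base)
        return 0;               /* adjacent data was effectively prefetched */
    last_burst_base = base;
    return 1;                   /* new burst transaction */
}

int main(void)
{
    unsigned addr;
    int bursts = 0;
    for (addr = 0; addr < 32; addr++)   /* 32 sequential word reads */
        bursts += read_word(addr);
    printf("32 sequential reads -> %d DRAM bursts\n", bursts); /* 4 */
    return 0;
}
[/code]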
 
neliz said:
Plain PCI doesn't officially allow it either, but there still is hot-swap PCI, right?

All it would require is a slot that can be powered off (as soon as the bracket is lifted, for instance) and an operating system that supports it (Win XP is already capable of this).

Yes, Compaq/HP servers have had this for years, with the purpose of high availability.
On some systems you could even replace memory risers on the fly, if I remember correctly.
 