ATI Develops HyperMemory Technology to Reduce PC Costs

This really doesn't sound any different than what AGP offered. Is it really any surprise that once we move to a new graphics interface standard that graphics companies would seek to keep "AGP texturing" going?

Additionally, nVidia has had "virtual AGP texturing" available since the AGP1x days, where they could store textures in system memory on a PCI card. So this really is nothing new in a number of different ways.
 
It's a difference of granularity. Swapping entire textures over AGP has been done for ages, but paging in small blocks (4x4 texels or so) as they're needed is fairly new in the PC space (3Dlabs did it first, and ATI/ArtX did it for the GameCube graphics chip).
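For a rough sense of why the granularity matters (my own back-of-the-envelope numbers, nothing from ATI or 3Dlabs), compare moving a whole texture over the bus against moving only the 4x4 blocks a frame actually samples:
Code:
#include <stdio.h>

int main(void)
{
    /* Hypothetical texture: 1024x1024, 4 bytes per texel (RGBA8). */
    const long bytes_per_texel = 4;
    const long whole_texture   = 1024L * 1024L * bytes_per_texel;   /* 4 MB    */
    const long block           = 4L * 4L * bytes_per_texel;         /* 64 B    */

    /* Assume only 10% of the 4x4 blocks are visible this frame.    */
    const long blocks_total    = (1024L / 4) * (1024L / 4);
    const long paged_in        = (blocks_total / 10) * block;

    printf("whole texture: %ld bytes\n", whole_texture);            /* 4 MB    */
    printf("visible blocks only: %ld bytes\n", paged_in);           /* ~0.4 MB */
    return 0;
}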
 
PowerVR also did it with their Kyro series of products.

But regardless, at least from that press release, I don't see any evidence that ATI is doing this.
 
Chalnoth said:
Additionally, nVidia has had "virtual AGP texturing" available since the AGP1x days, where they could store textures in system memory on a PCI card. So this really is nothing new in a number of different ways.

Actually, nVIDIA supported system memory textures before AGP cards existed. The RIVA128 already had the ability to texture out of system RAM.
 
What they need now is instantaneous lossless suuuuper nifty hardware compression. Something that can compress, like, "everything" sent to RAM on a GPU. That's the real way to combat the rising costs of high-end hardware.

Let's say 4:1 lossless compression of everything that passes into GPU RAM.
 
Actually, it probably would be a good idea to start having compression systems for most buses. The only problem is, of course, that if you also want to keep latency low, the amount of compression possible is going to be quite limited.
 
Back on Usenet in the early/mid 90's there was a grad student at MIT who claimed to have found an algorithm that could provide lossless compression of any random data. :LOL:

Hellbinder, any data string has a value called "entropy" beyond which no compression is possible, so your idea isn't feasible. :)

Think about it this way: is there an invertible function F: A -> B, where A and B are finite sets and B has fewer elements than A?
 
A recent issue of Scientific American stated that of the clearly BS patents that arrive at the patent office, 2/3 involve Einstein (either going beyond him or proving him wrong) and the other 1/3 are perpetual motion machines. That's a great example of the latter :D
 
Well, I think Hellbinder was obviously joking, but, of course, the reality is that if it were possible to design a lossless compression algorithm that would give the same ratio of compression (no matter how small the ratio) to any data, one could simply reapply the algorithm multiple times until the data was reduced to one bit in size. This is clearly nonsense.
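As a quick sanity check of that (just a toy program of my own, nothing to do with GPUs), feeding pseudo-random bytes to an off-the-shelf lossless compressor like zlib typically gives you output that is no smaller, and usually a little larger, than the input:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>            /* link with -lz */

int main(void)
{
    const uLong n = 1 << 20;                 /* 1 MB of pseudo-random input       */
    unsigned char *src = malloc(n);
    uLongf out_len = compressBound(n);       /* note: worst case is LARGER than n */
    unsigned char *dst = malloc(out_len);

    for (uLong i = 0; i < n; i++)
        src[i] = (unsigned char)(rand() & 0xff);

    if (compress(dst, &out_len, src, n) == Z_OK)
        printf("in: %lu bytes, out: %lu bytes\n",
               (unsigned long)n, (unsigned long)out_len);

    free(src);
    free(dst);
    return 0;
}
Even zlib's own compressBound() has to assume the output can end up bigger than the input, which is the same pigeonhole problem expressed as an API.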
 
If it's anything like the 3Dlabs VM it should be a more sophisticated way of going about it.
http://www.anandtech.com/video/showdoc.aspx?i=1614&p=8
AGP texturing (in my unschooled impression) is trying to do a "brute force" thing on a rather weak bus.

This is what Sweeney had to say on the 3Dlabs VM:
This is something Carmack and I have been pushing 3D card makers to implement for a very long time. Basically it enables us to use far more textures than we currently can. You won't see immediate improvements with current games, because games always avoid using more textures than fit in video memory, otherwise you get into texture swapping and performance becomes totally unacceptable. Virtual texturing makes swapping performance acceptable, because only the blocks of texels that are actually rendered are transferred to video memory, on demand.

Then video memory starts to look like a cache, and you can get away with less of it - typically you only need enough to hold the frame buffer, back buffer, and the blocks of texels that are rendered in the current scene, as opposed to all the textures in memory. So this should let IHVs include less video RAM without losing performance, and therefore offer faster RAM at less cost.

This does for rendering what virtual memory did for operating systems: it eliminates the hardcoded limitation on RAM (from the application's point of view.)
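Purely to illustrate that "video memory starts to look like a cache" idea (hypothetical names and a deliberately dumb direct-mapped scheme of my own, not how 3Dlabs, ATI or anyone else actually does it), a driver-side sketch might look like:
Code:
#include <string.h>

#define BLOCK_BYTES 64            /* one 4x4 RGBA8 block of texels        */
#define POOL_BLOCKS 4096          /* resident block slots in video memory */

static unsigned char pool[POOL_BLOCKS][BLOCK_BYTES];  /* "video memory" pool            */
static int pool_tag[POOL_BLOCKS];                     /* which block lives in each slot */

void init_pool(void)
{
    for (int i = 0; i < POOL_BLOCKS; i++)
        pool_tag[i] = -1;                             /* nothing resident yet */
}

/* Return the requested block; on a miss, only that one block crosses the bus. */
const unsigned char *lookup_block(int block_id, const unsigned char *sysmem)
{
    int slot = block_id % POOL_BLOCKS;                /* trivial direct-mapped placement */
    if (pool_tag[slot] != block_id) {
        memcpy(pool[slot], sysmem + (long)block_id * BLOCK_BYTES, BLOCK_BYTES);
        pool_tag[slot] = block_id;
    }
    return pool[slot];
}
A real implementation would translate texture coordinates into block IDs, use a smarter replacement policy than direct mapping, and move the data with DMA rather than memcpy, but the point is the same: only the texel blocks that actually get rendered ever need to occupy local memory.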

I assumed that HyperMemory was supposed to work a bit like the 3Dlabs VM, not just "AGP texturing" for PCI-E.
Same with nVidia's equivalent, assuming there will be one.
Anyone know exactly what it's supposed to be?
 
I'm not sure I really believe him, though, since there's still a very large latency penalty on storing texture data in system memory. Transferring only small parts of the texture may help with that penalty, but it doesn't really solve the problem.
 
Hellbinder said:
What they need now is instantaneous lossless suuuuper nifty hardware compression. Something that can compress, like, "everything" sent to RAM on a GPU. That's the real way to combat the rising costs of high-end hardware.

Let's say 4:1 lossless compression of everything that passes into GPU RAM.

I don't think people understand the true implications this has from a driver point of view. With compression you get varying results, so you have to allocate enough space on the card for the worst compression result. Now you can utilize that space and not worry if one frame out of fifty overflows the card's memory. I think this would be a huge plus for people who like high resolution with all the eye candy turned on.
 
I read this as almost two separate announcements: 1) a faster data path allows low-end cards to rely more on system memory, and 2) here comes a virtual memory system for high-end (and mid-range) cards.

But does VMM require cache memory (accessed differently, by contents rather than by address) to be added on board your card to control the cache pool properly, rather than normal super-fast memory?
 
3DFX develops MegaHyperMemory technology to reduce PC costs
MegaHyperMemory uses PCI Express to enable maximum graphics processing performance while lowering overall PC cost
MARKHAM, ON/ Munich, Germany - September 17, 2004 - 3DFX Technologies (TSX:ATY, NASDAQ:ATYT) today announced MegaHyperMemory, an innovative technology that reduces PC system costs by allowing its visual processors to use system memory for graphics processing. MegaHyperMemory uses the high-speed bi-directional data transfer capabilities of PCI Express to store and access graphics data on the system disk, leading to less of a dependence on expensive graphics memory and system memory, and ultimately a lower overall system cost.

Under previous interconnect standards, the data transfer between the visual processor and the CPU was not fast enough for real-time graphics applications, so graphics cards have shipped with up to 256 MB of dedicated graphics memory to store textures and rendering data required by the graphics processor. MegaHyperMemory gives 3DFX and its board partners the option to deliver cards with less on-board memory and instead use system disk space to handle the graphics storage requirements, expanding them to several gigabytes. The result is a lower overall PC cost for the same great graphics performance.

MegaHyperMemory uses intelligent disk allocation algorithms to optimize the use of available disk space and ensure critical components are placed in fast local memory when required. Optimal assignment of data to local or disk storage is determined dynamically to ensure the best user experience. MegaHyperMemory also increases the performance of system bus data transfers, making accessing the system disk faster than ever before.

Graphics cards featuring MegaHyperMemory technology will be announced later this year. For more information about 3DFX's current products and technologies, please visit www.3DFX.com.
 
akira888 said:
Back on Usenet in the early/mid 90's there was a grad student at MIT who claimed to have found an algorithm that could provide lossless compression of any random data. :LOL:
What? Provided that "random" data had been generated by
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    while (1)
        printf("%d\n", rand());   /* rand() with the default seed is entirely predictable */
}
:p

I.e., it isn't random.
 
If you have a predetermined block size and can have free delimiting (such as a file, so the length of the compressed block doesn't get counted into the size), you can compress random data on average to size - 0.5 bits, or something close to that. Compression and decompression would be expensive.

Now you can't keep recompressing, because your compressed size wouldn't equal the block size, and if you were to string blocks together you would need to delimit the different pieces, etc., and it dies.
 
bloodbob said:
If you have a predetermined block size and can have free delimiting (such as a file, so the length of the compressed block doesn't get counted into the size), you can compress random data on average to size - 0.5 bits, or something close to that.
How, pray tell?

For random data, the probability of bit 'N' being 1 is 1/2 and does not depend on the previous (0..N-1) or future (N+1...) bits. It will take one bit to represent it. Similarly, any sequence of K bits in that stream has the same probability of occurring as any other K bits, so you still have the pigeonhole problem: representing a particular block of K bits with fewer than K bits will cause expansion of some other block.
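The counting behind that pigeonhole argument is easy to verify (again just a toy illustration): there are 2^K strings of exactly K bits, but only 2^K - 1 strings of fewer than K bits, so no lossless scheme can map every K-bit block to something shorter:
Code:
#include <stdio.h>

int main(void)
{
    for (int k = 1; k <= 16; k++) {
        unsigned long inputs  = 1UL << k;        /* strings of exactly k bits         */
        unsigned long shorter = (1UL << k) - 1;  /* 2^0 + 2^1 + ... + 2^(k-1) strings */
        printf("k=%2d: %6lu inputs, only %6lu possible shorter outputs\n",
               k, inputs, shorter);
    }
    return 0;
}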
 
If anyone says he knows how to do compression of random data, just step away very gently... nothing good has ever come from talking to them. God knows I've done it enough times to be sure about that.
 
Well, sure you can do compression of random data. That is, if you know the random number generator used to create it and the seed....
 