ATI Develops HyperMemory Technology to Reduce PC Costs

Discussion in 'Architecture and Products' started by Sabastian, Sep 17, 2004.

  1. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    This really doesn't sound any different than what AGP offered. Is it really any surprise that once we move to a new graphics interface standard that graphics companies would seek to keep "AGP texturing" going?

    Additionally, nVidia has had "virtual AGP texturing" available since the AGP1x days, where they could store textures in system memory on a PCI card. So this really is nothing new in a number of different ways.
     
  2. GameCat

    Newcomer

    Joined:
    Aug 18, 2003
    Messages:
    185
    Likes Received:
    0
    Location:
    Stockholm, Sweden
It's a difference of granularity. Swapping entire textures over AGP has been done for ages, but paging in small blocks (4x4 texels or so) as they're needed is fairly new in the PC space (3DLabs did it first, and ATI/ArtX did it for the GameCube graphics chip).
     
  3. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    PowerVR also did it with their Kyro series of products.

    But regardless, at least from that press release, I don't see any evidence that ATI is doing this.
     
  4. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
Actually nVIDIA supported system memory textures before AGP cards existed. The RIVA 128 had the ability to texture out of system RAM.
     
  5. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
What they need now is instantaneous lossless suuuuper nifty hardware compression. Something that can compress like "everything" sent to RAM on a GPU. That's the real way to combat the rising costs of high-end hardware.

    Let's say 4:1 lossless compression of everything that passes into GPU RAM.
     
  6. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Actually, it probably would be a good idea to start having compression systems for most busses. The only problem is, of course, if you also want to keep latency low, the amount of compression possible is going to be quite limited.
     
  7. Fafalada

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,773
    Likes Received:
    49
    Powered by Infinite improbability engine(tm)? :p
     
  8. akira888

    Regular

    Joined:
    Jul 15, 2003
    Messages:
    652
    Likes Received:
    11
    Location:
    Houston
    Back on Usenet in the early/mid 90's there was a grad student at MIT who claimed to have found an algorithm that could provide lossless compression of any random data. :lol:

    Hellbinder, any data string has a value called "entropy" beyond which no compression is possible, so your idea isn't feasible. :)

Think about it this way: can there be an invertible function F: A -> B where A and B are finite sets and B has fewer elements than A?
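That counting (pigeonhole) argument can be sketched in a couple of lines of Python, purely as an illustration:

```python
# There are 2**k bitstrings of length k, but only 2**k - 1 bitstrings of
# any length strictly shorter than k. An invertible (lossless) mapping of
# every k-bit input to a shorter output is therefore impossible: at least
# two inputs would have to share an output.
k = 16
inputs = 2 ** k
shorter_outputs = sum(2 ** n for n in range(k))  # lengths 0 .. k-1
print(inputs, shorter_outputs)  # 65536 65535
```

One fewer output than inputs, no matter how large k gets, so a universal shrinking compressor cannot be invertible.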
     
  9. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    A recent issue of Scientific American stated that of the clearly BS patents that arrive at the patent office, 2/3 are Einstein (either going beyond him or proving him wrong) and the other 1/3 are perpetual motion machines. That's a great example of the latter :D
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, I think Hellbinder was obviously joking, but, of course, the reality is that if it were possible to design a lossless compression algorithm that would give the same ratio of compression (no matter how small the ratio) to any data, one could simply reapply the algorithm multiple times until the data was reduced to one bit in size. This is clearly nonsense.
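That reductio can be written down in a throwaway Python sketch (the function name is made up for illustration):

```python
# If a lossless compressor guaranteed a saving of at least one bit on
# every input, applying it repeatedly would shrink any n-bit message to
# a single bit -- leaving only two possible outputs for 2**n distinct
# inputs, which cannot be inverted.
def passes_to_one_bit(n_bits: int) -> int:
    steps = 0
    while n_bits > 1:
        n_bits -= 1  # the claimed guaranteed saving per pass
        steps += 1
    return steps

print(passes_to_one_bit(8 * 1024))  # 8191 passes crush 1 KiB to one bit
```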
     
  11. jolle

    Newcomer

    Joined:
    Apr 18, 2004
    Messages:
    145
    Likes Received:
    0
If it's anything like the 3DLabs VM it should be a more sophisticated way of going about it:
    http://www.anandtech.com/video/showdoc.aspx?i=1614&p=8
    AGP texturing (in my unschooled impression) is a "brute force" approach over a rather weak bus.

    This is what Sweeney had to say on 3DLabs VM:
    I assumed that HyperMemory was supposed to work a bit like the 3DLabs VM, not just "AGP texturing" for PCI-E,
    same with nVidia's equivalent, which I assume will exist.
    Does anyone know exactly what it's supposed to be?
     
  12. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I'm not sure I really believe him, though, since there's still a very large latency penalty on storing texture data in system memory. Only transferring small parts of the texture may help that penalty, but it doesn't really solve the problem.
     
  13. rwolf

    rwolf Rock Star
    Regular

    Joined:
    Oct 25, 2002
    Messages:
    968
    Likes Received:
    54
    Location:
    Canada
    I don't think people understand the true implications this has from a driver point of view. With compression you get varying results so you have to allocate enough space on the card for the worst compression result. Now you can utilize that space and not worry if one frame out of fifty overflows the cards memory. I think this would be a huge plus for people who like high resolution with all the eye candy turned on.
     
  14. g__day

    Regular

    Joined:
    Jun 22, 2002
    Messages:
    580
    Likes Received:
    2
    Location:
    Sydney Australia
I read this as almost two separate announcements: 1) a faster data path allows low-end cards to rely more on system memory, and 2) here comes a virtual memory system for high-end (and mid-range) cards.

    But does VMM require cache memory (accessed differently, by contents rather than address) instead of normal super-fast memory to be added on board your card to control the cache pool properly?
     
  15. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
    3DFX develops MegaHyperMemory technology to reduce PC costs
    MegaHyperMemory uses PCI Express to enable maximum graphics processing performance while lowering overall PC cost
MARKHAM, ON/ Munich, Germany - September 17, 2004 - 3DFX Technologies (TSX:ATY, NASDAQ:ATYT) today announced MegaHyperMemory, an innovative technology that reduces PC system costs by allowing its visual processors to use system memory for graphics processing. MegaHyperMemory uses the high-speed bi-directional data transfer capabilities of PCI Express to store and access graphics data on the system disk, leading to less of a dependence on expensive graphics memory and ultimately a lower overall system cost.

Under previous interconnect standards, the data transfer between the visual processor and the CPU was not fast enough for real-time graphics applications, so graphics cards have shipped with up to 256 MB of dedicated graphics memory to store textures and rendering data required by the graphics processor. MegaHyperMemory gives 3DFX and its board partners the option to deliver cards with less on-board memory and instead use system disk space to handle the graphics storage requirements, expanding them to several gigabytes. The result is a lower overall PC cost for the same great graphics performance.

MegaHyperMemory uses intelligent disk allocation algorithms to optimize the use of available disk space and ensure critical components are placed in fast local memory when required. Optimal assignment of data to local or disk storage is determined dynamically to ensure the best user experience. MegaHyperMemory also increases the performance of system bus data transfers, making accessing the system disk faster than ever before.

    Graphics cards featuring MegaHyperMemory technology will be announced later this year. For more information about 3DFX's current products and technologies, please visit www.3DFX.com.
     
  16. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
What? Provided that "random" data had been generated by
    Code:
    while(1)
    {
        printf("%d\n", rand());
    }
    
    :p

I.e., it isn't random.
     
  17. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
If you have a predetermined block size and get delimiting for free (such as a file, where the length of the compressed block doesn't get counted into the size) you can compress random data on average to size - 0.5 bits, or something close to that. Compression and decompression would be expensive.

    But you can't keep recompressing, because the compressed size wouldn't equal the block size, and if you were to string blocks together you would need to delimit the different pieces, etc., and the scheme dies.
     
  18. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    How, pray tell?

For random data, the probability of bit 'N' being 1 is 1/2 and does not depend on the previous (0..N-1) or future (N+1...) bits. It will take one bit to represent it. Similarly, any sequence of K bits in that stream has equal probability of occurring as any other K bits, so you still have the pigeonhole problem: representing a particular block of K bits with <K bits will cause expansion of some other set.
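This is easy to observe with any stock compressor; a quick Python check, with zlib standing in for a generic lossless codec:

```python
import os
import zlib

# Random bytes carry 8 bits/byte of entropy, so a general-purpose
# lossless codec finds no structure to exploit: the output comes back
# at least as large as the input (plus container overhead).
random_data = os.urandom(64 * 1024)
packed = zlib.compress(random_data, 9)
print(len(random_data), len(packed))  # packed is the larger of the two

# The same call easily shrinks highly structured data:
structured = b"AB" * (32 * 1024)
print(len(zlib.compress(structured, 9)))  # well under a kilobyte
```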
     
  19. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
If anyone says he knows how to do compression of random data, just step away very gently... nothing good has ever come from talking to them. God knows I've done it enough times to be sure about that.
     
  20. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, sure you can do compression of random data. That is, if you know the random number generator used to create it and the seed....
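Taken literally, that works: if the "random" stream actually came from a known PRNG, the whole thing collapses to its seed and length. A Python sketch (`prng_bytes` is a hypothetical name for illustration, not any real codec):

```python
import random

# "Compressing" PRNG output: the stream is fully determined by (seed, n),
# so those few bytes of state are a perfect lossless description of it.
def prng_bytes(seed: int, n: int) -> bytes:
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

seed, n = 1234, 100_000
stream = prng_bytes(seed, n)

# The 100,000-byte stream is exactly recoverable from (seed, n):
assert prng_bytes(seed, n) == stream
```

Of course this only looks like compression because the data was never random to begin with, which is exactly the joke.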
     