New Memory Interface for R5x0...

Discussion in 'Architecture and Products' started by Dave Baumann, Oct 29, 2004.

  1. Inane_Dork

    Inane_Dork Rebmem Roines
    Veteran

    Joined:
    Sep 14, 2004
    Messages:
    1,987
    Likes Received:
    46
    AFAIK:

    It lets memory addressed by the GPU be physically located on the card or in system RAM (or in a swap file, I guess). Basically, it makes memory acquisition and addressing work like they do for the CPU.

    The advantage is not being so confined to video RAM. It allows for gigabytes of data to be in the active set for the video card, though it does not promise exceptional performance for that (again, just like the CPU). I'm guessing the driver would manage what gets into video RAM and how long it stays there.
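    As a rough illustration of that idea (a hypothetical sketch in Python, not ATI's actual design; the page size and class names are invented): a page table lets the GPU address memory uniformly while individual pages physically live in video RAM or system RAM, with the driver handling faults by paging data in.

```python
PAGE_SIZE = 4096  # bytes; hypothetical page granularity

class GpuPageTable:
    """Toy model: virtual page number -> (location, physical page)."""

    def __init__(self):
        self.entries = {}

    def map(self, vpage, location, ppage):
        self.entries[vpage] = (location, ppage)

    def translate(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        if vpage not in self.entries:
            # In a real design this would trap to the driver,
            # which pages the data in and retries.
            raise KeyError("page fault: driver must page data in")
        location, ppage = self.entries[vpage]
        return location, ppage * PAGE_SIZE + offset

pt = GpuPageTable()
pt.map(0, "vidram", 7)    # hot page resident in video RAM
pt.map(1, "sysram", 42)   # cold page left in system RAM
print(pt.translate(4100)) # -> ('sysram', 172036)
```

    The point is that the shader's address never changes; only the page table entry says where the data currently lives.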
     
  2. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Actually, Anand was talking about RS600 chipsets. That fits.
     
  3. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    So what's Kaleidoscope?
     
  4. oeLangOetan

    Newcomer

    Joined:
    Nov 13, 2003
    Messages:
    76
    Likes Received:
    0
    Is this possible with WinXP and DX9c?

    Hm, could be in it for the derived value parts launching with the R600 when longhorn arrives :roll:
     
  5. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Geez, you think you're gonna get results around here with that direct method? :lol:

    See http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2222

    Boy Wonder speculates it is a multi-VPU technology. Of course Boy Wonder has great sources... but doesn't always understand them when they get to talking in simile/analogy/parables as well as some others we could name.

    So this topic started with new memory technology. . .and Kaleidoscope comes up, which Anand says might be multi-VPU tech.

    So, since I'm only slightly behind Digi in the "village idiot" sweepstakes with absolutely no reputation to damage for looking foolish, I will speculate that Kaleidoscope is some way to share memory between multiple VPUs so that you don't have to have, say 512MB or 1GB of video memory when you do an SLI/AFR/TLA/PDQ or whatever.
     
  6. tEd

    tEd Casual Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,105
    Likes Received:
    70
    Location:
    switzerland
    3Dlabs has this too. With earlier cards (Permedia 3 / GLINT R3) they called it the virtual texturing engine (there's a PDF about it), and they said that in D3D (D3D 6.x) developers are given a choice whether they want to use the card's virtual texture engine or manage textures themselves.
     
  7. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,344
    Likes Received:
    176
    Location:
    On the path to wisdom
    Virtual memory on a GPU can be as transparent as virtual memory on a CPU is to applications.

    Currently, the graphics driver usually does several things to do some kind of "pseudo virtual memory", i.e. it does swapping in and out manually so the overall set of textures can be larger than the graphics memory.
    But "real" virtual memory will bring hardware support and much smaller granularity, meaning you don't need to swap whole textures with all their mipmaps, but only those parts really needed for rendering.
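    A toy Python sketch of that granularity point (all sizes hypothetical): paging in only the pages a frame actually samples moves far less data than swapping in a whole texture with its full mipmap chain.

```python
PAGE_SIZE = 4096  # hypothetical paging granularity in bytes

def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Total size of a texture plus its full mipmap chain."""
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        width //= 2
        height //= 2
    return total

# Driver-managed swapping moves the whole 1024x1024 RGBA texture...
whole_texture = mip_chain_bytes(1024, 1024)

# ...while page-granular hardware could fetch just the 16 pages sampled.
touched_pages = 16 * PAGE_SIZE

print(whole_texture, touched_pages)  # -> 5592404 65536
```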
     
  8. Headstone

    Newcomer

    Joined:
    Sep 29, 2003
    Messages:
    123
    Likes Received:
    0
    So how about this for speculation:

    24-pipe card using a 256-bit bus.
    Thing is, the GPU itself is split into two sets of three quads that can act independently or together depending on the app. Both sets will have independent access to the memory. I think that there may be some eDRAM, but not as much as others think, just enough to reduce latencies from the split core.

    So in effect a one card SLI setup using one core but having the capabilities using the MAXX bridge to .... :D
     
  9. oeLangOetan

    Newcomer

    Joined:
    Nov 13, 2003
    Messages:
    76
    Likes Received:
    0
    If I recall correctly, the P10 has it as well, but what practical use could it have (except for what's coming in Longhorn)? It's already being done in the drivers.

    Xmas:
    Maybe, but normally there is little swapping going on between the graphics card and the RAM, and when it does happen there is little performance hit (looking at D3's Ultra setting, only unused textures get swapped).

    You still need to have all the scene data in the graphics card's memory for it to render efficiently (normally not the case for workstation cards like the P10, hence its use there). Virtualisation can only reduce latency, so I don't believe Dave is pointing at something like this.
     
  10. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Thanks for the geek-speak. :D I was at the "All previous consumer implementations of multi-GPU, like SLI and AFR, required 'dis is mine, and dat is yours, and you donna toucha mine, and I wonna toucha yours'" level. But I think we're going to the same place.

    Doesn't mean we're right tho. :lol:
     
  11. darkblu

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,642
    Likes Received:
    22
    what the drivers can do alone is a long way from real virtualisation.

    up until now texture swapping has been quite a stone in the developer's garden, regardless of what the dx or ogl or whatever-the-api resource manager does or does not do in this regard. that's where virtualisation steps in - the whole point is to achieve higher sustained texturing rates w/o the developer's sweat for it. look at gamecube's flipper - you think texture virtualisation was put there for longhorn?

    i completely missed your point in the above

    to 'only reduce latency' ? what do you think latencies are - a nuisance?
     
  12. oeLangOetan

    Newcomer

    Joined:
    Nov 13, 2003
    Messages:
    76
    Likes Received:
    0
    You missed the "except for what's coming in Longhorn".

    I assume that true virtual video memory will only be possible in WGF, because MS made different driver models for cards without hardware support for virtual video memory and cards with it. So it requires a big change in the OS and in the next DirectX.
    We don't know how far DX9 (let's forget OGL for once) supports virtual video memory, but I don't recall it being one of its features, so I don't expect much.

    So if true virtualisation can't be done in DX9, why would ATI put it in their chips? I don't think Dave is pointing to something like that.
     
  13. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Virtualization is mostly an ease-of-development feature, not a huge performance panacea. Obviously, you'd like to keep your working set of textures fully in vidram and avoid any paging to super-slow system memory, no matter how fine-grained. All it does is lower the penalty for paging in texels, and it makes texture management easier for the developer. Still, in the vast majority of cases, you'd prefer big vidram and plopping all your textures there.
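    The driver-side texture management described here can be sketched as a simple LRU residency scheme (illustrative Python; all names and sizes are invented, and real drivers are far more involved): keep the working set in video RAM and evict the least recently used texture when space runs out.

```python
from collections import OrderedDict

class VidramManager:
    """Toy LRU residency manager for driver-style texture swapping."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.resident = OrderedDict()  # texture id -> size, oldest first

    def touch(self, tex_id, size):
        """Mark a texture as used this frame; returns any evicted ids."""
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)  # recently used
            return []
        evicted = []
        while self.used + size > self.capacity and self.resident:
            victim, vsize = self.resident.popitem(last=False)  # LRU victim
            self.used -= vsize
            evicted.append(victim)  # conceptually paged out to system RAM
        self.resident[tex_id] = size
        self.used += size
        return evicted

vm = VidramManager(capacity_bytes=100)
vm.touch("rock", 60)
vm.touch("grass", 30)
print(vm.touch("sky", 40))  # -> ['rock'] is evicted to make room
```

    Hardware virtual memory would do the same bookkeeping per page rather than per texture, which is exactly the granularity difference Xmas points out above.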
     
  14. darkblu

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,642
    Likes Received:
    22
    tell that to your publisher.
     
  15. Trawler

    Regular

    Joined:
    Mar 24, 2002
    Messages:
    251
    Likes Received:
    1
    Could Kaleidoscope be multiple cores on a single die?
     
  16. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    A GPU is already heavily parallel; a CPU isn't, so far, so that makes some sense. But you could say that each vertex shader and quad on a GPU is a separate core already.
     
  17. Padman

    Newcomer

    Joined:
    Jan 10, 2003
    Messages:
    30
    Likes Received:
    0
    Location:
    Netherlands
    How about separate memory busses for frame/z-buffer and for textures?

    I can remember the old 3Dlabs cards having separate memory for the frame buffer and for textures, like 128 MB framebuffer and 256 MB texture memory.

    I've been meaning to start a thread about this for a while now...

    Why aren't we seeing this more?

    Texturing should need far less bandwidth than the frame/z-buffer, so why not a 256-bit 600 MHz DDR framebuffer bus (with only 128 MB of very expensive memory) and a 64-bit 300 MHz DDR texturing bus (with 512 MB of cheap memory)?
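    Those numbers can be sanity-checked with a quick back-of-envelope calculation (Python; using the standard DDR convention of two transfers per clock):

```python
def ddr_bandwidth_gbs(bus_bits, clock_mhz):
    """Peak bandwidth in GB/s: DDR transfers twice per clock, 8 bits per byte."""
    return bus_bits * 2 * clock_mhz * 1e6 / 8 / 1e9

framebuffer = ddr_bandwidth_gbs(256, 600)  # frame/z traffic
textures = ddr_bandwidth_gbs(64, 300)      # texture fetches
print(framebuffer, textures)  # -> 38.4 4.8
```

    So the proposed split would give the frame/z-buffer eight times the peak bandwidth of the texture bus.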
     
  18. Basic

    Regular

    Joined:
    Feb 8, 2002
    Messages:
    846
    Likes Received:
    13
    Location:
    Linköping, Sweden
    Well, DUH :!:

    Isn't it obvious that Kaleidoscope is just a pair of mirrors that you place on each side of your gfx card. That way it will look like a cool SLI rig through your case window.

    :D
     
  19. madshi

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    359
    Likes Received:
    0
    "Kaleidoscope" reminds me of lots of colors. So maybe it just means 32bit per color = SM3.0 support?
     
  20. Khronus

    Newcomer

    Joined:
    Apr 15, 2004
    Messages:
    62
    Likes Received:
    2
    I'm going way out on a limb here, but a Kaleidoscope is a mix of different colors swirled together, so might "Kaleidoscope" be the ability to mix and match a variety of ATI cards together? A sort of mix&match SLI with the drivers dynamically distributing the load, possibly it would even include integrated ATI graphics?
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.