John Carmack: Doom 4 will support Partially Resident Textures (Tiled Resources)

Discussion in 'Console Technology' started by Cyan, Sep 21, 2013.

  1. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    8,313
    Likes Received:
    2,076
    John Carmack has said that he will implement Partially Resident Textures in Doom 4. Both the PlayStation 4 and Xbox One support this technology via hardware.

    This technology is expected to feature in most next-gen engines, which is a biggy considering that most game designers will now be more familiar with AMD tech, and it will help improve the performance of AMD cards on the PC, so it's good for both the consoles and the PC.

    Some of the advantages of the technology are that texture sizes can now be up to 32 TB :shock:, that fast small pools of RAM like the Xbox One's 32 MB of eSRAM can supposedly store up to 6 GB worth of textures ( http://www.giantbomb.com/forums/xbox-one-8450/x1-esram-dx-11-2-from-32mb-to-6gb-worth-of-texture-1448545/ ), and that it can be described as an ultra-high-resolution streaming technology for the PS4 and Xbox One.

    This means massive textures without the use of massive amounts of memory, which is going to help both consoles tremendously in the long run.



    https://twitter.com/ID_AA_Carmack/status/157512179749371907
     
  2. RudeCurve

    Banned

    Joined:
    Jun 1, 2008
    Messages:
    2,831
    Likes Received:
    0
    I think the problem will be streaming the data from the HDD/ODD fast enough. Rage had the well-known texture-morphing problem during fast camera pans.
     
  3. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    8,313
    Likes Received:
    2,076
    The thing is, if you're using a very low amount of memory to store those textures, you could have a lot of free RAM to use as some kind of temporary storage, without having to *touch* the HDD or disc at all, which is a significant advantage.
     
  4. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,700
    Likes Received:
    2,183
    So not everybody has read sebbbi's figures about virtual texturing. Some good soul should post links to his most enlightening posts on the matter here, just to save some time.
     
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    No. It seems every time you link to an article or post, it's got some very wrong conclusion. Tiled resources allows you to address more texture space than you can fit in RAM. It does not allow you to fit more textures in RAM. There's no correlation between RAM size and virtual texture resolution - the limiting factor is bandwidth (RAM and storage). There's no sane logic that fits the way the technology works that turns 32 MBs ESRAM into 6 GBs of texture data. 32 MBs ESRAM, or eDRAM, or RAM, can fit 32 MBs of textures to be accessed by the GPU at any given moment. The system can swap out those textures for new ones so the texture data isn't static, but capacity is unchanged.

    As for eSRAM's ultra fast BW being useful, the limiting factor will be HDD access speed. Or, if you've cached a load of tiles in RAM, the bottleneck for tiled textures will be DDR3 > ESRAM, or rather DDR3 > GPU because you wouldn't waste time and BW copying textures to eSRAM to read them again in the GPU, meaning a 68 GB/s cap to texture bandwidth. Which doesn't matter because you don't need that much with tiled resources! You only load and use the tiles you need to fit the pixels on screen.

    I heartily recommend that in future you don't believe any article or forum post you find on the 'net without running it by B3D first to learn whether it's legitimate or hogwash. ;)
     
  6. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    So if I understand what you're saying, the use of PRT means that 68 GB/s is ample bandwidth for reading textures into the GPU? Even at 60 fps, and with other tasks contending for the bus, you could still comfortably load 500 MB of textures a frame into the GPU? And that seems a huge amount if you only need enough texture tiles to cover the pixels on the screen... a few MB?
    So the 68 GB/s DDR3 bandwidth is ample (not going to be a bottleneck) and the real bandwidth requirement is to the ESRAM as the GPU does its thing? And there's bucket loads of that, with low latency to boot.
     
  7. eVolvE

    Newcomer

    Joined:
    Aug 31, 2013
    Messages:
    21
    Likes Received:
    0
    Now I think that bolded part wouldn't necessarily be true. Considering that for typical scenes the displayed picture doesn't change much between consecutive frames, it could well be worth having a small PRT "cache" in the ESRAM. Hence, it would be more efficient to have fast, low-latency access to the parts unchanged since the last frame in the ESRAM, while only loading the missing parts from external memory whenever they need to be updated.
    I haven't played with PRT programming myself yet, but I guess having it in ESRAM could be beneficial if you don't want to stall your pipeline with many external lookups from the DRAM on each and every frame.
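    For illustration, here's a toy Python sketch of the kind of tile cache being described (all names are hypothetical, nothing platform-specific): an LRU cache standing in for a small fast pool, where tiles resident from the previous frame are reused and only missing tiles cost an external fetch.

    ```python
    from collections import OrderedDict

    class TileCache:
        """Toy LRU cache standing in for a small, fast pool (e.g. ESRAM).

        Tiles already resident are reused across frames; only missing
        tiles are fetched from the slower external memory.
        """
        def __init__(self, capacity_tiles):
            self.capacity = capacity_tiles
            self.tiles = OrderedDict()  # tile_id -> tile data

        def request(self, tile_id, fetch_from_dram):
            if tile_id in self.tiles:            # hit: no external traffic
                self.tiles.move_to_end(tile_id)
                return self.tiles[tile_id]
            data = fetch_from_dram(tile_id)      # miss: one external load
            self.tiles[tile_id] = data
            if len(self.tiles) > self.capacity:  # evict least recently used
                self.tiles.popitem(last=False)
            return data
    ```

    The win depends entirely on the hit rate: a mostly static view reuses almost every tile, while a fast camera pan degenerates into constant misses.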
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    That's not the right way to think of it. There are ~2 million pixels on screen in a 1080p image. That requires ~2 million individual texture samples (excluding AA), for a total of ~6 megabytes a frame (no transparency, 24-bit colour, keeping things simple) - at 60 fps, that's ~360 MB/s of required bandwidth to draw every pixel. However, we can't access the textures at per-pixel granularity, so we have to load tiles which conveniently span multiple pixels.

    Theoretically, 360 MB/s is all it would take to texture a 1080p screen in perfect, per-pixel fidelity. How a game actually performs depends on the textures and engine, and I won't hazard a guess at the average real-world in-game BW consumption for a virtually textured game. Sebbbi probably covered it in his insightful posts on the subject. The amount of texture you can access depends on how quickly you can load tiles. You can have a 32 TB texture map for the world and only need, I dunno, 1 GB/s to texture via PRT. What we won't need is every texture transferred in full to the GPU - we won't need hundreds of 1k and 2k textures streaming across the bus.
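    Those back-of-the-envelope figures are easy to sanity-check (the exact 1080p pixel count comes out slightly above the rounded 360 MB/s):

    ```python
    # Back-of-envelope texel bandwidth for 1080p at 60 fps, one sample
    # per pixel, 24-bit colour, no AA or transparency.
    pixels = 1920 * 1080                          # = 2,073,600 (~2 million)
    bytes_per_texel = 3                           # 24-bit colour
    per_frame_mb = pixels * bytes_per_texel / 1e6     # ~6.2 MB per frame
    per_second_mb = per_frame_mb * 60                 # ~373 MB/s at 60 fps

    # The rounded figure quoted in the thread uses 2 million pixels flat:
    rounded_mb_s = 2_000_000 * bytes_per_texel * 60 / 1e6   # 360.0 MB/s
    ```

    Real engines need more than this for mip levels, filtering and overdraw, but it shows why the theoretical floor is tiny compared with the 68 GB/s being discussed.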

    If you fill the ESRAM up with textures, you won't have any room for anything else, like render buffers, and so will be capped at 68 GB/s for all your FB ops. ;) The ESRAM is there for read/write bandwidth and it'll be used as such. Conceptually one could cache frequently used tiles there, but given that the BW requirement is so low for PRT, I don't see the advantage. Passes using the textures will be few and far between relative to the total workload, so low latency wouldn't gain you much benefit at all.
     
  9. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    8,313
    Likes Received:
    2,076
    First of all, thanks for the clarifications. Ok, I shall take your advice from now on. Some of the info is just too juicy to ignore though. It's not that easy to discern what constitutes legitimate info and what's hogwash when you find great articles on a particular subject.

    I think it is going to be an essential feature on the PS4 and Xbox One to run such attractive games, not to mention it is also a highly-touted feature in Windows 8.1. I think you can achieve unprecedented levels of detail with it, as shown in some games using the new iD engine.

    As I understand it, it basically works like some kind of culling: the code instructs your console to render the fully detailed textures in the areas your character is focusing on, while at the same time it sheds detail in areas of the game you can't immediately see. When you move around you need a very fast pool of memory or cache to have the most detail at all times. It is pretty fascinating stuff if you ask me.

    That part is what I don't understand about this technology.

    Do you mean that you store -let's say- a very large 4 GB texture in the main RAM to be accessible at all times, and divide it into very small tiles which can fit in small pools of RAM or save memory usage?

    ...or simply that you have, say, a 32 TB texture, and it gets cached from the disk to the main RAM using only the parts or tiles that it needs at any given time?

    I am not trying to be flippant here, but in order to have that incredible, massive amount of texture in memory, first you need some kind of massive storage for such a large texture. I mean, if you want to use a texture that weighs 32 TB, you need physical storage or RAM to store it fully first, right?

    Somehow, I just don't get this part regardless of how creative I want to be thinking of the possibilities.
     
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    This. Take, for example, a texture for a person at a high resolution of 4096 by 4096. Conventional texturing would store that entire texture in RAM. PRT chops it into little tiles and only loads the tiles you're seeing. So if you are looking at the character from the front, and only their top half, only the tiles containing the face and front torso texture would load; the back of the head and body and the leg textures wouldn't be loaded. As you then change viewpoint, those other tiles have to be loaded. If they are on HDD, you need to load them from there, which can cause pop-in (see RAGE). If there is plenty of RAM, you can cache them there. RAM is plenty fast enough to serve up as many tiles as needed given the limits rendering resolution imposes, so as long as all the tiles are cached, you effectively get 'perfect' texturing.
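    A minimal Python sketch of that residency idea (hypothetical names, fixed square tiles): map visible texels to the tiles they fall in, and only fetch tiles not already resident.

    ```python
    TILE = 128  # tile edge length in texels (illustrative figure)

    def tiles_needed(visible_texels):
        """Map each visible texel (u, v) to the tile it falls in."""
        return {(u // TILE, v // TILE) for (u, v) in visible_texels}

    def update_residency(resident, visible_texels, load_tile):
        """Load only tiles that are needed but not yet resident."""
        needed = tiles_needed(visible_texels)
        for tile in needed - resident:
            load_tile(tile)   # fetch from HDD or a RAM tile cache
        return needed         # tiles outside this set can be evicted
    ```

    However huge the source texture is, the working set is bounded by how many tiles the visible pixels can touch, which is why the required bandwidth stays small.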

    Also, PRT as a concept isn't limited to textures. Lionhead Studios has been experimenting with the same concept for meshes, so you can have higher-resolution models without needing to store them fully in RAM. Likewise, volumes can be stored as tiled data for lighting, allowing SVO lighting to fit in RAM without massive overheads. PRT is a significant optimisation over the brute-force method of storing everything you are using in RAM.

    Indeed! Although the 32 TB figure quoted would be uncompressed data in its raw form. That'd get compressed down to whatever size (which would reduce some of the texture clarity, so I guess we won't quite get perfect textures just yet ;)).
     
  11. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    8,313
    Likes Received:
    2,076
    Once again, thanks for your enlightening reply to my post, it was very informative. And thanks for all your candid and thoughtful explanations and comments overall, and for your polite suggestions.

    I guess from your posts that developers need to work out the details of how it functions, but what I get from reading you is this: say we want to draw a character on screen, so we load into RAM the parts of that superbly large texture involved in the scene, and we ask the game to draw the eyes, the nose, the chin, etc. So we have a lookup table, and the texture is encoded in a special, unique way where every tile has a unique name.

    So let's say we want to draw both eyes: we tell the console to load the tile named "Left eye" -from X pixel to Y pixel within the massive texture-, the "Right eye" tile -from X pixel to Y pixel-, the "Nose" tile, etc. It sounds as if you were cropping a picture in an image editor, excuse me if I am wrong.
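    In practice tiles are addressed by grid coordinates rather than names like "Left eye", but the mapping described works roughly like this hypothetical sketch: a tile index picks out a fixed texel rectangle inside the big virtual texture, much like a crop region.

    ```python
    TILE = 128  # texels per tile edge (illustrative figure)

    def tile_rect(tile_x, tile_y):
        """Texel rectangle covered by one tile of the big virtual texture,
        as (x0, y0, x1, y1) inclusive."""
        x0, y0 = tile_x * TILE, tile_y * TILE
        return (x0, y0, x0 + TILE - 1, y0 + TILE - 1)
    ```

    So "load the left eye" really means "load whichever grid tiles the left-eye texels fall inside", no per-feature naming required.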

    This is a very exciting technology, I think, and Carmack has always been ahead of his time and really smart. I'd like to play Rage someday, but it seems like a game which is certainly more suited to the PS4 and XB1 technology tbh. The console technology wasn't ready back then.

    Cheers Shifty.
     
  12. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    Yep.
     
  13. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,073
    Likes Received:
    2,141
    Location:
    Wrong thread
    I can't help feeling that eVolvE might be onto something. With virtual texturing there can be a lot of baking of decals and transcoding. By making sure that all writes (and reads, in the case of baking transparent decals to the tile buffer) are done using the ESRAM, I bet you could improve performance a great deal - especially if you're using trilinear + high aniso during rendering.

    With more power at your disposal you could even do expensive per-texel lighting on the tiles. Despite having to light texels that wouldn't necessarily be used in each frame, re-usability could possibly be a net win. You could almost start to treat the tile cache as an intermediate buffer and look at the cost that way. And the cost of copying out your tile cache if you wanted to use esram for something else would be tiny relative to the available BW.
     
  14. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    Ok, so I think the question I was really asking was "if PRT has low ddr3 bandwidth requirements, where is the bottleneck going to be?"

    For work I run large, mission-critical Oracle databases on big tin. Performance (e.g. end-user response time) is always bottlenecked somewhere; if the SQL is optimised, I like this to be the CPU. Then, if the clock speed of the CPU is increased, the end user gets their data faster (assuming the bottleneck doesn't move elsewhere...).

    So with the X1's architecture, using PRT, where do you (or anyone else for that matter!) see the bottleneck being? To my untrained eye it looks like it will be keeping the various parts of the GPU supplied with data, which, if the ESRAM is employed correctly, will mostly be serviced from there. That makes this very high-bandwidth, low-latency RAM the bottleneck... which seems to me to be a spot-on design decision. Or am I completely up the pole?
     
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    Are you thinking of allocating a portion of ESRAM to dedicated texture cache, or writing and reading in tiles per frame?

    Depends on the game. ;) We'll also still have simple textures in addition to PRT textures, so there will be BW demands for texturing as well as everything else. PRT is definitely a big win though, and for all platforms going forwards - PC and PS4. We don't know what advantage if any XB1 has regards PRT versus PS4.
     
  16. oldschoolnerd

    Newcomer

    Joined:
    Sep 13, 2013
    Messages:
    65
    Likes Received:
    8
    Ha! Yes, it always depends on the software... all devs were not created equal, after all! But if MS have targeted bits of their architecture towards getting a lot of the grunt work involved in PRT done in hardware (move engines for tile/untile, ESRAM, etc.) to the point where it effectively nullifies the PS4's on-paper spec advantage, then hats off to them. Judging by comments they have made ("there's no way we are giving up a 30% performance difference to the PS4" and "games look the same if not better, blah blah blah"), they think they have achieved it.
     
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,925
    Likes Received:
    10,048
    Location:
    Under my bridge
    This isn't a versus thread. Also, GCN supports DMA and tiled memory access (AFAIK), so it's not like the XB1 has PRT and the PS4 hasn't. The XB1 may have a PRT advantage, which is a discussion in itself and one which hasn't borne fruit. Don't jump to conclusions about what box can do what, though.
     
  18. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,073
    Likes Received:
    2,141
    Location:
    Wrong thread
    I was thinking of a dedicated, fixed-size texture tile cache in ESRAM (for either all tiles immediately required or some constant proportion of them), with the possibility of having a "non-render" period of the update cycle for compute, when you could transfer them all out.

    I agree that dedicating a portion of ESRAM to textures seems expensive, but with the number of reads and writes that could be involved on a relatively small set of data, it just seems like it might be a good fit. And with the news today from Digital Foundry that 1Bone can split a single render target across both ESRAM and DDR, maybe you could look on the BW saved by locating tiles permanently in ESRAM as freeing up DDR bandwidth for partial render targets...? Not to mention that it could (should?) speed up GPU modification and management of the tile cache.
     
  19. DRS

    DRS
    Newcomer

    Joined:
    May 22, 2009
    Messages:
    135
    Likes Received:
    0
    So could the texturing benefit from the eSRAM, considering that rendering graphics involves a predictable memory access pattern and the DDR BW is high enough to keep up with the required (compressed) texel rates? Or is the XB1's DDR BW far from optimal for this virtual tiling thing?

    I think Shifty had a good point in suggesting to skip the SRAM, though I could be misunderstanding the whole thing totally, of course :)
     
  20. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    8,313
    Likes Received:
    2,076
    Doom 4 has been confirmed as the first game to use Tiled Resources! :shocked::happy2: :yep2: :smile:

    Plus it will use DirectX 12, it's expected to launch in late 2015 or early 2016, and it will run at 1080p and 60 fps.

    I want to see the Tiled Resources technology running in real time on a PC or consoles.

    John Carmack said some time ago that something like MegaTexture will win in the end:

    http://gamingbolt.com/john-carmack-something-like-mega-texture-will-win-in-the-end
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.