John Carmack: Doom 4 will support Partially Resident Textures (Tiled Resources)

lol, are we seriously talking about directx in a doom thread?

This isn't John Carmack's id anymore. Zenimax is in charge now and they're a brutal bunch of suits and bean counters.
 
Like Rage before it, Doom will have a DirectX rendering back-end to address the Xbox platform (and a GNM back-end for the PS4). But I'd guess that on Windows it will still use OpenGL.
 
It would, if Bethsoft simply refused to use the available hardware features on the PS4.

They're just evil enough that I might believe they would do something like that. And Bethy has a long-standing asslickery relationship with MS, so they might have a financial incentive to make that version look/run better for whatever reason (this stretches back to the days of Oblivion launching first on the 360, IIRC; all the expansions for Skyrim launched first on the 360 as well).

Oblivion was PC+360 initially. PS3 wasn't even out yet.

The main evilness I've read of is Morrowind on Xbox getting special, half-assed QA allowances to make a timely appearance on the console. And back then there were no patches for console games, even on Xbox. The smart person doesn't buy a Bethesda HQ-developed game that can't receive patches or mods.
 
How so? It has no advantage over other platforms.

Could the JPEG decoder logic in the XB1 be of benefit here, in terms of offloading the GPU or CPU?

MegaTexture does involve using JPEG, or JPEG-like variable-bit-rate compression, to make more efficient use of HDD bandwidth, and then transcoding to DXT on the fly.
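For concreteness, here's a minimal sketch in C of that per-page transcode step. It assumes stb_image as a stand-in for id's own HDPhoto/JPEG decoder and uses a deliberately naive bounding-box DXT1 encoder; the shipping engine's decoder and encoder are its own, much faster code, so treat this as the shape of the work, not the actual implementation.

Code:
/* Minimal sketch: decode one streamed JPEG page and transcode it to DXT1.
 * stb_image stands in for id's decoder; encode_dxt1_block() is a naive
 * min/max encoder, nowhere near production quality. */
#include <stdint.h>
#include <string.h>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

static uint16_t pack565(const uint8_t *rgb)
{
    return (uint16_t)(((rgb[0] >> 3) << 11) | ((rgb[1] >> 2) << 5) | (rgb[2] >> 3));
}

/* Encode one 4x4 RGB block into 8 bytes of DXT1 (BC1). */
static void encode_dxt1_block(const uint8_t *rgb, int stride, uint8_t *out)
{
    uint8_t lo[3] = { 255, 255, 255 }, hi[3] = { 0, 0, 0 };
    int pal[4][3];
    uint32_t indices = 0;

    for (int i = 0; i < 16; i++)              /* bounding-box endpoints */
        for (int c = 0; c < 3; c++) {
            uint8_t v = rgb[(i / 4) * stride + (i % 4) * 3 + c];
            if (v < lo[c]) lo[c] = v;
            if (v > hi[c]) hi[c] = v;
        }

    /* hi >= lo componentwise, so c0 >= c1: 4-color mode (a degenerate
     * equal pair still decodes correctly, since all indices become 0). */
    uint16_t c0 = pack565(hi), c1 = pack565(lo);
    out[0] = (uint8_t)(c0 & 0xff); out[1] = (uint8_t)(c0 >> 8);
    out[2] = (uint8_t)(c1 & 0xff); out[3] = (uint8_t)(c1 >> 8);

    for (int c = 0; c < 3; c++) {             /* 4-entry palette */
        pal[0][c] = hi[c];                    pal[1][c] = lo[c];
        pal[2][c] = (2 * hi[c] + lo[c]) / 3;  pal[3][c] = (hi[c] + 2 * lo[c]) / 3;
    }
    for (int i = 0; i < 16; i++) {            /* nearest palette entry */
        const uint8_t *p = rgb + (i / 4) * stride + (i % 4) * 3;
        int best = 0, bestd = 1 << 30;
        for (int j = 0; j < 4; j++) {
            int d = 0;
            for (int c = 0; c < 3; c++) { int e = p[c] - pal[j][c]; d += e * e; }
            if (d < bestd) { bestd = d; best = j; }
        }
        indices |= (uint32_t)best << (2 * i);
    }
    memcpy(out + 4, &indices, 4);
}

/* Decode a page (say 128x128) from its on-disk form and emit DXT1.
 * Returns bytes written, or -1 on failure. */
int transcode_page(const uint8_t *jpeg, int jpeg_len, uint8_t *dxt_out)
{
    int w, h, n;
    uint8_t *rgb = stbi_load_from_memory(jpeg, jpeg_len, &w, &h, &n, 3);
    if (!rgb || (w | h) & 3) { stbi_image_free(rgb); return -1; }

    for (int by = 0; by < h; by += 4)
        for (int bx = 0; bx < w; bx += 4, dxt_out += 8)
            encode_dxt1_block(rgb + (by * w + bx) * 3, w * 3, dxt_out);

    stbi_image_free(rgb);
    return (w / 4) * (h / 4) * 8;
}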
 

Oblivion launched 8+ months before the PS3 was on the market; 14 months if you look at PAL territories. But the games always ran like total ass, indeed.

I'll never buy a game from that developer because of all the game-breaking bugs that never get patched out. I can tell you: even with 8 (5.5) GB of RAM, they will still have the retail 1.20 game crashing PS4s and Xbox Ones.
 

JPEG is probably just used for the Kinect camera.

I don't believe the chip can do it between the HDD and the RAM.

The way the clipmap works, the textures are already organized and stored in a way that makes them efficient to find on the media/drive and load into RAM. Besides, you can still keep compressed textures in RAM (and they always are kept that way), as the decompression happens on the GPU anyway. (And it was essentially just trolling, so please just ignore it.)
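As a toy illustration of that "organized so it's efficient to find" point, imagine a page file where every mip level's tiles sit contiguously at a fixed compressed budget. The layout below is my own invention (real MegaTexture files use variable-rate compression plus an index), but it shows why fetching a tile is one seek and one read:

Code:
/* Illustrative only: each mip level's tiles stored contiguously,
 * row-major, at a fixed per-tile budget. Not id's actual file format. */
#include <stdint.h>
#include <stdio.h>

#define TILE_BYTES (64 * 1024)   /* assumed fixed compressed tile budget */

/* Tiles per side at mip 'level', given 'tiles0' tiles per side at the
 * finest level (power of two). */
static uint32_t tiles_at(uint32_t tiles0, uint32_t level)
{
    uint32_t t = tiles0 >> level;
    return t ? t : 1;
}

/* Byte offset of tile (x, y) at 'level' inside the page file. */
static uint64_t tile_offset(uint32_t tiles0, uint32_t level,
                            uint32_t x, uint32_t y)
{
    uint64_t off = 0;
    for (uint32_t l = 0; l < level; l++) {    /* skip the finer levels */
        uint64_t t = tiles_at(tiles0, l);
        off += t * t * (uint64_t)TILE_BYTES;
    }
    uint32_t t = tiles_at(tiles0, level);
    return off + ((uint64_t)y * t + x) * TILE_BYTES;
}

/* One seek, one read: the streaming thread's whole job in miniature. */
static int read_tile(FILE *f, uint32_t tiles0, uint32_t level,
                     uint32_t x, uint32_t y, void *dst)
{
    if (fseek(f, (long)tile_offset(tiles0, level, x, y), SEEK_SET))
        return -1;
    return fread(dst, 1, TILE_BYTES, f) == TILE_BYTES ? 0 : -1;
}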
 

Well, I only asked because the vgleaks documents place the DMA engines associated with the compression logic under the exclusive control of the running title, while the system shares the fourth one with the title. The docs describe using that one DMA engine during the system's time slice of the GPU, and they specifically reference the use of JPEG decompression on textures.

"The same move engine that supports LZ decoding also supports JPEG decoding. Just as with LZ, JPEG decoding operates as an extension on top of the standard DMA modes. For instance, a title may decode from main RAM directly into a sub-rectangle of a tiled texture in ESRAM. The move engines contain no hardware JPEG encoder, only a decoder.

Read more at: http://www.vgleaks.com/world-exclusive-durangos-move-engines"
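Purely as a hypothetical reading of that description, a decode job handed to one of those move engines would need to carry roughly the following information. This struct is illustrative only, not the actual XB1 SDK interface:

Code:
/* Hypothetical only: the fields a JPEG-decode DMA job might need, going
 * by the vgleaks description. This is NOT the real XB1 SDK interface. */
#include <stdint.h>

typedef struct {
    uint64_t src;            /* JPEG bitstream somewhere in main RAM */
    uint32_t src_bytes;      /* length of the bitstream */
    uint64_t dst;            /* base of the destination surface in ESRAM */
    uint32_t dst_pitch;      /* row pitch of that surface */
    uint16_t rect_x, rect_y; /* sub-rectangle of the tiled texture ... */
    uint16_t rect_w, rect_h; /* ... that the decoded pixels land in */
    uint8_t  dst_tiling;     /* destination tiling/swizzle mode */
} JpegDmaJob; /* submitted to one of the title-owned move engines */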
 

That's what I said: it has nothing to do with the HDD, which was what you asked about earlier.
JPEG and DXT are both compression. It makes sense to store data on the media in the form best suited for consumption; it makes no sense to transcode from JPEG to DXT.

 

But isn't MegaTexture using JPEG basically as a way to maximize HDD bandwidth? JPEG compresses at higher ratios with better quality than DXT, while being useless as a format a GPU can use directly.

Plus, isn't transcoding from JPEG to DXT exactly what's going on? That's what I took away when I read up on MegaTexture (the id/Nvidia version). I may be wrong, though, but I watched a video by a Google engineer who discouraged JPEG-to-DXT transcoding because of the extra artifacting it introduces. They advocated an enhanced form of compression instead: simply take a DXT texture and apply entropy coding and delta coding to chop it down to almost JPEG-like sizes.

https://code.google.com/p/crunch/
This is something I ran into while reading up on the subject months ago.

I'm not asking if the XB1 can directly decode JPEGs off the HDD; I'm asking if the decoder can be realized as a benefit because it relieves the GPU or CPU from having to perform that duty.
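The crunch-style idea is easy to sketch: leave the texture in DXT, but reorder the DXT stream into planes, delta-code the block endpoints (neighbouring blocks correlate strongly), and run a general entropy coder over the result. In the sketch below, zlib stands in for crunch's own, much more specialised coder, and error handling is elided:

Code:
/* Sketch of the crunch-style idea: squeeze the DXT1 stream itself.
 * zlib is only a stand-in for a proper entropy coder. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Reorder n 8-byte DXT1 blocks into planes and delta-code endpoints. */
static uint8_t *reorder_dxt1(const uint8_t *dxt, size_t nblocks)
{
    uint8_t *buf = malloc(nblocks * 8);
    uint16_t prev0 = 0, prev1 = 0;
    for (size_t i = 0; i < nblocks; i++) {
        uint16_t c0, c1;
        memcpy(&c0, dxt + i * 8 + 0, 2);
        memcpy(&c1, dxt + i * 8 + 2, 2);
        uint16_t d0 = c0 - prev0, d1 = c1 - prev1;  /* endpoint deltas */
        prev0 = c0; prev1 = c1;
        memcpy(buf + i * 2,               &d0, 2);  /* endpoint-0 plane */
        memcpy(buf + nblocks * 2 + i * 2, &d1, 2);  /* endpoint-1 plane */
        memcpy(buf + nblocks * 4 + i * 4, dxt + i * 8 + 4, 4); /* indices */
    }
    return buf;
}

/* Entropy-code the reordered stream. Returns compressed size or 0. */
static size_t pack_dxt1(const uint8_t *dxt, size_t nblocks, uint8_t **out)
{
    uint8_t *planes = reorder_dxt1(dxt, nblocks);
    uLongf cap = compressBound(nblocks * 8);
    *out = malloc(cap);
    int rc = compress(*out, &cap, planes, nblocks * 8);
    free(planes);
    return rc == Z_OK ? (size_t)cap : 0;
}

The decoder just inflates and re-applies the prefix sums, and the payoff is that the output is already DXT: there is no second lossy encode, so none of the transcoding artifacts mentioned above.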
 
So if we take a step back and make some assumptions, here's what the flow would look like:

HDD (JPEG) -> DDR3 (JPEG) -> decoder -> ESRAM (raw) -> CPU/GPU (transcode) -> DDR3 (DXT)

I'm not convinced that this is the better arrangement.
 
Yup, JPEG-to-DXT transcoding is exactly what is going on; the reason for the HDPhoto compression was loading speed and storage limitations.
http://mrelusive.com/publications/papers/Real-Time-Texture-Streaming-&-Decompression.pdf

You might want to look into Jon Olick's research on DXT re-compression as well:
http://www.jonolick.com/
It seems to be a nice alternative to HDPhoto.
I'm pretty sure it would work something like this:

HDD (JPEG) -> DDR3 (JPEG) -> decoder -> CPU/GPU (transcode) -> DDR3 (DXT)

No need to use ESRAM for this, especially not for raw data. The JPEG might be good to keep in main memory, to provide fast access to a larger area of the texture.

As for the actual use of PRT in an id Tech game: they would need several PRT textures in a scene, as PRT doesn't support large enough textures. I wonder if an additional layer of indirection would be the right way to go? (At the vertex level?)
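For reference, the layer of indirection that software virtual texturing already uses looks roughly like this on the CPU side; the sizes and names are illustrative, and on the GPU the page table is just one extra small texture fetch per sample:

Code:
/* Sketch of the indirection step in software virtual texturing; an
 * extra layer on top of PRT would be the same idea. Illustrative only. */
#include <stdint.h>
#include <math.h>

#define VIRT_PAGES 4096  /* virtual texture: 4096x4096 pages (example) */
#define PHYS_PAGES 64    /* physical page cache: 64x64 pages */

typedef struct { float u, v; } UV;

/* One page-table entry: where the virtual page currently lives in the
 * physical cache, and which mip level is actually resident for it. */
typedef struct { uint8_t px, py, mip; } PageEntry;

/* In a real renderer this is a small texture, not a giant array. */
static PageEntry page_table[VIRT_PAGES][VIRT_PAGES];

/* The 'layer of indirection': translate a virtual UV into a physical
 * cache UV, then sample the physical texture at the result. */
static UV translate(UV virt)
{
    uint32_t vx = (uint32_t)(virt.u * VIRT_PAGES) % VIRT_PAGES;
    uint32_t vy = (uint32_t)(virt.v * VIRT_PAGES) % VIRT_PAGES;
    PageEntry e = page_table[vy][vx];

    /* Fractional position inside the resident page; if only a coarser
     * mip is resident, one cached page covers more virtual space. */
    float pages = (float)(VIRT_PAGES >> e.mip);
    float fx = fmodf(virt.u * pages, 1.0f);
    float fy = fmodf(virt.v * pages, 1.0f);

    UV phys = { (e.px + fx) / PHYS_PAGES, (e.py + fy) / PHYS_PAGES };
    return phys;
}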
 
You would have to go back to RAM after the decoder stage, as it's a DMA engine. You couldn't pipe the output straight into the CPU (or GPU); those devices can't store anything per se, they fetch their data as needed from RAM.
 
In addition to Grall's point:

XB1
HDD (JPEG) -> DDR3 (JPEG) -> decoder -> DDR3 (RGB data) -> CPU/GPU (transcode) -> DDR3 (DXT)

PS4
HDD (JPEG) -> GDDR5 (JPEG) -> CPU/GPU (decode + transcode) -> GDDR5 (DXT)

Whether it's a saving or not depends on how much work the decode is. An obvious hardware progression would be support for JPEG decoding on the GPU itself, so that JPEG'd tile data could be used directly.
 
Didn't Rage have a CUDA implementation for texture de-/re-compression on Nvidia systems?

Because if the DMA engines can transcode the texture data for free, that work could be done by compute on the PS4 "for free" as well (as in, for the same game on both consoles: 12 vs. 18 ALUs, with 6 left over for free).
 
It would make sense for both consoles to just do the transcoding with a combination of GPU and CPU.

Does anyone know whether the unified memory and the bigger standard media remove the need to store the textures in JPEG form?
 
Well, I'd assume there's some sort of streaming involved before you get into the actual clipmap. Didn't they manage to make it happen on the 360/PS3 with Rage?
 