Doom (2016) Graphics Study

Sorry for the simple question, but not being an expert with the newer feature-set APIs: do you think the same results as Doom 2016 could have been achieved with a DirectX 9 level of hardware/API? Which clearly visible effects, and which corresponding newer features, would have been impossible on previous hardware/APIs?
 
Not a graphics expert either, so I can't go into any specifics, but DX9 was much more constrained in terms of programming features and limitations compared to the versions that came later. You could work around the limitations*, no doubt, but the penalty would be reduced performance, and perhaps also reduced image quality in the form of visible artifacts due to precision loss from repeated blending operations and so on. In a DX11, DX12 or Vulkan world, what can be done in a single step might require several steps of operations in DX9.

*John Carmack allegedly proved mathematically that any conceivable graphics operation could be performed just by using the standard OpenGL set of blending operations. This was before the era where PC graphics cards started having pixel shading hardware, btw, just to put things into perspective. So you might need a horrific number of blending passes for advanced effects, such as running Crysis, but it would work, in theory.

Of course, back then PC graphics was stuck at 8 integer bits per channel (i.e. 24/32 bits total per pixel). You can't do too many blending ops at that precision level before the screen looks like a brown muddy mess (i.e., your average id Software game.... :LOL:), so this was all theoretical at the time. Even today, with floating-point math available and deep color buffers (up to 128 bits/pixel afaik), such an approach would be really slow because pixel fillrate and memory bandwidth are limited. Real-time effects would also be hard to implement, I suspect, as you might have to calculate (perhaps multitudes of) texture maps in real time to simulate said effect.
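Just to put a number on that precision-loss point, here's a tiny sketch (the blend values and pass count are made up purely for illustration, not taken from any real renderer): a hundred faint additive passes that each round away to nothing in an 8-bit buffer, but add up to something clearly visible in float.

```c
/* Sketch with made-up numbers: accumulate 100 faint additive light passes,
 * each contributing 0.4/255 per channel. With an 8-bit framebuffer each
 * pass quantises back to an integer, so the contribution rounds away every
 * time; in float the passes simply add up. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    uint8_t dst8 = 0;          /* 8-bit integer framebuffer channel */
    float   dstf = 0.0f;       /* float framebuffer channel (0..1)  */
    const float contribution = 0.4f / 255.0f;   /* one faint light pass */

    for (int pass = 0; pass < 100; ++pass) {
        /* 8-bit path: quantised after every pass, so a sub-1/255
         * contribution is lost entirely. */
        dst8 = (uint8_t)fminf(255.0f, roundf(dst8 + contribution * 255.0f));
        /* float path: no per-pass quantisation. */
        dstf += contribution;
    }

    printf("8-bit framebuffer after 100 passes: %u/255\n", (unsigned)dst8);
    printf("float framebuffer after 100 passes: %.1f/255\n", dstf * 255.0f);
    return 0;
}
```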

Oh well. Just food for thought. Nobody ever said this approach would be practical. :)
 
*John Carmack allegedly proved mathematically that any conceivable graphics operation could be performed just by using the standard OpenGL set of blending operations.
Isn't that how the PS2 GPU was utilized to work around the lack of a programmable pipeline? Each pixel op was paired with a texture read the whole time, since the eDRAM had the bandwidth to drive through all the overdraw.
 
Not a graphics expert either, so I can't go into any specifics, but DX9 was much more constrained in terms of programming features and limitations compared to the versions that came later.
Thanks. I ask this because I have the feeling that, aside from the newer features implemented in newer games, I don't see much progress toward the "realism" goal. As you said, much was done in a game like Crysis (PC) that probably would not have needed newer, more complex architectures or features (?). I've always waited to see photorealism improve in games, beyond the resolution war (1080p, UHD, UHD4K-TURBO-QVGA at 760 fps... wow... who cares?) or the "post-processing cinema effects" like motion blur (I usually didn't like it even back in the Quake 3 T-Buffer days), or the way the camera simulates the movement of the player's head within the in-game point of view.
When I see a tech demo using, I don't know, global indirect illumination, I can clearly say "finally, some realism!", but not so much with all these futuristic never-ending combat games, even if I still play the awesome original 1993 Doom and its console ports.
 
Isn't that how the PS2 GPU was utilized and worked around the lack of programmable pipeline?
From what I understand, yes, except that because of the lack of programmability features, the PS2's graphics chip isn't what we would call a GPU today. :) It's pretty much just a dumb pixel-rendering device, from my understanding. It executes the commands exactly as given to it in the display list; it can't process anything any further than that...
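For illustration only, here's a toy C sketch of that "execute the list verbatim" model. The command set and fields are invented for the example and have nothing to do with the real GS packet format; the point is just that the chip walks the list and does exactly what each entry says, with no programmable per-pixel logic of its own.

```c
/* Conceptual sketch only: a toy "display list" consumer in the spirit of a
 * fixed-function part. Command names and fields are invented for this
 * example. Multi-pass "effects" are just more entries in the list:
 * rebind state, draw the same geometry again, repeat. */
#include <stdio.h>

typedef enum { CMD_SET_TEXTURE, CMD_SET_BLEND, CMD_DRAW_TRIS, CMD_END } CmdType;

typedef struct {
    CmdType type;
    int     arg;   /* texture id, blend mode, or triangle count */
} Cmd;

static void execute_display_list(const Cmd *list)
{
    /* Walk the list front to back and execute each command as-is. */
    for (; list->type != CMD_END; ++list) {
        switch (list->type) {
        case CMD_SET_TEXTURE: printf("bind texture %d\n", list->arg);        break;
        case CMD_SET_BLEND:   printf("set blend mode %d\n", list->arg);      break;
        case CMD_DRAW_TRIS:   printf("rasterise %d triangles\n", list->arg); break;
        default: break;
        }
    }
}

int main(void)
{
    const Cmd list[] = {
        { CMD_SET_TEXTURE, 1 }, { CMD_SET_BLEND, 0 }, { CMD_DRAW_TRIS, 500 },
        { CMD_SET_TEXTURE, 2 }, { CMD_SET_BLEND, 1 }, { CMD_DRAW_TRIS, 500 },
        { CMD_END, 0 }
    };
    execute_display_list(list);
    return 0;
}
```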
 
Thanks. I ask this because I have the feeling that, aside from the newer features implemented in newer games, I don't see much progress toward the "realism" goal.
Realism has been good, but ultimately it's a question of performance. Polygon counts can be rather high and ray tracing handles lighting well enough.

While not photorealism, I'd say the bigger concern is detail and physics in the scene. The latest APIs can help with that through the sheer number of draws. Open world and VR are the next big step IMHO.
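As a rough sketch of what "sheer number of draws" means in practice (the per-draw CPU costs below are placeholder orders of magnitude, not benchmarks of any particular driver or API): cutting the CPU cost per draw directly multiplies how many draws fit into a 16.6 ms frame.

```c
/* Back-of-envelope illustration. Both per-draw costs and the CPU share of
 * the frame are ASSUMED placeholders, purely to show how the draw budget
 * scales with per-draw overhead. */
#include <stdio.h>

int main(void)
{
    const double frame_budget_ms = 16.6;    /* 60 fps frame time              */
    const double cpu_share       = 0.25;    /* ASSUMED: 25% of the frame spent
                                               submitting draws               */
    const double cost_old_api_ms = 0.020;   /* placeholder "older API" cost   */
    const double cost_new_api_ms = 0.004;   /* placeholder "DX12/Vulkan-ish"  */

    double submit_budget_ms = frame_budget_ms * cpu_share;
    printf("Draws per frame with %.1f ms of submit time:\n", submit_budget_ms);
    printf("  old-style API (%.3f ms/draw): %.0f draws\n",
           cost_old_api_ms, submit_budget_ms / cost_old_api_ms);
    printf("  new-style API (%.3f ms/draw): %.0f draws\n",
           cost_new_api_ms, submit_budget_ms / cost_new_api_ms);
    return 0;
}
```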
 
Wolf 2 is 60 fps on consoles? If so, the idea that the CPU is not enough (for 60 fps) is probably an interesting argument that has less merit going into the future.

This appears to be the strength of Vulkan/DX12 rendering pipelines.

I wonder how much money is being saved on tools and assets
 
Wolf 2 is 60 fps on consoles? If so, the idea that the CPU is not enough (for 60 fps) is probably an interesting argument that has less merit going into the future.

Wolf 2 is 60 FPS on consoles
 
Wolf 2 is 60 fps on consoles? If so, the idea that the CPU is not enough (for 60 fps) is probably an interesting argument that has less merit going into the future.

To be fair, the devs are targeting said CPU in the first place, so it's a question of what more they could do if they had more CPU power available.
 
To be fair, the devs are targeting said CPU in the first place, so it's a question of what more they could do if they had more CPU power available.
Maybe this can give us some insight
AMD FX-8320, 8 cores at 3.5 GHz.

If we scale down, maybe we can work out what percentage of the Jaguar is taken up by draw calls?
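Something like this back-of-envelope scaling, maybe. The console Jaguar clock is the well-known ~1.6 GHz figure, but the per-core IPC ratio and the draw-call share measured on the FX-8320 are placeholders you'd have to fill in from an actual profile, not real numbers:

```c
/* Sketch of the scaling suggested above. Only the clock speeds are known
 * figures; the IPC ratio and the measured draw-call share are ASSUMED
 * stand-ins for a real profile. */
#include <stdio.h>

int main(void)
{
    const double fx8320_clock_ghz = 3.5;    /* AMD FX-8320 from the post      */
    const double jaguar_clock_ghz = 1.6;    /* PS4 Jaguar (Xbox One ~1.75)    */
    const double ipc_ratio        = 1.2;    /* ASSUMED per-core IPC advantage
                                               of the FX core over Jaguar     */
    const double drawcall_share_on_fx = 0.10;  /* ASSUMED 10% of a frame,
                                                  stand-in for a measurement  */

    /* Per-core speed ratio, then scale the hypothetical draw-call cost. */
    double speed_ratio = (fx8320_clock_ghz / jaguar_clock_ghz) * ipc_ratio;
    double drawcall_share_on_jaguar = drawcall_share_on_fx * speed_ratio;

    printf("Assumed per-core speed ratio (FX-8320 : Jaguar): %.2fx\n", speed_ratio);
    printf("Hypothetical draw-call share of a frame on Jaguar: %.0f%%\n",
           drawcall_share_on_jaguar * 100.0);
    return 0;
}
```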

 