Infinisearch
Veteran
I've had some thoughts and ideas regarding rendering over the years, and I was hoping the developers and armchair developers here would chime in on some of them.
1. Around 2003 on gamedev.net I made a post about z-compositing that I'd like to rehash here and now.
The basic idea was to render the static geometry of the scene at one frame rate and the dynamic geometry at another, then composite the two at the higher frame rate. For a first-person camera, static geometry at 60 fps and dynamic at 30; for a third-person camera, 30 for the static and 60 for the dynamic. (A rough sketch of the composite step I mean is below, after the questions.)
a. Assuming you can keep up the frame rates and it's a game with no overly fast-moving objects, camera, or lights, can you think of any rendering artifacts? Do you think players would complain of something akin to micro-stutter, or of something else bothering them?
b. Related to a: given modern rendering techniques, do you feel it would be worth the trouble? Also, at what stage would you do the compositing and why, i.e. before or after lighting and shadowing?
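To make the compositing idea concrete, here's a minimal CPU-side sketch in numpy of the composite step I mean. The buffer names and the random "rendered" layers are just stand-ins for illustration, not a claim about how an engine would actually wire this up:

```python
import numpy as np

H, W = 4, 4  # tiny framebuffer for illustration

# Hypothetical layers, each rendered on its own schedule:
# the static layer at (say) 60 fps, the dynamic layer reused from a 30 fps frame.
static_color = np.random.rand(H, W, 3).astype(np.float32)
static_depth = np.random.rand(H, W).astype(np.float32)

dynamic_color = np.random.rand(H, W, 3).astype(np.float32)
dynamic_depth = np.random.rand(H, W).astype(np.float32)
dynamic_depth[1, 1] = 1.0  # pretend nothing dynamic covers this pixel

def z_composite(c0, z0, c1, z1):
    """Per-pixel depth test between two color+depth layers.

    Keeps the color of whichever layer is nearer (smaller z) and returns
    the merged depth too, so later passes still see a consistent z-buffer.
    """
    nearer = (z0 <= z1)[..., None]   # broadcast the mask over RGB
    color = np.where(nearer, c0, c1)
    depth = np.minimum(z0, z1)
    return color, depth

color, depth = z_composite(static_color, static_depth,
                           dynamic_color, dynamic_depth)
print(color.shape, depth.shape)  # (4, 4, 3) (4, 4)
```

On a GPU this would just be the hardware depth test when drawing the newer layer over the older one's color+depth, but the per-pixel logic is the same.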
2. I kind of skipped over how HDR is implemented until recently. I've only just started looking into tone mapping, so essentially all I know is that you sometimes need to calculate the average luminance. So I was wondering whether it is possible, given either current hardware or the current hardware/API combination, to compute the sum portion of the average as you do your light accumulation?
It seems rather inefficient to do it after the fact; a toy sketch of what I mean follows.
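Here's a toy CPU sketch (numpy, with made-up light contributions) of folding the luminance reduction into the light accumulation instead of running a separate pass over the finished HDR buffer. On a GPU this would presumably be atomics or a per-tile reduction during the last lighting pass, which is exactly the part I'm asking about:

```python
import numpy as np

H, W = 256, 256
accum = np.zeros((H, W, 3), dtype=np.float32)   # HDR light accumulation buffer
log_lum_sum = 0.0                               # running sum, built during accumulation

# Hypothetical per-light contributions (stand-ins for real shading results).
lights = [np.random.rand(H, W, 3).astype(np.float32) * 0.5 for _ in range(3)]

LUMA = np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
EPS = 1e-4

for i, contribution in enumerate(lights):
    accum += contribution
    if i == len(lights) - 1:
        # During the final accumulation "pass", also reduce log-luminance,
        # so no separate full-screen average-luminance pass is needed.
        log_lum_sum = float(np.sum(np.log(accum @ LUMA + EPS)))

avg_log_lum = log_lum_sum / (H * W)
key = float(np.exp(avg_log_lum))   # geometric-mean luminance for tone mapping
print(key)
```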
3. I've always had a thing for IDs: primitive/patch IDs, object/sub-object IDs, and so on. It seems some rendering techniques require multipassing (light indexed), others have an option for it (clustered), and some ("classical deferred" with a z-only pass) seem like they might benefit from it but wind up with too high a geometry load. So I was wondering whether anybody has tried this, and whether it is currently possible (API/hardware), to write out Z, a primitive ID, and whatever other ID the specific rendering technique needs, and from that generate a sorted list of visible primitives to be used as the basis of what to draw in subsequent passes?
Essentially Z would be read or read-modify-write, framebuffer output would be write-only, and there would be no texture reads. So speed would be pretty close to a Z-only pass, and the subsequent pass or passes would only try to render what is visible, with the z-buffer handling the rest (toy sketch below). What do you think?
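Here's a toy CPU-side numpy sketch of the kind of thing I'm picturing; the buffer names and the flat-rectangle "primitives" are invented purely for illustration. A z-tested ID pass is followed by compacting the ID buffer into a sorted list of visible primitives that later passes would restrict themselves to:

```python
import numpy as np

H, W = 128, 128
NO_PRIM = 0xFFFFFFFF  # sentinel for "no primitive covered this pixel"

depth_buf = np.full((H, W), np.inf, dtype=np.float32)
id_buf = np.full((H, W), NO_PRIM, dtype=np.uint32)

# Hypothetical primitives: (prim_id, y0, y1, x0, x1, depth) as screen-space rects,
# standing in for real rasterized triangles.
prims = [
    (0, 0, 64, 0, 64, 0.50),
    (1, 32, 96, 32, 96, 0.30),   # closer, partially occludes prim 0
    (2, 40, 80, 40, 80, 0.90),   # entirely behind prim 1 -> should be culled
]

# "ID pass": depth test, write Z + primitive ID, no texture reads.
for pid, y0, y1, x0, x1, z in prims:
    region_z = depth_buf[y0:y1, x0:x1]
    visible = z < region_z
    region_z[visible] = z
    id_buf[y0:y1, x0:x1][visible] = pid

# Compaction: sorted, unique list of primitive IDs that survived the depth test.
visible_ids = np.unique(id_buf)
visible_ids = visible_ids[visible_ids != NO_PRIM]
print(visible_ids)  # [0 1] -- prim 2 never lands in the buffer
```

The subsequent passes would then draw only the primitives in visible_ids; how you'd actually build and consume that list efficiently on the GPU is the part I'm unsure about.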
Thanks in advance for any comments, criticisms, advice, and analyses.