In a third demo illustrating another powerful use of Tiled Resources, Microsoft showed a program developed by our content technology group. It showed a power plant filled with fine shadows that dynamically shift to reveal the detailed environment.
In this example, an enormous shadow depth map is used to precisely compute the shadows in the scene.
To avoid the allocation of a prohibitive amount of memory, Tiled Resources are used to allocate only the memory required to compute the shadows for the current view, a tiny fraction of the total.
The residency map, on the left, depicts the allocation map; the black region represents high resolution and shades of gray represent progressively lower resolutions of the shadow hierarchy.
There are over 90 million NVIDIA GPUs capable of supporting Tiled Resources.
Is "Tiled Resources" a new DirectX 11.2 feature? I can see this being very useful for terrain in games.
Eh? We test in a closed case. A Thermaltake Spedo, to be precise.
From link above:
Are Kepler sales really that high, or is Fermi supported too?
Mods, please remove the Radeon stuff. This is the Nvidia Kepler thread; in case people didn't notice, it's not for discussing Radeon. Go to the proper thread.
http://blogs.nvidia.com/blog/2013/0...graphics-with-less-memory-at-microsoft-build/
Grr! That looks exactly like my secret tech. It seems that great minds think alike. Virtual shadow mapping will definitely become a very popular technique in the future. I wonder how they are handling the fine-grained culling (or are they just brute-force rendering it, because the scene is so simple).
Could you give any more details about these feature levels, or pointers to more detailed documentation (I didn't find any on the MS site)? Also, Haswell supports the 11_1 feature level. Does this also mean that it supports the same tiled resource features as GCN?

Tiled resources has two tiers: the first requires the DX11_0 hardware feature level as its base, while the second tier requires the DX11_1 feature level as its base.
Looks like any DX11 card will support tiled resources?

That's really not clear. And even if some tiled resources are supported, there are obviously differences in the extent of the support. But details are scarce right now.
http://www.youtube.com/watch?v=EswYdzsHKMc&feature=youtu.be&t=2m45s
typedef struct D3D11_FEATURE_DATA_D3D11_OPTIONS1 {
    D3D11_TILED_RESOURCES_TIER TiledResourcesTier;
    BOOL MinMaxFiltering;
    BOOL ClearViewAlsoSupportsDepthOnlyFormats;
    BOOL MapOnDefaultBuffers;
} D3D11_FEATURE_DATA_D3D11_OPTIONS1;
So if not all DX11.1 GPUs are tier 2, can we still assume all DX11 GPUs are tier 1, particularly older GPUs like Evergreen, Fermi, and Ivy Bridge? Or is that hit or miss as well?

Sounds like no (still no confirmation).
I think only AMD 11.1 cards are tier 2 (i.e. tier 2 = AMD's PRT).
I would think Evergreen is tier 0 = no support. But that's just my feeling.
So if not all DX11.1 GPUs are tier 2, then can we still assume all DX11 GPUs are tier 1, particularly older GPUs like Evergreen, Fermi, and Ivy Bridge? Or is that hit or miss as well?
I guess the question remains whether tier 1 is useful to developers, or would they be interested in tier 2 only.

It is useful, as is evident from the demo running on an nV GPU. The question would be what the restrictions are exactly, and whether devs put in the effort to also support tier 2 to offer higher performance, higher texture resolution, or whatever else one could use it for.