float noisebuffer[256][256];

// Fetch the four neighboring texels with wrap-around addressing.
inline float4 sample(int x, int y)
{
    int x0 = (x + 0) & 255;
    int x1 = (x + 1) & 255;
    int y0 = (y + 0) & 255;
    int y1 = (y + 1) & 255;
    return float4(
        noisebuffer[y0][x0],
        noisebuffer[y0][x1],
        noisebuffer[y1][x0],
        noisebuffer[y1][x1]);
}

// Smoothstep curve: 3v^2 - 2v^3.
static inline float cubic(float v)
{
    return (3.0f - 2.0f * v) * v * v;
}
static float getv(float x, float y)
{
    int ix = static_cast<int>(x);
    int iy = static_cast<int>(y);
    float fx = x - ix;
    float fy = y - iy;
    float4 v = sample(ix, iy);
    float i = cubic(1.0f - fx);
    float j = cubic(fx);
    float s = (v.x * i + v.y * j) * cubic(1.0f - fy) +
              (v.z * i + v.w * j) * cubic(fy);
    return s;
}
This implements one octave of Perlin noise using a 256x256 lookup table. If we replace the cubic() function with lerp(), it's nothing but a BILINEAR texture lookup!
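To make that equivalence concrete, here is a standalone sketch of the same lookup with linear weights instead of cubic() (the float4 struct and sample4() helper are stand-ins so it compiles outside a shader): the structure is identical, and it is exactly a bilinear fetch from the 256x256 table.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in types so the sketch compiles outside a shader.
struct float4 { float x, y, z, w; };

static float noisebuffer[256][256];

// Same four-texel fetch as sample(), with wrap-around addressing.
static float4 sample4(int x, int y)
{
    int x0 = (x + 0) & 255;
    int x1 = (x + 1) & 255;
    int y0 = (y + 0) & 255;
    int y1 = (y + 1) & 255;
    return float4{ noisebuffer[y0][x0], noisebuffer[y0][x1],
                   noisebuffer[y1][x0], noisebuffer[y1][x1] };
}

// Same structure as getv(), but with linear weights: this is
// exactly a bilinear texture fetch from the 256x256 table.
static float getv_bilinear(float x, float y)
{
    int ix = static_cast<int>(x);
    int iy = static_cast<int>(y);
    float fx = x - ix;
    float fy = y - iy;
    float4 v = sample4(ix, iy);
    return (v.x * (1.0f - fx) + v.y * fx) * (1.0f - fy) +
           (v.z * (1.0f - fx) + v.w * fx) * fy;
}
```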
With a programmable sampler one could implement a much wider range of things than just bicubic filtering.
But I said: ignore, because:
1. This can already be done in a pixel program: just take four point samples and do the blending there, no big deal. It is a bit trickier to compute the cubic interpolator over [0.0, 1.0] from fractional texture coordinates than it would be from integer texel coordinates. In hardware, computing the lerp factors is trivial and fast, so I figured those variables would be accessible in a "more convenient, already computed" format, and massaging them through curve re-programming like linear-to-cubic would be feasible, but not fast... (see below)
2. It would most probably be very inefficient to have programmable samplers anyway.
3. The practical benefits of programmable samplers are questionable. Besides this limited case of implementing noise, I can't think of anything else at the moment. Therefore I said: ignore, because I really meant it to be ignored; it was nothing but a mindless rant.
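The "trickier from fractional texture coordinates" part of point 1 amounts to this small remap (a sketch; cubic_weight and texsize are hypothetical names, and a real pixel program would do this per-channel on the interpolated texture coordinate):

```cpp
#include <cassert>
#include <cmath>

// Smoothstep curve: remaps a linear blend factor to the cubic one.
static float cubic(float v)
{
    return (3.0f - 2.0f * v) * v * v;
}

// What a pixel program has to do: recover the fractional texel
// position from a [0,1] texture coordinate (texsize = texture
// dimension, e.g. 256), then reshape it through the curve.
static float cubic_weight(float texcoord, float texsize)
{
    float t = texcoord * texsize;
    float frac = t - std::floor(t);   // fractional texel coordinate
    return cubic(frac);               // linear-to-cubic re-programming
}
```

With integer texel coordinates, frac would be handed to you directly; from normalized coordinates you pay for the multiply and floor first.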
I.e., I had no intention to imply that doing so would be efficient or desirable in hardware. I was just amusing myself, since I had just coded such a noise function on the GPU for in-GPU normal map generation.
Speaking of this normal map generation, the biggest "problem" was the limited precision of pixel shaders. I couldn't use the big range of values I need for the heightfield to generate the noise, so I had to play a lot of numeric value games to get the results I desired. The prototype in C++ was clean, but when I transferred the idea to the GPU... let's just say it wasn't as trivial as the pseudo-code suggested. ;-)
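One typical flavor of such numeric value games (a sketch of the general idea only; the post doesn't say exactly which tricks were used) is keeping shader-side values normalized to [0,1] and expanding them back to world-space range outside the shader:

```cpp
#include <cassert>
#include <cmath>

// Range-compression sketch: limited-precision shader values live in
// [0,1], so a heightfield spanning [minh, maxh] is stored normalized
// and only expanded back to world units where full range is needed.
struct Range { float minh, maxh; };

static float encode_height(float h, Range r)   // world -> [0,1]
{
    return (h - r.minh) / (r.maxh - r.minh);
}

static float decode_height(float e, Range r)   // [0,1] -> world
{
    return r.minh + e * (r.maxh - r.minh);
}
```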
I had this level-of-detail landscape with geomorphing, and the problem with changing vertex density is that the lighting solution changes with it, so the obvious solution was to do lighting in texture space: trilinear filtering guarantees a seamless lighting level-of-detail solution. I was first worried that the lighting would look "weird" when the normal map samples were filtered, but it turned out to look good. Even if it's not "correct", it looks good; moving over the field is very smooth and there are no artifacts, so I'm pleased with the results.
www.liimatta.org/misc/scape1.mpg
It looks stupid as it's currently tiling the same texture over and over, but the normal maps are generated on the GPU now. It used to be very slow, since it takes 3 octaves of noise and the normal maps are 512x512 per texture region (I call them texture regions because the tessellation engine is not really quad based, except where there are texture changes).
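Summing the octaves is the standard fractal-sum loop; a self-contained sketch (the amplitude falloff of 0.5 and frequency doubling are assumptions, the post doesn't give them, and getv_stub stands in for the single-octave getv() above):

```cpp
#include <cassert>
#include <cmath>

// Placeholder single-octave noise so the sketch is self-contained;
// in the real thing this would be the getv() table lookup.
static float getv_stub(float x, float y)
{
    return 0.5f;
}

// Fractal sum: each octave doubles the frequency and halves the
// amplitude (assumed values; tune to taste).
static float fractal(float x, float y, int octaves)
{
    float sum = 0.0f;
    float amplitude = 0.5f;
    float frequency = 1.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum += amplitude * getv_stub(x * frequency, y * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum;
}
```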
The video renders only a 2048x2048 heightfield, which is over the "threshold" of about 1400x1400 where 128 MB of memory runs out for the different resources used for rendering, so the cache and normal map synthesizer are already at work. Texture synthesis is the next step; I could use splatting, but I'd rather use alphamask-based texture composition. The alpha masks are generated on the CPU, as the GPU really runs out of flexibility for that kind of stuff (unless you want something really trivial).
Anyway, I have NO CLUE what the final result will look like; this is just for fun. I am currently not doing any work that actually requires a landscape, nor do I see any such work in the future. So it's just for fun, and I don't have all day to work on this, but it might get somewhere eventually. Or not, but that wouldn't be too bad.
Oh, right, so basically the remark about programmable samplers was just a sarcastic comment based on what I've been doing lately with DX9. Emphasis on SARCASTIC. If someone takes it seriously, that's his problem.