What is stopping Next-Gen developers from using High-Res textures?

see colon said:
the size of a texture is determined by its surface area. it's easy: you multiply the x by the y and you get the surface area.

Doesn't matter what's mathematically correct; take any video player, ask it for 200%, and you'll get twice the width and height, making it effectively 400% surface area. Yet each & every app I tried says 200% or double...

VLC : "double size"
Media Player Classic : zoom - "200%"
Windows Media Player : Video Size - "200%"
 
200% refers to a single dimension. In a quadrilateral, the area increase under a uniform scale is the square of the scaling factor. That's 200% squared, which is 400%. In other words, doubling the size of your texture on both axes creates a 4x increase in memory demands. And this mathematical proof shows why we don't get higher-res textures. ;)
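As a quick sanity check on the area arithmetic, here is the calculation in code, assuming a 32-bit RGBA format (the format and sizes are illustrative assumptions):

```python
# Memory cost of an uncompressed texture before and after doubling each axis.
BYTES_PER_TEXEL = 4  # RGBA8, 32 bits per texel (assumed format)

def texture_bytes(width, height):
    return width * height * BYTES_PER_TEXEL

base = texture_bytes(1024, 1024)     # 4 MiB
doubled = texture_bytes(2048, 2048)  # 16 MiB
print(doubled / base)  # 4.0 -> doubling each axis quadruples memory
```

So each halving of texel size costs 4x the storage, which compounds quickly across a whole texture set.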
 
see colon said:
i learned how to calculate the surface area of a quadrilateral in 4th grade.

But artists aren't good at math. Hence the incorrect convention.
 
I think the biggest paradigm shift will be the migration from prerendered textures to shader-generated procedural textures. In many cases, they are way more realistic than hi-res textures, except for real-world stuff such as painted text, photographs and screenshots.

Just look at shader-enhanced surfaces (the sea, the rocks and the dragon) in the 3DMark 2005 demo scene and then compare them to textured objects like the ship and the crew... yuck! Shaders rule... GO ATI GO! ;)
 
Shifty Geezer said:
200% refers to a single dimension. In a quadrilateral, the area increase under a uniform scale is the square of the scaling factor. That's 200% squared, which is 400%. In other words, doubling the size of your texture on both axes creates a 4x increase in memory demands. And this mathematical proof shows why we don't get higher-res textures. ;)

You guys are starting to make me weep bitter tears of melancholy. No wonder I had so many kids in 108 last semester...
 
I think the biggest paradigm shift will be the migration from prerendered textures to shader-generated procedural textures. In many cases, they are way more realistic than hi-res textures, except for real-world stuff such as painted text, photographs and screenshots.
The latency associated with texture reads is usually pretty well covered up. Computational shader instructions for a procedural texture translate directly to cost. There's not a whole lot you can do cheaply, assuming that the texture generation is simply one phase of a whole mess of other junk including all your lighting, scattering, convolutions, multisampling, etc. You'd have to take into consideration how much everything else you have to do anyway is costing you before you decide to procedurally texture.
 
Couldn't you alleviate some of that cost if you generated the textures on the CPU? I mean with respect to the shader load already on the GPU.
 
How would you feed the texture units from the CPU? The GPU requests a texel coordinate and fetches it, right? So the texel has to be available. For the CPU to feed the GPU directly, you'd need it to be synchronised, producing the data for the texels on request. I imagine procedural texture creation will need writing to a texture buffer in RAM to be read. In the case of PS3 this could be stored in LS if RSX can read that directly. Overall I'm unclear how CPU-generated procedural textures are to be implemented.
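One way to picture the "write to a texture buffer in RAM" step is a minimal CPU-side sketch: the CPU fills a buffer in memory that the GPU would later sample as an ordinary texture. The noise function and texture size below are made up for illustration, not taken from any real engine:

```python
import math

def value_noise(x, y):
    # Cheap hash-style pseudo-noise in [0, 1); purely illustrative.
    n = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return n - math.floor(n)

def generate_texture(size):
    # One byte per texel (a grayscale map); a real engine would write RGBA.
    buf = bytearray(size * size)
    for y in range(size):
        for x in range(size):
            buf[y * size + x] = int(value_noise(x * 0.1, y * 0.1) * 255)
    return buf

texels = generate_texture(256)  # 64 KiB buffer, ready to be uploaded for sampling
```

The key point the sketch makes concrete: the CPU's output still has to land in some memory pool the GPU can address before any texture unit can fetch from it.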
 
Well I certainly wouldn't know :)

However, I think MS has actually been more vocal about procedural texture generation with Xenon than Sony has been with Cell. Referring to MS's "procedural synthesis" claims, it seems fair to think they have some idea as to how such a thing would be accomplished. Maybe it's a similar method, where a portion of Xenon's cache is locked off and Xenos can read directly from it... or something.

I wonder if 10.8 GB/s read/write is really sufficient for this to work between Xenon and Xenos. I mean, Xenon could consume half of that for its own needs, but then again, if only some textures are generated procedurally, and only the portions of the texture you immediately need, maybe what's left over is enough to get the job done.

Cell and RSX seem to be in a better position, at least as far as bandwidth goes. To RSX, Cell's LS could appear like another memory pool for texture data, with 25.6 GB/s of bandwidth to it.

I still don't get how it's going to work, though, unless the textures can be generated really fast by both CPUs. With limited space in the LS and cache, it would seem you'd be forced to wait for a query and then generate the texture data as needed.

Writing the data to memory seemingly defeats the purpose of procedurally generated textures as far as memory space is concerned, but perhaps they can be generated and purged faster than they could be streamed off the game disc. Going to memory incurs latency somewhere, though...

I'm just confusing myself. Someone smarter than me want to take a stab at how that could actually work?
 
Couldn't you alleviate some of that cost if you generated the textures on the CPU? I mean with respect to the shader load already on the GPU.
Well, with respect to shader load, sure... and for more complicated procedural textures, it would be a lot more powerful... but that doesn't really buy you much of anything, since doing it on the CPU means buffering it off in memory as a texture and still relying on that CPU<->GPU bandwidth to transfer the data. So there's no real gain between doing that and simply having a high-res texture, other than the fact that you won't have to load it from a slow optical drive.

The main reason you'd want to have procedural textures done in the GPU is because it wouldn't be affected by resolution limitations. That's not an advantage you get doing it on the CPU since you have to store it somewhere anyway so that the GPU can look it up.

Granted, I would probably still do that for things that persist, pervade the viewpoints, and have only to be calculated once -- e.g. clouds on a skydome.
 
scificube said:
Well I certainly wouldn't know :)

However, I think MS has actually been more vocal about procedural texture generation
But not on the CPU!
 
ShootMyMonkey said:
There's not a whole lot you can do cheaply assuming that the texture generation is simply one phase of a whole mess of other junk including all your lighting, scattering, convolutions, multisampling, etc.
I admit I was rather thinking about custom processing techniques: something that replaces traditional magnification filters with a shader that's more appropriate for a given kind of surface, especially a dynamic one. That way you can get away with low-res textures. I'm no graphic artist, but I believe it was done with the Canyon scene from 3DMark 2005 - try setting a lower resolution and you'll see that the sea and the rocks look heavily pixelated, while at 1024x768 they look very natural.

BTW, 3DMark 2006 features the same scene, which seems to have been greatly improved on the artwork side, so now there's less distinction from traditionally textured objects... http://www.pcper.com/article.php?aid=199&type=expert
 
This has to be the most natural-looking light I've seen EVER....

snow_big.jpg


Could the next-gen consoles do that in a game environment???
 
Hardknock said:
Parallax Mapping ;)
!eVo!-X Ant UK said:
Detail Texturing :)

Detail bump mapping uses 2 normal maps to perform lighting calculations. Just like in vanilla detail mapping, one texture represents low-frequency detail and covers a larger area of an object, and the second texture represents finer, high-frequency detail and is tiled across the surface of the object. By combining these two normal maps you get an illusion of more detail variation across a surface, just like with detail mapping.
Parallax mapping can be added just like with usual bump mapping. Height maps for parallax mapping are usually smooth, so they can be easily combined with low-frequency bump maps.
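To make the "combine two normal maps" step concrete, here is a minimal sketch of one common approximation (add the xy components and renormalize); it is one way to blend tangent-space normals, not the only one, and the sample vectors are made up:

```python
import math

def combine_normals(base, detail):
    # base and detail are unit (x, y, z) tangent-space normals.
    # The detail map perturbs only the xy slope of the base normal.
    x = base[0] + detail[0]
    y = base[1] + detail[1]
    z = base[2]
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A flat base normal tilted slightly by a high-frequency detail normal:
n = combine_normals((0.0, 0.0, 1.0), (0.3, 0.1, 0.95))
```

In a shader this would run per pixel, with the detail map sampled at a higher tiling rate than the base map.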
 
Alstrong said:
I see text graphics... someone have a "non-stolen" pic? ;)
Just visit the link above (the server is configured so that the referring page must be from the same site).
 
Simon F said:
But not on the CPU!

You sure about that? I seem to recall them explicitly noting this is something their CPU would be good at and, furthermore, has been tweaked to be able to do. I don't know much about them, but perhaps the DX instructions in the CPU's ISA may shed some light on this as well.
 