Let's turn the current guessing game around. Or just see this thread as a place to share your thoughts about the way you think GPUs should evolve.

For a start, it is all about benchmarks, isn't it? So, what do you think has to happen to maximize the number of pixels and frames? Better ways to gather the data (textures, mostly), more efficient data structures (like procedural textures), decoupling the quads and efficient branching, or just a much improved way to handle memory access?

But then again, we want quality as well, of course. So, what has to be implemented to get the best images and animations? Storing the whole scene on the GPU and only updating changes? Better and faster AA and AF? Higher (virtual) resolutions, and more and better render targets and ways to combine them? Very shader-heavy pipes and a way to handle textures as functions instead of lookups, with all the memory- and cycle-eating intermediate formats needed to pull that off while still making it look good? Or do we have to look for something new, like REYES?

Ultimately, we want a solution that does both: good quality and blazing speed. Is virtual memory and a very clever memory manager the way to go? Or an extended caching hierarchy, from the on-chip z-buffer and some eDRAM, through some SRAM cache, to memory? Or is adding new features the best route, since hardware solutions to currently hot problems (like HDR and AA) would improve throughput as well?
 
Jew said:
Whatever it takes to get lifelike animation.
Well, that's more a question of content generation and physics than it is of graphics technology.

As far as graphics technology goes, the only thing I can think of that would be nice to have, beyond what DX10 will offer, would be some better shadowing algorithms. Stencil shadow volumes have performance that scales dramatically differently depending on the location of the light source with respect to the geometry, and shadow maps have quality issues in similar situations to where shadow volumes start to perform badly.

So what we need are ways to get efficient and good-looking shadowing for all situations. One possible answer would be the irregular z-buffer that has been discussed on these forums (a search should get you there). Another possible solution may be a move to raytracing engines (it'd be nice to see an investigation on how good DX10 hardware will be at raytracing).
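To make the shadow-map quality issue concrete, here is a minimal CPU-side sketch of the basic depth compare (plain C++, no real GPU API; all the names are made up for illustration). The two artifact sources live right in this little function: the finite resolution of the depth grid and the bias term.

```cpp
// Minimal sketch of the shadow-map depth test (plain C++, no real GPU API,
// all names made up). The point is only that the test quantizes the scene
// into a finite depth grid and needs a bias term.
#include <algorithm>
#include <vector>

struct ShadowMap {
    int width, height;
    std::vector<float> depth;                  // light-space depth per texel
    float at(int x, int y) const { return depth[y * width + x]; }
};

// Returns true if a point (light-space uv in [0,1], z = light-space depth)
// is lit. The two classic artifact sources are visible right here:
//  - finite resolution: many screen pixels map onto one shadow-map texel
//  - bias: too small gives shadow acne, too large makes shadows detach
bool isLit(const ShadowMap& sm, float u, float v, float z, float bias = 0.002f) {
    int x = std::clamp(int(u * sm.width),  0, sm.width  - 1);
    int y = std::clamp(int(v * sm.height), 0, sm.height - 1);
    return z - bias <= sm.at(x, y);
}
```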

But other than that, DX10-level hardware should give us all the programmability we'll need for some time to come.
 
Well, I think Chal and I have discussed this before, from somewhat different perspectives, but I think the development end of things is where the bottleneck is, more and more. The tools aren't good enough.

Though I'd be curious to know at what performance level/memory requirements from here you'd be talking about "indistinguishable from reality" at playable frame rates, limited only by the developers' time/resources to "make it so".

10x R520 and 10GB of memory? I think not, because the massive simulators are beyond that, right? 200x R520 and 1TB of memory? :LOL:
 
I want cards that can handle insane amounts of geometry, so "game" models can disappear and games can use real models. Sick of looking at cheesy models in games, all low polygon with texture tricks to try and make up for it. I don't blame the developers, they can only go with what works, and on the PC there is the lowest common denominator to deal with.

That's my vote, fancy shadows are great, HDR is cool, but you are still looking at crude polygons a lot of the time.
 
Himself said:
I want cards that can handle insane amounts of geometry, so "game" models can disappear and games can use real models. Sick of looking at cheesy models in games, all low polygon with texture tricks to try and make up for it. I don't blame the developers, they can only go with what works, and on the PC there is the lowest common denominator to deal with.

That's my vote, fancy shadows are great, HDR is cool, but you are still looking at crude polygons a lot of the time.

I heard that; seeing the sharp angular profile of some nice displacement-mapped, well-lit model just kills it. Whatever happened to hardware tessellation? Does anyone even use puffyform anymore?

I also enjoyed the physics comment; it would be nice to see some sort of "bone engine" like the Ageia stuff tightly coupled into the vertex front end of a graphics pipeline.
 
The problem with attaching a hardware physics engine to a GPU is basically just that the CPU needs to know the result of the physics calculations. Not that it's undoable...it's just not easy.
 
Can you give an example, Chalnoth? I was thinking you could put rules on the GPU, like gravity, so an object would fall until it hit another object, etc. Why would you need to send the results back to the CPU? If you were moving through a level, collision detection would tell you if you hit the object. There is no reason why you couldn't have some code running on the GPU itself.
 
rwolf said:
Can you give an example, Chalnoth? I was thinking you could put rules on the GPU, like gravity, so an object would fall until it hit another object, etc. Why would you need to send the results back to the CPU? If you were moving through a level, collision detection would tell you if you hit the object. There is no reason why you couldn't have some code running on the GPU itself.
For example, AI entities need to know about the positions of objects in the world. There may also be hardcoded reactions to particular things, such as if button A is depressed (by a player, a rock, or whatever), then explosion B goes off.
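To illustrate the round trip being discussed, here is a rough sketch in plain C++ (all types and functions are hypothetical; no real physics or graphics API). Even if the integration step ran on a GPU or PPU, the gameplay side still consumes the resulting positions every frame, which is why the results have to come back.

```cpp
// Sketch of the round trip: the physics step could run on a GPU/PPU, but AI
// and game logic on the CPU still need the resulting positions each frame.
// All names here are hypothetical.
#include <vector>

struct Vec3 { float x, y, z; };

struct RigidBody {
    Vec3 position;
    Vec3 velocity;
};

// Stand-in for a hardware physics step; imagine this runs on the GPU.
void stepPhysicsOnDevice(std::vector<RigidBody>& bodies, float dt) {
    const float gravityY = -9.81f;
    for (auto& b : bodies) {
        b.velocity.y += gravityY * dt;
        b.position.x += b.velocity.x * dt;
        b.position.y += b.velocity.y * dt;
        b.position.z += b.velocity.z * dt;
        if (b.position.y < 0.0f) {          // crude ground collision
            b.position.y = 0.0f;
            b.velocity.y = 0.0f;
        }
    }
}

// CPU-side game logic that needs the *results* of that step: AI queries,
// trigger volumes ("button A is depressed -> explosion B goes off"), etc.
// Only answerable if the updated positions were read back from the device.
bool buttonPressed(const std::vector<RigidBody>& bodies, Vec3 buttonPos, float radius) {
    for (const auto& b : bodies) {
        float dx = b.position.x - buttonPos.x;
        float dy = b.position.y - buttonPos.y;
        float dz = b.position.z - buttonPos.z;
        if (dx * dx + dy * dy + dz * dz < radius * radius) return true;
    }
    return false;
}
```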
 
What I'd like to see would be more of a combination of software and hardware changes, basically making code easier to write.

An example would be if you wanted to blur a texture. Currently it's not the easiest thing to do, especially if you are using FP pixel formats and can't guarantee filtering. Being able to tell DirectX/GL "here is a filter kernel, apply it to this texture" would be great.
This would basically just be a higher-level API, but I have no problem with that. I would like to be able to render to 16-bit textures without having to do render-target ping-pong when I want to blend on PS2.0 parts. A higher-level lib could handle this, and likely do it better than I could anyway.
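As a rough illustration of what such a "here is a kernel, apply it to this texture" call would hide, here is a CPU reference version in plain C++ (no D3D/GL calls; all names are made up). The GPU equivalent on current hardware is a pixel-shader pass per step, with the render-target ping-pong mentioned above when FP blending and filtering aren't available.

```cpp
// CPU reference for "apply this filter kernel to this texture" -- the kind of
// one-call operation a higher-level API could expose. Plain C++, no D3D/GL.
#include <algorithm>
#include <vector>

struct FloatTexture {
    int width, height;
    std::vector<float> texels;               // single-channel FP texture
    float at(int x, int y) const {           // clamp addressing at the edges
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x];
    }
};

// Applies an arbitrary (2r+1) x (2r+1) kernel. On current hardware the FP
// formats often can't be filtered or blended, so the GPU version becomes a
// full pixel-shader pass, ping-ponging between render targets.
FloatTexture applyKernel(const FloatTexture& src,
                         const std::vector<float>& kernel, int radius) {
    FloatTexture dst{src.width, src.height,
                     std::vector<float>(src.texels.size(), 0.0f)};
    int size = 2 * radius + 1;
    for (int y = 0; y < src.height; ++y)
        for (int x = 0; x < src.width; ++x) {
            float sum = 0.0f;
            for (int ky = -radius; ky <= radius; ++ky)
                for (int kx = -radius; kx <= radius; ++kx)
                    sum += kernel[(ky + radius) * size + (kx + radius)]
                           * src.at(x + kx, y + ky);
            dst.texels[y * src.width + x] = sum;
        }
    return dst;
}
```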

Something that would solve countless problems is having a pixel shader register with the destination colour. Easier write-to / read-from depth buffers would be nice too, i.e., treat depth as a texture. (I know NVIDIA has a GL extension to do this last one.)


One thing that *really* needs work is shader fragmenting. DX has the shader fragment linker class, which *is* very useful, but the syntax and usage are a bit of a mess. It's got to the extent where in my current personal project I've actually written *my own* shader fragment language; I shouldn't have to do that! (although it was fun ;)). It's utterly ridiculous: for example, if you go into Battlefield 2's shader cache directory, *bam*, something like 100 MB of cached shaders, all very slight variations. Utterly ridiculous. Far Cry can apparently generate about 2 GB of shaders, when maybe 10k of fragments would work just as well, if not better. *sigh*
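For what it's worth, here is a toy sketch of the fragment idea in plain C++ string assembly (this is not the D3DX fragment-linker API, just the concept, and all the fragment names are made up): a handful of small fragments keyed by feature flags, composed on demand instead of caching every full permutation.

```cpp
// Toy illustration of fragment composition: a few small reusable fragments
// plus a key built from feature flags, instead of caching every permutation.
// Not the D3DX fragment-linker API -- just the idea behind it.
#include <map>
#include <string>

enum Feature { NORMAL_MAP = 1, SPECULAR = 2, FOG = 4, SHADOW = 8 };

// Tens of small reusable fragments (hypothetical shader snippets)...
const std::map<Feature, std::string> kFragments = {
    {NORMAL_MAP, "color *= sampleNormalMapLighting(uv);\n"},
    {SPECULAR,   "color += specularTerm(normal, viewDir);\n"},
    {FOG,        "color = applyFog(color, depth);\n"},
    {SHADOW,     "color *= shadowFactor(shadowUV);\n"},
};

// ...composed on demand into the few variants a level actually uses, rather
// than pre-generating (and caching) every combination up front.
std::string buildShader(unsigned featureMask) {
    std::string body = "float4 main(Input i) : COLOR {\n"
                       "  float4 color = baseColor(i.uv);\n";
    for (const auto& [feature, code] : kFragments)
        if (featureMask & feature) body += "  " + code;
    body += "  return color;\n}\n";
    return body;
}
```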


Also, bloody capabilities too. If a card is "DirectX 9 compliant" I shouldn't have to check whether it supports X, Y, or Z ability, or A, B, or C texture format, etc. I really look forward to DX10 finally fixing this.


I heard that; seeing the sharp angular profile of some nice displacement-mapped, well-lit model just kills it. Whatever happened to hardware tessellation? Does anyone even use puffyform anymore?

As far as I know, at least the X360 and maybe DX10 can do "dynamic" tessellation, so edge geometry can be highly tessellated, etc. Not sure of the details though.
 
It's nice to see the progress in GPUs. We are not far away from a geometry processor, and possibly some flow control of the outputs from various stages.

Too many methods have come up for shadows and soft shadows, but none of them seem to be really satisfactory. I wonder if it will ever really be solved, since they have been used for so long in offline rendering too, and they had to be tweaked all the time.

I would like to see new types of storage for texture-like info. Instead of the regular grid we have, maybe have some sort of linked list, which would support, for example, sparse samples or a variable sampling rate of the data. This is not to replace traditional textures, but something that is needed in addition. 3D textures could also become usable if the data is sparsely distributed. Extending this, we could also render multiple z layers using render-to-sparse-3D-texture, and use that for transparent shadow maps. I think this type of texture would be useful as intermediate storage for things which may be sparsely distributed. It could be faster than ordinary textures in those cases, as it would take less memory and it would be faster to find the next element.
 
krychek said:
I would like to see new types of storage for texture-like info. Instead of the regular grid we have, maybe have some sort of linked list, which would support, for example, sparse samples or a variable sampling rate of the data.
I'm not sure a linked list makes sense in the context of graphics. Linked lists are meant to be traversed linearly, and you're just not going to do more than a couple of steps down that list in a normal shader program. Regular grids make much more sense for performance because you can randomly access any point in the texture.
 
Yes, of course current regular-grid textures are good for random access. But they are bad if you want to find certain elements satisfying some criteria. And if the density of the information varies, then with regular-grid textures the storage would need to be allocated at maximum density everywhere, and we have to search through the pixels to get the relevant ones.

For things like translucent shadow maps or order-independent transparency, we could have something like a regular grid texture, except that at each pixel location we have a list of elements instead of a single pixel. The requirements being that all the elements of a list are stored close to each other in memory, and that this texture can be rendered to. The image is broken down into small tiles, so to find a list you can directly find the tile first and then retrieve the elements.

Then again, I am not sure exactly how many methods would really benefit from this to justify the implementation cost :).
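A rough sketch of this kind of layout in plain C++ (all names hypothetical): the regular grid stays, but each texel stores an offset and count into one packed pool of samples, so sparse data doesn't pay for maximum density everywhere and the per-texel lists sit contiguously in memory.

```cpp
// Rough sketch of the structure being described (hypothetical names): a
// regular grid still gives O(1) lookup, but each cell holds a variable-length
// list of samples packed contiguously in one pool, so sparse data doesn't pay
// for maximum density everywhere.
#include <cstdint>
#include <vector>

struct DepthSample {
    float depth;
    float alpha;        // e.g. for translucent shadow maps / OIT layers
};

struct SparseLayerTexture {
    int width, height;
    std::vector<uint32_t> offset;     // per-texel start index into 'pool'
    std::vector<uint32_t> count;      // per-texel number of samples
    std::vector<DepthSample> pool;    // all samples, packed tile by tile

    // Random access into the grid stays cheap; walking the short list for
    // one texel touches memory that sits together in the pool.
    const DepthSample* begin(int x, int y) const {
        return pool.data() + offset[y * width + x];
    }
    uint32_t samples(int x, int y) const {
        return count[y * width + x];
    }
};
```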
 
Oh, yeah, I think that makes sense. But the primary thing you need is just the ability to store pointers to textures. At a low level, you'd still be best off storing textures. Higher-level tools could then use data textures, and textures of pointers to those data textures, to implement higher constructs.
 
Himself said:
I want cards that can handle insane amounts of geometry, so "game" models can disappear and games can use real models. Sick of looking at cheesy models in games, all low polygon with texture tricks to try and make up for it. I don't blame the developers, they can only go with what works, and on the PC there is the lowest common denominator to deal with.

That's my vote, fancy shadows are great, HDR is cool, but you are still looking at crude polygons a lot of the time.


You said it - low poly sucks. And I really loathe the fake shaded textures that developers plaster all over surfaces. As for "indistinguishable from reality", needless to say that's pretty much impossible to define, but there is one way of looking at it. Regardless of what NV and ATI might say, current technology is miles away from rendering, say, the Final Fantasy movie in real time. And in turn that movie is miles away from "indistinguishable from reality" (the sheer perceivable detail of the real world is, er, quite a lot). Thus, I conclude that we are currently 2x miles away from "indistinguishable from reality". Case closed, I think.
 
Well, I'd say the Final Fantasy movie was far from indistinguishable from reality only because of the animation. The rendering tech was quite good.
 