Hardcoded and Shaders ?

V3

Veteran
Yes, but there are tons of things that are hardcoded into the R300. There are also tons of bandwidth-saving tech, FSAA tech, aniso tech, bump mapping, displacement mapping and other stuff that is all hardcoded into the chip. It makes designing games faster, and many times the performance of those effects is better in hardware than in software.

From Jvd's quote on another thread, I am wondering, for Xbox 2, which graphics features should be hardcoded and which ones should be done via shaders. Your thoughts?

For example, displacement mapping on R300: I always thought it's done through shaders, but I guess I was wrong, since according to Jvd the R300 has a hardcoded displacement mapping unit.
 
V3 said:
Yes, but there are tons of things that are hardcoded into the R300. There are also tons of bandwidth-saving tech, FSAA tech, aniso tech, bump mapping, displacement mapping and other stuff that is all hardcoded into the chip. It makes designing games faster, and many times the performance of those effects is better in hardware than in software.

From Jvd's quote on another thread, I am wondering, for Xbox 2, which graphics features should be hardcoded and which ones should be done via shaders. Your thoughts?

For example, displacement mapping on R300: I always thought it's done through shaders, but I guess I was wrong, since according to Jvd the R300 has a hardcoded displacement mapping unit.
I'm pretty sure it has hardware displacement mapping, which is how they fixed the FSAA issue with Half-Life 2. Nvidia has to emulate it through the pixel shaders, causing an even bigger performance hit.
 
anyway i'm wondering too which aspects of the graphics are going to be done "in software" and which will be done "in hardware" in the next generation... guess we'll have to wait and see...

i thought the R300 did NOT have HW displacement, which is "exclusive" to the Matrox Parelia (i know there's an H somewhere in the name but i can never remember where)... any clarification?
 
london-boy said:
i thought the R300 did NOT have HW displacement, which is "exclusive" to the Matrox Parelia (i know there's an H somewhere in the name but i can never remember where)... any clarification?


According to this article it uses both hardware and software.


Displacement Mapping was first shown to us by Matrox with the Parhelia. Essentially, since this is now part of the DirectX 9 feature set, ATi supports it with the R300. We were told that there is hardware support on chip but that some of it has to be done in software, since part of the technology is Matrox proprietary. Regardless, we were assured that the performance was more than acceptable with Displacement Mapping technology running on the Radeon 9700.
 
I'd think AA and various filtering (anisotropic, bi-/trilinear) would be good if done in hardware.
Those functions that mostly affect image quality, and are more or less 'standard' in every game, I can't imagine benefiting from being programmable.
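Filtering of this kind is conceptually simple, which is part of why it suits fixed-function hardware. As a rough illustration (plain Python with hypothetical names, not how any GPU actually implements it), bilinear filtering just blends the four nearest texels:

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D texture (list of rows of floats) at continuous
    texel coordinates (u, v) by blending the four nearest texels."""
    h, w = len(texture), len(texture[0])
    # Clamp so the 2x2 neighbourhood stays inside the texture.
    u = max(0.0, min(u, w - 1.0001))
    v = max(0.0, min(v, h - 1.0001))
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    # Blend horizontally on the two rows, then vertically between them.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x0 + 1] * fx
    bot = texture[y0 + 1][x0] * (1 - fx) + texture[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

Trilinear filtering then blends two such samples taken from adjacent mipmap levels, and anisotropic filtering takes several along the axis of distortion.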
 
According to OpenGL Guy, R300 does EMBM/dot-product bump mapping through pixel shaders, and it does not have the per-pixel displacement map tech the Parhelia does; it only supports a simpler variety (the same technique the NV3x uses). This is not due to some aspects being Matrox-proprietary (how could it be, when the tech is included in DX?), but simply due to the fact that ATI and NV skimped on it on purpose, since they figured it wouldn't be much used by devs anyway, I'm willing to wager.

Otoh, not even the Parhelia supports the Parhelia method, since there's apparently no driver support for the feature! :LOL:


*G*
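The dot-product bump mapping Grall mentions reduces, per pixel, to a dot product between the light vector and a normal fetched from a normal map. A minimal sketch of the math (plain Python, not real shader code; vectors are assumed normalized):

```python
def dot3_bump(normal, light_dir):
    """Per-pixel diffuse term for DOT3 bump mapping:
    N . L, clamped to [0, 1]. `normal` comes from a normal map,
    `light_dir` points toward the light; both are (x, y, z) tuples."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))
```

Doing this in a pixel shader rather than a dedicated unit is exactly the "flexible hardware" trade-off discussed in this thread.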
 
I guess features such as filtering, AA, and the like will remain hardcoded for some time. The image quality of these features will only get better as more advanced filtering and AA become supported in hardware.

Many things will go to shaders in the future, especially when considering next-gen systems. I'm not sure if you (V3) meant only pixel shaders, or vertex shaders, or both. It seems all geometry and lighting calculations are moving to the shaders. Bump mapping and the like are now done on shaders.

It seems the R500 will have a very competent set of shaders, vertex and pixel, and will be able to do many things on the shaders that used to be, or still are, hardcoded. I guess in a sense it's still done in hardware; it's just that the shaders are more flexible hardware. Maybe better methods of filtering can even be done on shaders in the future, but for right now that is beyond me.
 
would it be safe to say that we're moving to architectures where the line between software and hardware rendering is somewhat blurred?

and that Ps2 was the "victim" of a very early vision of this from Sony?

("victim" as in, to make up for the "vision" it had to compromise on some valuable aspects, personally i'm very happy with what i'm seeing now)
 
I'm not sure about that question, london-boy. The line is getting blurred for sure, but shaders used to be done in software by powerful workstations and render farms, and still are. Now they are being done in hardware, albeit in a very limited way compared to what is required for a movie. We have hardware capable of running shaders, but the actual functionality of the effect or feature isn't done in hardware. I guess that would mean the line is blurring. The PS2 may be an example of this, but the two differ greatly. PS3's concept seems to be heading this way, but on a more grandiose scale.
 
Sonic said:
I'm not sure about that question, london-boy. The line is getting blurred for sure, but shaders used to be done in software by powerful workstations and render farms, and still are. Now they are being done in hardware, albeit in a very limited way compared to what is required for a movie. We have hardware capable of running shaders, but the actual functionality of the effect or feature isn't done in hardware. I guess that would mean the line is blurring. The PS2 may be an example of this, but the two differ greatly. PS3's concept seems to be heading this way, but on a more grandiose scale.


well, i mean that this generation it all came down to the "quick fix" Hardware features provided, with the Xbox leading in the graphics department mostly because of its hardware functionality, but next generation it looks like all consoles will head to this "software-driven" DX10 approach.

what will decide the "best" (whatever that means) graphics performance then?
 
We'll have to wait and see for more information regarding the graphics technologies going into next-gen systems. We have more solid info on the R500 right now than we do for the GPUs going into the N5 and PS3. All we really have is speculation and discussion. I doubt everything will be completely level in the shading departments. There are still other things to be implemented, like a primitive processor. I for one would like to see other rendering methods tried or experimented with in the next wave of consoles.

Is it possible for shaders to speed up things like voxels? Not so sure about NURBS either, that would be lovely to see in real time.
 
Sonic said:
We'll have to wait and see for more information regarding the graphics technologies going into next-gen systems. We have more solid info on the R500 right now than we do for the GPUs going into the N5 and PS3. All we really have is speculation and discussion. I doubt everything will be completely level in the shading departments. There are still other things to be implemented, like a primitive processor. I for one would like to see other rendering methods tried or experimented with in the next wave of consoles.

Is it possible for shaders to speed up things like voxels? Not so sure about NURBS either, that would be lovely to see in real time.


I thought NURBS were already "feasible" to some extent on this gen of consoles. Of course never used, but hey... also, don't NURBS create problems when "breaking" the models?
Also, with all this talk of Primitive Processors, especially on the 3D Graphics Cards boards, i still don't really know what one is and what it does... help anyone?
 
london-boy said:
.... what will decide the "best" (whatever that means) graphics performance then?
The platform with the most programmable calculation resources, backed up by loads of bandwidth and fast memory?
 
I'm pretty sure it has hardware displacement mapping, which is how they fixed the FSAA issue with Half-Life 2. Nvidia has to emulate it through the pixel shaders, causing an even bigger performance hit.

I used the search function on this forum and found the thread

Link here

I'll quote Humus

If there's still any doubt, I sent an email to ATi asking about this and got it confirmed that the 9700 does NOT support any other form of DM than presampled.

R300 doesn't seem to have a dedicated displacement mapping unit.
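For what it's worth, the "presampled" displacement mapping Humus refers to boils down to offsetting each existing vertex along its normal by a height value sampled from the map ahead of time, with no on-chip tessellation. A rough sketch (plain Python, hypothetical names):

```python
def displace_vertices(vertices, normals, heights, scale=1.0):
    """Offset each vertex along its (assumed normalized) normal by its
    presampled height value. One height per vertex: the mesh is not
    refined, which is what separates this from Parhelia-style DM."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        out.append((vx + nx * h * scale,
                    vy + ny * h * scale,
                    vz + nz * h * scale))
    return out
```
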

As for Half Life 2

This interview

ZEN:
Although the displacement mapping can be accelerated by ATI RADEON 9500 or greater using TruForm 2.0, does the game utilize this function?

BVB:
No. The function is processed by the software.

So I've no idea what you're talking about.
 
I guess features such as filtering, AI, and the like will remain hardcoded for some time. The image quality of these features will only get better as more advanced filtering and AA become supported in hardware.

Doesn't the R300 already support some post-processing, which you can use to run filters you created?
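Post-processing of that sort usually means rendering the frame to a texture and running a filter kernel over it in a pixel shader. As a rough CPU-side sketch of the idea (plain Python; a 3x3 convolution, the building block of blur and sharpen filters):

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 `kernel` to every interior pixel of `image`
    (a list of rows of floats); border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(image[y + dy][x + dx] * kernel[dy + 1][dx + 1]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out
```

On a GPU the same weighted sum would run once per pixel in the shader, with the neighbours fetched as texture samples.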

Many things will go to shaders in the future, especially when considering next-gen systems. I'm not sure if you (V3) meant only pixel shaders, or vertex shaders, or both. It seems all geometry and lighting calculations are moving to the shaders. Bump mapping and the like are now done on shaders.

Well, with the talk about DX10 uniting vertex and pixel shaders under one roof, I thought I'd just call them shader units.

It seems the R500 will have a very competent set of shaders, vertex and pixel, and will be able to do many things on the shaders that used to be, or still are, hardcoded. I guess in a sense it's still done in hardware; it's just that the shaders are more flexible hardware. Maybe better methods of filtering can even be done on shaders in the future, but for right now that is beyond me.

So basically what you're saying is, all ATI has to do is concentrate on increasing their shader units' execution speed (and maybe increase the available number of shader units), and they will be set for Xbox 2?
 
Sort of. ATI still needs to develop the shaders further, adding functionality to be compliant and compatible with shader model 3.0 and even 4.0. But it seems to me that ATI doesn't have problems getting a good amount of performance from their shaders, and will have a gradual progression that will be very strong when the R500 comes along. To me it does seem like ATI will have strong shaders for the R500, with much higher functionality and performance than the R300. Bah, I am getting confused on my own.
 
london-boy said:
Also, with all this talk of Primitive Processors, especially on the 3D Graphics Cards boards, i still don't really know what one is and what it does... help anyone?
One use of a Programmable Primitive Processor is as a compression mechanism. The app sends a few vertices to the GPU, where they are tessellated into smaller triangles. TruForm is an example of this that is not programmable.
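The tessellation step can be pictured as repeatedly splitting each triangle at its edge midpoints. A toy sketch of one one-to-four subdivision step (plain Python, hypothetical names; real tessellators like TruForm also curve the new vertices using the normals):

```python
def midpoint(a, b):
    """Midpoint of two vertices given as coordinate tuples."""
    return tuple((p + q) / 2.0 for p, q in zip(a, b))

def subdivide(tri):
    """Split one triangle (three vertex tuples) into four smaller
    triangles at its edge midpoints, as a tessellator might."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

The "compression" angle is that the app only ever sends the three outer vertices; each subdivision level quadruples the triangle count on-chip without costing any extra bus bandwidth.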
 
london-boy said:
Sonic said:
I'm not sure about that question, london-boy. The line is getting blurred for sure, but shaders used to be done in software by powerful workstations and render farms, and still are. Now they are being done in hardware, albeit in a very limited way compared to what is required for a movie. We have hardware capable of running shaders, but the actual functionality of the effect or feature isn't done in hardware. I guess that would mean the line is blurring. The PS2 may be an example of this, but the two differ greatly. PS3's concept seems to be heading this way, but on a more grandiose scale.


well, i mean that this generation it all came down to the "quick fix" Hardware features provided, with the Xbox leading in the graphics department mostly because of its hardware functionality, but next generation it looks like all consoles will head to this "software-driven" DX10 approach.
what will decide the "best" (whatever that means) graphics performance then?

DX10 (Xbox 2) vs. OpenGL 2.0 (N5), the eternal question. Personally I prefer OpenGL and its flexibility, for the simple fact that if MS ever makes a bad decision on the DX API feature set, it is locked that way for approximately a year. Or no set API at all, which is what the PS3 features, correct?
RAM amounts and types, bandwidth, efficiency, ease of development and performance extraction, respective CPU capability and various calculation speeds, among other things, will determine the best overall graphics-capable machine.
 