Unreal Engine 3: will there be a revolution in rendering???

There is a big diversity of technologies being used now to improve the graphical side of things. But they all really diverge and could be called hacks. Of course, when you combine them all it looks revolutionary, but frankly it's not. We've seen it all before, just separately, in tech demos.

To have something revolutionary we have to go back to the source and use one approach that does it all. One step in the right direction would be ray tracing, where the mentioned hacks (shadows, environment mapping/reflection, relief mapping, etc.) all fit into the same elegant algorithm. Curved surfaces, anti-aliasing and geometric operations can all be done at a lower relative cost than with rasterization. It's not a silver bullet, but at least we could call it revolutionary.
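To make the "same elegant algorithm" point concrete, here is a minimal toy sketch (my own Python, nothing to do with UE3 or any real renderer, scene and numbers made up): shadows and reflections are just more calls into the same trace() function, instead of separate shadow-map and environment-map passes.

```python
# Minimal toy ray tracer (not any engine's code): shadows and reflections fall
# out of the same recursive trace, no separate shadow-map / env-map hacks.
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def mul(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a):   return mul(a, 1.0 / math.sqrt(dot(a, a)))

SPHERES = [((0.0, 0.0, 5.0), 1.0, 0.8),   # (center, radius, reflectivity)
           ((2.0, 0.0, 6.0), 1.0, 0.0)]
LIGHT = (5.0, 5.0, 0.0)

def intersect(origin, direction):
    """Return (t, sphere) of the nearest hit in front of the origin, or (None, None)."""
    best = (None, None)
    for sph in SPHERES:
        center, radius, _ = sph
        oc = sub(origin, center)
        b = 2.0 * dot(oc, direction)
        disc = b * b - 4.0 * (dot(oc, oc) - radius * radius)
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-4 and (best[0] is None or t < best[0]):
            best = (t, sph)
    return best

def trace(origin, direction, depth=0):
    t, sph = intersect(origin, direction)
    if sph is None:
        return 0.1                                   # background intensity
    center, _, reflectivity = sph
    hit = add(origin, mul(direction, t))
    n = norm(sub(hit, center))
    to_light = norm(sub(LIGHT, hit))

    # Shadows: just another ray, fired towards the light.
    shadowed = intersect(add(hit, mul(n, 1e-3)), to_light)[0] is not None
    color = 0.0 if shadowed else max(dot(n, to_light), 0.0)

    # Reflections: the same trace() again, along the mirrored direction.
    if reflectivity > 0.0 and depth < 2:
        mirrored = sub(direction, mul(n, 2.0 * dot(direction, n)))
        color = (1.0 - reflectivity) * color + reflectivity * trace(hit, norm(mirrored), depth + 1)
    return color

# One primary ray through the middle of the image:
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

Refraction, soft shadows and so on slot in the same way, which is the elegance being argued for here.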
 
I agree with you. I think the GeForce 256/3 and Radeon 9700 were the revolutions in graphics hardware, just like Doom 3 and UE3.
 
wzh100 said:
I agree with you. I think the GeForce 256/3 and Radeon 9700 were the revolutions in graphics hardware, just like Doom 3 and UE3.
GF3 wasn't that revolutionary. If you remember, GF256 already supported a primitive form of pixel shaders. So GF3 was more of an advanced GF256 than something revolutionary.

I fully agree with R300 though, but not for SM2 support (imho SM2 support in NV30 was way more revolutionary than in R300, but it was simply f*cked up during the design of the chip). R300 was simply the best 3D-chip design ever to come out of ATI or NVIDIA. A great balance between functionality and speed.
 
Are we comparing a game that we can play now with one that will only ship in 2006?! :?

Either way, Doom3 already has support for HDR processing. But not everyone has a 6800... :rolleyes: In 2006, that can be a different story... :D
 
Sigma said:
Either way, Doom3 already has support for HDR processing. But not everyone has a 6800... :rolleyes: In 2006, that can be a different story... :D
No, it doesn't. The r_hdr_ extensions are disabled in Doom 3; they are experimental. Even a 6800 won't help here. Trust me, I tried :)
 
DegustatoR said:
GF3 wasn't that revolutionary. If you remember, GF256 already supported a primitive form of pixel shaders. So GF3 was more of an advanced GF256 than something revolutionary.
The GeForce 256 supported some (limited) ALU ops that could modify the output.

The GeForce3 supported operations that could change how textures were addressed (not to mention more ALU ops). That is a very significant difference.
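To put that in toy form (just a Python analogy, not real register-combiner or texture-shader code, and the textures/offsets are made up): on the GF256 class of hardware the texture coordinates are whatever the rasterizer handed you and the "shading" is only arithmetic on the fetched results, whereas on GF3 the result of one fetch can become the address of the next.

```python
# Toy analogy (made up, not actual NVIDIA combiner/texture-shader syntax).
# A 'texture' here is just a 2D grid sampled with wrap-around addressing.

def sample(texture, u, v):
    """Nearest-neighbour fetch with wrap-around, GL_REPEAT style."""
    h, w = len(texture), len(texture[0])
    return texture[int(v * h) % h][int(u * w) % w]

base = [[0.2, 0.4],                      # greyscale colour map
        [0.6, 0.8]]
bump = [[(0.5, 0.0), (0.0, 0.5)],        # per-texel (du, dv) offsets (exaggerated)
        [(0.0, 0.0), (0.5, 0.5)]]

u, v = 0.25, 0.25

# GeForce 256 style: every fetch uses the coordinates the rasterizer gave us;
# the combiners can only do arithmetic (modulate, add, ...) on the results.
color_gf256 = sample(base, u, v) * 0.5 + 0.1

# GeForce3 style: the first fetch *changes the address* of the second one
# (EMBM / offset-mapping style dependent texture read).
du, dv = sample(bump, u, v)
color_gf3 = sample(base, u + du, v + dv)

print(color_gf256, color_gf3)
```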

R300 was simply the best 3D-chip design ever to come out of ATI or NVIDIA. A great balance between functionality and speed.
I think the NV40 is vastly better than the R300.
 
Chalnoth said:
The GeForce 256 supported some (limited) ALU ops that could modify the output.

The GeForce3 supported operations that could change how textures were addressed (not to mention more ALU ops). That is a very significant difference.
Yes, but it's evolutionary, not revolutionary.

I think the NV40 is vastly better than the R300.
Too low core clock speeds for its process, too high power and stability requirements. Some issues with texture filtering (too much noise sometimes). MSAA could be better.

NV40 is a great chip but it has its own setbacks. R300 really didn't have any. At least I can't remember anything negative about it.
 
DegustatoR said:
Too low core clock speeds for its process
In addition to process, I thought die size and transistor count played a major role in setting clock speed limitations. So saying that its clock speed is "too low" without taking into account all the factors that determine maximum clock speed potential is somewhat of a fallacy; "too" should be judged against the potential of the individual chip, not used as a comparative qualification, since on its own it gives no clear picture of the final performance. How would we know whether Nvidia maximized the clockspeed-to-heat ratio with NV40 or not? Clock speed alone tells nothing about the underlying architectural design (ambitious or not) and the die size and heat budget required to support it.
DegustatoR said:
NV40 is a great chip but it has its own setbacks. R300 really didn't have any. At least I can't remember anything negative about it.
What about the alpha blending precision issue?
 
There is no best, only better.
Transform & lighting was an evolution, pixel and vertex shaders were an evolution, R300 (DX9) was an evolution, so what is next??? The key is: how do you define evolution? How do you define revolution?
 
DegustatoR said:
Chalnoth said:
The GeForce 256 supported some (limited) ALU ops that could modify the output.

The GeForce3 supported operations that could change how textures were addressed (not to mention more ALU ops). That is a very significant difference.
Yes, but it's evolutionary, not revolutionary.
Evolutionary vs. revolutionary is the absolute stupidest, most idiotic distinction one can possibly make. It's very simple, really. There's no such thing as a product or idea that is completely and utterly new. You can always draw parallels to something that came before.

NV40 is a great chip but it has its own setbacks. R300 really didn't have any. At least I can't remember anything negative about it.
Okay, let's see:
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)
2. Angle-dependent anisotropic filtering degree selection.
3. No supersampling FSAA modes.
4. Poor Linux drivers.
5. (If I remember correctly) first board to require an external power connector.
6. Lack of precision in texture filtering/blending operations.
7. Only 24-bit floating point support.

...and I'm sure I could find more drawbacks of the R3xx if I thought about it.
 
Luminescent said:
In addition to process, I thought die size and transistor count played a major role in setting clock speed limitations. Therefore to say that its clock speed is too low without taking into account factors such as additional functionality is a fallacy. How do you know they didn't maximize the clockspeed-to-heat ratio? That is what's important, since clock speed alone tells nothing about ambitious architectural design requirements and breadth of functionality.
In the early days of NV40 (it was NV45 back then, but who cares now) NV wanted to reach 475MHz core clocks but eventually failed. So I consider NV40's core clocks comparatively low.

What about the alpha blending precision issue?
In what real-world game or application can I see this issue?
 
DegustatoR said:
In the early days of NV40 (it was NV45 back then, but who cares now) NV wanted to reach 475MHz core clocks but eventually failed. So I consider NV40's core clocks comparatively low.
Fair enough.
DegustatoR said:
In what real-world game or application can I see this issue?
If I remember correctly, Tom's Hardware encountered it in a later version of the Aquamark bench, which showed discrepancies between it and the reference in the rendering of smoke. It has to do with accumulated error (which might be better labeled "differences") between ATI's method of computing the final blend and that of the reference rasterizer, but don't quote me on that.
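As a toy illustration of what I mean by accumulated error (my own Python sketch with made-up numbers, not ATI's actual blend hardware): when you blend a lot of faint translucent layers and round the framebuffer back to 8 bits after every pass, the result drifts away from a float reference, and smoke is exactly the kind of content that stacks up that many layers.

```python
# Toy sketch (not real hardware behaviour): repeated alpha blending with the
# framebuffer rounded to 8 bits each pass vs. a float reference.

def blend_float(dst, src, alpha):
    return src * alpha + dst * (1.0 - alpha)

def blend_8bit(dst, src, alpha):
    # dst/src are 0..255 integers; the result is rounded back to 8 bits.
    return int(round(src * alpha + dst * (1.0 - alpha)))

layers = 200                 # lots of faint smoke puffs
alpha = 0.04
src_f, src_i = 0.8, 204      # the smoke colour in float and in 8-bit

ref, fb = 0.0, 0
for _ in range(layers):
    ref = blend_float(ref, src_f, alpha)
    fb = blend_8bit(fb, src_i, alpha)

print("float reference: ", round(ref, 4))          # ~0.80
print("8-bit framebuffer:", round(fb / 255.0, 4))  # noticeably lower; rounding stalls it
```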

Anyway, pointing that out might be splitting hairs, but we must be fair to both R300 and NV40; both were and are great architectures. R300 did seem like a bigger technological stride, since it added floating-point precision, full shader 2.0 support, and double the pipeline count of the previous generation.
 
Revolution was the shift from 2D environments to "true" 3D.
Like comparing Commander Keen with The Need for Speed.
Shifting from Mechwarrior 2 graphics to Mechwarrior 3 is an evolution.
The same applies to all current graphic inventions.
Even humans only evolved from monkeys. There was no revolution :p
 
Chalnoth said:
Evolutionary vs. revolutionary is the absolute stupidest, most idiotic distinction one can possibly make. It's very simple, really. There's no such thing as a product or idea that is completely and utterly new. You can always draw parallels to something that came before.
It is simple, yeah. Evolution is improving something that already existed before. Revolution is doing something completely new, knowing all the previous work in this field.

Okay, let's see:
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)
2. Angle-dependent anisotropic filtering degree selection.
3. No supersampling FSAA modes.
4. Poor Linux drivers.
5. (If I remember correctly) first board to require an external power connector.
6. Lack of precision in texture filtering/blending operations.
7. Only 24-bit floating point support.
1. Didn't see it. Some screenshots maybe?
2. Balance of quality vs speed.
3. Could be done in drivers. Doesn't need them anyway with 6x SG MSAA (see the sketch after this list for why sparse-grid sampling matters).
4. Not the R300 problem.
5. I had a whole bunch of them available even back then ;) In addition to that, all cards had splitters with them.
6. Not visible 99% of the time. Good Q vs S balance again.
7. Enough even for now, not speaking about 2 years ago.
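On the sparse-grid point, here is a rough toy sketch (my own Python, with made-up 4x patterns rather than ATI's actual 6x one) of why sample placement matters: for a nearly vertical edge, the number of distinct horizontal sample offsets inside the pixel decides how many coverage levels you can get out of the same sample count.

```python
# Toy comparison (made-up patterns, not ATI's real ones): ordered grid vs.
# sparse/rotated grid, judged by how many coverage levels a vertical edge
# sweeping across the pixel can produce.

ordered = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]          # 2 distinct x offsets
sparse  = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]  # 4 distinct x offsets

def coverage_levels(pattern, steps=100):
    """Sweep a vertical edge across the pixel, collect the distinct coverage fractions."""
    levels = set()
    for i in range(steps + 1):
        edge_x = i / steps
        covered = sum(1 for (x, y) in pattern if x < edge_x)
        levels.add(covered / len(pattern))
    return sorted(levels)

print("ordered grid:", coverage_levels(ordered))  # [0.0, 0.5, 1.0]
print("sparse grid: ", coverage_levels(sparse))   # [0.0, 0.25, 0.5, 0.75, 1.0]
```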
 
Chalnoth said:
I think the NV40 is vastly better than the R300.
It is, but let's be honest, NV40 is a new generation. There's, what, two years between the two? For being 'revolutionary', I think R300 deserves the crown.
 
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)
Wasn't Morrowind actually due to vertex shader issues with the software?

2. Angle-dependent anisotropic filtering degree selection.
Count the same for NV40.

3. No supersampling FSAA modes.
Not a hardware issue with "R300", just software (it's available on the Mac versions).

4. Poor Linux drivers.
Not an issue with "R300", just software.

5. (If I remember correctly) first board to require an external power connector.
The Voodoo5 required one beforehand, and NVIDIA was close behind on virtually all their boards (even the low-end ones!).

6. Lack of precision in texture filtering/blending operations.
Hardly actually shows up.

7. Only 24-bit floating point support.
Alternatively: first chip to offer full speed, full precision in DirectX 9 all the time, and higher precision than anything else in sub-DX9 ops. Guess that one's a matter of perspective.
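For some back-of-the-envelope perspective on that precision argument (my own numbers, assuming the commonly quoted layouts: 10 mantissa bits for FP16, 16 for R300's FP24, 23 for FP32):

```python
# Rough precision comparison (assumed mantissa widths, not vendor specs I can vouch for).
formats = {"FP16": 10, "FP24 (R300)": 16, "FP32": 23}

for name, mantissa_bits in formats.items():
    ulp = 2.0 ** -mantissa_bits                    # relative rounding step near 1.0
    steps_per_texel = 2 ** mantissa_bits / 2048    # addressing a 2048-wide texture with a 0..1 coordinate
    print(f"{name}: ~{ulp:.1e} relative error, {steps_per_texel:g} steps per texel (2048-wide texture)")
```

Which is roughly the shape of the argument: FP16 runs out of texture-addressing precision very quickly, FP24 is comfortable for most DX9-era content, and FP32 has headroom to spare.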
 
Chalnoth said:
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)
As far as I know, any chip supporting z-buffering supports w-buffering. In the former case, z/w is interpolated, in the latter it's 1/w. So simply make z equal to 1 by adjusting the projection matrix and you're done. Anything I missed?
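Here is my reading of that trick as a quick Python sanity check (a sketch of the math only, not driver or hardware code): with a standard D3D-style projection the interpolated z/w is hyperbolic in view depth, but if the z row of the matrix just outputs a constant 1, the very same z/w interpolation hands you 1/w per pixel.

```python
# Sketch of the projection-matrix trick described above (my interpretation).
n, f = 1.0, 100.0     # near and far planes

def depth_standard(z_view):
    z_clip = z_view * f / (f - n) - n * f / (f - n)   # usual D3D-style z row
    w_clip = z_view                                   # w row copies view-space z
    return z_clip / w_clip                            # what the z-buffer stores

def depth_modified(z_view):
    z_clip = 1.0                                      # z row replaced by (0, 0, 0, 1)
    w_clip = z_view
    return z_clip / w_clip                            # == 1/w, the w-buffer-style quantity

for z in (1.0, 2.0, 10.0, 50.0, 100.0):
    print(z, round(depth_standard(z), 4), round(depth_modified(z), 4))
```

The standard column bunches almost everything near 1.0; the modified one is just 1/z, which is the point being made.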
 
Revolution will happen when we finally dump the triangle rendering system (replacing it with god knows which technique). Until then everything will only be evolution. Even curved surfaces (or whatever they call those things in English) would only mean evolution.
 