slightly off topic: will ue3 support multiple monitors?
epicstruggle said: "slightly off topic: will ue3 support multiple monitors?"

Of course.
wzh100 said: "I agree with you. I think the GeForce 256/3 and Radeon 9700 were the revolutionary pieces of graphics hardware, just like Doom 3 and UE3."

GF3 wasn't that revolutionary. If you remember, GF256 already supported a primitive form of pixel shaders. So GF3 was more of an advanced GF256 than something revolutionary.
Sigma said: "Either way, Doom 3 already has support for HDR processing. But not everyone has a 6800... In 2006, that can be a different story..."

No, it doesn't. The r_hdr_ extensions are disabled in Doom 3; they are experimental. Even a 6800 won't help here. Trust me, I tried.
DegustatoR said: "GF3 wasn't that revolutionary. If you remember, GF256 already supported a primitive form of pixel shaders. So GF3 was more of an advanced GF256 than something revolutionary."

The GeForce 256 supported some (limited) ALU ops that could modify the output.
"R300 was simply the best 3D-chip design ever to come out of ATI or NVIDIA. A great balance between functionality and speed."

I think the NV40 is vastly better than the R300.
Chalnoth said: "The GeForce 256 supported some (limited) ALU ops that could modify the output. The GeForce3 supported operations that could change how textures were addressed (not to mention more ALU ops). That is a very significant difference."

Yes, but it's evolutionary, not revolutionary.
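(For the curious: the difference Chalnoth is pointing at is essentially the dependent texture read. With GF256-class register combiners the texture coordinates are fixed by the interpolators and the ALU can only combine what was fetched; GF3-class texture shaders can feed the result of one fetch into the address of the next. A rough conceptual sketch in Python, with made-up texture data, purely to illustrate the distinction.)

```python
import numpy as np

# Two tiny "textures" (made-up data): a 4x4 offset map and an 8x8 color map.
rng = np.random.default_rng(0)
offset_map = rng.uniform(-0.1, 0.1, size=(4, 4, 2))
color_map = rng.uniform(0.0, 1.0, size=(8, 8, 3))

def sample(tex, u, v):
    """Nearest-neighbour sample with wrap addressing."""
    h, w = tex.shape[:2]
    return tex[int(v * h) % h, int(u * w) % w]

def gf256_style(u, v):
    # Register-combiner era: coordinates come straight from interpolation;
    # the ALU can only combine the colors it fetched.
    return 0.5 * sample(color_map, u, v) + 0.25  # some fixed combine

def gf3_style(u, v):
    # Texture-shader era: the first fetch perturbs the coordinates used
    # by the second fetch -- a dependent texture read.
    du, dv = sample(offset_map, u, v)
    return sample(color_map, u + du, v + dv)

print(gf256_style(0.3, 0.7))
print(gf3_style(0.3, 0.7))
```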
Chalnoth said: "I think the NV40 is vastly better than the R300."

Core clockspeeds too low for its process, power/stability requirements too high. Some issues with texture filtering (too much noise sometimes). MSAA could be better.
DegustatoR said: "Too low core clockspeeds for its process"

In addition to process, I thought die size and transistor count played a major role in setting clockspeed limitations. So to say its clockspeed is "too low" without taking into account all the factors that determine maximum clockspeed potential is something of a fallacy; "too" should be used relative to the potential of an individual chip, not as a comparative qualifier, since it gives no clear picture of the final performance metric that would justify it. How would we know whether NVIDIA maximized the clockspeed-to-heat ratio with NV40 or not? Clockspeed alone tells us nothing about the underlying architectural design (ambitious or not) and the die size and heat budget required to support it.
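As a rough back-of-the-envelope for why a "clockspeed-to-heat ratio" framing makes sense: dynamic power in CMOS scales roughly as C·V²·f, so a clock bump that also needs a voltage bump costs more heat than the clock gain alone suggests. The numbers in the sketch below are purely hypothetical, not actual NV40 or R300 figures.

```python
# Rough CMOS dynamic-power scaling: P ~ C * V^2 * f.  All figures below
# (voltages, clocks) are hypothetical, not actual NV40/R300 numbers.
def relative_power(voltage, clock, v_ref, f_ref):
    return (voltage / v_ref) ** 2 * (clock / f_ref)

baseline = relative_power(1.30, 400e6, v_ref=1.30, f_ref=400e6)
bumped   = relative_power(1.40, 475e6, v_ref=1.30, f_ref=400e6)
print(f"{475/400:.2f}x the clock costs ~{bumped/baseline:.2f}x the dynamic power")
```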
DegustatoR said: "NV40 is a great chip but it has its own setbacks. R300 really hadn't any. At least I can't remember anything negative about it."

What about the alpha blending precision issue?
Chalnoth said: "The GeForce 256 supported some (limited) ALU ops that could modify the output. The GeForce3 supported operations that could change how textures were addressed (not to mention more ALU ops). That is a very significant difference."
DegustatoR said: "Yes, but it's evolutionary, not revolutionary."

Evolutionary vs. revolutionary is the absolute stupidest, most idiotic distinction one can possibly make. It's very simple, really. There's no such thing as a product or idea that is completely and utterly new. You can always draw parallels to something that came before.
DegustatoR said: "NV40 is a great chip but it has its own setbacks. R300 really hadn't any. At least I can't remember anything negative about it."

Okay, let's see:
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)
2. Angle-dependent anisotropic filtering degree selection.
3. No supersampling FSAA modes.
4. Poor Linux drivers.
5. (If I remember correctly) first board to require an external power connector.
6. Lack of precision in texture filtering/blending operations.
7. Only 24-bit floating point support.
Luminescent said: "In addition to process, I thought die size and transistor count played a major role in setting clockspeed limitations. Therefore to say that its clockspeed is too low without taking into account factors such as additional functionality is a fallacy. How do you know they didn't maximize the clockspeed-to-heat ratio? That is what's important, since clockspeed alone tells nothing about ambitious architectural design requirements and breadth of functionality."

In the early days of NV40 (it was NV45 back then, but who cares now) NV wanted to reach 475MHz core clocks but eventually failed. So I consider NV40's core clocks comparatively low.
"What about the alpha blending precision issue?"

In what real-world game or application can I see this issue?
DegustatoR said: "In the early days of NV40 (it was NV45 back then, but who cares now) NV wanted to reach 475MHz core clocks but eventually failed. So I consider NV40's core clocks comparatively low."

Fair enough.
DegustatoR said: "In what real-world game or application can I see this issue?"

If I remember correctly, Tom's Hardware encountered it in a later version of the AquaMark bench, which showed discrepancies between it and the reference rasterizer in the rendering of smoke. It has to do with accumulated error (which we might better label "differences") between ATI's method of computing the final blend and that of the reference rasterizer, but don't quote me on that.
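Whether or not that is what AquaMark tripped over, the accumulation mechanism itself is easy to demonstrate: blend a long stack of translucent layers and round to a lower-precision format after each blend, and the result drifts away from a full-precision reference. A crude Python sketch; the float16 rounding here is just a stand-in for "less precision than the refrast", not ATI's actual blend path.

```python
import numpy as np

def blend_chain(dst, layers, dtype):
    """Repeated src-over blending, rounding to `dtype` after every blend."""
    dst = dtype(dst)
    for src, alpha in layers:
        dst = dtype(src * alpha + float(dst) * (1.0 - alpha))
    return float(dst)

# Two hundred faint, smoke-like layers over a dark background (made-up values).
layers = [(0.6, 0.05)] * 200
reference = blend_chain(0.1, layers, np.float64)
low_prec  = blend_chain(0.1, layers, np.float16)
print(f"reference={reference:.6f}  rounded-per-blend={low_prec:.6f}  "
      f"drift={abs(reference - low_prec):.6f}")
```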
Chalnoth said: "Evolutionary vs. revolutionary is the absolute stupidest, most idiotic distinction one can possibly make. It's very simple, really. There's no such thing as a product or idea that is completely and utterly new. You can always draw parallels to something that came before."

It is simple, yeah. Evolution is improving something that already existed before. Revolution is doing something completely new, knowing all the previous work in this field.
Chalnoth said: "Okay, let's see:
1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example) [...]"

1. Didn't see it. Some screenshots maybe?
Chalnoth said: "I think the NV40 is vastly better than the R300."

It is, but let's be honest, NV40 is a new generation. There's, what, two years between the two? For being 'revolutionary', I think R300 deserves the crown.
Chalnoth said: "1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)"

Wasn't Morrowind actually due to vertex shader issues with the software?
Chalnoth said: "2. Angle-dependent anisotropic filtering degree selection."

Count the same for NV40.
Chalnoth said: "3. No supersampling FSAA modes."

Not a hardware issue with the R300, just software (it's available on the Mac versions).
Chalnoth said: "4. Poor Linux drivers."

Not an issue with the R300, just software.
Chalnoth said: "5. (If I remember correctly) first board to require an external power connector."

The Voodoo5 required one beforehand, and NVIDIA were close behind on virtually all their boards (even the low-end ones!).
Chalnoth said: "6. Lack of precision in texture filtering/blending operations."

Hardly actually shows up.
Chalnoth said: "7. Only 24-bit floating point support."

Alternatively: the first chip to offer full-speed, full-precision DirectX 9 all the time, and higher precision than anything else in sub-DX9 ops. Guess that one's a matter of perspective.
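For a sense of scale on the precision point: the commonly cited layout for R300's FP24 is 1 sign, 7 exponent and 16 mantissa bits, against FP32's 23 mantissa bits, so the relative step size is roughly 2^-16 (~1.5e-5) versus 2^-23 (~1.2e-7). A quick Python sketch of the mantissa truncation (an approximation that ignores exponent-range differences):

```python
import math

def quantize_mantissa(x, mantissa_bits):
    """Round x to a float with the given number of explicit mantissa bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)      # +1 accounts for the implicit bit
    return math.ldexp(round(m * scale) / scale, e)

x = 1.0 / 3.0
print("fp32-ish (23-bit mantissa):", quantize_mantissa(x, 23))   # ~0.33333334
print("fp24-ish (16-bit mantissa):", quantize_mantissa(x, 16))   # ~0.33333206
```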
Chalnoth said: "1. Lack of w-buffering (caused z-buffer errors in Morrowind, for example)"

As far as I know, any chip supporting z-buffering supports w-buffering. In the former case, z/w is interpolated; in the latter, it's 1/w. So simply make z equal to 1 by adjusting the projection matrix and you're done. Anything I missed?
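A tiny numeric sketch of that suggestion: with a standard projection the stored value is z_clip/w_clip, an affine function of 1/depth, while forcing z_clip to a constant makes the stored value exactly proportional to 1/w. The matrix terms below assume a D3D-style 0..1 depth range and are purely illustrative.

```python
import numpy as np

near, far = 1.0, 1000.0
depths = np.array([1.5, 10.0, 100.0, 500.0, 990.0])    # eye-space distances

# Standard perspective projection (D3D-style 0..1 range): z_clip = a*d + b,
# w_clip = d, and the hardware stores z_clip / w_clip -- hyperbolic in d.
a = far / (far - near)
b = -far * near / (far - near)
standard_z = (a * depths + b) / depths

# "Make z constant" trick: z_clip = near for every vertex, so the stored
# z_clip / w_clip collapses to near / d -- a pure 1/w-style value.
inverse_w = near / depths

for d, z, w in zip(depths, standard_z, inverse_w):
    print(f"d={d:7.1f}   standard z/w={z:.6f}   constant-z 1/w={w:.6f}")
```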