nVidia's Dawn now works on R3x0

OpenGL guy said:
I don't see it that way at all. From what I recall of Carmack's comments on the matter, he had spent two weeks trying to port his software engine to D3D and gave up in frustration. He then looked at OpenGL and had the port completed in a weekend. Now, if he had used D3D to begin with, maybe MS wouldn't have improved the API to the extent that it has. Can you imagine if all we had was retained mode programming?! Egads!

Carmack made the right choice, then, because OpenGL was a better tool than D3D was. However, I think that has changed, except that OpenGL is the only cross-platform 3D API (you won't find D3D on Linux or Solaris).

Which is precisely why 3dfx developed Glide, too...;) In fact, I recall a lot of speculation that Microsoft would incorporate Glide as "the" Windows 3D API prior to M$'s D3d announcement & initiative. But my recollection of the time is really hazy--I thought Carmack went through an extensive GLIDE period with his software (after his 2D software) before moving to his first OpenGL offshoot & engine attempt...? Didn't 3dfx employ a miniGL driver for a long time (3 parts GLIDE, 1 part OpenGL)? GLIDE itself as I recall was influenced by OpenGL. Any correction or refresh here is appreciated...(the old neurons don't fire like they used to....;))
 
DemoCoder said:
Yeah, the procedural interface is a lot cleaner, and the method naming is logical and clean.

With Microsoft you get #define D3DLONGCONSTANTFLAGYOUCANTREADVERYEASILY
I'm not sure that some of the longer GL constants are much better, and they are undoubtedly the clumsier for having _EXT or _ARB stuck on the end of everything.

The procedural interface should also die. Immediate mode GL is totally legacy and if I had my way would be deprecated yesterday. But there's too much that supports it, so I fear we're stuck with it. And nobody cares what I think anyway :) probably for good reason!
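For anyone who hasn't looked at this stuff in a while, here's roughly the difference I'm complaining about (a minimal sketch of mine, assuming a current GL 1.1 context already exists somewhere; it's obviously not a complete program):

[code]
/* Immediate mode vs. vertex arrays, GL 1.1. */
#include <GL/gl.h>

/* Legacy immediate mode: one function call per attribute per vertex. */
void draw_triangle_immediate(void)
{
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

/* Vertex arrays: the data sits in one block and is submitted in one call. */
void draw_triangle_array(void)
{
    static const float verts[]  = { -1.0f, -1.0f, 0.0f,
                                     1.0f, -1.0f, 0.0f,
                                     0.0f,  1.0f, 0.0f };
    static const float colors[] = {  1.0f,  0.0f, 0.0f,
                                     0.0f,  1.0f, 0.0f,
                                     0.0f,  0.0f, 1.0f };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]

The first version is one call per attribute per vertex; the second hands the driver a whole batch at once, which is a lot closer to how the command stream actually feeds the hardware.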
 
Well, if your hardware is streaming, immediate-mode HW, I would argue that you should support an API (e.g. a procedural one) that reflects this hardware reality.

When we get hardware capable of dealing with scenegraphs, then we can eliminate the command/stream-based procedural interface. But you can't take away too much of developers' freedom to address the hardware as it actually works today.
 
WaltC said:
In fact, I recall a lot of speculation that Microsoft would incorporate Glide as "the" Windows 3D API prior to M$'s D3d announcement & initiative. But my recollection of the time is really hazy--I thought Carmack went through an extensive GLIDE period with his software (after his 2D software) before moving to his first OpenGL offshoot & engine attempt...? Didn't 3dfx employ a miniGL driver for a long time (3 parts GLIDE, 1 part OpenGL)? GLIDE itself as I recall was influenced by OpenGL. Any correction or refresh here is appreciated...(the old neurons don't fire like they used to....;))

Wow...lots of bad memory there. :)

AFAIK, MS never ever considered Glide as the "Windows 3D API." D3D initiative pre-dated Glide hardware. MS bought some company (name escapes me: EDIT: RenderMorphics) that had already mostly developed the basics of the first Direct3D revision.

Carmack, at least publicly, never had a Glide version of Quake. There was a Rendition Verite (RRedline) version first. Then, in cooperation with 3dfx, Carmack ported Quake to GL. 3dfx agreed to write "enough" of a GL driver (mini-gl), so that Carmack could get GLQuake up and running, and the rest is history...
 
(3dfx's miniGL driver just translated the OGL calls Quake made to Glide, right?)

It would seem to me Carmack conceivably continues to use OpenGL for two reasons: one, to support other OSes than Windows; two, to influence hardware makers. I can see both in selfish and unselfish terms. I think he started using OGL for mostly unselfish reasons, and continues to use it for mostly selfish reasons. I'm not sure that JC's now-selfish reasons are entirely bad for PC gaming, though. I mean, do we want MS to hold all the cards in terms of how 3D gaming advances?

I'm sorry, we were talking about Dawn on Radeons. So when are we going to see a quality and framerate analysis? :)
 
Pete said:
I'm sorry, we were talking about Dawn on Radeons. So when are we going to see a quality and framerate analysis? :)

Geforce FX users are afraid to post their performance because they well know it's WAY below Ati's even with Ati emulating Nvidia proprietary extensions.

They are also afraid to post screenshots, since those would show the shortcomings of the FP16 and FX12 precision Dawn was most likely coded for :)

I remember all the SS being posted everywhere when Dawn first came out. Where are those same people now??? Hiding under a rock somewhere?
 
Pete said:
(3dfx's miniGL driver just translated the OGL calls Quake made to Glide, right?)

All of 3dfx's ICD's did that. =)

MiniGL just means that it isn't a full OpenGL implementation.

It only supports a limited number of calls... in this case, only the ones that Carmack used in Quake. Everything else... well, put it this way, it isn't even ignored, because you can't ignore what doesn't even exist! ;)
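Very roughly, the whole trick looks something like this (a toy sketch of my own, not 3dfx's code; glide_draw_triangle below is just a stand-in for the real grDrawTriangle, which takes GrVertex structs carrying texture coords, 1/W and so on that I'm skipping here):

[code]
#include <stdio.h>

typedef struct { float x, y, z, r, g, b; } Vertex;

/* Latched "current color" state, GL-style. */
static float  cur_r, cur_g, cur_b;
static Vertex tri[3];
static int    vcount = 0;

/* Stand-in for Glide's grDrawTriangle(); a real wrapper would convert
   the vertices to GrVertex and let the Voodoo rasterize them. */
static void glide_draw_triangle(const Vertex *a, const Vertex *b, const Vertex *c)
{
    printf("triangle: (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           a->x, a->y, b->x, b->y, c->x, c->y);
}

/* The GL-looking surface the game actually calls. */
void mgl_Begin(void)                        { vcount = 0; }
void mgl_Color3f(float r, float g, float b) { cur_r = r; cur_g = g; cur_b = b; }
void mgl_End(void)                          { vcount = 0; }

void mgl_Vertex3f(float x, float y, float z)
{
    tri[vcount].x = x;     tri[vcount].y = y;     tri[vcount].z = z;
    tri[vcount].r = cur_r; tri[vcount].g = cur_g; tri[vcount].b = cur_b;
    if (++vcount == 3) {            /* got a whole triangle: hand it to Glide */
        glide_draw_triangle(&tri[0], &tri[1], &tri[2]);
        vcount = 0;
    }
}

int main(void)
{
    mgl_Begin();
    mgl_Color3f(1, 0, 0);
    mgl_Vertex3f(0, 0, 0);
    mgl_Vertex3f(100, 0, 0);
    mgl_Vertex3f(0, 100, 0);
    mgl_End();
    return 0;
}
[/code]

Implement only the entry points Quake actually calls, translate them like that, and you've got yourself a "miniGL".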
 
Hellbinder[CE] said:
Btw,
Carmack's quotes which say that it's really hard to notice the differences.
And using lower precision != reduced quality if your eye can't detect the difference. In fact it is called intelligent programming, since you are only using the precision you need and improving performance. It is not Carmack's (or NV's) fault if ATI decided on doing fp24 all along, giving him no choice in situations where fp16 or even fx12 precision would have been enough.
Nice attempt at a Spin Job. I think the democrats could use a man like you in 2004 ;)

Make no mistake, Nvidia uses lower precision modes because THEY HAVE TO.

Not trying to spin anything here ;). The lower precision may be used due to HW problems, or just because it's there and a higher one is not needed, or whatever. My point was that rendering in lower precision does not directly imply reduced visual quality, which is what your post implied.

Also, your post implies that an "apples to apples" comparison will be possible only when both R3xx and nv3x render using the ARB2 path. But the actual apples-to-apples comparison will be when they both provide the same visual quality to the gamer.

So a comparison with nv3x using a path where all rendering is done in FX12 and R3xx using one where all rendering is done in 24 bits is valid as long as the visual quality is the same. In the end it's all about what the gamer sees and not about what the GPU does to produce the image (not talking about fixed-path benches like 3DMark03 but actual games ;) ).
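To put some rough numbers on it, here's a little sketch of mine (assuming FX12 is NV3x-style fixed point with a sign bit, 1 integer bit and 10 fraction bits, range [-2, +2)): a single round-off to FX12 is smaller than one step of the 8-bit framebuffer the gamer actually looks at, so it only becomes visible once the error piles up over many shader operations, or values get clamped at the range limits.

[code]
#include <math.h>
#include <stdio.h>

/* Quantize to the assumed FX12 grid: steps of 1/1024, clamped to [-2, +2). */
static float to_fx12(float x)
{
    if (x < -2.0f)                  x = -2.0f;
    if (x >  2.0f - 1.0f / 1024.0f) x =  2.0f - 1.0f / 1024.0f;
    return roundf(x * 1024.0f) / 1024.0f;
}

/* Quantize to one channel of an 8-bit framebuffer, range [0, 1]. */
static float to_8bit(float x)
{
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;
    return roundf(x * 255.0f) / 255.0f;
}

int main(void)
{
    float c  = 0.337512f;       /* some intermediate shader value    */
    float fx = to_fx12(c);      /* what an FX12 register would carry */

    printf("fx12 round-off error: %g\n", fabsf(c - fx));
    printf("visible after 8-bit scan-out: %s\n",
           to_8bit(c) == to_8bit(fx) ? "no" : "yes");
    return 0;
}
[/code]

Which is exactly why the only fair test is the final image, not the path used to produce it.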

This is very similar to the AA/AF comparison that Anand did for nv30, where he compared a higher AA/AF setting (in the CP) for the nv30 to a lower one for R3xx because they both offered the same visual quality.
 
Call me paranoid, but I made sure to grab the wrapper from Rage3d and the demo from nVidia before anything caused either to disappear.
 
Joe DeFuria said:
Wow...lots of bad memory there. :)

AFAIK, MS never ever considered Glide as the "Windows 3D API." D3D initiative pre-dated Glide hardware. MS bought some company (name escapes me: EDIT: RenderMorphics) that had already mostly developed the basics of the first Direct3D revision.

Carmack, at least publicly, never had a Glide version of Quake. There was a Rendition Verite (RRedline) version first. Then, in cooperation with 3dfx, Carmack ported Quake to GL. 3dfx agreed to write "enough" of a GL driver (mini-gl), so that Carmack could get GLQuake up and running, and the rest is history...

*chuckle* Yea, it is very hazy...;) I know that M$ never actually entertained using Glide as their API--what I was talking about was speculation at the time that M$ would use Glide--which was commonly rumored prior to their official D3d announcement. Aren't you talking about the DX initiative predating Glide, as opposed to D3d...? As I recall it took D3d quite a few versions to even generally match Glide functionality (D3d 5 or 6, it seems like.) Anyway--you're certainly right that Glide was never an actual Windows 3D API contender (although I do remember people thinking it was going to be.)

Well, somebody else hit it right in stating that 3dfx's miniGL was basically a wrapper for Glide--and just translated the calls to Glide--so that's what I remember. I'm pretty sure Carmack's stuff ran first and best on 3dfx hardware using the miniGL (Glide) wrapper--wasn't it GL Quake--?? Ah, well, a long time past...;) Thanks for the clarifications...;)
 
Joe DeFuria said:
As a GAMER (repeat 5 times: G-A-M-E-R. Not game "developer", not "3d Professional", not "demo coder"....GAMER). No, I don't care about having a choice between D3D or GL. I just want to play a game with minimal fuss.

Carmack had less fuss for himself by coding GL.

IHVs had more fuss by coding GL drivers (and passing the cost on to consumers), and consumers had more fuss with downloading GL drivers, mini-GL drivers, GL drivers for this game, for that game.... At one point, there was a concerted effort (I forget the name) to "make sure you got the right GL driver."

One thing you're missing as a GAMER is that developer time is not infinite. Time spent wrangling the 3D API is time NOT spent improving the algorithms and games being expressed in that API. Sure you could have had your D3D port.. but the game would not have been as good.

It's not merely the coding that would have suffered at the time but also the artwork. Remember that 3D packages such as 3D Studio Max and Maya were not supported under Windows 95/98 at the time, and D3D didn't work under Windows NT until Windows 2000.

Artists having to reboot into a different operating system to see their stuff in-game wasn't uncommon. This of course means less time spent making the artwork and much less time spent checking to make sure it looks good in-game, since rebooting was such a hassle. It was a MAJOR problem at many game companies using D3D until the release of Windows 2000.

This could all have been avoided, of course, if instead of making their own 3D API Microsoft had used the mature, well-specified and Already Working Across Their Entire Product Line API they had right there. Hardware vendors could have concentrated on a single API which some of them had already shipped drivers for. Microsoft could have released proper MCD and ICD toolkits (which they stopped doing when it became clear that OpenGL was competition for D3D). John Carmack is not to blame for your pain. Microsoft is.
 
Luminescent said:
How do you run the demo in ultra mode? I have the Cat 3.4 drivers and it seems only the regular mode runs.

I'm theoretically running in "ultra" mode. What is the difference?

Also, is it just me, or is this demo kind of lame? It really requires 4x FSAA to even look good. Yes, she is pretty hot (for graphically generated), but I expect more than a silly branch with a single person in terms of animation.


P.S.: I have a 9800 w/ 3.4 cat.
 
I finally got the ultra version running. The lighting on Dawn, her skin details, and her wings all seem more detailed. 4x FSAA is definitely needed.
 
Windfire said:
Also, is it just me, or is this demo kind of lame? It really requires 4x FSAA to even look good. Yes, she is pretty hot (for graphically generated), but I expect more than a silly branch with a single person in terms of animation.
How dare you mock the power of Cg and CineFX!
It is a graphics revolution and the dawn of cinematic computing!
 
Windfire said:
Also, is it just me, or is this demo kind of lame? It really requires 4x FSAA to even look good. Yes, she is pretty hot (for graphically generated), but I expect more than a silly branch with a single person in terms of animation.
That, my friend, is why it is a "Technology Demo".

Personally, I found the single "silly branch" and "person" fascinating, and not just in terms of animation per se.

Once "Dawn" is applied to an entire game with all the components that comprises a game in various possible scenarios, your opinion and preconception of what "Dawn" means may change.

"Dawn", as a technology percussor of things to come in consumer 3D graphics (no more, no less), is an admirable attempt as such, IMO.
 
Back to the demo - has anyone with an older AMD processor got it working? I use 1.2 GHz AMD Athlon-C models - I hear a rumour that their lack of SSE support may be the source of the program crashes and that there is no way around this - can anyone confirm or deny this?
 
I couldn't get it to run on my computer either; it also has a 1.33 GHz Athlon Thunderbird. This is my system:

1.33 GHz Athlon Thunderbird
Abit NR-133 nForce mobo (using 2.42 drivers)
2x 120 GB WD HDDs in RAID
Radeon 9800 (finally upgraded from my 8500LE!!) with Cat 3.4 drivers
512 MB (2x 256 MB) PC2100 DDR RAM

Also, my 3DMark scores seem to be low; these are at default settings.

3DMark2001SE: 8995
3DMark2003: 4650
 
Doomtrooper said:
You can see how many proprietary extensions Nvidia has vs. the rest of the ARB (which is none)

Actually, there's like four or so proprietary SGI extensions too.
 