What other hardware/Technology is on the horizon?

DemoCoder said:
I find the overuse of auto-increment, ",", and other operators in C to be highly annoying. They just subtract from readability and don't add to compiler optimization at all. Some C coders think it is quite clever to be able to reduce 3-4 lines of code to a single glob of operators, but I think it's bad coding.

I more or less agree. Pre/post increment/decrement in C is clearly INC and DEC legacy from age-old instruction architectures. That in itself isn't bad, but the post increment/decrement semantics are just messed up. The fact that the statement has its side effect after evaluation *really* fscks things up. Just try to do post inc/dec operator overloading in C++ and you'll realize how fscked up the semantics are.

I agree on the ",", but really like the "? :" operators (where appropriate).

I find b = a ? c : d; more readable than

if (a)
    b = c;
else
    b = d;

In particular when the ? : is inline in some other statement.

DemoCoder said:
I also think pointer arithmetic is one of the worst language features. There's no reason why you can use an array abstraction for the same purpose, and static type checking tools can often check the bounds for you.

Pointer arithmetic is very useful for low-level stuff, like OS programming. The problem is that people are using C where they shouldn't be. C++ has a firm array concept if you use it (i.e. don't cast to <fubar*>), and if you need bounds checking, either use Safe C or roll your own [] operators.

Cheers
Gubbi
 
Warning: Off-topic ranting ahead!

DemoCoder said:
There is nothing wrong with the WinNT/XP kernel. In many ways the NT kernel was and is more advanced than the Linux. NT always had the ability to dynamically load modules, NT is a microkernel architecture, NT always supported asynchronous I/O, and NT had support for threading from the beginning. Some of these features were only recently added to Linux, and took several releases to get right.

I call shenanigans on that one. Sure, the NT kernel might be a microkernel versus Linux's monolithic kernel. But that does _not_ automatically make it better. In theory a microkernel is better, but in reality it isn't always so. A good monolithic kernel will wipe the floor with a bad microkernel.

You are right when it comes to async I/O (it was implemented in Linux in 2.5.32).

Linux has support for threading, and has had it for a long time. What they did to it recently is redesign it. The end result is a system that is an order of magnitude better than the old implementation (LinuxThreads).

I could say that the Linux kernel is a lot better than the NT kernel since it's more advanced than the NT kernel is. Does the NT kernel support as many architectures as Linux does? How about filesystem support? Flexibility? Scalability? Stability?

You need a little history lesson. Linux didn't suddenly appear overnight. The kernel followed a long evolution, and most of the core packages (e.g. user mode stuff) that come with it to make it usable evolved over an even longer period, like the X server and all the GNU tools.

Well, yes it did. Sure, there were a lot of tools available for a long time (GCC, glibc etc.) but they weren't directly related to Linux (the kernel). They are part of a Linux OS (and you could say that Linux OSes did appear through evolution), but the final piece of the puzzle, the kernel, came along in a relatively short time after Torvalds started working on it.

Linux borrowed heavily from the BSD world and from previously written GNU tools that have been around for almost 2 decades.

And that's wrong because....? MS has also borrowed heavily from BSD. The FTP tools and TCP/IP stack of Windows are straight out of BSD.

But the MS haters have selective memory and ignore any buggy or security hole ridden releases in Linux distributions.

Selective memories? Nah. We just remember getting repeatedly screwed by MS and we just don't want it to happen again. Sure there are bugs in Linux (and I don't think anyone claims otherwise).

I ran NT almost a full year without a single crash/blue screen.

Just because you didn't have problems doesn't mean that it's stable for everyone. Only with XP and W2K has MS been moving in the right direction when it comes to stability, but it's still nowhere near Linux or UNIX.

My XP system is so stable that I never have any problems unless I am using a beta display driver. And I guarantee you, if you link in a beta quality driver in the Linux kernel, you can crash it just as easily.

It might crash X, but the system itself remains stable. One of the benefits of having separated the GUI from the kernel ;).

Until very recently, XWindows didn't even support antialiased fonts, and before the recent XRender and Direct Rendering Interface, doing high quality and high performance 2D and 3D graphics in X was a real hack.

And only recently has Windows got features that have been in X for decades already... And it just happens that 3D on Linux is nowadays good enough for the likes of ILM (who did the CGI of Attack of the Clones on Linux). Sure, for a while 3D wasn't as good as on other platforms, but it is really good these days.

Add on top atrocious GUI toolkits, ugly as sin, and hard for users to use,

How exactly is KDE or Gnome (for example) "ugly as sin and hard for users to use"? If anything, they look a lot better than Windows and are just as easy to use (or is clicking icons somehow more difficult on Linux than on Windows :rolleyes:?)

I just don't see Linux noobies as being qualified to talk about how MS operating systems "suck". Unix was there before Linux; it didn't happen overnight, and lessons were learned over decades.

And those lessons are now applied to Linux.

OS X is everything you ever wanted in a Desktop Unix. Very very nice GUI, apps, and programming interface, but with the familiar Emacs and Bourne shell one click away.

And you get tied to a specific overpriced hardware...
 
I agree, if you are writing device drivers or hacking the kernel. But 99% of the time, people aren't. Even in 3D code, the vast majority of the CPU is either burned in the driver or burned in some inner loop, so there's no reason why people can't use safe C++ arrays, pointers, and STL collections for non-hotspot code. It increases stability and maintainability tremendously.

I can't help but think how many buffer overflows we could have avoided in non-performance-critical apps. "Enter Filename: " (scanf into a char buf[] on the stack), oops! Security gone.

Microsoft seems to think you can achieve 95% of C++ speed by using the Managed DirectX 9 API (from C++, C#, etc.). Save the 5% of the code where your CPU spends 90% of its time for unmanaged code sections. That way the number of lines of code you have to check for buffer/pointer bugs is minimized.
 
DemoCoder said:
I agree, if you are writing device drivers or hacking the kernel. But 99% of the time, people aren't. Even in 3D code, the vast majority of the CPU is either burned in the driver or burned in some inner loop, so there's no reason why people can't use safe C++ arrays, pointers, and STL collections for non-hotspot code. It increases stability and maintainability tremendously.

We're in violent agreement here. It's even more tragic in that STL and other standard C++ constructs usually result in better performance than the C counterpart. Stuff like aliasing analysis is easier for the compiler in C++ than in C.

Cheers
Gubbi
 
Just because you didn't have problems doesn't mean that it's stable for everyone. Only with XP and W2K has MS been moving in the right direction when it comes to stability, but it's still nowhere near Linux or UNIX.

I don't agree with that. At least not as far as using it as a replacement for Windows at home (as in, running demos, games, graphics card drivers and so forth and so on). I have tried that a couple of times, usually after a new major release, and I tend to get just as many lockups/problems as with a Windows machine.
 
allow me to interject on this topic.

DemoCoder said:
OS/2 at the time it was designed was way too heavy for 286/386 machines.

snip

I had first hand experience with the early versions of OS/2 as I was an IBM employee at the time and we were all forced to use it. Conceptually, OS/2 is a great OS, and I loved the Unixy feel of it, but it performed way too poorly and IBM did nothing to get other hardware vendors on board.

probably your early experience left you with that impression. os/2 2.0, and especially 3.0 (warp), was architecturally brilliant for its time. something which could not be said about win95. and with NT 3.5 effectively being an os/2 2.0 "me-too", i don't see what architectural superiority it could hold against os/2. really. in the beginning NT had better driver insulation, but that was later removed by moving the drivers to ring0.

Windows NT was too "heavy" for consumers back then also. Windows NT is a microkernel unlike most Unices (except Mach based ones), and offered more "protection" against crashes by isolating even device drivers into their own protected space. However, not running device drivers in RING0 led to huge performance problems on the hardware of that era, as a result, Microsoft couldn't ship NT to the consumer until hardware caught up. Microsoft also moved device drivers into RING0, eliminating some of the context switching overhead at the cost of robustness, but they did this because annoying little 2D/3D gamers were whining about a few frames per second lost, at the expense of less stability of your database server.

have you noticed that each time you talk of NT tech superiority you compare it to unices, and each time you compare commodity value you compare NT to os/2 and beos? of course unix, being a lot older, had its archaisms, and of course os/2 & beos, being newer, needed time to stack up commodity value. now you may ask yourself what happened that those newer oses failed in doing that, and what MS's role in that failure was.

On top of that, simply writing a memory manager, interrupt manager, process manager, filesystem, and windowing system does not make what consumers consider a full OS these days. That's the commodity core. What's more important is the APIs, tools, and apps available, and in that regard, BeOS just blows. You can ogle all over the "elegance" of BeOS, but I'm telling you, there are about 50 BeOSes sitting on campus FTP servers. You need more than that.

excuse me, but windows doesn't hold a candle to beos when it comes to API design and coding productivity! as for the commodity value, your logic on this account is pretty circular, you realize that? try answering for yourself my question from the previous paragraph.

My beliefs are grounded in the fact that I've been working in the industry for almost two decades now. I have worked on almost every Unix you can name, Apple, Windows (various), almost 7 years of Java (a platform unto itself) now, AS 400, OS/2, and many others. I've ran farms of computers on various architectures, and I think I have enough cross-platform development experience to know the relative "stability" between OSes. In other words, I've lived through it.

do you believe you did enough coding on each of those platforms?

If you have something factual, concrete, and informative about the architectural differences between Windows XP and say, BeOS, that you think makes one BETTER TECHNOLOGICALLY than the other, then please share it.

ok, apparently you don't have enough coding experience with beos (as i assume you do with NT and the likes), as otherwise you wouldn't be asking such a question. but first of all, the latest beos, as you see it today, is circa-1998 technology. XP got released last fall.

why not compare the tech merits of BeOS to, say, win 2k?
how about fully object oriented system APIs (windows management, file management, media management, etc) for a start?
how about IPC performance undreamed of under NT (message passing throughput close to the system memory bandwidth, not to mention NT's concept of messaging is like baby-talk compared to beos messaging)?
how about 0.002 msec latencies on the sound producer side on a modest pentium-class pc @ 500mhz?
how about the linkable media nodes, forming producer-consumer chains of arbitrary length? ..ok, that more-or-less made it into directsound recently.
how about low system overhead so, e.g. watching mpeg4 in fullscreen is possible on a 400MHz celeron without YUV overlays?
how about dynamically loadable/unloadable drivers at the time when windows insisted on system reboot each time you touched anything in its hw manager?

This thread started by bashing Microsoft's code quality, but now has devolved into politics. Let's return to the original point: MS software does/does not suck (compared to X)

see, problem is, ms software not sucking is only skin deep. you peek below the surface and it gets scary. and this stems from the fact that
(A) MS seldom get things done right from the start, and
(B) legacy code/stuff is innate to software development.
those two things combined are a recipe for developers' (and eventually, users') grief.

ok, to be honest, the one API i liked under windows, which MS got right from the start, was ddraw (but not d3d!). i also like ms' vc from 4.0 onwards, although i get amazed at how they manage to re-introduce past bugs in the compiler.
 
Bjorn said:
Just because you didn't have problems doesn't mean that it's stable for everyone. Only with XP and W2K has MS been moving in the right direction when it comes to stability, but it's still nowhere near Linux or UNIX.

I don't agree with that. At least not as far as using it as a replacement for Windows at home (as in, running demos, games, graphics card drivers and so forth and so on). I have tried that a couple of times, usually after a new major release, and I tend to get just as many lockups/problems as with a Windows machine.

You mean using Linux as a replacement for Windows? Yep, it's more stable, although I was mainly referring to servers (where uptime really matters). As to the desktop... Sure, I have seen buggy Linux apps that crash; there are buggy apps on every platform. But the OS is more stable. I haven't had lock-ups or crashes in Linux. In individual apps maybe (I usually run cutting-edge software so there are bound to be a few bugs), but the OS itself is amazingly resilient.

On servers Windows has got better when it comes to stability. But Linux and UNIX are still an order of magnitude more stable.
 
darkblu said:
ok, apparently you don't have enough coding experience with beos (as i assume you do with NT and the likes), as otherwise you wouldn't be asking such a question. but first of all, the latest beos, as you see it today, is circa-1998 technology. XP got released last fall.

Yes, I haven't coded on BeOS, but I perused the developer docs. And no, I actually don't like the MS System APIs. I prefer to use Java, Delphi, C#, or some other higher level wrapper. I rarely deal with the Win32 or MFC apis directly.

why not compare the tech merits of BeOS to, say, win 2k?
how about fully object oriented system APIs (windows management, file management, media management, etc) for a start?

I'll give ya that, but NT is a microkernel, and those are supposed to be provided by user-mode libraries.

how about IPC performance undreamed of under NT (message passing throughput close to the system memory bandwidth, not to mention NT's concept of messaging is like baby-talk compared to beos messaging)?

Well, neither Unix nor NT is really optimized for inter-thread message passing (I assume you are talking about between threads). Unix has followed the general trend of trying to speed up fork() and reduce context switch times between processes. NT has tried to support and optimize both. But NT's message passing performance, like Mach's (behind NeXT/OS X), is restricted by its microkernel architecture. They trade off performance for safety.

But since you mention it, just what are the benchmarks between BeOS vs XP?

how about 0.002 msec latencies on the sound producer side on a modest pentium-class pc @ 500mhz?

I don't know. What are the latencies on a modern 5.1 DirectSound card that delivers HRTFs, doppler effects, reverb, occlusions, per-channel equalization, and all of the other overhead in the API? And does BeOS even support this level of functionality?


how about the linkable media nodes, forming producer-consumer chains of arbitrary length? ..ok, that more-or-less made it into directsound recently.

Also present outside of sound as well. The Java Media Framework has had this for a while, too. It looks like it came from IBM/Taligent, so I assume there's some shared code/design with Be.

how about low system overhead so, e.g. watching mpeg4 in fullscreen is possible on a 400MHz celeron without YUV overlays?
What's the overhead for doing this using DirectX?

how about dynamically loadable/unloadable drivers at the time when windows insisted on system reboot each time you touched anything in its hw manager?

Some XP drivers can be dynamically unloaded. It all depends on whether any applications have them open or not. If any apps have a driver open, then Windows will ask for a reboot. If not, it won't, IIRC.

This all sounds nice for a developer, but again, how is this gonna be sold to the consumer?
 
Just a note. Both BeOS and Windows NT claim to be microkernel systems. Neither is.

Windows NT in particular is not micro, and since version 4, where Microsoft stuffed the window manager and GDI into the kernel, it certainly hasn't been. Not in the sense that MACH or QNX are, anyway.

A true microkernel only manages processes/threads (dispatch and maybe scheduling), communications between these and a way to map hardware to external modules. Some even run virtual memory management as an external module. I/O has nothing to do in a microkernel.

A microkernel should facilitate very fine-grained thread-level parallelism. However, in real life they are usually used as the foundation of a monolithic subsystem, as witnessed in NeXT (MACH with a BSD subsystem) and in OS X (same?). And hence the advantage is gone and you're left with more overhead when using the monolithic subsystem.

Cheers
Gubbi
 
Chalnoth said:
misae said:
Im just trying to say that if MS tomorrow released another OS that was flawed in a way that it no longer met the needs of the general public MS would not be able to sell it... well thats my own opinion of course.

Like Windows ME? Windows ME was generally considered Microsoft's crappiest OS attempt in the last couple of years, and yet it still made it into lots and lots of PCs, primarily because it was bundled with new PCs.

Due to these OEM deals, in part, Microsoft can severely screw up and still sell lots and lots of units.

It is not as bad as some make out... You know, I feel really dirty defending Windows ME. However, looking at it from a support point of view, it does have a couple of nifty features, like System Restore. And generally speaking, if you do not ask too much of it, it behaves.

The original horror stories, I think, were based on the fact that many device drivers stopped working correctly when people upgraded from Win98 to ME. It took a little while for WinME drivers to come out.

When your modem or your gfx card stops working because you upgraded your OS, I can see why you are going to get mad and call it a piece of crap.

It was a filler product and perhaps just a cash cow. Supporting the OS has made me see it in a different light. Not supporting it as in supporting broadband devices/internet with WinME machines, but the actual OS itself built in to many OEM machines.


WRT OEM deals.. I agree with you 100%

Right that's it I am going to stop defending MS 'cos I feel dirty for some reason.
 
Meanwhile...

Questions I've asked
continue to be ignored every time I mention to you that directly answering such questions and examples might be more productive... as we spam on and on with comments that don't seem connected to the point but by their volume will eventually redefine or end the discussion.

DemoCoder said:
You keep harping on BeOS, so I'll address your point (do you have one?). Let us suppose that BeOS is a "quality" implementation. The question is: What benefits does using BeOS deliver above and beyond Windows, MacOS, and Linux such that I, as an end user, would clearly want to choose it over the rivals? How will it make me more productive? More entertained? More secure?

Are you retracting your comments about elegance? In any case, the layout and operation of the OS was much cleaner than Windows'. Since you insist on illustrations of why, I'll point out such things as a modular filesystem (well, modular everything really), true SMP before it was anything but a joke on Windows NT (I presume XP is more extensively multi-threaded nowadays?), inter-process scripting tools as an OS standard (see my Amiga example), and functionality similar to the Amiga datatype system (see my other Amiga example below). My point has been and continues to be that it did not fail because "elegance" didn't matter, but because Microsoft controlled and still controls the marketplace. This control is extending, and you are saying this does not matter and that we are not missing features as a result. I think this is silly. I still do. I also think your comments completely and utterly fail to address this and try to talk about other things in its stead.

Here is an example of what I took for granted when my main OS at the time offered me more functionality than Windows still does.

ARexx: On the Amiga, there was an inter-process scripting language, ARexx (based on the IBM mainframe language of similar name). The functionality this enabled was similar to what you can use Visual Basic for in Office applications, except of course any application could use it extremely easily... and it didn't require much memory (it was small, and this includes GUI functionality for ARexx scripts). Digging through my memory, this included my text editor doing things such as compiling code (simply by implementing ARexx support), spellchecking, or anything another ARexx-enabled program could do or that could be done by ARexx (which would include any CLI functionality as well). It also allowed such a thing as my directory manager literally being able to add any functionality, including FTP and batch processing of files (and I mean just about any type of processing). More importantly, it allowed me to add such functionality, or download it, myself for my existing programs. Other things included allowing multiple media-related programs, such as a 3D renderer and an image processor, to automate rendering and processing in any combination of compatible formats.

This worked on a 68000 processor, and required enough memory to have the applicable programs running.

Datatypes: on the Amiga, per-application viewing functionality was redundant. A datatype could be written, and any program that wanted to view that image type would use it. Functionality that could be exposed included loading, saving, and editing (and playing and viewing, etc). There were many different datatypes available, and what you got as a result were datatypes progressing in speed and features over time. This took the place of some plugins as we have for web browsers, except it happened in every program that used datatypes (libraries were re-entrant and typically multi-threaded). There were datatypes for html and other hypertext formats (including the late "Amigaguide" format, may it R.I.P.), and all audio, image, and video formats I knew of at the time (therefore all audio and video programs that supported datatypes supported all formats you had datatypes for).

On Windows, I need to find programs with the functionality I want, and inter-operability to this degree is not an open OS feature because Microsoft gains more from making inter-operability a feature of their applications and making it difficult for other applications to have the same functionality.

Your question is kind of like asking "Be Inc. just shipped a new hamburger. How come most people are still eating McDonalds, Burger King, and Wendy's?" My question would be, does this Hamburger taste significantly better to the audience it is intended for?

No, my question isn't like asking that at all. If you could choose OSes as readily as you could choose burgers, Microsoft would not be a monopoly. For example, from the outset of your example, McDonalds, Burger King, and Wendy's all compete with one another, and this does not parallel Windows at all. Is there a reason you made an analogy so weakly linked to the discussion?

So can you tell me why I should choose BeOS over others?

I see, so when you said BeOS's elegance didn't matter you didn't mean it was more elegant and other factors caused it to fail, but that elegance itself does not matter? I did say earlier if you don't think quality matters, we shouldn't bother to have this discussion.

Let me quote another post you made as I think it might be illustrative of how your perspective on the history of computing is skewed:

One more comment: Frequently people make the claim that today's software requires way more resources but does the same thing. It is best phrased as "How could word processors run on 386s, but now they require 1Ghz and 256mb of RAM?"

Well, the statement is not true. Back when you were running on a 386, your word processor couldn't render antialiased TrueType fonts on the screen and at 600DPI on the printer. It didn't have support for international languages (e.g. BIDI text, Unicode, Chinese input method), nor did it do spell checking and grammar checking in all these languages, if it did any of them at all. You could not use the WYSIWYG word processor to generate a presentation or publish electronically. It didn't support anywhere near the number of layout options available nowadays. Could it merge in data from a database? Could it forward the document in email? Did it have revision control? Did it have a scripting language? Collaboration and workflow tracking? Document sharing? The list goes on.

Actually, let me mention what I could do on my 68020 AmigaOS machine with about 4 MB of memory (I had more, which allowed me to multi-task some bigger apps, but we'll use 4 MB as the base requirement).

I had WYSIWYG word processing, including TrueType and Unicode support. I do admit I'm not sure if it rendered anti-aliased; I suppose I'd have to check the font library specifications and see if it offered that.
I had international language support (the keyboard handler was modular, and the locale library system allowed applications to offload handling of different languages, so you'd write the application once and provide locale files to allow it to support another language). You'd simply add localization files to add languages.
There were foreign language spell checkers (I'm puzzled as to why you perceive this as a hurdle), though I don't have a complete list of which languages they existed for.
What layout options do you refer to? It compared favorably to the last time I used Microsoft Word, and I'm not sure what layout additions you think require a modern computer. I will point out I had a WYSIWYG DTP program on this computer as well (PageStream), and I'm pretty sure that would have whatever you had in mind covered (the blasted thing required like 3-8 megs of memory though).
Your merging data from a database, forwarding documents in email, and revision control examples are laughable...see my ARexx mention for how much further than this I could go on my Amiga.
Scripting language, see what I mentioned already.
Collaboration and workflow tracking and document sharing: there you have a point. It wasn't emphasized then; you'd have had to cobble something together with ARexx and the already existing revision control mechanisms. Actually, there were public domain revision systems that supported ARexx that should have made this easy, but I never used them outside of coding. This was a big feature for PageStream at least, but I don't recall for certain a word processor supporting such. I have a vague recollection, but it has been years.

Your list is really rather puzzling. You think these things are new or require lots of computing power? For "proof", use "Amiga" and some of these keywords and do some searches, and you should find substantiation for most of this.

That one company being able to dictate the evolution of so many paths of software development to suit its own profitability is less desirable for consumers seems pretty clear cut. I really would prefer to be playing games and typing this under BeOS, but you'd rather dismiss the impact Microsoft's monopoly had on the ability of that OS to exist in the marketplace. To me, this suggests that our discussion is not going anywhere. Perhaps you could answer that first post I made in reply to you and we could maybe progress there?

First, consumers don't care about software development, they care about the end application. For most of the short history of consumer electronics, consumers have interacted with closed systems: VCRs, microwaves, consoles, DVDs, TVs, etc. They never had to think about what kind of OS was running inside. They just want their microwave oven to work for them. It is only the PC that has introduced the notion of some general-purpose (and brittle/buggy) device that is reconfigurable for a given task by software installed by the end user.

It's strange...
Who has the VCR, microwave, DVD, or TV monopoly? I could have sworn people could buy any damned brand they please based on whichever was best. How does this parallel Microsoft's monopoly? Oh, wait, you also mentioned consoles. For the Microsoft situation to parallel this, we'd have to throw out prior OSes and have a new set coming out every few years to compete. But we don't. What was the point of this comment?

So, I'd like to know why you'd rather be playing a given game under BeOS? In the past, most games booted the operating system right out of the way and went straight to the hardware.

Today, games do not boot the operating system right out of the way. Why did you make that comment?

Usually when I am playing a game, I am not really concerned about the GUI or kernel underlying it, so I'd really be interested in why you are so keen on playing games on BeOS. Just what benefit do you think you'd derive?

Hmm...well, let's see. If I could play games and utilize BeOS for all tasks, I'd be using it right now. Since I can't, I don't, as I'd have to reboot in between applications. Microsoft has gained this position not because their OS is the highest quality, but because they have enough control to prevent another OS platform from successfully competing for applications. Witness the substantiated commentary from the Antitrust case. Maybe we could find a web site with an itemized list of what was substantiated in the case and save some time?

You really think MS killed BeOS in the market place, and not the fact that #1 Apple killed it,

Hmm...well, I always appreciate how your statements are facts and not opinions, especially with all the justification you provide.

and #2 consumers didn't even know about it (Be's business plan based on selling to Apple or selling multi-CPU "hacker boxes" to elite developers!)

I never used a Power PC BeOS version, I only used an x86 version. You know there was an x86 version of BeOS, right? Why are you discussing Apple then?

and #3 consumers weren't the audience?

Consumers weren't the audience? Hmm... oh wait, you said this is a fact, so it must be true...

I mean, do your parents really covet a dual-CPU box? Do they care that BeOS markets features like "multi-processing", "multithreading", and "memory protection", features that every OS I know of now has?

So on the one hand, having those features means it wasn't targeted at consumers, and on the other, every OS now has them. Does that mean XP isn't targeted at consumers? Do you understand why I think you sound a bit wacky? It sounds like you are saying it only counts as good and useful when Windows does it, which is the type of reasoning that will excuse any lack Windows has, no matter what anyone proposes could be done if the functionality were there (don't take my word for it; that's what you just did).

There are about 2 dozen FREE webservers out there and many of them compare favorably with Apache (also free). So why do you suppose that most of them aren't used? Cause Microsoft killed them with a free IIS? Or is it because they are not sufficiently different from Apache to warrant using them!

Wouldn't this be a situation where competition still exists? Why do you bring it up? Or are you really trying to say "here, look, Microsoft still has to compete here, so your saying they don't have to compete in some other arena is not true"? Do I have to point out why this doesn't make sense?

Face the facts: BeOS is not sufficiently better or different (and in many ways, much inferior) to the present big three: Windows, MacOS, and *Nix to warrant anyone to use such a new and unproven platform to do anything. It is not revolutionary, and most of the features it markets itself as having are old hat and not new at all.

Well, I did say "Or, we could redefine the discussion and look at what Windows offers us right now and ignore what other OSes have offered us and when and on what systems they achieved it.", so I suppose I shouldn't be surprised. Obviously Windows offers everything I could ever need and all the functionality of Amiga OS and BeOS that I miss from 10 years ago.

Microsoft can only control path dependence if the path they are taking is mostly correct. They can't go against the inertia of their own userbase and they can't impose arbitrarily high costs on the consumer. They have been thwarted in the past and not every Microsoft project is instantly successful in leading the market.

They've failed in some efforts to establish a monopoly and profitability, so therefore the areas in which they have succeeded do not matter?

At one point in time, Microsoft was building MSN out to beat the internet, complete with proprietary non-IP based protocols, its own non-HTML language, etc (a big AOL). At another point in time, Microsoft was trying to corrupt the XML specs with their own extensions, but recanted in the end. Microsoft fought hard against the virtual machine concept, but in the end, adopted a Java-like language (C#) and VM (CLR), which turns out to be an improvement on Java.

An improvement, is it? I thought Java was modular; what features do these have that can't be done in Java? I suppose the side effect that it prevents people using something that could then be used on an alternative to Windows is an innocent side effect?

On the Web Services side, Microsoft is now very open and cooperative compared to some of their rivals.

Pardon me if I don't take your description of "very open and cooperative" seriously given the discussion I've had with you so far. Could you elaborate and provide some info on how this "very open and cooperative" attitude reflects something that won't directly result in more market share for Microsoft based on a proprietary standard of some sort?

And Microsoft's critics aren't always correct. Remember when Microsoft decided to drop support for MCD OpenGL drivers? Carmack got all enraged and wrote a bunch of missives against MS, letter writing/petition, the whole shebang? Well, it turns out in the end that dropping MCDs was the right decision all along, and Microsoft's decision wasn't necessarily a carefully calculated decision to kill OpenGL, but was more a technical decision not to support something that the IHVs weren't asking for.

No I don't remember. Is that something sort of like "glide" was for Voodoo cards, a partial set of OpenGL functionality? Why was Carmack insistent on this instead of the full ICD?
I suppose this is similar to not having OpenGL drivers shipping with XP?

Look, I don't like everything Microsoft puts out. I still don't like the DirectX C++ APIs and still prefer OpenGL. (However, with the DirectX 9 Managed Extensions, the API is much nicer.) And in fact, many areas of Windows still need improvement.

No kidding? Hey, which areas? Let's compare them to other OSes and see how long ago an improvement was offered elsewhere. Then let's ponder whether having had to compete on quality with other OSes might have resulted in Windows having the improvement already.

Or we could go through another batch of text talking around that simple concept.

But Microsoft has systematically been delivering vast improvements in their entire software line, and I am sick of people claiming MS is holding progress back. There is no one stopping any developer from writing the ultimate web browser or spreadsheet. Absolutely nothing.

Did you just say this about a "web browser" with a straight face?
 
madshi said:
Well, of course you can make it that complicated, if you like. How about this?
Code:
procedure MemClear(var buf: array of byte);
var i1 : integer;
begin
  for i1 := 0 to high(buf) do buf[i1] := 0;
end;
That's hardly longer than your C code, is it? Furthermore I think you don't need to know pascal to understand that code. Just a basic knowledge of *any* programming language is enough to read and understand it. But just look at your C code. You have to know C to understand that. Otherwise e.g. you wouldn't know how this "*dest++ = 0" statement works. Which of the three actions here (*, ++, =) is done first? It's a bit strange that the order in reality is "*, =, ++", isn't it? A programmer who is experienced in e.g. basic or pascal, but doesn't know C, would think that the ++ is evaluated before the assignment is done...

Yes, Pascal is often more readable, that's true.
I honestly didn't know you could write "var buf: array of byte" directly. I've only seriously used pascal in my early days of programming and while developing a licence/installation handling system (with live update and all the bells and whistles) for my former employer.

Another painful thing here: in Pascal you have to declare all variables at the top of the function, which reduces the readability of larger functions, makes it easier to make mistakes, plus makes it harder for the compiler to optimize. Like this for instance:

Code:
for (int i = 0; i < len; i++){
   int temp = ... ;
   
   // use the temp var here ...
}

If you were to mistakenly use the temp variable after the for-loop, the compiler would tell you so. Further, the compiler knows that the temp variable doesn't need to be preserved across iterations, something it can't assume if it were declared outside this scope.

Also, operator overloading in C++ is a feature that's extremely useful, I couldn't live without it. It improves productivity and readability a lot.

Also, in C++ you can pass class arguments by reference and by value, in pascal you can only pass by reference. Well, you can manually create a copy and pass it and then manually destroy it, but that's not the point.

madshi said:
What low-level detail do you mean? About efficiency: Just for fun I've compiled your C function and my pascal memClear function. Here's the assembler code of your C function (compiled in BCB6):
Code:
push ebp
mov ebp,esp
jmp +9
mov eax,[ebp+8]
mov byte ptr [eax],0
inc dword ptr [ebp+8]
mov edx,[ebp+$c]
add dword ptr [ebp+$c],-1
test edx,edx
jnz -14
pop ebp
ret
And here's the assembler code of the pascal function:
Code:
test edx,edx
jl +8
inc edx
mov byte ptr [eax], 0
inc eax
dec edx
jnz -7
ret
Now which one is more efficient? ;)

This rather shows that you either haven't turned optimisations on, ran it in debug mode or something like that for the C code. It passes arguments on the stack, uses variables on the stack etc. Anyway, my example was about the verboseness (<- a real word?) of the code.
 
DemoCoder said:
I find the overuse of auto-increment, ",", and other operators in C to be highly annoying. They just subtract from readability and don't add to compiler optimization at all. Some C coders think it is quite clever to be able to reduce 3-4 lines of code to a single glob of operators, but I think it's bad coding.

I also think pointer arithmetic is one of the worst language features. There's no reason why you can use an array abstraction for the same purpose, and static type checking tools can often check the bounds for you.

I don't agree at all.
The "," is quite useless yes, but the rest is very useful and often improves the readability IMO.
a++; is more readable than a = a + 1; or even a += 1;

For an inexperienced programmer, statements like *dest++ = 0; will most likely be confusing. But for the experienced programmer it's no more a mystery than *dest = 0; dest++;
Carefully used, these operators improve readability as they make the code shorter and easier to get an overview of. Of course one should avoid packing loads of this stuff into one fatass statement, like *(dest += 2) = *(--src + 2) + *src2++;

Of course, not all of these constructs improve performance either, but if you know what's happening under the hood you can improve performance. For instance,

Code:
do {
  // some code
} while (--len);

will assemble into something like this:

Code:
start:
// some code
dec ecx
jnz start

Doing it like this,
Code:
for (int i = 0; i < len; i++){
 // some code
}

will generate something like this:

Code:
xor ecx, ecx
jmp mid
start:
// some code
inc ecx
mid:
cmp ecx, eax
jnc start

You have both lost a register and added an instruction to the inner loop.
 
Demalion, you are the one asserting a claim, so you are the one who bears the burden of proof. I can't even find the questions you are talking about, but in any case, you haven't answered my questions either.


I'll point out such things as a modular filesystem (well, modular everything really)
You mean like how on every other OS you can mount multiple filesystems? For example, I have a PGP filesystem on my box today. Or do you mean like the Object File System coming up in Windows, where the entire filesystem stores metadata in a database and can be queried like a database with a unified interface?


true SMP before it was anything but a joke on Windows NT (I presume XP is more extensively multi-threaded nowadays?)
SMP != Multithreading. If BeOS is so good at SMP, why aren't they selling BeOS servers to all those companies running Solaris for its SMP and NUMA support? How many endusers have 2-way SMP boxes?


, inter-process scripting tools as an OS standard (see my Amiga example), similar functionality as the Amiga datatype system (see my other Amiga example below).

#1 Windows has scripting tools as standard, it's called the Windows Scripting Host. Moreover, Windows can run ANY scripting language through this interface, including JScript, VBScript, PythonScript, PerlScript, TCL, or any you choose. This is far beyond ARexx.

#2 Not all Amiga apps understood ARexx commands, nor did the built-in Amiga shell until later versions. ARexx was, in fact, most capable when you used WShell from a third party. The level of ARexx support in AmigaDOS was less functional than the capabilities you can achieve today with WSH and COM components. There is far more scriptability in Windows. The Windows development methodology encourages applications to be broken into reusable components, which encourages scriptability.

#3 The datatype system is no different than the system today in Windows for invoking viewers for MIME types. Windows can map each and every MIME type to multiple associated viewing components (not just one). Because the browser is integrated into the shell, Windows can view any datatype for which a registered viewer is installed.



Here is an example of what I took for granted when my main OS at the time offered me more functionality, for my uses, than Windows does even now.

Arexx: example deleted

Things that resulted from this functionality were similar to things such as you can use Visual Basic for in Office applications, except of course any application could use it extremely easily.

Windows scripting can script almost any application or COM component which exposes an interface for it. It is far more ubiquitous than ARexx was. You're talking to a Rexx lover. I wrote a complete BBS in ARexx for my VT100 terminal, and used my terminal to drive all sorts of information-scraping apps. But ARexx is not an innovation unique to the Amiga. I coded on the Amiga for several years.


Datatypes: on the Amiga, viewing functionality was redundant. A datatype could be written and any program that wanted to view that image type would use it. Functionality that could be exposed included loading, saving, and editing (and playing and viewing, etc).

Datatypes weren't introduced until after AmigaDOS 2.0 and most Amiga users never saw them before Win95. You may as well start talking about other obscure Amiga APIs like the Commodities.library. By the time datatypes existed, I was already using MIME enabled viewer registries on Unix email clients.

Why not talk about AmigaDOS's shortcomings? Archaic BPTR-based DOS API. No resource tracking (hey, don't forget to call CloseLibrary!). No memory protection (Guru Meditation, anyone?). No device-independent graphics (hardwired to Amiga hardware so tightly that even Commodore couldn't replace the Amiga HW chipset without breaking 90% of apps). Preemptive multitasking *BROKEN* until DOS 2.0 (obscure bug in exec.library). Layers.library used a naive N^3 algorithm for computing damage rectangles, which slowed down any screen with more than a few open windows.

And worst of all, the Amiga's OFS filesystem was possibly the slowest and worst filesystem ever written! Yes, it was replaced by FFS later, but it still needed special hacks like DirCache in AmigaDOS3.1 to make it work fast.


No, my question isn't like asking that at all. If you could choose OSes as readily as you could choose burgers, Microsoft would not be a monopoly.

Tell me why you cannot choose your OS? No one is stopping you from downloading/buying BeOS and running it, just like they aren't stopping you from running Linux. Does Microsoft put you in jail or shoot you if you try to run another OS? No. So stop saying you don't have a choice. And you can avoid paying for Windows, there are vendors who ship OS-less boxes.



So can you tell me why I should choose BeOS over others?

I see, so when you said BeOS's elegance didn't matter you didn't mean it was more elegant and other factors caused it to fail, but that elegance itself does not matter? I did say earlier if you don't think quality matters, we shouldn't bother to have this discussion.

Quality != Elegance. You can have an elegant architecture that is horrifically buggy. Secondly, why should my mother, who just wants to send email, or my office worker, who just needs to write documents, choose BeOS over any other OS?

You haven't provided ANY compelling reason why the end user will benefit, be more productive, and happier with BeOS. All you can talk about is abstract concepts like SMP, message passing, threading, etc. All irrelevant to the end user. You may as well be talking about what kind of timing belt my car uses. I don't care. I just want to drive it.


Let me quote another post you made as I think it might be illustrative of how your perspective on the history of computing is skewed:

One more comment: Frequently people make the claim that today's software requires way more resources but does the same thing. It is best phrased as "How could word processors run on 386s, but now they require 1GHz and 256MB of RAM?"

Well, the statement is not true. Back when you were running on a 386, your word processor couldn't render antialiased TrueType fonts on the screen and at 600DPI on the printer. It didn't have support for international languages (e.g. BIDI text, Unicode, Chinese input methods), nor did it do spell checking and grammar checking in all these languages, if it did any of them at all. You could not use the WYSIWYG word processor to generate a presentation or publish electronically. It didn't support anywhere near the number of layout options available nowadays. Could it merge in data from a database? Could it forward the document in email? Did it have revision control? Did it have a scripting language? Collaboration and workflow tracking? Document sharing? The list goes on.

I had WYSIWYG word processing, including TrueType and Unicode support. I do admit I'm not sure if it rendered anti-aliased; I suppose I'd have to check the font library specifications and see if it offered that.

Oh really? That's interesting, given that the AmigaOS didn't support Unicode (Commodore, in fact, died before Unicode gained industry adoption). AmigaOS's *Text() calls assumed 8-bit wide chars. AmigaOS itself was never localized for other regions (Chinese AmigaOS?). Perhaps you're talking about the PageStream hack, where they literally had to build their own mini-OS layer from the ground up to support this on the Amiga.

AmigaOS didn't support TrueType. It had its own proprietary fonts rendered by the outline.library. And the way the Amiga's archaic screen rendering and printer rendering worked, it basically converted a vectorized font into a HUGE bitmap font in memory so it could work with graphics.library and intuition.library which relied on bitmap fonts.



I had international language support (the keyboard handler was modular, and the locale library system allowed applications to offload handling of different languages, so you'd write the application once and provide locale files to allow it to support another language). You'd simply add localization files to add languages.

Amiga wasn't even UTF-8 capable. What the hell are you talking about? The proof is in the pudding: Could I write Chinese or Arabic in an Amiga app? Could I save filenames using a native language? BIDI text support?


There were foreign language spell checkers (I'm puzzled as to why you perceive this as a hurdle), though I don't have a complete list of which languages they existed for.

Sure, there were some limited spell checking and grammar options. Now show me an Amiga word-processor that could spell-check and grammar check Hindi.

Could I load more than one or two outline fonts into a document?


Your merging data from a database, forwarding documents in email, and revision control examples are laughable...see my ARexx mention for how much further than this I could go on my Amiga.

In other words, it had none of these features and you had to write them yourself. Document management/revision control is "Laughable"?

Your list is really rather puzzling. You think these things are new or require lots of computing power? For "proof", use "Amiga" and some of these keywords and do some searches, and you should find substantiation for most of this.

I was an Amiga fanatic for several years, I don't need to run a search. Amiga word processors SUCKED. If I wanted good quality output I had to use TeX. Most Amiga WPs couldn't even do kerning and ligatures correctly. Could I export a comma-delimited datafile from MATLAB and graph it in my WP report on the Amiga?



It's strange...
who has the VCR, Microwave, DVD, TV monopolies? I could have sworn people could buy any damned brand they please based on whichever was best. This parallels Microsoft's monopoly how?

You have a knack for not being able to read. END USERS DO NOT CARE ABOUT OPERATING SYSTEM ARCHITECTURE. Ok? Your GRANDMOTHER DOES NOT CARE ABOUT IPC, THREADING, AND SMP. Clear enough? Consumers buy WIDGETS THAT DO THINGS. BeOS is in the WRONG BUSINESS. They should be selling to EMBEDDED DEVICE MANUFACTURERS.

To most people, THE PC IS A BLACKBOX. They DON'T CARE HOW SOMETHING IS IMPLEMENTED, AS LONG AS IT DOES WHAT THEY WANT. That's why NO ONE CARES HOW ELEGANT/INELEGANT THE OPERATING SYSTEM IN THEIR MICROWAVE OVEN IS.




So, I'd like to know why you'd rather be playing a given game under BeOS? In the past, most games booted the operating system right out of the way and went straight to the hardware.

Today, games do not boot the operating system right out of the way. Why did you make that comment?

What advantages, AS A USER, TODAY, will you gain from using BeOS? CONCRETE ADVANTAGES, not "Well, it has SMP". What does it ALLOW YOU TO DO BETTER/FASTER AS AN ENDUSER?

You really think MS killed BeOS in the market place, and not the fact that #1 Apple killed it,

Hmm...well, I always appreciate how your statements are facts and not opinions, especially with all the justification you provide.

and #2 consumers didn't even know about it (Be's business plan based on selling to Apple or selling multi-CPU "hacker boxes" to elite developers!)

I never used a Power PC BeOS version, I only used an x86 version. You know there was an x86 version of BeOS, right? Why are you discussing Apple then?

Because BeOS was originally written and marketed to niche users. Marketing materials were crafted to lure hackers to buy BeOS boxes, not average users. Because Be was started by an ex-Apple executive and spent a large amount of their time trying to sell it back to Apple? Because at one point, BeOS was going to be the basis for Mac OS X, and Steve Jobs came back and killed it, replacing it with NeXT?


and #3 consumer's weren't the audience?

Consumer's weren't the audience? Hmm...oh wait, you said this is a fact so it must be true..


Show me some fullpage magazine ads that Be took out that addressed average users directly, like the Mac and Windows ads you see.


So on the one hand, having those features means it wasn't targeted at consumers, and on the other every OS now has them. Does that mean XP isn't targeted at consumers?

Microsoft doesn't market SMP to end users in their XP Home advertising do they? No, they market tasks the user can accomplish with the OS, like media playing, DVD burning, instant messaging, email, etc.

Microsoft markets SMP features to ENTERPRISEs.

They have a clear and distinctive message tailored to the people they are selling to. Are you so dense that you can't understand that a business can fail not because of their technology, but because of the WAY THEY COMMUNICATE THAT TECHNOLOGY TO BUYERS!





Well, I did say "Or, we could redefine the discussion and look at what Windows offers us right now and ignore what other OSes have offered us and when and on what systems they achieved it.", so I suppose I shouldn't be surprised. Obviously Windows offers everything I could ever need and all the functionality of Amiga OS and BeOS that I miss from 10 years ago.

Oh gawd, you're right. Microsoft killed the Amiga! If only MS didn't exist, the Amiga would still be popular today? Oh, they must have killed the Atari ST as well! Oh my gawd, don't forget how they killed GEOS.


No I don't remember. Is that something sort of like "glide" was for Voodoo cards, a partial set of OpenGL functionality? Why was Carmack insistent on this instead of the full ICD?

Carmack thought that IHV's couldn't implement full, complete ICDs. http://www.d6.com/users/checker/openglpr.htm

But Microsoft has systematically been delivering vast improvements in their entire software line, and I am sick of people claiming MS is holding progress back. There is no one stopping any developer from writing the ultimate web browser or spreadsheet. Absolutely nothing.

Did you just say this about a "web browser" with a straight face?

Yes. I'm gonna open up my text editor and start writing a web browser today. How will Microsoft stop me? Kill me? I may not be able to convince consumers to buy it (especially if it doesn't do anything that IE doesn't already do), but that doesn't limit my freedom to write one.

WinZip isn't distributed with Windows, yet the vast majority of people I know use WinZip instead of the plethora of other free competitors. The fact that other ZIP authors have limited market share is a testament to the fact that WinZip had early-mover advantage, got brand recognition, and that unzipping stuff is a commodity.

Unless you can provide something so compellingly better as to convince people to switch, they have no incentive to do so.

Microsoft could not stop AOL/Netscape from releasing a Mozilla that runs under Windows.
 
Humus,
The two examples you wrote are semantically different. In one case you are counting down a non-local variable; in the other case you are counting up to a possibly data-dependent limit (depending on whether the value of len can change, or whether it is used by the loop). Why didn't you write the for loop as for(int i=len; i>0; i--)? The two loops have different meanings and aren't equivalent. If the code body used the iteration variable, they would definitely have different meanings.

In theory, a compiler could alter the loop to look like what you wrote, depending on the body of code. In fact, without a loop body, the compiler could have optimized away the entire loop if "len" was a dead variable. The macho-C "code to the metal" attitude is, more often than not, a recipe for de-optimization. Even experienced C developers do a bad job tweaking their C code to "assist" the compiler. They get cache alignment totally wrong, fubar up instruction scheduling, and make the compiler work harder to do global analysis.

In fact, what I said is not theory. I just tried it with GCC -O3 -S t.c on my Linux box, and here is the result of the loop you wrote with i++ counting upwards

Code:
.L5:
        decl    %eax
        jns     .L5


In fact, the two loops compiled to identical assembly.

Here's the code, try it in GCC -O3 -S yourself

Code:
main()
{
  int i;
  int len=10;
  for(i=0; i<len; i++)
    {
    }

  len=10;
  do
    {
    }while(--len);
}
 
Alright, my example sucked, should of course have the for-statement going the other way. Anyway, you're setting len to a constant, which of course gives the compiler additional optimisation possibilities; in this case it can remove the first len > 0 test before the loop. A more useful test would be a simple function with len as the in parameter, like
Code:
void foo(int len){
  for (int i = len; i > 0; i--){
  }
}

Now the compiler can't make any assumptions about the sign of len and has to do the test anyway.

gcc says:

Code:
	testl	%eax, %eax
	jle	.L8
.L6:
	decl	%eax
	testl	%eax, %eax
	jg	.L6
.L8:

vs. the while loop:

Code:
.L2:
	decl	%eax
	jne	.L2

I even got a testl instruction there (which is completely redundant though).

Either way, regardless of efficiency, using ++/-- and pointer arithmetic improves readability in many cases IMO.

Anyway, to go back to where all this coding stuff started: my comment about Pascal not being suitable for an experienced programmer. My point was that the language holds you back IMO; there's a lot of very useful stuff you can do in C++ that you can't do in Delphi. Stuff like operator overloading, templates etc.
 
madshi said:
Now which one is more efficient? ;)

Now now, in C you just call the POSIX memset() function. It's already written for you. On most platforms it's linked to assembly-level code that's vastly superior to what some/most people could code up. So it's efficient from the execution standpoint as well as from the speed-to-implementation standpoint. :)

Code:
memset(dest, 0, len);

One of the beauties of Java, is the speed to implementation standpoint. C++ is getting there. It has come a long way since I first picked it up [~92], but it has a long way to go when compared to Java.
 
Well, I don't know why GCC is putting a spurious test instruction in there, but I just compiled it again with

Code:
void foo(int len);
void foo2(int len);

int main()
{
  int len = 10;

  foo(len);
  foo2(len);

  return 0;
}

void foo(int len)
{
  int i;
  for (i = 0; i < len; i++)
    {
    }
}

void foo2(int len)
{
  do
    {
    } while (--len);
}

And got identical results. With the decrementing loop, you of course get something different, but then again, your code contains an obscure bug that could come back to bite you if you pass an incorrect value to your routine (e.g. you accidentally pass 0 or -1). The compiler-generated version (test parameter first) is more correct. You are getting an optimization by relying on the fact that you think this routine will never be called with an invalid value. I prefer to do error checking, and I would even add an explicit ASSERT() in that routine, with a unit test during the build process.

Another reason why you get the extra test: do/while has different semantics than for. Do will always execute the loop body at least once. For won't.


Even if the compiler sometimes makes mistakes, it's irrelevant. Unless this is an innermost loop, 90% of the time your CPU won't even touch this extra test instruction, so the cumulative decrease in performance will be almost zero. I could just as easily argue that if I wrote the entire app in 100% assembly, I could shave off a bunch of instructions, but I guarantee you, there won't be much performance increase, and likely a decrease depending on the CPU being used. Modern compilers can even compile and inline multiple versions of the same subroutine depending on the data passed to it.


Knuth said "premature optimization is the root of all evil", and I think history has borne this out. Write your app first. Concentrate on using the right data structures and algorithms. Profile, profile, profile. Go back and patch the hotspots with more optimized code, or better yet, an inline assembly routine.

Time spent worrying about whether the C compiler is going to emit an extra TESTL instruction could be better spent optimizing the way you send vertices to the driver, or designing a better visibility algorithm for your scene.
 
To go way back here to the other thread,

Typedef Enum Wrote...
I would also put MSDN up against any Help application out there. Having some 3GB worth of information @ your disposal is no joke.

I hope it's better in .NET than it was in VC/VB5. It was easily the most useless online documentation I've ever encountered. Of course, something is better than nothing. Still, I typically ended up at the MSDN website when I needed some reference. I installed several revisions of it (in their entirety, or so I thought) before finally giving up.
 
The problem with MSDN is the way the help is organized, especially with regard to API docs. That's one thing I love about Java: documentation is structured exactly like the class namespaces, so to find the documentation for any function you just go to dir/namespace1/namespace2/namespace3/classname#function. Sometimes it takes me a lot of searching around in MSDN to find all the documentation relevant to a particular API.
 