What other hardware/Technology is on the horizon?

Well, I pretty much agree with everything you say, so this is quite irrelevant, but for a simple app like that the compiler can trace that the function will never be called with a negative argument, so it can still optimize in the same way. For most real apps, though, it won't be able to decide that at compile time for lots of functions. Setting len = -10 instead would give you different code.

You can't say that my code has an "obscure bug". If I call a function with an invalid parameter, the error is elsewhere, not in that function. Adding an ASSERT() of course helps for debugging, and is enough for that and preferable to having unnecessary run-time checks. If I write a polygon class, I know that the number of vertices will always be >= 3. A run-time check for that would be unnecessary and would actually reduce the chance of detecting the error, since the code won't crash or otherwise misbehave when I have invalid data. An ASSERT() solves that.
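A minimal C sketch of that point, with names of my own invention (polygon_area is an example, not code from this thread): the n >= 3 precondition is stated with assert(), which documents the contract and traps violations in debug builds, and disappears entirely when compiled with -DNDEBUG, so release code pays for no run-time branch.

```c
#include <assert.h>
#include <stddef.h>

/* Signed area of a simple polygon via the shoelace formula.
   The caller guarantees at least 3 vertices; we document and check
   that with assert() rather than a run-time error path.  Compiling
   with -DNDEBUG removes the checks completely. */
double polygon_area(const double *x, const double *y, int n)
{
    assert(x != NULL && y != NULL);
    assert(n >= 3);   /* precondition: a polygon needs >= 3 vertices */

    double a = 0.0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;               /* next vertex, wrapping */
        a += x[i] * y[j] - x[j] * y[i];
    }
    return 0.5 * a;   /* positive for counter-clockwise winding */
}
```

In a debug build, calling this with n = 2 aborts immediately at the assert, right where the invalid data first shows up, instead of silently returning a meaningless "area".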

Anyway, efficiency was never really my main point in this discussion, which somehow went quite far in a direction I didn't expect. As you said, the real performance advantages come from a proper data structure. An über-l33t h4x0r linear-array lookup in assembler will be slower than a sloppily written binary-search-tree lookup in C for large data sets.
 
Well, I just wanted to point out that there is a subtle difference between the semantics of the do and for loops as they were written.
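The semantic difference in question can be sketched like this (function names are mine): a for loop tests its condition before the first iteration, while a do loop always runs the body at least once.

```c
/* Count how many times each loop body executes.  The two loops agree
   whenever n > 0, but differ when the condition is false from the
   start: for tests before the first iteration, do tests after it. */
int for_iterations(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        count++;
    return count;   /* 0 when n <= 0 */
}

int do_iterations(int n)
{
    int count = 0, i = 0;
    do {
        count++;
        i++;
    } while (i < n);
    return count;   /* always at least 1 */
}
```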

One other point: no programmer is perfect, and rarely do people write all the ASSERTs they should. It is very helpful to have bounds/null-dereference checking built into the compiler/language. In debug mode, every array access and pointer dereference is checked; in production mode, this code can be removed.

Eiffel is a great example of this. Asserts are not C preprocessor hacks, they are fully part of the language and have their own inheritance semantics. They can be used by the IDE to produce useful documentation about the function (such as telling you what the illegal values are), and the compiler can actually use information in the assertions for optimizations! (e.g. if I assert that a value is never greater than 3, the compiler can use this information)
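C can only crudely approximate this with the preprocessor, which is exactly the point being made, but as a sketch of what Eiffel's require/ensure clauses mean (macro and function names are mine, and this is not Eiffel syntax):

```c
#include <assert.h>

/* Rough C approximation of Eiffel's design-by-contract clauses.  In
   Eiffel these are part of the routine's interface, are inherited by
   descendants, and feed both documentation and the optimizer; here
   they are just thin wrappers around assert(). */
#define REQUIRE(cond) assert(cond)   /* precondition  */
#define ENSURE(cond)  assert(cond)   /* postcondition */

/* Integer square root: the contract states what the caller must
   provide and what the routine guarantees in return. */
int int_sqrt(int n)
{
    REQUIRE(n >= 0);                 /* caller must not pass negatives */

    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;

    ENSURE(r * r <= n && (r + 1) * (r + 1) > n);
    return r;
}
```

The contrast with real Eiffel is that there the compiler and IDE actually see these clauses, so they can surface them in generated documentation and exploit them for optimization; the macros above are invisible to both.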

Eiffel also provides optional garbage collection, automatic struct inlining, bounds checks, and lots of other productivity boosters for developers. So, you can program in C and then purchase lots of tools like PCLint, BoundsChecker, Purify, ElectricFence, etc., to try to find all your resource leaks and pointer bugs, or you can have it built into the language as standard.

I prefer the language to support these features by design, rather than have them glued on later via the C preprocessor.

C# delivers many of the same benefits: garbage collection, bounds checks, null checks. But when you want to speed up a section of code, you go into "unsafe" mode for a critical section, where you can do your pointer arithmetic, etc.

That way, you expose only a small fragment of your overall source code to such errors, buffer overflow exploits, etc., and not the entire codebase by default, and you are not at the mercy of the 90% of programmers in the world who are not uber-leet coders.
 
Humus said:
Another painful thing here: in Pascal you have to declare all variables at the top of the function, which reduces the readability of larger functions, makes it easier to make mistakes, plus makes it harder for the compiler to optimize.
I think the optimizer has no big problem with it. If I try to check local variables in Delphi's debugger when they're not in use in the current portion of the function, I always get something like "this variable is currently not accessible due to optimization". Sometimes the optimizer is much more clever than one would think. Also, modern programming rules tell us to make functions as short as possible to make them better testable. Finally, I personally find it a good idea to separate the code from the local variable definitions. It's better structured, I think. I do see, though, that the compiler can potentially put out some more clever warnings with the C style of defining local variables.
Humus said:
Also, operator overloading in C++ is a feature that's extremely useful, I couldn't live without it. It improve productivity and readability a lot.
Well, the Delphi community is always discussing new language features, and operator overloading was discussed a lot. In the end the majority of programmers argued against it; I don't know exactly what the reasons were, since I didn't participate in those specific discussions. However, with Delphi 4 (the current version is now 7) the language got some enhancements from the C++ world like default parameter values and function/method overloading, which are really quite nice.
Humus said:
Also, in C++ you can pass class arguments by reference and by value, in pascal you can only pass by reference. Well, you can manually create a copy and pass it and then manually destroy it, but that's not the point.
To be honest, I'm not sure what you mean. In Pascal you can pass any parameter by reference or by value. Or am I misunderstanding you?
Humus said:
This rather shows that you either haven't turned optimisations on, run in debug mode or something like that for the C code. It passes arguments on the stack, use variables on the stack etc.
Optimization is turned on. Not sure about debug mode; I'm not that familiar with BCB. I simply started a new BCB project, pasted your code in, compiled, and looked at the asm code. Yes, it passes arguments on the stack. That's because the default calling convention in C is cdecl, isn't it? And cdecl always passes the parameters on the stack. That has nothing to do with optimization or debug mode. Also, you're not using any local variables in your code; you're working directly with the parameters only. The loop also uses a parameter as the loop counter, which is probably not very efficient; using a local loop counter variable would have been more efficient. Or if one used a different calling convention, like "register" or "pascal" or whatever it is called in C, then your code would probably also look better.

One thing (of the many) I don't like about C is that you can't directly declare static API imports; you always have to use those lib files. What if you e.g. want to use Kernel32.LoadLibrary16 in Win9x for thunking purposes? Calling GetProcAddress with an ordinal value on Kernel32 fails; Microsoft didn't allow that, to make hacking harder. In C you now either need to parse the PE import table of Kernel32 manually or you have to get a lib file for the undocumented ordinal exports somewhere. In Delphi you don't need any external files nor any complicated code; just one line is enough:

function LoadLibrary16(libraryName: pchar) : dword; stdcall; external kernel32 index 35;

Isn't that nice? I love this easy possibility to statically import APIs.
 
That reminds me of another thing that's busted about C/C++, which is the whole "dumb linker" concept.

#1 The compiler operates under a "closed world" hypothesis: its optimizations are performed ONCE, during compilation, by statically traversing every single declaration used by the program. Which means,

#2 Compiling against third-party libraries for which you don't have the source (interface only) prevents optimizations, since the compiler has no knowledge of the actual structure of the code you are calling.

#3 Type checking is "hacked" into the old dumb linker via name mangling (ugh), and the linker makes no intelligent decisions when it encounters a problem.


The linker should be smart enough, and the object file format rich enough, that the linker can perform additional optimizations after the compile phase. I hear VC.NET now does this for C++. Java, Eiffel, Smalltalk, and many other languages do it too, because they did away with a 20+ year old linker concept.

In fact, Java goes a step further. Java can inline functions which are bound at load time, and even at runtime. It does this by profiling code via a background thread and reoptimizing the in-memory x86 code. If you don't want the runtime hit, you could do it at app install or app load. You could also have the app run once in profile mode; in subsequent runs, the app is optimized based on profile information collected by previous runs. Doing dynamic optimization is pretty much a requirement for the IA-64 architecture.


Imagine deferred linking capability for C/C++: while your application is loading (or being installed by InstallShield, whatever), a call to glDrawPrimitive is noticed and it is INLINED directly out of your ATI or NVidia .LIB file into the actual inner loop of the application. The whole dynamic DLL invocation overhead is avoided, and as we know, inlining is where most of the biggest compiler optimization gains come from. Imagine if you could remove the COM dispatch overhead from Direct3D.
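The dispatch overhead being described can be sketched as the difference between a call bound at run time (a function pointer, which is roughly how a DLL import or a COM vtable slot behaves) and a direct call the compiler is free to inline. All names below are illustrative, not a real driver API:

```c
/* A run-time-bound call versus a statically-bound one.  Both loops
   compute the same result; only the binding differs. */
static inline int scale(int x) { return 2 * x; }

typedef int (*scale_fn)(int);

/* The callee arrives through a pointer, as with a DLL/COM call:
   the optimizer must emit an indirect call on every iteration. */
int sum_indirect(const int *v, int n, scale_fn f)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += f(v[i]);          /* opaque to the optimizer */
    return s;
}

/* The callee is known at compile time, so the compiler can inline
   the body straight into the loop, which is what a smart linker
   could in principle do even across module boundaries. */
int sum_direct(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += scale(v[i]);      /* candidate for inlining */
    return s;
}
```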
 
I would imagine an optimizing linker is really hard to do.

In C++, passing by value, which creates a lot of intermediates (passed values, return value), is faster with inlined functions because the intermediates can be optimized out, whereas passing by reference costs when dereferencing the object. Dereferencing is also the reason why Java will never beat C++ in performance-critical code.
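A rough illustration of the two conventions in plain C (the type and function names are mine): passing the struct by value gives the callee a private copy, which an inlining compiler can often elide entirely, while passing a pointer avoids the copy up front at the cost of indirection on every access.

```c
typedef struct { double v[4]; } Vec4;

/* By value: the callee works on a copy and returns another copy, so
   the caller's object is untouched.  When this function is inlined,
   the compiler is free to optimize both temporaries away. */
Vec4 scaled(Vec4 a, double k)
{
    for (int i = 0; i < 4; i++)
        a.v[i] *= k;
    return a;
}

/* By reference: no copies are made, but every element access goes
   through the pointer, and the compiler must be conservative about
   what else that pointer might alias. */
void scale_in_place(Vec4 *a, double k)
{
    for (int i = 0; i < 4; i++)
        a->v[i] *= k;
}
```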

One thing is to get the linker to inline the function; another is to optimize out redundant intermediate objects.

Cheers
Gubbi
 
Forgive any errors, this is pretty long.

DemoCoder said:
Demalion, you are the one asserting a claim, you are are the one who has to provide the burden of proof.

That is a pretty common conversational tactic for you to use without giving any reasoning to support it. As for the reasons you do give... are you saying you are not making any claims? I must have misread your posts.

I can't even find the questions you are talking about, but in any case, you haven't answered my questions either.

Well, they were pretty easy to find the last 2 or 3 times I mentioned them and referred to them as my first post in the original thread to you. Strangely enough, they are still the first post in that thread to you. Here, let me provide a link again. To move this along, let me emphasize what I am proposing will allow us to move forward: quoting that post directly, including the context of your replies that I am responding to when I pose the questions, not taking the questions out of the text and performing a tap dance number.


I'll point out such things as a modular filesystem (well, modular everything really)
You mean like how on every other OS you can mount multiple filesystems?

You're right, you can do this on NT, 2K, and XP. The same functionality was available on the Amiga OS since its release in 1985. :-? Back to BeOS, the x86 version was first released around 1997 I believe. Shall we compare the BeOS system requirements to those of NT and Windows 95 and discuss your assertion that BeOS did not offer quality to the consumer?

For example, I have a PGP filesystem on my box today. Or do you mean like the Object File System coming up in Windows where the entire filesystem stores metadata in a database and can be queried like a database with a unified interface?

Let me try to simplify the role of the passage of time in our discussion. The features and capabilities of Windows XP are not new; they are new only with regard to Windows. The time disparity between when they were introduced on Windows and when they existed elsewhere is the result of Microsoft not having to compete on quality with other OSes, due to having solid control of the software side of PC evolution. If they had had competition, the features available today in XP/2k would have needed to be introduced earlier. Stated another way, the features yet to be made available (or "coming up", as you put it) at the time Windows was competing with the other OSes I mention (in this case, BeOS) would have resulted in it "losing" on the criteria of quality.

That is pretty simple and direct, and I've stated it in prior replies to you in this discussion. Is there still some confusion? I can state it again, I guess.

true SMP before it was anything but a joke on Windows NT (I presume XP is more extensively multi-threaded nowadays?)
SMP != Multithreading.

No kidding?
Multi-threaded OS design allows SMP to be utilized more efficiently. Are we going to dispute this? If not, why did you bother to make that comment? Are you saying 2k and XP are not any more multi-threaded on the OS level than earlier Windows NT versions?

If BeOS is so good at SMP, why aren't they selling BeOS servers to all those companies running Solaris for its SMP and NUMA support?

I don't know, I wasn't comparing BeOS to Solaris, but to Windows. This seems pretty obvious and clear.

How many endusers have 2-way SMP boxes?

Some subset of end users using OSes that offer improved performance on SMP configurations. That question is about as useful to this discussion as "How many endusers don't run Windows?".

, inter-process scripting tools as an OS standard (see my Amiga example), similar functionality as the Amiga datatype system (see my other Amiga example below).

#1 Windows has scripting tools as standard, it's called the Windows Scripting Host. Moreover, Windows can run ANY scripting language through this interface, including JScript, VBScript, PythonScript, PerlScript, TCL, or any you choose. This is far beyond ARexx.

You are right, it is. The problem just seems to be that no applications offer the functionality I was used to.

#2 Not all Amiga apps understood ARexx commands nor did the builtin Amiga shell until later versions.

Yes, and that (WShell-style integration without buying WShell) occurred about 1990.
Which demonstrates yet another way in which not having to compete on the OS front slowed the introduction of features for Windows. But you are right, I was incorrect in my belief that Windows still did not offer such functionality.

ARexx was in fact, most capable when you used WShell from a third party.

Yes, and I'm not familiar with how you can do the same under Windows, which perhaps explains why I didn't know about the range of functionality offered by WSH. Is there some WShell functionality available to me? Perhaps it is in 2k or XP?

The level of ARexx support in AmigaDOS was less functional than the capabilities you can achieve today with WSH and COM components.

I didn't even realize the functionality was there, so I'll take your word for it. Perhaps you can point me at some more info on it.

There is far more scriptability in Windows. Windows development methodology encourages applications to be broken into reusable components, which encourages scriptability.

Well, any object oriented methodology does that.

#3 The datatype system is no different than the system today in Windows for invoking viewers for MIME types. Windows can map each and every MIME type to multiple associated viewing components (not just one). Because the browser is integrated into the shell, Windows can view any datatype for which a registered viewer is installed.

You seem to have missed the point. It sounds like you are talking about launching (i.e., the "Open With..." requester...if not, please clarify). I'm talking about all the functionality I listed, including web browsers automatically supporting a new image format for inline viewing by installing a new datatype. I know video codecs work exactly this way, but I'm talking about any and all data types.

Here is an example of what I took for granted when my main OS at the time offered me more functionality for my uses than Windows still does.

Arexx: example deleted

Things that resulted from this functionality were similar to what you can use Visual Basic for in Office applications, except of course any application could use it extremely easily.

Windows scripting can script almost any application or COM component which exposes an interface for it. It is far more ubiquitous than ARexx was.

Yes, it is functionally equivalent to ARexx but offers more versatility, and I'm only mystified that I don't see applications offering it and have been frustrated in accomplishing the same things in applications that I was used to doing on the Amiga.

You're talking to a Rexx lover. I wrote a complete BBS in ARexx for my VT100 terminal, and used my terminal to drive all sorts of information-scraping apps. But ARexx is not an Amiga-unique innovation. I coded on the Amiga for several years.

Of course it wasn't, Rexx was well established before being ported to the Amiga. It is my only extensive personal experience with such functionality on a personal computer, however.

Datatypes: on the Amiga, viewing functionality was redundant. A datatype could be written and any program that wanted to view that image type would use it. Functionality that could be exposed included loading, saving, and editing (and playing and viewing, etc).

Datatypes weren't introduced until after AmigaDOS 2.0 and most Amiga users never saw them before Win95.

Was Win 95 released in 1992, which is when Amiga OS 3.0 was released? There were datatypes within months (not to mention the ones it shipped with).

You may as well start talking about other obscure Amiga APIs like the Commodities.library.

"obscure"? Why'd you bring the commodities library up? Is it because you couldn't dismiss datatypes as "obscure" without a decoy? You wouldn't be trying to belittle elements of OS functionality that might not reflect favorably to what Windows did at the time, would you?

By the time datatypes existed, I was already using MIME enabled viewer registries on Unix email clients.

We were talking about Windows functionality as impacted by lack of competition, were we not? Why did you bring up Unix? You seem to hop all over the place when you don't have a coherent point.

Why not talk about AmigaDOS's short comings?

Strange how the discussion is shifting from what Windows lacks to what Amiga OS lacked in its time. I guess I made the assertion that Amiga OS was not lacking in quality in comparison to other OSes. Hmm..wait, no I did not. I thought I was actually addressing your assertion that Windows was not lacking in quality in comparison to other OSes.

Archaic BPTR based DOS API.

Much worse than MS-DOS and Win 9x with long/short file name, right?

No resource tracking (hey, don't forget to call CloseLibrary!) .

Yep.

No memory protection (Guru Meditation anyone?)

How about I respond in kind with "BSOD, anyone"?

No device independent graphics (hardwired to Amiga hardware so tightly that even Commodore couldn't replace the Amiga HW chipset without breaking 90% of apps)

That's what you get with a 1985 OS. However, due to its object-oriented nature, from OS 2.0 (1990) on it had abstraction sufficient to allow any application that depended on GUI calls to work with graphics cards. OS 3.0 (1992) furthered the integration of these features. But as we're just wasting time and not discussing how Windows has been impacted by lack of competition, I suppose it doesn't matter.

Preemptive multitasking *BROKEN* until DOS2.0 (obscure bug in exec.library)

You mean I wasn't using pre-emptive multi-tasking before Amiga OS 2.0? What was I doing, the cooperative multi-tasking of Windows 3.x?

How does that big "*BROKEN*" go with an "obscure bug"?

Layers.library used a naive N^3 algorithm for computing damage rectangles, which slowed down any screen with more than a few open windows.

When was this, and how "fast" was Windows on a similar CPU speed at the time?

And worst of all, the Amiga's OFS filesystem was possibly the slowest and worst filesystem ever written! Yes, it was replaced by FFS later, but it still needed special hacks like DirCache in AmigaDOS3.1 to make it work fast.

FFS was in Amiga OS 1.3, 1988. Why were you using OFS on AmigaDOS 3.1?

No, my question isn't like asking that at all. If you could choose OSes as readily as you could choose burgers, Microsoft would not be a monopoly.

Tell me why you cannot choose your OS? No one is stopping you from downloading/buying BeOS and running it, just like they aren't stopping you from running Linux.

The commodity an OS offers is applications, i.e., usage. Microsoft monopolizes this commodity. Not because its medium (Windows) or content (pick another Microsoft product) was of sufficiently higher quality when it achieved pre-eminence, but because they effect a stranglehold on the viability of any competition to either by having direct control of both.

An example related to this: in your way of looking at things, Microsoft's development environment is good now, so you don't remember when it performed dismally compared to the competition, and don't recall that the difficulty for those competitors was not superior code generation, but having to reverse engineer API behavior and interactions to compete on the Windows platform (which is why full API specification or source code disclosure would have been so helpful at the time). So Microsoft competed on compatibility with the other code they wrote, i.e. control (I wonder how they managed to win on that?), and not on quality of the compiler. (Yes, I remember comparisons of compile times, code size, and code speed that showed this; are you going to say you do not?)

Does Microsoft put you in jail or shoot you if you try to run another OS? No.

Not yet. We'll see how Palladium pans out.

So stop saying you don't have a choice. And you can avoid paying for Windows, there are vendors who ship OS-less boxes.

With full API disclosure or source code disclosure 5 years ago, Linux with WINE, or maybe BeOS with a Windows abstraction layer, would indeed be competing with Windows right now, instead of having spent the time trying to reverse engineer behavior. And faced with such competition, Windows would be further along than it is, or I'd be using something else. Which is why I said the antitrust ruling would have been helpful a few years ago. If Palladium and the DMCA weren't worrying me, I'd have a bit more hope it could still be helpful 5 years from now.

Is some aspect of my statement unclear?

So can you tell me why I should choose BeOS over others?

I see, so when you said BeOS's elegance didn't matter you didn't mean it was more elegant and other factors caused it to fail, but that elegance itself does not matter? I did say earlier if you don't think quality matters, we shouldn't bother to have this discussion.

Quality != Elegance. You can have an elegant architecture, that is horrifically buggy.

Are we talking about an elegant architecture that is horrifically buggy? No? So how does that possibility support that in what we are discussing, Quality is not related to Elegance?

Secondly, why should my mother who just wants to send email, or my office worker, who just needs to write documents, choose BeOS over any other OS?

Well, if we're living in a hypothetical world where computer usage can be restricted as you state:

If that's all your mother ever intends to do, then she shouldn't even bother with Windows. A Linux distribution can be installed to offer her that for less right now (there are in fact PCs sold like this for such people, which is why PC makers no longer being penalized by Microsoft for offering other OSes is a good thing... did that happen before BeOS gave up the ghost, by the way?).

If that's all the office worker ever intends to do (write documents), then they would be better served by a cheaper computer with less memory and a slower CPU running BeOS.

Less wasted cycles and RAM by greater elegance would allow them to achieve the same productivity on a cheaper system.

You haven't provided ANY compelling reason why the end user will benefit, be more productive, and happier with BeOS.

You ignoring my statements is not the same as my not having made them.

In any case, here is an example of the difference efficiency can make, from my direct experience. Yes, I am sure about the comparison.
On a 68040 CPU at 25 MHz with, I think, about 8 MB of RAM (let's call that 486 DX 33 performance), my Amiga web browsed faster than a Pentium II 300 with 128 MB of RAM running Windows 95, except for jpeg decoding (pure computation). Both at 800x600 16-bit graphics. Yes, the Win 95 machine was defragmented, and yes, this was on a wide variety of web pages.

With such a gross disparity in underlying architecture, what does that tell you about relative efficiency? Furthermore, haven't CPU and RAM requirements for acceptable performance of each successive Windows OS gone up significantly?

I propose to you that using a 1 GHz + CPU may hide such inefficiency, but does not mean it is not there.

All you can talk about is abstract concepts like SMP, messaging passing, threading, etc. All irrelevant to the end user.

If abstract concepts about the OS architecture aren't relevant to the end user, why isn't everyone using Windows 95? It doesn't look much different than Win 2k for example. You view the evolution of Windows only in relation to itself, and of course the lack of competition does not matter there.

You may as well be talking about what kind of timing belt my car uses. I don't care. I just want to drive it.

Yeah, I'm sure you don't notice "abstract" things like fuel efficiency, acceleration, and handling, and no consumer does. :-?

Let me tackle the next section in another post.
 
DemoCoder said:
Let me quote another post you made as I think it might be illustrative of how your perspective on the history of computing is skewed:

DemoCoder said:
One more comment: frequently people make the claim that today's software requires way more resources but does the same thing. It is best phrased as "How could word processors run on 386s, but now they require 1 GHz and 256 MB of RAM?"

Well, the statement is not true. Back when you were running on a 386, your word processor couldn't render antialiased TrueType fonts on the screen and at 600 DPI on the printer. It didn't have support for international languages (e.g. BIDI text, Unicode, Chinese input methods), nor did it do spell checking and grammar checking in all these languages, if it did any of them at all. You could not use the WYSIWYG word processor to generate a presentation or publish electronically. It didn't support anywhere near the number of layout options available nowadays. Could it merge in data from a database? Could it forward the document in email? Did it have revision control? Did it have a scripting language? Collaboration and workflow tracking? Document sharing? The list goes on.

I had WYSIWYG word processing, including TrueType and Unicode support. I do admit I'm not sure if it rendered anti-aliased; I suppose I'd have to check the font library specifications and see if it offered that.

Oh really? That's interesting, given that AmigaOS didn't support Unicode (Commodore, in fact, died before Unicode gained industry adoption).

Are you sure they died before Unicode was "adopted by the industry"? In any case, who said anything about Commodore? Modularity in action:

TrueType with anti-aliasing support that runs on a 68020

The actual True Type implementation I recall using with my web browser when I was using my Amiga

Both support Unicode. Any application that used them did as well.

AmigaOS's *Text() calls assumed 8-bit wide chars. AmigaOS itself was never localized for other regions (Chinese AmigaOS?)

Are you saying Amiga OS was never localized for other languages at all, or never for Chinese?

Perhaps you're talking about the PageStream hack, where they literally had to build their own mini-OS layer from the ground up to support this on the Amiga.

To quote you:

Back when you were running on a 386, your word processor couldn't render antialiased TrueType fonts on the screen and at 600 DPI on the printer. It didn't have support for international languages (e.g. BIDI text, Unicode, Chinese input methods), nor did it do spell checking and grammar checking in all these languages, if it did any of them at all.

As I illustrated, your underlying concept is incorrect.

AmigaOS didn't support TrueType. It had its own proprietary fonts rendered by the outline.library.

No, Compugraphic fonts are not Amiga specific.

And the way the Amiga's archaic screen rendering and printer rendering worked, it basically converted a vectorized font into a HUGE bitmap font in memory so it could work with graphics.library and intuition.library which relied on bitmap fonts.

Which printer library version are you referring to?

I had international language support (the keyboard handler was modular, and the locale library system allowed applications to offload handling of different languages, so you'd write the application once and provide locale files to allow it to support another language). You'd simply add localization files to add languages.

Amiga wasn't even UTF-8 capable. What the hell are you talking about?

You seem stuck on the idea that these things are not possible on a slow CPU, as your initial argument asserts. Yes, there was UTF-16 and UTF-32 on the "Amiga". I don't see UTF-8 listed explicitly; is it fully defined as a subset of UTF-16 and UTF-32?

The proof is in the pudding: Could I write Chinese or Arabic in an Amiga app?

Yes. Simple enough answer?

Could I save filenames using native language?

I guess that would depend on the language. Are you done dancing around what you incorrectly stated?

BIDI text support?

Well, I didn't give a BIDI example, I mentioned Unicode. I assume you mean bi-directional, as in allowing right-to-left (that all-caps usage strikes me as unusual, if that's what you mean)? The type library supported negative kerning, so yes, you could type from right to left. Yes, there were applications that allowed right-to-left editing on the Amiga.

There were foreign language spell checkers (I'm puzzled as to why you perceive this as a hurdle), though I don't have a complete list of which languages they existed for.

Sure, there were some limited spell checking and grammar options.

You were talking about what you couldn't do. It turns out that you can do almost all of what you said you couldn't on my example system.

Now show me an Amiga word-processor that could spell-check and grammar check Hindi.

As far as spell checking, a quick look at Aminet and I find the ability to use WordWorth (a word processor) with Afrikaans, Czech, Danish, Dutch, Espanol, French, German, Icelandic, Latin, Norwegian, Portuguese, Spanish, and Swedish, but no Hindi.

Do I need to quote you again?

Could I load more than 1 or two outline fonts into a document?

Are you referring to being able to run a word processor in 1 MB of memory and then not being able to load more than one font? I propose that is better than not even being able to load up the OS at all.

Your merging data from a database, forwarding documents in email, and revision control examples are laughable...see my ARexx mention for how much further than this I could go on my Amiga.

In other words, it had none of these features and you had to write them yourself.

Me? No, for something like that I'd download it. Then it did have those features. On a 286 class machine. So why is an argument based on this that Windows is inefficient wrong again?

Document management/revision control is "Laughable"?

Err, no, your offering them as examples of things that couldn't be done on a 286-class machine is laughable. Did you skip forwarding documents in email and merging from a database because you realized how trivial they are?

Your list is really rather puzzling. You think these things are new or require lots of computing power? For "proof", use "Amiga" and some of these keywords and do some searches, and you should find substantiation for most of this.

I was an Amiga fanatic for several years, I don't need to run a search.

Then why are you so consistently incorrect in this set of assertions?

Amiga word processors SUCKED. If I wanted good quality output I had to use TeX. Most Amiga WPs couldn't even do kerning and ligatures correctly.

What year did you leave the Amiga?

Could I export a comma delimited datafile from MATLAB and graph it in my WP report on Amiga?

"WP report on Amiga"? I guess since I'm not sure what you mean by that, I don't know the answer.


It's strange...
who has the VCR, Microwave, DVD, TV monopolies? I could have sworn people could buy any damned brand they please based on whichever was best. This parallels Microsoft's monopoly how?

You have a knack for not being able to read.

You have a knack for ignoring inconvenient comments you make, such as comparing an OS to VCRs, Microwaves, and TVs. But you didn't quote that here for some reason.

END USERS DO NOT CARE ABOUT OPERATING SYSTEM ARCHITECTURE.

Just as much as they don't care about the internal workings of the VCR, Microwave, or TV. What does that statement have to do with whether they care about the impact of competition on the quality and cost of the end product? Well, nothing. But it is good for wasting a lot of text when you can't address that concept.

Ok? Your GRANDMOTHER DOES NOT CARE ABOUT IPC, THREADING, AND SMP. Clear enough? Consumers buy WIDGETS THAT DO THINGS. BeOS is in the WRONG BUSINESS. They should be selling to EMBEDDED DEVICE MANUFACTURERS.

My, that is an eloquent and coherent logical sequence. Or not. Either way, it is an impressive amount of text, so it must be saying something pertinent. Or not.

To most people, THE PC IS A BLACKBOX. They DON'T CARE HOW SOMETHING IS IMPLEMENTED, AS LONG AS IT DOES WHAT THEY WANT. That's why NO ONE CARES HOW ELEGANT/INELEGANT THE OPERATING SYSTEM IN THEIR MICROWAVE OVEN IS.

"Just as much as they don't care about the internal workings of the VCR, Microwave, or TV. What does that statement have to do with whether they care about the impact of competition on the quality and cost of the end product? Well, nothing. But it is good for wasting a lot of text when you can't address that concept."

So, I'd like to know why you'd rather be playing a given game under BeOS? In the past, most games booted the operating system right out of the way and went straight to the hardware.

Today, games do not boot the operating system right out of the way. Why did you make that comment?

What advantages AS A USER, TODAY, will you gain from using BeOS.

What you quote was a reply to your silly example as quoted. My answer to the question (which you are asking again because you snipped the text where I answered it before) is:

"Hmm...well, let's see. If I could play games and utilize BeOS for all tasks, I'd be using it right now. Since I can't, I don't, as I'd have to reboot in between applications. Microsoft has gained this position not because their OS is the highest quality, but because they have enough control to prevent another OS platform from successfully competing for applications. Witness the substantiated commentary from the Antitrust case. Maybe we could find a web site with an itemized list of what was substantiated in the case and save some time? "

It's strange how you accuse me of not being able to read.

CONCRETE ADVANTAGES, not "Well, it has SMP" What does it ALLOW YOU TO DO BETTER/FASTER AS AN ENDUSER.

Maybe this reply will help more, though I doubt we'll progress very far.

You really think MS killed BeOS in the market place, and not the fact that #1 Apple killed it,

Hmm...well, I always appreciate how your statements are facts and not opinions, especially with all the justification you provide.

and #2 consumers didn't even know about it (Be's business plan based on selling to Apple or selling multi-CPU "hacker boxes" to elite developers!)

I never used a Power PC BeOS version, I only used an x86 version. You know there was an x86 version of BeOS, right? Why are you discussing Apple then?

Because BeOS was originally written and marketed to niche users.

No, it was not written for niche users; it was written to achieve quality. It was marketed to niche users because Microsoft's control of the "average" user marketplace (by maintaining control of API specifications, it can lock the "commodities" to its product) prohibits any other OS from addressing the same target.

Marketing materials were crafted to lure hackers to buy BeOS boxes, not average users.
Yep.

Because Be was started by an ex-Apple executive and spent a large amount of their time trying to sell it back to Apple? Because at one point, BeOS was going to be the basis for Mac OS X, and Steve Jobs came back and killed it, replacing it with NeXT?

Is this an answer to my first question? How did Apple kill the x86 BeOS I mentioned? Are you saying you weren't aware there was an x86 BeOS?

and #3 consumers weren't the audience?

Consumers weren't the audience? Hmm... oh wait, you said this is a fact, so it must be true...

Show me some fullpage magazine ads that Be took out that addressed average users directly, like the Mac and Windows ads you see.

Oh, you mean marketing, not the OS itself. I never attested that they were able to successfully market it to users. Not that Microsoft's control has anything to do with that.

Since marketing has nothing to do with the quality of the OS, I thought we were on the same page.

So on the one hand, having those features means it wasn't targeted at consumers, and on the other every OS now has them. Does that mean XP isn't targeted at consumers?

Microsoft doesn't market SMP to end users in their XP Home advertising do they? No, they market tasks the user can accomplish with the OS, like media playing, DVD burning, instant messaging, email, etc.

You see, here is the problem with me talking about the OS, and you talking about marketing again.

Microsoft markets SMP features to ENTERPRISEs.

I'm sure Intel with Hyper Threading has no issue with Microsoft having delayed this capability.

They have a clear and distinctive message tailored to the people they are selling to.
I thought we were discussing the OS.

Are you so dense that you can't understand that a business can fail not because of their technology, but because of the WAY THEY COMMUNICATE THAT TECHNOLOGY TO BUYERS!
Ah, is that the point of this side-track into marketing, to provide a Catch-22 to avoid discussing Microsoft's impact on the marketplace?

-

Not deciding to spend money (for some companies this is actually a limited resource, go figure) on advertising an OS to compete with Windows has nothing to do with Microsoft's control of the marketplace preventing successful competition on the PC. Therefore when a company does not spend money on their product trying to do so initially, it is proof not that they are trying to do something as nonsensical as capture developer support to aid in the penetration of said marketplace by having commodities to compete with in the future, but that they had no intent to ever compete at all. After all, they just wanted the developers to sit and admire their OS.

-

DC, you are capable of astounding feats of unreasoning.

Well, I did say "Or, we could redefine the discussion and look at what Windows offers us right now and ignore what other OSes have offered us and when and on what systems they achieved it.", so I suppose I shouldn't be surprised. Obviously Windows offers everything I could ever need and all the functionality of Amiga OS and BeOS that I miss from 10 years ago.

Oh gawd, you're right. Microsoft killed the Amiga! If only MS didn't exist, the Amiga would still be popular today? Oh, they must have killed the Atari ST as well! Oh my gawd, don't forget how they killed GEOS.

Suuure...that's what I said. Just because you are capable of taking my statements to mean that does not mean that outside of your mind the words actually changed.

No I don't remember. Is that something sort of like "glide" was for Voodoo cards, a partial set of OpenGL functionality? Why was Carmack insistent on this instead of the full ICD?

Carmack thought that IHV's couldn't implement full, complete ICDs. http://www.d6.com/users/checker/openglpr.htm

I see no mention of the OpenGL ICD there. To me it reads as if Carmack is worried about partial implementations due to no standardized OpenGL support at all for Windows, not due to the OpenGL ICD as an alternative.

But Microsoft has systematically been delivering vast improvements across their entire software line, and I am sick of people claiming MS is holding progress back. There is nothing stopping any developer from writing the ultimate web browser or spreadsheet. Absolutely nothing.

Did you just say this about a "web browser" with a straight face?

Yes. I'm gonna open up my text editor and start writing a web browser today. How will Microsoft stop me? Kill me? I may not be able to convince consumers to buy it (especially if it doesn't do anything that IE doesn't already do), but that doesn't limit my freedom to write one.

Competition is more than the product existing; it's the viability of the product on the market. Why is IE free? Altruistic motives on the part of Microsoft to benefit consumers? Looking at what has happened, it looks like they could afford to include it with their own product "free", and then leverage this piggy-backed saturation into profitability by using it as a platform for initiatives (such as Passport, MSN, web authoring software, web servers) over which this again lets them exercise control (and control pricing by monopolizing, which again solidifies functionality control for a further array of products). How much more can they make from this "synergy" than from charging for the web browser? Is this competing on quality, or competing on control?

A simpler question, how is Microsoft not a monopoly?

WinZip isn't distributed with Windows, yet the vast majority of people I know use WinZip instead of the plethora of other free WinZip competitors. The fact that other ZIP authors have limited market share is a testament to the fact that WinZip had early-mover advantage, got brand recognition, and that unzipping stuff is a commodity.

Yes, and WinZip isn't a monopoly. It competes on merits (I use it because it is faster than WinAce last I checked, and will no longer use it when that is not the case).

Unless you can provide something so compellingly better as to convince people to switch, they have no incentive to do so.

WinZip != Windows. I continue to be puzzled as how you propose something that isn't a monopoly in a discussion about the problems with something that is.

Microsoft could not stop AOL/Netscape from releasing a Mozilla that runs under Windows.

Didn't it take a court ruling (or was it just the threat of one) to stop Microsoft from penalizing OEMs for configuring systems to ship that way? Or are you just selectively forgetting things?
 
demalion said:
Oh, you mean marketing, not the OS itself. I never attested that they were able to successfully market it to users. Not that Microsoft's control has anything to do with that.

Since marketing has nothing to do with the quality of the OS, I thought we were on the same page.

Correct me if I am wrong, but wasn't DC's marketing example designed to address your question regarding his assertion of who BeOS was targeted at?

To my recollection, the discussion went along the following lines:

DC: BeOS was not targeted at consumers.
You: And why do you think so?
DC: It was never marketed directly to consumers; hence it was not targeted at them.
You: I never said it was successfully marketed. Marketing has nothing to do with the quality of the OS.

AFAIS, DC did address your original question, prior to arguments starting to go in circles.
 
demalion said:
Multi-threaded OS design allows SMP to be utilized more efficiently.
That's right. In BeOS every window runs in its own thread, so SMP is really quite effective in BeOS. However, this approach also has its own share of problems: if every window runs in its own thread, you have to be more careful with data structures, and you have to add more synchronization code. Of course it's possible to do so, but it makes porting applications to BeOS more complicated than to any other OS. Well, I've never programmed for or ported to BeOS yet, so take my words with a grain of salt.
DemoCoder said:
How many endusers have 2-way SMP boxes?
Back when BeOS was at its height, I was reading the BeOS newsgroups from time to time, and lots of people there had dual-Celeron PCs. They'd built their PCs for BeOS. But of course that's not the normal everyday computer user. So I'd say: hardly any end user has a 2-way SMP box. But if you think about it, it's a shame; two 2GHz CPUs are cheaper than one 3GHz CPU!
DemoCoder said:
If BeOS is so good at SMP, why aren't they selling BeOS servers to all those companies running Solaris for its SMP and NUMA support?
Because BeOS was built as a multimedia OS, it was never (originally) meant to be a server OS. SMP performance is very good (which is very nice for multimedia). But server security is not that great. Well, Be was working on that, but...

I find it quite sad that Be back then shifted to those damn Internet appliances. They had a serious foothold in the professional audio market, and then, at the moment when the future for desktop BeOS began to look positive, they simply kind of dropped it. Sigh. I remember early benchmarks of their OpenGL kit; it simply put ATI's Windows 3D drivers to shame. I remember the programmers of Neverwinter Nights said they would release a BeOS version and that it would probably run faster and be more stable than the Windows version. There was also a full DirectX mapper in the works, building on that new OpenGL kit. Damn sad that the OpenGL kit was never released, same with the DirectX mapper and with Neverwinter Nights for BeOS. Sigh. :cry:
 
madshi said:
To be honest, I'm not sure what you mean. In Pascal you can pass any parameter by reference or by value. Or am I misunderstanding you?

Even class type parameters?
Last time I read the delphi documentation (which was a while ago though) it stated that arguments of class type are always passed by reference.
 
Humus said:
Even class type parameters? Last time I read the delphi documentation (which was a while ago though) it stated that arguments of class type are always passed by reference.
Hmmm... Well, you can pass the variable, which holds the class instance pointer by reference or by value. Is that what you mean? Perhaps I don't know what you mean, because it's not possible in Delphi? :D
 
Colourless said:
GCC has a poor optimizer. Those foo() test functions get compiled away by MSVC.

Not so sure about that. I did a small test application the other day just to compare the performance of gcc vs. msvc: I took some code I had lying around for a red-black binary search tree and performed a large number of operations on it. Results:
MSVC: ~15,000,000 cycles
GCC: ~11,000,000 cycles

This was with MSVC 6.0 though; the .NET compiler should perform better, but I don't think GCC's optimization is poor.

I'm not sure I like the idea of optimizing away empty loops. If you have an empty loop you most likely want to have a short delay there, could be useful for instance in driver code when waiting on a hardware response.
I had to work around MSVC optimizing away my empty loop in my CPU speed detection code by writing the loop in assembler instead.
 
madshi said:
Humus said:
Even class type parameters? Last time I read the delphi documentation (which was a while ago though) it stated that arguments of class type are always passed by reference.
Hmmm... Well, you can pass the variable, which holds the class instance pointer by reference or by value. Is that what you mean? Perhaps I don't know what you mean, because it's not possible in Delphi? :D

Well, say you have some kind of class.

class TheClass {
// stuff here ...
};

Then in C++ you can take it as a parameter either as

void func(TheClass var){
// do stuff
}

or like this

void func(TheClass &var){
// do stuff
}

The first way will create a copy of the class by invoking a copy constructor if available, otherwise just copy the data. That's passing by value. Everything you do to var will only affect the local var and not affect the class instance you passed to the function.
The other way you pass only a reference to the object (basically a pointer to it) and all changes you do to the object will change the actual class instance you passed to the function. This is (afaik at least) the only way you can pass class parameters in delphi and java. This is usually what you want too, but sometimes passing by value can be very useful.
 
Humus said:
The first way will create a copy of the class by invoking a copy constructor if available, otherwise just copy the data. That's passing by value. Everything you do to var will only affect the local var and not affect the class instance you passed to the function.
Ah, now I understand. You're right - Delphi doesn't support that. But I must say that's a feature I don't care much about. There are other aspects of C(++) which are more appealing to me.
 
Geeforcer said:
demalion said:
Oh, you mean marketing, not the OS itself. I never attested that they were able to successfully market it to users. Not that Microsoft's control has anything to do with that.

Since marketing has nothing to do with the quality of the OS, I thought we were on the same page.

Correct me if I am wrong, but wasn't DCs marketing example designed to address you question regarding his accretion of who BeOS was targeted at?

To my recollection, the discussion went along the following lines:

DC: BeOS was not targeted at consumer.

This comment was made when we were discussing the relevance of the advantages of the BeOS architecture, after I asked "Are you going to tell me that BeOS and OS/2 were lower quality than Windows at the time?" We were not discussing marketing. The full sentiment of the question would be more like "How can a consumer benefit from the advancements of the BeOS architecture?".

Geeforcer said:
You: And why do you think so?
DC: It was never marketed directly to consumer; hence it was not targeted at them.
You: I never said it was successfully marketed. Marketing has nothing to do with the quality of OS.

Hmm, you seem to have missed a lot, including that my "original question" was "Are you going to tell me that BeOS is a lower quality OS than Windows?".

AFAIS, DC did address your original question, prior to arguments starting to go in circles.

No, DC did not. You also seem to have missed a section of my text that more directly addresses your question about BeOS's targeting. Notice: sarcasm mode is on...

demalion said:
Not deciding to spend money (for some companies this is actually a limited resource, go figure) on advertising an OS to compete with Windows has nothing to do with Microsoft's control of the marketplace preventing successful competition on the PC. Therefore when a company does not spend money on their product trying to do so initially, it is proof not that they are trying to do something as nonsensical as capture developer support to aid in the penetration of said marketplace by having commodities to compete with in the future, but that they had no intent to ever compete at all. After all, they just wanted the developers to sit and admire their OS.

That's why it wasn't marketed at the average consumer, but was "targeted" at the average consumer.
 
How do you know who it was targeted at without a marketing effort to determine that? Maybe when Be's programmers were writing the OS, they were targeting consumers (implementing the features they thought consumers would want). But if their sales force never tried selling it to consumers, I would argue that it was never targeted at them by the company. Ultimately, it's the marketing strategy rather than the programmers' intentions that determines whom a product is targeted at.
 
Well, this is my last response in this thread since it is going nowhere. You seem unable to grasp that there are several discussions going on at once in these messages:

Discussion #1: You assert BeOS was killed by the Microsoft monopoly
My Answer: BeOS died in the marketplace because of bad Be business strategy. Any launch of a new platform requires loads of money pumped into marketing and sales. BeOS tried a mostly "word of mouth" campaign among hackers.


Discussion #2: Large apps on old hardware. You assert you can have identical functionality on less capable hardware.
My Answer: Yes, you could go find 5 separate word processors that each do a little bit of what Word does today. But Word includes everything. Localization? Word includes over 50 world languages with grammar/spell checking in each. Word can do document scanning. Word has an HTML editor! Word can export as XML. Word has Asian layout. Word has a font selector combobox that lets you see EVERY TRUETYPE FONT AT THE SAME TIME. That means 50+ TrueType fonts are loaded and rendered into memory, which makes it way easier to preview/select a font. If you can't understand why 1 product doing the work of 10 needs more memory than any one of those 10, then I can't help you. (Oh, Word also supports handwriting recognition and speech recognition as input methods.)


Discussion #3: BeOS quality vs Windows quality
My Answer: Unix/Windows NT make different tradeoffs vs BeOS in their design points. Unix/NT carry years and years of legacy along with them. It's not a question of "quality" but "design" architecture. There are lots of little embedded realtime OSes like VxWorks, QNX, etc that run many tasks way more efficiently than Unix. Does that make RealTime OSes "better" and "higher quality"?


Quality refers to the stability, robustness, and performance of a design. You can have two separate architectures that differ remarkably in their design, but both are "quality". The Unix and NT kernels are both quality implementations. Yes, they may not run certain tasks as efficiently as BeOS (but what do I want for SMP on my Server, BeOS or Solaris?) but that doesn't disqualify them from being "quality". Unix wasn't designed, for example, to be a high quality MIDI workstation. But BeOS probably falls down in areas where Unices shine. Does BeOS have fine grained security?


So it's not a question of which kernel is higher "quality"; they are different, not "better". It depends on what tasks and what features you care about.

Windows NT and Unix have much larger code bases than BeOS and support way more hardware devices, so the probability of any one system utility, applet, or driver having a bug is higher. But if you look at the kernels only, Unix and NT are rock solid.


What you don't seem to understand is that there are oodles of OSes out there that have nice design features, but those nice design features don't necessarily sell to consumers. Developers and enterprise customers may appreciate them, but consumers don't care. People have limited attention spans and you need a way to communicate your advantages to them in 5 seconds.

BeOS needs more than to just be technically nice; it needs compelling software and compelling marketing to get people to adopt it over what they already have. Steve Jobs showed that you can still excite the consumer. Apple made a comeback against Microsoft and successfully got people to switch. Look at Apple's marketing campaign to see what you need to do and what's possible.


You seem to think that this all revolved around monopoly power. I tried to show you how even FREE products not owned by any company can become dominant and prevent smaller players from gaining traction in the market place.

Case in point: the Apache web server. Before Apache 2.0, for many years the web server had an old, "inelegant" architecture (no asynchronous I/O, no threads, no high availability, etc.), yet it completely dominated all other web servers in terms of usage. Many small companies tried to compete with more elegant designs (Roxen, Zeus, etc.), but no one would buy their "efficient" design. People still wanted "bloated, old" Apache (because Apache DID ITS JOB regardless of the "BEAUTY" of the DESIGN).

Many "BeOS-like" companies tried and failed to dislodge Apache, which has a near monopoly (especially on Unix), even though it isn't even a company.

That's because it's hard to displace an incumbent, be it a politician, an OS, or an application, without LOTS of marketing/advertising. Incumbents get free advertising (they are already everywhere), strong brand-name recognition (ask person X to think of product Y: what comes to mind first?), and people want to use what everyone else is using or knows. It's the network effect, or Metcalfe's Law, or whatever you want to call it. You might be the better candidate for president of the US, but who knows about you?



You can blame Microsoft all you want. I blame Be for ruining BeOS's chances. I blame Commodore for sinking the Amiga. I blame Atari for sinking the ST. And I blame Apple for sinking the original Mac, by putting all their effort into trying to win a copyright on the graphical user interface in court, and less effort into finishing Copland/Mac OS 9/OS X/etc. Apple's OS stagnated while Windows went from 3.1 to 95 (a TREMENDOUS improvement).


It's really ironic, but before 1995, most of the open-source fanatics (like Richard Stallman of GNU/FSF) were railing against Apple for trying to "OWN" the GUI via intellectual property protection. Meanwhile, no real improvements happened to the core of MacOS. Oh, but they were killed by the Win95 monopoly, right? I see BeOS making many of the same mistakes as the Amiga. The Amiga also had a focus on fast multithreading and low-latency video/audio for the desktop video market. They wanted to be to DTV what the Mac was for DTP. But DTV wasn't a compelling consumer market back then. The Atari ST, ditto for the MIDI world. Too much focus on their niche, not enough on business and end users.



This discussion is over. There is no way I can convince you that Microsoft isn't evil, that it doesn't have the worst programmers in the world, or that Be bears any responsibility for the failure of BeOS.
 
Humus said:
The other way you pass only a reference to the object (basically a pointer to it) and all changes you do to the object will change the actual class instance you passed to the function. This is (afaik at least) the only way you can pass class parameters in delphi and java. This is usually what you want too, but sometimes passing by value can be very useful.

Java is pass-by-value for primitive types, and pass-by-reference for objects. There is no such thing as a non-heap-allocated object instance in Java, so any objects you have are already references. Of course, you can't pass a "reference to a reference", e.g. &ptr, or whatever. So the references themselves are "passed by value". You can't ever modify anything on the caller's stack.

Object allocation in modern Java VMs works like stack allocation. When I write

MyClass c = new MyClass();

The Java VM has a pointer to the top of the heap, does heapPtr += sizeof(MyClass), and returns the old pointer. So the "new" operation is much faster than C's malloc() or C++'s new operator. There is no scanning of "free lists" and no memory fragmentation.

The heap is broken into 2 or more separate heaps, primarily the "young heap" and the "old heap". All new objects are created in the young heap. At certain points in time (even asynchronously, with concurrent/parallel GC), the young heap is scanned for live objects, which are copied to a new young heap, and the previous young heap is cleared. This compacts the young heap. If an object survives for a certain period of time, it is deemed a "long-lived object" and moved to the old heap, where a less frequent garbage collector is run.

Because 90+% of all objects die quickly (similar to stack-allocated objects in C++), the garbage collector only touches a few "live" object instances, and the rest of the whole heap is empty for reuse. So you may have 1 million objects in the young heap, but the algorithm only touches a small fraction, copies them to a new young heap, and the others are basically cleared the way you clear a backbuffer. Basically, it only touches a number of objects proportional to the number you'd have sitting on your whole stack in C++. The objects in the "old heap" correspond to those heap-allocated C++ objects that are "new"ed but stay around for almost the whole program execution, or long stretches of it.

In Java 2 1.4.1, for example, a typical "young GC" pass runs in about 1.4ms every couple of seconds. Java may not be appropriate for highly real-time applications because of the non-determinism of GC, but GC boosts programmer productivity by leaps and bounds for almost every other app I can think of.


Microsoft went one step further with C#/"Managed C++" in .NET. You can have both garbage-collected "managed" objects and non-GCed "unmanaged" C++ code. So you can write most of your menu screens, disk loading, mesh conversion, game AI, etc. using "managed" code, but the actual 3D rendering loop, network I/O, and world physics could be written using deterministic "unmanaged" C++ code. That way you only have to track down resource leaks in a small fraction of your app: the high-performance part that matters. You can run non-critical, non-realtime stuff in a separate GCed thread.
 
Humus said:
Colourless said:
GCC has a poor optimizer. Those foo() test functions get compiled away by MSVC.

Not so sure about that. I did a small test application the other day just to compare the performance of gcc vs. msvc, took some code I had lying around of a red-black binary search tree and performed a larger number of operations on it. Results:
MSVC: ~15,000,000 cycles
GCC: ~11,000,000 cycles

It depends on the application. Colourless is right in general. For example, instruction scheduling was only added to GCC recently. Before that, you had to use EGCS. But for maximum x86 speed, Intel's compiler is generally the best.


I'm not sure I like the idea of optimizing away empty loops. If you have an empty loop you most likely want to have a short delay there, could be useful for instance in driver code when waiting on a hardware response.

It's a bug if you can't turn off this feature. As you said, the empty loop is traditionally used as a spin-wait in device drivers, etc.

If the compiler were smarter, it could differentiate between spin-waits and real "dead loops".

For example

void foo(void)
{
    int i, sum = 0;
    for (i = 0; i < 100000; i++) sum += 10;
    return;
}

Typical compilers would mark "sum" as a dead variable. Therefore, it would remove any side-effect-free statements that assign to it. Therefore, the for loop would become empty. "i" would also be a dead variable. So the entire loop can be dropped. If foo was called by bar(), it would even optimize away the call to foo().

However, the following should be detected as a spin-wait

for(i = 0; i<100000; i++);


GCC follows a general rule: reads/writes to volatile variables can't be optimized away. Thus if you write

volatile int i;
for(i = 0; i<100000; i++);

You are guaranteed that the loop won't be removed in GCC. Try it in MSVC and see if it works.

Also, with GCC you usually need to specify tons of parameters to get full optimization, like

-O3 -W -Wall -fssa-dce -fssa-ccp -fregmove -fcprop-registers
-ffunction-cse -fbranch-count-reg -fsched-interblock -frename-registers
-freorder-blocks -fschedule-insns2 -march=pentiumpro
-foptimize-register-move -fpeephole2 -frerun-cse-after-loop -fident
-fmerge-all-constants -fmerge-constants -fmessage-length=0 -fmem-report
-ftime-report -fthread-jumps -freduce-all-givs
-fguess-branch-probability -frerun-loop-opt -finline
-fexpensive-optimizations -foptimize-sibling-calls -fomit-frame-pointer
-save-temps -fmove-all-movables -fprefetch-loop-arrays
-funroll-all-loops -mmmx -msse --param max-gcse-passes=10 --param
max-gcse-memory=100 --param max-inline-insns=10000
-Wdisabled-optimization

Also, resorting to inline assembly sometimes makes things worse, since it disables many optimization features (e.g. it disrupts register allocation, liveness analysis, etc.). I once replaced a call to _lrotl() with inline assembly and it made things much worse.
 