What was MS-DOS? Historical arguments aplenty *spawn

aaah, good old memories...

about emm386 stuff:
initially, expanded memory was paged through a special hardware window that let even an 8086 address more than 1MB of RAM, by making pages 'appear' magically. But that kind of RAM board was hellishly expensive. Later, with the 386, the 'extra memory' was just paged in/out of ordinary extended memory and became very fast (and no special RAM was needed any more).
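From memory, talking to a LIM EMS driver looked roughly like this in real-mode C (the INT 67h function numbers are the documented EMM ones; the rest is just a Turbo C style sketch):

```c
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS r;
    unsigned frame_seg, handle;

    /* INT 67h AH=41h: get the segment of the 64KB EMS page frame */
    r.h.ah = 0x41;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) { puts("no EMM loaded"); return 1; }
    frame_seg = r.x.bx;

    /* INT 67h AH=43h: allocate 4 logical pages (4 * 16KB = 64KB) */
    r.h.ah = 0x43;
    r.x.bx = 4;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) { puts("allocation failed"); return 1; }
    handle = r.x.dx;

    /* INT 67h AH=44h: map logical page 0 into physical page 0 of
       the frame -- the "magically appearing" window */
    r.h.ah = 0x44;
    r.h.al = 0;            /* physical page 0..3 within the frame */
    r.x.bx = 0;            /* logical page to map in              */
    r.x.dx = handle;
    int86(0x67, &r, &r);

    /* the mapped 16KB is now addressable at frame_seg:0000 */
    printf("EMS page frame at %04X\n", frame_seg);

    /* INT 67h AH=45h: release the handle */
    r.h.ah = 0x45;
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}
```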

About POST+BIOS+OS... the BIOS offered disk access via Int 13h, which required you to do the math for cylinder/head/sector and much more... that was not a file system.
Int 25/26h mapped the disk in a flat way, as logical sectors, converting your addressing to raw (BIOS) access.
I won't go into the details of ATA, extended ATA and the like... it was crazy enough for the time.
Int 25/26h were actually "the" OS disk services, and were provided by MS-DOS.
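The contrast looked roughly like this from Turbo C (the interrupt numbers and register layout are the documented ones; the function names and error handling here are just a sketch):

```c
#include <dos.h>

static char buf[512];

/* BIOS read: you do the cylinder/head/sector math yourself */
int bios_read_sector(int drive, int cyl, int head, int sect)
{
    union REGS r;
    struct SREGS s;

    segread(&s);                        /* sane segment registers      */
    r.h.ah = 0x02;                      /* INT 13h fn 02h: read        */
    r.h.al = 1;                         /* one sector                  */
    r.h.ch = cyl & 0xFF;                /* cylinder, low 8 bits        */
    r.h.cl = (sect & 0x3F) | ((cyl >> 2) & 0xC0); /* sector + cyl high */
    r.h.dh = head;
    r.h.dl = drive;                     /* 00h floppy, 80h hard disk   */
    s.es   = FP_SEG((void far *)buf);   /* ES:BX -> buffer             */
    r.x.bx = FP_OFF((void far *)buf);
    int86x(0x13, &r, &r, &s);
    return r.h.ah;                      /* 0 = success                 */
}

int main(void)
{
    /* DOS "flat" read: one logical sector number, DOS does the
       geometry. absread() is the Turbo C wrapper around INT 25h
       (it also pops the flags word INT 25h leaves on the stack). */
    if (absread(0, 1, 0, buf) == 0)     /* A:, 1 sector, sector 0      */
        return 0;
    return bios_read_sector(0x00, 0, 0, 1);
}
```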

the i186 didn't support HIMEM, it was just a sort of 'superfast' 8086... and the 286 *was* actually resetting. They just discovered that the keyboard controller could trigger a soft reset, which had no need of going through the whole POST blah blah. So they were able to use it to switch from protected mode back to real mode in this odd way.
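From memory the trick went something like this (the CMOS shutdown byte and keyboard-controller command are the standard AT ones; the function name is made up, error handling and the protected-mode side are omitted, so treat it strictly as a sketch):

```c
#include <dos.h>

/* Rough sketch of the 286 "reset back to real mode" trick. */
void return_to_real_mode_via_reset(void far *resume_point)
{
    /* 1. tell the BIOS to skip POST on the next reset: the CMOS
          shutdown status byte says "resume via the vector at
          0040:0067" instead of doing a cold boot */
    outportb(0x70, 0x8F);          /* select CMOS register 0Fh, NMI off */
    outportb(0x71, 0x05);

    /* 2. park the real-mode resume address at 0040:0067 */
    *(void far * far *)MK_FP(0x0040, 0x0067) = resume_point;

    /* 3. pulse the keyboard controller's CPU-reset line */
    while (inportb(0x64) & 0x02)   /* wait until input buffer empty */
        ;
    outportb(0x64, 0xFE);          /* command FEh: yank the reset pin */

    for (;;) ;   /* CPU resets; the BIOS lands on resume_point */
}
```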

...at the time, our common definition of the 286 was "a trashcan with wheels".

About my recollection of DOS 1.0 and COMMAND.COM... argh!
For sure, however, it had a transient part that could be unloaded and a resident part. But really, on DOS 1 I can't remember if it unloaded in full or not... soz :p
 
I'm happy with the definition of an OS as the resident foundation software layer for applications on a system. MS-DOS fits this definition. Your original definition was a BASIC-like interpreted language. I don't know what fits that.

The core interpreter for the original DOS was based on BASIC. The same internal memory layout etc. The bits that we think of as the OS were stitched on around it.

The article has been updated over time. Look at it from an engineering point of view. If COMMAND.COM was not resident, calling, and receiving output from the last terminating program, how exactly did batch files work? How did any MS-DOS program terminate? What state data was passed back? It was the shell because part of the shell was always resident. It was also the interface for users and the bridge between commands, whether run in immediate or batch modes.

Batch files that ran the commands built into COMMAND.COM were just like running instructions in a BASIC interpreter. Batch files were just the equivalent of a .BAS file. It wasn't until much later that you had such fancy things as resident portions of the OS.

Program termination and flag passing were all handled through interrupts. Seriously, check out RBIL. It's all written down there. Though it's pretty comprehensive now and contains a lot of the extended interrupts too.
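For illustration, the termination path looks roughly like this (Turbo C style; INT 21h functions 4Ch/4Dh are the documented DOS ones, everything else is a sketch):

```c
#include <dos.h>

int main(void)
{
    union REGS r;

    /* INT 21h AH=4Ch: terminate with a return code in AL. The
       resident half of COMMAND.COM regains control and can read
       the code back with INT 21h AH=4Dh -- which is what a batch
       file's IF ERRORLEVEL test is built on. */
    r.h.ah = 0x4C;
    r.h.al = 3;            /* our "errorlevel" */
    int86(0x21, &r, &r);
    return 0;              /* never reached    */
}
```

A batch line like IF ERRORLEVEL 3 GOTO FAILED is then just a comparison against that byte.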

And Bill Gates was most uncomplimentary about the i286, describing it as "brain dead", but that didn't stop bespoke extended memory managers appearing for the 286, and much more widely for the i386, which appeared in 1985, whereas you said:


I don't know who wrote those EMM386 managers :???:

The i286 was a joke, but no more than the i186. Motorola were running rings around Intel at that point. The 68k series were monster CPUs in comparison and far, far easier to develop for. If the PC had been based on the 68k series with CP/M, and not the crippled x86 and MS-DOS, the PC industry would probably be much more advanced than it is now.

We were writing our own custom EMM managers and Borland supplied their own with Pascal et al. There were lots of different flavours at one time.

Intel had one strategy for extending memory and MS had another. And in the middle you had IBM, who just seemed confused by the whole affair! Bill Gates couldn't see a reason why we should ever need more than 640K, and Intel had a maths co-processor that couldn't add up (or multiply, I forget).


I never said it was a clone or compatible, just that it was the genesis of MS-DOS.

In the same way that MS copied Apple for their GUI. And Apple 'borrowed' heavily from Xerox.
 
MS-DOS and the IBM-PC were abominations.
It's sad how the worst tech usually wins because of politics, inertia, price, coming from the "right" company or just being at the right place at the right time.
If only IBM had a clue we would have been spared decades of suffering.
 
MS-DOS and the IBM-PC were abominations.
It's sad how the worst tech usually wins because of politics, inertia, price, coming from the "right" company or just being at the right place at the right time.
If only IBM had a clue we would have been spared decades of suffering.

OK. So what was the BETAMAX operating system then in your view?

I remember many options that were better in some sort of technical or aesthetic sense. I don't remember many that were actually, realistically, likely to get anywhere in a market where people choose what to use.
 
about emm386 stuff
I just had a flashback of programming a DOS SVGA blitter (obviously with ASM) that took the texels from EMS memory (as SVGA images took quite a bit of memory). You had a single 64KB window to the SVGA memory and a single 16KB window to the EMS memory, and you had to change them manually (by interrupts)... At least it was guaranteed that a single scanline was in a single page, so you only had to change the pages per scanline.

And the alpha blending was fun too (and surprisingly fast). 256-colour palettized textures could be blended without any math by having a (256*256) 64KB lookup table. We had many funky blend modes in our 2D engine (and an offline tool to generate the lookups). Memories :)
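The lookup trick is simple enough to sketch in C (the names are made up, and this ignores the 16-bit segment gymnastics a real 64KB table needed back then):

```c
/* Paletted (8-bit) blending with no per-pixel arithmetic: a
   precomputed 64KB table maps every (src, dst) palette-index pair
   to the palette index of their blend. An offline tool fills the
   table once per blend mode; the inner loop is a single lookup. */
unsigned char blend_lut[256][256];   /* blend_lut[src][dst] */

void blend_span(unsigned char *dst, const unsigned char *src, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = blend_lut[src[i]][dst[i]];
}
```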
 
I just had a flashback of programming a DOS SVGA blitter (obviously with ASM) that took the texels from EMS memory (as SVGA images took quite a bit of memory). You had a single 64KB window to the SVGA memory and a single 16KB window to the EMS memory, and you had to change them manually (by interrupts)... At least it was guaranteed that a single scanline was in a single page, so you only had to change the pages per scanline.

And the alpha blending was fun too (and surprisingly fast). 256-colour palettized textures could be blended without any math by having a (256*256) 64KB lookup table. We had many funky blend modes in our 2D engine (and an offline tool to generate the lookups). Memories :)

It was the Paradise cards that were the first to contain a workable blitter. I think anyway. Programming used to be so much more fun back then!
 
OK. So what was the BETAMAX operating system then in your view?
I remember many options that were better in some sort of technical or aesthetic sense. I don't remember many that were actually, realistically, likely to get anywhere in a market where people choose what to use.

Frankly I don't know all the options of that era, but I'd say that Xenix, Tripos, Oasis or a decent CP/M would have been better than QDOS. And Motorola 68K would have been better than Intel x86.
The Apple Lisa and the Macintosh were already under development before IBM even started the PC project. And instead of properly designing the computer, IBM chose to quickly assemble it from crappy components without giving it much thought.
GUIs, multitasking, multiuser, audio, "plug & play", flat memory, etc., were not new concepts when IBM started assembling the PC. The development of the Macintosh and Amiga happened in roughly the same period, and they had comparable hardware but were better than the PC.
 
I've never used or even seen CP/M running; I'm too young for that, and even then I guess most people used pen & paper, typewriters, etc. during its era, or at best were introduced to an Apple II, Trash 80, VIC20, C64, etc.
Maybe CP/M was especially a US thing, even. But it was widely used enough that I could read about one of its advanced "features": users would destroy data by being confused by the PIP command syntax. And some common user error would often lead to wiping the wrong disk.

DOS at least was simpler. "copy file destination", "format b:", that's easy.


MS-DOS and the IBM-PC were abominations.
It's sad how the worst tech usually wins because of politics, inertia, price, coming from the "right" company or just being at the right place at the right time.
If only IBM had a clue we would have been spared decades of suffering.

And yet it survives to this day.
The BIOS was the best thing ever to happen; it's why we choose between MSI, ASRock, Asus, Gigabyte, etc. and not between Apple, Atari, Amiga, Acorn, MSX...
We've had decades of sourcing freedom and meaningful backwards compatibility. Of course it took till the mid 90s for the PC to get good (486, VGA, sound, and even then we mostly had "AdLib"-compatible music).

Proprietary systems could evolve (like Amiga 500/1000 to 1200/2000, to Amiga PowerPC eventually) but no other computer could be compatible with them. Or the evolution might be a dead end (Apple II GS).
Or you had industry standards with multiple companies making computers: old CP/M, MSX, maybe Apple II clones. They didn't meaningfully evolve.
The PC was both an industry standard and managed to evolve; it's great that you can boot any BIOS-based PC the same way, run things like SeaTools from pure DOS, etc. It took about 30 years for the BIOS to be replaced by UEFI, and even then you have BIOS emulation.

Without the BIOS you have the ARM situation, where every little piece of hardware is something special and a special OS image has to be baked for it. Android, the DOS/Windows of the ARM world, holds stuff together with string; it caters to that need, and even then you will mostly use the unofficial Java that comes with it, abstracting the hardware (and CPU instruction set).
 
Frankly I don't know all the options of that era, but I'd say that Xenix, Tripos, Oasis or a decent CP/M would have been better than QDOS. And Motorola 68K would have been better than Intel x86.
The Apple Lisa and the Macintosh were already under development before IBM even started the PC project. And instead of properly designing the computer, IBM chose to quickly assemble it from crappy components without giving it much thought.
GUIs, multitasking, multiuser, audio, "plug & play", flat memory, etc., were not new concepts when IBM started assembling the PC. The development of the Macintosh and Amiga happened in roughly the same period, and they had comparable hardware but were better than the PC.

IBM did try to make a "nice" PC, called the IBM 5100, which was insanely expensive. The IBM PC was their second try, whose main focus was bringing the price down. That's why they used the 8088: its 8-bit bus allowed them to use cheap, already-available peripherals instead of 16-bit components. Those nice functions you mentioned all cost money, and I have to say that price (especially of the clones) was an important factor in the IBM PC's success.

Today we are largely free from the legacy of the IBM PC. Even the traditional BIOS is being replaced by EFI (though legacy modes are still widely available). I'd say the only long-lasting legacy is x86, and while that seemed impossible a while ago, it is now being seriously challenged by ARM.
 
I didn't get into MS-DOS until 5.0, so I have very little to provide in this thread. I am interested, though, in the dichotomy between MS-DOS providing the most basic OS layer underneath Windows 3.x and Win9x versus the "drivers" that were involved with those versions of Windows.

For example, I know that Windows 3 supported very basic video drivers that were an obvious extension over what the underlying DOS/COMMAND.COM provided. I also know that in 3.11 WfW there were extensions made to disk I/O handling for performance reasons. There was also a "32-bit" Windows capability patched into the late life of the Win3.x OS cycle.

Then you get into more nebulous areas like Windows 95's apparent full-on driver stack, but still being launched by an MS-DOS "7" operating system underneath. Where do those lines cross?
 
Then you get into more nebulous areas like Windows 95's apparent full-on driver stack, but still being launched by an MS-DOS "7" operating system underneath. Where do those lines cross?

Windows 3.1 runs in protected mode, but it still uses DOS for some very basic operations, such as disk access, for compatibility's sake. The reason is that Windows 3.1 didn't have drivers for everything, so for those "unsupported" hardware components it switches back to real mode and does it the old-fashioned way. It's slow, of course, but at least it runs. The situation is worse for 16-bit protected mode because there's no standard way to switch back to real mode, so a trick called the triple fault is used to reset the CPU and get back to real mode.

Windows 95 removed most of this, and basically everything needs a driver to function. However, it still retains a version of DOS (the version 7 you mentioned) as a boot loader. Windows NT, on the other hand, has its own boot loader and no longer needs DOS.
 
And yet it survives to this day.
The BIOS was the best thing ever to happen; it's why we choose between MSI, ASRock, Asus, Gigabyte, etc. and not between Apple, Atari, Amiga, Acorn, MSX...
We've had decades of sourcing freedom and meaningful backwards compatibility. Of course it took till the mid 90s for the PC to get good (486, VGA, sound, and even then we mostly had "AdLib"-compatible music).
Yes, the PC was technically unimpressive, but it was "open", expansible and much easier to reverse engineer. So it quickly got cloned and became the market standard. In the end that's what matters (and in the beginning it was the name "IBM"). But I just wish it had been designed by a more forward-looking company instead of a reactionary mainframe monopoly in a hurry (while still being fairly "clonable", of course).

IBM did try to make a "nice" PC, called the IBM 5100, which was insanely expensive. The IBM PC was their second try, whose main focus was bringing the price down. That's why they used the 8088: its 8-bit bus allowed them to use cheap, already-available peripherals instead of 16-bit components. Those nice functions you mentioned all cost money, and I have to say that price (especially of the clones) was an important factor in the IBM PC's success.
I don't think that a better-designed PC would have been super-expensive (perhaps only if still designed by IBM). Choosing the 8088 may have been cheaper and more convenient at first, but it was a short-sighted decision. Choosing a better CPU would have prevented all the memory problems that were bound to appear in the near future.
When IBM released the PC, Apple already had a mouse and GUI that worked nicely on an Apple II, there were multiuser and multitasking OSes running on 8-bit and 16-bit machines, some 8-bit systems had better sound and graphics than the PC, etc.
Surely it's too much to expect the PC to have had everything above, but it had none of it. And I can even understand that, since the PC was open and expansible, but the choices of CPU and OS were just wrong.
 
I don't think that a better-designed PC would have been super-expensive (perhaps only if still designed by IBM). Choosing the 8088 may have been cheaper and more convenient at first, but it was a short-sighted decision. Choosing a better CPU would have prevented all the memory problems that were bound to appear in the near future.
When IBM released the PC, Apple already had a mouse and GUI that worked nicely on an Apple II, there were multiuser and multitasking OSes running on 8-bit and 16-bit machines, some 8-bit systems had better sound and graphics than the PC, etc.
Surely it's too much to expect the PC to have had everything above, but it had none of it. And I can even understand that, since the PC was open and expansible, but the choices of CPU and OS were just wrong.

I don't know; if the IBM PC had all the functions you mentioned, then it would have been a Mac :p
Of course, we'll never know what would have happened if the IBM PC had been something like the Mac but "open" (i.e. other people could clone it cheaply). However, just take the GUI for example: if you have a built-in GUI, then a mouse is required, and that's an additional cost (for reference, my first mouse cost something like US$70). Not to mention that a GUI-enabled computer will likely need a faster CPU and more memory.

Further, the IBM PC did have a GUI, called Windows. Windows 1.0 was out in 1985, just about a year and a half later than the first Macintosh. It didn't catch on, though.

The question of the IBM PC's choice of CPU has been heavily discussed for quite some time. With the benefit of hindsight, the 68K seems to have been a better choice, but on the other hand, the 68K didn't survive the RISC revolution, while x86 did. Maybe ARM would have been a better choice, but unfortunately ARM wasn't available at that time (the first ARM CPU was released in 1985, with commercial release in 1986).
 
With the 68k you would have ended up like the Amiga, flat memory with no protection, or Mac OS's hacked-together memory management. ( http://en.wikipedia.org/wiki/Mac_OS_memory_management is a decent read )

The early 68000 seemed to have manufacturing problems and ran at 5 MHz. And the rosy, great systems used a 68010 (out in 1982) plus an external MMU. I'm forgetting the Apple Lisa, which had a 68000 and a real OS (more like Windows 95 than Mac OS?), but it cost as much as a car and was very slow.
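For anyone who hasn't read that page: the "hacked-together" part is largely the handle scheme. A toy sketch of the idea (the names and the fake heap here are made up, not the actual Toolbox code):

```c
#include <stdio.h>
#include <string.h>

/* The classic Mac OS answer to having no MMU: applications hold a
   Handle (a pointer to a master pointer) rather than a raw pointer,
   so the Memory Manager can slide a block around during heap
   compaction and only patch the master pointer. */
typedef char **Handle;

static char heap_a[16], heap_b[16];   /* two "locations" for a block */
static char *master = heap_a;         /* the master pointer          */

int main(void)
{
    Handle h = &master;               /* what the app gets           */

    strcpy(*h, "hello");
    memcpy(heap_b, heap_a, sizeof heap_a);
    master = heap_b;                  /* "compaction" moved the block */

    puts(*h);                         /* still valid via the handle;  */
    return 0;                         /* a cached *h would now dangle
                                         -- hence HLock()/HUnlock()
                                         bracketing in real code      */
}
```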
 
And early ARM was pretty crazy too. Since early ARM only addressed 26 bits, they decided it was a good idea to use the high 6 bits for other purposes, and also the low 2 bits, since everything is aligned to 4 bytes. Fortunately this was mended in later revisions, but if ARM had been used in some popular computer where backward compatibility mattered, I can't imagine what a mess it might have been.
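Concretely, R15 on those chips was PC and status register in one; a small sketch of the packing (the bit positions are the documented ARM2 ones, the sample value is made up):

```c
#include <stdio.h>

/* On the 26-bit ARMs (ARM2/ARM3), R15 held PC and PSR together:
   bits 31..28 = N Z C V flags, 27 = IRQ disable, 26 = FIQ disable,
   bits 25..2  = the word-aligned program counter, bits 1..0 = mode. */
#define R15_PC(r)    ((r) & 0x03FFFFFCUL)     /* 26-bit address */
#define R15_FLAGS(r) (((r) >> 28) & 0xFUL)    /* NZCV           */
#define R15_MODE(r)  ((r) & 0x3UL)            /* 0=usr ... 3=svc */

int main(void)
{
    unsigned long r15 = 0xA0008120UL;         /* made-up value  */
    printf("pc=%06lX flags=%lX mode=%lu\n",
           R15_PC(r15), R15_FLAGS(r15), R15_MODE(r15));
    return 0;
}
```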
 
With the 68k you would have ended up like the Amiga, flat memory with no protection, or Mac OS's hacked-together memory management. ( http://en.wikipedia.org/wiki/Mac_OS_memory_management is a decent read )
Why would you end up without memory protection, just because not every 68k had an MMU? Linux runs fine on a 68020 + MMU.
Practically anything is a lot better than MS-DOS segments, and until the 286 there was no protection either.
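For reference, the segmentation being complained about; a tiny demo of real-mode address formation (the aliasing is the painful part):

```c
#include <stdio.h>

/* Real-mode x86 address formation: linear = segment * 16 + offset.
   20 bits total (1MB), and many different segment:offset pairs
   alias the same byte. */
unsigned long linear(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;
}

int main(void)
{
    /* two different pairs, same physical address */
    printf("%05lX\n", linear(0xB800, 0x0000));  /* B8000 */
    printf("%05lX\n", linear(0xB000, 0x8000));  /* B8000 */
    return 0;
}
```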

The question of the IBM PC's choice of CPU has been heavily discussed for quite some time. With the benefit of hindsight, the 68K seems to have been a better choice, but on the other hand, the 68K didn't survive the RISC revolution, while x86 did. Maybe ARM would have been a better choice, but unfortunately ARM wasn't available at that time (the first ARM CPU was released in 1985, with commercial release in 1986).
I rather doubt x86 survived for any reason other than the x86+MS-DOS/Windows stronghold. And even then it got successively replaced internally, with the compatibility parts only active if you run really old code and otherwise just wasting space. x86, and specifically x87, is a horrible design.
The 68000 is not "entirely RISC" (arguably no CPU is truly RISC or CISC, but something in between), but it is a lot better and saner than the Intel equivalent.
 
I don't know; if the IBM PC had all the functions you mentioned, then it would have been a Mac :p
Of course, we'll never know what would have happened if the IBM PC had been something like the Mac but "open" (i.e. other people could clone it cheaply). However, just take the GUI for example: if you have a built-in GUI, then a mouse is required, and that's an additional cost (for reference, my first mouse cost something like US$70). Not to mention that a GUI-enabled computer will likely need a faster CPU and more memory.
Further, the IBM PC did have a GUI, called Windows. Windows 1.0 was out in 1985, just about a year and a half later than the first Macintosh. It didn't catch on, though.
Yeah, I know I'm asking too much, but at least a decent CPU and OS would have been nice.
Regarding the GUI, look here, a mouse and GUI on a lowly Apple II in 1981. If an Apple II could do it, so could a PC.
And the reason Windows 1.0 didn't catch on was that it sucked hard. It was just a slightly improved "MS-DOS Shell".

Why would you end up without memory protection, just because not every 68k had an MMU? Linux runs fine on a 68020 + MMU.
Practically anything is a lot better than MS-DOS segments, and until the 286 there was no protection either.
Memory protection would have been expensive at that time. And it cannot be retrofitted into an OS that does not support it without breaking compatibility. But even without protection, the choice between AmigaOS/MacOS/Xenix/whatever vs MS-DOS is still a no-brainer.
 
Regarding the GUI, look here, a mouse and GUI on a lowly Apple II in 1981. If an Apple II could do it, so could a PC.

The PC got it in 1985 (PS: AFAICT Apple got a GUI in 1985, could be wrong though).
 
CP/M (Control Program/Monitor?) was an OS initially written for (or at least widely implemented on) Z80 8-bit CPUs.

One of the posters up thread was correct regarding the 380Z computers. These were made by Research Machines and were the standard hardware my UK technical college used in the very early 80s.

I don't know much about the origin of DOS, but I'm pretty sure it was an OS, and not just a command line batch processor.

In relation to BASIC, I do still have an old 6502-based homebrew with a 10K (as in, it takes up 10K of ROM) "extended" BASIC, which, when you press a certain key sequence, brings up "written by Weiland and Gates".

<---wanders off muttering about showing his age.
 