Base 10 computer

Various architectures do have instructions for decimal math. These combine multiple bits within the representation and treat them as a single digit.
Non-binary logic is also possible, as an alternate interpretation of that statement: the hardware doesn't need to work in 1s and 0s only, though there are significant practical advantages to the way things are.
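
To make the bit-combining concrete, here's a minimal sketch of packed BCD in Java (the class and method names are mine, purely for illustration): each decimal digit gets its own 4-bit nibble, so 1234 is stored as the nibbles 1, 2, 3, 4 rather than as the binary integer 10011010010.

    public class PackedBcd {
        // Pack each decimal digit of n into its own 4-bit nibble,
        // e.g. 1234 -> 0x1234 (the hex digits mirror the decimal ones).
        static long toPackedBcd(long n) {
            long bcd = 0;
            int shift = 0;
            do {
                bcd |= (n % 10) << shift;   // one decimal digit per nibble
                n /= 10;
                shift += 4;
            } while (n > 0);
            return bcd;
        }

        public static void main(String[] args) {
            System.out.printf("%d -> 0x%X%n", 1234, toPackedBcd(1234));
        }
    }

Hardware with decimal instructions works on those nibbles directly instead of converting back and forth through binary integers.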
 
Yep, most business computers were base 10 (BCD) until what us old guys would call "fairly recently". At my first full time job maybe 90% of the work ran on an IBM 360 (32-bit binary) but the other 10% was still running on an IBM 1401, which was on the edge between binary and BCD. IIRC it had 6-bit "bytes" but performed arithmetic operations on variable-length words with a single BCD digit per "byte".
 
Burroughs Medium Systems

https://en.wikipedia.org/wiki/Burroughs_Medium_Systems

First generation

The B2500 and B3500 computers were announced in 1966. [1] They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal (BCD) digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some binary base, and had up to 100 mantissa digits.

A typical COBOL statement 'ADD A, B GIVING C' may use operands of different lengths, different digit representations, and different sign representations. This statement compiled into a single 12-byte instruction with 3 memory operands. [2] Complex formatting for printing was accomplished by executing a single EDIT instruction with detailed format descriptors. Other high level instructions implemented "translate this buffer through this (e.g. EBCDIC to ASCII) conversion table into that buffer" and "sort this table using these sort requirements into that table".

In extreme cases, single instructions could run for several hundredths of a second. MCP could terminate over-long instructions but could not interrupt and resume partially completed instructions. (Resumption is a prerequisite for doing page style virtual memory when operands cross page boundaries.)

The machine matched COBOL so closely that the COBOL compiler was simple and fast, and COBOL programmers found it easy to do assembly programming as well.

In the original instruction set, all operations were memory-to-memory only, with no visible data registers. Arithmetic was done serially, one digit at a time, beginning with most-significant digits then working rightwards to least-significant digits. This is backwards from manual right-to-left methods and more complicated, but it allowed all result writing to be suppressed in overflow cases. Serial arithmetic worked very well for COBOL. But for languages like FORTRAN or BPL, it was much less efficient than standard word-oriented computers.
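
For a sense of what a memory-to-memory, digit-at-a-time decimal ADD does, here's a rough Java sketch. It is not the B2500's actual algorithm: it runs right-to-left in the conventional order and ignores signs and digit representations, whereas the real machine scanned left-to-right so it could suppress the store on overflow. The names are mine.

    public class SerialDecimalAdd {
        // Add two unsigned decimal digit strings of possibly different
        // lengths, one digit at a time, with a decimal (not binary) carry.
        static String add(String a, String b) {
            StringBuilder out = new StringBuilder();
            int i = a.length() - 1, j = b.length() - 1, carry = 0;
            while (i >= 0 || j >= 0 || carry != 0) {
                int da = i >= 0 ? a.charAt(i--) - '0' : 0;
                int db = j >= 0 ? b.charAt(j--) - '0' : 0;
                int sum = da + db + carry;
                carry = sum / 10;
                out.append((char) ('0' + sum % 10));
            }
            return out.reverse().toString();
        }

        public static void main(String[] args) {
            // Operands of different lengths, as in ADD A, B GIVING C.
            System.out.println(add("99999", "1"));   // 100000
            System.out.println(add("123", "98765")); // 98888
        }
    }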
 
The IEEE-754-2008 standard defines a few decimal floating-point formats (32-bit, 64-bit and 128-bit) that generally use an encoding that packs three decimal digits into 10 bits. This has been implemented in some processors, such as IBM's POWER processors from POWER6 and up; its main use case is apparently financial calculations. Decimal floating-point ALUs are generally considerably slower and more expensive than binary floating-point ALUs, so I wouldn't expect to see them in e.g. GPUs anytime soon.
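
Two small illustrations, using Java's BigDecimal as a stand-in (software decimal arithmetic, not the hardware DPD encoding the standard defines): why decimal matters for money, and why 10 bits are enough for three digits.

    import java.math.BigDecimal;

    public class DecimalVsBinary {
        public static void main(String[] args) {
            // Binary floating point cannot represent 0.1 exactly, so ten
            // additions of "ten cents" drift away from 1.0.
            double binarySum = 0.0;
            for (int i = 0; i < 10; i++) binarySum += 0.1;
            System.out.println(binarySum);   // 0.9999999999999999

            // A decimal representation keeps 0.1 exact.
            BigDecimal decimalSum = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) decimalSum = decimalSum.add(new BigDecimal("0.1"));
            System.out.println(decimalSum);  // 1.0

            // Three decimal digits fit in 10 bits because 999 < 2^10 = 1024,
            // versus 12 bits for the same three digits in plain BCD.
            System.out.println(Integer.toBinaryString(999)); // 1111100111 (10 bits)
        }
    }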
 
Digital usually means binary, and it probably used normal binary logic with some special handling to adjust things for base 10, leaving some of the allocated bit patterns unused.

The software may have stored numbers in BCD or some other decimal representation, and the CPU may have had instructions to work with these efficiently, but they'd still mostly have internal logic/ALUs/etc. that were some multiple of base 2.

There were actual base-10 computers but I think they died out in the 40s or 50s.
 
The 6502 (Apple II) and 6510 CPUs (C64) supported BCD arithmetic. Gory details are here:
http://www.6502.org/tutorials/decimal_mode.html
I remember that! Being a scientific type, I managed to avoid COBOL. ;)
Also, it may bear mentioning in passing that word length (8 bits = 1 byte) took some time solidifying as well. While it may seem odd now, character sizes of 7-9 bits were used. One I have some personal experience with: CDC Cybers (precursors and parallel to CRAYs) used a 9-bit representation, 36-bit words and 72-bit d-words, which actually bit me as I ported code to a 32-bit word system, where the dubious numerical stability of the coded algorithm made it very occasionally (grrrr) diverge rather than converge nicely.
Never trust an algorithm or piece of code that is dependent on the specifics of how numbers are represented or rounded.....
 
...the specifics of how numbers are represented or rounded.....

The great thing about rounding is just how many strategies you have -- RoundingMode in Java has ceiling, floor, half-up, half-down, half-even (banker's), and then there's whatever bastardization Java's Math.round uses, which isn't any of those. And the great thing about having all those choices is how different vendors (like database providers) choose different strategies. Ugh, rounding!
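
A quick way to see them disagree (a small sketch; the class name is mine): compare BigDecimal's rounding modes against Math.round, which is defined as floor(x + 0.5) and therefore sends -2.5 to -2, while HALF_UP sends it to -3.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class RoundingDemo {
        public static void main(String[] args) {
            BigDecimal[] values = { new BigDecimal("2.5"),
                                    new BigDecimal("-2.5"),
                                    new BigDecimal("3.5") };
            RoundingMode[] modes = { RoundingMode.CEILING, RoundingMode.FLOOR,
                                     RoundingMode.HALF_UP, RoundingMode.HALF_DOWN,
                                     RoundingMode.HALF_EVEN };
            for (BigDecimal v : values) {
                StringBuilder line = new StringBuilder(v + " ->");
                for (RoundingMode m : modes) {
                    line.append(' ').append(m).append('=').append(v.setScale(0, m));
                }
                // Math.round is floor(x + 0.5): ties always go toward +infinity.
                line.append(" Math.round=").append(Math.round(v.doubleValue()));
                System.out.println(line);
            }
        }
    }

Half-even (banker's rounding) is the IEEE 754 default precisely because it doesn't bias long runs of ties in one direction.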
 
My first full time job was with a typical insurance company - in 1979/80 payroll was still running on an IBM 650 emulator hosted on a 1401, with the 650 sitting over in the corner as backup. The 650 was decimal-oriented internally and the 1401 was character-oriented internally but numerical calculations were performed directly on digit-per-character decimal numbers without any conversion to internal base-2 representation (IIRC there was a dedicated bit in each character that indicated end of string or end of number).

The big news in 1981 was getting the 650 emulator running on a 1401 emulator hosted on the S360, allowing the 650 to be hauled away and the 1401 moved to cold backup status, while keeping the paycheques coming.
 
I stand corrected, IBM 650 indeed looks like one of those decimal systems still being developed in the 50s. I guess I'm really disconnected from the world where these machines were being used even in the 1980s (although even that was still well before my time)

The way I put it was probably not that good, but the distinction I'm making is between whether or not it's implemented with ordinary (binary) digital logic underneath. I would consider that to be the case with the 1401, even if the data representation and instructions are strictly limited to decimal math.
 
Yeah... you were pretty much right on in terms of when they were being developed, but they were sold for another decade or so and used for another decade after that. Nobody wants to mess with payroll :)
 
Some early HP 'desktop calculators' supported BCD math... See the marvellous 'Hybrid Processor' that powered the 9825 and other desktop calculators in the mid '70s: http://www.hp9845.net/9845/hardware/processors/
It's a multichip unit composed of three programmable chips: an I/O unit (16-bit bus, 5.7 MHz), a general-purpose ALU (it should be 16-bit as well) and... a BCD math coprocessor (0.15 MFLOPS).
 