Why do certain clock speeds always seem to 'return' in PC hardware?

msxyz

Newcomer
I was discussing with my colleagues why certain speeds seem to return frequently in hardware.

The reasons are part historical and part technical. For example, all the frequencies that are a multiple (or fraction) of 14.318 MHz originate from the need to share memory between the video circuitry (in a form that NTSC equipment could readily digest) and the CPU, which was common in the early days of personal computing.

Other common frequencies in PCs are 8 MHz (a 24/48MHz clock was used for floppies, then for USB; not to mention that in PAL land it is a nice multiple of 15625 Hz) and the ubiquitous 27 MHz (which replaced 14.3 MHz in imaging devices).

We couldn't find a reason why so many devices seem to operate on multiples of 33.3 MHz, though. Why was this speed chosen? It can be synthesized in a PLL by multiplying 14.318 MHz by 7/3 (the result is actually closer to 33.4 MHz), but other 'nicer' numbers can be synthesized as well, such as 25 MHz (14.318 x 7/4) or 20 MHz (x 7/5).
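Just to put numbers on it, here's a quick Python check of those ratios; nothing authoritative, just the arithmetic, taking 14.31818 MHz as the nominal 4x NTSC colorburst reference:

Code:
# Candidate PLL outputs from the 4x NTSC colorburst reference.
REF = 4 * 315e6 / 88  # 14.31818... MHz exactly, the common crystal

for num, den in [(7, 3), (7, 4), (7, 5)]:
    out = REF * num / den
    print(f"14.318 MHz x {num}/{den} = {out / 1e6:.3f} MHz")
# -> 33.409, 25.057 and 20.045 MHz respectively

# And the PAL aside from above: a 24 MHz crystal reaches the 15625 Hz
# line rate with a plain integer divider.
print(24e6 / 15625)  # -> 1536.0, an exact divide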

Does anybody know why the frequency of 33 MHz seems to hold such a great importance in the PC world?
 
It is the clock of the PCI bus, and of the Low Pin Count bus, which emulates an ISA bus at 8.33MHz on about a quarter of the pins.

So for a PC, you need that clock to boot at all. Even if you replaced every PCI device with PCIe ones, LPC is how you access the real-time clock and CMOS settings, as well as a few other things (PS/2, the old COM1 on IRQ4 and COM2 on IRQ3, etc., and even the Trusted Platform Module).
 
I seem to recall hearing or reading about Intel coming out with new system design guidelines when they launched 33MHz 386s, saying something to the effect of "The ancient ways of making 16 or 25MHz computers won't work with this speed demon...!". That should have been around the same time as they started developing PCI.

Don't know if the two are in any way interconnected, though.
 
Intel tried a 50MHz bus speed on the 486 but it was troublesome, so it was quietly retired and we got the DX2/66 instead. The 486DX 33 was very popular as well when I was a kid. Intel then kept 33MHz as the max FSB speed they used on the 486.
VLB was a hackish bus that ran at that frequency. Cards could crap out at higher frequencies (e.g. on a 40MHz bus, your video card and floppy+IDE card might work fine, but if you used a SCSI card instead there might be bus errors, depending on the card and random luck).

There were bus wars before: IBM used MCA, with features similar to PCI (plug'n'play, etc.), but it ran at 8MHz (32-bit). The strict licensing meant to close off the PC market backfired, and the industry came up with EISA instead (32-bit and more or less 8MHz), but it was only successful in servers and some workstations, a bit like 64-bit PCI slots later.
A rare and nice speed demon would be a 486 DX50 with an EISA graphics card, 32MB of RAM in 9-bit SIMM slots, and L2 cache. The Holy Grail of 486 fetishism lol.

Had IBM not made braindead decisions, we would perhaps be running "MCA express" slots with a couple of legacy MCA 2.0 (or whatever) slots on the side, and ISA would have died much earlier. But clinging to the old ways was funny: in 1997 we had a brand-new Windows 95 beige computer to play with, and instead of running the actual Windows 95, it was an Über DOS gaming machine.
 
33 MHz speeds were common well before the advent of the PCI bus. In fact the PCI bus, at the beginning, was available at 25, 30 and 33 MHz (to run synchronously with the system bus). ISA topped out at 8 MHz (it was also available at slower speeds, for example 6MHz in early 286s). The fact that it now runs mostly at 8.33 is a consequence of deriving its clock from 33MHz with an integer divider. Most devices rated for 8 MHz will tolerate the slight overclock to 8.33, and this simplifies the chipset circuitry; in any case, most motherboards of the '90s offered a BIOS option to set wait states on the ISA bus for those older devices which wouldn't run reliably at 8 or 8.33 MHz.

I believe this choice of frequency predates even the 386 (the 33MHz variant first appeared in 1988). 60ns DRAM started to appear around 1985-86 along with the 1µm process, but then again, I'm not sure if a clock speed of 33MHz (60ns equals 2T) was chosen because of that or the other way around; in fact 60ns is just the guaranteed time needed to fetch data after the RAS line goes down (60ns RAMs are, in fact, 105-110ns RAMs if we count the RAS precharge time as part of a 'true' random access rather than page mode).
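A quick Python sanity check of the two figures above (the integer /4 divider that yields 8.33 MHz, and 60ns spanning two clock periods at 33.3 MHz):

Code:
# One bus clock at a nominal 33.33 MHz, and the ISA clock divider.
t_clk = 1 / 33.333e6
print(f"period = {t_clk * 1e9:.0f} ns, 2T = {2 * t_clk * 1e9:.0f} ns")  # 30 ns, 60 ns
print(f"33.33 MHz / 4 = {33.333 / 4:.2f} MHz")  # the 8.33 MHz ISA clock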

Last funny fact: the 68020 debuted in 1984 with an odd choice of frequency, 16.7 MHz (33.4 / 2 = 16.7), so this 'magic' number lingered around even at that time!

If any old timer is here and has any clue, any input is welcome. I made a bet with my colleagues regarding the true reason! ;-)
 
I know that VLB was also tied to the bus speed of the system. There were jumpers (later, firmware settings) that similarly set latencies for VLB in response to 16MHz (!), 20MHz, 25MHz, 33MHz, 40MHz and even 50MHz bus speeds. I think later, in the PCI realm, there was the option to run the bus asynchronously at the extreme speeds at each end (33 PCI for a 16 bus, 25 PCI for a 50 bus), with latency settings used for the stuff in the middle.

I thought I remembered the answer for the ubiquitous 33.4MHz speed, but I can't for the life of me remember now. :(
 
The component that generates the timing is cheap....
Sure, but any frequency will do. Actually, many frequencies came into use because crystal oscillators for them were readily available and mass-produced (e.g. the 4x NTSC reference of 14.318 MHz), while having a quartz cut specifically for a single use may cost a little more. (We're talking about cents, but industrial production is all about cutting corners wherever possible, because savings are realized on the quantity of mass-produced goods.)

Anyway, I couldn't find a better answer anywhere else, and this topic seems to have run dry. Here is the explanation I was told a while ago, which could be as good as any: the original frequency employed was 2^25 Hz, or 33.55MHz. It's a nice, neat number that can easily be divided with flip-flop counters into several fractions of a second, all powers of two. The original advantage must have been lost on (or escaped) later designers, because many of the old boards (late '80s) I have collected over the years already employ a 14.318 MHz timebase (14.318 x 7/3 = 33.4) and, starting from the mid '90s, a 66 MHz one.
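For what it's worth, the divide-by-two argument checks out neatly in Python (just illustrating the claim, not proof of the original design intent):

Code:
# A 2^25 Hz timebase divided by a ripple counter: every flip-flop
# stage halves the frequency, so exact binary fractions of a second
# come for free.
f = 2 ** 25  # 33554432 Hz, i.e. 33.55 MHz
for stages in (20, 23, 25):
    print(f"after {stages} divide-by-2 stages: {f / 2 ** stages} Hz")
# after 25 stages: exactly 1.0 Hz -- a one-second tick from nothing
# but flip-flops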
 
I wonder what the max ISA bus speed was, i.e. on a "turbo XT" with an 8088 or 8086 at 10MHz I'd wager the 8-bit ISA bus was running at 10MHz, perhaps the same with a 10MHz 286 (AT bus / 16-bit ISA bus), but with a 286-12 maybe there's a divider (but which one?)

Anyway, I lost a PC to PCI overclocking: that Celery 500 ran as if nothing had happened at a 75MHz FSB; it didn't complain by itself at FSB 83 either, but that's a 41.5MHz PCI bus, and I ended up killing the motherboard or some little part of it. Yet I knew 75 was entirely safe, while something at 83 sometimes gave voodoo instability.
I kind of miss it lol. Intel did something evil with that chipset: it's a 440BX rebranded as 440ZX, and it refuses to boot (black screen) with any RAM size above 256MB. Was that a real limitation, or an actual 440BX with a check added to refuse big memory?
 
I wonder what the max ISA bus speed was, i.e. on a "turbo XT" with an 8088 or 8086 at 10MHz I'd wager the 8-bit ISA bus was running at 10MHz, perhaps the same with a 10MHz 286 (AT bus / 16-bit ISA bus), but with a 286-12 maybe there's a divider (but which one?).
Officially 8 MHz but, by using wait states, you could be more 'flexible'.

Many old computer interfaces/buses were nothing but buffers/latches realized with discrete logic (like the 74xx244 or 245), so they are more tolerant of non-standard frequencies. For example, hobbyists regularly build IDE interfaces for ancient 8/16-bit computers at all kinds of odd frequencies, because IDE itself, in its simplest form, can be realized with a pair of 74244s and a pair of 74245s. That is enough, for example, to drive a CompactFlash card. The first IDE ran at 6 MHz (the frequency of the original PC AT bus), but it was soon stepped up to 8MHz and remained unchanged until the advent of ATA/16, /33, etc., which more or less mimicked the frequency progression of the computer buses.
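To give an idea of how little is needed at the register level, here is a rough Python sketch of the classic ATA PIO sector read such a minimal interface has to perform. The bus_write8/bus_read8/bus_read16 primitives are hypothetical stand-ins for whatever your 74244/74245 glue exposes; treat it as an illustration, not a reference implementation:

Code:
# Hypothetical bus primitives: wire these to your own latch glue.
def bus_write8(reg, val):
    raise NotImplementedError("hook up to the hardware")

def bus_read8(reg):
    raise NotImplementedError("hook up to the hardware")

def bus_read16(reg):
    raise NotImplementedError("hook up to the hardware")

# Standard ATA task-file register offsets.
ATA_DATA, ATA_SECCNT, ATA_LBA0, ATA_LBA1, ATA_LBA2, ATA_DRVHD, ATA_CMDSTAT = \
    0, 2, 3, 4, 5, 6, 7
CMD_READ_SECTORS = 0x20
STAT_BSY, STAT_DRQ = 0x80, 0x08

def read_sector(lba):
    """Read one 512-byte sector in LBA28 PIO mode."""
    bus_write8(ATA_DRVHD, 0xE0 | ((lba >> 24) & 0x0F))  # drive 0, LBA mode
    bus_write8(ATA_SECCNT, 1)
    bus_write8(ATA_LBA0, lba & 0xFF)
    bus_write8(ATA_LBA1, (lba >> 8) & 0xFF)
    bus_write8(ATA_LBA2, (lba >> 16) & 0xFF)
    bus_write8(ATA_CMDSTAT, CMD_READ_SECTORS)

    # Poll status until BSY drops and DRQ rises. No bus clock involved:
    # the host just waits, which is why odd host frequencies don't matter.
    while bus_read8(ATA_CMDSTAT) & STAT_BSY:
        pass
    while not bus_read8(ATA_CMDSTAT) & STAT_DRQ:
        pass

    # The whole sector comes through the 16-bit data register.
    return [bus_read16(ATA_DATA) for _ in range(256)]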

The weakest link is usually the 'cycle time' of the peripheral behind the bus: when you write or read a value inside a peripheral, you must ensure that the minimum time is always respected, lest the data returned (or written) be trash. Chips made for the 68K world, for example, returned a DTACK signal so that the computer designer didn't have to worry about using chips with different cycle times, or chips considerably slower than the CPU bus itself. The 68K CPU would patiently wait until DTACK was received (or a system monitor would return a bus error signal when the device didn't answer for too long). This system was especially important for 68K CPUs, as all the peripherals were memory mapped, thus accessed like any other RAM/ROM cell (but usually much slower to respond).
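As a toy illustration of that handshake (purely illustrative Python; these aren't real 68K signal names in code form):

Code:
class BusError(Exception):
    pass

def bus_cycle(dtack_after, watchdog=64):
    """Model one asynchronous bus cycle: the CPU inserts wait states
    until DTACK arrives; a watchdog turns silence into a bus error.
    dtack_after = ticks the addressed device needs; None = nobody home."""
    for tick in range(watchdog):
        if dtack_after is not None and tick >= dtack_after:
            return tick  # DTACK seen: the cycle completes, however slow
    raise BusError("no DTACK within the watchdog window")

print(bus_cycle(2))    # fast RAM: done in 2 ticks
print(bus_cycle(20))   # slow memory-mapped peripheral: still fine
try:
    bus_cycle(None)    # missing device -> bus error, as described above
except BusError as e:
    print(e)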

The problem with PCI and other modern buses is different. PCI is synchronous: the bus uses one clock. The clock runs at 33MHz by default, but it can run lower (all the way down to idle) to save power; running it higher than spec means a lot of other glue logic and chips are forced to run out of spec too, since they all run in sync, possibly resulting in damage. Of course, synchronous buses and interfaces also have mechanisms to force a temporary stall, but they rely on the master clock for everything else, so it's important that this clock doesn't run faster than the devices are designed to cope with.
 
Digging through unrelated technology, I arrived at this tiny tidbit in a VMWare Guest Timekeeping document which then reminded me of this thread:
PIT
The PIT is the oldest PC timer device. It uses a crystal-controlled 1.193182MHz input oscillator and has 16-bit counter and counter input registers. The oscillator frequency was not chosen for convenient timekeeping; it was simply a handy frequency available when the first PC was designed. (The oscillator frequency is one-third of the standard NTSC television color burst frequency.) The PIT device actually contains three identical timers that are connected in different ways to the rest of the computer. Timer 0 can generate an interrupt and is suitable for system timekeeping. Timer 1 was historically used for RAM refresh and is typically programmed for a 15µs period by the PC BIOS. Timer 2 is wired to the PC speaker for tone generation.

1/3rd of the NTSC television color burst frequency. Multiply by four, and you get the 4.77MHz that we know the earliest XTs ran at. Multiply by seven, and you get the 8.3MHz of later generations. By twenty-eight, and you get 33.4MHz.
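The whole chain is easy to verify (Python again; 315/88 MHz is the exact NTSC colorburst frequency):

Code:
burst = 315e6 / 88   # 3.579545... MHz NTSC colorburst
pit = burst / 3      # 1.193182 MHz, the PIT oscillator from the quote
for mult in (4, 7, 28):
    print(f"x{mult}: {pit * mult / 1e6:.2f} MHz")
# -> 4.77, 8.35 and 33.41 MHz
# Bonus: timer 0 free-running with its maximum 16-bit divisor gives
# the familiar ~18.2 Hz BIOS tick.
print(f"{pit / 65536:.4f} Hz")  # 18.2065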

Perhaps this is the foundational component we were looking for...
 
Good catch! If I remember correctly, 1.19 MHz was also the frequency of the 6502 inside the Atari VCS; no doubt that frequency was handy for creating all the signals needed to drive an NTSC TV, including color. I was always amazed by the clever tricks used to create the color signal in those early home computers and consoles... no DACs or dedicated encoders, just delay lines, phase shifters and counters.
 
The techno-archaeological work in this thread is impressive. Keep it up for a few pages, and you might be able to trace it all back to the frequency of the first steam engine.
 