
Life Before Windows

Before We Knew What An “Operating System” Was

Gather ’round the campfire, children, as we talk about The Time Before Operating Systems. That was when all computers came with their own built-in software, indistinguishable from the hardware. You didn’t have an IBM computer and an IBM operating system, for example. You had just an IBM system. Yeah, it included software, but nobody thought much about where it came from or what it was called. It was simply part of the machine.

This was also the time of early microprocessors, home computers, and build-it-yourself kits. Again, all of these machines came with their own bundled software. Each one was different, of course, because they were developed by the computer makers alongside the hardware. Amiga, Commodore, Altair, IMSAI, Apple, and other machines all had their own personalities as determined by the bit stream that made them go.

Then came a guy named Gary Kildall. Gary died 20 years ago, but not before changing the entire microprocessor and computer world.

I never met Mr. Kildall, but it so happens that I live just a few doors down from his old house. Coincidentally, his company’s former office building is directly across the street from me; I look out my window every day at Gary’s old office. It’s a pizza place now.

Gary Kildall figured that even early little microprocessors like Intel’s 8080 could run real software, just like the commercial IBM, Data General, or DEC minicomputers of the time. He created something he called a “control program and monitor,” or CP/M for short. The company he formed around it was named Digital Research. 

Computer scientist John Wharton says of his friend, “[Gary] offered the complete package to Intel, along with an editor, assembler, linker, and loader, for $20,000. Intel turned him down, and CP/M went on to sell a quarter of a million copies and become by far the highest-volume operating system of its time.”

“Back before the introduction of the IBM PC, CP/M supported protocols for memory allocation, file sharing, process switching, and peripheral management. When Microsoft bought the rights to an unauthorized quick-and-dirty knockoff of CP/M from Seattle Computer Products and renamed it MS-DOS, these protocols were removed, since Microsoft programmers didn’t understand why they were needed.”

CP/M was arguably the first operating system to separate the software from the hardware. Up until then, the OS (such as it was) was just whatever software came with the system. With CP/M, the operating system, programming APIs, user interface, disk format, communication protocols, BIOS, and other features were now independent of the hardware, and independent of the company making that hardware. No longer did DEC equipment have to use DEC’s proprietary protocols for everything, or IBM equipment do everything the IBM way. CP/M was portable across processor architectures and instruction sets. It laid the groundwork for the hardware/software division of labor we have today. And it was phenomenally successful.
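
Just to make that division of labor concrete, here’s a tiny sketch in modern C of the layering idea. It is not actual CP/M code, and the names (bios_t, bdos_print, and so on) are invented for illustration: the machine-specific routines sit behind a small table, roughly analogous to CP/M’s BIOS jump vector, and everything above it calls only through that table, so the upper layers never have to know what hardware they’re running on.

    #include <stdio.h>

    /* Hardware-specific layer: each machine supplies its own routines. */
    typedef struct {
        void (*con_out)(char c);   /* write one character to the console  */
        int  (*con_in)(void);      /* read one character from the console */
    } bios_t;

    /* One possible "machine": the host C runtime standing in for real hardware. */
    static void host_con_out(char c) { putchar(c); }
    static int  host_con_in(void)    { return getchar(); }
    static const bios_t console_bios = { host_con_out, host_con_in };

    /* Hardware-independent layer: knows nothing about the machine,
       only about the BIOS table it is handed. */
    static void bdos_print(const bios_t *bios, const char *s)
    {
        while (*s)
            bios->con_out(*s++);
    }

    int main(void)
    {
        /* The "application" calls the portable layer, which calls the BIOS. */
        bdos_print(&console_bios, "Hello from the portable layer\n");
        return 0;
    }

Swap in a different table for a different machine and the portable layer never changes, which is essentially the trick that let the same CP/M software run on hundreds of different boxes.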

So why didn’t CP/M become the world’s dominant operating system? The popular myth is that IBM came calling to Digital Research but that Gary Kildall blew them off, preferring to go flying that day rather than meet with a bunch of East Coast guys in white shirts and blue ties.

In reality, Kildall did meet with IBM, and he successfully licensed CP/M for their newfangled IBM Personal Computer Model 5150 (aka the PC). But IBM ultimately offered the PC with a choice of operating systems: Digital Research’s CP/M or Microsoft’s MS-DOS, which was far cheaper. You can guess how that turned out. And thus was born Microsoft’s dominance of the personal computer world for the next 30-odd years.

I’ve often wondered, as I look at Digital Research’s former headquarters, how things might have played out differently. If the company had agreed to different licensing terms, perhaps with different pricing, would we all be using CP/M 8.1 today? Would we curse it as much as we do Windows? And what would have happened to Microsoft, and the whole Redmond tech scene? If Digital Research had become Microsoft, and vice versa, would the area around the company have boomed the same way Seattle’s tech corridor did? Oh, what might have been…

Gary Kildall himself didn’t care, according to his friends. While outsiders tended to tiptoe around the subject of Microsoft’s success or Bill Gates’s phenomenal wealth (both presumably at the expense of Digital Research), politely assuming that Gary might be a bit sensitive about it, the man himself didn’t give a rat’s ass. It was never about the money, success, or fame. Gary Kildall worked on CP/M and other projects because he liked to, not because he wanted to get rich at it. Reportedly, the whole reason he developed CP/M in the first place was that he didn’t want to commute into Silicon Valley to use a “real” timesharing minicomputer. With him, it was all about the product. A real engineer, in other words.

Wharton goes on to say, “At a time when Intel was positioning microprocessors as a replacement for random logic in fixed-function desk calculators, postage scales, and traffic-light controllers, it was Gary who advised Intel that these same chips were flexible enough to be programmed as general-purpose computers. At a time when microcomputer software developers were debating the merits of machine-language programming in octal vs. hex, Gary defined the first programming language and developed the first compiler specifically for microprocessors.”

In any endeavor, somebody has to be first. In microcomputer operating systems, that was Gary Kildall and Digital Research.

Last week our little town placed a commemorative plaque outside Gary Kildall’s house. If you’re in the area and are a fan of the GPS game geocaching, there’s a geocache called “Life Before Windows” (GC10PG1) that will test your hexadecimal skills. I think Gary would have aced it.
