Building a Computer, Should You Inadvertently Travel Back in Time (Part 1)

I sometimes wonder if I spend too much time reading science fiction books. The same goes for watching science fiction films and TV series in general, and Doctor Who in particular. The reason I say this is that I tend to spend more time than is good for me thinking about what I would do if I inadvertently wandered into a timeslip and found myself transported back to the late 1930s or early 1940s, for example. The problems would only be exacerbated if this slip also transported me into a parallel dimension – such as one that never discovered things like Boolean algebra – in which computer science was still firmly rooted in the analog domain.

One of the things I flatter myself I would be good at would be designing a digital computer from the ground up. There are lots of things to wrap one’s brain around here, so – on the off chance anything like this happens to you – I thought I’d provide a few pertinent pointers for you to peruse and ponder.

Let’s assume that the task falls upon you to build the first digital computer on the planet. The first thing you are going to need is a good understanding of the binary number system, including signed and unsigned integers and one’s and two’s complement values; a basic grasp of floating-point concepts wouldn’t go amiss, either. As fate would have it, I was at a friend’s house one evening earlier this week (a few of us get together to watch a couple of episodes of Doctor Who each week), and his university student son asked me some questions about radix complements and diminished radix complements, so I gifted him a copy of my book, Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), which discusses all this in excruciating detail.
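Just to give you a taste, here’s a little Python sketch (the function names are my own, purely for illustration) showing how the very same 8-bit pattern can be read as an unsigned value or as a two’s complement signed value:

```python
def as_unsigned(bits: str) -> int:
    """Interpret a bit string as an unsigned integer."""
    return int(bits, 2)

def as_twos_complement(bits: str) -> int:
    """Interpret a bit string as a two's complement signed integer."""
    value = int(bits, 2)
    if bits[0] == "1":            # the top bit carries a negative weight
        value -= 1 << len(bits)   # subtract 2^n to recover the signed value
    return value

# The same 8-bit pattern, read two different ways:
print(as_unsigned("11111011"))          # 251
print(as_twos_complement("11111011"))   # -5
```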

Once you’ve decided on your implementation technology (relays, vacuum tubes, transistors) – which largely depends on the time in which you find yourself – a good starting point will be to decide on the fundamental architecture of your central processing unit (CPU). Will it have a single accumulator (ACC), two accumulators, an accumulator and some general-purpose registers, or just the registers? Based on this, the next step will be to decide on the set of machine-level instructions your CPU is going to support and how it will handle them (see also Weird Instructions I Have Loved by Jim Turley). Think of things like the ability to shift and rotate binary values, to perform logical operations (AND, OR, XOR), to perform mathematical operations (ADD, SUBTRACT), to compare two values to see which is larger, and to jump to another location in the computer’s memory based on the results of any of these operations.
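To make this a little more tangible, here’s a hypothetical single-accumulator instruction set sketched in Python; the mnemonics and opcode values are mine, invented purely for illustration, and don’t correspond to any real machine:

```python
# A hypothetical instruction set for a simple single-accumulator CPU.
# Mnemonics and opcode values are invented purely for illustration.
OPCODES = {
    "LOAD":  0b0001,  # ACC <- memory[addr]
    "STORE": 0b0010,  # memory[addr] <- ACC
    "ADD":   0b0011,  # ACC <- ACC + memory[addr]
    "SUB":   0b0100,  # ACC <- ACC - memory[addr]
    "AND":   0b0101,  # ACC <- ACC & memory[addr]
    "OR":    0b0110,  # ACC <- ACC | memory[addr]
    "XOR":   0b0111,  # ACC <- ACC ^ memory[addr]
    "LSHL":  0b1000,  # logical shift left by one bit
    "ROR":   0b1001,  # rotate right by one bit
    "CMP":   0b1010,  # compare ACC with memory[addr], setting flags
    "JMP":   0b1011,  # unconditional jump to addr
    "JNZ":   0b1100,  # jump to addr if the zero flag is clear
}
```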

For the purposes of simplicity and brevity, let’s assume that – in your spare time – you’ve also created some form of read-only memory (ROM) and random-access memory (RAM), along with some form of long-term storage, possibly in the form of perforated paper products like punched cards or paper tapes.

Things really start to get interesting once you’ve actually built your machine, because now you have to program it. The computer itself works at the level of machine code instructions – these are the ones you decided your CPU would implement – each of which is represented by a different pattern of binary 0s and 1s.

So, how are you going to capture and enter your programs? One approach that was used with the first digital computers in our slice of the multiverse was to (a) specify an address in the computer’s memory in binary using a set of toggle switches, (b) specify an instruction or a piece of data that you wanted to load into the memory, again in binary, using a set of toggle switches, (c) force load this information into the specified memory location, and (d) repeat over and over again for the remaining instructions and data. In addition to taking an inordinate amount of time and being prone to errors, this really wasn’t as much fun as I make it sound.
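If you fancy a feel for the front-panel experience without the blisters, here’s a toy Python model of the deposit cycle; the word size, memory size, and encodings are assumptions carried over from my hypothetical instruction set above (a 4-bit opcode packed alongside a 4-bit address):

```python
# A toy model of front-panel programming: set the address switches, set the
# data switches, hit "deposit," and repeat. Purely illustrative.
memory = [0] * 16   # a tiny 16-word memory for the sake of the example

def deposit(address_switches: str, data_switches: str) -> None:
    """Force-load the data pattern into the addressed memory location."""
    memory[int(address_switches, 2)] = int(data_switches, 2)

# One switch-flipping session (4-bit opcode + 4-bit address per word):
deposit("0000", "00010011")   # location 0: LOAD the value at address 3
deposit("0001", "00110100")   # location 1: ADD the value at address 4
deposit("0010", "00100101")   # location 2: STORE the result at address 5

print([f"{word:08b}" for word in memory[:3]])
# ['00010011', '00110100', '00100101']
```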

Your next step would be to define some sort of assembly language, which involves associating a mnemonic with each of your instructions, like JMP for “jump” and LSHL for “logical shift left,” for example. Of course, there’s much more to this than simply selecting a set of mnemonics – you also need to define an associated syntax (what constructs are allowed, how you specify comments, all sorts of things, really).

The interesting thing is that, at this stage, you now have an assembly language, but you don’t actually have anything you can do with it. Well, that’s not strictly true. What you can do is use pencil and paper to capture your programs in your assembly language. Then you hand-assemble the program into its equivalent machine code instructions (the binary patterns of 0s and 1s the computer uses). Then you enter these instructions into the computer using your trusty toggle switches.
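The hand-assembly step really is just a table lookup. Here’s a sketch, once again using my invented encoding of a 4-bit opcode packed alongside a 4-bit address:

```python
# Hand-assembly as table lookup: swap each mnemonic for its opcode, then
# glue on the operand. Hypothetical encoding: 4-bit opcode, 4-bit address.
OPCODES = {"LOAD": 0b0001, "STORE": 0b0010, "ADD": 0b0011}

program = [("LOAD", 3), ("ADD", 4), ("STORE", 5)]
for mnemonic, addr in program:
    word = (OPCODES[mnemonic] << 4) | addr
    print(f"{mnemonic:5} {addr}  ->  {word:08b}")
# LOAD  3  ->  00010011
# ADD   4  ->  00110100
# STORE 5  ->  00100101
```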

This is probably around the time that you will capture, hand-assemble, and hand-load some simple utility programs that will allow you to do things that make your life easier, like reading your machine code instructions from a paper tape, for example. Along the way, you will also invent some sort of character code (like ASCII in our world) that you can use to represent files of human-readable characters like letters and numbers and punctuation marks and suchlike.
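Inventing such a code boils down to agreeing on a number for every symbol. Here’s a toy 6-bit code that’s ASCII-like in spirit but invented purely for show:

```python
import string

# A toy 6-bit character code: 26 letters + 10 digits + 5 punctuation marks,
# so everything fits comfortably in 6 bits (64 possible codes).
CHARSET = string.ascii_uppercase + string.digits + " .,!?"
ENCODE = {ch: i for i, ch in enumerate(CHARSET)}
DECODE = {i: ch for ch, i in ENCODE.items()}

message = "HELLO WORLD!"
codes = [ENCODE[ch] for ch in message]
print(codes)                               # [7, 4, 11, 11, 14, 36, ...]
print("".join(DECODE[c] for c in codes))   # HELLO WORLD!
```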

What you really need is to be able to capture your programs in human-readable form (that is, in your assembly language) using a simple text editor, and then use a program called an assembler to translate this assembly code into the machine code equivalent that the computer understands. Unfortunately, you don’t have either of these little scamps at the moment.
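Just to show how little it takes to get started, here’s a minimal assembler sketched in Python, using the same invented encoding as before; a real assembler would also need to handle labels, expressions, and error reporting, of course:

```python
# A minimal assembler sketch for the toy encoding used earlier: strip the
# comments, split each line into "MNEMONIC operand," and translate.
OPCODES = {"LOAD": 0b0001, "STORE": 0b0010, "ADD": 0b0011}

def assemble(source: str) -> list[int]:
    words = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()   # ';' starts a comment
        if not line:
            continue                        # skip blank lines
        mnemonic, *operand = line.split()
        addr = int(operand[0]) if operand else 0
        words.append((OPCODES[mnemonic] << 4) | addr)
    return words

source = """
    LOAD  3   ; ACC <- memory[3]
    ADD   4   ; ACC <- ACC + memory[4]
    STORE 5   ; memory[5] <- ACC
"""
print([f"{w:08b}" for w in assemble(source)])
# ['00010011', '00110100', '00100101']
```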

This is a bit of a chicken-and-egg situation – what comes first, the assembler or the editor? If it were me, I think I’d start by using my pencil and paper to capture the assembly language description of a simple assembler, and then hand-assemble this to create the machine code for my first assembler. Next, I’d use my pencil and paper to capture the assembly language description of a simple editor, and then use my rudimentary assembler to assemble this into the machine code corresponding to my editor.

At this point, we are really cooking on a hot stove, because now we can use our simple text editor to capture the assembly language representation for a more sophisticated assembler, then we can use our rudimentary assembler to assemble our spiffy new assembler, and then we can use our original editor and our spiffy new assembler to create a more sophisticated editor. And around and around the loop we go.

At some stage, believe it or not, the joys of capturing programs in assembly language will start to wane, at which point we will commence to contemplate moving up to a more sophisticated programming language, but what form will this take? I’ll tell you what, I’ll leave you to mull on this for a while, and then we will return to this topic in my next column. In the meantime, as always, I’d love to hear what you think about all of this.
