
Building a Computer, Should You Inadvertently Travel Back in Time (Part 1)

I sometimes wonder if I spend too much time reading science fiction books. Similarly for watching science fiction films and TV series in general and Doctor Who in particular. The reason I say this is that I do tend to spend more time than is good for me thinking about what I would do if I inadvertently wandered into a timeslip and found myself transported back to the late 1930s or early 1940s, for example. The problems would only be exacerbated if this slip also transported me into a parallel dimension – such as one that never discovered things like Boolean algebra – in which computer science was still firmly rooted in the analog domain.

One of the things I flatter myself I would be good at would be designing a digital computer from the ground up. There are lots of things to wrap one’s brain around here, so – on the off chance anything like this happens to you – I thought I’d provide a few pertinent pointers for you to peruse and ponder.

Let’s assume that the task falls upon you to build the first digital computer on the planet. The first thing you are going to need is a good understanding of the binary number system, including signed and unsigned integers and ones and twos complement values; a basic grasp of floating-point concepts wouldn’t go amiss, either. As fate would have it, I was at a friend’s house one evening earlier this week (a few of us get together to watch a couple of episodes of Doctor Who each week), and his university student son asked me some questions about radix complements and diminished radix complements, so I gifted him a copy of my book, Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), which discusses all of this in excruciating detail.
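Just to make the complements concrete, here’s a little sketch (written in modern Python purely as scratch paper, since you certainly won’t have anything like it to hand) showing how the ones complement (the diminished radix complement) and the twos complement (the radix complement) of an 8-bit value shake out:

```python
# A minimal sketch, assuming an 8-bit word; Python is used here only to
# illustrate the arithmetic, not as something you'd have back then.

WIDTH = 8
MASK = (1 << WIDTH) - 1          # 0xFF for an 8-bit word

def ones_complement(value):
    """Diminished radix complement: flip every bit."""
    return ~value & MASK

def twos_complement(value):
    """Radix complement: flip every bit, then add one."""
    return (ones_complement(value) + 1) & MASK

def to_signed(value):
    """Interpret an 8-bit pattern as a signed (twos complement) integer."""
    return value - (1 << WIDTH) if value & (1 << (WIDTH - 1)) else value

print(bin(ones_complement(0b00000101)))   # 0b11111010
print(bin(twos_complement(0b00000101)))   # 0b11111011
print(to_signed(0b11111011))              # -5, i.e., the negation of 5
```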

Once you’ve decided on your implementation technology (relays, vacuum tubes, transistors) – which largely depends on the time in which you find yourself – a good starting point will be to decide on the fundamental architecture of your central processing unit (CPU). Will it have a single accumulator (ACC), two accumulators, an accumulator and some general-purpose registers, or just the registers? Based on this, the next step will be to decide on a set of machine-level instructions your CPU is going to use and how it will handle them (see also Weird Instructions I Have Loved by Jim Turley). Things like the ability to shift and rotate binary values, to perform logical operations (AND, OR, XOR), to perform mathematical operations (ADD, SUBTRACT), to compare two values to see which is the larger, and to jump to another location in the computer’s memory based on the results from any of these operations.
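To give a flavor of what all this might look like, here’s a hypothetical single-accumulator machine sketched in Python. The opcodes, their numbering, and the two-words-per-instruction format are all invented for illustration; they don’t correspond to any real machine, past or parallel-dimensional.

```python
# A toy single-accumulator CPU; everything here (opcodes, instruction format,
# memory size) is an assumption made up for this sketch.

MEMORY = [0] * 256   # word-addressed memory shared by instructions and data
ACC = 0              # the single accumulator
PC = 0               # program counter

# Invented opcodes: each instruction is two words, an opcode and an operand address.
LOAD, ADD, AND, XOR, LSHL, JMPZ, HALT = range(7)

def step():
    """One fetch-decode-execute cycle; returns False once HALT is reached."""
    global ACC, PC
    opcode, operand = MEMORY[PC], MEMORY[PC + 1]
    PC += 2
    if opcode == LOAD:
        ACC = MEMORY[operand]
    elif opcode == ADD:
        ACC = (ACC + MEMORY[operand]) & 0xFF   # 8-bit wraparound addition
    elif opcode == AND:
        ACC &= MEMORY[operand]
    elif opcode == XOR:
        ACC ^= MEMORY[operand]
    elif opcode == LSHL:
        ACC = (ACC << 1) & 0xFF                # logical shift left by one bit
    elif opcode == JMPZ:
        if ACC == 0:
            PC = operand                       # jump if the accumulator is zero
    return opcode != HALT
```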

For the purposes of simplicity and brevity, let’s assume that – in your spare time – you’ve also created some form of read-only memory (ROM) and random-access memory (RAM), along with some form of long-term storage, possibly in the form of perforated paper products like punched cards or paper tapes.

Things really start to get interesting once you’ve actually built your machine, because now you have to program it. The computer itself works at the level of machine code instructions – these are the ones you decided your CPU would implement – each of which is represented by a different pattern of binary 0s and 1s.

So, how are you going to capture and enter your programs? One approach that was used with the first digital computers in our slice of the multi-universe was to (a) specify an address in the computer’s memory in binary using a set of toggle switches, (b) specify an instruction or a piece of data that you wanted to load into the memory, again in binary, using a set of toggle switches, (c) force load this information into the specified memory location, and (d) repeat over and over again for the remaining instructions and data. In addition to taking an inordinate amount of time and being prone to errors, this really wasn’t as much fun as I make it sound.
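If we model the banks of toggle switches as strings of 0s and 1s, the whole (a) through (d) procedure boils down to something like the sketch below. The addresses and values being deposited are invented, loosely following the toy machine from earlier.

```python
# Front-panel loading as a sketch; the switch settings shown are made up.

MEMORY = [0] * 256

def deposit(address_switches, data_switches):
    """Steps (a) through (c): set the address, set the data, press 'load'."""
    MEMORY[int(address_switches, 2)] = int(data_switches, 2)

# Step (d): repeat, one word at a time, for the entire program and its data.
deposit("00000000", "00000000")   # address 0: a LOAD opcode (0)
deposit("00000001", "00010000")   # address 1: its operand address (16)
deposit("00000010", "00000110")   # address 2: a HALT opcode (6)
```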

Your next step would be to define some sort of assembly language, which involves associating mnemonics with each of your instructions, like JMP for “Jump” and LSHL for “Logical Shift Left,” for example. Of course, there’s much more to this than simply selecting a set of mnemonics – you also need to describe an associated syntax (what constructs are allowed, how you specify comments, all sorts of things, really).

The interesting thing is that, at this stage, you now have an assembly language, but you don’t actually have anything you can do with it. Well, that’s not strictly true. What you can do is use pencil and paper to capture your programs in your assembly language. Then you hand-assemble the program into its equivalent machine code instructions (the binary patterns of 0s and 1s the computer uses). Then you enter these instructions into the computer using your trusty toggle switches.
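Hand assembly, in other words, is nothing more than looking each mnemonic up in a table you keep on paper and writing down the corresponding bit pattern, ready to be toggled in at the front panel. Using the invented mnemonics and opcode values from the toy machine above:

```python
# Hand assembly as a table lookup; mnemonics and opcode values are invented.

OPCODES = {"LOAD": 0, "ADD": 1, "AND": 2, "XOR": 3, "LSHL": 4, "JMPZ": 5, "HALT": 6}

# The program as captured in pencil: (mnemonic, operand address) pairs.
program = [("LOAD", 16), ("ADD", 17), ("JMPZ", 0), ("HALT", 0)]

# Print the binary patterns you would enter via the toggle switches.
for mnemonic, operand in program:
    print(f"{OPCODES[mnemonic]:08b} {operand:08b}")
```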

This is probably around the time that you will capture, hand-assemble, and hand-load some simple utility programs that will allow you to do things to make your life easier, like reading your machine code instructions from a paper tape, for example. Along the way, you will also invent some sort of code (like ASCII in our world) that you can use to represent files of human-readable characters like letters and numbers and punctuation marks and suchlike.
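Your character code needn’t be anything more exotic than assigning consecutive values to the characters you care about. The 6-bit scheme sketched below is invented for illustration; it isn’t ASCII or any other historical code.

```python
# A made-up 6-bit character code; the character set and its ordering are assumptions.

CHARSET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,+-*/"
ENCODE = {ch: code for code, ch in enumerate(CHARSET)}
DECODE = {code: ch for code, ch in enumerate(CHARSET)}

def to_codes(text):
    """Turn a human-readable string into a list of codes for the tape punch."""
    return [ENCODE[ch] for ch in text.upper()]

print(to_codes("ADD 42"))   # [1, 4, 4, 0, 31, 29]
```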

What you really need is to be able to capture your programs in human-readable form (that is, in your assembly language) using a simple text editor, and then use a program called an assembler to translate this assembly code into the machine code equivalent that the computer understands. Unfortunately, you don’t have either of these little scamps at the moment.

This is a bit of a chicken-and-egg situation – what comes first, the assembler or the editor? If it were me, I think I’d start by using my pencil and paper to capture the assembly language description of a simple assembler, and then hand-assemble this to create the machine code for my first assembler. Next, I’d use my pencil and paper to capture the assembly language description of a simple editor, and then use my rudimentary assembler to assemble this into the machine code corresponding to my editor.
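Just to show how little that first hand-assembled assembler needs to do, here’s a minimal sketch: it strips comments, looks up each mnemonic, and emits the machine-code words. No labels, no symbols, no error checking; and, once again, the syntax and opcodes are my invented ones rather than anything historical.

```python
# A minimal assembler sketch for the invented assembly language used above;
# the ';' comment convention and the opcode table are assumptions.

OPCODES = {"LOAD": 0, "ADD": 1, "AND": 2, "XOR": 3, "LSHL": 4, "JMPZ": 5, "HALT": 6}

def assemble(source_lines):
    """Translate assembly source lines into a flat list of machine-code words."""
    words = []
    for line in source_lines:
        line = line.split(";")[0].strip()   # discard comments and surrounding space
        if not line:
            continue                        # skip blank and comment-only lines
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        words.extend([OPCODES[mnemonic], operand])
    return words

source = [
    "LOAD 16    ; fetch the first value",
    "ADD  17    ; add the second value",
    "HALT       ; and stop",
]
print(assemble(source))   # [0, 16, 1, 17, 6, 0]
```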

At this point, we are really cooking on a hot stove, because now we can use our simple text editor to capture the assembly language representation for a more sophisticated assembler, then we can use our rudimentary assembler to assemble our spiffy new assembler, and then we can use our original editor and our spiffy new assembler to create a more sophisticated editor. And around and around the loop we go.

At some stage, believe it or not, the joys of capturing programs in assembly language will start to wane, at which point we will commence to contemplate moving up to a more sophisticated programming language, but what form will this take? I’ll tell you what, I’ll leave you to mull on this for a while, and then we will return to this topic in my next column. In the meantime, as always, I’d love to hear what you think about all of this.
