I don’t really recall when I first ran across the concept of binary numbers. I think I must have been around six years old. I remember getting a lined pad and pencil and starting to capture the sequence: 0, 1, 10, 11, 100, 101, 110, 111, 1000… until I’d filled the entire pad. I also remember being surprised that I hadn’t “reached the end” of the binary count sequence (it was some time before I wrapped my brain around the fact that the integers [in any base] go on forever).
Similarly, I’ve been enthused by digital logic ever since I discovered it… or it discovered me. I’m entranced by the things one can do with combinational logic just by connecting different collections of gates together, like implementing a 2-input XOR using four 2-input NAND gates (not least because there are multiple ways to do this).
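Just for giggles and grins, we can sanity-check one such construction in a few lines of Python. This is a minimal sketch on my part, showing one common four-NAND wiring, not necessarily the only (or best) one:

```python
# Verify that four 2-input NAND gates can implement a 2-input XOR.
# This is one common construction; other wirings are possible.

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    m = nand(a, b)     # first gate: shared intermediate term
    p = nand(a, m)     # second gate
    q = nand(b, m)     # third gate
    return nand(p, q)  # fourth gate: final output

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b
print("Four NANDs give us XOR for all input combinations")
```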
You can only imagine my surprise when I was introduced to the concept of using feedback to build memory elements, like level-sensitive latches and edge-triggered flip-flops, out of primitive gates; an SR latch formed from two back-to-back NAND gates, for example.
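If you want to play with this without reaching for a breadboard, here’s a quick logic-level sketch of the cross-coupled NAND latch (with active-low set/reset inputs) in Python. The settle-loop approach is my own simplification: I simply iterate the two gate equations until the feedback loop stabilizes, so this is not a timing-accurate model:

```python
# Simulate a cross-coupled NAND SR latch (active-low set/reset inputs).
# We iterate the two gate equations until the feedback loop settles.
# This is a logic-level sketch, not a timing-accurate model.

def nand(a, b):
    return 1 - (a & b)

def sr_latch(s_n, r_n, q, q_n):
    for _ in range(4):  # a few passes is plenty for it to settle
        q_new = nand(s_n, q_n)
        q_n_new = nand(r_n, q_new)
        if (q_new, q_n_new) == (q, q_n):
            break
        q, q_n = q_new, q_n_new
    return q, q_n

q, q_n = 0, 1                    # arbitrary initial state
q, q_n = sr_latch(0, 1, q, q_n)  # pulse set low   -> Q = 1
print("after set:  ", q, q_n)
q, q_n = sr_latch(1, 1, q, q_n)  # both high       -> hold
print("after hold: ", q, q_n)
q, q_n = sr_latch(1, 0, q, q_n)  # pulse reset low -> Q = 0
print("after reset:", q, q_n)
```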
I was really excited by the idea of combining multiple flip-flops to create things like serial-in, parallel-out (SIPO) shift registers. And I was almost delirious with delight when I realized all the things that could be achieved by feeding one or more of the outputs of a SIPO back to its input. I think of this as “nested feedback” (the hardware equivalent of “nested loops” in software) in that we have the “inner” feedback loops inside the individual registers and the “outer” feedback loops in the form of the outputs being fed back as inputs.
For example, suppose that we have a 3-bit register that we preload with a value of 100 in binary. Perhaps the simplest configuration is to connect the SIPO output from its final stage directly back to its input, in which case clocking the register will result in the repeating sequence 100, 010, 001, 100… This is known as a ring counter.
Now, assume that we preload our SIPO with 000, and we feed the inverted version of the SIPO output from the final stage back to its input. In this case, clocking the register will result in the sequence 000, 100, 110, 111, 011, 001, 000… This is known as a Johnson counter (see also Who Invented the Johnson Decade Counter (and Why)?).
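If you’d like to see these sequences fall out all by themselves, here’s a minimal Python sketch of both counters. The bit ordering, with the input stage on the left, is simply my choice for printing the states:

```python
# Simulate 3-bit ring and Johnson counters as lists of bits.
# state[0] is the input stage; state[-1] is the final (output) stage.

def clock(state, invert_feedback=False):
    fb = state[-1] ^ 1 if invert_feedback else state[-1]
    return [fb] + state[:-1]  # shift everything one stage along

def run(state, invert_feedback, steps):
    seq = []
    for _ in range(steps):
        seq.append("".join(map(str, state)))
        state = clock(state, invert_feedback)
    return seq

print("ring:   ", " -> ".join(run([1, 0, 0], False, 4)))
# ring:    100 -> 010 -> 001 -> 100
print("johnson:", " -> ".join(run([0, 0, 0], True, 7)))
# johnson: 000 -> 100 -> 110 -> 111 -> 011 -> 001 -> 000
```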
Where things really start to get exciting for me is when multiple outputs (“taps”) are fed back through a simple logic function (like an XOR) into the input. The result is a linear feedback shift register (LFSR) that produces a sequence of bits that appears random but is actually deterministic, repeating with a fixed period. These little scamps are often used for pseudorandom number generation, scrambling, error checking (like CRCs), and encryption.
If we select the correct tap points, then an n-bit LFSR will cycle through (2^n – 1) patterns. For example, a 3-bit LFSR will cycle through (2^3 – 1) = 7 patterns. More specifically, a 3-bit XOR-based LFSR will cycle through every pattern except 000, while its NXOR-based counterpart will cycle through every pattern except 111.
Example 3-bit LFSRs (Source: Max Maxfield)
In the illustration above, we start with (a), which has an XOR feedback path with taps [1,2]. In (b), we keep the XOR but change the taps to [0,2]. In (c), we keep the [0,2] taps but swap the XOR for an NXOR. Did I mention that I love this stuff?
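For anyone who fancies experimenting, here’s a minimal Python sketch of these three configurations. I should note that the stage numbering and shift direction below are my assumptions, so the taps may be labeled differently than in the illustration, but the resulting cycle lengths and excluded states match what we said above:

```python
# Simulate a 3-bit Fibonacci-style LFSR. state[0] is the input stage.
# taps lists the stage indices that are combined to form the feedback.
# NOTE: the stage numbering and shift direction here are my assumptions;
# the illustration may label its taps differently.

def run_lfsr(taps, invert, seed):
    state, seen = list(seed), []
    while True:
        s = "".join(map(str, state))
        if s in seen:
            return seen  # we've looped back; return the full cycle
        seen.append(s)
        fb = 0
        for t in taps:
            fb ^= state[t]
        if invert:                 # NXOR-based feedback
            fb ^= 1
        state = [fb] + state[:-1]  # shift one stage along

print("(a) XOR,  taps [1,2]:", run_lfsr([1, 2], False, (1, 0, 0)))
print("(b) XOR,  taps [0,2]:", run_lfsr([0, 2], False, (1, 0, 0)))
print("(c) NXOR, taps [0,2]:", run_lfsr([0, 2], True,  (0, 0, 0)))
# Each prints 7 states: (a) and (b) visit every pattern except 000,
# while (c) visits every pattern except 111.
```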
The reason I’m waffling on about LFSRs here is that I was just meandering my way around the Embedded Related website (as you do), which is hosted by my old chum Stephane Boucher. I’ve found the forums on this site to be a great place to pose my own embedded-related questions.
A forum post titled Linear Feedback Shift Registers for the Uninitiated by Jason Sachs caught my eye. Wow! This really is a case of “Everything You Wanted to Know About LFSRs (But Were Too Afraid to Ask).” There is a treasure trove of information here, so I’m happy to report that Jason and Stephane kindly granted me permission to present this forum post in its entirety as follows:
In 2017 and 2018 I wrote an eighteen-part series of articles about linear feedback shift registers, or LFSRs:
- Ex-Pralite Monks and Finite Fields, in which we describe what an LFSR is as a digital circuit; its cyclic behavior over time; the definition of groups, rings, and fields; the isomorphism between N-bit LFSRs and the field GF(2N); and the reason why I wrote this series
- libgf2 and Primitive Polynomials, in which we describe how to use a Python library called libgf2, which nobody else actually uses, to analyze LFSRs and binary finite fields; primitive polynomials p(x) in GF(2)[x] and their relation to maximal-length LFSRs; how to find some of these primitive polynomials in Python; and how libgf2 handles arithmetic in binary finite fields
- Multiplicative Inverse, and Blankinship’s Algorithm, in which we describe how Blankinship’s Algorithm can be used to compute the extended GCD and thereby determine the multiplicative inverse, whether in multiplicative groups or in finite fields
- Easy Discrete Logarithms and the Silver-Pohlig-Hellman Algorithm, in which we describe what a discrete logarithm is; what makes discrete logarithms easy or difficult to compute; how to use the Silver-Pohlig-Hellman algorithm and the Chinese Remainder Theorem to compute a discrete logarithm; and how to use libgf2 to calculate discrete logarithms
- Difficult Discrete Logarithms and Pollard’s Kangaroo Method, in which we describe some simple methods for computing discrete logarithms with time complexity O(√n), namely Shanks’s baby-step giant-step algorithm, Pollard’s Rho method, and Pollard’s kangaroo algorithm; and in which we talk a little bit about the faster index calculus methods
- Sing Along with the Berlekamp-Massey Algorithm, in which we describe the Berlekamp-Massey algorithm to determine the characteristic polynomial given the output of an LFSR
- LFSR Implementations, Idiomatic C, and Compiler Explorer, in which we describe using the XC16 compiler to create reasonably-efficient LFSR implementations in a dsPIC; GCC extended assembly to improve efficiency; an idea of “idiomatic C”; the PCG random number generator; how to use Compiler Explorer to see which compilers can recognize idiomatic C for expressing bitwise rotation; and a test harness for verifying LFSR implementations in C
- Matrix Methods and State Recovery, in which we describe how to represent LFSRs with matrices; how to compute time-shifted output of an LFSR; how to recover LFSR state from its output bits; how to avoid matrices to make the same computations more efficiently; and how to use libgf2 for state recovery
- Decimation, Trace Parity, and Cyclotomic Cosets, in which we describe what happens when you decimate the output of an LFSR; the equivalency of something called trace parity to the LFSR output; how to compute trace parity in libgf2; and how to compute an unknown decimation ratio from characteristic polynomials using calculation of roots, discrete logarithms, and cyclotomic cosets
- Counters and Encoders, in which we describe counters and encoders as applications of LFSRs
- Pseudorandom Number Generation, in which we describe some methods of pseudorandom number generation (PRNG); some simple ways of evaluating correlation between PRNG outputs; correlation properties of LFSR output bits as a pseudorandom bit sequence (PRBS); and correlation properties of LFSR state
- Spread-Spectrum Fundamentals, in which we describe some properties of a numerical Fourier transform and show examples of spectral visualizations; how sharp edges affect the high-frequency spectral content; frequency-hopping and direct-sequence spread-spectrum (DSSS); how disturbance waveforms impact bit error rates of unencoded vs. DSSS-encoded signals; bit error rate curves; and how two DSSS-encoded signals can both share the same transmission bandwidth
- System Identification, in which we describe the use of perturbation signals for the identification of a transfer function; how sine-wave perturbations behave; how PRBS can be used to estimate the impulse response in the time domain; parameter estimation techniques; and how perturbation amplitude impacts estimated errors
- Gold Codes, in which we describe the importance of synchronization between direct-sequence spread-spectrum (DSSS) transmitters sharing bandwidth; how path delays present some challenges in the use of DSSS; how to modify PRBS from an LFSR using Gold Codes, in such a way as to reduce the effects of interference between multiple spread-spectrum transmitters; and how Gold Codes are used in the Global Positioning System for satellite-based navigation
- Error Detection and Correction, in which we describe bit error rate graphs; some aspects of Hamming distance; parity, checksums, and cyclic redundancy check (CRC) for error detection; and Hamming codes for error correction.
- Reed-Solomon Error Correction, in which we describe the theory of Reed-Solomon codes; how to encode data with Reed-Solomon in Python using unireedsolomon or libgf2; how to encode data with Reed-Solomon in C; how Hamming and Reed-Solomon encodings impact bit error rate curves; why we don’t just use Reed-Solomon encoding for error correction instead of CRCs for error detection; and how Reed-Solomon encoding is used in QR codes
- Reverse-Engineering the CRC, in which we describe how to determine the parameters of an unknown CRC using LFSRs, the Berlekamp-Massey algorithm, and carefully-chosen test messages; and look at some real-world CRC test cases
- Primitive Polynomial Generation, in which we describe eleven different techniques for finding the primitive polynomials over GF(2) of a given degree, including the St. Ives Algorithm, Gordon’s Method, Shoup’s Algorithm, and the Falling Coyote Algorithm.
If you end up using these articles extensively to learn on your own or to teach others, I would love to hear about it.
These articles are dedicated in memory of L. Sachs (1917 – 2019)
Well, I for one am very impressed. There’s enough LFSR-related information here to keep one busy for yonks and yonks (that’s a lot of yonks). How could anyone resist topics such as “Pollard’s Kangaroo Method” and “Cyclotomic Cosets”? So, if you’ll excuse me, I have a lot of reading to do.
Someone just reminded me that I wrote a 3-part series of articles on LFSRs way back in the mists of time. I just found them on EETimes. Eeeek, I wrote these almost 20 years ago. I’d completely forgotten about using LFSRs as pointers into FIFOs and stuff like that. Here are the URLs to those columns:
Part 1: https://www.eetimes.com/tutorial-linear-feedback-shift-registers-lfsrs-part-1/
Part 2: https://www.eetimes.com/tutorial-linear-feedback-shift-registers-lfsrs-part-2/
Part 3: https://www.eetimes.com/tutorial-linear-feedback-shift-registers-lfsrs-part-3/
Memory is the second thing to go, Max.
ROFLOL