Well, PCI-SIG has done it once again. They’ve doubled the peak bandwidth of the PCI Express (PCIe) bus by releasing the PCIe 6.0 specification on January 11. A 16-lane PCIe 6.0 connection has a peak bidirectional bandwidth of 256 Gbytes/sec. That’s pretty darn fast. How’d they do it? The simple answer is that they ripped a page from the high-speed Ethernet playbook and adopted PAM4 modulation. PAM4 modulation encodes two bits with each signal excursion by using 4-level voltage signaling instead of the familiar 2-level (NRZ) signaling. Presto! You’ve doubled the bandwidth. PAM4 SerDes have been appearing in recently announced FPGAs such as the Intel Stratix 10 I-series FPGAs and Versal Premium devices from Xilinx, and we’ve seen technology demonstrations of PAM4 transceivers since 2016 or so. Semiconductor vendors already know how to deal with high-speed PAM4 signaling.
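If you want to picture what PAM4 buys you, here’s a tiny Python sketch of my own (not anything from the PCIe 6.0 spec; the Gray-coded mapping and the level values are purely illustrative) that maps bit pairs onto four signal levels, so each symbol carries two bits where 2-level NRZ carries one.

```python
# Toy PAM4 encoder: illustrative only, not the PCIe 6.0 line coding.
# Each 2-bit pair maps to one of four normalized voltage levels, so a
# PAM4 symbol carries twice the bits of a 2-level (NRZ) symbol at the
# same symbol rate.

# Gray-coded mapping (adjacent levels differ by one bit), a common
# choice for PAM4 links; the exact PCIe 6.0 mapping lives in the spec.
PAM4_LEVELS = {
    (0, 0): -1.0,
    (0, 1): -1 / 3,
    (1, 1): +1 / 3,
    (1, 0): +1.0,
}

def pam4_encode(bits):
    """Map an even-length bit sequence to a list of PAM4 levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(bits)
print(symbols)                    # 4 symbols carry all 8 bits
print(len(bits) / len(symbols))   # 2.0 bits per symbol vs. 1.0 for NRZ
```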
That said, you’re not likely to see PCIe 6.0 in systems for a while. Processors, SoCs, FPGAs, and other chips with PCIe 5.0 support are only just becoming available, because PCI-SIG released the PCIe 5.0 specification in 2019, a little more than two years ago. However, like their predecessors, PCIe 5.0 and PCIe 6.0 are virtually guaranteed success. The electronics industry has adopted PCIe as the reigning standard, the best high-speed interconnect for chip-to-chip and board-to-board communications within a system. There’s simply no other viable choice.
Way back in 1995, long before PCIe appeared, I wrote a hands-on article about the “new” PCI bus. (PCI is the predecessor to PCIe.) The switch from the PC/AT system bus to PCI, which was underway in the mid-1990s, set the stage for PCIe. Back then, I wrote:
“Early implementations of the PCI bus tarnished its image. Although the bus spec promised 132-Mbyte/sec transfer rates, the first PCI systems barely achieved one-quarter of that rate. Further, few 32-bit µPs can sustain 132-Mbyte/sec transfer rates on their data buses, so processor-based tests on PCI motherboards can’t demonstrate PCI’s true potential.
“Even with these drawbacks, the PCI bus is conquering hardware design. The PCI bus is already the standard for PC motherboards. It’s in the latest Macintosh computers and in Digital Equipment Corp’s Alpha workstations. Many embedded-system board vendors are jumping on the PCI bus, so PCI is becoming an increasingly important factor in the industrial market. In short, PCI is becoming a key design element in many computer markets.”
PCI-SIG published the serialized version of PCI, the PCIe 1.0a spec, in 2003. PCIe 1.0a featured a per-lane data rate of 250 Mbytes/sec at a bit-serial rate of 2.5 GTransfers/sec. Transfer rate is expressed in transfers per second instead of bits per second because the transfer count includes PCIe’s overhead bits, which detract from the actual data-transfer rate. PCIe 1.0a used an 8b/10b coding scheme, so there was 20% overhead; the net result was a maximum per-lane data rate of 2 Gbps, or 250 Mbytes/sec. PCIe 1.1 followed two years later. The revised spec cleaned up a few issues that had arisen from implementations of the original spec, but the data rate remained unchanged.
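If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope Python calculation (my numbers, not a formula from the spec) showing how 8b/10b encoding turns 2.5 GTransfers/sec into 250 Mbytes/sec per lane:

```python
# Per-lane PCIe bandwidth under 8b/10b encoding: every 10 bits on the
# wire carry 8 bits of payload, a 20% overhead.

def lane_bandwidth_mbytes(transfers_per_sec, payload_bits=8, coded_bits=10):
    """Payload bandwidth in Mbytes/sec for one lane, one direction."""
    payload_bits_per_sec = transfers_per_sec * payload_bits / coded_bits
    return payload_bits_per_sec / 8 / 1e6   # bits -> bytes -> Mbytes

print(lane_bandwidth_mbytes(2.5e9))  # PCIe 1.0a: 250.0 Mbytes/sec
print(lane_bandwidth_mbytes(5.0e9))  # doubling the transfer rate -> 500.0 Mbytes/sec
```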
Two years after that, in 2007, PCI-SIG rolled out the PCIe 2.0 spec, which doubled the transfer rate to 5 GTransfers/sec and upped the per-lane data rate to 500 Mbytes/sec. One key precedent that PCIe 2.0 established was backward compatibility with the PCIe 1.1 specification, a precedent PCI-SIG has maintained in every subsequent iteration of the spec, up to and including version 6.0. In 2009, PCI-SIG introduced the PCIe 2.1 specification, which did not increase data-transfer bandwidth but added many management, support, and troubleshooting features to the spec in preparation for PCIe 3.0.
Unveiled in 2010, PCIe 3.0 increased the transfer rate to 8 GTransfers/sec. By itself, the new transfer rate would not have doubled PCIe’s peak data bandwidth. However, the PCIe 3.0 spec called for 128b/130b encoding instead of 8b/10b encoding, so the transfer overhead dropped from 20% to 1.54%. As a result, the PCIe 3.0 per-lane bandwidth is 985 Mbytes/sec, which almost doubles the data-transfer rate compared to PCIe 2.1. PCI-SIG updated the spec to PCIe 3.1 in 2014, adding a few protocol improvements but leaving the transfer rate unchanged.
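Here’s the same back-of-the-envelope arithmetic, again just my own sanity check rather than anything lifted from the spec, showing what 128b/130b encoding buys over 8b/10b:

```python
# Encoding overhead and per-lane bandwidth for PCIe 3.0's 128b/130b scheme.

def overhead(payload_bits, coded_bits):
    """Fraction of the wire rate spent on encoding overhead."""
    return 1 - payload_bits / coded_bits

def lane_bandwidth_mbytes(transfers_per_sec, payload_bits, coded_bits):
    """Payload bandwidth in Mbytes/sec for one lane, one direction."""
    return transfers_per_sec * payload_bits / coded_bits / 8 / 1e6

print(overhead(8, 10))      # 0.20    -> 20% overhead for 8b/10b
print(overhead(128, 130))   # ~0.0154 -> ~1.54% overhead for 128b/130b
print(lane_bandwidth_mbytes(8e9, 128, 130))   # ~984.6 Mbytes/sec per lane
```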
PCI-SIG made a preliminary announcement for PCIe 4.0, teasing a transfer rate of 16 GTransfers/sec, in 2011, long before it announced the PCIe 3.1 spec. The actual PCIe 4.0 spec didn’t appear until the middle of 2017. There was no change to the data-encoding scheme, and the peak bandwidth doubled again, from 985 Mbytes/sec to 1.969 Gbytes/sec. That’s per lane. A 16-lane PCIe 4.0 implementation can move 31.508 Gbytes/sec.
PCI-SIG announced the final PCIe 5.0 spec in 2019, after releasing a preliminary version in 2017. PCIe 5.0 boosts the per-lane transfer rate to 32 GTransfers/sec and the per-lane data rate to 3.938 Gbytes/sec. As of today, we’re still in the early rollout phase for chips and systems that incorporate PCIe 5.0 I/O ports. For example, Intel announced its 12th Generation “Alder Lake” Core i9, i7, and i5 CPUs with PCIe 5.0 support in November 2021.
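To put the generational doublings side by side, here’s a small Python tabulation of my own using the same simple encoding arithmetic. Note that the PCIe 6.0 row is deliberately simplified: it ignores FLIT and FEC overhead and just treats 64 GTransfers/sec as 8 Gbytes/sec per lane, which is how you get to the headline 256 Gbytes/sec bidirectional figure for a 16-lane link (128 Gbytes/sec in each direction).

```python
# Rough per-lane and x16 bandwidth by PCIe generation, one direction,
# computed as (transfer rate) * (payload bits / coded bits) / 8.
# The PCIe 6.0 entry is a simplification: no FLIT/FEC overhead is shown.

GENERATIONS = [
    # (name, GTransfers/sec, payload_bits, coded_bits)
    ("PCIe 1.0a", 2.5,  8,   10),
    ("PCIe 2.0",  5.0,  8,   10),
    ("PCIe 3.0",  8.0,  128, 130),
    ("PCIe 4.0",  16.0, 128, 130),
    ("PCIe 5.0",  32.0, 128, 130),
    ("PCIe 6.0",  64.0, 1,   1),    # simplified: encoding overhead ignored
]

for name, gt, payload, coded in GENERATIONS:
    lane_gbytes = gt * payload / coded / 8          # Gbytes/sec per lane
    print(f"{name}: {lane_gbytes:6.3f} Gbytes/sec/lane, "
          f"x16 = {lane_gbytes * 16:7.3f} Gbytes/sec")
```

Run that and the PCIe 4.0 and 5.0 rows land on the 1.969, 31.508, and 3.938 Gbytes/sec figures quoted above.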
Over its nearly two-decade lifespan, PCIe usage has become widespread. Early PCIe implementations were primarily expansion-card buses, first in PCs and then in servers. Widespread use in computer design drove the cost of PCIe implementations down and made PCIe an attractive data-transfer protocol for a much broader range of applications. Systems engineers became very familiar with PCIe, thanks to its omnipresence in PCs and servers, and they soon found many new uses for the I/O protocol.
One of PCIe’s early successes was replacing the dedicated graphics-card slot in PCs. AGP, the Accelerated Graphics Port, grew out of the original PCI (not PCIe) spec from the 1990s to meet the high-speed I/O requirements of graphics cards. Although the AGP protocol was based on the parallel PCI bus, it introduced the concept of a dedicated slot to the PC. If you want to maximize performance, you can’t share bus bandwidth across multiple slots. And, above all else, bleeding-edge graphics users want performance.
PCIe stole the concept of the dedicated slot and, as a result, began to replace AGP as the graphics-card slot of choice in PCs around 2004, just one year after PCI-SIG rolled out PCIe 1.0a. Although it took a few years, PCIe eventually killed off AGP.
PCIe has also made great strides in remaking the world of data storage. Thanks to the advent of SSDs (solid-state drives) as HDD (hard disk drive) replacements, conventional disk-storage interfaces such as SCSI, SAS, and SATA became throughput bottlenecks. PCIe-based SSDs are increasingly taking market share from other types of storage devices. There’s a storage-specific protocol layered on top of PCIe called NVMe, an open specification for accessing a computer’s non-volatile storage media connected to a system via PCIe. The first NVMe spec was released in 2011. Today, storage vendors offer NVMe SSDs in a variety of form factors, from conventional 3.5- and 2.5-inch HDD form factors down to tiny M.2 cards.
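To show just how close NVMe sits to the host these days, here’s a small, hedged Python sketch that lists the NVMe controllers a Linux machine can see by reading sysfs. It assumes the standard Linux nvme driver is loaded and simply skips any attribute files (model, serial, firmware_rev) that aren’t present.

```python
# List NVMe controllers visible to a Linux host via sysfs.
# Prints nothing if no NVMe devices (or no /sys/class/nvme) are present.

from pathlib import Path

NVME_SYSFS = Path("/sys/class/nvme")

def read_attr(dev_dir, name):
    """Return a sysfs attribute's contents, or None if it isn't there."""
    try:
        return (dev_dir / name).read_text().strip()
    except OSError:
        return None

devices = sorted(NVME_SYSFS.glob("nvme*")) if NVME_SYSFS.exists() else []
for dev in devices:
    model = read_attr(dev, "model")
    serial = read_attr(dev, "serial")
    firmware = read_attr(dev, "firmware_rev")
    print(f"{dev.name}: model={model} serial={serial} firmware={firmware}")
```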
At the other end of the bandwidth spectrum, the older, slower PCIe 2.0, 3.0, and 3.1 protocols are still quite useful for making chip-to-chip connections in all sorts of systems, including automotive and embedded applications. Although these protocols don’t have the nose-bleed data bandwidth of the newer PCIe generations, the serial nature of PCIe hugely simplifies circuit-board design. When you no longer need to route 8-, 16-, or 32-bit parallel buses around a board, life gets so much easier. And thanks to the ubiquity and backward compatibility of the PCIe standards over two decades, the cost of integrating PCIe interfaces into all sorts of chips has become negligible, despite the circuit complexity of these interfaces.
There are several additional uses for PCIe, but I think you get the idea by now. PCIe has become immensely successful. It’s now the universal in-system bus for many applications, and PCIe 6.0 ensures that the PCIe I/O protocol will continue to lead a long and happy life.