Intelligently Transporting Electrical and Optical Signals

Back in the day, when computers ran standalone and there was no such thing as networks, I used to be reasonably confident that I had at least a vague understanding as to what was going on. Silicon chips talked to other silicon chips and circuit boards talked to other circuit boards using electrical signals travelling over copper wires. If you wanted to send a message over longer distances, you could write a letter, make a phone call, or send a telegram (well, that’s the way it felt).

As an aside, one of the earliest and most famous steam-powered railways to carry passengers was the Stockton and Darlington Railway (S&DR) in England, whose first passenger run took place on 27 September 1825. Just a few years later, in 1830, the Liverpool and Manchester Railway opened, also in England. This line was a major milestone in railway history because it was the first intercity passenger railway to offer a scheduled service.

As strange as it may seem to us today, those early railway passengers were genuinely afraid that high speeds would make it impossible to breathe. People worried that the human body couldn’t handle speeds over 30 mph, fearing they’d be unable to draw in air fast enough or that they might even suffer physical harm. There was also a concern about what open-air travel at those speeds might do to people’s eyes and other senses.

The reason I mention this here is that, when I was coming up, clock frequencies of a few hundred kilohertz were considered to be pretty darned exciting. I remember working on a project in the very early 1980s whose motherboard was to be driven by a clock running at… wait for it… wait for it… one megahertz! (I’ll pause for a moment to let the gasps of astonishment die down). I also remember that—when the lead engineer reached out to flip the power switch for the first time—we all took deep breaths and leaned back, “just in case.”

I think we subconsciously believed we were pushing the bounds of what was possible. We didn’t have a clue. In the case of shrinking semiconductor process nodes, as each new node came online, the naysayers proclaimed, “this is as low as we can go,” and then we went lower. Similarly, every time we increased our clock frequencies, the pessimistic prophets of doom and despondency pronounced, “this is as high as we can go,” and then we went higher.

I’m thinking of things like the ISA bus (8 MHz) in the 1980s and the PCI bus (33 to 66 MHz) in the 1990s, followed by 100 Mbps (Fast Ethernet) in the mid-1990s and 1 Gbps (Gigabit Ethernet) around the turn of the millennium, followed by newer standards like 10 Gbps Ethernet (10GBASE-T) in the 2010s, followed by even more advanced technologies like 25 Gbps, 40 Gbps, and even 100 Gbps over copper in the 2020s (albeit in very controlled environments over very short distances using advanced twisted-pair cables and connectors).

Based on my past experiences, I’m certainly not going to be the one to say, “thus far and no farther,” but I also don’t mind saying that going faster and faster over copper is getting harder and harder (and you can quote me on that).

The fact that we need to move more and more data faster and faster with lower and lower latency explains why we are moving to photonic systems and optical interconnect to link our chips, boards, systems, and facilities. All of which goes to explain why I was just chatting with Dr. Armond Hairapetian, who is the Founder and CEO of TeraSignal.

You may be familiar with TeraSignal’s TS8401/02 intelligent 400G (4x100G) PAM-4 modulator drivers, which are the industry’s first CMOS solutions with digital link training and link monitoring for 800G linear pluggable optical (LPO) modules. (The TS8401 and TS8402 are essentially the same die—the only difference is that the 01 has pads and is wire-bonded to the substrate, while the 02 has solder bumps and is attached using a flip-chip technique.)

The primary purpose of our chat was for Dr. Armond to bring me up to date with respect to TeraSignal’s latest development, which is a protocol-agnostic intelligent interconnect for plug-and-play linear optics called TSLink.

Now, the following diagram can be a little confusing for a bear of little brain like your humble narrator, so let’s take things step-by-step. On the left we have an application-specific integrated circuit (ASIC), possibly in the form of a system-on-chip (SoC). In addition to processors, hardware accelerators, on-chip memory, and a bunch of other stuff, this little scamp will contain multiple serializer/deserializer (SerDes) transceiver functions, each comprising a transmitter (TX) and a receiver (RX). It’s the TX side that’s of interest here.

DSP-based re-timer vs. TSLink-based re-driver (Source: TeraSignal)

Observe that the SerDes TX function includes a digital equalizer and a digital-to-analog converter (DAC). The digital equalizer is used to apply pre-emphasis and/or de-emphasis to compensate for signal degradation over the copper interconnect (pre-emphasis boosts the high-frequency components of the signal to counteract the losses they’ll face over the transmission path; de-emphasis reduces the strength of the lower-frequency components relative to the high-frequency parts, effectively flattening the overall frequency response).
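
If you like to see this sort of thing expressed in code, the following is a minimal sketch of a three-tap transmit FFE applying pre- and post-cursor emphasis. The tap weights are purely illustrative on my part; they aren’t tied to any particular SerDes.

```python
import numpy as np

# Minimal sketch of a 3-tap feed-forward equalizer (FFE) of the kind a SerDes
# TX uses for pre-/de-emphasis. The tap weights below are illustrative only.
def ffe(symbols, taps=(-0.1, 0.8, -0.1)):
    """Apply pre-cursor, main-cursor, and post-cursor taps to a symbol stream."""
    pre, main, post = taps
    padded = np.concatenate(([0.0], symbols, [0.0]))
    # Each output sample mixes the next, current, and previous symbols,
    # boosting transitions (high frequencies) relative to long flat runs.
    return pre * padded[2:] + main * padded[1:-1] + post * padded[:-2]

nrz = np.array([1, 1, 1, -1, -1, 1, -1, 1], dtype=float)
print(ffe(nrz))  # samples at transitions come out larger than those in flat runs
```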

When it comes to converting the electrical signal from the ASIC into an optical signal to the rest of the system, we have two options: either we can use a traditional DSP re-timer, or we can use TeraSignal’s TSLink re-driver. Both of these options are shown on the right of the image above.

It’s important to note that the “Optics” annotations on the extreme right of this image do not represent optical fibers. Instead, they indicate an electrical path to something like a Mach-Zehnder modulator, which will be used to control the amplitude of an optical wave. The re-timer or re-driver functions will be bundled with the Mach-Zehnder modulator and other stuff, all presented as a single optical module whose electrical input comes from the ASIC and whose optical output (feeding a fiber) comes from the Mach-Zehnder modulator.
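
As a further aside, the classic textbook model of a Mach-Zehnder modulator’s intensity transfer function looks something like the following. This is an idealization for illustration only (it’s certainly not a model of TeraSignal’s optics), but it hints at why a linear electrical path in front of the modulator is so attractive.

```python
import numpy as np

# Idealized (textbook) Mach-Zehnder modulator intensity transfer function: the
# drive voltage sets the phase difference between the modulator's two arms, and
# the recombined light's intensity follows a raised-cosine curve. V_pi is the
# voltage required to swing the output from fully on to fully off.
def mzm_intensity(v_drive, v_pi=3.0, quadrature_bias=True):
    """Normalized optical output power for a given drive voltage (illustrative values)."""
    phase = np.pi * v_drive / v_pi
    if quadrature_bias:
        phase = phase + np.pi / 2  # bias at the roughly linear midpoint of the curve
    return 0.5 * (1.0 + np.cos(phase))

# Biased at quadrature, small voltage swings map almost linearly onto optical
# power, so whatever distortion the electrical path adds ends up on the light.
print(mzm_intensity(np.array([-0.5, 0.0, 0.5])))
```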

Bearing all this in mind…

Suppose we start off by visualizing two devices communicating directly with each other over copper interconnect using something like a non-return-to-zero (NRZ) binary code. When the designers decided to move to optical interconnect, they would feed the electrical signal from the ASIC into an optical module.

The traditional approach was to put a DSP re-timer inside the optical module, because this provided a simple way to employ some kind of clock data recovery (CDR) technique to recover the clock, re-time the data, and send this re-timed data on its merry way.

But then the industry started to move away from NRZ and to adopt the PAM-4 (pulse amplitude modulation with 4 levels) modulation scheme. PAM-4 employs four distinct amplitude levels to represent data, thereby allowing it to encode two bits per symbol instead of the traditional single bit supported by NRZ binary signaling.
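
If it helps, here’s a minimal sketch of the PAM-4 mapping itself, using the Gray-coded level assignment that’s commonly employed so that adjacent levels differ by only one bit (which limits the damage when noise pushes a symbol into a neighboring level):

```python
# Minimal sketch of PAM-4 encoding: two bits per symbol, mapped onto four
# amplitude levels. The Gray-coded assignment below is the commonly used one,
# chosen so that adjacent levels differ by a single bit.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Pair up the bits and map each pair onto one of four levels (-3, -1, +1, +3)."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Eight bits become four symbols: the same symbol rate now carries twice the data.
print(pam4_encode([0, 1, 1, 1, 0, 0, 1, 0]))  # -> [-1, 1, -3, 3]
```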

With PAM-4, you can’t implement a simple CDR scheme. Instead, you must use an analog-to-digital converter (ADC) and a DSP-based clock recovery and re-timer approach, which increases the complexity by orders of magnitude. All this is obvious when you look at the DSP re-timer implementation in the above diagram. Starting with the original digital signal in the ASIC, the traditional path is DAC (in ASIC) to ADC (in re-timer) to DAC (in re-timer) to the optical modulator. Doesn’t this DAC > ADC > DAC path seem a little redundant? (Did I already imply that?) 

By comparison, the optical module using the TSLink Re-Driver works directly with the analog signal coming from the ASIC—no additional ADC, DSP, and DAC overhead steps are required or involved.
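
To make the contrast concrete, here’s a toy model (with made-up numbers, I hasten to add) of the two paths, showing how re-quantizing the waveform in the re-timer’s ADC adds a small amount of error that a purely linear re-driver path does not. This also foreshadows the quantization noise point in the list below.

```python
import numpy as np

# Toy model (illustrative numbers only, not anyone's datasheet) contrasting the
# two paths a PAM-4 waveform can take inside the optical module:
#   re-timer:  analog in -> ADC (quantize) -> DSP -> DAC -> modulator
#   re-driver: analog in -> linear gain -> modulator
rng = np.random.default_rng(0)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=10_000)
analog_in = symbols + rng.normal(0.0, 0.05, symbols.size)  # waveform arriving from the ASIC

def adc_quantize(x, bits=6, full_scale=4.0):
    """Uniform quantizer standing in for the re-timer's ADC."""
    step = 2 * full_scale / (2 ** bits)
    return np.clip(np.round(x / step) * step, -full_scale, full_scale)

retimer_out = adc_quantize(analog_in)  # DSP and DAC assumed ideal; only ADC error is modeled
redriver_out = 1.0 * analog_in         # linear path: gain only, nothing is re-decided

print("RMS error added by the re-timer path :", np.std(retimer_out - analog_in))
print("RMS error added by the re-driver path:", np.std(redriver_out - analog_in))  # ~0
```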

The actual way TSLink performs its magic is beyond my ability to describe (at least, to describe correctly—I could easily make things up, but that wouldn’t benefit either of us). What I can do is summarize the advantages of TSLink-based optical modules as follows:

  • Power: TSLink re-drivers consume at least 50% less power than DSP-based re-timers. Why? Because TSLink re-drivers do not require high-speed time-interleaved ADCs, digital FFE filters, or high-speed DACs.
  • Quantization Noise: TSLink re-drivers are inherently linear and do not make decisions. Therefore, they do not add quantization noise to the signal. By comparison, quantization noise is added to the signal by the ADCs in the DSP-based re-timers. This quantization noise can result in higher bit error rates in DSP-based re-timers compared to TSLink re-drivers.
  • Latency: Due to the continuous-time nature of the signal path, the latency of a TSLink re-driver is in the tens of picoseconds. In contrast, DSP-based re-timers utilize discrete-time ADCs, DSP filters, de-serializers, serializers, and DACs. The latencies they introduce are in the tens of nanoseconds—three orders of magnitude higher than those of TSLink re-drivers.
  • Crosstalk: Channel-to-channel crosstalk can be calibrated and cancelled by TSLink link training at the transmitter. The same approach could also be employed by DSP-based re-timers, but it is not being done today.
  • Link Training: TSLink utilizes impulse response link characterization to fully characterize channel impairments such as ISI (inter-symbol interference) and reflection. By adjusting the FFE taps of the transmitter, the ISI and reflections of the channel are removed (see the sketch following this list for a feel of how this sort of thing works).
  • Size: Because they dispense with building blocks like data converters and digital equalizers, TSLink re-drivers are at least 50% smaller than DSP-based re-timers.
  • Protocol Agnostic: DSP-based re-timers generally participate in protocol negotiation and need to support the data rates required by the protocol. By comparison, TSLink re-drivers are transparent devices and do not participate in protocol negotiation.
  • Cost: TSLink re-drivers are small CMOS devices fabricated in a planar (non-FinFET) process on 12-inch (300 mm) wafers. As a result, TSLink re-drivers cost more than 50% less than DSP-based re-timers, which need to be implemented in advanced FinFET nodes.
  • Assembly Options: Due to their small die size, TSLink re-drivers can be placed very close to (or even on top of) the photonic devices and wire-bonded or bumped to them, all of which makes them a much more flexible choice than large DSP-based re-timers.
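
For those who like to see the flavor of this sort of thing in code, the following is a generic, textbook-style zero-forcing illustration of how a measured channel impulse response might be turned into transmit FFE tap values. To be clear, this is my own simplified sketch, not a description of TeraSignal’s actual link-training algorithm.

```python
import numpy as np

# Generic illustration of impulse-response-based link training: solve for the
# transmit FFE tap vector whose convolution with the measured channel response
# approximates a single clean spike, thereby suppressing the ISI and reflection
# terms. (A simplified, zero-forcing-style sketch for illustration purposes.)
channel = np.array([0.05, 1.0, 0.35, 0.12, 0.20])  # made-up response: main cursor, ISI, and a late reflection

def zero_forcing_taps(h, n_taps=5, delay=2):
    """Least-squares solve conv(taps, h) ~= unit impulse delayed by `delay` samples."""
    n_out = n_taps + len(h) - 1
    conv_matrix = np.zeros((n_out, n_taps))
    for k in range(n_taps):
        conv_matrix[k:k + len(h), k] = h  # column k is the channel shifted by k samples
    target = np.zeros(n_out)
    target[delay] = 1.0  # we want one clean spike; everything else is ISI to be nulled
    taps, *_ = np.linalg.lstsq(conv_matrix, target, rcond=None)
    return taps

taps = zero_forcing_taps(channel)
print(np.round(np.convolve(taps, channel), 3))  # residual ISI terms shrink toward zero
```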

Well, color me impressed. What’s not to love? If you want to learn more, feel free to reach out to the folks at TeraSignal, who will be happy to regale you with more nitty-gritty details than you’ll know what to do with. In the meantime, as always, I welcome your comments (especially the nice ones) and questions (especially the easy ones).
