“You can never be too rich or too thin.” – Wallis Simpson
In keeping with this week’s theme of glorified wires, I spoke with a man who designs them for a living. Naturally, he doesn’t see his job that way. To be honest, neither do I, now that we’ve talked about everything that goes into it. Even wires are becoming more interesting.
The gentleman in question is Nathan Tracy, Technologist and Manager of Industry Standards at TE Connectivity. TE is a ginormous company headquartered in Switzerland, although Tracy and several hundred of his colleagues work out of Pennsylvania. Electronics greybeards might remember AMP Incorporated and AMP connectors. That firm got acquired, split, and renamed over the past dozen years before becoming TE.
Nathan’s also the president of something called OIF, which used to stand for Optical Internetworking Forum but now just stands for… OIF. More rebranding, I guess. As the (old) name might suggest, OIF is into high-end optical-fiber networks for big iron. He and his group push the limits of what electrons and photons can do, all for the betterment of the cloud and those of us who depend on it. While semiconductor designers are building faster server chips, the members of Team OIF are trying to make those boxes talk to each other as fast as they can. It’s trickier than I thought.
On one hand, you’ve got laws of physics that limit just how fast and how far your photons can go. (Hint: pretty fast.) On the other hand, you’ve got all the quotidian demands of a commercial enterprise that needs to sell products at the end of the day. It’s great to design a wicked fast optical interconnect, but if nobody buys it, it’s just a fun academic exercise.
Much of the group’s focus is inside the datacenter, where most connections are still electrical, not optical. OIF works to connect boards and server racks to each other, as well as to citywide networks and transcontinental cables. Within the rack, 10Gbps electrical connections used to predominate, but now it’s more often 25 or 50Gbps. There’s growing demand for 100Gbps as well, typically carried over differential pairs.
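To put those lane rates in perspective, here’s a back-of-the-envelope sketch in Python of aggregate faceplate bandwidth. The 32-port, four-lane configuration is an illustrative assumption based on a common 1U switch layout, not a figure from Tracy:

```python
# Back-of-the-envelope: aggregate one-direction bandwidth across a
# hypothetical 1U switch faceplate. Port and lane counts are
# illustrative assumptions, not TE or OIF figures.

def faceplate_bandwidth_gbps(ports: int, lanes: int, gbps_per_lane: float) -> float:
    """Total bandwidth = ports x lanes per port x rate per lane."""
    return ports * lanes * gbps_per_lane

# A typical 1U faceplate fits roughly 32 pluggable modules,
# each carrying four electrical lanes.
for rate in (10, 25, 50, 100):
    total = faceplate_bandwidth_gbps(ports=32, lanes=4, gbps_per_lane=rate)
    print(f"{rate:>3}Gbps lanes -> {total / 1000:4.1f}Tbps per faceplate")
```

Even at yesterday’s 10Gbps lane rate, that’s more than a terabit per second squeezing through one faceplate; at 100Gbps it’s nearly thirteen.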
Networks can also leave the building. “A good datacenter network promotes agility,” he says, letting servers in separate locations work as one cluster even though they’re not physically adjacent. A virtual backplane spanning several regional sites can function like one big building. But that calls for extreme bandwidth and low latency. Oh, and weather resistance.
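How low can that latency go? The hard floor is propagation delay in glass. A minimal sketch, assuming a typical fiber refractive index of about 1.47 and a few made-up site distances:

```python
# Propagation-delay floor for a "virtual backplane" spanning sites.
# Refractive index ~1.47 is typical for silica fiber; the distances
# below are illustrative, not from the article.

C_KM_PER_S = 299_792.458       # speed of light in vacuum, km/s
FIBER_INDEX = 1.47             # light in glass travels at c / n

def one_way_delay_us(path_km: float) -> float:
    """One-way propagation delay in microseconds over a fiber path."""
    return path_km / (C_KM_PER_S / FIBER_INDEX) * 1e6

for km in (0.1, 10, 80):       # within a building, metro, regional
    print(f"{km:>5} km -> {one_way_delay_us(km):8.1f} us one way")
```

That works out to roughly 5 microseconds per kilometer, which no amount of engineering can buy back. Everything else in the path has to be miserly with its share of the budget.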
As with most things, the demand for performance is infinite. Customers have no self-imposed limits; they never say, “stop, that’s fast enough.” The trick is to make that performance cost-effective. “Nobody wants next-generation technology because it’s cool. Only if it provides a better return for them.” And therein lies the challenge. TE has to be careful proposing new schemes that might be plenty fast but won’t be adopted because of practical or cost considerations.
That sets up a good-natured conflict between OIF as a standard-setting body and TE Connectivity as a commercial enterprise. OIF – whose members represent about 100 different companies making computers, semiconductors, connectors, optical fiber, services, and more – is looking to advance its members’ interests. TE wants to help in that effort, to push things along where it sees fit, and then to get a piece of the business when everything is settled.
Sometimes TE proposes its own technology for a new standard, and sometimes it sits back and accepts someone else’s alternative. Interconnections need to be widely adopted to be useful. There’s no point ramming through something that might be technically superior but unpopular and commercially unsuccessful.
Then there’s OBE (overtaken by events). OIF and its members may hammer out the best technology standard going forward, only to have an outside company unilaterally invent something that supersedes the standard. That’s the nature of deliberative bodies, especially those staffed by volunteers who meet in person only about four times per year. You can’t predict disruptive technology. Paradigm shift happens.
What does OIF do in those cases? “It depends,” says Tracy. “You don’t want to abandon the work you’ve done, because maybe it’ll get picked up later” for another project.
Not all of TE’s work is electrical or optical. There’s some tricky mechanical engineering involved, because we’re combining cutting-edge semiconductors and interconnections with decades-old metal chassis, board spacing, form factors, and cooling physics. Like a bullet train running on 19th-century railroad tracks, it’s half old, half new.
Servers still use 19-inch racks, but the data rates, component density, power consumption, and heat dissipation are all much higher. PCB traces are being replaced by twinaxial cabling, even for short distances. Racks dissipate “crazy high power,” he says, and that causes big thermal problems. Yesterday’s 3W transceivers are being replaced by ones that consume 20W or more. “They’re transmitting heat as much as data.”
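A little arithmetic shows why. Assuming the same illustrative 32-module faceplate as above (only the 3W and 20W per-module figures come from Tracy):

```python
# Why 20W modules hurt: total faceplate heat load, old vs. new.
# 32 modules per faceplate is an illustrative assumption; the 3W
# and 20W per-module figures echo the article.

MODULES = 32

for label, watts in (("yesterday's 3W optics", 3), ("today's 20W optics", 20)):
    print(f"{label}: {MODULES * watts}W dissipated at the faceplate")
```

That’s a jump from under 100W to more than 600W of heat concentrated in one slab of sheet metal, before counting the switch silicon behind it.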
Faceplates represent valuable beachfront real estate because that’s where pluggable transceivers go. Density is king, but that kind of packing creates thermal problems. Despite all the holes, there’s minimal airflow, and conducting the heat away presents its own challenges.
One solution is to use heat pipes or chill plates; there’s no room for massive fans like the ones on a big microprocessor or GPU. But where do you conduct the heat to? And chill plates assume a flat, coplanar surface and a solid mechanical fit. Tracy says that TE has developed its own “flexible metal” solution that physically conducts heat away from hot transceivers but also tolerates lumpy, irregular shapes and sizes. It combines conformability with low thermal resistance, he says. If the answers were easy, it wouldn’t be called engineering.
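To see how tight the budget is, consider the usual first-order model, where temperature rise is power times thermal resistance (ΔT = P · θ). A sketch with assumed case-temperature and ambient limits; the numbers are illustrative, not TE’s:

```python
# First-order thermal budget: delta_T = power * thermal_resistance.
# The 70C case limit and 35C inlet ambient are assumed values for
# illustration; only the module wattages echo the article.

T_CASE_MAX_C = 70.0   # assumed max allowed module case temperature
T_AMBIENT_C = 35.0    # assumed faceplate inlet air temperature

def max_thermal_resistance(power_w: float) -> float:
    """Max tolerable case-to-ambient resistance, in degrees C per watt."""
    return (T_CASE_MAX_C - T_AMBIENT_C) / power_w

for watts in (3, 20):
    print(f"{watts:>2}W module -> cooling path must beat "
          f"{max_thermal_resistance(watts):.2f} C/W")
```

Under these assumptions, a 3W module can tolerate nearly 12 °C/W from case to ambient, while a 20W module gets less than 2 °C/W, which is why a conformable, low-resistance conduction path matters so much.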
We’ve come a long way from just hooking up two ends of a wire.