Lynn Conway, 1938-2024: The Computer Architect Who Helped to Revolutionize Digital IC Design

Lynn Conway is best known for her collaboration with Carver Mead that produced the Mead-Conway design methodology for VLSI chip design, which triggered a renaissance in IC development and spurred the growth of commercial EDA. While working on IBM’s Advanced Computer System (ACS) project in the 1960s, Conway conceived of Dynamic Instruction Scheduling (DIS), one of the fundamental innovations needed for out-of-order (OOO) instruction execution in superscalar processors and now implemented in virtually all high-end microprocessors. She joined the University of Michigan’s College of Engineering faculty in 1985 as a Professor of Electrical Engineering and Computer Science (EECS) and Associate Dean of Engineering, and she retired in 1998 with the title of Professor Emerita.

Conway received many professional honors for her work, including the Electronics Magazine Award for Achievement along with Carver Mead (1981), the Harold Pender Award from the Moore School at the University of Pennsylvania (1984), the John Price Wetherill Medal from the Franklin Institute along with Carver Mead (1985), and the IEEE Computer Society’s Computer Pioneer Award (2009). She was elected to the National Academy of Engineering (1989) and honored as a Fellow by the Computer History Museum (2014). She was awarded six honorary doctorates: Trinity College (1998), the Illinois Institute of Technology (2014), the University of Victoria (2016), the University of Michigan Ann Arbor (2018), Princeton University (2023), and Syracuse University (2024). Conway transitioned in 1968 and became an outspoken advocate for transgender people after publicly coming out in 1999. Conway died on June 9, 2024. Her technical achievements are inextricably intertwined with her life’s journey as a transgender woman.

Professor Emerita Lynn Conway at the University of Michigan. Image credit: Lynn Conway, University of Michigan

Lynn Conway was born in 1938 and quickly developed a deep interest in astronomy, science, math, and building things, an interest nurtured by her father, a chemical engineer. Conway entered MIT as a freshman in 1955 and relished her classes. At the same time, she pursued her love of extreme sports: rock climbing, motorcycling, and sailing. She was also dealing with gender dysphoria, which had begun when she was 3 or 4 years old. She felt increasingly freakish and ugly in a male body and longed for the world to see and accept her as a woman. A disastrous meeting with a physician, which carried an implied threat of institutionalization in an insane asylum, frightened Conway enough that she left MIT in 1959 before graduating.

Conway spent the next two years roaming the country and working as a hearing aid repair technician while continuing to hike, ride motorcycles, hunt deer, and even do some technical rock climbing. However, the repair technician’s job wasn’t engaging. By 1961, Conway had recovered from her MIT experience enough to enroll at Columbia University, where she finished her BSEE in a year. While there, she took classes in the nascent field of digital computing, including computer organization and advanced programming classes with Dr. Herbert Schorr, an Adjunct Professor on loan from IBM Research.

 

Lynn Conway jumping her dirt bike in 1997. Image credit: Lynn Conway

She did an independent study project under Schorr in 1963, and he was impressed enough with her work to invite her to join him at IBM’s Research Laboratory in Yorktown Heights for a new research project: the Advanced Computer System (ACS). Meanwhile, during 1963, Conway had grown close to a female friend, whom Conway calls “Sue,” and fathered a child with her. Upon discovering Sue’s pregnancy, Conway and Sue married in September 1963 as man and wife. Their daughter was born in February 1964. (Conway has been careful to mask the real names of her family members in her histories, to protect them.)

IBM

The new job with the IBM ACS team started in June 1964. IBM’s ACS project was the company’s response to the failure of its Stretch supercomputer to meet its performance objectives. IBM’s CEO, Thomas Watson Jr., was incensed that IBM could not match the performance of the supercomputers developed by Seymour Cray at Control Data Corp. Watson authorized the ACS project with the mandate to “go for broke.” Computer legends on the ACS development team included Gene Amdahl, who later founded a company to make plug-compatible IBM clone mainframe computers that would exploit many ACS concepts, and John Cocke, who would later develop IBM’s pioneering 801 RISC processor. Schorr headed the ACS architecture team. He hired Conway to write an architectural simulator program so that the team could try out various architectural ideas, including memory caches, instruction pre-fetching, branch prediction, and multiple execution pipelines. These are all common features in today’s superscalar processors, but they were experimental in 1964.
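
The ACS simulator itself survives only in fragments, but a toy model can convey the flavor of the what-if experiments it enabled. The following Python sketch — entirely hypothetical, with invented sizes, names, and address trace — models a direct-mapped memory cache and sweeps its capacity to compare hit rates, the kind of question Conway’s simulator let the ACS architects answer before committing anything to hardware.

# Minimal, illustrative sketch of one piece of an architectural simulator:
# a direct-mapped cache model that reports hit rate over an address trace.
# (The real ACS simulator is not publicly available; everything here is a
# hypothetical reconstruction for illustration only.)

class DirectMappedCache:
    def __init__(self, num_lines, line_bytes):
        self.num_lines = num_lines
        self.line_bytes = line_bytes
        self.tags = [None] * num_lines   # one stored tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line_number = address // self.line_bytes
        index = line_number % self.num_lines
        tag = line_number // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag       # fill the line on a miss

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Sweep cache sizes against a toy trace with strong locality -- the kind
# of experiment an architecture team runs before freezing a design.
trace = [addr % 4096 for addr in range(0, 100000, 8)]
for lines in (64, 256, 1024):
    cache = DirectMappedCache(num_lines=lines, line_bytes=32)
    for addr in trace:
        cache.access(addr)
    print(f"{lines:5d} lines: hit rate {cache.hit_rate():.3f}")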

As a result of her work, Conway became deeply familiar with ACS architectural concepts and, with the naiveté of a grad student who doesn’t know that a problem is unsolved, went about finding a solution to a pressing challenge: the need to speed execution by issuing and executing multiple instructions simultaneously. By fall 1965, Conway had the answer: fetch multiple instructions and load them into a queue. Record each instruction’s dependencies as individual bits stored in separate source, destination, and branch vector matrices. Then, every clock cycle, scan those matrices to determine which queued instructions are ready to issue, and assign each ready instruction to an execution unit. An IBM colleague named Brian Randell dubbed the technique “Dynamic Instruction Scheduling.” The concept was so powerful that the architecture team redesigned the ACS processor to incorporate it, which doubled the machine’s performance.
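
The precise encoding of the ACS matrices is beyond the scope of this article, but the core mechanism — queue several decoded instructions, record their register dependencies as bits, and scan every cycle to issue whatever is ready — can be sketched in a few lines of Python. The fragment below is a loose, illustrative reconstruction, not the actual ACS logic; the instruction mix, hazard checks, and two-execution-unit limit are all assumptions. Note how the independent mul issues ahead of the older, stalled add: out-of-order execution in miniature.

# Illustrative sketch of dynamic instruction scheduling: each cycle, scan
# the instruction queue and issue every instruction with no dependence on
# an older, still-waiting instruction. NOT the actual ACS design.
from dataclasses import dataclass

@dataclass
class Instr:
    name: str
    sources: frozenset    # registers read
    dests: frozenset      # registers written
    issued: bool = False

def can_issue(instr, older):
    """Ready when no older, unissued instruction writes one of our sources
    (RAW), writes one of our destinations (WAW), or reads a destination we
    write (WAR)."""
    for prev in older:
        if prev.issued:
            continue
        if (prev.dests & instr.sources or prev.dests & instr.dests
                or prev.sources & instr.dests):
            return False
    return True

queue = [
    Instr("load r1,[a]",   frozenset({"a"}),        frozenset({"r1"})),
    Instr("add  r3,r1,r2", frozenset({"r1", "r2"}), frozenset({"r3"})),
    Instr("mul  r4,r5,r6", frozenset({"r5", "r6"}), frozenset({"r4"})),
]

EXEC_UNITS = 2
cycle = 0
while any(not i.issued for i in queue):
    cycle += 1
    issuing = []
    for pos, instr in enumerate(queue):
        if (not instr.issued and len(issuing) < EXEC_UNITS
                and can_issue(instr, queue[:pos])):
            issuing.append(instr)
    for instr in issuing:    # mark after the scan, so results issued this
        instr.issued = True  # cycle are not visible to younger instructions
    print(f"cycle {cycle}: " + ", ".join(i.name for i in issuing))

# cycle 1: load r1,[a], mul r4,r5,r6   <- mul passes the stalled add
# cycle 2: add  r3,r1,r2
# A real machine also tracks execution latency and result forwarding; here
# "issued in an earlier cycle" simply stands in for "result available."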

In late summer of 1965, IBM moved the ACS team, including Conway and her family, from Yorktown Heights, New York, to Sunnyvale, California. Conway and Sue got an apartment in Palo Alto. Sue became pregnant a second time and gave birth to a second daughter in March 1966. By then the marriage was on the rocks because of the increasing severity of Conway’s gender dysphoria, and the two planned to divorce. Meanwhile, Conway redoubled her hiking and rock climbing to escape the marital problems at home. Then Conway discovered that Dr. Harry Benjamin had developed surgical and hormonal techniques to help transgender people make a physical transition. Conway got Sue’s support, scheduled the transition surgery with a doctor in Mexico, and the die was cast.

When Conway informed the IBM Research team’s management about her decision to transition in 1968, they were supportive. When IBM corporate management got wind of the plan, they fired Conway to avoid the public embarrassment of employing a transgender woman. This action had serious consequences beyond immediate unemployment. Conway’s extended family severed ties upon hearing the news. Conway and Sue eventually went on welfare and became afraid that state social workers would take away their kids after Conway made the transition. As a result, Conway and Sue divorced in December 1968, a month after Conway underwent transition surgery. The transition marked Conway as an unacceptable risk, and she was forbidden from seeing her children, whom she dearly loved. Under threat of a restraining order, she would not see her children again for 14 years.

Memorex

By March 1969, the newly named Lynn Conway had recovered and was ready to find a new job. After several refusals, she became a contract programmer for Computer Applications, Inc. She was next recruited by Greyhound Time-Sharing, but the company went bust. Conway finally found work as a systems programmer at Memorex in September 1969.

During this period, Memorex was just starting to develop mainframe computers. When managers at the company discovered her talents and experience, Memorex moved Conway to a digital design job, then promoted her to computer engineer, and then assigned her to be the architect for the company’s low-end mainframe processor, the Memorex 7100. The 7100 processor was a 16-bit, microprogrammed machine built from TTL SSI and MSI chips and was part of the Memorex MRX 30 System, which was positioned to compete with the IBM System/3 midrange computer.

With Conway as architect, the machine went from blank sheet to prototype in nine months. In 1971, Memorex promoted Conway to Senior Staff Engineer. She bought a condo in Los Gatos, California, a red 1972 Datsun 240Z sports car, and two Siamese cats. Then, Memorex decided it could not compete with IBM, canceled the MRX 30 project, and exited the mainframe business. Conway felt adrift.

Xerox PARC

In late 1972, before Memorex had a chance to lay her off, Conway went looking for a new job. Her headhunter found one at Xerox PARC’s Systems Science Laboratory (SSL). Conway’s first assignment was to develop an OCR/FAX system called the Sierra project. She eventually got the system working, but the implementation technology of the day – TTL chips – meant that the prototype required racks and racks of boards stuffed with chips. The design was too complex and expensive to become a commercial product without being re-implemented in LSI chips, which was beyond the state of the art at the time.

The new SSL lab manager, Bert Sutherland, cancelled the Sierra project but did not terminate Conway. Instead, impressed with her work, he began to mentor her. Sutherland soon introduced Conway to his brother Ivan, the chairman of Caltech’s new Computer Science Department, and to Carver Mead, a Caltech professor who had developed a new way for architects like Conway to design ICs. Mead had been teaching a Caltech class on this method since the early 1970s. His design methodology was far more efficient than laying down polygons, then the state of the art for IC design.

In 1976, Xerox PARC and Caltech formalized a development agreement to explore new ways to design and implement systems in silicon. Ivan Sutherland assigned Mead and two of his students, Jim Rowson and Dave Johannsen, to the project. Bert Sutherland assigned Conway and Doug Fairbairn to the project. Mead and Conway pooled their device physics and computer architecture knowledge while Fairbairn and Rowson created an interactive IC layout system called ICARUS that ran on the Xerox PARC Alto computer systems.

This work led to a structured design methodology based on assembling systems from large, regular blocks. Instead of working with individual transistors or gates, designers could use properly designed functional blocks to implement large, regular architectural structures such as PLAs, registers, ALUs, and entire datapaths. Architects could understand and design with these large structures. If the blocks were designed to abut each other and snap together like Lego blocks, wiring length would be minimized. Even in the mid-1970s, wiring delays were starting to dominate logic delays, and the new design methodology minimized this looming performance bottleneck. These ideas produced working devices in 1976 and 1977. The design methodology was ready to go big.
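
As a rough sketch of the snap-together idea: if every datapath cell is drawn to a common bit pitch, the ports line up and cells can be tiled side by side with no routing channel between them. The little Python model below captures only that one constraint; the cell names and dimensions are invented for illustration, and real layout geometry is far richer.

# Toy model of composition by abutment: cells sharing one bit pitch can
# be placed edge-to-edge, so bus wires pass straight through with no
# routing channel. All names and dimensions here are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    name: str
    bit_pitch: int   # vertical spacing of one bit slice (in lambda)
    width: int       # horizontal extent of the cell (in lambda)

def abut(cells, bits):
    """Compose a datapath by abutment; legal only if all pitches match."""
    pitches = {cell.bit_pitch for cell in cells}
    if len(pitches) != 1:
        raise ValueError(f"pitch mismatch, cannot abut: {pitches}")
    total_width = sum(cell.width for cell in cells)
    height = bits * pitches.pop()
    return total_width, height

datapath = [
    Cell("register", bit_pitch=40, width=60),
    Cell("shifter",  bit_pitch=40, width=50),
    Cell("alu",      bit_pitch=40, width=120),
]
width, height = abut(datapath, bits=16)
print(f"16-bit datapath by abutment: {width} x {height} lambda, no routing")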

Mead-Conway

In June 1977, Conway proposed writing and self-publishing a book about the design methodology using Xerox PARC’s advanced page-layout and publishing software. Mead enthusiastically approved. Conway wrote large sections of the book using Mead’s ideas and served as the book’s architect and central coordinator. Mead added information about NMOS fabrication and IC mask-making. Fairbairn and Rowson wrote a tutorial on using ICARUS. Johannsen developed design examples. By early 1978, the first five chapters were ready for spring semester classes taught by Bob Sproull at CMU and Fred Rosenberger at Washington University. In a last-minute decision, Conway titled the book Introduction to VLSI Systems and published it using Xerox PARC’s laser printers.

Bert Sutherland, who had joined the EECS Department’s advisory committee at MIT, then challenged Conway to teach an MIT class based on the book during the fall semester. She accepted the challenge, terrified though she was. That fear drove Conway to develop extensive class notes so that she would know exactly what she’d say each day in class. During the spring of 1978, the book’s authoring team added four more chapters, and the book was ready for the fall semester class at MIT.

Conway decided to turn the class into a hands-on lab. During the semester’s first half, she taught from the book. During the second half, class members designed chips and submitted them to Conway’s nascent Multi-Project Chip (MPC) program, which combined multiple designs onto one mask set to reduce fabrication costs. The class started with 32 students and developed 19 chip designs, which Conway sent to Xerox PARC for mask making. The IC Processing Lab at HP Labs in Palo Alto fabricated the wafers, and the Xerox PARC team diced the wafers, packaged the die, and sent them back to MIT. Despite primitive IC design tools and a brand-new design methodology, some of the students’ project chips worked. The Xerox PARC team knew they’d created something quite revolutionary.
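
The economics behind MPC are easy to sketch: mask-making and fabrication carry a large fixed cost, so tiling many small projects onto one die spreads that cost across every participant. The Python fragment below packs hypothetical student projects onto a shared die using a naive shelf-packing strategy; the project names, dimensions, and packing approach are all invented for illustration.

# Toy sketch of a multi-project chip: tile many small designs onto one
# die so they share a single mask set. Real MPC assembly involved far
# more than rectangle packing; this illustrates only the cost-sharing idea.
def shelf_pack(projects, die_w, die_h):
    """Place (name, w, h) rectangles left-to-right in rows ("shelves")."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for name, w, h in sorted(projects, key=lambda p: -p[2]):  # tallest first
        if x + w > die_w:                  # row full: start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        if y + h > die_h:
            raise ValueError(f"die full, cannot place {name}")
        placements.append((name, x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

projects = [("counter", 900, 600), ("uart", 1200, 800),
            ("mult8", 1500, 1400), ("fifo", 700, 500),
            ("lfsr", 400, 300)]               # (name, width, height) in um
for name, x, y in shelf_pack(projects, die_w=3000, die_h=3000):
    print(f"{name:8s} placed at ({x:4d}, {y:4d})")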

Word of Conway’s MIT class spread, and professors started contacting Conway. They too wanted to teach this new IC design class. Fairbairn and Dick Lyon ran an intensive short course for Xerox PARC researchers in spring 1979 and videotaped the classes. Together with Conway’s extensive bound notes detailing each day’s classes and the textbook, Xerox PARC had developed a juggernaut. Professors at 12 universities taught classes, and the 1979 MPC wafers included 82 design projects from 124 designers, spread over two wafer sets. In 1980, the MPC project implemented 171 design projects from 15 different universities.

Addison-Wesley published Introduction to VLSI Systems as a textbook in 1980. The book became a bestseller, selling more than 70,000 copies, and its wide availability, along with the Mead-Conway design methodology, set off an explosion of activity in the semiconductor world. Within two years of the book’s original publication, more than 120 universities around the world taught courses in the Mead-Conway design methodology. These universities churned out graduates who knew how to design chips without ever having worked at a chipmaker. Many of those graduates joined semiconductor companies as IC designers; others started or joined commercial EDA companies that translated Mead-Conway design concepts into EDA tools. In 1981, the MPC program was formalized as the DARPA-funded MOSIS, which continues to offer multiproject wafer services to universities, government agencies, research institutes, and businesses based on Lynn Conway’s original concept.

DARPA

Mead enjoyed the accolades and hoopla associated with the Mead-Conway methodology and he reveled in being declared one of Silicon Valley’s founding fathers. Conway, who always considered herself shy, felt intimidated and pulled back from the limelight. When the Director of DARPA asked her to lead a team planning a new program called the Strategic Computing Initiative, she left Xerox PARC and took the assignment. Conway’s team produced a plan that triggered $100 million in funding for computing research.

Then, Conway got an offer from the University of Michigan’s Dean of Engineering to become a faculty member and Associate Dean. Having tired of Silicon Valley, Conway accepted, moved to Michigan, and became a professor in 1985. She retired and became Professor Emerita in 1998.

A year later, she learned that Professor Mark Smotherman at Clemson University was researching the origins of superscalar computing and gleaning what he could from the few remaining IBM documents about the ACS project. Conway, a meticulous saver of project documentation who had kept every piece of paper from the ACS project, contacted Smotherman and began feeding him information. Smotherman initially did not understand why Lynn Conway had this material, since the documents bore her former male name rather than “Lynn Conway.” (The online scans of those documents have since been retyped and now carry Conway’s name.)

Eventually, Conway explained her full story to Smotherman, and she decided to get ahead of the news by going public with her transgender story. At that point, Conway became an active transgender advocate and began maintaining a website for other people dealing with gender dysphoria. That website now contains hundreds of pages mixing Conway’s historical documents with detailed information on a variety of transgender topics.

Conway had started dating men in Silicon Valley. She continued dating in Michigan and met Charles Rodgers in 1987. The two became a couple and married on Michigan’s Mackinac Island in 2002. Rodgers, a professional engineer, shared Conway’s love of outdoor activities and her website chronicles many of their trips. Conway died at the age of 86.

In 2020, IBM formally apologized for firing Conway back in 1968 at a ceremony titled “Tech Trailblazer and Transgender Pioneer Lynn Conway in conversation with Diane Gherson.” At the time, Gherson was IBM’s Senior Vice President of Human Resources. Conway attended the event and was moved by the apparent sincerity of the apology. A bit earlier, IBM presented Conway with a Lifetime Achievement Award and acknowledged that it had benefited from Conway’s innovations for years after the company fired her.

It’s not clear whether IBM’s actions back in 1968 helped or hurt Conway professionally. They certainly hurt her personally, cruelly and on multiple levels. Conway chose to believe that IBM’s actions strengthened her. Still, one cannot help but wonder: with all of Conway’s achievements, and they are legion, how much further might she have gone had she not been forced to expend so much mental and emotional energy just trying to be herself?

References

Lynn Conway’s University of Michigan Website

Lynn Conway’s Retrospective Website

Lynn Conway, “Reminiscences of the VLSI Revolution: How a series of failures triggered a paradigm shift in digital design,” IEEE Solid-State Circuits Magazine, Fall 2012, pp. 8-31

Carver Mead and Lynn Conway, Introduction to VLSI Systems, Addison-Wesley, 1980

Jeremy Alicandri, “IBM Apologizes For Firing Computer Pioneer For Being Transgender…52 Years Later,” Forbes.com, November 18, 2020

Acknowledgement

Many thanks to Doug Fairbairn, Staff Director of the Semiconductor Special Interest Group at the Computer History Museum and Lynn Conway’s colleague at Xerox PARC, for adding color and detail to this article.

11 thoughts on “Lynn Conway, 1938-2024: The Computer Architect Who Helped to Revolutionize Digital IC Design”

  1. Steve, thanks as always for your great articles on pioneers. It’s sad we are losing some of the great computer architects. Early architects had very little history to drive them. They actually had to be creative, not only in math, but in hardware technology. Computer architecture college classes are teaching Arduino and forgetting the binary, gates, packaging, temperature, and even cost trade-offs. Some day soon computer architecture will be selecting the best product from Shopify or Amazon. IC design … what’s that?

    Ray Holt
    https://FirstMicroprocessor.com

  2. Thanks Ray. It is sad that we are losing the earliest architects, but that’s been true for decades. What we celebrate is that we had them with us for a time and that they were able to contribute so much before we lost them. I seem to have written a lot of tributes to lost pioneers lately. Not looking forward to the next one and I hope it’s a while before I need to write another.

  3. Multiple instruction fetches were possible using interleaved memory; I think ACS had 4-way interleave.
    However, today’s systems have both instruction and data caches, cache coherence, DDR, etc.
    So ACS was redesigned and performance doubled. Using which benchmark? (Probably matrix inversion.)
    And by the way, I don’t think IBM ever used instruction caches in their mainframes.
    Oh well, “complexity is what sells” according to Professor Dijkstra.
    And nobody knows: RISC was reduced from what?
    In the case of John Cocke, the 801 was defined according to what the early compiler could compile.
    (decimal, floating point, and vfl)

    1. I mean to say that (decimal, floating point, and vfl) were not included.
      That essentially meant that it did add, subtract, compare, load, store, branch. Same as the older 7044.

      The 7044 had index registers that were offsets to operands.

      System 360 had 16 General Purpose registers, but most were used for addressing data memory.

      Multi-processing according to Amdahl’s Law does not double performance for a 2 way, rather increase as the square root of 2.

      So now there are processors with a dozen cores that probably have the same performance as a 2 core.
      That is another case where complexity sells.

      I say Ray Holt is right. We will soon be able to buy a chip with true dual port rams, GPIO drivers and an assortment of attachable devices.

      And then buy or design the logic to move data in and out of embedded memory and the attached devices.

      Now that the CSharp compiler supports conditional assignment of Boolean variables, it will be as easy as it will ever be to design and build your own widget without a custom ASIC.

      1. “I mean to say that (decimal, floating point, and vfl) were not included. That essentially meant that it did add, subtract, compare, load, store, branch. Same as the older 7044.
        The 7044 had index registers that were offsets to operands. System 360 had 16 General Purpose registers, but most were used for addressing data memory.”

        — Karl, this random recitation of facts seems disconnected from anything discussed in the article

        “Multi-processing according to Amdahl’s Law does not double performance for a 2 way, rather increase as the square root of 2. So now there are processors with a dozen cores that probably have the same performance as a 2 core.
        That is another case where complexity sells.”

        — Why are you bringing multiprocessing into the mix here, Karl? Out-of-order execution is not multiprocessing.

        “I say Ray Holt is right. We will soon be able to buy a chip with true dual port rams, GPIO drivers and an assortment of attachable devices.”

        — We have these devices now, Karl. They’re called FPGAs. They now have little dual-port memories sprinkled all over the place inside of the FPGA fabric. It’s been this way for a quarter of a century.

        “And then buy or design the logic to move data in and out of embedded memory and the attached devices. Now that the CSharp compiler supports conditional assignment of Boolean variables, it will be as easy as it will ever be to design and build your own widget without a custom ASIC.”

        — Again. FPGAs.

          1. Right on, Steve!
            However, the current soft processors are just too slow and embedded processors are at the limit of clock frequency.

            By the way, Stratix FPGAs have impressive embedded memory blocks with true dual port mode.
            Meaning that it can write the result of the previous cycle and read the next two operands simultaneously.

            And another block can access ALU opcodes concurrently.

            No more load, add, store, fetch, branch.

            The Roslyn Compiler API (the CSharp compiler) is open source.
            There is a syntax tree walker and a stack-based expression evaluator.

            So CISC and RISC RIP.

  4. Well, this article is a tribute to Lynn Conway and is not really a discussion of the ACS architecture, Karl Stevens. Besides, I’m not the expert on this scarcely documented and groundbreaking computer. For a more detailed discussion by an expert who researched the subject for years, I suggest you check out Professor Mark Smotherman’s analysis: https://people.computing.clemson.edu/~mark/acs_organization.html. You’ll find a detailed discussion of multiple-instruction fetch, out-of-order execution, and (relevant to your claims) the 2-way, set-associative instruction/data cache designed into the machine. IBM (as usual) had its own terminology for memory cache, and the IBM 360 Model 85 appears to have had one. These are all covered in Smotherman’s web documentation. Feel free to argue with him.

    The 801 project was tasked with building a 12 MIPS machine for a telephone switch at a time when IBM’s best mainframe cranked at 25% of that speed. The result was a pipelined machine with hardware instruction decode and a large register file to speed the execution of the machine’s simplified instruction set. These are all bedrock attributes of what we call a RISC machine.

    You and I are in agreement that RISC is a poor name for this type of machine, but I think the horse has left the barn on this one. I do offer one counterexample. The Motorola 68000 family comprised wonderful 32-bit microprocessors with a multi-level microcode scheme for instruction decode. It’s as CISC as you can get, which was necessitated by the semiconductor process technology of the late 1970s. Much later, Motorola introduced the ColdFire family, which executed a reduced set of 68000 instructions using hardware decode and pipelining. Over time, as semiconductor technology progressed, Motorola was able to implement more and more of the extended 68000 instruction set using this architecture. At some point, the number of 68000 instructions was no longer “reduced,” but ColdFire remains a RISC machine.

        1. I could really get into designing using the FPGA memory blocks.
          Pipelining has reached the clock speed limit and HLS may never be truly usable.
          Using HDL for design entry was and still is a colossal blunder.
          There is no real debugging capability in the tool chain other than waveforms.

          I have a project designed and debugged using Visual Studio and CSharp. I put it up open source on GIT. There was absolutely no interest.

          By the way, I dubbed it CEngine since the input is C statements and expressions. Expression evaluation is extremely fast because of parallelism instead of pipelining.

          Yep, it is built on an FPGA and uses embedded true dual port memory blocks.


