
Where Do Programming Languages Go to Die?

Assembly Programming has Become a Lost Art


If you’re a parent with small children, you’ve probably taught them to “tie” their shoes by closing the Velcro straps. Someday, when they get older, maybe they’ll also learn how to tie shoelaces. You know, like their ancestors once did. 

The question is, will they? Is learning to tie shoelaces a useful skill or just a remnant of an old and outdated technology that’s no longer relevant? 

What about telling time on a clock with hands? Digital clocks are the norm, so much so that analog clock faces (a retronym) are generally just decorations, an optional look you can download to your Apple Watch for special occasions. Is reading a clock face a useful skill or a pointless carryover? 

A friend mentioned that his son had started taking C++ programming classes in college. The son’s reaction was basically, “This isn’t programming! Python is programming. These are just primitive runes on ancient scrolls!” 

He’s got a point. Programming languages have progressed an awful lot over the years. That’s a good thing. But with each new generation of languages, we leave an old generation behind. Is that a bad thing? Are we losing something – losing touch with what a computer really is and how it works? 

Another friend made the opposite point. He’d started out learning high-level languages first, and hardware design later. He was utterly mystified that CPU chips couldn’t execute Python, HTML, Fortran, or any other language natively. How did this code get into that machine? There was no correlation that he could see. Some large and magic step was evidently missing. It wasn’t until he studied assembly language programming that the aha! moment came, and he went on to become a brilliant hardware/software engineer after that. 

For him, learning that assembly language even existed, never mind using it effectively, was the bridge between understanding hardware and understanding software. Ultimately, every programming language boils down to assembly. (Which boils down to 1s and 0s, which are really electrical impulses, which represent electrons changing orbits, ad infinitum.)  

Do we miss something – are we skipping an important step – by not learning assembly-language programming? Or is that just a pointless waste of time and a relic of an earlier and less efficient era? Does assembly belong in the same category as Greek, Latin, and proto-Indo-European languages, or is it foundational, like learning addition and subtraction before tackling simultaneous equations? 

There’s no question that programming in assembly language enables you to build faster and smaller programs than a compiler can. That doesn’t mean you definitely will produce tighter code; only that you can. Compilers are inherently less efficient with hardware resources, but more efficient with your time. Good C compilers can produce code that looks almost like hand-written assembly, but they can also produce completely inscrutable function blocks. Compiler companies generally devote more time to their IDEs and other customer-facing features than they do to their code generators. Binary efficiency is not high on the list of must-have features. 
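To make that concrete, here is a small, hedged illustration rather than the output of any real build: the C function (the name saturating_add is mine) compiles as written, but the RISC-V listing in the comment is only one plausible rendering, and actual output varies with compiler, target, and flags.

    /* saturating_add.c -- the C is real and compilable; the assembly in the
     * comment below is a hypothetical example of optimized compiler output. */
    #include <stdint.h>

    uint8_t saturating_add(uint8_t a, uint8_t b)
    {
        uint16_t sum = (uint16_t)a + b;            /* widen so the carry is visible */
        return (sum > 255u) ? 255u : (uint8_t)sum; /* clamp instead of wrapping     */
    }

    /* One plausible RISC-V rendering at a high optimization level:
     *
     *   saturating_add:
     *       add   a0, a0, a1      # the 9-bit sum fits comfortably in a register
     *       li    a5, 255
     *       bleu  a0, a5, 1f      # already <= 255? then we're done
     *       mv    a0, a5          # otherwise clamp to 255
     *   1:  ret
     *
     * Five instructions -- roughly what a careful assembly programmer would write
     * by hand. The same source at -O0, or from a less careful code generator, can
     * balloon into a dozen loads, stores, and extension instructions. */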

As programmer extraordinaire Jack Ganssle points out, “In real-time systems interrupts reign, but their very real costs are disguised by simple C structures that hide the stacking and unstacking of the system’s context.”
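His point is easy to miss precisely because the source stays so clean. Here is a hedged sketch: the names timer_isr and tick_count are mine, the interrupt attribute is GCC-style and target-dependent, and the list of hidden work is a paraphrase rather than any specific compiler's actual prologue.

    #include <stdint.h>

    volatile uint32_t tick_count;       /* shared with the main loop */

    __attribute__((interrupt))          /* one innocent-looking decoration...    */
    void timer_isr(void)
    {
        tick_count++;                   /* ...and one innocent-looking statement */
    }

    /* What the compiler quietly wraps around that increment:
     *   - grow the stack and save every register the handler might clobber
     *     (easily a dozen or more stores on many parts),
     *   - do the load / add / store for tick_count,
     *   - restore the saved registers and shrink the stack,
     *   - return with the special interrupt-return instruction, not a plain ret.
     * None of that stacking and unstacking appears in the C source, which is
     * exactly the point: the cost is real but invisible at this level. */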

Because compilers hide so much of the underlying hardware structure – and deliberately so – programmers can’t see the results of their actions. One line of C++ might translate into a dozen lines of assembly, or a few hundred. There’s no way to know, and most programmers won’t care. Sadly, many won’t even understand the difference. As you climb to higher levels of abstraction, the disconnect grows wider. Who can guess what one line of Ada will translate into at the machine level? 

Fortunately, we hide this inefficiency with better hardware. Processors get faster, so we ask them to do more, that we may do less. Like an ermine-robed nobleman assigning work to his vassals, we command, “Don’t bother me with details. Just get it done!” The royal “we” becomes increasingly disconnected from the real work going on, in the hope that our minds may focus on more pressing and important issues. 

In a historical fiction novel, the peons would rise up and overthrow their oppressive chief. I don’t think the CPUs are going to revolt anytime soon. Nor do I think we should abandon all high-level programming languages and revert to Mennonite tools or artisanal assembly methods. It’s just sad, and possibly counterproductive, that we’re stretching such a gap between what a computer does and what we ask it to do. That’s a recipe for frustration, inefficiency, code bloat, overheating, and very challenging debugging sessions. 

Instead, I think all programmers, regardless of product focus or chosen language(s) should be trained in the assembly language of at least one processor. It doesn’t even have to be the processor they’re using – any one will do. A good understanding of RISC-V machine language (to pick one example) will make the point that printf() is not an intrinsic function on any machine. Just to be safe, Ada programmers should have to learn two different machines. And three for Java. 
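To pick on printf() for a moment: on a hosted system it is ordinary library code that formats characters and then hands them to the kernel; on a bare-metal board it usually bottoms out in a loop poking a UART register. Here is a minimal, POSIX-flavored sketch of that bottom layer (print_string is a made-up helper; write() and strlen() are the real calls):

    #include <string.h>     /* strlen() */
    #include <unistd.h>     /* write()  */

    static void print_string(const char *s)
    {
        /* The "magic" underneath: a system call, which is itself only a couple
         * of machine instructions (load a syscall number, then trap). */
        write(STDOUT_FILENO, s, strlen(s));
    }

    int main(void)
    {
        /* printf("hello, world\n") does its formatting first, then something
         * morally equivalent to this: */
        print_string("hello, world\n");
        return 0;
    }

No processor “knows” printf(). Somebody wrote every layer underneath it, one instruction at a time.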

14 thoughts on “Where Do Programming Languages Go to Die?”

    1. VHDL is a hardware description, not an executable. A simulator can produce waveforms, but synthesis must be done to generate the bit stream to “program” the Field-Programmable Gate Array (FPGA). A simulator identifies which operations will occur simultaneously after synthesis.

      Parallel execution only occurs AFTER synthesis and the FPGA has been “programmed”.

      And nowhere in this process is an assembler used. Assembler implies that there is a traditional CPU involved, and it can help in understanding how computers work.

      Logic chips use Boolean logic and states that are combinations of storage elements (flip-flops) holding true/false values. Inputs and register values or FSMs can also determine states.

      So, please fill in the blanks as to how knowing assembly language has anything to do with chip/hardware design.

  1. Really interesting, this view about programming languages.

    My first was assembler for mainframes, back in 1976;
    after that I learned algorithms and logic schemes.
    The 2nd programming language was Fortran,
    with Cobol being the 3rd language, all of them in high school.

    Since then I migrated to other computer systems and languages,
    like assembler for the RSX11M operating system from DEC, he he.
    Since the PC era, I moved on to C and UNIX, Visual Basic and Windows,
    to ABAP and SQL in SAP, and now VHDL with FPGAs. Long way, isn’t it?

    However, once a programmer always a developer, right?
    Not quite, because every human evolves over time and
    this way we gain a lot of international experience,
    which inevitably leads to a more complex view of structures.

    An evolved language like Java or Python is for creating applications,
    while C++ is used to design nice tools within the kernel of an operating system.
    However, embedded systems require an extension to sensors and actuators,
    while VHDL or Verilog is used to design the underlying hardware, e.g. the processor.

    So, we have here four (!) different levels of complexity within a computer:
    1st, VHDL to build the processor; 2nd, assembler to program the processor;
    then C to write the operating system; and Java to write applications.
    But all of them are designed by humans, who have a totally different structure.

    The architecture of the universal computer, as a processor with memory,
    corresponds roughly to the usual structure of a human brain: consciousness,
    which represents the foreground on the screen, which we can follow consciously,
    and the subconscious, which is the background, where the following rule applies:

    The consciousness takes control of all activities and decides at the same
    time what will be considered background, namely all intermediary steps.
    But one can consciously carry out only one step at a time.
    For this reason the flow in our consciousness is always serial.

    Our entire behavior is discrete, made up of otherwise unconnected events.
    This way of thinking decides the methods used to obtain a result,
    in this case isolated, static, unilateral, approximate, and relative:
    exactly those resources which are used in mathematics.

    However, this way of thinking apparently has one big advantage:
    the consciousness is told from outside, in small mouthfuls,
    what it has to do AND what the actual input means.
    The order of doing things, for a purpose that was well defined beforehand.

    What does this have to do with programming? Well, a lot,
    because programming is a job for the consciousness,
    just like every parent tells their children how to do something,
    how to polish your shoes or how to count the fingers of a hand.

    What a pity that we cannot program our subconscious.
    Why are we not able to do this? Well, the answer is very simple:
    it does not work in serial mode, like a processor or our consciousness.
    It works only in parallel, using structures which are in fact neural nets.

    Some people think neural nets can be used in conjunction with mathematics,
    thereby creating vast databases containing millions of pictures, all of the same type,
    with only one purpose: to recognize whether a new picture contains that very type.
    As a programmer I can only shake my head: what were you people thinking?

    If we understand that algorithms are in fact a network of elements,
    and that these elements of the network are communicating with each other,
    all at the same time, meaning in parallel, like VHDL in FPGAs,
    NOT serially like the execution of instructions within a processor,

    well, then and only then, we might have a chance to use neural nets accordingly.
    But unfortunately, each developer has their own specialty, either VHDL, assembler, C, or Java,
    let alone the fact that hardware programming is the most difficult one. Why is that?
    Well, in VHDL all processes are executed in parallel, not serially like in all the others.

    Furthermore, biological neural nets are all 4 programming levels in one,
    where the processor is our consciousness, but to get there, it is a long way.
    This implies that our brain has all 4 levels, but only one entry point for programmers:
    namely neural nets. So, try to program some hardware in neural nets for embedded systems.

    Then extend all this neural net hardware to an operating system valid throughout the entire brain.
    Afterwards, one can develop a more complex kernel of the operating system, and only then
    can we start thinking of different applications within the brain. But remember one thing:
    we are programming an embedded system, with sensors and actuators and a body framework.

    Programming a self-programming operating system based on self-programming neural nets:
    this is the ultimate challenge for all programmers, the supreme discipline of designing
    a digital brain which will work in a corresponding body with sensors and actuators and organs.
    And there is only one more thing: the entire system has to have a self-learning mechanism.

    How many of our esteemed colleagues would even dare to think of such a task?
    Well, if this is too much for you, then just relax and simply buy a digital brain from our website, soon.

  2. Programming languages convert source text to native computer instructions to perform expression evaluation and control flow execution.
    The computer itself evaluates Boolean expressions (statements) that determine the sequences of state changes to execute the native computer instructions.
    So languages like VHDL and Verilog have synthesizable constructs that are used to determine the logic and memory elements (registers and flip-flops) that perform the logic operations and state changes of the chip.
    Program compilation produces native computer instructions. Hardware compilation produces digital logic circuitry. Therefore “compilation” means two completely different things.
    Does this help to explain that hardware engineers and software engineers have completely different skill sets?

  3. By the way, regarding the subtitle: “Assembly Programming has Become a Lost Art”.
    I do not believe this. You know why? Because in the biological neural net, aka the brain,
    the consciousness manages the serial events within our life via the spoken language,
    which is in fact its own assembly language, with grammar and dictionary as instruction set.

    Too bad that we people speak so many different languages,
    while each of us is concentrating only on our mother tongue.
    This means it is not portable on other (brain) systems,
    except when we learn another foreign language, like English.

    But then again, how many of us make the effort to extend our views and:
    a. speak more than one language, natural or programming
    b. work different job types, e.g. developer, consultant, trainer, manager, company owner
    c. live in more than one country or continent

    Sic transit gloria mundi! (Latin for: “Thus passes the glory of the world.”)

  4. Let’s not forget Forth, which Charles Moore used to design and compile everything in his computers, from the actual silicon area layouts and interconnect to the colorForth OS and Forth app programs that ran on the resulting F18 chips (AFAIR). One of the Forth applications that could run (fast) on the chips as soon as they came back from the foundry was that silicon compiler itself. Notably, a common syntax was used throughout, evolving as it went but ultimately readable by anyone who knew Forth itself. I think only occam did anything like this.
    To recap, the F18 CPU ran Forth as its assembly language, so once you learned the high-level app coding, the lower levels had very little mystery!

    1. Yes, stack-based computing is still alive and well. That is the basis of FORTH. And the latest is the Roslyn Compiler developed for the C# and Visual Basic languages. It runs on both Windows and Linux and produces CIL/MSIL that can be JIT-compiled to various computer ISAs: compile once, run anywhere. And stacks are at the heart of the whole thing.

      An abstract syntax tree (AST) is created and used for both expression evaluation and flow control. I had a prototype running in simulation. It used three embedded memory blocks and a couple of hundred LUTs on an FPGA. Physically it resembles a heterogeneous FPGA accelerator and has comparable performance.

      Using the Roslyn Compiler API will save a lot of verification time and effort. Expression evaluation is pretty well done, and flow control (if/else, for, while, do/while, and switch) is started. Flow control is basically: compare two values and either do the next sequential control word or do the target control word. (A toy sketch of that idea follows below.)
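      A toy C sketch of that control-word scheme (this is not the commenter’s FPGA design or CIL, just the idea: expressions evaluate on a stack, and a conditional compare either falls through to the next word or jumps to a target word; all names are made up):

        #include <stdio.h>

        typedef enum { OP_PUSH, OP_ADD, OP_CMP_BLT, OP_PRINT, OP_HALT } Op;
        typedef struct { Op op; int arg; } Word;        /* one "control word" */

        static void run(const Word *prog)
        {
            int stack[64], sp = 0, pc = 0;
            for (;;) {
                Word w = prog[pc++];
                switch (w.op) {
                case OP_PUSH:    stack[sp++] = w.arg;               break;
                case OP_ADD:     sp--; stack[sp - 1] += stack[sp];  break;
                case OP_CMP_BLT: sp -= 2;                /* compare two values  */
                                 if (stack[sp] < stack[sp + 1])
                                     pc = w.arg;         /* take target word    */
                                 break;                  /* else fall through   */
                case OP_PRINT:   printf("%d\n", stack[--sp]);       break;
                case OP_HALT:    return;
                }
            }
        }

        int main(void)
        {
            /* if (2 + 3 < 10) print 1; else print 0; */
            const Word prog[] = {
                { OP_PUSH, 2 },     /* 0                        */
                { OP_PUSH, 3 },     /* 1                        */
                { OP_ADD, 0 },      /* 2: 2 + 3 = 5             */
                { OP_PUSH, 10 },    /* 3: the limit             */
                { OP_CMP_BLT, 8 },  /* 4: 5 < 10 ? go to word 8 */
                { OP_PUSH, 0 },     /* 5: not-taken path        */
                { OP_PRINT, 0 },    /* 6                        */
                { OP_HALT, 0 },     /* 7                        */
                { OP_PUSH, 1 },     /* 8: taken path            */
                { OP_PRINT, 0 },    /* 9: prints 1              */
                { OP_HALT, 0 },     /* 10                       */
            };
            run(prog);
            return 0;
        }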

  5. I don’t think you can describe Python as progress in language design; it’s re-inventing the wheel at best.

    “Modern” machines are designed for running C code, but C as a language is a horrible mismatch for what modern Silicon is actually good at.

    Dead languages I know: Algol60, Imp77

    C++ re-imagined for 2000+ – http://parallel.cc

      1. A coworker once described C as “sugar-coated assembly language.”
        Maybe, but it brought some structure to the source code; that is the important part.
        It helps to separate control structure from assignments, because control is Boolean while data flow expressions are arithmetic.

        Operator precedence is especially important for arithmetic evaluation which is essentially sequential.

        Control sequencing in computers is also sequential and depends on accessing memory for each state change (at least to fetch a branch instruction to determine the next instruction to be executed).

        And the world has been sold on the superscalar, branch-prediction, and out-of-order-execution magic.
        But it is not a cure-all; it is application-sensitive and has not been quantified. You say it is intuitively obvious, and I say “take off those rose-colored glasses!”

        But FPGA accelerators exist, don’t they? How can that be? Well, only a few have the fortitude to use the so-called “tool chain” to design a few.

        How about fixing the “tool chain”? FPGAs are not the culprit.

  6. Compilers are wonderful things, until they break (or maybe you just changed the optimization level) and you need to look at what happened. So some understanding of the whole compiler, assembler, linker, and programmer processes is pretty important for any holistic view of debugging, IMHO. (A sketch of that workflow follows below.)
    ARM Cortex or RISC-V are likely good places to start for new programmers.

    Of course, my personal favorites were the Z-80 assembler (microprocessor) and the COMPASS assembler (mainframe) for Seymour Cray’s CDC 6600 mainframe series. I think the weird prize goes to the TMS9900 assembler though, a machine that kept its general registers in memory, with only three (3) on-chip registers…
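    As a follow-up to the debugging point above, here is a hedged sketch of the look-at-what-the-compiler-did workflow. The commands are ordinary GCC/binutils usage; repro.c, flag, and spin_until_set are made-up names for a classic test case whose -O0 and -O2 listings differ in an instructive way.

      /* repro.c -- a classic case where optimized output surprises people.
       *
       * Typical way to look at what the compiler actually did:
       *   gcc -O0 -S repro.c -o repro_O0.s               # readable, unoptimized asm
       *   gcc -O2 -S repro.c -o repro_O2.s               # closer to what ships
       *   diff repro_O0.s repro_O2.s                     # see what the optimizer changed
       *   gcc -O2 -g -c repro.c && objdump -dS repro.o   # machine code with source interleaved
       */
      volatile int flag;        /* drop 'volatile' and compare the -O2 listing:  */
                                /* the load gets hoisted and the loop never ends */
      int spin_until_set(void)
      {
          int loops = 0;
          while (!flag)         /* at -O0 this reloads flag on every iteration   */
              loops++;
          return loops;
      }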
