So many things are currently going on in the artificial intelligence and artificial body spaces that my head is spinning like a top. For example, AIs are being used to design chips and systems for other AIs to run on, and an AI running on one of those systems can generate synthetic data that can be used to train another AI, and… then things start to get complicated.
Now, before we jump into the fray with gusto and abandon (and aplomb, of course), I have exciting news. This news is so exciting that it may be best for you to sit down before reading on. Are you ready? Good, then I’ll begin.
This year’s Embedded Online Conference (EOC) will take place 12-16 May 2025. Now in its fifth year (there were two precursor GoToWebinar sessions in 2018 and 2019), this virtual gathering has grown to be one of the most prestigious events in embedded space (where no one can hear you scream).
In a crunchy nutshell, this event features the crème de la crème of presenters… and me (I’ve contributed at least one paper to every EOC since its inception). This is the place to see and be seen, and to learn numerous nuggets of knowledge like “every bit counts,” “even lost signals can make waves,” and “silent failures can produce the loudest problems” (I’m sorry, I’m too “punny” for my own good).
As one commenter says on the EOC website: “I am so blown away. In the past, I have spent 10X $ on 1/10 the amount of education I got from EOC. Amazing job to everyone who put this together.”
The reason I’m waffling on about this here is that my talk this year will be on the subject of AI in Embedded Systems and Life Writ Large. In this talk, I’m going to waffle furiously about how AI is being used to design embedded hardware and software, how it’s being used to write (and read) documentation, and lots of related “stuff.” I’m also going to be covering the “latest and greatest” with respect to AI-powered robots and, once again, lots of related “stuff.”
The truly awesome news is that the conference organizers have given me a promo code to share with you (and for you to share with your family and friends). If you use promo code MAXFIELD2025 when you REGISTER, this will bring the price down to $95 for access to EOC 2025 (or $145 with access to the archives from previous events). The organizers also tell me that these prices are good only until the end of February; both will go up by $50 in March, as will the early bird registration fee (but this promo code will always be good for a $50 discount, even if you decide to leave things until the last minute).
But I fear we are in danger of wandering off into the weeds …
Have you ever found yourself in the “Uncanny Valley”? As related to robotics engineering, Japanese roboticist Masahiro Mori first introduced the concept in his 1970 essay titled Bukimi No Tani (“The Uncanny Valley”). According to the Wikipedia:
The uncanny valley effect is a hypothesized psychological and aesthetic relation between an object’s degree of resemblance to a human being and the emotional response to the object. The uncanny valley hypothesis predicts that an entity appearing almost human will risk eliciting eerie feelings in viewers. Examples of the phenomenon exist among robotics, 3D computer animations and lifelike dolls.
Consider the graphic below. It’s interesting to note the difference between something that’s unmoving (like a corpse) and something that’s moving (like a prosthetic hand). I also like the fact that zombies managed to make their way into this graph. It’s also interesting to observe that our affinity with humanoid robots increases as they become more realistic (that is, as they look more like us) until they reach about 75% of human likeness (although I’m not sure how you go about measuring this), at which point the affinity factor falls precipitously and we plunge screaming into the depths of the uncanny valley.
Welcome to the uncanny valley (we hope you’ll enjoy your stay)
(Source: Wikipedia/Smurrayinchester)
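Just for fun, here’s a purely illustrative Python sketch (using NumPy and Matplotlib) of the sort of curve shown above. The formula and the numbers in it are my own loose approximation, intended only to convey the general “rise, plunge into the valley, recover” shape; they are not Mori’s actual data.

```python
# Purely illustrative: a loose approximation of an "uncanny valley" style curve.
# Affinity rises with human likeness, dips sharply somewhere around ~75%
# likeness, then recovers as we approach a healthy human at 100%.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)   # 0 = clearly a machine, 1 = healthy human

# Hypothetical affinity: a rising trend minus a Gaussian "valley" near 75% likeness
affinity = (0.6 * likeness + 0.4 * likeness**3
            - 0.9 * np.exp(-((likeness - 0.75) ** 2) / 0.005))

plt.plot(likeness * 100, affinity)
plt.axvspan(70, 85, alpha=0.15, label="uncanny valley (approx.)")
plt.xlabel("Human likeness (%)")
plt.ylabel("Affinity (arbitrary units)")
plt.title("Illustrative uncanny valley curve (not Mori's data)")
plt.legend()
plt.show()
```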
It’s true. Contrast C-3PO from Star Wars with Sophia from Hanson Robotics, for example. C-3PO’s golden, mechanical appearance makes him clearly non-human, so people accept him without discomfort. By comparison, Sophia is a real-life humanoid robot with realistic skin and facial expressions that sometimes feel “off” due to slight inconsistencies in movement.
I should point out that I am in no way denigrating the guys and gals at Hanson Robotics. I am tremendously impressed with their work. I’m just not sure that I’d like to find myself along with Sophia in a desolate and deserted building in the middle of a dark and stormy night.
What if a robot were truly lifelike? Just saying this makes me think about The Caves of Steel by Isaac Asimov. Set around 3,000 years in the future, the story takes place at a time when fifty planets relatively close to Earth, known as the “Spacer Worlds,” have been colonized. Meanwhile, people on Earth live in vast city complexes covered by huge metallic domes (i.e., the “caves of steel”).
Our hero, detective Elijah Baley, is familiar with Earth robots like R. Sammy, which are boxy, metallic, and obviously robotic. They are designed to look and act in a way that clearly differentiates them from humans, which makes them less unsettling to the general population. By comparison, Spacer robots are remarkably human in appearance. In fact, R. Daneel Olivaw has realistic skin, facial expressions, and mannerisms to the extent that he’s indistinguishable from a human.
But we digress…
If we were to get my time machine working and use it to travel back to the early 2000s, we would find a bunch of start-up companies interested in creating lifelike humanoid robots. As we now know, there are two aspects to this: the robot’s body and the robot’s mind.
Circa 2005, AI technology was much less sophisticated than it is today. The field was still in a transitional phase, with notable progress being made in certain areas, but not yet reaching the advanced levels we see now in conversational AI or robotics. Sophisticated physical movement (like ASIMO’s walking, dancing, or simple human interaction) was possible, but robots lacked the adaptive intelligence to respond dynamically to complex human interactions. Robots at that time could be programmed to perform repetitive tasks with high precision (think industrial robots), but they were far from having autonomy or the ability to think independently.
Some of those early robot companies split their resources, trying to develop their own AIs in conjunction with lifelike robots. The problem is that both these activities can consume vast amounts of time and money.
Whether by chance or design, other companies decided to focus on the physical aspects of the robots, like their appearance and facial expressions. The real game-changer came with the development of large language models (LLMs) like ChatGPT, and their release into the wild (ChatGPT’s initial release was on 30 November 2022). Suddenly, the companies that had focused on developing the physical aspects of lifelike robots found themselves on a level playing field with respect to the AI portion of the package.
The reason I’m waffling on about all this here is that I was recently introduced to one of these latter companies. I was just chatting with Leo Chen, who is Director of U.S. Operations at Engineered Arts.
This company has an interesting backstory. Based in Cornwall, England, it was founded by Will Jackson in 2004. At that time, Will wanted to create storytelling robots for use in entertainment venues. Over the years, the company grew, and their robots became more and more lifelike. Consider Ameca, for example.
Ameca is the perfect humanoid robot platform for human-robot interaction (Source: Engineered Arts)
The interesting point here is that Ameca was introduced in December 2021, which was almost a year before ChatGPT bid the world a cheery “Hello”. Now, we have the combination of the Ameca physical platform with today’s AI-powered machine vision, speech recognition, natural language processing (NLP), and facial expression generation.
Ameca’s interaction capabilities are built on sophisticated computer vision and machine learning algorithms, all integrated with emotional AI to produce more lifelike expressions. (Emotional AI, also known as “affective computing,” is a type of AI that can recognize, analyze, and respond to human emotions.) Ameca and companion Azi also use AI to recognize gestures and understand basic conversational cues, enhancing their realism as humanoid robots.
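For those of a programming persuasion, here’s a minimal, hypothetical Python sketch of the kind of “see a face, read its expression, react” loop implied by affective computing. I’m assuming OpenCV’s stock Haar cascade for face detection, and the classify_emotion() helper is just a placeholder for a trained model; none of this reflects Engineered Arts’ actual implementation.

```python
# Hypothetical affective-computing loop: detect a face, classify its
# expression, and map the result to a (placeholder) robot response.
import cv2

# Stock OpenCV Haar cascade for frontal-face detection (ships with opencv-python)
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_pixels):
    """Placeholder for a trained emotion classifier (e.g., a small CNN)."""
    return "neutral"  # stand-in result for this sketch

# How a hypothetical robot might map a detected emotion to a response
RESPONSES = {
    "happy": "smile back",
    "sad": "soften expression and lower voice",
    "neutral": "maintain a friendly idle expression",
}

cap = cv2.VideoCapture(0)   # default camera
ret, frame = cap.read()     # grab a single frame for this example
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        emotion = classify_emotion(gray[y:y + h, x:x + w])
        print(f"Detected emotion: {emotion} -> {RESPONSES.get(emotion, 'no action')}")
cap.release()
```

In a real robot, of course, those response strings would be replaced by commands to the motors driving the face, eyes, and neck.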
You may not think the joke told by Azi in this video is funny, but I’ve heard (and told) worse.
Engineered Arts has desktop versions of Ameca and Azi (essentially the head and torso), which provide great platforms for human-robot interaction (HRI) research, front-of-house applications, and human conversation.
Are we at the stage of having humanoid robots that can help us with our housework, aid us with our gardening chores, and accompany us on supermarket shopping expeditions? Not yet, much to the chagrin of my wife (Gigi the Gorgeous). But we are getting closer day-by-day, and the folks at Engineered Arts are helping to take us there.
I don’t know about you, but I would love to have a desktop model of Ameca here in my office. And, speaking of you (may I say how dapper you’re looking today), do you have any thoughts you’d care to share on anything you’ve read here?
Great article Max — I especially enjoyed the Uncanny Valley chart!
Note: Readers in or visiting the San Francisco Bay Area can interact live and in-person at the Computer History Museum’s new “Chatbots Decoded: Exploring AI” exhibit. Ameca awaits your questions!
Dag Spicer
Senior Curator
Computer History Museum
Thanks for the kind words, Dag. I don’t get out to Silicon Valley very often these days, but the next time I do I will be sure to visit the Computer History Museum again (I LOVE that place!!!).