I fear that my poor old cranium has been crammed to capacity with contemporary concepts. For example, I have always understood the abbreviations HIL or HITL to mean “hardware-in-the-loop” in the context of simulation and emulation. Now, however, I have discovered they can also refer to “human-in-the-loop,” which is an experience I’ve not hitherto enjoyed myself.
When I hear “human-in-the-loop,” I cannot help but think about “The Borg Collective,” which is a hive mind of cyborgs in the Star Trek universe. A similar concept is also explored in A Deepness in the Sky by Vernor Vinge. In this case, the “Focused” are humans who have been mentally reprogrammed, through brain manipulation, to concentrate solely on a particular task or area of expertise. Their personalities, free will, and ability to think beyond their assigned specialty are effectively stripped away. Like the Borg drones, the Focused lose their individual autonomy and exist only to serve a collective goal. They are highly efficient and surpass normal humans in specialized tasks, but at the cost of personal freedom.
Just thinking about this sends shivers running up and down my spine, so it's fortunate that it's not what we are talking about here. In the context of this column, human-in-the-loop refers to a collaborative approach in artificial intelligence (AI) and machine learning (ML) whereby humans actively participate in the training, evaluation, or operation of AI systems, providing feedback and expertise to enhance accuracy, reliability, and adaptability. In this case, human input helps address biases, refine models, and improve performance in complex or nuanced situations. HITL systems can learn and adapt to new situations more effectively by incorporating human feedback. Such systems can gracefully handle low-confidence situations, deferring to human expertise when needed. In a crunchy nutshell, HITL leverages the strengths of both humans and machines, enabling more robust and reliable AI systems.
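To make this a tad more tangible, here's a minimal Python sketch of the confidence-gating idea at the heart of many HITL systems (the threshold, toy model, and function names are placeholders of my own devising, not anyone's production code): the machine handles the predictions it's sure about, defers the dodgy ones to a human, and banks the human's answers for the next round of training.

```python
import random

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; real systems tune this per task

def toy_model(sample):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    confidence = random.random()
    return ("widget" if confidence > 0.5 else "gadget"), confidence

def classify(sample, ask_human, feedback_buffer):
    label, confidence = toy_model(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                               # machine handles the easy cases
    human_label = ask_human(sample)                # defer to the human in the loop
    feedback_buffer.append((sample, human_label))  # bank the correction for retraining
    return human_label

# Low-confidence samples go to the human; their answers feed the next training run.
buffer = []
print(classify("part_042.jpg", ask_human=lambda s: "widget", feedback_buffer=buffer))
```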
One more concept I’d like to touch on before we plunge headfirst into the heart of this column is that of self-replication in the context of robots and AIs. Now I’m thinking of Dennis Taylor’s “Bobiverse,” which began with his book, We Are Legion (We Are Bob). Our hero, Bob, unexpectedly finds himself with his consciousness uploaded into computer hardware (I hate it when that happens), where he is slated to be the controlling intelligence in an interstellar probe searching for habitable planets. When he reaches such a planet, he is to replicate his probe and his consciousness and “keep on carrying on,” as it were.
Why am I waffling on about all this? If we are lucky, all will be revealed while I manage to maintain a corporeal presence on this plane of existence.
I was just chatting with Kent Gilson. Our paths have crossed several times over the years. The first instance of this occurred in the early 2000s, when Kent was serving as the CTO of StarBridge Systems, a company that built a hypercomputer boasting hundreds of FPGAs and multiple memory hierarchies. Next, in the late 2010s, we shared the stage at an Embedded Systems Conference (ESC). Kent gave a live demonstration in which he trained a 3D-printed robot arm to pick up and manipulate a cup of water. I remember thinking, "That's interesting," but I had no idea where he was going to take this technology, all of which brings us to REVOBOTS.
Let’s start with the fact that this company is headed by a crack team: CEO Dr. Giby Raphael (human-in-the-loop pioneer), CTO Kent himself (mechatronics and high-performance computing expert), CAIO Dr. Rahul Khanna (renowned AI expert), CMO Andre Christian (visionary technology and marketing executive), CSO Dr. Ranjy Thomas (seasoned entrepreneur and strategic innovator), and VP of Design Mike Phillips (automation and robotics expert).
There are many things that impress me about this company, not least their innovative robots, which we will examine in just a moment (I think you'll find the REVOBOTS Origin Story to be jolly interesting). The thing is that anyone (well, almost anyone) can build a robot these days, but the results are typically horrendously expensive, and it's hard to envision how such robots will be deployed in the real world. By comparison, the guys and gals at REVOBOTS have a rather clever deployment strategy, which we will consider after contemplating the robots themselves.
The core idea is a robot called a TASKBOT. These come in a variety of form factors, including fixed or mobile, equipped with one arm or multiple arms. The image below shows two different views of a mobile TASKBOT flaunting two arms.
Mobile TASKBOT equipped with two arms (Source: REVOBOTS)
Depending on where it's being deployed and the tasks it's required to perform, this bodacious beauty can be equipped with 360° 3D Lidar, IR, and/or camera sensor options. Similarly, the ends of its arms accept an almost limitless variety of end-effectors and tools, and these end-effectors can offer sub-micron accuracy. Its arms can handle a 150-pound (~68 kg) payload, and its tracks provide robust and agile maneuvering capabilities.
One of TASKBOT's key features is that nearly all its structural components are 3D-printed. This means production can scale on demand. Need a new batch of robots? Just hit "print." Kent says that the marginal cost of producing each additional unit is astonishingly low; beyond raw materials and energy, the plastic parts cost essentially nothing. In fact, the entire body of the TASKBOT can be produced from about $1,000 worth of raw materials, an order-of-magnitude reduction in cost that makes widespread adoption plausible.
In Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith, we learn many things, including the fact that an octopus has eight "arms" (not "tentacles"), and that around two-thirds of an octopus's neurons are in its arms. This means that most of an octopus's neural processing happens outside its central brain, allowing each arm to independently sense, react, and even make decisions without direct input from the brain.
The reason I mention this here is that each of the TASKBOT's arms is controlled by a low-cost SoC FPGA, such as a dual-core ZYNQ from Xilinx/AMD. I believe these chips cost around $50 apiece. Additionally, the robot's mobility, in the form of its track subsystem, is controlled by an affordable Arduino-class microcontroller. All the processors communicate with each other, and there is no central control per se.
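To make the octopus analogy a little more concrete, here's an illustrative Python sketch of my own (emphatically not REVOBOTS' firmware) showing how independent controllers can coordinate by broadcasting state on a shared bus, with no central brain calling the shots:

```python
from collections import defaultdict

class Bus:
    """A toy publish/subscribe bus standing in for the inter-processor link."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class ArmController:
    """Each arm runs its own loop and reacts locally to what its peers report."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        bus.subscribe("state", self.on_peer_state)
    def on_peer_state(self, message):
        if message["name"] != self.name and message["moving"]:
            pass  # e.g., replan locally to stay out of the other arm's workspace
    def step(self, moving):
        # Run local control, then tell the peers what we're up to.
        self.bus.publish("state", {"name": self.name, "moving": moving})

bus = Bus()
left, right = ArmController("left", bus), ArmController("right", bus)
left.step(moving=True)  # the right arm hears about it and can react on its own
```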
There’s so much to talk about here, but I don’t want to wander off into the weeds, so I’ll just give you some of the highlights from my chat with Kent. Let’s start with the precision of the end-effectors, which is currently running at 900 nanometers (less than 1 micrometer)! This means a TASKBOT can perform tasks for which companies usually hire people with special skills, such as Swiss watchmakers with 20 years of experience. I’m serious. One of the jobs a TASKBOT has performed is manipulating and repairing 30-micron probes on a semiconductor wafer test probe card. Something else they can do is assemble devices that employ teeny-tiny M1 machine screws. The TASKBOT’s end-effectors are so sensitive that the bot can detect when it has a crossed thread, back off, and try again. Impressed? I know I am.
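For those who like to see such things spelled out, here's a hypothetical sketch of that crossed-thread recovery behavior (the torque and depth numbers are values I invented for illustration, not TASKBOT specifications):

```python
def drive_screw(read_torque, read_depth, rotate_in, back_out, max_retries=3):
    """Drive a screw, backing out and retrying if a crossed thread is suspected."""
    TORQUE_LIMIT = 0.08  # N·m: a torque spike this early suggests a crossed thread
    SEATED_DEPTH = 1.2   # mm: depth at which a teeny-tiny M1 screw counts as seated
    for _ in range(max_retries):
        while read_depth() < SEATED_DEPTH:
            rotate_in()  # advance one small increment clockwise
            if read_torque() > TORQUE_LIMIT and read_depth() < 0.5 * SEATED_DEPTH:
                back_out()  # unwind, realign, and try again
                break
        else:
            return True   # the screw seated cleanly
    return False          # give up and flag the problem for a human

# Quick simulated check: depth gains 0.1 mm per turn and torque stays low.
depth = [0.0]
def rotate_in(): depth[0] += 0.1
def back_out():  depth[0] = 0.0
print(drive_screw(lambda: 0.02, lambda: depth[0], rotate_in, back_out))  # True
```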
Now consider that many other robot vendors are equipping their robots with large language models (LLMs) that run on high-end, power-hungry AI processors. By comparison, the TASKBOT employs a set of small language models (SLMs) that are combined hierarchically, can be trained quickly, and have a very small memory footprint, allowing them to be integrated into an FPGA and run in real time on a 5-watt compute infrastructure.
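Here's a toy sketch of the hierarchical idea (the model names and keyword-based routing rule are my own inventions, not REVOBOTS internals): a small router model decides what kind of task it's looking at, then hands off to an even smaller task-specific model, so no single large model ever has to squeeze into the FPGA.

```python
class SmallModel:
    """Stand-in for a compact model with a tiny memory footprint."""
    def __init__(self, name, respond):
        self.name, self.respond = name, respond
    def run(self, request):
        return self.respond(request)

# Tier 1: a small router decides which skill is needed (toy keyword rule).
router = SmallModel("intent-router",
                    lambda r: "fasten" if "screw" in r else "grasp")

# Tier 2: even smaller task-specific models, one per skill.
specialists = {
    "fasten": SmallModel("fastening-skill", lambda r: f"torque plan for {r!r}"),
    "grasp":  SmallModel("grasp-planner",  lambda r: f"grip plan for {r!r}"),
}

def infer(request):
    return specialists[router.run(request)].run(request)

print(infer("insert M1 screw into housing"))
```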
Speaking of power, a TASKBOT consumes approximately 80W when standing still and around 160W when in motion. It's powered by two lithium batteries, running off one while keeping the other in reserve. When the TASKBOT detects that its active battery is running low, it switches to the fresh battery, goes to the nearest "battery bank," pulls out the depleted battery (inserting it into the bank to recharge), and replaces it with a fully charged one (you could think of this as the TASKBOT taking a coffee break).
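If you fancy seeing that coffee-break logic written down, here's a hedged sketch of the battery-swap state machine (the threshold, charge model, and method names are all mine):

```python
LOW_CHARGE = 0.15  # assumed cutoff for "running low"

class BatteryManager:
    def __init__(self):
        self.active, self.reserve = 1.0, 1.0  # charge fractions for two batteries

    def tick(self, drain, go_to_bank):
        self.active -= drain
        if self.active <= LOW_CHARGE:
            # Switch to the fresh battery, then take a "coffee break":
            self.active, self.reserve = self.reserve, self.active
            go_to_bank(self.reserve)  # trade the depleted battery at the bank
            self.reserve = 1.0        # return with a fully charged spare

bot = BatteryManager()
for _ in range(60):
    bot.tick(drain=0.02, go_to_bank=lambda depleted: None)  # motion stubbed out
```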
Now, this is where things start to get really clever. It’s possible to fit a custom 3D printer in the space between the TASKBOT’s “legs.” The idea is that, in the not-so-distant future, a TASKBOT will be able to “print” new end-effectors on the fly to handle any specialist tasks it is required to perform. Furthermore, Kent says it won’t be long before it will be possible for a TASKBOT to replicate itself by printing and assembling new versions of itself. In fact, he envisions a time when a helicopter could lower a solar-powered container carrying a TASKBOT and a collection of raw materials, along with any necessary electromechanical and electronic components. The TASKBOT could then create a small TASKBOT task force to accomplish whatever was required.
Now we come to the next part of the puzzle, which is the human-in-the-loop. In this case, we’re talking about remote HITL (RHITL), where humans remotely oversee and intervene in automated processes.
Imagine the scene. A TASKBOT shows up for its first day at work. Let’s say it’s going to perform its duties on some sort of assembly or production line. I bet that you, like me, cannot help but think of the I Love Lucy sketch where Lucy and Ethel are at a chocolate factory.
What are you thinking about? We don’t have the time to talk about that here! So, the TASKBOT undergoes the same onboarding process as a human being. There are differences, of course, such as the fact that humans would be shown the locations of the canteen and restrooms, while the TASKBOT would be shown the location of the battery bank.
In the same way that an existing employee would show a new human recruit the ropes, that employee would train the TASKBOT. Well, not quite, because it's the remote human in the loop who would be controlling the TASKBOT's actions—picking things up, turning them around, manipulating them in certain ways. Meanwhile, the TASKBOT's cameras and tactile sensors would stream everything to the cloud, where the small language models would be trained before being downloaded into the TASKBOT's processors.
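Under the hood, this is classic imitation learning: log what the remote pilot saw and did, then fit the models to those demonstrations. Here's a minimal sketch, with field names and an upload function that are my assumptions rather than anything REVOBOTS has published:

```python
import json, time

def record_demonstration(get_observation, get_pilot_action, upload,
                         duration_s=5.0, hz=20):
    """Log observation/action pairs while the human pilot drives the robot."""
    episode, t_end = [], time.time() + duration_s
    while time.time() < t_end:
        episode.append({
            "t": time.time(),
            "obs": get_observation(),       # camera frames, tactile readings, ...
            "action": get_pilot_action(),   # whatever the remote human commanded
        })
        time.sleep(1.0 / hz)                # sample at a fixed control rate
    upload(json.dumps(episode))             # ship to the cloud for SLM training
```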
So, how much does a TASKBOT cost? It really doesn’t matter because you can’t actually buy one. This is where we come to REVOBOTS’ deployment strategy. Developed nations like the US are facing a crisis due to their aging populations. As we read on the REVOBOTS website:
By 2030, a global labor shortage could leave over 85 million jobs unfilled, leading to $8.5 trillion in lost annual revenue. Industries from retail to hospitality to manufacturing—with millions of vacancies today—face growing strain as inflation and stagnant wages exacerbate the problem. Without a solution, businesses worldwide will experience dwindling productivity, spiraling costs, and an inability to meet consumer demand.
The REVOBOTS solution? Robot-as-a-Service (RaaS)! Rather than selling TASKBOTS as high-priced capital equipment, REVOBOTS offers them through a flexible subscription model. Clients can rent a robot workforce much like they rent cloud computing power. This eliminates the capital expense barrier for customers. For example, a factory can subscribe to a set number of TASKBOTS for a predictable monthly fee, covering hardware, maintenance, continuous software updates, and RHITL pilot services. The folks at REVOBOTS say that the TASKBOT can perform its duties 10X faster than a human at only 50% of the cost (if that 50% refers to the hourly rate, a quick back-of-the-envelope calculation says the cost per unit of work drops by a factor of 20).
This flexibility is hugely attractive to industries with seasonal or cyclical labor needs. And because REVOBOTS retains ownership of the units, it aligns its own incentives with those of its customers: if the robots aren’t delivering value, REVOBOTS doesn’t receive payment. As the folks at REVOBOTS say: “It’s a win-win structure rooted in performance and trust.”
I know that a lot of people's knee-jerk reaction is that this type of technology will put people out of jobs, but that's true only if there are enough people to do the jobs in the first place. I don't like to admit it, but I'm getting old myself, as are many of my friends. And it's not just me and my mates. The whole US population is aging rapidly, driven by longer life expectancy and declining birth rates. In 2020, for example, ~56 million individuals (16.8% of the US population) were 65 or older, and this is projected to reach ~63 million (18.4%) in 2025, ~72 million (20.6%) by 2030, and ~81 million (21.6%) by 2040. Similarly, the number of Americans aged 85 and older is expected to nearly triple, from ~7 million in 2020 to ~19 million by 2060. Fewer young workers will mean slower economic growth unless robotics, artificial intelligence, and automation are used to fill the labor gaps.
If you want to learn more about all this, REVOBOTS’ Official North American Debut is set for April 17th at the AI Venture Network Showcase in Phoenix, Arizona. This event is FREE to attend. I wish I lived near there because I would LOVE to see a TASKBOT with my own eyes (I’ve already cleared a place for one in my garage on the off chance the chaps and chapesses at REVOBOTS decide they need some additional field testing).
All I can say is that I am very impressed with the folks at REVOBOTS and their vision, which is currently exemplified by today’s TASKBOT. But it’s not all about me (it should be, but it’s not). What say you? Do you have any thoughts you’d care to share on any of this?