
AI Boldly Goes Behind the Beyond

I used to love the title sequence at the beginning of each episode of Star Trek: The Original Series starring William Shatner as Captain James Tiberius Kirk. I’m thinking of the part where the announcer waffled on about the Enterprise’s five-year mission “to boldly go behind the beyond, behind which no man has boldly gone behind, beyond, before” (or words to that effect). Well, it seems to be artificial intelligence’s turn to boldly go behind the beyond.

As an aside, writing the previous paragraph reminded me of The Boys: A Memoir of Hollywood and Family by brothers Ron Howard (The Andy Griffith Show, American Graffiti, Happy Days…) and Clint Howard (The Andy Griffith Show, Gentle Ben…). In addition to being one of the best autobiographical books I’ve read, it revealed (much to my surprise) that 7-year-old Clint played Balok in The Corbomite Maneuver, which was the tenth episode in season one of Star Trek.

As another aside (I simply cannot help myself), have you ever pondered the Fermi paradox, which is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the potentially high likelihood of its existence? As we discover in the science fiction novel The Forge of God by Greg Bear, one answer to Fermi’s puzzling poser is that electromagnetically noisy civilizations like ours might be snuffed out by the arrival of artificially intelligent self-replicating machines designed to destroy any potential threat to their (possibly long-dead) creators. It has to be acknowledged that having the Earth destroyed in The Forge of God is a bit of a bummer, but we get our own back in Anvil of Stars when… but no; I’ll let you discover what happens for yourself.

To be honest, when it comes to artificial intelligence (AI) in the context of space—excepting extraterrestrial encounters such as those discussed above (or, preferably, more human-friendly fraternizations of the ilk depicted in Batteries Not Included and Flight of the Navigator)—I’ve only really thought about things like AI-powered autonomous space probes and suchlike. Thus far, however, I’ve never really thought about using AI as part of designing things like satellites and space probes.

Trust me; you don’t want to be the one who failed to fully verify a satellite’s retro encabulation system.

All this was to change recently when I got to chat with Ossi Saarela, who is Space Segment Manager at MathWorks. Prior to his current role, Ossi spent 18 years as a practicing aerospace engineer working on mega-cool programs like the International Space Station (ISS). Ossi’s specializations include spacecraft operations and spacecraft autonomy.

One of Ossi’s current focuses is the simulation of space systems. Another is the use of AI both to design and verify space systems and as an integral part of those systems. It’s important to get things right the first time with any system, but even more so with systems destined for space because (a) getting them there is horrendously expensive and (b) it’s close to impossible to fix them once they are in space (the Hubble Space Telescope being one of the few exceptions that proves the rule).

Lest I forget, before we plunge headfirst into the fray with gusto and abandon, a couple of useful links are as follows: Using MATLAB and Simulink for Space Systems and Machine Learning for Space Missions: A Game Changer for Vision-Based Sensing.

I’m afraid this is the point where things become a little recursive (“in order to understand recursion, you must first understand recursion,” as programmers are prone to proclaim).

Let’s start with the fact that you need a humongous amount of data to train an AI model, and not just any old data will do. It needs to be good data because bad data can leave you spending inordinate amounts of time trying to determine why your model isn’t working as expected. 

Rather than banging your head against the wall tweaking your AI model’s architecture and parameters, you’ll often get larger improvements in accuracy by improving the training data and testing thoroughly. But where are we to get this data? Since it can be difficult to obtain real-world data from systems deployed in space, simulation often provides a solution. Using simulation to augment existing AI model training data has multiple benefits: running computational simulations is much less costly than performing physical experiments; simulations provide access to internal states that might not be accessible in an experimental setup; and engineers have full control over the simulated environment, which means they can recreate scenarios that are too difficult, too dangerous, or even impossible to create in the real world.
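
To make this a smidgen more concrete, the following is a minimal Python sketch of the general idea (the workflow Ossi describes would, of course, be built with MATLAB and Simulink). A toy reaction-wheel simulation is run across a sweep of scenarios, and every run logs not only the commanded input and the noisy “measured” output but also an internal friction state that a real test rig might never expose. All names, dynamics, and parameter values are illustrative assumptions rather than anything from an actual mission.

import numpy as np

rng = np.random.default_rng(42)

def simulate_reaction_wheel(cmd_torque, dt=0.01, inertia=0.05, friction_coeff=0.002):
    # Integrate a toy reaction-wheel model, logging the commanded input,
    # a noisy "sensor" output, and an internal friction state at every step.
    omega = 0.0  # wheel speed (rad/s)
    log = []
    for torque in cmd_torque:
        friction = -friction_coeff * omega          # internal state a rig may not expose
        omega += dt * (torque + friction) / inertia
        measured = omega + rng.normal(0.0, 0.01)    # noisy measurement
        log.append((torque, measured, friction))
    return np.array(log)

# Sweep scenarios that would be slow, costly, or risky to run on hardware.
runs = [simulate_reaction_wheel(rng.uniform(-0.1, 0.1, 500)) for _ in range(200)]
training_data = np.concatenate(runs)   # columns: input, output, internal state
np.save("wheel_training_data.npy", training_data)
print(training_data.shape)             # (100000, 3)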

This is where the recursion starts to kick in, because people are starting to use AI models to approximate the workings of complex systems. Suppose you wish to create control algorithms that will—in the fullness of time—interact with a physical system. Rather than using the physical system itself, the key to enabling rapid design iteration for your algorithms is to create a physics-based simulation model that recreates, with the necessary accuracy, the physical system and environment with which your algorithms can interact.
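
To illustrate what “rapid design iteration” can look like, here’s a hedged sketch in which a proportional-derivative control law is tuned against a toy single-axis attitude model standing in for the physical spacecraft. The dynamics, gains, and cost metric are all invented for illustration.

import numpy as np

def plant_step(theta, omega, torque, dt=0.1, inertia=10.0):
    # Toy first-principles model: single-axis rigid-body attitude dynamics.
    omega += dt * torque / inertia
    theta += dt * omega
    return theta, omega

def run_closed_loop(kp, kd, target=1.0, steps=300):
    # Simulate the controller interacting with the plant model and
    # return an accumulated tracking-error cost.
    theta, omega, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        torque = kp * (target - theta) - kd * omega   # PD control law
        theta, omega = plant_step(theta, omega, torque)
        cost += abs(target - theta)
    return cost

# Rapid design iteration: sweep 100 gain combinations in well under a second.
best = min(((kp, kd, run_closed_loop(kp, kd))
            for kp in np.linspace(0.5, 5.0, 10)
            for kd in np.linspace(0.5, 5.0, 10)),
           key=lambda candidate: candidate[2])
print(f"best gains: kp={best[0]:.2f}, kd={best[1]:.2f}, cost={best[2]:.2f}")

Sweeping a hundred gain combinations like this takes a fraction of a second on a laptop; trying the same thing against real hardware would be slow, expensive, or downright impossible once the spacecraft is on orbit.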

But there’s an elephant in the room and a fly in the soup (I never metaphor I didn’t like). Historically, to achieve the necessary accuracy, engineers have created high-fidelity physics-based models from first principles (i.e., from the ground up). But these models can take a long time to build and a long time to simulate. The problem is only exacerbated when large numbers of models representing different parts of the system are combined in a single simulation.

One solution is to simulate each high-fidelity model in isolation, and then use the captured input stimulus and output responses to train corresponding AI models. These reduced-order AI models are much less computationally expensive than their first-principles counterparts, thereby enabling the engineers to perform more exploration of the solution space (where no one can hear you scream). Of course, any physics-based models can always be used later in the process to validate the design determined using the AI model.
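
By way of a sketch (and only a sketch), the snippet below uses scikit-learn as a stand-in AI framework: a deliberately nonlinear “high-fidelity” function is sampled once to capture stimulus/response pairs, a small neural network is trained on those pairs to serve as the reduced-order surrogate, and the surrogate is then queried cheaply, with the original model held in reserve for validation. The function and the network size are assumptions made purely for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_high_fidelity_model(u):
    # Stand-in for a slow first-principles simulation with a nonlinear response.
    return np.sin(3.0 * u) + 0.1 * u**3

# 1) Run the costly model once to capture stimulus/response pairs.
U = rng.uniform(-2.0, 2.0, size=(2000, 1))
Y = expensive_high_fidelity_model(U).ravel()

# 2) Train a cheap reduced-order surrogate on the captured data.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(U, Y)

# 3) Explore the design space with the fast surrogate...
u_test = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
print(surrogate.predict(u_test))

# ...and keep the high-fidelity model in reserve for final validation.
print(expensive_high_fidelity_model(u_test).ravel())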

Alternatively, in some cases, it’s possible to use real-world data from a physical system to train the AI model, thereby completely bypassing the creation of a physics-based model. Of course, once you have AI models to represent each of your subsystems, you could use these as part of a simulation to generate the data to train an AI model of the entire system, which returns us to the part where I started to waffle about recursion. 

But wait, there’s more… My degree in Control Systems involved a core of math sufficient to make your eyes water, coupled with electronics, mechanics, and fluidics (hydraulics and pneumatics). It also involved a lot of algorithms (oh, so many algorithms). Increasingly, engineers use simulations as part of the process of designing and verifying their algorithms.

A big challenge when creating control algorithms is ensuring that they fully address the complex non-linearities inherent in many real-world systems. One solution is to use data (either measured or simulated) to train an AI control algorithm (model) that can predict unobserved states from observed states. This model can subsequently be employed to control the real-world system.
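
One hedged way to picture this: in the sketch below, simulated trajectories (in which every state is known) are used to fit a simple least-squares estimator that infers an unobserved velocity from a short window of noisy position measurements, which is the sort of estimate a feedback law could then act on. The window length, noise levels, and linear model are illustrative assumptions, not a recipe.

import numpy as np

rng = np.random.default_rng(1)
dt, window = 0.1, 5

# Generate simulated trajectories in which every state is known.
features, true_vel = [], []
for _ in range(500):
    pos, vel, history = 0.0, rng.uniform(-1.0, 1.0), []
    for _ in range(50):
        vel += dt * rng.normal(0.0, 0.2)               # random disturbance
        pos += dt * vel
        history.append(pos + rng.normal(0.0, 0.02))    # noisy measurement
        if len(history) >= window:
            features.append(history[-window:])   # observed: last few positions
            true_vel.append(vel)                 # unobserved: true velocity
X = np.array(features)
y = np.array(true_vel)

# Fit a simple least-squares estimator as the "AI model" mapping
# observed measurements to the unobserved state.
A = np.c_[X, np.ones(len(X))]
W, *_ = np.linalg.lstsq(A, y, rcond=None)
rms = np.sqrt(np.mean((A @ W - y) ** 2))
print("velocity estimate RMS error:", rms)
# The resulting estimate is what a feedback law could act on when
# controlling the real-world system.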

So, if we re-read all the above, in a crunchy nutshell: (a) we can use simulation to generate data to train AI models, (b) we can use the trained AI models to speed our simulations, and (c) we can create AI-based control algorithms that we train using data generated by AI-model-based simulations after which we can verify these algorithms using AI-model-based simulations. (I feel it would be recursive of me to return to the topic of recursion.)

I was going to talk about using AI for tasks like Rendezvous, Proximity Operations, and Docking (RPOD), alighting probes on asteroids and comets, landing rovers on the Moon and Mars, and… so much more, but I’m afraid that will have to wait for another day because (what I laughingly call) my mind seems to be stuck in a recursive loop. It’s like déjà vu all over again (did someone just say that?).
