I once worked for a large computer manufacturer that considered itself to be a “big cheese” in its headquarters’ hometown. For some reason, the folks who donned the undergarments of authority and strode the corridors of power decided to have a blitz on the local media channels — including newspapers, radio, and television — to remind the hoi polloi as to who we were and what we did. At the end of this campaign, the bigwigs (those sporting the biggest wigs) sponsored a survey and were chagrined to discover that — when questioned — the vast majority of local residents took a WAG (wild-assed guess) that the company was famous for manufacturing paint. There was much gnashing of teeth and rending of garb in the executive command and control canteen that day, let me tell you.
The reason I mention this here is that I was just chatting with Johanna Pingel, Product Manager at MathWorks. For the past couple of years, the little scamps at this illustrious organization have been doing everything but jump up and down, wave flags, and shout from the rooftops the news that they have awesome artificial intelligence (AI) capabilities to help their users with regard to developing and deploying machine learning (ML), deep learning (DL), and reinforcement learning (RL) designs. So, you can only imagine their frustration when their users kick off a conversation by saying something like, “We love your MATLAB and Simulink products… if only they supported the creation of AI applications.” (Can you hear a faint “Arrgggghhhhh” floating on the wind?)
I can well appreciate their humiliation and mortification. I mean to say, it’s not like they have been trying to keep any of this a secret. If you visit the MathWorks home page, for example, you are immediately presented with the “MATLAB for Artificial Intelligence” legend accompanied by Machine Learning, Deep Learning, and Reinforcement Learning buttons. There’s also a telling image of a robotic figure rising from its knees as it learns to walk.
Robotic figure rising from its knees as it learns to walk
(Image source: MathWorks)
Actually, with regard to this illustration, I think they’ve got things the wrong way round. That is, if I had been in charge of creating this graphic, I would have reversed the sequence such that the kneeling robot was on the left and the walking robot was striding off the screen to the right. Of course, this is because I’m used to reading from left to right. Also, the flow charts and circuit schematics with which I’m familiar tend to flow top to bottom and left to right with regard to their inputs and outputs. There are, of course, a number of languages that employ right-to-left scripts, including Arabic, Aramaic, Azeri, Dhivehi/Maldivian, Hebrew (see also The Two-Minute Haggadah), Kurdish (Sorani), Persian/Farsi, and — it goes without saying, but I’ll say it anyway — Urdu. It just now struck me to wonder whether the folks who read and write these languages would, if presented with a choice, prefer their circuit diagrams to flow from right to left also.
As an aside, this reminds me of a true story* involving a series of Heineken adverts from the 1970s (*not that any of my stories are untrue, you understand, but I learned at my mother’s knee not to let facts get in the way of a good tale). These adverts, which were intended to be viewed from left to right, were pictorial in nature, featuring three panels. On the left, we had an image showing a sad state of affairs; in the middle, we were presented with a picture of someone quaffing a glass or bottle of Heineken; and, on the right, we saw that everything had now turned out to be the way it should be. One of these adverts featured the English cricketer Frederick (Fred) Sewards Trueman (1931-2006), who played for Yorkshire County Cricket Club and the England cricket team. (You may recall that I myself was born and bred in the county of Yorkshire.) Sad to relate, Fred sported a less-than-perfect set of teeth, whose state had not been improved by English dentistry. The Heineken advert in question showed Fred with a ragged-tooth grin on the left; Fred quaffing a Heineken in the center; and Fred flaunting a full-on splendid smile on the right. Unfortunately, Heineken sales quickly plummeted in those countries whose citizens read the story from right to left and understood the moral of the tale to be, “Start with a great smile — drink Heineken — and say goodbye to your teeth.”
But I fear we digress… On the off chance you don’t have a clue (you might be a manager, bless your little cotton socks), as summarized by Wikipedia, MATLAB is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. Meanwhile, Simulink is a MATLAB-based graphical programming environment for modeling, simulating, and analyzing multidomain dynamical systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries.
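For anyone who has never actually laid hands on MATLAB, here’s a minimal sketch of the sort of thing I’m talking about: a few lines that create a matrix, solve a small linear system, and plot a function (the specific values are just placeholders I’ve picked for illustration).

```matlab
% A minimal taste of MATLAB: matrix manipulation and plotting.
A = magic(4);               % 4x4 "magic square" matrix
b = ones(4, 1);             % column vector of ones
x = A \ b;                  % solve the linear system A*x = b

t = linspace(0, 2*pi, 100); % 100 points from 0 to 2*pi
plot(t, sin(t));            % plot a sine wave
xlabel('t'); ylabel('sin(t)'); title('Hello, MATLAB');
```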
The point of all this is that the folks at MathWorks have a bunch of tools that enhance the capabilities of MATLAB and Simulink to embrace AI with regard to the four main areas of Data Preparation, AI Modeling, Simulation and Test, and — ultimately — Deployment.
For engineers, there’s a lot to consider beyond the broad definition of AI and, more importantly, how to implement it. The result will vary from application to application, but building a successful AI system involves navigating the entire workflow and focusing on more than just training an AI model.
(Image — and enthusiastically long caption — source: MathWorks)
In the case of data preparation, for example, college students are typically provided with nice, clean data sets (or datasets). In the real world in which we live, the system is eventually going to have to work with messy data. With regard to training the system, data sets containing both good and bad data are going to be required, and engineers often underestimate the amount of time needed to clean and prepare the data. Yes, of course the folks at MathWorks have tools to aid us in this task; otherwise, they wouldn’t bother mentioning it in the first place.
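Just to make this a tad less abstract, here’s a hypothetical data-scrubbing snippet of the kind I have in mind; the file name and column are figments of my imagination, but the functions themselves (readtable, rmmissing, filloutliers, and normalize) are standard MATLAB fare.

```matlab
% Hypothetical clean-up of a messy data set prior to training.
T = readtable('sensor_log.csv');           % load raw, real-world (i.e., messy) data
T = rmmissing(T);                          % drop rows containing missing values
T.Temp = filloutliers(T.Temp, 'linear');   % replace outliers by linear interpolation
T.Temp = normalize(T.Temp);                % rescale to zero mean and unit variance
```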
AI modeling is an interesting area. Different engineers think of different things when they hear the term “model.” For some, it’s a 3D mechanical CAD model; for others, it’s an analog or digital simulation model; and so forth. In the context of AI, the term model might be said to refer to the algorithms that are embodied in an artificial neural network (ANN). It’s important to realize that AI is not the whole system — it has to fit into a system and work with the rest of the system. Similarly, an AI model is not the end result — it has to fit into the final result. When it comes to AI modeling, you’re probably not going to start from scratch. Instead, you will typically base new projects on models that have already been created by experts in the field. There’s a lot of jargon to wrap your brain around here; for example, Caffe and TensorFlow are frameworks and libraries that are used to develop and train AI/ML/DL/RL models. By comparison, AlexNet and GoogLeNet are AI models in the form of convolutional neural networks (CNNs). To be honest, this stuff makes my head spin, which is why you should be talking to the folks at MathWorks and not listening to me.
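To give a flavor of what “not starting from scratch” looks like in practice, here’s a rough transfer-learning sketch. It assumes the Deep Learning Toolbox, the pretrained GoogLeNet support package, and a made-up folder of labeled images called 'myImages'; the idea is simply to swap out the network’s final layers and retrain them on your own categories.

```matlab
% Rough transfer-learning sketch: retrain a pretrained CNN on new categories.
net    = googlenet;       % pretrained GoogLeNet model (support package required)
lgraph = layerGraph(net);
numClasses = 5;           % e.g., five new categories of our own

% Replace the final learnable and classification layers with fresh ones.
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
                      fullyConnectedLayer(numClasses, 'Name', 'new_fc'));
lgraph = replaceLayer(lgraph, 'output', ...
                      classificationLayer('Name', 'new_output'));

% Hypothetical labeled images (one subfolder per class), resized to fit the net.
imds    = imageDatastore('myImages', 'IncludeSubfolders', true, ...
                         'LabelSource', 'foldernames');
augimds = augmentedImageDatastore([224 224], imds);
opts    = trainingOptions('sgdm', 'MaxEpochs', 5, 'InitialLearnRate', 1e-4);
newNet  = trainNetwork(augimds, lgraph, opts);
```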
I think we can all agree that if there’s one thing the guys and gals at MathWorks — and their tools — are known for, it’s simulation and test, so let’s skip to the deployment phase, which is where the AI is deployed into the real world. What this really means is that the AI has to be transformed into a form suitable for use in an embedded device or an enterprise system, anywhere from the edge of the internet to mega-servers in the cloud. Furthermore, the targeted compute engine might be a general-purpose processor in the form of a CPU, MPU, or MCU; a graphics processing unit (GPU); or a field-programmable gate array (FPGA). Whatever the target, the folks from MathWorks have you covered because MATLAB can export C/C++ code for use with CPUs/MPUs/MCUs, CUDA code for use with Nvidia GPUs, and hardware description language (HDL) representations for use with FPGAs.
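By way of a purely hypothetical example, generating C code from a MATLAB function with MATLAB Coder can be as simple as the following; the function name, input size, and options are ones I’ve invented for illustration, and GPU Coder and HDL Coder offer analogous flows for CUDA and HDL targets.

```matlab
% Hypothetical code-generation sketch using MATLAB Coder.
% Suppose myPredict.m contains the function we want to deploy:
%
%   function y = myPredict(x)  %#codegen
%       y = 2*x + 1;           % stand-in for a real trained model
%   end
%
% Generate a standalone C library from it, supplying an example input size:
codegen myPredict -args {zeros(1, 10)} -config:lib -report
```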
Last but not least, you may be wondering as to the difference between ML (machine learning), DL (deep learning), and RL (reinforcement learning). I know I am. I thought I knew, but I just had a quick Google while no one was looking, only to discover that the definitions I saw for RL were not the way I remember things. Fortunately, the folks at MathWorks provide free, two-hour, self-paced courses that they promise will clarify the basics: the Machine Learning Onramp, the Deep Learning Onramp, and the Reinforcement Learning Onramp.
I don’t know about you, but I was under the impression that the difference between ML/DL and RL was that once ML/DL models had been trained and deployed, they used their existing knowledge but learned no more. By comparison, I was of the belief that RL algorithms were capable of continuing to learn in the field, adapting to any changes in their environments. Sad to relate, this didn’t seem to be the picture returned by Google, which is why you might expect to see me attending the aforementioned Reinforcement Learning Onramp in the not-so-distant future. How about you? Do you already know all of this stuff, or are you eager to learn more and join me in riding the AI wave?