
Qeexo Takes Misery Out of EdgeML

Startup Takes a Dose of its Own Medicine

“It’s what you learn after you know it all that counts.” — John Wooden

Invention, meet your mother, Necessity. It’s an oft-told tale. A programmer hacks together a tool to solve a particular problem, then realizes the tool has broader applicability than he thought. He refines it a bit so it can be used over and over. Sometimes, the tool is even more valuable than the work product. And sometimes, that insight leads to a whole new company. 

Just like it did with Qeexo.

Sang Won Lee and his colleagues from Carnegie Mellon University had been working as programmers-for-hire for several years, specializing in clever ways to detect and characterize fingertip presses on smartphone screens using nothing more than small microcontrollers. Their technique relied on ML models and inference code running on ARM Cortex-M0 and M4 MCUs. They productized that work as FingerSense, and life was rosy.

Problem was, the team had to redo everything for each new project. Every screen is different, every vendor wants something different, every sensor suite is different. That meant lots of travel, lots of on-site tweaking with customer hardware, and lots of sleepless nights. It was a traveling minstrel show called ML at the Edge. 

Time to automate the process. They polished up their in-house tools and the next year cranked out 56 new variants of FingerSense without ever leaving the office. The light dawned. “Hey, we need to productize this thing.” Thus was born a new product: AutoML.

The idea is that you feed AutoML the data from your sensors (accelerometers, gyroscopes, thermometers, microphones, etc.) and let it build a model. A few sliders and radio buttons let you tweak sampling rate, weighting, your favorite algorithm, target MCU, permissible code footprint, and some other variables. Press the blue GO button and you’re done. AutoML hands back executable Cortex-M code, ready to download. It’s zero-coding for MCU-based ML. 
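To make that workflow concrete, here is a rough sketch, in C, of how generated inference code of this sort typically gets wired into an MCU application. The function names (sensor_read, model_classify) and the buffer sizes are hypothetical placeholders for illustration, not Qeexo’s actual API.

/* Hypothetical sketch of dropping tool-generated inference code into a
 * Cortex-M application. Names and sizes below are invented for
 * illustration only; they are not Qeexo's actual API. */

#include <stdint.h>

#define NUM_AXES    3     /* e.g., a 3-axis accelerometer */
#define WINDOW_LEN  128   /* samples per inference window */

/* Assumed to come from the board support package. */
extern void sensor_read(int16_t *sample, int num_axes);

/* Assumed to come from the generated model library. */
extern int model_classify(const int16_t *window, int num_samples);

int main(void)
{
    static int16_t window[WINDOW_LEN * NUM_AXES];

    for (;;) {
        /* Collect one window of raw sensor samples. */
        for (int i = 0; i < WINDOW_LEN; i++) {
            sensor_read(&window[i * NUM_AXES], NUM_AXES);
        }

        /* Run the pre-trained classifier the tool handed back. */
        int label = model_classify(window, WINDOW_LEN);

        /* React to the predicted class (application-specific). */
        (void)label;
    }
}

The pitch is that everything behind model_classify() — the feature extraction, the model, the footprint tuning for your target Cortex-M part — is generated for you rather than hand-written.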

Sang says that even if you are an experienced ML coder, AutoML is faster, and therefore more profitable, for programmers and their employers. There’s little to be gained from hand-crafting models and massaging input data; let the tool do it, guided by observable criteria like code size and latency. And if you’re not an experienced ML programmer, so much the better. AutoML can make you look like one to your boss.

Like a lot of online development tools, AutoML comes in three pricing tiers. The Bronze level is free (for now) and includes 2GB of online storage for sensor data and the resulting models. The Silver and Gold levels permit more simultaneous users and add more storage, more training, and more hardware support. Subscription pricing for the latter two tiers is negotiable.

Machine learning is terra incognita for most of us, like DSPs and GPUs of past years, or like VR now. There’s demand for the talent but no supply. That makes automated tools like AutoML vitally important. Experts may sniff that it’s like putting training wheels on a Ducati. If you don’t know how to operate the machine, stay off it. But product deadlines won’t wait for us to come up to speed. Nobody complains about using a C compiler instead of an assembler, or Verilog instead of a protractor and mechanical pencil. Tools like AutoML raise the level of abstraction and increase productivity by broadening the developer base. It’s a gateway to a whole new world of ML at the edge. 
