
Cloud-Based Genetic Algorithms and Computer Vision Applications

Do you recall my earlier column When Genetic Algorithms Meet Artificial Intelligence? This reflected my discovery that the chaps and chapesses at Algolux are using an evolutionary algorithm approach in their Atlas Camera Optimization Suite. The idea here is that, when it comes to creating a new camera system, each of the components — lens assembly, sensor, and image signal processor (ISP) — has numerous parameters (variables). This means that a massive and convoluted parameter space controls the image quality for each camera configuration.
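Just to give a feel for how quickly those variables multiply up, here's a back-of-the-envelope sketch in Python. The parameter names and ranges below are purely illustrative on my part (they are not Atlas's actual schema), but even this handful of hypothetical ISP and sensor knobs multiplies out to quadrillions of distinct configurations.

```python
# A purely illustrative sketch (hypothetical parameter names and ranges,
# not Atlas's actual schema) of why the joint tuning space is so large:
# every component exposes many knobs, and image quality depends on their
# combined setting.

isp_params = {
    "denoise_strength":  range(0, 256),    # 256 settings
    "sharpen_radius":    range(0, 16),     # 16 settings
    "gamma_x100":        range(50, 301),   # 251 settings
    "awb_red_gain_x100": range(50, 401),   # 351 settings
}
sensor_params = {
    "analog_gain_x10": range(10, 161),     # 151 settings
    "exposure_us":     range(100, 33001),  # 32,901 settings
}

def space_size(*param_groups):
    """Count the distinct configurations across all parameter groups."""
    total = 1
    for group in param_groups:
        for values in group.values():
            total *= len(values)
    return total

print(f"Distinct configurations: {space_size(isp_params, sensor_params):,}")
# Even this toy subset yields roughly 1.8 x 10^15 combinations, far too many
# to explore by hand, and real camera pipelines have many more knobs.
```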

Traditional human-based camera system tuning can involve weeks of lab tuning combined with months of field and subjective tuning. The sad part of all of this is that there’s no guarantee of results when it comes to computer vision applications employing artificial intelligence (AI) and machine learning (ML). The problem is that tuning a camera system for a computer vision application is a completely different “kettle of fish,” as it were, as compared to tuning an image or video stream for human consumption. 

The bottom line is that humans are almost certainly not the best judges of the way in which an AI/ML system likes to see its images. The solution here is to let the AI/ML system judge for itself or, at least, to let Atlas determine how close the AI/ML system is coming to what is required, using human-supplied metadata as the “ground truth” for comparison. Furthermore, employing evolutionary algorithms allows Atlas to explore the solution space, fine-tuning the camera system's variables so as to automatically maximize the accuracy of the computer vision application that's using the system.
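If you fancy a feel for how this sort of evolutionary search hangs together, here's a minimal sketch in Python. I hasten to add that this assumes nothing about Algolux's internals; the fitness_fn callable is a hypothetical stand-in for the step that would process sample frames with a candidate parameter set, run the computer vision model on the result, and score the detections against the human-supplied ground truth. The point is that it's the vision task itself, not a human viewer, that judges each candidate configuration.

```python
import random

def evolve(fitness_fn, n_params=12, pop_size=32, generations=50,
           mutation_rate=0.1, sigma=0.05):
    """Evolve a population of normalized parameter vectors toward higher fitness."""
    population = [[random.random() for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank configurations by task accuracy (higher is better)
        # and keep the fittest half as parents.
        population.sort(key=fitness_fn, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # Uniform crossover: each gene comes from one parent or the other.
            child = [random.choice(genes) for genes in zip(a, b)]
            # Gaussian mutation, clamped to the normalized [0, 1] range.
            child = [min(1.0, max(0.0, g + random.gauss(0, sigma)))
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness_fn)

# Toy usage: the "best" configuration is simply the one closest to an arbitrary
# target. In the real workflow, fitness_fn would instead return detection
# accuracy measured against ground-truth annotations.
best = evolve(lambda params: -sum((g - 0.7) ** 2 for g in params))
print(best)
```

The real Atlas suite is, of course, vastly more sophisticated than this, but the core loop of propose, evaluate against the task, select, and repeat captures the essence of the evolutionary approach.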

A few months after the aforementioned column, I returned with a follow-up article: Eos Embedded Perception Software Sees All. I have to admit that this one was pretty amazing. We started by watching a video showing AAA Pedestrian-Detection ADAS Testing. Be warned, this is not for the faint of heart. I know that — after watching this video — if anyone were to ask me to step in front of an autonomous vehicle, I would be pretty confident they weren’t my friend.

The really scary thing about this video is that it was taken under optimum lighting conditions. Can you imagine how much worse things could get in adverse conditions like rain, hail, sleet, snow, or fog? And so we come to Eos Embedded Perception software. As described by the folks at Algolux, “Through joint design and training of the optics, image processing, and vision tasks, Eos delivers up to 3x improved accuracy across all conditions, especially in low light and harsh weather.” If you look at my earlier column, you’ll see various videos of this in action, but it was the following still image that really blew me away.

Eos-designed/trained camera system detecting like an Olympic champion (Image source: Algolux)

As you can see, this image shows a camera system designed/trained using Eos detecting people (purple boxes), vehicles (green boxes), and — what I assume to be — signs or traffic signals (blue boxes). As I noted in my earlier article, “I’ve been out walking on nights like this myself and I know how hard it can be to determine ‘what’s what,’ so the above image impresses the socks off me (which isn’t something you want to have happen in cold weather).”

Moving on, the reason I’m waffling on about all this here is that I recently heard from my mate Max at Algolux (I know, that confuses me too — sometimes it feels like I’m emailing or talking to myself — and Max doesn’t like that — LOL). Anyhoo, Max ended up sharing all sorts of interesting nuggets of knowledge and tidbits of trivia with me.

We opened with the fact that Algolux has been named to the 2021 CB Insights AI 100. This is a prestigious list showcasing the 100 most promising private artificial intelligence companies in the world. According to an associated press release, “The AI 100 was selected from a pool of over 6,000 companies based on several factors including patent activity, investor quality, news sentiment analysis, market potential, partnerships, competitive landscape, team strength, and tech novelty.”

Now, it’s no secret that cameras are one of the sensors of choice for system developers of safety-critical applications, such as automotive ADAS, autonomous vehicles and robots, and video security. However, as we alluded to earlier, camera development currently relies on expert imaging teams or external image quality service companies hand-tuning camera architectures. This painstaking approach can take months, requires hard-to-find deep expertise, and is visually subjective. As such, this process does not ensure that the camera provides the optimal output for image quality or computer vision applications.

As we also noted earlier, the Atlas Camera Optimization Suite automates the traditional months-long manual ISP tuning process to maximize computer vision accuracy and image quality in only days, thereby providing an improvement of up to 100x in scalability and resource leverage. The Atlas workflow also permits rapid evaluation of different camera sensors and lenses, whether to reduce cost, to maximize performance, or to adapt to changes in customer requirements.

So, you can only imagine my surprise and delight to hear the next tempting teaser from Max, which involved the fact that the Atlas Camera Optimization Suite is now enabled in the cloud. Even better, it supports an extended set of camera ISPs from Arm and Renesas, thereby allowing for further scalability.

SoC providers deploying the Arm Mali-C71AE and Mali-C52 can leverage the Atlas workflow to automate and significantly scale support for customers who are developing vision systems, predictably reducing ISP tuning time and program risks. For teams developing computer vision applications, Atlas can quickly determine the optimal Arm Mali ISP parameter set to achieve the highest vision accuracy, which is not possible with today’s hand-tuned ISP approaches.

Furthermore, the new cloud-enabled workflow supports the ISPs embedded in Renesas R-Car SoCs, such as the R-Car V3H and R-Car V3M for intelligent and automated driving (AD) vehicles, and the recently announced R-Car V3U ASIL D SoC for advanced driver assistance systems (ADAS) and AD systems.

In closing, as I mentioned in my previous column New Paradigms for Implementing, Monitoring, and Debugging Embedded Systems — in which we discussed the Tracealyzer and DevAlert tools from Percepio and the Luos distributed (not exactly an) operating system from Luos — I’m going to be giving a presentation at the forthcoming 2021 Embedded Online Conference (EOC). The topic of my talk is Not Your Grandmother’s Embedded Systems. The reason I mention this here is that, as part of my presentation, I will be mentioning Percepio, Luos, and — of course — Algolux.

Dare I hope to have the pleasure of your company at my presentation? As always, I welcome your comments and questions (preferably relating to what you’ve read here, but I’m open to anything) 🙂
