
Lightweight Embedded Multicore Task Management

The Multicore Association has released the latest of its multicore management APIs. The first such API it released was MCAPI, which allows data to be communicated throughout a potentially complex heterogeneous embedded multicore environment. The next was MRAPI, which deals with the management of resources, effectively extending resource management beyond what an OS provides within a single process.

This time it’s MTAPI, for managing tasks. Now… you may ask, “Why do we need yet another task-management capability when we have pthreads and OpenMP and MPI?” There are a couple of reasons:

  • Pthreads and OpenMP only work within a given process and/or assume a level of homogeneity. Heterogeneous AMP systems can’t use them. In other words, you can’t invoke a task on some different-ISA core that’s managed by a completely different instance of an OS (or no OS at all).
  • MPI is far too heavyweight for embedded applications that may have thousands or more tasks to manage.

While you might think that extending something like pthreads to cross process boundaries, courtesy of a quiet little runtime, would be straightforward for the programmer, MTAPI has in fact introduced some abstract notions that I had to struggle with a bit to understand. There’s no overall high-level description of the relationships with examples, so I pieced it together by reading the various bits of the standard and deciding I thought I knew what was going on. (Danger!)

So here’s my take on it. We’re used to simply invoking a task (typically a thread). But on complex systems, there may be any number of different “candidates” for implementing that task.

  • You may have multiple cores, each of which has a function that can implement the task.
  • These cores may or may not be the same – one may be a CPU, the other may be a DSP.
  • You may have dedicated hardware for implementing the function.
  • You may have a mix of cores and dedicated hardware accelerators, any of which could be chosen for a given execution.

So they’ve included an extra layer of abstraction, yielding three different notions:

  • An action is the “potential” for executing a task. Let’s say CRC generation is something you need to have done, and you have one CRC accelerator and four different cores, each of which has a “GenerateCRC” function. The accelerator and GenerateCRC functions are all actions. The software versions are registered with their local MTAPI runtimes; the hardware versions are built into the system. Each of these is a candidate with the potential for executing a specific run with a specific set of data.
  • A job is an abstraction of all of the different available actions for a given thing that needs to be done. So you might have one “CRC_job” representing the five different ways of generating a CRC. This supports the use of queues or load balancing. When you actually need to get a CRC, you don’t call one of the specific action functions/hardware; you call the job, and the system decides which action gets chosen to run the specific instance.
  • A task is a specific instance or execution of… something that needs to be done. (It’s really hard to describe this stuff casually without using words like “task” and “job,” which have specific meanings in this context… it’s why your head can end up spinning.) The task is the specific call you make when actually running; it refers to a job and, via the job, gets assigned to one of the actions tied to that job for execution (say, a software implementation on one of the cores). It can be cancelled while running; it can also be run as blocking (non-blocking is assumed as the typical usage). It can also be “detached,” meaning that it “floats free” and is no longer accessible by the calling code, in which case it can’t be cancelled or configured as blocking; it strikes me as similar to a detached thread. Tasks can also be grouped, with an entire group acting as a blocking mechanism. (I’ve sketched how these three pieces fit together in code just after this list.)
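To make the relationship concrete, here’s a rough sketch in C of how I read the flow, using the MTAPI calls as I understand them from the spec. The job ID, buffer names, and the GenerateCRC stub are my own inventions for illustration, and the exact names of the default-attribute constants may differ between implementations, so treat this as a sketch rather than gospel.

    #include <stdint.h>
    #include <mtapi.h>   /* MTAPI header; the actual path may vary by implementation */

    #define THIS_DOMAIN  1
    #define THIS_NODE    1
    #define CRC_JOB_ID   7   /* made-up job ID; must be agreed on by every node offering an action */

    /* This node's software action: one candidate implementation of the CRC job.
       (The body is sketched further below.) */
    static void GenerateCRC(const void* args, mtapi_size_t args_size,
                            void* result_buffer, mtapi_size_t result_buffer_size,
                            const void* node_local_data, mtapi_size_t node_local_data_size,
                            mtapi_task_context_t* context);

    int main(void) {
      mtapi_status_t status;
      mtapi_info_t info;

      /* Bring up the local MTAPI runtime on this node. */
      mtapi_initialize(THIS_DOMAIN, THIS_NODE, MTAPI_DEFAULT_NODE_ATTRIBUTES, &info, &status);

      /* Register the local software action against the CRC job; other nodes
         (or the hardware accelerator) attach their own actions to the same job ID. */
      mtapi_action_hndl_t action = mtapi_action_create(
          CRC_JOB_ID, GenerateCRC, MTAPI_NULL, 0, MTAPI_DEFAULT_ACTION_ATTRIBUTES, &status);

      /* Get a handle to the job: the abstraction covering all registered actions. */
      mtapi_job_hndl_t crc_job = mtapi_job_get(CRC_JOB_ID, THIS_DOMAIN, &status);

      /* Start a task: one specific execution. Note that we name the job, not an
         action; the runtime decides which action actually runs it. */
      char payload[] = "data to be checked";
      uint32_t crc_result = 0;
      mtapi_task_hndl_t task = mtapi_task_start(
          MTAPI_TASK_ID_NONE, crc_job,
          payload, sizeof(payload),          /* input arguments */
          &crc_result, sizeof(crc_result),   /* result buffer */
          MTAPI_DEFAULT_TASK_ATTRIBUTES, MTAPI_GROUP_NONE, &status);

      /* Tasks are non-blocking by default; block explicitly by waiting on the handle. */
      mtapi_task_wait(task, MTAPI_INFINITE, &status);

      mtapi_action_delete(action, MTAPI_INFINITE, &status);
      mtapi_finalize(&status);
      return (status == MTAPI_SUCCESS) ? 0 : 1;
    }

The point of the extra layer shows up in that mtapi_task_start() call: the code asks for the job, and whether the CRC ends up on the accelerator or on one of the cores is the runtime’s decision, not the caller’s.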

The other aspects of MTAPI struck me as more accessible. They cover how the details of all this get handled, as well as such things as whether or not memory is shared, parameter and result passing, status checking, and the like.
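To show what I mean by parameter and result passing, here’s a sketch of what the GenerateCRC action from the earlier snippet might look like. Arguments and results travel as plain buffers whose copying (or sharing) is the runtime’s problem, and every call reports back through its status argument. The signature is as I read it from the spec, and the byte-sum “CRC” is just a placeholder standing in for real CRC math.

    #include <stdint.h>
    #include <mtapi.h>   /* for mtapi_size_t and mtapi_task_context_t */

    /* Action function: the runtime hands it the task's arguments plus a result buffer.
       Whether those buffers are shared memory or copies shipped to another node is
       hidden from the action itself. */
    static void GenerateCRC(const void* args, mtapi_size_t args_size,
                            void* result_buffer, mtapi_size_t result_buffer_size,
                            const void* node_local_data, mtapi_size_t node_local_data_size,
                            mtapi_task_context_t* context) {
      (void)node_local_data; (void)node_local_data_size; (void)context;

      /* Placeholder "CRC": a simple byte sum standing in for the real algorithm. */
      const uint8_t* bytes = (const uint8_t*)args;
      uint32_t crc = 0;
      for (mtapi_size_t i = 0; i < args_size; i++) {
        crc += bytes[i];
      }

      /* Results go back through the caller-supplied result buffer. */
      if (result_buffer_size >= sizeof(crc)) {
        *(uint32_t*)result_buffer = crc;
      }
    }

On the calling side, status checking is just a matter of looking at the status value after each call; once mtapi_task_wait() returns, comparing status against MTAPI_SUCCESS tells you whether the task actually completed.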

You can get more info from their release, and the full API (as well as a “nutshell” document) is now downloadable from the Multicore Association website.
