
What’s Yours Is Mine

MRAPI Lets You Manage Embedded Resources

Arrrr, Captain, we’re abeam o’ the port now. When do we start the shellin’ and the pillagin’ and the mayhem?

Now, just hold yer fire there, hotpants, we’ve got three ships in this here operation, and I don’t want no one with a fidgety trigger finger jumpin’ the gun, ya hear? And, before ya do, I’ll say it now: don’t ya be criticizin’ me mixed metafers.

Arrr, aye aye Captain. Oh, and we spotted an extra warship in port that we wasn’t expectin’. I’ll be needin’ someone ta fetch some extra cannonballs from below as we might be usin’ more than we thought. Can we get the other ships to do the same?

Well, I can order the swabs on the Purple Plague to get more, seein’ as she’s my ship and all. But the Pickin Plaice, well, that’s ol’ Deadbug’s. He runs his own show. I can’t be orderin’ his men around, and he takes advice from no one. He’ll have to figger it out hisself.

Arrr, aye aye Captain. I’ll signal fer positions on the Plague.

[Signals, with no response]

Arrr, Captain, I’m gettin’ no response from the lazy louts.

Ah crabs, I fergot ta have them suspend their usual naptime, fer criminey’s sake! Oh, why did I ever let them experiment with their so-called “ergonomic piratical practices”? Someone had oughta keelhauled me fer such a lamebrained, lily-kneed thing as that! Here we are, dead in the water, and we can’t do a thing about it until they power back up. Ah well, nothin’ ta do about it now, then. Just got ta wait. Have everyone stand down and do some pushups or somethin’.

Arrr, aye aye Captain.

 

Once upon a time, life was simple.* There was a CPU that did the computing, and programs could be written telling that CPU what to do. And those programs might need resources from around the CPU, and that was OK because there was an operating system that managed those resources, and a program just had to ask the operating system for what it needed and it all just sort of worked. (Well, except when you had to restart the system because, well, because you just had to, because it stopped working.)

And then we started adding cores.  But all the cores could remain under the control of a single operating system, which treated them all as equals and parceled out jobs to each of them and handled any of their resource needs.

And then we started getting persnickety about which cores were best for what purpose, and some cores started fretting about the fact that other cores might be hogging their resources right when they needed them and so they needed their own private resources and massage therapists. Which they might let someone else use if they’re nice and ask permission properly first. And other anarchist cores decided to tell their operating system, “Hey, you’re not the boss of me!” and fired the OS and went bare metal. Other cores decided to do the system a power-reducing favor and go take a nap for a while now and again.

Sounds like a bunch of five-year-olds or teenagers or pirates.

So you end up with a problem for your random program that might need to run on a specific core and access specific resources that may or may not belong to that core and may or may not even be present or awake when needed.

Add to that problem the context of an embedded system, where resources are scarce and time is critical.

Add to that problem the fact that some programs and systems are simple and don’t want to be bogged down by all of the rigmarole that might be needed for a complicated system.

And so, more or less, you are faced with the problem that the Multicore Association is addressing with its just-released Multicore Resource API (MRAPI) spec. It’s no coincidence that the name looks a lot like that of its sibling, the Multicore Communications API (MCAPI) standard of a few years ago.

The idea is to allow for a lightweight infrastructure that will give programmers a better way to manage resources. But, in deference to simple systems, it’s just an API: much of how the API is implemented is left to the system designer, allowing complexity where desired and avoiding it where possible.

MRAPI builds on the MCAPI domain and node concepts and adds three fundamental entities: synchronization, memory, and system resource metadata.
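For those who want to see what this looks like in code, here’s a minimal C sketch of a node checking in. Fair warning: the domain and node IDs are made up, and the argument lists are an approximation of the spec’s prototypes; the real ones live in the downloadable spec and its mrapi.h header.

```c
#include <mrapi.h>

#define MY_DOMAIN 1    /* hypothetical domain ID */
#define MY_NODE   42   /* hypothetical node ID   */

int main(void)
{
    mrapi_info_t   info;
    mrapi_status_t status;

    /* Check in to the MRAPI world as (domain, node) before touching resources. */
    mrapi_initialize(MY_DOMAIN, MY_NODE, MRAPI_NULL /* default parameters */,
                     &info, &status);

    /* ... create, lock, and attach resources here ... */

    mrapi_finalize(&status);   /* check out cleanly when done */
    return 0;
}
```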

Synchronization helps deal with a couple of pirates both trying to fire the same cannon at the same time. This lets them play nice and friendly-like: one gets exclusive use first, then the other. The standard defines three mechanisms: mutexes, semaphores, and reader/writer locks.

Mutexes (shouldn’t that be “mutices”?) represent the simplest, lowest-level synchronization: a simple binary lock. Think of it as “dibs.” No one else can access the resource until the lock is removed by the owner. A resource can actually be locked recursively – that is, the owner of the first lock can add further locks, which then have to be removed in order. Kinda has the feel of the twelve deadbolts on your average Manhattan apartment door.
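In C, calling dibs might look roughly like the sketch below. The mutex ID is invented, and the argument lists (including the lock key that makes recursive locking work) are approximations of the spec’s prototypes, so treat this as illustration rather than reference.

```c
#include <mrapi.h>

#define CANNON_MUTEX_ID 7   /* hypothetical system-wide mutex ID */

void fire_cannon(void)
{
    mrapi_mutex_hndl_t cannon;
    mrapi_key_t        key;
    mrapi_status_t     status;

    /* Create (or look up) the mutex guarding the cannon. */
    cannon = mrapi_mutex_create(CANNON_MUTEX_ID, MRAPI_NULL /* attributes */,
                                &status);

    /* "Dibs": wait here until we own it.  Locking again recursively would
       hand back another key, and the keys come off in order. */
    mrapi_mutex_lock(cannon, &key, MRAPI_TIMEOUT_INFINITE, &status);

    /* ... load, aim, fire ... */

    mrapi_mutex_unlock(cannon, &key, &status);
}
```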

Next step up are semaphores. These allow for allocation of multiple instances of a resource. If eight whatevers exist, then the semaphore can issue up to eight locks before someone has to wait for someone else to release a lock. Semaphores typically don’t know which resources are in use; they simply know how many.
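A counting semaphore for, say, eight cannonballs might look something like this; again, the ID and parameters are placeholders for the real prototypes.

```c
#include <mrapi.h>

#define BALL_SEM_ID 8   /* hypothetical semaphore ID */

void borrow_a_cannonball(void)
{
    mrapi_sem_hndl_t balls;
    mrapi_status_t   status;

    /* Up to eight holders at a time; number nine waits. */
    balls = mrapi_sem_create(BALL_SEM_ID, MRAPI_NULL /* attributes */,
                             8 /* shared lock limit */, &status);

    mrapi_sem_lock(balls, MRAPI_TIMEOUT_INFINITE, &status);   /* take one */
    /* ... load it ... */
    mrapi_sem_unlock(balls, &status);                         /* give it back */
}
```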

Finally, reader/writer locks address the situation where you are likely to have many readers and a few brief writers. Reader/writer locks have to handle the situation where many concurrent (virtually concurrent, that is) reads are interrupted by a write. So when a lock is requested, it’s either of a reader (shared) or writer (exclusive) type. Read locks can mostly be readily given, but, if there’s someone awaiting a write lock, then no new read locks can be issued until that write lock is granted.
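A reader/writer lock sketch follows, with the usual caveat that the mode constants and argument order here are our illustrative guess at the spec’s C API.

```c
#include <mrapi.h>

#define CHART_RWL_ID 9   /* hypothetical reader/writer lock ID */

void use_the_chart(int updating)
{
    mrapi_rwl_hndl_t chart;
    mrapi_status_t   status;

    chart = mrapi_rwl_create(CHART_RWL_ID, MRAPI_NULL /* attributes */,
                             16 /* reader limit, illustrative */, &status);

    if (!updating) {
        /* Shared: many readers at once, unless a writer is already waiting. */
        mrapi_rwl_lock(chart, MRAPI_RWL_READER, MRAPI_TIMEOUT_INFINITE, &status);
        /* ... read the chart ... */
        mrapi_rwl_unlock(chart, MRAPI_RWL_READER, &status);
    } else {
        /* Exclusive: no readers or other writers while we scribble. */
        mrapi_rwl_lock(chart, MRAPI_RWL_WRITER, MRAPI_TIMEOUT_INFINITE, &status);
        /* ... update the chart ... */
        mrapi_rwl_unlock(chart, MRAPI_RWL_WRITER, &status);
    }
}
```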

Next up are memory issues, and these come in two varieties: shared and remote. Shared memory may sound like a solved problem because it can be handled through POSIX calls. But that only works within one operating system instance. If you’ve got physical memory being shared by different threads under different operating systems, POSIX won’t work; this is where MRAPI comes in.
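Creating and attaching a shared-memory block might look roughly like the following; the ID, size, and the exact shape of the create call are placeholders, and in real life the pointer would be guarded by one of the locks above.

```c
#include <mrapi.h>

#define POWDER_SHMEM_ID   3      /* hypothetical shared-memory ID */
#define POWDER_SHMEM_SIZE 4096   /* bytes */

void share_the_powder_count(void)
{
    mrapi_shmem_hndl_t shm;
    mrapi_status_t     status;
    unsigned          *powder;

    /* One node creates the block; nodes under other OS instances attach to it. */
    shm = mrapi_shmem_create(POWDER_SHMEM_ID, POWDER_SHMEM_SIZE,
                             MRAPI_NULL /* node list */, 0,
                             MRAPI_NULL /* attributes */, 0, &status);

    powder = (unsigned *)mrapi_shmem_attach(shm, &status);

    /* ... read and write *powder like ordinary memory ... */

    mrapi_shmem_detach(shm, &status);
}
```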

Remote memory is a bit more complex. It’s conceptually similar to shared memory, except that, rather than a single shared block of memory, these are distinct memories. So core 1 may need to access memory that’s managed by the OS running on core 2. That’s moderately straightforward if the memory can be physically accessed by both cores. Because MRAPI operates above the operating systems, it can use them to get the appropriate addresses.

If core 2’s memory block isn’t physically accessible by core 1, then some DMA or other mechanism must be created to make a copy of the remote memory that is accessible to core 1. The API read and write calls support this, including scatter/gather reads/writes. The latter read from or write to multiple disjoint (but evenly spaced) regions. Each such read or write is a “stride”; a simple read or write is the degenerate case, with just one stride.
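A strided (scatter/gather) remote-memory read might look something like the sketch below. The parameter comments spell out the idea: how big each chunk is, how many chunks there are, and how far apart they sit on each side. The actual prototype and attach details should come from the spec, not from here.

```c
#include <mrapi.h>

#define LOG_RMEM_ID 5   /* hypothetical remote-memory ID owned by another core */

void gather_log_entries(void)
{
    mrapi_rmem_hndl_t rmem;
    mrapi_status_t    status;
    char              local_buf[8 * 64];

    rmem = mrapi_rmem_attach(LOG_RMEM_ID,
                             MRAPI_RMEM_ATYPE_DEFAULT /* assumed constant */,
                             &status);

    /* Gather 8 chunks of 64 bytes each: they sit 256 bytes apart in the
       remote block but land back-to-back in the local buffer.
       A plain read is just the one-stride degenerate case. */
    mrapi_rmem_read(rmem,
                    0,           /* offset into remote memory */
                    local_buf,
                    0,           /* offset into local buffer  */
                    64,          /* bytes per access          */
                    8,           /* number of strides         */
                    256,         /* spacing, remote side      */
                    64,          /* spacing, local side       */
                    &status);

    mrapi_rmem_detach(rmem, &status);
}
```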

Because you may be working with a copy of the original data, that copy may become stale. So flush and sync calls are provided to allow the application itself to maintain coherency.
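Continuing the remote-memory sketch above, the flush and sync calls might be invoked something like this. The names are our guess at the spec’s spelling; the point is that the application decides when the local copy and the remote original get reconciled.

```c
/* Hypothetical call names for the coherency hooks the spec describes. */
mrapi_rmem_flush(rmem, &status);   /* push local changes back to the remote block */
mrapi_rmem_sync(rmem, &status);    /* pull a fresh copy from the remote block     */
```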

Finally, a system resource metadata facility is provided. This is simply a way to build a map, typically as a tree, of what resources exist, and it is entirely implementation dependent. A static system could have an XML file or some other simple means of storing the configuration; this can then be read at initialization for use while the system is running. The configuration can even be compiled into the MRAPI implementation.

On the other hand, for a system where resources may come and go (due to plug/unplug events or perhaps power up/down events), the system designer can provide a means of querying live resources and then attach that to the API calls. The resource tree can be built only as a whole; you can’t add or remove branches from the tree incrementally, so if a resource can’t be accessed, it can’t be removed from the tree: the entire tree must be rebuilt. 
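Querying and walking the tree might look roughly like this. The filter constant and the tree-walking details are illustrative; the one thing to take away is that there’s no incremental update, so any change means fetching a whole new tree.

```c
#include <mrapi.h>

void survey_resources(void)
{
    mrapi_status_t    status;
    mrapi_resource_t *tree;

    /* Ask for the resource tree (a filter can narrow it, e.g. to just memories). */
    tree = mrapi_resources_get(MRAPI_RSRC_MEM /* hypothetical filter */, &status);

    if (tree != NULL) {
        /* ... walk the tree and read the attributes of interest ... */
        mrapi_resource_tree_free(&tree, &status);
    }

    /* No pruning or grafting: if a resource appears or disappears,
       free the old tree and call mrapi_resources_get() again. */
}
```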

 

So, armed with a custom piratical implementation of MRAPI, our ill-fated captain would have had a way to requisition more cannonballs from ol’ Deadbug. And he could have had a better way to figure out that the Purple Plague was powered off. Meaning he coulda relaxed a bit ‘steada standin’ around tappin’ his stump and chewin’ his hook… 

 

*Seems like we’ve said that before, in more than one context. No, not pining for life back on the prairie… much…

 

More info:  MRAPI Working Group (with download link) 

 

Oh, just gotta say it one more time (with feeling!): Arrrrrrrrrrrr
