Officer McReady scratched his head in confusion. “OK, tell me again why you think Bostwick stole the papers?”
“Because, well, first of all I thought he was going to, see?” he started.
“Why did you think that?”
“Well, something was up. I had some of the documents on my desk, and Debra had some on her desk, and then they were gone. And we found them on his desk. That was just weird; why was he gathering these things up? It wasn’t even his project!”
“OK, but just because it’s weird doesn’t mean he stole the papers.”
“Yeah, but after we all left, he was the only one here. I had this icky feeling in my gut about this, so I sorta hung around outside. After about 15 minutes, this courier guy shows up and leaves with a big manila envelope fulla papers. And about 5 minutes after that Bostwick takes off. And now the papers are gone.”
“OK, but did you actually see what was in the envelope?”
“No, it was closed.”
“So we have no proof that the envelope had these papers in it.”
“Well, no, but what else would it be?”
“It could have been anything; you guys got more papers in here than all the bathrooms in Shea Stadium put together. I never seen so many papers. I have no idea how you guys keep it straight. Hell, it’s probably lost, not stolen. Besides, if you were so sure something was going to happen, why didn’t you stay in the building, or go back in?”
“Because if I did, he’d know, and obviously he wouldn’t take them then, would he… I mean, what am I supposed to do? If I watch him, he won’t do it. If I don’t watch him, then I can’t prove he did it. Either way, I lose!”
It’s always hard to figure out what’s happening when you can’t see what’s happening – or when trying to see what’s happening changes what’s happening. And this problem is getting worse as we stuff more things inside chips. Once upon a time, we had circuit boards with traces on top and bottom and good old-fashioned oscilloscope probes and logic analyzers that you could touch to traces and see what was going on.
Now we stuff most of those traces inside chip packaging, and most of the ones that are left over get buried in the middle layers of boards where you can’t get to them. If you want to figure out what’s going on inside the chip, it’s basically impossible from the outside. You might be able to infer something based on what you see going into and coming out of the chip, but inference isn’t always proof.
So we go inside. But that’s only the beginning. Even from the inside, it’s not always easy. And there are a thousand different situations that this could apply to, but here we’re going to look at debugging and optimizing Linux programs on SoCs and the boards they’re on. And in particular, trying to figure out what’s going on in the kernel. The problems are getting visibility into the kernel and then avoiding heisenbugs – bugs that disappear when you try to study them.
Just to baseline us all, the Linux operating system is divided into two distinct spaces. Most mere mortals operate in the user space; this is where applications are written or where higher-level services, modules, and interfaces can be added and modified. This lies over the kernel, which is where Linux keeps its deepest, darkest secrets. The interface between the kernel and user-land is stable from version to version. The details of what goes on in the kernel, however… stay in the kernel… and may vary widely from version to version. When you cross the kernel event horizon, you enter a maze of twisty passages, all alike. OK, maybe not that bad, but put it this way: the kernel provides a level of stability so that, if it’s working, then at least the system will be more or less up. If you mess up the kernel, then all bets are off. It’s like editing the registry in Windows: beware all ye who enter there, for verily can ye screw up your system big-time, with few clues as to how to recover.
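As a trivially small illustration of that boundary from the user-space side (a hypothetical sketch, not anything MIPS-specific): a user-space program never pokes at kernel internals directly, it only asks through system calls, and the system-call interface is the part that holds still from version to version.

```c
/* User space talks to the kernel only through system calls like
 * getpid() and write() -- the stable interface described above;
 * what happens on the other side of that boundary is the kernel's
 * business and can change from version to version. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = getpid();   /* crosses into the kernel and back */

    printf("Running in user space as PID %d\n", (int)pid);
    return 0;
}
```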
So if we’re trying to debug Linux kernel code, how can we do it? Well, the simple way we learned in college is to instrument the code: that’s a fancy way of saying, go add some lines that will show you what’s happening as the code runs. The most popular debugger in the world is called “printf.” Of course, here you start to get a flavor for things: printf (or some other such way of viewing values) requires the OS to work, which could be a problem if you’re debugging the OS.
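To give printf debugging a kernel-flavored face, here’s a minimal sketch – the module and its names are hypothetical – of the “add some lines” approach using printk, printf’s counterpart inside the kernel:

```c
/* "printf debugging," kernel-style: a throwaway module (names made up
 * for illustration) whose only job is to shout into the kernel log. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>

MODULE_LICENSE("GPL");

static int __init noisy_init(void)
{
    /* printk is the kernel's printf; the message only reaches you if
     * the kernel is healthy enough to deliver it -- the catch noted above. */
    printk(KERN_DEBUG "noisy: loaded at jiffies=%lu\n", jiffies);
    return 0;
}

static void __exit noisy_exit(void)
{
    printk(KERN_DEBUG "noisy: unloading\n");
}

module_init(noisy_init);
module_exit(noisy_exit);
```

Load it with insmod and read the result with dmesg – which, as noted, assumes the kernel is in good enough shape to get the message there in the first place.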
But although printf may work, that’s just the beginning of things. Kernel debugging did improve with the advent of KGDB (prior to that, there weren’t really any debuggers that worked on the kernel), but a debugger messes with the code – it’s invasive. Changing the code may change the behavior of the system. Things like breakpoints can destroy real-time behavior and disrupt interrupts that may in fact be part of the very problem you’re chasing.
Another important task – especially for something as fundamental as a driver or other kernel module – is optimizing performance, memory space, etc. Profiling is one way of determining how much time is spent in various parts of the code. And again, a quick and dirty way of doing that is to add some code that makes a “note” somehow that the code passed a particular point or ran a function or something like that. Kind of like embedded Twitter. Even if everything is working, this will typically require sending this note out to memory – there’s no place to store all these mile-markers in the actual processor itself. Storing to memory takes time, so in addition to the time taken simply to execute the added lines of code, you may also have to wait until each of the notes is stored.
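To make the note-taking idea concrete, here’s a minimal sketch – the names, buffer size, and demo loop are all invented for illustration; this is not anyone’s actual profiler – of a marker that stamps an entry into a RAM buffer every time the code passes a point of interest:

```c
/* A sketch of "mile-marker" profiling: each marker stores an ID and a
 * timestamp into a buffer in memory. Every call costs a store, which
 * is exactly the overhead discussed next. All names are hypothetical. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/types.h>

MODULE_LICENSE("GPL");

#define MAX_MARKS 1024

struct trace_mark {
    u32 id;   /* which mile-marker fired */
    u64 ns;   /* when it fired */
};

static struct trace_mark marks[MAX_MARKS];
static unsigned int mark_count;

/* Sprinkle calls to this at points of interest in the code under study. */
static void trace_here(u32 id)
{
    if (mark_count < MAX_MARKS) {
        marks[mark_count].id = id;
        marks[mark_count].ns = ktime_get_ns();
        mark_count++;
    }
}

static int __init mark_demo_init(void)
{
    int i;

    for (i = 0; i < 5; i++)
        trace_here(i);          /* pretend these are real code paths */

    pr_info("mark_demo: recorded %u marks, last at %llu ns\n",
            mark_count, marks[mark_count - 1].ns);
    return 0;
}

static void __exit mark_demo_exit(void)
{
}

module_init(mark_demo_init);
module_exit(mark_demo_exit);
```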
So again, you’ve perturbed the behavior of the system, such that while you might figure out how many times a function executed, it becomes harder to determine the absolute time spent because you somehow have to subtract out the time spent storing your data. And if you have interrupts and real-time requirements and multiple processors working on independent tasks, the mere act of trying to observe behavior may affect the behavior.
So we’ve got two problems on our hands. First is simply the ability to observe unobtrusively. Second is the ability to get the data we want to analyze out of the chip and off somewhere else. The first brings us to the domain of on-chip instrumentation (OCI); the second gets to the topics of both how you interface to the chip and what – and how much – data is sent off-chip.
MIPS provides an interesting way to look at this, since they’ve had a couple of related new products over the last several months. While this isn’t unique to them – ARM, for example, has embedded trace modules and other IP for things like this – MIPS dove headlong into the world of OCI with their acquisition of First Silicon Solutions (FS2) in 2005. FS2 had been the main visible driver of OCI and the debug solutions it enabled, and that was put to use for the MIPS processor.
In a plain-vanilla processor, there’s no special hardware that can help debug or profile code; as we saw, any attempt to see what’s going on has to use the hardware that’s supposed to be executing the code. So the whole idea behind OCI is to add separate hardware to snoop on what’s going on and report back to headquarters. Exactly how you do that leaves much to the design imagination. Watching instructions is different from watching a bus is different from watching what goes on in some custom logic. Because this extra hardware takes space, you want just enough, not too much.
This decision of how much is enough also relates to the second problem we have to solve: getting the things we see out of the chip. Depending on how much we’re monitoring, there could be a lot of data. We can use a high-speed interface to ensure that the data can all be streamed off in real time, or we can add buffering so that we can monitor for a while (without overflowing the buffer) and then let a slower interface drain the buffer. Clearly, buffers are going to take more space.
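To put rough numbers on that trade-off – the figures below are made-up assumptions, not anything MIPS has specified – here’s the back-of-the-envelope arithmetic:

```c
/* Back-of-the-envelope sketch: how long an on-chip trace buffer lasts
 * when trace data is generated faster than the debug port can drain it.
 * All three numbers are invented for illustration. */
#include <stdio.h>

int main(void)
{
    double capture_MBps = 200.0; /* assumed trace generation rate     */
    double drain_MBps   = 2.0;   /* assumed debug-port drain rate     */
    double buffer_KB    = 16.0;  /* assumed on-chip trace buffer size */

    double net_MBps  = capture_MBps - drain_MBps;        /* net fill rate */
    double window_us = (buffer_KB / 1024.0) / net_MBps * 1e6;

    printf("Capture window before overflow: ~%.0f microseconds\n", window_us);
    return 0;
}
```

With those invented numbers, the buffer buys you well under a tenth of a millisecond of full-rate capture before it overflows; a bigger buffer or a faster drain stretches the window.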
The other element affecting how fast data needs to stream or how big the buffer (or buffers) has to be is how the data is represented. For example, when tracing instruction flow, you might feel you have to capture each instruction as it happens and send it out so that you can recreate what happened outside, along with a timestamp so you know when things happened. On the other hand, if some software tool has access to the code being executed, then you can assume that whoever is going to analyze the reported data knows what’s supposed to happen, or what the options are. In that case the instrumentation simply has to indicate whether an instruction executed successfully, or whether a branch was taken, for example, without actually saying what the instruction was or what the branch target was. This reduces the amount of data to be streamed off.
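One way to picture that shrinkage – the encoding here is purely hypothetical, not MIPS’s actual trace format – is to record a single taken/not-taken bit per branch and let the host tool, which has the binary, reconstruct the path:

```c
/* Hypothetical compressed-trace sketch: one bit per branch outcome,
 * packed into bytes, instead of a full program counter plus timestamp
 * per instruction. The host tool rebuilds the path from the binary. */
#include <stdint.h>
#include <stdio.h>

static uint8_t branch_bits[64];   /* packed taken/not-taken bits */
static unsigned int branch_count;

static void record_branch(int taken)
{
    if (branch_count < 8 * sizeof(branch_bits)) {
        if (taken)
            branch_bits[branch_count / 8] |= 1u << (branch_count % 8);
        branch_count++;
    }
}

int main(void)
{
    /* Pretend ten branches executed with an alternating pattern. */
    for (int i = 0; i < 10; i++)
        record_branch(i % 2);

    printf("%u branches captured in %u bytes (vs. ~8 bytes each for PC + timestamp)\n",
           branch_count, (branch_count + 7) / 8);
    return 0;
}
```

Ten branches land in two bytes instead of the eighty or so it would take to ship a program counter and timestamp for each one – the kind of reduction that keeps a modest port or buffer from being swamped.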
This points to the importance of tools for interpreting, in a human fashion, the cryptic goings-on being reported from the chip. The extra context may be knowledge of the program being executed, of the symbol tables, or who knows what. With a little bit of data coming from inside the system, a lot can be learned by correlating that data with other info. At a very low level, OCI can provide some of that data; at a higher level, the data can be taken without OCI.
As an example, MIPS has announced three tools this year, two of which are used for profiling and one for debugging. For non-invasive real-time kernel monitoring at the lowest level, they have a Hot-Spot Analyzer that takes advantage of OCI and streams low-level profiling information out on their EJTAG port (which is basically JTAG outfitted to allow debug usage). For profiling at a higher level, they have a Linux Event Analyzer that sits alongside Linux and, without OCI, captures all events that Linux issues and reports them out via an Ethernet port (which is much faster than the JTAG port, although USB and even RS-232 are also options).
And for debugging, MIPS has an arrangement with Viosoft for their Arriba Linux Debugger module, called VMON2, which allows full kernel debugging without changing any kernel code or pre-empting the kernel; in other words, it’s also non-invasive. It works at a high enough level that it doesn’t require the OCI circuitry. It streams its data out via Ethernet for speed, although for really low-level debugging during things like board bring-up, the EJTAG port can be used as well.
All three of the products rely on host tools to interpret and present the volumes of data that may be spewed out of the chip or the board, so that not only can you understand what the data mean, but you can actually do something about it.
It’s clearly too late to use this sort of thing to figure out who stole those papers, but, for the future, you can take a cue from Tony Blair and the instrumentation of London. If you instrument your office with enough cameras and some wireless interconnect to shuttle the data off to that totally obvious-looking black van across the street, maybe you can figure out who’s messing with the papers. Without affecting the outcome. On second thought, if you don’t want to affect the outcome, paint the van.
Link: MIPS Tools