Sometime during the last twelve months at one of the typical industry conferences, I took a typical briefing. To us in the journalism world, a “briefing” is when a company sits down with an editor like me and tells a story. Their hope is that we will retell that story. Because if the story comes from their pen, it sounds like marketing; if it comes from our pen, it sounds like journalism, which gives it an extra air of legitimacy.
Of course, if you’re like me or my EE Journal colleagues, you probably take their story, invert it, fold it ten ways, look at that one pointy bit, and focus on that. It all comes out very different from the original (but hopefully the facts are correct).
Such briefings can be a good opportunity for a conversation. The questions we editors ask can give companies useful feedback about how their messaging is working, and, if we have experience in the industry, we may even be able to have some substantive technical back-and-forth. Good for both of us.
Occasionally, we’ll get an actual demonstration as part of the briefing. So, at the particular briefing I mentioned, a company that’s been working on some tough stuff for a while showed me their tool – an actual working version of it.
It happened to be an area with which I’m relatively familiar, which was good: it was pretty deep and arcane. And at one point, the presenter showed how the tool displayed some piece of information. I happened to know what that info meant, but I asked, “OK – how will an actual designer use that information?”
If this had been a vinyl record playing, the next sound would have been a needle flying off the record and then silence. I hadn’t meant it as a trick question, although I had my suspicions that their tool was presenting the user with a lot of information that was not useful, and I picked this as an example to test that suspicion. The suspicion was correct, and much of the tool had that flavor.
I then had a decision to make. I don’t like to pan products unless they’re really absurd and worthy of derision. But that wasn’t the case here. It was an earnest company working its tail off on what was probably one of those “wow, this is really harder than we expected” problems. And they were hoping that they were getting close to something that would be usable. I thought – and still think – that they have a long way to go, but that they have a chance to get it right. And so I have no desire to tarnish them now, ahead of the time when they might indeed be ready to go.
In addition, and perhaps more importantly, I’ve seen this before. More than once. This isn’t a case of some company that took a wrong turn; this happens over and over again, and the more I thought about it, the more I realized that there’s a bigger picture here that’s worth discussing.
And so I decided not to name the company and instead talk about what seems to happen with certain kinds of development tools.
This is certainly not a new phenomenon. I remember running into it for the first time when I worked at a large FPGA house. I was a few layers up in management at the time, so I wasn’t a practicing user of their design tools. But those tools had historically been known as extraordinarily easy to use, making it really clear where you were in the design process and, most importantly, what the next step was at any given point.
But all tools outlive their usefulness: platforms change, expectations change, and, mostly, FPGAs get bigger and outgrow their old tool infrastructure. And so you end up having to redo your tools. I sat down with an early version of the newly revamped FPGA tool suite, and – the good news – it had a modern-looking (for the time) interface. I could open a design and, playing with the menus, I could see dozens of ways of analyzing the design. This had not been possible with the old tools.
But what I couldn’t see was what to do next.
Way down in item 23 on menu 8 (making those numbers up, but you get the point) was an entry called “Compile.” OK… maybe that was what to do next, although I didn’t know if there were any other items that needed doing before the design was ready to be compiled. “Compile” shared equal standing with about 30 (guessing) other things to do.
So while the tools looked “cool” and comprehensive, they were anything but intuitive or obvious. They were static – there was zero workflow orientation. As it turns out, the eventual launch of this family of tools was disastrous (I was in the meeting where the CEO made it pretty clear that anyone who didn’t think the tools were ready for release would be chewed up and spit out, and then the tool would be released).
Useless – or use-case-less – data
I’ve seen this again in other situations, and here’s my teardown of what’s going on. The whole process of creating design tools is about taking some intricate domain and abstracting away the hard parts and then – and this is key – presenting the information in the language or paradigm of the user. How do you do that?
Well, the first thing you have to do is come up with models of the conceptual entities you’re working with. For an FPGA, you have to deal with abstract logic designs on the one hand and specific representations of the available hardware on the other. Those models take a lot of work, and getting them wrong is really easy.
I’ve sat down numerous times to sketch out what seemed like a rational model of some process. But as soon as I start to think through use cases, I find that my “logical” approach is nowhere near appropriate. If you think in class terms, I might have some of the data right, but the interfaces would be all wrong. And the interfaces can impact how you want to organize the data.
If you don’t figure this out early, you can spend a lot of time working on data models – and then have to rip-and-reroute the whole thing numerous times when you start thinking through the use cases. It can be tempting to argue that first you need to get the data right and then, if you do a good job, the use cases will simply fall into place. That might be possible, but more often than not, things don’t seem to play out that way.
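To make that concrete, here’s a minimal, hypothetical sketch (in Python, with invented names – this isn’t anyone’s actual tool) of how a perfectly “logical” data model can end up with the wrong interfaces once a real use case shows up:

```python
# Hypothetical sketch: a "logical" first cut at modeling placed FPGA logic,
# organized around the hardware resources because that's how the data
# naturally decomposes. All names are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class LogicCell:
    x: int
    y: int
    occupied_by: Optional[str] = None  # name of the user's logic element, if any


@dataclass
class DeviceModel:
    cells: List[LogicCell] = field(default_factory=list)

    def utilization(self) -> float:
        # Easy to answer from this organization: how full is the device?
        used = sum(1 for c in self.cells if c.occupied_by)
        return used / len(self.cells) if self.cells else 0.0


def locate(design_element: str, device: DeviceModel) -> List[Tuple[int, int]]:
    # The first real use case -- "where did MY adder end up?" -- is keyed by
    # the *design*, not the device. It works here only as a linear scan; a
    # model built around the use case would keep a name -> cell index instead.
    return [(c.x, c.y) for c in device.cells if c.occupied_by == design_element]
```

The data isn’t wrong; it just isn’t organized around the questions the user actually asks.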
Now: overlay onto that difficulty the fact that many of these companies are trying to solve problems that are themselves intrinsically hard. It’s not simply a matter of “I know the process, I just need to find the most efficient model”; it’s “Here’s a model of how I think the process works; let’s try it out.”
This can go on over and over as you refine your models and throw them away and try something new. Long days of work become long nights of work; updates to the Board become explanations as to why things are way further behind than had been forecast. Let’s face it: these problems are always harder than they seem at first. If it were easy…
All of which is to say that getting the foundation of your tools in place is hard. You will probably struggle with the models and iterate often. At some point, you get your model and your basic internal algorithms to a point where – TA DAAHH – they work! And you rush gleefully out with well-deserved pride to show what you’ve done.
Unfortunately, at this point, you don’t really have a product yet. You have an internal tool that you’ve used to validate your work. Perhaps everything is command-line, with little in the way of graphics. You’ve got verbose logging running – you use it to confirm that you got the right answer by doing things right, not by accident. And when you show your customer, you can point to those verbose lines of monospace type flashing down the black 1970s-era teletype-style window as proof that something is actually happening. And you can create crude graphics with numbers and data all over the place, showing that, yes, you really did model this properly and everything is working. Doesn’t this look great?
Or perhaps your graphics look cleaner, but, instead of showing the user how to proceed through the project, they focus on umpteen ways to analyze the FPGA design – which is really just about showing off the underlying data model.
Translation please
The problem is, the user has no idea what you’re talking about. If he or she understood the inner workings of your problem and solution, they wouldn’t need you – they could do it themselves. If you are really solving a problem for them, then you need to bury all that crap – er, useful information – and hide it under a presentation layer.
This presentation layer often gets short shrift. This is where you translate what’s happening inside the tool into something useful to the user. This is where you move from the internal problem-modeling paradigm into one that fits the user’s mindset and workflow.
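As a sketch of what I mean – again, hypothetical Python with invented names, not anyone’s real product – the presentation layer’s job is to turn internal results into the user’s vocabulary and, ideally, into a next step:

```python
# Hypothetical sketch of a thin presentation layer. The internal result is
# what the engine needs; the presented string is what the designer needs.

from dataclasses import dataclass


@dataclass
class InternalTimingResult:
    # What the engine produces: correct, complete, and meaningless to most users.
    node_id: int
    model_iteration: int
    raw_slack_ps: int  # picoseconds


def present(result: InternalTimingResult, signal_name: str) -> str:
    # What the user sees: their signal, their units, and what to do about it.
    slack_ns = result.raw_slack_ps / 1000.0
    if slack_ns < 0:
        return (f"'{signal_name}' misses timing by {-slack_ns:.2f} ns. "
                "Next step: review its constraints or re-run placement.")
    return f"'{signal_name}' meets timing with {slack_ns:.2f} ns to spare."
    # Deliberately NOT shown: node_id and model_iteration. They only prove
    # that the engine works -- and every one of them is a future support call.
```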
Remember that piece of data I asked about in the display I saw in the briefing? The one whose main value was to prove that the tool was modeling and analyzing things correctly? A working engineer using the tool would have no specific use for the information – it was, more or less, trivia. No big deal? Well, perhaps, unless you’ve filled the screen with it – which is easy to do.
But even so, why present something that the user won’t need and probably doesn’t understand? You’ve probably seen what happens when you do this in a presentation to management. You paste into your slide some chart that has an important data point on it, but, in your haste, you neglect to remove all of the data points that don’t matter. And what happens? Some high-up manager locks onto one of the other data points that you didn’t remove, and the entire discussion gets hijacked over something that was completely irrelevant.
Same thing with the tool. And think of it this way: everything that goes on the screen becomes a potential support question. “What does this data mean?” can be a poor use of customer support hotline time if the data isn’t even really useful. (It’s also a poor use of their time if it is useful, but then it simply means that the data is presented in a way that’s not clear.)
Ideally, what you need is one person or team well versed in how the user will use the tool. That team will put together prototypes to run by your lead customers. (You do have a lead customer, right? If not… you may be in trouble… but that’s a separate discussion.) And they will probably iterate numerous times – what sounds like a good idea initially often proves not to be.
The good news is that, between OS display capabilities and browser-based graphics, you can build some really innovative visualization schemes that aren’t necessarily locked to the usual Windows (or Mac) interface standards. But expect to go through a lot of versions as your customers-in-waiting give you feedback.
This presentation layer has nothing to do with what the bulk of your developers are working on, which is the underlying data model and the algorithms. At some point, you have to marry the two. (To be sure, that marriage should be planned early so that, when the time comes, it actually works…)
At that point, when all the models have been proven to the satisfaction of the developers, the results can be shown to your customers – using the presentation layer. You need to get outside of your own technology and present those results from the standpoint of your user’s technology.
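One way to plan that marriage early – and this is just a sketch of the principle, with invented names – is to agree on a narrow interface between the engine and the presentation layer, so the presentation team can prototype against a stub long before the real models are done:

```python
# Hypothetical sketch: a narrow contract between engine and presentation.
# The presentation prototype runs against a stub while the real engine is
# still being iterated; the two are "married" later by swapping the stub out.

from typing import List, Protocol


class AnalysisEngine(Protocol):
    def worst_paths(self, count: int) -> List[str]:
        """Return user-visible names of the worst timing paths."""
        ...


class StubEngine:
    # Canned data for prototyping the user experience and reviewing it with
    # lead customers before the real models exist.
    def worst_paths(self, count: int) -> List[str]:
        return ["cpu/alu/carry_chain", "ddr/calib/fsm"][:count]


def render_summary(engine: AnalysisEngine) -> str:
    return "Start here: " + ", ".join(engine.worst_paths(3))


print(render_summary(StubEngine()))
```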
“Think like your user” probably sounds trite and obvious. But when you consider what that really means, it’s not. The “solution” I posit above may be overly simplistic in practice, but, in my opinion, it’s important in principle, regardless of how you actually execute it. Problem is, when you’re a scantily funded start-up trying to make the most of your seed money, you’re likely to forgo frequent visits to customers and delay hiring any developers who aren’t working on the core technology.
But what invariably happens when you cut that corner is that you arrive at the point where you say, “We’re ready!” And then you find that you have a year or two of additional unplanned work to turn an internal data-model-analysis tool into something that your customer can actually use. And, in fact, you resist this realization for as long as you can – no one wants to tell that to the Board! But resistance merely delays the inevitable.
I guess this is simply a really long, drawn-out way of saying: Don’t mistake your data models and algorithms for an actual tool. They are only the engines. They are necessary but far from sufficient. An independent presentation layer couched in customer terms has to be part of the design.
Do you have examples of tools that were too reflective of internal technology instead of the user’s mindset?