
Tyranny Take Two

Software Scheduling Revisited

If any of this seems a bit confusing, you might want to re-read part one of this series, “Tyranny of the Metaphor,” where we discussed the problems with planning software projects using conventional methods like PERT charts and Gantt diagrams. This time, however, we’re going to roll up our sleeves and start solving the problem one piece at a time. As with almost any good therapy, we need to look deep inside ourselves first. As a group, software engineers are terrible at giving accurate estimates for work to be done, and we need to tackle that issue effectively in order to earn the credibility with management that will allow us to bring about real change in our embedded software development operations.

In the previous article, I (jokingly) proposed an experiment. I said that if you chose a software engineer and a software development task at random, then asked the engineer for an estimate of time required to complete that task, you’d most likely get the answer “about three to four weeks.” Surprisingly, several of you wrote that you’d tried just that experiment, and it had worked remarkably well. If we were publishing a conference research paper on the topic, we would have just proven that our results were repeatable. How rewarding! How sad.

We also explained that this trick works because of what we called the 90/10 rule. Software appears to be about 90% complete when it is, in reality, only 10% done. This 90/10 illusion often affects even the engineer himself, lulling him into a sense of security when the system he’s developing seems well on the road to completion only a short time into the actual development and debug process. If you regularly believe you are 90% done when you’ve really just started, your sense of true project time is blown away, and your ability to accurately predict schedules goes out the window with it.

So, what’s the big deal? We’re paid to write software, not schedules, right? Unfortunately, we software engineers are usually at the top of the victim list when our estimates are off. It plays out like this: We give estimates. Our project leads roll them into group schedules. Management assigns budget, and marketing starts making project launch plans based on the “official” schedule. After the first few weeks, we’re looking great. We can already give a “demo” of the system (with lots of the less important functions stubbed out, of course) even though we’re only 20% of the way into our schedule.

Next we settle in for the long haul. Weeks go by with no externally visible signs that progress is being made. Bugs are being fixed at a ferocious rate, and most of those stubs are being replaced with real code. In the process, of course, several design problems are discovered that require us to go back and rewrite some of that great, simple, original stuff, replacing it with more mature, enlightened code. We soon find ourselves running out of time in the schedule with weeks of debug left and even a few major functions and features still to add. Our weekends disappear, and we start to subsist on pizza slipped under our office door at 2 AM. Our family considers filing a missing persons report.

Finally, we move into the “phase of doom.” We’re several weeks past our deadline, and even management now knows that we’re behind. Of course, they will never think to blame us for bad estimates in the planning phase. They always assume that the estimates were correct. Instead, they begin to think that we’re not competent programmers or that we’re just not working hard enough. They left the office to play golf at 4:30 PM, didn’t know about our all-nighter, and quietly wondered why we were “slacking” when we came into the office at 9:15 AM, even though they’d been there since 8:45. Marketing starts to frown at us in the hallway and make openly rude comments about us at lunch in the company cafeteria. We don’t hear them, of course, because we’re still at our desks, eating leftover pizza from the night before while we continue to work at a frantic pace.

When our project ships later than anticipated, revenues don’t start when the company expected, and the downsizing begins (to prove to the investment community that the company takes the quarterly shortfall seriously), guess who’s first on the chopping block? Here’s a hint: It isn’t that guy down the hall who padded all his estimates by 4X and only worked 7.9 hours each day.

How do we solve this problem? Successful software estimation is a combination of psychology and accounting. We need something we can measure and some feedback mechanism (usually experience from previous projects) that will help us calibrate our estimate yardstick. The trick is to find a set of metrics that is reasonably accurate and that can be applied from experience, even though a new project may be dramatically different from our previous efforts.

In 1985, I attended a two-week “Programming Project Management Course” that gave detailed instructions on estimating software development time based on lines of code. According to this less-than-practical class, all one needed to do was to correctly predict the number of lines of code in a project, and accurate estimates would fall from the sky, securely anchored in statistics from years of measurement of development times for various programming projects. The measurements were based on the final number of lines of code and actual project schedules across a large number and variety of applications. Perhaps, in some distant past reality where most projects were ground-up development of freshly hatched Fortran, and programmers had some magic insight into the number of lines of code required for any arbitrary future application, this idea would work. In today’s reality, however, it falls more than a bit short.

Instead, the simplest approach I’ve found as a software development manager is to cultivate the known. When, as a rule, we have a problem like the “three to four week” syndrome, we need to overcome it by looking for the reliable exception. The biggest (and most accurate) exception to the “three to four week” example above is what I call the “two to ten” effect. Engineers are actually pretty good at estimating any software sub-task that is in the two-to-ten-day range. If you try to go beyond ten days, you almost immediately fall victim to the old trick, rounding up to a “three to four week” estimate — which, as we have noted, is really secret code for “somewhere between 11 days and infinity.” On the other end of the spectrum, when tasks are less than two days, we tend to trip up on the “one hour equals one day” problem, where engineers allow a full day for a task because it’s a line item on the schedule, even if it’s really just a one-hour operation.

You can apply this technique yourself, just by breaking down any project into sub-tasks that fall in the 2-10 day range. If a task seems like more than 10 days of work, break it down further into smaller tasks. If a task seems like less than two days, combine it with others to make it slightly more substantial. Create a work breakdown for yourself (or your team) that is entirely made up of 2-10 day individual tasks. At first, this may seem difficult, but later we’ll discuss a feedback step that will make this quickly become an intuitive and accurate process.
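The breakdown rule above is mechanical enough to sketch in a few lines of code. This is just an illustration of the idea, not anything from the article itself; the task names and day counts are invented:

```python
# Sketch: check a work breakdown against the 2-10 day rule.
# Any task outside the range should be split or merged before
# the estimate goes into a schedule.

def check_breakdown(tasks, low=2, high=10):
    """Partition (name, days) pairs into too-big and too-small lists."""
    too_big = [(name, d) for name, d in tasks if d > high]
    too_small = [(name, d) for name, d in tasks if d < low]
    return too_big, too_small

# Hypothetical task list for an embedded project.
tasks = [
    ("UART driver", 4),
    ("Protocol stack port", 15),    # > 10 days: break it down further
    ("Update version string", 0.5), # < 2 days: combine with other small items
]

too_big, too_small = check_breakdown(tasks)
print("Split further:", too_big)
print("Combine with others:", too_small)
```

A list that comes back with both results empty is ready to be summed into a schedule.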

The success of this method of software project estimation relies on only two things: 1) your ability to create an accurate list of tasks that accounts for everything that needs to be done to complete your project, and 2) your ability to accurately estimate 2-10 day tasks. Both of these skills can be easily developed by adding a feedback mechanism that lets you track and refine your estimate performance from project to project. You can only know that your estimates were accurate if you correlate actual development times back with your original plans.

The use of a feedback mechanism is critical. Without it, your project will almost certainly suffer from estimate amnesia. If you ask a software engineer for an estimate and they give you the expected “three to four weeks” as an answer, a strange thing happens. Six months later, when they’re still working on the task (just a bit, you understand, to clear up those last few bugs), and you ask them how long the task took them to complete, they’re likely to still tell you “three to four weeks”. While this behavior is difficult to understand in well-educated engineers, it is remarkably consistent across the industry. We just don’t do a good job of mentally keeping track of where we’ve spent our time.

Since we’re all embedded software developers, we’re probably tempted to implement our task tracking system with a Perl script that uses our webcam connected to high-performance embedded image edge detection combined with feedback from various sensors and pattern recognition algorithms to monitor our work activity and credit the time to the correct task. A more practical system, however, might be to use an Excel spreadsheet. For each task, make a row with a description of the task, a column that has the original estimate (in days) and another column that has the number of days you’ve spent on that task so far. At the end of each week, you can account for the days you worked by adding to the “so far” column for any tasks you worked on. This system is nice, because you account for all the time you spent on the project (very important for later analysis), but you aren’t forced to work on tasks in any pre-defined order. You can parallelize and switch tasks however it is most convenient for you.
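The same three-column ledger can be kept in anything, of course; here is a minimal sketch of it in code, with the weekly “so far” update as a function. The task names and day counts are made up for illustration:

```python
# Sketch of the estimate-vs-actual ledger: one row per task,
# with the original estimate (days) and days logged so far.
tracker = {
    "LCD driver":        {"estimate": 5, "spent": 0},
    "Boot loader":       {"estimate": 8, "spent": 0},
    "Flash file system": {"estimate": 3, "spent": 0},
}

def log_days(task, days):
    """At the end of each week, credit worked days to the right task."""
    # A task we never planned for gets a row with a zero estimate.
    tracker.setdefault(task, {"estimate": 0, "spent": 0})
    tracker[task]["spent"] += days

# One week in which work was split across tasks, plus an unplanned one.
log_days("LCD driver", 3)
log_days("Boot loader", 1)
log_days("Backlight bug hunt", 1)  # wasn't in the original plan

for task, row in tracker.items():
    print(f"{task}: estimated {row['estimate']}, spent {row['spent']} so far")
```

Because every worked day lands somewhere in the ledger, nothing is lost, and you can still switch between tasks in whatever order is convenient.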

At the end of the project, you’ll see how close each individual estimate was to the actual time required and how many tasks were created in the course of the project that weren’t anticipated at the beginning (nice to know for future planning). You can also see the actual ratio of time spent on related tasks – like adding a feature and debugging that same feature. This data is extremely helpful in planning your next project, and your week-to-week experience is invaluable in helping you make more accurate estimates next time.
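The end-of-project comparison described above amounts to a simple pass over the ledger. Again, the task names and numbers here are hypothetical:

```python
# Sketch: post-project analysis of estimate accuracy.
# Rows are (task, estimated days, actual days); a zero estimate
# marks a task that was discovered mid-project.
history = [
    ("LCD driver",         5, 7),
    ("Boot loader",        8, 6),
    ("Backlight bug hunt", 0, 4),  # unplanned
]

unplanned = [task for task, est, _ in history if est == 0]

for task, est, actual in history:
    if est > 0:
        print(f"{task}: actual/estimate ratio = {actual / est:.2f}")
print(f"Unplanned tasks discovered mid-project: {len(unplanned)}")
```

Ratios consistently above 1.0 tell you how much to pad your next set of 2-10 day estimates; the unplanned-task count tells you how incomplete your initial task lists tend to be.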

One of the most interesting things you’ll learn by tracking with a method like this is the enormous number of tasks you failed to include in your initial planning, and the equally enormous number of non-productive days that tick by in the course of a normal project. As you track your time, you’ll find that the day you spent at the company off-site doesn’t map to anything on your plan. The easiest way to account for this is to add a task for “non-productive” or “overhead” to your original task list. After a few projects, you’ll know what percentage of the total schedule should be thrown into that category to account for real-world company life.
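Deriving that overhead percentage from a few tracked projects is straightforward arithmetic; here is one way to sketch it, with invented numbers:

```python
# Sketch: derive an overhead allowance from past tracked projects.
# Each tuple is (total days logged, days logged against "overhead").
past_projects = [(120, 18), (90, 12), (200, 34)]

total = sum(t for t, _ in past_projects)
overhead = sum(o for _, o in past_projects)
overhead_fraction = overhead / total  # 64 / 410, about 16%

# Pad a new estimate so that planned task work plus overhead
# fills the scheduled time.
planned_work = 100  # days of 2-10 day tasks, summed
padded = planned_work / (1 - overhead_fraction)

print(f"Historical overhead: {overhead_fraction:.0%}")
print(f"Schedule {padded:.0f} days for {planned_work} days of task work")
```

Note the division rather than a simple multiply: overhead is a fraction of the total schedule, not of the task work alone.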

You’ll also find a number of unplanned activities that are definitely productive, but unforeseen at the onset. Over time, you’ll begin to anticipate more of these as you plan. Chances are, if you spend a week writing a hardware driver, you’ll need some time later on in the project to debug or modify it. After examining a few projects, you’ll be better equipped to plan.

One of the beauties of this technique is that you can implement it yourself, as an individual engineer, without any support or buy-in from management. It will simply help you provide more accurate estimates for your work and plan your own part of each project more accurately. As a team leader, you can implement it as a flexible tracking system with almost no investment or special training, and you can immediately collect and use highly relevant experience data. Even without the benefit of a lot of historical data, you’ll almost immediately improve your intuitive skills for task sizing by paying attention to your own actual results. It is a very lightweight and unimposing tracking technique that works well in a wide variety of situations.

Once we’ve conquered our own development demons and learned to plan and estimate our own and our team’s work accurately, we’re much better equipped to deal with management that might not specialize in software project planning. In embedded development, we are almost always working as part of a larger team that designs both hardware and software. That means that somewhere in the management hierarchy, we’ll likely encounter someone with the traditional misconceptions about software scheduling. With our own house better in order, we’re better equipped to negotiate a peaceful and productive co-existence with these non-software types.
