
Putting the User First?

A mathematics professor was in full flow in a post-grad seminar. The board was covered in formulas, and, as he finished writing an equation, he said, “And from this, gentlemen” (it is a very old story), “and from this, gentlemen, it is obvious…” and his voice died away. He stood there for a few seconds, sat down for a few minutes, and then left the room, returning after fifteen minutes. Picking up the chalk, he resumed, “And from this, gentlemen, it is obvious that…”, wrote another equation on the board, and continued the seminar.

We all know that mathematicians are different, but, to some extent, so are engineers. And this brings us to the starting point for today’s sermon: what is obvious to one person is not obvious to another. Even more, what is obvious to the developer of a product is often not at all obvious to the user of that product. And this is symptomatic of the way in which we often fail to think about the user throughout the whole cycle of product development.

Start with the development of a new product. It could be a consumer device, a web site, or something for other engineers to integrate into a bigger system. In many areas it has become commonplace to use focus groups to try to understand what customers want. On the face of it, getting a group of people in a room to discuss what they think they want looks very sensible. And focus groups have their uses, if you are interested in what people are thinking. They are very good for gathering information about how customers view one company against another, or for measuring their understanding of a new or emerging technology. They are also great for developing and monitoring marketing communications programmes. And, properly used, they can identify the issues that people have. But, as Henry Ford is reported to have said, “If I had asked people what they wanted, they would have said faster horses.” Instead he identified what people needed: better transportation. No one at Sony carried out focus groups for the Walkman. And Steve Jobs is famously against them. In all these cases, an understanding of what customers will need drives a creative person or team to create products that meet, or go beyond merely meeting, customer needs.

Wittgenstein, the philosopher whose own thought processes are not straightforward or easy to understand, said, “Whereof one cannot speak, thereof one must be silent.” Until you have words for something, it is difficult to visualise it, to think about how you could use it, or even to know whether you would want it. A personal computer means nothing if the only computer you have ever seen is the behemoth of an IBM mainframe. And, even if you know computers can be far smaller, it still requires a leap of imagination to see how they might be used. Ken Olsen, of Digital Equipment Corp, certainly understood that computers could be smaller (the PDP-11 could easily fit under a desk), but he is widely quoted as saying, in 1977, “There is no reason for any individual to have a computer in his home.” What he was actually attacking was the idea that heating, air conditioning, lighting and so on in the home would be controlled by a central computer. What his imagination didn’t stretch to was the idea that processors would become so cheap that they would be widely embedded and linked, as is happening now. Olsen also said, “People will get tired of managing personal computers and will want instead terminals, maybe with windows,” predicting, as others have, the idea of the Cloud, being energetically puffed today by Google et al.

Back to the central thread: when thinking about a new product, the starting point should be defining what people need, whether they are consumers or the guys building the next embedded system. And I know it is easier to write that sentence than it is to actually define what people need.

Once the need is established, the thinking should be extended to imagining how people are going to use the product. User interfaces are far too often added almost as an afterthought (and designed by the people who built the product). Instead, the user interface can be the starting point of the design: implementing what the user requires the system to do, rather than adding an interface that reflects what the developer has built. The user interface has to be tested by potential users, not by the person who is going to implement the system. And, ideally, it should be implemented by someone who understands human-machine interfaces and has at least a smidgen of design sensitivity. Apple’s iPhone and iPad interfaces look good and work well: many of the copies of them do neither.

Multi-layer user interfaces are a great approach, but they should have as their top layer what the user requires, not what the developer or the marketing group thinks is interesting, sexy or fun. Years ago, a book on Word for Windows claimed, “The toolbar you see [in Word] is not the toolbar that is most useful to you: it is the toolbar that Bill Gates and the marketing guys have decided will sell you Word.” And we all remember Clippy, Microsoft’s extreme expression of how the company always knows better than you what it is you want to do.
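To make the idea concrete, here is a minimal sketch in C (all the names and commands are hypothetical, not drawn from any real product) of one way to structure a layered interface: every command is tagged with the layer it belongs to, and the default view shows only the top, user-task layer, leaving the deeper layers reachable on request rather than forced on everyone.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch: tag each command with the interface layer it
 * belongs to. The top layer is defined by user tasks, not by what the
 * developers or the marketing group find interesting. */
typedef enum { LAYER_USER, LAYER_ADVANCED, LAYER_DEVELOPER } layer_t;

typedef struct {
    const char *label;
    layer_t     layer;
} command_t;

static const command_t commands[] = {
    { "Save document",       LAYER_USER },
    { "Print",               LAYER_USER },
    { "Macro recorder",      LAYER_ADVANCED },
    { "Field-code debugger", LAYER_DEVELOPER },
};

/* Render every command at or above the requested depth. The default
 * toolbar calls this with LAYER_USER. */
static void show_toolbar(layer_t max_layer) {
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; ++i) {
        if (commands[i].layer <= max_layer) {
            printf("[%s] ", commands[i].label);
        }
    }
    printf("\n");
}

int main(void) {
    show_toolbar(LAYER_USER);      /* what most users see by default  */
    show_toolbar(LAYER_DEVELOPER); /* everything, on explicit request */
    return 0;
}

The point of the structure is that the default is chosen for the user; the advanced and developer layers are still there, but nobody has to wade through them just to save a document.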

(As an aside – I have almost given up arguing with people who tell me that their interface is “intuitive.” Often what they mean is that users who have learned how to use the Windows interface will find it easier to use their product. Or that their engineers find it easy to use. In neither case is that intuitive – it is learned behaviour. To a child, red does not intuitively mean Stop, nor does green mean Go. We have learned it. This is probably a more widely learned reaction in a developed society than the Windows interface, but you can’t use the word intuitive to describe it.)

Working on the user interface often helps to produce a clearer product definition. And testing it with a range of users, in formal or informal contexts, does more than confirm that the interface is usable: it may well provide a better understanding of whether the product meets a user need.

The period before a product is built is also a good time to begin writing the user manual, even if it is to be implemented as help screens rather than as a printed document. The manual can be seen, perhaps, as an expression of the product specification in language the user understands, and it should not be written by a developer. Again, it should be test-driven: there is no point in support telling a user with a problem to RTFM when the FM is F-unreadable.

If a developer does have to write the manual, perhaps because of time or budget constraints, then at the very least it should be translated from engineer-speak into English by a non-engineer and road-tested with a few potential users.

Error messages are another significant issue. “You have committed an illegal action,” may make sense to the person who wrote it, but, as an error message on a consumer item, it is not helpful and has caused serious concern, with naïve users expecting a police raid.
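One defensive pattern, sketched below in C with entirely hypothetical error codes and wording, is to keep the precise, developer-facing diagnostic in the log and map each internal error to a separate user-facing message that says, in plain language, what happened and what the user can do about it.

#include <stdio.h>

/* Hypothetical sketch: internal error codes for some device. */
typedef enum {
    ERR_NONE,
    ERR_FLASH_WRITE,
    ERR_BAD_INPUT,
    ERR_UNKNOWN
} err_t;

/* Developer-facing diagnostic: precise, and destined for the log. */
static const char *dev_msg(err_t e) {
    switch (e) {
    case ERR_FLASH_WRITE: return "flash_write() failed: sector locked";
    case ERR_BAD_INPUT:   return "parse error: field 3 out of range";
    default:              return "unclassified fault";
    }
}

/* User-facing message: plain language, no jargon, no accusations. */
static const char *user_msg(err_t e) {
    switch (e) {
    case ERR_FLASH_WRITE:
        return "Could not save your settings. Please try again.";
    case ERR_BAD_INPUT:
        return "That value isn't valid here. Please check it and re-enter it.";
    default:
        return "Something went wrong. Restarting the device usually fixes this.";
    }
}

int main(void) {
    err_t e = ERR_FLASH_WRITE;
    fprintf(stderr, "LOG: %s\n", dev_msg(e)); /* for the engineer */
    printf("%s\n", user_msg(e));              /* for the screen   */
    return 0;
}

The engineer still gets the detail needed to debug the fault; the user gets a sentence that neither baffles nor alarms them.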

Things are improving in some areas. I have recently been talking to companies making chips for the automotive market, and they are talking not just to the people who will buy their chips, the Tier One suppliers, but also to the car makers themselves, drawing on the information the car makers have about what they think drivers need. In the last few weeks we have seen chipmakers signing deals with ARM so that they can influence the longer-term definition of ARM’s products. Yet, as an industry, there is still too much of the thinking that says, “This is really cool: let’s turn it into a product and get rich.” Apart from the boring detail that turning something into a product takes at least as much time and money as building it in the first place, who is going to want it? What user need does it fill? And is the user going to be able to use it?

