by Paula Bach on August 01, 2007 03:35pm

In my last blog I talked about interdisciplinarity and multidisciplinarity, and a little bit about my research this summer. In parts 1 and 2 of this blog I am going to talk more about the research I have been doing here at Microsoft. Over the last few months I have been looking at a phenomenon called usability expertise. Anybody who has had difficulty using a product has some experience with usability expertise, or rather with its absence. Usability expertise is knowledge about how to design an artifact so that users experience product effectiveness, efficiency, and satisfaction in a specified context of use. Even if people are not experts in Human-Computer Interaction (HCI), they can experience a lack of usability expertise in the design of a product.

HCI experts are actually quite rare because the field is young and still developing. HCI is newer than computer science; it grew out of computer science about fifteen years after the software engineering crisis of the sixties, and although Human Factors is about fifty years old, it has not been linked to software engineering the way HCI has. Software development has long included a user interface role to design and develop the human-computer interface, and although some companies still employ user interface developers, HCI experts now include UI designers, Usability Engineers, User Experience Researchers and Designers, and Interaction Designers. These roles go beyond the interface to include field research, visual design, and lab studies, for example.

Although the obvious place to look for usability expertise is in the knowledge of HCI experts, I am interested in what role this expertise plays in software development. Just having HCI experts available is not enough to ensure good usability. I want to know who has usability expertise, how it is communicated among project members, and how it is used to make decisions. To find this out, the research looks at both proprietary and open source software development settings. What I am reporting here is an overview, or summary, of preliminary findings; I am still analyzing the data and will publish “official results” over the next year and a half as I work on and finish my dissertation. The research seeks to understand the role of usability expertise in software development and uses that understanding to inform the design of a feature or tool on CodePlex that will support usability expertise for projects interested in making sure their software is usable by their intended user base.

Usability expertise in the context of design is related to design rationale, or more specifically usability design rationale. Design rationale is "the capture, representation, and use of reasons, justification, notation, methods, documents, and explanations involved in the design of an artifact" (from the book Design Rationale by Moran and Carroll). Since design rationale is a well-defined concept with many details, its presence in real design discussions may be fragmented, and this fragmentation might be better understood as usability expertise. So a rough definition of usability expertise might be the “stuff” needed to talk about and make decisions about usability during software development. The “stuff” could be the elements of design rationale, or something people have not talked about before. In this sense, the discoveries I make while investigating the role of usability expertise could be groundbreaking, or they could be well known in software development communities. Either way, reporting findings on the role of usability expertise should be interesting. In fact, several people, both at Microsoft and in the open source communities I surveyed, have already said they would like to see the findings, which is encouraging.

I am collecting data in a number of ways: surveys, interviews, and observations. I surveyed people at Microsoft who are part of a project's software development process, namely user experience researchers and designers, developers, and program managers. In the open source world I posted the survey to major projects that met my criteria for overtly caring about usability, namely that they had a usability list and at least one person listed as a usability expert. The Microsoft usability expertise survey is still collecting responses, and although I am still working with the data from the open source survey, I can mention a few things.

In the open source survey, fatigue affected about half of the 125 respondents, with 56 making it to the last question. The survey had two open-ended questions asking about the importance of and challenges with usability in open source. Usually open-ended questions are best saved for the end, after other more important questions are answered. The tradeoff here was that the open-ended questions were themselves important, and placing them at the end could have biased the responses, because the survey also included questions asking about specifics of the importance of and challenges with usability.

Data clustered around categories of ease of use, simplicity, and consistency for usability importance, with each category claiming about a quarter of the responses. About 10% of respondents stated that issues related to system performance were important for usability. As for usability challenges, about a quarter of respondents reported that the challenges with usability in open source software development were developer based. This included not valuing usability, not having usability expertise other than self-referential expertise (based on one's own experience), and communication problems related to common ground. Common ground is when two people reach a mutual understanding such that one person knows that the other person knows what the first person knows. Common ground is more difficult to reach in computer-mediated environments than in face-to-face environments because fewer channels exist to help with understanding; face-to-face, you can use people’s expressions and gestures to help you understand what they are saying. Other categories included lack of resources and lack of process (each at about 10% of the responses). Other questions I am asking of the data include the following:

1.    Who has usability expertise?
2.    How is usability expertise communicated?
3.    How is usability expertise used to make decisions?
4.    Who cares about usability expertise?
5.    How available is usability expertise?

The data may not be able to answer the above questions in full, but it will get me closer to asking different questions that may be more relevant to the data. I am also conducting interviews, which may address these questions and reach a depth that surveys cannot.

I have been scheduling and conducting interviews with Microsoft people and will report on those preliminary findings in the next blog. I will conduct the open source interviews via video conference when I get back to Penn State. The open source usability people I am going to talk to are all over the world: US, Canada, Germany, Australia, and France.

I have also been observing three open source projects, looking at email lists and other interesting things like conversations in the bug tracker, how a usability issue is handled in the bug tracker, and UI specifications. I chose three ‘big’ open source projects that attend to usability; I wanted diversity in the projects and a wide user base. I spent 8 weeks observing the workings of usability in projects including Firefox and KDE. The discussions on the email lists vary considerably. Some are short and polite, with a developer inquiring about the usability of a particular design change or feature he is thinking about. Others are heated, drawing in users, developers, and usability people trying to hash out the merits of a feature.

The most often used design rationale, or type of usability expertise, is self-referential. The people on the lists, in the beginning mostly users or user/developers, respond to a feature proposal and speculate about the usability of the change based on their own experience. Since most of the users on the lists are advanced or power users, this might not be representative of the main user base, at least for the three projects I was studying. I don’t know whether the projects have any data about their user bases, but it may be that the email lists are only one input into decision making about usability. Despite the openness of the discussion lists and other aspects of the development, some decisions are made ‘behind the scenes’. Possibly, the ‘behind the scenes’ usability expertise that contributes to deciding which usability fixes to include in the next release is similar to how usability expertise is used in proprietary decision making. This is something I will consider when investigating the role of usability expertise in both environments.