Tuesday, October 6, 2015

Before seminar #2

Chapter 13 focuses on evaluation frameworks and how to properly evaluate the design of your product at different stages of development. The authors mention how a final design is often the result of an interplay between several iterations of design and evaluation. As the designers learn more about what works and what does not, the evaluations will in turn provide more meaningful data, giving the designers more to work with. There is a powerful synergy to this process which guides designers towards intuitive designs with a high level of usability.

One of these evaluation frameworks is the so-called DECIDE framework, which consists of six distinct items to work through before, during and after the evaluation. The DECIDE framework has a lot in common with the data gathering and data analysis techniques that we have discussed in the past, such as determining goals, choosing appropriate methods, identifying practical issues, dealing with ethical questions, and properly analysing and interpreting the collected data.
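To make this concrete for our own project, here is a minimal sketch (my own illustration in Python, not anything from the book) of how the six items could be tracked as a pre-evaluation checklist. The wording of the items follows the book's mnemonic, though it varies a little between editions:

# A minimal sketch: the six DECIDE items as a pre-evaluation checklist.
DECIDE_ITEMS = [
    "Determine the goals",
    "Explore the questions",
    "Choose the evaluation methods",
    "Identify the practical issues",
    "Decide how to deal with the ethical issues",
    "Evaluate, analyse, interpret and present the data",
]

def remaining_items(done: set) -> list:
    """Return the DECIDE items not yet addressed in the evaluation plan."""
    return [item for item in DECIDE_ITEMS if item not in done]

# Example: so far we have only settled the goals and the questions.
print(remaining_items({"Determine the goals", "Explore the questions"}))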

A significant part of the chapter is devoted to explaining how to approach and treat the participants of an evaluation. The authors mention several important steps that should be considered, such as finding appropriate participants (i.e. the intended users of the final product) and keeping the participants' involvement and data confidential. The authors stress the importance of confidentiality, and that the scope of the evaluation should be known to the participants beforehand.

Different kinds of biases are also discussed, along with how they can influence the way the data is interpreted and distort the result of an evaluation. This ties into an earlier part of the chapter where the authors present a concept called ecological validity, which describes the impact that the evaluation environment can have on the result. For example, an evaluation carried out in a laboratory might yield results that are valid in and of themselves, but the participants will not necessarily act as they would naturally outside of the laboratory. I found this concept particularly interesting, and I hope we get to use it in our project.

The authors also mention budget and schedule constraints. Not much is said about how to deal with these constraints, however, just that they exist and should be taken into account.

Chapter 15 continues on the topic of evaluation and presents another method called heuristic evaluation. Instead of letting users participate in the evaluation, the product is evaluated by an expert with knowledge of good design and usability. The design is judged against a list of heuristics (usability principles) in an effort to identify usability problems and areas of improvement. More often than not, the heuristics in use have been developed specifically for the type of product being evaluated.
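To give a concrete picture of what an evaluator actually produces, here is a minimal sketch of how findings could be recorded against a heuristic list. The heuristic names and the 0-4 severity scale follow Nielsen's well-known conventions rather than anything specific from the chapter, and the data structure is my own illustration:

# A minimal sketch of recording heuristic-evaluation findings.
# Heuristic names and the 0-4 severity scale follow Nielsen's
# conventions; the structure itself is my own illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str  # which usability principle is violated
    location: str   # where in the design the problem appears
    severity: int   # 0 = not a problem ... 4 = usability catastrophe
    note: str       # short description of the problem

findings = [
    Finding("Visibility of system status", "upload screen", 3,
            "No progress indicator while a file is uploading"),
    Finding("Error prevention", "delete dialog", 2,
            "No confirmation step before deleting a project"),
]

# Sort the report so the most severe problems come first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.heuristic}: {f.note} ({f.location})")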

One of the key points of heuristic evaluation is that the end result becomes better the more evaluators you have. Together, the evaluators will pinpoint more design flaws and usability problems than one evaluator would on their own. To me, this feels like another take on the concept of data triangulation mentioned earlier in the book, since different evaluators tend to focus on different flaws in the design.
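This effect has even been quantified: Nielsen and Landauer's well-known model (from their own research, not from this chapter) estimates the proportion of problems found by i independent evaluators as 1 - (1 - λ)^i, where λ is the proportion a single evaluator finds, commonly quoted as about 0.31. A quick sketch shows the diminishing returns:

# Nielsen and Landauer's model for the proportion of usability
# problems found by i independent evaluators: 1 - (1 - lam)^i.
# lam = 0.31 is the commonly quoted average for a single evaluator.
def proportion_found(i: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** i

for i in (1, 3, 5, 10):
    print(f"{i:2d} evaluators -> {proportion_found(i):.0%} of problems")
# Roughly: 1 -> 31%, 3 -> 67%, 5 -> 84%, 10 -> 98%, which is why
# three to five evaluators is the usual recommendation.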

The second part of chapter 15 focuses on walkthroughs: methods where the evaluator(s) go through a product design step by step, noting usability problems and areas of improvement along the way. The authors specifically mention two variants of walkthroughs. The first is the cognitive walkthrough, which focuses on the user's experience and thought processes. Here, the evaluator will often roleplay a first-time user and simulate the process of achieving a specific goal.
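As I understand it, the evaluator asks a fixed set of questions at every step of the task. The questions below are my own paraphrase of the standard ones (after Wharton et al.), and the task steps are made-up examples:

# A minimal sketch of a cognitive walkthrough for one task.
# The questions are my paraphrase of the standard ones; the
# task steps are invented examples.
WALKTHROUGH_QUESTIONS = [
    "Will the user know what to do at this step?",
    "Will the user notice that the correct action is available?",
    "Will the user understand the feedback after acting?",
]

task_steps = ["Open the project page", "Press 'New survey'", "Add a question"]

for step in task_steps:
    print(f"Step: {step}")
    for question in WALKTHROUGH_QUESTIONS:
        # The evaluator answers while roleplaying a first-time user,
        # noting a usability problem for every "no".
        print(f"  - {question}")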


The other variant, the pluralistic walkthrough, involves a group of evaluators making individual notes on the design, which they then share with each other and discuss before moving on. I found both of these types of walkthroughs intriguing, and I can definitely see the benefits of applying them to our project at a later stage.

Question for the seminar: How do you tackle constraints that limit your ability to conduct evaluations?
