Tuesday 6 October 2015

Seminar #2

Our project has already done most of the data gathering and it's now time to analyse the data. To help us we have chapters 13 & 15 in the course book "Interaction Design: Beyond Human-Computer Interaction". Chapter 13 introduces the DECIDE framework, which will help us plan the evaluation of our product.

One of the points, the "C", stands for "Choose your approach and methods", and this is where I think we have a lot of planning to do. Since our target group is highly mobile we're not going to be able to take them to an offsite location to evaluate our product. We're instead going to have to find a way to follow them in an outdoor setting and still track their usage of our product.

We also have to make sure not to change the behaviour of the people we are observing. This is noted as a dilemma in the book, where some argue that it's almost impossible to study people's behaviour without changing it. I think we mostly have to keep in mind that we might change the behaviour of the people we are evaluating, and therefore take extra precautions to make sure we do everything we can not to.

Chapter 15 gives an alternative to this, so-called heuristic evaluation, where we instead invite experts in the field (maybe ourselves) to examine the product against a set of heuristics (usability principles). The more experts you invite, the better the result you're going to get.

I believe that our product's success will lie partly in how easy we make the HCI, but also in the actual idea itself. We first have to make sure that the product at its core brings value to users, without even thinking about how they actually interact with the product. Then we have to make the interaction as easy as possible.

I see our product becoming very self-aware, presenting info to the user without any input, and therefore most of our evaluation should be doable without the product available. If the task is about informing users about how full the next bus is, this can be simulated by standing at different bus stops around Stockholm and telling people about the next bus's capacity.

Reading seminar #2

In chapter 13 we are introduced to the evaluation framework DECIDE. This is a useful checklist for when you are planning an evaluation study. DECIDE stands for:

Determine the goals
Explore the questions
Choose the evaluation methods
Identify the practical issues
Decide how to deal with the ethical issues
Evaluate, analyze, interpret and present the data

This is pretty straightforward and recognizable from any kind of planning. It doesn't feel redundant though, as these are important things to know. Knowing the goal and how to achieve it is something that always comes up in group work and is really important.

Choosing the evaluation methods is also important, as the evaluation can't be done if you have gathered the wrong data. Practical issues depend on the chosen methods as well, and of course on the type of product/service you are designing. Appropriate participants, facilities, equipment, scheduling and budget are other practical issues to think about. Pilot studies can help to spot different types of issues.

When evaluating/gathering data, ethical issues are important too, and to me very important. Universities have their own boards for reviewing activities involving human studies (IRB, Institutional Review Board). The book has a list of important things to think about when doing studies on humans to ensure they are done ethically and protect the users' rights. Most of it is about being clear and honest with the test users, letting them know they can stop at any time, and of course preserving anonymity. The last point on the list confused me though: "Gain their trust by offering to give them a draft of the study …". That sounds a bit like you are going to lure them into something, like you would lure a dog.

Evaluating, analyzing, interpreting and presenting the data are also covered in this chapter. Key points here are reliability, validity, ecological validity, biases and scope.


Chapter 15 also covers evaluation but focuses on methods, mainly inspections, analytics and models. Inspection methods include heuristic evaluation and walkthroughs. Analytics is a more quantitative evaluation method which involves logging the users' interactions and handling large volumes of data, anonymized. Predictive models are used without users being present, to predict user performance.

Heuristic evaluation is a method where an expert "role-plays" a user and follows a set of usability principles to identify usability problems. The first set of principles is still used, but new technologies and different types of products have given rise to new sets of principles, and new ones are always being developed.
The expert should iterate many times to try to find all the usability problems. Still, most of the time not all of them are found. Several experts/evaluators can be hired, but that is more expensive. On the other hand the method requires no users or special facilities, so it is cheaper in a way. General design guidelines can be very similar to heuristic principles, so designing with the heuristic principles in mind can be a good idea.

Walkthroughs simulate a user's experience and are a method of finding usability problems by walking through the product step by step. A cognitive walkthrough is a simulation of a single user; a pluralistic walkthrough is more of a team effort involving different types of people working together on a walkthrough.

Analytics is a big part of the web nowadays and can be a useful evaluation method. It's mostly just logging large volumes of data about the users' interactions, anonymized and analyzed statistically. A very quantitative method.
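To make this concrete, here is a minimal sketch of what anonymized interaction logging could look like for our bus app. The event names, the salt and the hashing scheme are my own assumptions for illustration, not something taken from the book:

```python
import hashlib
import json
import time

def anonymize(user_id: str, salt: str = "project-salt") -> str:
    """Replace the real user id with a one-way hash so logs can't be traced back."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def log_event(user_id: str, event: str, **details) -> str:
    """Build one anonymized, timestamped log line for an interaction event."""
    record = {
        "user": anonymize(user_id),
        "event": event,            # e.g. "open_app", "check_bus_capacity"
        "time": int(time.time()),
        "details": details,
    }
    return json.dumps(record)

# Example: a (fictional) user checks how full bus line 4 is
print(log_event("alice@example.com", "check_bus_capacity", line="4", stop="Radiohuset"))
```

Lines like these could then be aggregated statistically without ever storing who the individual users were.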

Predictive models are methods for making predictions without the users being present. GOMS, KLM and Fitts' law are methods introduced in this chapter and can be used to predict the users' performance.
GOMS stands for goals, operators, methods and selection rules. With this method you can follow the steps and get a rough cognitive model of the user.
Fitts' law is interesting as it is a mathematical formula for "predicting the time it takes to reach a target using a pointing device". In other words, how fast you can click something depending on how big it is, how far it is from your pointer, and so on.
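As a small illustration, here is the usual Shannon formulation of Fitts' law turned into a function. The constants a and b are device-dependent and have to be measured empirically; the values below are just placeholders I made up for the example:

```python
import math

def fitts_movement_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Fitts' law (Shannon formulation): MT = a + b * log2(D/W + 1).
    a and b are device-dependent constants found empirically;
    the defaults here are placeholder values for illustration."""
    return a + b * math.log2(distance / width + 1)

# A small, far-away button takes longer to hit than a big, nearby one
print(fitts_movement_time(distance=800, width=20))   # ~0.90 s
print(fitts_movement_time(distance=200, width=100))  # ~0.34 s
```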

All these methods are not set in stone and you can, and probably should, tailor them for the type of product/service you are designing.

Before seminar #2

Chapter 13 focuses on evaluation frameworks, and how to properly evaluate the design of your product at different stages of the development. The authors mention how a final design is often the result of an interplay between several iterations of designs and evaluations. As the designers learn more about what works and what does not, the evaluations will in turn provide more meaningful data, giving the designers more to work with. There is a powerful synergy to this process which guides designers towards intuitive designs with a high level of usability.

One of these evaluation frameworks is the so-called DECIDE framework, which contains six distinct items to work with before, during and after the evaluation. The DECIDE framework has a lot in common with the data gathering and data analysis techniques that we have discussed in the past, such as determining goals, choosing appropriate methods, identifying issues, dealing with ethical questions, and properly analysing and interpreting the collected data.

A significant part of the chapter is devoted to explaining how to approach and treat the participants of an evaluation. The authors mention several important steps that should be considered, such as finding appropriate participants (i.e. the intended users of the final product) and keeping the participants' involvement and data confidential. The authors stress the importance of confidentiality and that the scope of the evaluation process should be known to the participants beforehand.

Different kinds of biases are also discussed and how they can influence how the data is interpreted and distort the result of an evaluation. This ties in to an earlier part of the chapter where the authors present a concept called ecological validity, which describes the impact that the evaluation environment can have on the result. For example, an evaluation carried out in a laboratory might yield results with high validity in and of itself, but the participants will not necessarily act like they would naturally outside of the laboratory. I found this concept particularly interesting, and I hope we get to use it in our project.

The authors also mention budget and schedule constraints. Not much is said about how to deal with these constraints, however, just that they exist and should be taken into account.

Chapter 15 continues on the topic of evaluation and presents another method called heuristic evaluation. Instead of letting users participate in the evaluation, the product is evaluated by an expert with knowledge of good design and usability. The design is judged against a list of heuristics (usability principles) in an effort to identify usability problems and areas of improvement. More often than not, the heuristics in use have been developed specifically for the type of product being evaluated.

One of the key points of heuristic evaluation is that the end result becomes better the more evaluators you have. Together, the evaluators will pinpoint more design flaws and usability problems than one evaluator would have. To me this feels like another take on the concept of data triangulation mentioned earlier in the book, since different evaluators tend to focus on different flaws in the design.
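There is a simple model of this effect from Nielsen and Landauer's work on heuristic evaluation (the book doesn't give the formula itself, so take the numbers as a rough illustration): each evaluator finds roughly a fixed share of the problems, so the combined share grows quickly with the first few evaluators and then levels off.

```python
def share_of_problems_found(evaluators: int, lam: float = 0.31) -> float:
    """Nielsen & Landauer's model: with i evaluators, roughly
    1 - (1 - lambda)**i of all usability problems are found, where
    lambda is the share a single evaluator finds (~0.31 on average)."""
    return 1 - (1 - lam) ** evaluators

for i in (1, 3, 5, 10):
    print(i, f"{share_of_problems_found(i):.0%}")
# 1 -> 31%, 3 -> 67%, 5 -> 84%, 10 -> 98%
```

This is also why a handful of evaluators is usually considered enough: the tenth evaluator adds far less than the second.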

The second part of chapter 15 focuses on walkthroughs, methods where the evaluator(s) go through a product design step by step, noting usability problems and areas of improvement along the way. The authors specifically mention two variants of walkthroughs; the first one being cognitive walkthroughs, which focus on the user's experience and thought processes. Here, the evaluator will often roleplay a first-time user and simulate the process of achieving a specific goal.


The other variant is called pluralistic walkthroughs and involves a group of evaluators making individual notes on the design, which they then share with each other and discuss before moving on. I found both these types of walkthroughs intriguing, and I can definitely see the benefits of applying them to our project at a later stage.

Question for the seminar: How do you tackle constraints that limit your ability to conduct evaluations?

Before Reading Seminar #2

For this seminar we read chapters 13 and 15, both of which contain theory about evaluating systems in general. Frameworks and methods are described.
In chapter 13 the main key point was the description of the DECIDE framework, which lists six different steps to go through when planning an evaluation study. In relation to our project, the framework could be very suitable when we start evaluating our first mockup.
As this will be our first experience of doing an evaluation study, the DECIDE framework is a great model for us to start with. Similar issues from the chapters read for the previous seminar are brought up here, but in a context where the issues are put into practice. Ethical issues aren't as pressing for us, though, since we don't use private data, and therefore don't need to be considered as much. However, the other issues concerning the evaluation are highly important, and the DECIDE framework is useful in that it gives a great overview of the many key points.

Chapter 15 included methods for an evaluation study, mainly the methods Heuristic Evaluation and Walkthroughs.
A heuristic evaluation is a method using a set of usability principles, the heuristics, to guide the inspectors doing the evaluation. Walkthroughs can be divided into two variants, cognitive and pluralistic, and involve walking the inspectors through a task with the evaluated product.
The primary point is that these methods don't have to include the users, or more importantly, they usually don't. Instead the methods are tools for understanding and gaining knowledge about the user's behaviour and consequently finding the problem areas, fitting requirements, or whether the functionality of a system serves the user or not. This is all done by so-called experts and specialists who work as evaluators, whose task is "to examine the interface of an interactive product, often role-playing typical users, and suggest problems users would likely have when interacting with it (…)".
An important note is that there can be several evaluators, and the more evaluators there are, the more accurately the users' issues will be found.
Since these methods don't require involving users, fewer practical and ethical issues have to be considered.
I find that the approach of the evaluation needs to be clearly set, since heuristic evaluation and walkthroughs are good for different kinds of evaluations. A heuristic evaluation is suitable when evaluating a complete system, whereas a walkthrough is suitable for evaluating a small part of a system.
The methods described here should not be the primary methods, but rather a good complement to user testing when analyzing a system or a product.


My question for the 2nd reading seminar: As our group consists of a small number of people who will probably be the most knowledgeable about our system, what determines who should be the evaluators (experts) in our case?

Monday 5 October 2015

Before the second reading seminar

Chapter 13 is about evaluation frameworks and includes practical and ethical aspects of the research and evaluation process.
The main goal of any design process is to develop a product that meets the user's requirements.
DECIDE (determine, explore, choose, identify, decide, evaluate) is an example of an evaluation framework.

User study methods like 'think aloud', 'in the wild' studies, lab studies and 'Wizard of Oz' are useful tools to understand how the user interacts with the product. When designing, it is important to determine the goals - who is the user and why is he/she interacting with the product like this?

A study should never take more than 20-30 minutes per person. It is important that the participant does not get tired or uncomfortable, because that will affect the results. We distinguish between 'in the wild' studies and lab studies. Lab studies are good for factual, quantitative data, but the data might not be very realistic because the user and the product are taken out of the intended context. 'In the wild' studies are a bit more like ethnographical studies. The user is usually more comfortable in his natural environment and it is easier to obtain qualitative data.

Other important things to consider are the practical and ethical aspects of user studies. Practical matters are e.g. cost, schedule and time management. Ethical aspects are about preserving the participant's anonymity by leaving out sensitive information, being honest with the participant, letting him know what is going to happen beforehand, and making sure that the participant is never uncomfortable.
Five keywords to consider when evaluating, analysing, interpreting and presenting the data are:

Reliability
Validity
Ecological validity
Biases
Scope

 
Chapter 15 is about inspection methods, evaluating data and two predictive methods: GOMS and Fitts' law.
Sometimes users are not easily accessible, or involving them is too expensive or too time-consuming. In such circumstances other people, called experts, can provide feedback. Experts can also complement user studies.

Heuristic evaluation is a usability inspection method where experts, guided by a set of rules, evaluate whether UI elements correspond to tried and tested principles. These rules include visibility, error prevention, simple and aesthetic design, and recognition. Heuristic evaluation has three stages: a briefing session, the evaluation period and the debriefing session.
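To make the stages a bit more concrete, here is a rough sketch of how the evaluation period and debriefing could be recorded. The heuristic names are a shortened Nielsen-style list, and the example problems and the 0-4 severity ratings are made up for illustration; the book does not prescribe any particular format:

```python
# A few Nielsen-style heuristics (shortened list, for illustration only)
HEURISTICS = [
    "Visibility of system status",
    "Error prevention",
    "Aesthetic and minimalist design",
    "Recognition rather than recall",
]

# During the evaluation period each evaluator notes problems as
# (heuristic, description, severity 0-4)
evaluator_1 = [
    ("Visibility of system status", "No feedback while bus data is loading", 3),
    ("Recognition rather than recall", "Line numbers must be remembered between screens", 2),
]
evaluator_2 = [
    ("Visibility of system status", "No feedback while bus data is loading", 3),
    ("Error prevention", "Easy to pick the wrong travel direction", 2),
]

# Debriefing session: merge the individual lists and sort by severity
all_problems = sorted(set(evaluator_1 + evaluator_2), key=lambda p: -p[2])
for heuristic, description, severity in all_problems:
    print(f"[{severity}] {heuristic}: {description}")
```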

Walkthroughs are an alternative approach to heuristic evaluation. A walkthrough usually doesn't involve a user. Instead, the developer or an expert takes the product through a task and notes usability errors. Pluralistic walkthroughs include users, developers and usability experts. Another type is the cognitive walkthrough, where the user's problem-solving process is simulated to see whether each of the user's actions leads on to the next, i.e. whether he can perform a task as intended by the developer.

Analytics is a method for quantifying the user traffic through a system. User activity is logged so that the data can be analysed to understand what parts of a website are being used and when. This can for instance be used for advertising and for mapping how the site is used.

GOMS is an acronym for goals, operators, methods and selection rules. GOMS is an attempt to model the cognitive processes of a user that interacts with the product. It takes into consideration the state the user wants to achieve, the actions that need to be performed, learned procedures, and selection rules.
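The keystroke-level model (KLM), also mentioned in the chapter, is the simplest GOMS variant and shows the idea well: break a task into elementary operators and sum their standard times. The operator times below are the commonly cited averages, and the example task is a made-up interaction with our bus app:

```python
# Keystroke-Level Model (KLM), a simplified GOMS variant: sum standard
# operator times to estimate how long an error-free task takes.
# The times below are the commonly cited averages (seconds).
OPERATOR_TIME = {
    "K": 0.2,   # press a key or tap
    "P": 1.1,   # point to a target on screen
    "H": 0.4,   # move hand between devices
    "M": 1.35,  # mental preparation
}

def estimate_task_time(operators: str) -> float:
    """Estimate task time from a sequence of KLM operators, e.g. 'MPK'."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Hypothetical task: think, point at a bus line, tap it, think, point at a departure, tap it
print(estimate_task_time("MPKMPK"))  # ~5.3 s
```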


Fitts' law is a mathematical model that predicts the time it takes to reach a target with a pointing device. It is based on the size of the object and the distance to it. This helps designers position and shape buttons on, for instance, a website. The bigger the target, the easier it is to hit.

Question for the reading seminar: I have not quite understood what characterizes the 'experts' in heuristic evaluation. Are they part of the developer team or are they independent people with expertise in interaction design?

Before Reading Seminar 2

Chapter 13 introduces the DECIDE framework, which is a tool to help you plan an evaluation. DECIDE stands for:
  • Determine the goals
The first step is to identify the goals; the goals guide the evaluation by helping to determine its scope.

  • Explore the questions
By breaking down the questions you can narrow them down to specific sub-questions that make the evaluation more precise. If even more specific issues need to be addressed, the sub-questions can be broken down into finer problems.

  • Choose the evaluation methods

After identifying your goals and formulating some questions, you choose your evaluation method. Your choice will depend on what data is needed to answer the questions and which theories or frameworks are appropriate to your context. Usability evaluation typically deals with whether or not the system meets the requirements; it rarely explains the reasons behind the problems. Formative evaluation helps shape the design of the system through iterative testing; summative evaluation tests the entire system at the end. Sometimes combinations of methods are used, as they give a broad picture of how well the design meets the usability and user experience goals that were identified during the requirements activity.
  • Identify the practical issues
It helps to know in advance which practical issues to consider when conducting an evaluation. Issues that should be taken into account include access to appropriate participants, facilities, and equipment, whether schedules and budgets are realistic, and whether the evaluators have the appropriate expertise to conduct the study.
  • Decide how to deal with the ethical issues
Participants' privacy has to be protected. A coding system should be used to record each participant's data, and the code and the person's demographic details (name, employment, education, financial status, etc.) should be stored separately from the data (see the small sketch after this list).
  • Evaluate, analyze, interpret, and present the data.
Different evaluation methods have different degrees of reliability. Validity is concerned with whether the evaluation method measures what it is intended to measure; this includes both the method and the way it is performed.

Bias occurs when the results are distorted; evaluators may, for example, selectively gather data that they think is important. Two aspects play a big role in the evaluation: practical constraints, i.e. tight schedules or a low budget, and ethical considerations, i.e. confidential information (such as medical records) or information that is private.
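To show what the coding system mentioned in the list above could look like in practice, here is a minimal sketch. The file names, fields and participant details are made up for illustration; the point is only that demographic details and observations never sit in the same file and are linked solely through the code:

```python
import csv

participants = [
    {"code": "P01", "name": "Anna", "employment": "student", "age": 27},
    {"code": "P02", "name": "Erik", "employment": "nurse", "age": 43},
]
study_data = [
    {"code": "P01", "task_time_s": 38, "errors": 1},
    {"code": "P02", "task_time_s": 52, "errors": 3},
]

# Demographic details and study data go to separate files; only the code
# links them, and the mapping is kept apart from the analysis data.
with open("demographics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "name", "employment", "age"])
    writer.writeheader()
    writer.writerows(participants)

with open("observations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "task_time_s", "errors"])
    writer.writeheader()
    writer.writerows(study_data)
```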


Chapter 15

Inspection methods typically involve an expert role-playing the users for whom the product is designed, analyzing aspects of an interface, and identifying any potential usability problems by using a set of guidelines. Heuristic evaluation and walkthroughs can be used at any stage of a design project. They can also be used to complement user testing.

Heuristic evaluation is a method in which experts, guided by a set of usability principles, evaluate whether user-interface elements, such as dialog boxes, menus, navigation structure, online help, and so on, conform to tried and tested principles.

Walkthroughs are an alternative approach to heuristic evaluation for predicting users' problems without doing user testing. Most walkthrough methods do not involve users.

Lifelogging is another interesting variation that can be used for evaluation as well. Typically, lifelogging involves recording GPS location data and personal interaction data on cell phones.

Thursday 1 October 2015

After exercise 3.

Thea and Johan brainstorming.

Cecilia and Gustav's brainstorming map.


Out of this seminar and the brainstorming we developed two main ideas from working with pain points, personas and scenarios.
The first idea is a real-time information app letting the user know how crowded the buses are. You can use the app anytime and see information on any bus line and time when you want to travel.
With weight sensors on the bus we will know approximately how many people are on the bus and also how many people are exiting the bus.
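As a rough sketch of how that could work: divide the measured passenger load by an assumed average passenger weight. Both the average weight and the bus capacity below are values I picked for illustration, not real sensor figures:

```python
def estimate_occupancy(measured_load_kg: float,
                       avg_passenger_kg: float = 75.0,
                       capacity: int = 80):
    """Rough passenger estimate from the bus's weight sensors.
    avg_passenger_kg and capacity are assumed values for illustration."""
    passengers = round(measured_load_kg / avg_passenger_kg)
    fullness = min(passengers / capacity, 1.0)
    return passengers, fullness

passengers, fullness = estimate_occupancy(3600)
print(f"About {passengers} passengers, bus is {fullness:.0%} full")
# About 48 passengers, bus is 60% full
```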

The second idea is a tool to let parents know what activities there are around their stop. For example, if they need to wait for an hour due to the fact that they are in the middle of rush hour, the app will show the closest park, museum, library or child friendly cafe etc.