Friday 9 October 2015

Evaluation from another group

From exercise 4 we also received an evaluation from another group. The feedback was as follows:

Focus group
- It is apparent who the target audience is. However, the target audience could also include travelers in wheelchairs.

Functionality
- Assuming the user will have a smart device.
- Real-time information about how many travelers are on the route. One can view statistics on the overall number of travelers alongside travelers with strollers.

"The Feeling"
- Users will return to the app.
- Different opinions in the group on whether another step is needed or not
- Stops using the app when on the bus

Interaction & Overarching Structure
- A reverse button should be added to the Jarlaplan screen.
- Information about how to use the app - help button/introduction
- Could it be part of the already existing "Reseplaneraren"?

Primary & Secondary functions
- Clarify the "more information" screen
- Maybe add an alternate traveling route
- How many strollers/travelers
- The accessibility of the route

Design/Composition
- The information about strollers should move from the top to the bottom of the screen.
- The colours of the stroller icons are not intuitive, and the different patterns are not consistent.
- It's not clear if one should swipe backwards
- Reverse + information button should be added
- Combine with the app "Res i Stockholm"


Help
No, no help support is provided.

These are notes to think about at our next meeting, where we will design the final Hi-Fi prototype.

Thursday 8 October 2015

Evaluation to group 3

User group
Is it clear whom the service is made for?
The user group is clear. Flags are a good way of indicating that it is available in other languages.
Can also be used by others outside the user group.
Is the functionality well adjusted to the particular user group?
Clever solution. We like that the flags are on the main page. Not too much information on the main page. Main language could be English to clarify that the interface is directed towards tourists.
'The feeling'
What does the first impression tell?
Simple and clear. Minimalistic. The user's mission is to buy a ticket.
The zones are not clear. A? A+B? A+B+C? What do those mean? Is the 'Choose your zone' function necessary? Maybe a tourist map instead of a station map: the tourist could click where he wants to go, and the interface could simply tell the user the price. It is not necessary to know which zone you are in.

How are blind people going to interact with it?
Interaction and overarching structure
Is the flow clear?
'Choose your ticket type' is clear (good that the ages/limits are there!). The receipt is good; it is nice to get a confirmation that you have the correct tickets. Great that the app tells you if you have to change bus/tube. Good idea to print line switches on the ticket. The zone function is a bit unclear. Could the ticket be colour coded? This could be a good idea since the subway lines already are colour coded; it would create similarity in the design and group the interface with the existing subway map. A suggestion: minimize the user's memory load by always showing the steps and choices he has made.
Is it always clear what options and possibilities the interface provides?
We suggest that instead of the station map you have an interactive tourist map where the user can click on, for example, 'Slottet'. That way it will be easier for tourists to find their stop. Are there any solutions for error prevention? What happens if the user clicks the wrong button?

Primary and secondary functions
Are the main functions in focus in the design?
Yes. The main action is to buy a ticket, and that is clear in the interface.
Are the secondary functions valuable?
No. We think that the 'zone' function could be swapped with a tourist map.
Design/composition
Are functions and elements naturally grouped? Design/layout?
The design is clear and easy to comprehend. It might be a good idea to colour code the buttons.
Recover from errors:
What happens if you want to change the trip in the last step? How does it work with the steps: will you have to redo them if you go back, or does the information you have already entered stay unchanged?

Sketches
Are they clear enough?
Yes. The prototype is well made and it is easy to understand your concept from the popApp.

Wednesday 7 October 2015

Prototypes


Today we have been working on prototypes. We wanted to have a finished prototype for a heuristic evaluation at tomorrow's exercise.


Tuesday 6 October 2015

First ideas

During and after the 2nd reading seminar, the first ideas for the appearance and design of our first prototype started to take shape. We sketched the ideas briefly before we decided to have a long group meeting to properly design the prototype:

Seminar #2

Our project has already done most of the data gathering, and it is now time to analyse the data. To help us we have chapters 13 & 15 of the course book "Interaction Design: Beyond Human-Computer Interaction". Chapter 13 introduces the DECIDE framework, which will help us plan the evaluation of our product.

One of the points, the "C", stands for "Choose your approach and methods", and here I think we have a lot of planning to do. Since our target group is highly mobile, we will not be able to take them to an offsite location to evaluate our product. We will instead have to find a way to follow them in an outdoor setting and still track their usage of our product.

We also have to make sure not to change the behaviour of the people we are observing. This is noted as a dilemma in the book, where some argue that it is almost impossible to study people's behaviour without changing it. I think we mostly have to keep in mind that we might change the behaviour of the people we are evaluating, and therefore take extra precautions to avoid doing so.

Chapter 15 gives an alternative to this, so-called heuristic evaluation, where we instead invite experts in the field (maybe ourselves) to examine the product against a set of heuristics (usability principles). The more experts you invite, the better the result you will get.

I believe that our product's success will lie partly in how easy we make the interaction, but also in the idea itself. We first have to make sure that the product at its core brings value to users, before even thinking about how they actually interact with it. Then we will have to make the interaction as easy as possible.

I see our product becoming very self-aware, presenting info to the user without any input, and therefore most of our evaluation should be doable without the product available. If the task is to inform users about how full the next bus is, this can be simulated by standing at different bus stops around Stockholm and telling people about the next bus's capacity.

Reading seminar #2

In chapter 13 we are introduced to the evaluation framework DECIDE. This is a useful checklist for when you are planning an evaluation study. DECIDE stands for:

Determine the goals
Explore the questions
Choose the evaluation methods
Identify the practical issues
Decide how to deal with the ethical issues
Evaluate, analyze, interpret and present the data

This is pretty straightforward and recognizable from planning in general. It doesn't feel redundant though, as these are important things to know. Knowing the goal and how to achieve it is something that always comes up in group work and is really important.

Choosing the evaluation methods is also important, as the evaluation can't be done if you have gathered the wrong data. Practical issues depend on the chosen methods as well, and of course on the type of product/service you are designing. Appropriate participants, facilities, equipment, scheduling and budget are other practical issues to think about. Pilot studies can help to spot these kinds of issues early.

When evaluating/gathering data, ethical issues matter too, and to me they are very important. Universities have their own boards for reviewing activities involving human studies (IRB, Institutional Review Board). The book has a list of important things to think about when doing studies on humans, to ensure they are done ethically and that user rights are protected. Most of it is about being clear and honest with the test users, letting them know they can stop at any time, and of course preserving anonymity. The last point on the list confused me though: "Gain their trust by offering to give them a draft of the study …". That sounds a bit like you are trying to lure them into something, the way you would a dog.

Evaluating, analyzing, interpreting and presenting the data are also covered in this chapter. Key points here are reliability, validity, ecological validity, biases and scope.


Chapter 15 also covers evaluation but focuses on methods: mainly inspections, analytics and models. Inspection methods include heuristic evaluation and walkthroughs. Analytics is a more quantitative evaluation method which involves logging the users' interactions and handling large volumes of anonymized data. Predictive models are used without users being present and serve to predict user performance.

Heuristic evaluation is a method where an expert "role-plays" a user and follows a set of usability principles to identify usability problems. The first set of principles is still used, but new technologies and different types of products have led to new sets of principles, and new ones are always being developed.
The expert should iterate many times to try to find all the usability problems; still, most of the time not all of them are found. Several experts/evaluators can be hired, which is more expensive, but the method requires no users or special facilities, so it is cheaper in a way. General design guidelines can be very similar to heuristic principles, so designing with the heuristic principles in mind can be a good idea.
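As a concrete illustration (our own sketch, not an example from the book), findings from a heuristic evaluation could be recorded against Nielsen's classic heuristics, with a severity rating per problem on the commonly used 0-4 scale. The two example findings echo the feedback our prototype received; their severity numbers are made up:

```python
# A minimal sketch of recording heuristic-evaluation findings (our own
# illustration). Heuristic names follow Nielsen's classic set; severity
# uses the common 0-4 rating scale.
from dataclasses import dataclass

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    heuristic: str    # which principle is violated
    location: str     # where in the interface
    description: str  # what the problem is
    severity: int     # 0 = not a problem ... 4 = usability catastrophe

findings = [
    Finding("Consistency and standards", "route view",
            "Stroller icon colours and patterns are not consistent", 3),
    Finding("User control and freedom", "Jarlaplan screen",
            "No reverse/back button", 3),
]

# Sort by severity so several evaluators' merged notes surface the worst first.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[{f.severity}] {f.heuristic}: {f.description} ({f.location})")
```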

Walkthroughs simulate a user's experience and are a method for finding usability problems by walking through the product step by step. A cognitive walkthrough is a simulation of a user; a pluralistic walkthrough is more of a team effort involving different types of people working together on a walkthrough.

Analytics is a big part of the web nowadays and can be a useful evaluation method. It mostly amounts to logging large volumes of user-interaction data, anonymized and analyzed statistically. A very quantitative method.
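To make that concrete, here is a minimal sketch (our own illustration, not from the book) of logging one interaction event with the user ID hashed, so the stored record is anonymized. The event name and the salt are made up for the example:

```python
# A minimal sketch of anonymized interaction logging (our own illustration).
import hashlib
import json
import time

SALT = "per-deployment-secret"  # hypothetical salt so hashes can't be reversed by lookup

def anonymize(user_id: str) -> str:
    """One-way hash of the user ID; the raw ID is never stored."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

def log_event(user_id: str, event: str, **details) -> str:
    record = {
        "user": anonymize(user_id),
        "event": event,
        "time": time.time(),
        **details,
    }
    return json.dumps(record)  # in practice: append to a log store

print(log_event("alice@example.com", "view_route", stop="Jarlaplan"))
```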

Predictive models are methods for predicting user performance without users being present. GOMS, KLM and Fitts' law are the methods introduced in this chapter.
GOMS stands for goals, operators, methods and selection rules. With this method you can follow the steps and get a rough cognitive model of the user.
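To make the prediction idea concrete, here is a minimal sketch (our own illustration, not from the book) of the keystroke-level model (KLM), the simplest GOMS variant: a task time is predicted by summing standard operator times. The operator times are the classic Card, Moran and Newell estimates; the task sequence is made up:

```python
# A minimal KLM sketch (our own illustration). Operator times are the classic
# Card, Moran & Newell estimates in seconds; the task sequence is made up.
KLM_OPERATORS = {
    "K": 0.20,  # keystroke or button press
    "P": 1.10,  # point with a mouse/finger at a target
    "H": 0.40,  # home hands between keyboard and pointing device
    "M": 1.35,  # mental preparation
}

def predict_time(sequence: str) -> float:
    """Predicted execution time for a sequence of KLM operators."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: think, point at the 'buy ticket' button, tap it,
# think again, point at a zone, tap it.
print(f"{predict_time('MPKMPK'):.2f} s")  # -> 5.30 s
```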
Fitts' law is interesting as it is a mathematical formula for "predicting the time it takes to reach a target using a pointing device". In other words, how fast you can click something depends on how big it is and how far it is from your pointer.
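In its common (Shannon) formulation the law reads MT = a + b · log2(1 + D/W), where D is the distance to the target, W is its width, and a and b are constants fitted from experiments. A minimal sketch (our own illustration; the constant values below are made up):

```python
# Fitts' law in its common Shannon formulation (our own sketch).
# a and b are device-dependent constants that must be fitted from data;
# the values below are made up for illustration.
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds to hit a target of the given
    width at the given distance (same units for both)."""
    index_of_difficulty = math.log2(1 + distance / width)  # in bits
    return a + b * index_of_difficulty

# Doubling a button's width lowers the index of difficulty, hence the time:
print(f"{fitts_time(300, 20):.2f} s")  # small, far-away button
print(f"{fitts_time(300, 40):.2f} s")  # twice as wide -> faster to hit
```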

All these methods are not set in stone and you can, and probably should, tailor them for the type of product/service you are designing.

Before seminar #2

Chapter 13 focuses on evaluation frameworks, and how to properly evaluate the design of your product at different stages of development. The authors mention how a final design often is the result of an interplay between several iterations of designs and evaluations. As the designers learn more about what works and what does not, the evaluations will in turn provide more meaningful data, giving the designers more to work with. There is a powerful synergy to this process which guides designers towards intuitive designs with a high level of usability.

One of these evaluation frameworks is the so-called DECIDE framework, which contains six distinct items to work with before, during and after the evaluation. The DECIDE framework has a lot in common with the data gathering and data analysis techniques that we have discussed in the past, such as determining goals, choosing appropriate methods, identifying issues, dealing with ethical questions and properly analysing and interpreting the collected data.

A significant part of the chapter is devoted to explaining how to approach and treat the participants of an evaluation. The authors mention several important steps that should be considered, such as finding appropriate participants (i.e. the intended users of the final product) and keeping the participants' involvement and data confidential. The authors stress the importance of confidentiality and that the scope of the evaluation process should be known to the participants beforehand.

Different kinds of biases are also discussed and how they can influence how the data is interpreted and distort the result of an evaluation. This ties in to an earlier part of the chapter where the authors present a concept called ecological validity, which describes the impact that the evaluation environment can have on the result. For example, an evaluation carried out in a laboratory might yield results with high validity in and of itself, but the participants will not necessarily act like they would naturally outside of the laboratory. I found this concept particularly interesting, and I hope we get to use it in our project.

The authors also mention budget and schedule constraints. Not much is said of how to deal with these constraints, however, just that they exist and should be taken into account.

Chapter 15 continues on the topic of evaluation and presents another method called Heuristic evaluation. Instead of letting users participate in the evaluation, the product is instead evaluated by an expert with knowledge on good design and usability. The design is judged against a list of heuristics (usability principles) in an effort to identify usability problems and areas of improvement. More often than not, the heuristics in use have been developed specifically for the type of product being evaluated.

One of the key points of heuristic evaluation is that the end result becomes better the more evaluators you have. Together, the evaluators will pinpoint more design flaws and usability problems than one evaluator would have. This to me feels like another take on the concept of data triangulation mentioned earlier in the book, since different evaluators tend to focus on different flaws in the design.

The second part of chapter 15 focuses on walkthroughs, methods where the evaluator(s) go through a product design step by step, noting usability problems and areas of improvement along the way. The authors specifically mention two variants of walkthroughs, the first being the cognitive walkthrough, which focuses on the user's experience and thought processes. Here, the evaluator will often roleplay a first-time user and simulate the process of achieving a specific goal.


The other variant is called pluralistic walkthroughs and involves a group of evaluators making individual notes on the design, which they then share with each other and discuss before moving on. I found both these types of walkthroughs intriguing, and I can definitely see the benefits of applying them to our project at a later stage.

Question for the seminar: How do you tackle constraints that limit your ability to conduct evaluations?

Before Reading Seminar #2

For this seminar we read chapters 13 and 15, both of which contain theory about evaluating systems in general. Frameworks and methods are described.
In chapter 13 the main key point was the description of the DECIDE framework, listing six different steps to take when planning an evaluation study. In relation to our project, the framework could be very suitable when we start evaluating our first mockup.
As this will be our first experience of doing an evaluation study, the DECIDE framework is a great model for us to start with. Issues similar to those from the chapters read for the previous seminar are brought up here, but in a context where they are put into practice. Ethical issues are less pressing for us, though, since we do not use private data, and therefore need less consideration. Other issues concerning the evaluation are highly important, however, and the DECIDE framework is useful here in that it gives a good overview of the many key points.

Chapter 15 covered methods for an evaluation study, mainly heuristic evaluation and walkthroughs.
A heuristic evaluation is a method that uses a set of usability principles, the heuristics, to guide the inspectors doing the evaluation. Walkthroughs can be divided into two kinds, cognitive and pluralistic, and involve walking the inspectors through a task with the evaluated product.
The primary point is that these methods don't have to include users; indeed, they don't. Instead the methods are tools for understanding and gaining knowledge about the user's behaviour, and consequently for finding problem areas, checking that requirements fit, and judging whether the functionality of a system serves the user or not. All this through so-called experts and specialists who work as evaluators, whose task is to "examine the interface of an interactive product, often role-playing typical users, and suggest problems users would likely have when interacting with it (...)".
An important note is that there can be several evaluators; the more evaluators there are, the more accurately the users' issues will be found.
Since these methods do not require involving users, fewer practical and ethical issues have to be considered.
I find that the approach of the evaluation needs to be clearly set, since heuristic evaluations and walkthroughs suit different situations: a heuristic evaluation is suitable when evaluating a complete system, whereas a walkthrough is suitable for evaluating a small part of a system.
The methods described should not be the primary methods; rather, they are a good complement to user testing when analyzing a system or a product.


My question for the 2nd reading seminar: since our group is a small number of people who are perhaps the most knowledgeable about our system, what determines who should be the evaluators (experts) in our case?

Monday 5 October 2015

Before the second reading seminar

Chapter 13 is about evaluation frameworks and includes practical and ethical aspects of the research and evaluation process.
The main goal of any design process is to develop a product that meets the user's requirements.
DECIDE (determine, explore, choose, identify, decide, evaluate) is an example of an evaluation framework.

User study methods like 'think aloud', 'into the wild', lab studies and 'Wizard of Oz' are useful tools for understanding how the user interacts with the product. When designing, it is important to determine the goals: who is the user, and why is he/she interacting with the product like this?

A study should never take more than 20-30 minutes per person. It is important that the participant does not get tired or uncomfortable, because that will affect the results. We distinguish between 'into the wild' studies and lab studies. Lab studies are good for factual, quantitative data, but the data might not be very realistic because the user and the product are taken out of the intended context. 'Into the wild' studies are a bit more like ethnographical studies. The user is usually more comfortable in his natural environment, and it is easier to obtain qualitative data.

Other important things to consider are the practical and ethical aspects of user studies. Practical matters are e.g. cost, schedule and time management. Ethical aspects are about preserving the participant's anonymity by leaving out sensitive information, being honest with the participant and letting him know what is going to happen beforehand, and making sure that the participant is never uncomfortable.
Five keywords to consider when evaluating, analysing, interpreting and presenting the data are:

Reliability
Validity
Ecological validity
Biases
Scope

 
Chapter 15 is about inspection methods, evaluating data and two predictive methods: GOMS and Fitts' law.
Sometimes users are not easily accessible, or involving them is too expensive or too time-consuming. In such circumstances other people, called experts, can provide feedback. Experts can also complement user studies.

Heuristic evaluation is a usability inspection method where experts, guided by a set of rules, evaluate whether UI elements correspond to tried and tested principles. These rules include visibility, error prevention, simple and aesthetic design, and recognition. Heuristic evaluation has three stages: a briefing session, the evaluation period and the debriefing session.

Walkthroughs are an alternative approach to heuristic evaluation. A walkthrough usually does not involve a user. Instead, the developer or an expert takes the product through a task and notes usability errors. Pluralistic walkthroughs include users, developers and usability experts. Another example is the cognitive walkthrough, where the user's problem-solving process is simulated to see if each of the user's actions leads on to the next one, i.e. whether he can perform a task as intended by the developer.

Analytics is a method for quantifying the user traffic through a system. User activity is logged so that the data can be analysed to understand which parts of a web site are being used and when. This can for instance be used for advertising and for mapping how the site is used.

GOMS is an acronym for goals, operators, methods and selection rules. GOMS is an attempt to model the cognitive processes of a user who interacts with the product. It takes into consideration the state the user wants to achieve, the actions that need to be performed, learned procedures, and selection rules.


Fitts' law is a mathematical model that predicts the time it takes to reach a target with a pointing device. It is based on the size of the object and the distance to it. This helps designers place and shape buttons on, for instance, a website. The bigger the target, the easier it is to hit.

Question for the reading seminar: I have not quite understood what characterizes the 'experts' in heuristic evaluation. Are they part of the developer team, or are they independent people with expertise in interaction design?

Before Reading Seminar 2

Chapter 13 introduces the DECIDE framework, which is a tool to help you plan an evaluation. DECIDE stands for:
  • Determine the goals
The first step is to identify the goals; the goals guide the evaluation by helping to determine its scope.

  • Explore the questions
By breaking down questions you can narrow down to specific sub-questions that make the evaluation more precise. If even more specific issues need to be addressed, the sub-questions can be broken down into finer problems.

  • Choose the evaluation methods

After identifying your goals and formulating some questions, you choose your evaluation method. Your choice will depend on what data is needed to answer the questions and which theories or frameworks are appropriate to your context. Usability evaluation typically deals with whether or not the system meets the requirements; it rarely explains the reasons behind a problem. Formative evaluation helps to design the system through iterative testing; summative evaluation tests the entire system at the end. Sometimes combinations of methods are used, as they give a broad picture of how well the design meets the usability and user experience goals that were identified during the requirements activity.
  • Identify the practical issues
It helps to know in advance which practical issues to consider when conducting an evaluation. Issues that should be taken into account include access to appropriate participants, facilities, and equipment, whether schedules and budgets are realistic, and whether the evaluators have the appropriate expertise to conduct the study.
  • Decide how to deal with the ethical issues
Participants' privacy has to be protected. A coding system should be used to record each participant's data, and the code and the person's demographic details (name, employment, education, financial status, etc.) should be stored separately from the data; a minimal sketch of such a coding system follows after this list.
  • Evaluate, analyze, interpret, and present the data.
Different evaluation methods have different degrees of reliability.
Validity is concerned with whether the evaluation method measures what it is intended to measure; this includes both the method and the way it is performed.

Bias occurs when the results are distorted, for example when evaluators selectively gather data that they think is important. Two further aspects play a big role in an evaluation: practical constraints, e.g. tight schedules or a low budget, and ethical considerations, e.g. confidential information (such as medical records) or information that is private.
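As promised above, here is a minimal sketch (our own illustration, not from the book; all names and fields are made up) of the coding system for protecting participants' privacy: study data is keyed only by a participant code, while the mapping from code to demographic details lives in a separate store.

```python
# A minimal sketch of a participant coding system (our own illustration).
import secrets

demographics = {}  # code -> personal details; stored separately, restricted access
study_data = {}    # code -> evaluation data; contains no personal details

def enroll(name: str, employment: str) -> str:
    """Register a participant and return the anonymous code used everywhere else."""
    code = f"P{secrets.token_hex(3)}"  # e.g. 'P3fa91c'
    demographics[code] = {"name": name, "employment": employment}
    study_data[code] = []
    return code

code = enroll("Alice", "student")
study_data[code].append({"task": "buy ticket", "time_s": 42.0})

# Analysts work only with study_data; demographics stays locked away.
print(code, study_data[code])
```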


Chapter 15

Inspection methods typically involve an expert role-playing the users for whom the product is designed, analyzing aspects of an interface, and identifying any potential usability problems by using a set of guidelines. Heuristic evaluation and walkthroughs can be used at any stage of a design project. They can also be used to complement user testing.

Heuristic evaluation is a method in which experts, guided by a set of usability principles, evaluate whether user-interface elements, such as dialog boxes, menus, navigation structure, online help, and so on, conform to tried and tested principles.

Walkthroughs are an alternative approach to heuristic evaluation for predicting users' problems without doing user testing. Most walkthrough methods do not involve users.

Lifelogging is another interesting variation that can be used for evaluation as well. Typically, lifelogging involves recording GPS location data and personal interaction data on cell phones.