Friday 9 October 2015

Evaluation from another group

From exercise 4 we also received an evaluation from another group. The feedback was as follows:

Focus group
- It is apparent who the target audience is. However, the target audience could also include travelers in wheelchairs.

Functionality
- Assuming the user will have a smart device.
- Real-time information about how many travelers there are on the route. One can view statistics on the overall number of travelers together with the number of travelers with strollers.

"The Feeling"
- Users will return to the app.
- Different opinions in the group on whether another step is needed or not
- Users stop using the app once on the bus

Interaction & Overarching Structure
- A back button should be added to the Jarlaplan screen.
- Information about how to use the app - help button/introduction
- Part of the already existing "Reseplaneraren"?

Primary & Secondary functions
- Clarify the "more information" screen
- Maybe add an alternate traveling route
- How many strollers/travelers
- The accessibility of the route

Design/Composition
- The stroller information should move from the top to the bottom of the screen.
- The colours of the stroller icons are not intuitive, and the different patterns are not consistent
- It's not clear if one should swipe backwards
- Reverse + information button should be added
- Could be combined with the app "Res i Stockholm"


Help
No, no help support is provided.

These notes will be things to think about at our next meeting, when we design the final hi-fi prototype.

Thursday 8 October 2015

Evaluation to group 3

User group
Is it clear to whom the service is made for?
The user group is clear. Flags are a good way of indicating that it is available in other languages.
Can also be used by others outside the user group.
Is the functionality well adjusted to the particular user group?
Clever solution. We like that the flags are on the main page. Not too much information on the main page. Main language could be English to clarify that the interface is directed towards tourists.
'The feeling'
What does the first impression tell?
Simple and clear. Minimalistic. The user's mission is to buy a ticket.
The zones are not clear. A? A+B? A+B+C? What does that mean? Is the 'Choose your zone' function necessary? Maybe a tourist map instead of a station map. The tourist could click where he wants to go, and the interface could tell the user only the price. It is not necessary to know what zone you are in.

How are blind people going to interact with it?
Interaction and overarching structure
Is the flow clear?
'Choose your ticket type' is clear (good that the ages/limits are there!). The receipt is good; it is nice to get a confirmation that you have the correct tickets. Great that the app tells you if you have to change bus/tube, and a good idea to print line switches on the ticket. The zone function is a bit unclear. Could the ticket be colour coded? This could be a good idea since the subway lines are already colour coded; it would create similarity in the design and group the interface with the existing subway map. A suggestion: minimize the user's memory load by always showing the steps and choices he has made.
Is it always clear what options and possibilities the interface provides?
We suggest that, instead of the station map, you have an interactive tourist map where the users can click on, for example, 'Slottet'. That way it will be easier for the tourists to find their stop. Are there any solutions for error prevention? What happens if the user clicks on the wrong button?

Primary and secondary functions
Are the main functions in focus in the design?
Yes. The main action is to buy a ticket, and that is clear in the interface.
Are the secondary functions valuable?
No. We think that the 'zone' function could be swapped with a tourist map.
Design/composition
Are functions and elements naturally grouped? Design/layout?
The design is clear and easy to comprehend. It might be a good idea to colour code the buttons.
Recover from errors:
What happens if you want to change the trip in the last step? How do the steps work: will you have to redo them if you go back, or does the information you have already entered stay unchanged?

Sketches
Are they clear enough?
Yes. The prototype is well made and it is easy to understand your concept from the popApp.

Wednesday 7 October 2015

Prototypes


Today we have been working on prototypes. We wanted to have a finished prototype for a heuristic evaluation at tomorrow's exercise.


Tuesday 6 October 2015

First ideas

During and after the 2nd reading seminar, the first ideas for the appearance and design of our first prototype started to take shape. We sketched the ideas briefly before deciding to have a long group meeting to properly design the prototype:









Seminar #2

Our project has already done most of the data gathering, and it is now time to analyse the data. To help us, we have chapters 13 & 15 in the course book "Interaction Design: Beyond Human-Computer Interaction". Chapter 13 introduces the DECIDE framework, which will help us plan the evaluation of our product.

One of the points, the "C", stands for "Choose your approach and methods", where I think we have a lot of planning to do. Since our target group is highly mobile, we are not going to be able to take them to an offsite location to evaluate our product. We will instead have to find a way to follow them in an outdoor setting and still track their usage of our product.

We also have to make sure not to change the behaviour of the people we are observing. This is noted as a dilemma in the book, where some argue that it is almost impossible to study people's behaviour without changing it. I think we mostly have to keep in mind that we might change the behaviour of the people we are evaluating, and therefore take extra precautions to make sure that we do everything we can not to.

Chapter 15 gives an alternative to this, so-called heuristic evaluation, where we instead invite experts in the field (maybe ourselves) to examine the product against a set of heuristics (usability principles). The more experts you invite, the better the result will be.

I believe that our product's success will lie partly in how easy we make the HCI, but also in the actual idea itself. We first have to make sure that the product at its core brings value to users, without even thinking about how they actually interact with the product. Then we will have to make the interaction as easy as possible.

I see our product becoming very self-aware, presenting info to the user without any input, and therefore most of our evaluation should be doable without the product available. If the task is about informing users about how full the next bus is, this can be simulated by standing at different bus stops around Stockholm and telling people about the next bus's capacity.

Reading seminar #2

In chapter 13 we are introduced to the evaluation framework DECIDE. This is a useful checklist for when you are going to plan an evaluation study. DECIDE stands for:

Determine the goals
Explore the questions
Choose the evaluation methods
Identify the practical issues
Decide how to deal with the ethical issues
Evaluate, analyze, interpret and present the data

This is pretty straightforward and recognizable from all kinds of planning. It does not feel redundant though, as these are important things to know. Knowing the goal and how to achieve it is something that always shows up in group work and is really important.

Choosing the evaluation methods is also important, as the evaluation cannot be done if you have gathered the wrong data. Practical issues depend on the chosen methods as well, and of course on the type of product/service you are designing. Appropriate participants, facilities, equipment, scheduling and budget are other practical issues to think about. Pilot studies can help to spot different types of issues.

When evaluating/gathering data, ethical issues are important too, and to me very important. Universities have their own boards for reviewing activities involving human studies (IRB, Institutional Review Board). The book has a list of important things to think about when doing studies on humans, to ensure they are done ethically and protect the users' rights. Most of it is about being clear and honest with the test users, letting them know they can stop at any time, and of course preserving anonymity. The last point on the list confused me though: ”Gain their trust by offering to give them a draft of the study …”. That sounds a bit like you are going to lure them into something, like you would a dog.

Evaluating, analyzing, interpreting and presenting the data are also covered in this chapter. Key points here are reliability, validity, ecological validity, biases and scope.


Chapter 15 also covers evaluation but focuses on methods, mainly inspections, analytics and models. Inspection methods include heuristic evaluation and walkthroughs. Analytics is a more quantitative evaluation method, which involves logging users' interactions and handling large volumes of data, anonymized. Predictive models are used without users being present, to predict user performance.

Heuristic evaluation is a method where an expert ”role-plays” a user and follows a set of usability principles to identify usability problems. The first set of principles is still in use, but new technologies and different types of products have led to new sets of principles, and new ones are always being developed.
The expert should iterate many times to try to find all the usability problems. Still, most of the time not all of them are found. Several experts/evaluators can be hired, which is more expensive, but since the method requires no users or special facilities it is cheap in another sense. General design guidelines can be very similar to heuristic principles, so designing with the heuristic principles in mind can be a good idea.

Walkthroughs simulate a user's experience and are a method of finding usability problems by walking through the product step by step. A cognitive walkthrough is a simulation of a single user; a pluralistic walkthrough is more of a team effort, involving different types of people working together on a walkthrough.

Analytics is a big part of the web nowadays and can be a useful evaluation method. It is mostly just logging large volumes of data about the users' interactions, anonymized and analyzed statistically. A very quantitative method.
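As a toy illustration of such a pipeline (the event names, user ids and the hashing scheme are all made up for the example, not from the book), anonymizing and aggregating an interaction log could look like this:

```python
import hashlib
from collections import Counter

def anonymize(user_id: str) -> str:
    """Replace a raw user id with a truncated one-way hash before analysis."""
    return hashlib.sha256(user_id.encode()).hexdigest()[:12]

# Hypothetical interaction log: (user_id, event) pairs.
raw_log = [
    ("alice", "buy_ticket"),
    ("bob", "view_map"),
    ("alice", "view_map"),
    ("carol", "buy_ticket"),
]

# Strip identities, then count events statistically.
anon_log = [(anonymize(uid), event) for uid, event in raw_log]
event_counts = Counter(event for _, event in anon_log)
print(dict(event_counts))  # {'buy_ticket': 2, 'view_map': 2}
```

A real analytics service would of course also timestamp, batch and store the events, but the core idea is the same: strip identities first, then analyze in aggregate.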

Predictive models are methods for making predictions without users being present. GOMS, KLM and Fitts' law are methods introduced in this chapter that can be used to predict user performance.
GOMS stands for goals, operators, methods and selection rules. With this method you can follow the steps and get a rough cognitive model of the user. KLM, the keystroke-level model, is the simplest GOMS variant: you estimate task time by adding up standard times for each elementary operation.
Fitts' law is interesting as it is a mathematical formula for ”predicting the time it takes to reach a target using a pointing device”. In other words, how fast you can click something depends on how big it is, how far it is from your pointer, and so on.
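A quick sketch of both predictive models in Python. The KLM operator times are the standard published values from Card, Moran & Newell; the task sequence and the Fitts' law constants a and b are made-up illustrations, not from the book:

```python
import math

# Standard KLM operator times in seconds (Card, Moran & Newell).
KLM_OPS = {
    "K": 0.2,   # keystroke / button press
    "P": 1.1,   # point at a target with a pointing device
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence: str) -> float:
    """Estimate task time by summing operator times, e.g. 'MPK'."""
    return sum(KLM_OPS[op] for op in sequence)

def fitts_mt(a: float, b: float, distance: float, width: float) -> float:
    """Fitts' law (Shannon formulation): MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

# Hypothetical task: mentally prepare, point at a button, tap it.
print(round(klm_estimate("MPK"), 2))  # 1.35 + 1.1 + 0.2 = 2.65

# With made-up device constants, a large nearby button is predicted
# to be faster to hit than a small faraway one.
a, b = 0.1, 0.15
print(round(fitts_mt(a, b, distance=400, width=20), 2))  # small, far target
print(round(fitts_mt(a, b, distance=100, width=80), 2))  # large, near target
```

Even this crude sketch shows why the models are useful: you can compare two candidate layouts before any user ever touches the prototype.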

All these methods are not set in stone and you can, and probably should, tailor them for the type of product/service you are designing.

Before seminar #2

Chapter 13 focuses on evaluation frameworks and how to properly evaluate the design of your product at different stages of development. The authors mention how a final design is often the result of an interplay between several iterations of designs and evaluations. As the designers learn more about what works and what does not, the evaluations will in turn provide more meaningful data, giving the designers more to work with. There is a powerful synergy to this process which guides designers towards intuitive designs with a high level of usability.

One of these evaluation frameworks is the so-called DECIDE framework, which contains six distinct items to work with before, during and after the evaluation. The DECIDE framework has a lot in common with the data gathering and data analysing techniques that we have discussed in the past, such as determining goals, choosing appropriate methods, identifying issues, dealing with ethical questions, and properly analysing and interpreting the collected data.

A significant part of the chapter is devoted to explaining how to approach and treat the participants of an evaluation. The authors mention several important steps that should be considered, such as finding appropriate participants (i.e. the intended users of the final product) and keeping the participants' involvement and data confidential. The authors stress the importance of confidentiality and that the scope of the evaluation process should be known to the participants beforehand.

Different kinds of biases are also discussed and how they can influence how the data is interpreted and distort the result of an evaluation. This ties in to an earlier part of the chapter where the authors present a concept called ecological validity, which describes the impact that the evaluation environment can have on the result. For example, an evaluation carried out in a laboratory might yield results with high validity in and of itself, but the participants will not necessarily act like they would naturally outside of the laboratory. I found this concept particularly interesting, and I hope we get to use it in our project.

The authors also mention budget and schedule constraints. Not much is said about how to deal with these constraints, however, just that they exist and should be taken into account.

Chapter 15 continues on the topic of evaluation and presents another method called heuristic evaluation. Instead of letting users participate in the evaluation, the product is evaluated by an expert with knowledge of good design and usability. The design is judged against a list of heuristics (usability principles) in an effort to identify usability problems and areas of improvement. More often than not, the heuristics in use have been developed specifically for the type of product being evaluated.

One of the key points of heuristic evaluation is that the end result becomes better the more evaluators you have. Together, the evaluators will pinpoint more design flaws and usability problems than one evaluator would. This to me feels like another take on the concept of data triangulation mentioned earlier in the book, since different evaluators tend to focus on different flaws in the design.

The second part of chapter 15 focuses on walkthroughs: methods where the evaluator(s) go through a product design step by step, noting usability problems and areas of improvement along the way. The authors specifically mention two variants of walkthroughs, the first being cognitive walkthroughs, which focus on the user's experience and thought processes. Here, the evaluator will often role-play a first-time user and simulate the process of achieving a specific goal.


The other variant is called pluralistic walkthroughs and involves a group of evaluators making individual notes on the design, which they then share with each other and discuss before moving on. I found both these types of walkthroughs intriguing, and I can definitely see the benefits of applying them to our project at a later stage.

Question for the seminar: How do you tackle constraints that limit your ability to conduct evaluations?