Introduction

The Interaction Design Foundation (IDF) defines UX (user experience) research as the systematic investigation of users and their requirements, conducted in order to add context and insight to the process of designing the user experience. UX research employs a variety of techniques, tools, and methodologies to reach conclusions, determine facts, and uncover problems, thereby revealing valuable information that can be fed into the design process.

According to Rohrer (2014), UX research methods can be viewed along three dimensions:

  • 1st dimension:
      • Quantitative – much better suited for answering how many and how much types of questions
      • Qualitative – much better suited for answering questions about why or how to fix a problem
  • 2nd dimension:
      • Attitudinal – what people say
      • Behavioral – what people do
  • 3rd dimension – Context of use:
      • Natural or near-natural use of the product
      • Scripted use of the product
      • Not using the product during the study
      • A hybrid of the above

This report will analyse and compare three user research methods, in the following order: 

  1. Diary Study;
  2. Card Sorting;
  3. Usability Testing. 

Each method will be thoroughly explained in three parts:

  1. Intro;
  2. When to conduct;
  3. Example case study discussion.

Diary Study

1. Intro

Flaherty (2016) defines a diary study as a research method used to collect qualitative data about user behaviors, activities, and experiences over time. During the defined reporting period, study participants are asked to keep a diary and log specific information about the activities being studied.

The diary can be unstructured, providing no specified format, allowing participants to describe their experience in the way they find best (or easiest). More often, though, diaries are structured, providing participants with a set of questions or probes to respond to (Baxter & Courage, 2005).
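As a minimal illustration of what such a structured diary might look like in practice (the probes and answers below are hypothetical, not taken from any of the cited studies), each prompt can be expressed as a small template that participants complete whenever they log an activity:

    # Hypothetical structured diary template for a product-usage study.
    # Each probe is a question the participant answers when logging an activity.
    diary_probes = [
        "What were you trying to do?",
        "Where were you when it happened?",
        "Which device or app did you use?",
        "How satisfied were you with the outcome? (1-5)",
        "Did anything prevent you from completing the task?",
    ]

    def new_entry(answers):
        """Pair each probe with the participant's answer for one diary entry."""
        return dict(zip(diary_probes, answers))

    entry = new_entry([
        "Pay a utility bill", "On the bus", "Banking app on my phone", "4",
        "The session timed out once",
    ])
    for probe, answer in entry.items():
        print(f"{probe}\n  -> {answer}")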

However, according to Flaherty (2016), diary studies rarely provide observations as rich or detailed as a true field study, but they can serve as a decent approximation. Preece (2017) also highlights that even though they are suitable for long-term studies, those lasting longer than two weeks are less likely to be successful.

2. When To Conduct

Flaherty (2016) also highlights how useful diary studies are for understanding long-term behaviours and for answering questions about:

  • Attitudes and motivations — “How are users feeling and thinking when performing specific tasks?”
  • Habits — “What time of day do users engage with a product?”
  • Usage scenarios — “What are the users’ workflows for completing certain tasks?” (These scenarios can be used for user testing later in the process.)
  • Changes in behaviors — “How loyal are customers over time?”
  • Customer journeys — “What is the typical customer journey as they interact with your website/app?”

Diary studies are often organized to focus on one of the following topics:

  • Understanding all interactions within a site or an app
  • Gathering general information about user behaviour 
  • Understanding how people complete general activities

3. Example Case Study Discussion

Work Engagement and Financial Returns: A Diary Study on the Role of Job and Personal Resources

Despoina Xanthopoulou, Arnold B. Bakker, Evangelia Demerouti and Wilmar B. Schaufeli published a case study in which they investigated how daily fluctuations in job resources are related to employees’ levels of personal resources, work engagement, and financial returns. 

They chose 45 participants, who were employees from three branches of a Greek fast-food company. Survey packages were handed out and instructions were provided to each employee individually. Employees were instructed to fill in the general questionnaire as soon as they received their survey package and to fill in the diary over five consecutive workdays, at the end of their shift, before leaving the workplace.

The study showed that financial performance is mainly influenced by situational factors (i.e. branch and shift) and that the remaining variance is explained by factors that involve action (e.g. work engagement) rather than by beliefs (e.g. personal resources). When employees are focused on their customers, they are more likely to bring in profit than when they merely believe they are capable of serving their customers adequately. 

This study is a good example of the diary study method because it clearly shows the reasons behind the variety of behaviours each employee exhibits. However, five days is a short period for a diary study of this type, because samples should be collected over a longer period of time in order to generalise findings about the employees. 

Card Sorting

1. Intro

According to Sherwin (2018), card sorting is a qualitative UX research method in which study participants group individual labels written on notecards according to criteria that make sense to them. There are two main variations:

  • Open Card Sorting – the most common type; users group the cards and assign names to the groups;
  • Closed Card Sorting – users are given a predefined set of group names into which they must organize the cards. The downside of this variation is that it may simply test users’ ability to place content into the “proper” bucket, rather than revealing how they would naturally group the content themselves.

Cooper (2014) also highlights the importance of analysing the results carefully, either by looking for trends or using statistical analysis to uncover patterns and correlations.
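As an illustration of the kind of analysis Cooper refers to, the sketch below (in Python, with hypothetical participants and card labels) builds a simple co-occurrence count from open card sort results; pairs of cards grouped together most often suggest candidate categories for the information architecture:

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical open card sort results: each participant's piles of card labels.
    sorts = [
        [["Login", "Register", "Forgot password"], ["Invoices", "Receipts"]],
        [["Login", "Register"], ["Forgot password", "Invoices", "Receipts"]],
        [["Login", "Forgot password", "Register"], ["Invoices", "Receipts"]],
    ]

    # Count how often each pair of cards was placed in the same pile.
    co_occurrence = defaultdict(int)
    for participant in sorts:
        for pile in participant:
            for a, b in combinations(sorted(pile), 2):
                co_occurrence[(a, b)] += 1

    # Pairs grouped together by most participants are the strongest signals.
    for (a, b), count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
        print(f"{a} / {b}: grouped together by {count} of {len(sorts)} participants")

More formal analyses (e.g. hierarchical clustering of a similarity matrix) follow the same idea on a larger scale.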

2. When To Conduct

Card sorting is a good way to enhance usability by creating an information architecture that reflects how users view the content (Nielsen, 2004). This method uncovers how the target audience’s domain knowledge is structured and helps create an information architecture that matches users’ expectations; Sherwin (2018) recommends conducting it with at least 15 to 20 users.

3. Example Case Study Discussion

Gaining User Insight: A Case Study Illustrating the Card Sort Technique

Angi Faiks and Nancy Hyland published a case study in which they investigated how users would organize a set of concepts to be included in an online digital library help system using the open card sorting technique.

Twelve participants were randomly chosen from the academic community: five undergraduate students, two graduate students, two faculty members, and three staff members. 

The participants were instructed to sort 50 individual cards with Gateway Help topics by placing similar cards into piles. Once the cards are grouped, the researchers learn how an individual, or a composite of individuals, combines similar concepts.

The study was very valuable in helping the committee gain perspective on how users would organize the help system if given the opportunity. However, it didn’t explain how effectively users could find relevant information in the final Gateway Help system.

The card-sorting method had a positive effect on the design of CUL’s Gateway Help. The committee was able to incorporate user input on the organization of its interface before total system design, resulting in a structure with which everyone was satisfied. Moreover, the simple satisfaction of incorporating the user’s point of view had a tremendous impact on the committee’s confidence.

However, a user cannot place one card in two piles when a concept falls into more than one category. This is a disadvantage in this case, because hyperlinks allow a single page or concept to reside in multiple places in the final system.

 

Usability Testing

1. Intro

According to Cooper (2014), usability testing is a qualitative/quantitative method focused on measuring how well users can complete specific, standardized tasks, as well as what problems they encounter in doing so.

Participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service (Rohrer, 2014).

Cooper (2014) also highlights that results often reveal areas where users have problems understanding and using the product, as well as places where users are more likely to be successful.

There is a lot of discussion about the number of participants needed for usability testing. Nielsen and Landauer (1993) found that there is a better return on investment when more rounds of testing are conducted, with five participants per round. In other words, more usability issues will be found by conducting three rounds of testing with five participants each, iterating on the design between rounds, than by conducting a single study with fifteen participants. 
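A brief sketch of the reasoning behind this, based on the problem-discovery model Nielsen and Landauer propose (the 31% per-participant discovery rate used below is the average commonly quoted from their study, assumed here purely for illustration):

    # Nielsen and Landauer (1993) model the share of usability problems found by
    # n participants as 1 - (1 - L)**n, where L is the probability that a single
    # participant uncovers any given problem (assumed here to be 0.31).
    L = 0.31

    def problems_found(n, discovery_rate=L):
        """Expected share of existing problems found by n test participants."""
        return 1 - (1 - discovery_rate) ** n

    print(f"5 users:  {problems_found(5):.0%} of problems in one round")   # ~84%
    print(f"15 users: {problems_found(15):.0%} of problems in one round")  # ~100%

Within a single round, the extra ten participants add relatively little; running three rounds of five lets each round test a design already improved by the previous one, which is why iteration pays off.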

According to the UX Booth Editorial Team (2018), participants should match the user personas’ profiles. Recruiting colleagues or users who don’t match the personas may create bias. 

There are many variations of usability tests, but only three are commonly used:

  • Moderated usability test – can be conducted in person, or remotely via screen share and video.
  • Unmoderated usability test – conducted online, at the user’s convenience.
  • Guerrilla testing – a modern, lightweight take on traditional tests, typically done out in the field, where users are asked to complete basic tasks with a website or service.

2. When To Conduct

The appropriate place for usability testing is quite late in the design cycle, once there is a coherent concept and sufficient detail to build prototypes. Whether what is tested is working software, a clickable prototype, or even a paper prototype, the point of the test is to validate the product design (Cooper, 2014).

3. Example Case Study Discussion

Janet Chisman, Karen Diller, and Sharon Walbridge published a case study in which they investigated how easily users could navigate the Web-based Griffin catalogue used at Washington State University and whether they understood what they were seeing. The team also wanted to test whether library patrons could find the electronic indexes and links to non-Washington State University library catalogues from the existing Libraries Gateway.

They used usability testing as their main research method, following the guidance in Joseph S. Dumas and Janice Redish’s “Practical Guide to Usability Testing” along with Jeffrey Rubin’s “Handbook of Usability Testing”, which contained practical advice and helped lay out a road map to follow. They decided to do the initial Griffin test with eight participants on the Pullman campus (four novice and four expert computer/library users) and four participants on the Vancouver campus. 

First, participants were given the Griffin Web Pac test, which contained 45 questions. Afterwards, they received a second test of 14 questions dealing with electronic indexes and non-Washington State University library catalogues.

The results showed that the Washington State University team’s categories of novice and expert computer/library users did not correlate with a participant’s ability to use the Web Pac. Participants did not understand serials and could not identify them in a browse display containing both books and serials. The team also found that users did not realise they could narrow a search by date, type of material, and so on, and that users did not understand the library’s use of multiple call number schemes or of cross references. 

This study is a very good example of how to use usability testing to gain insights into user behaviour. In this case, it gave a clear picture of what users think about the tested product, what they struggle with, and what they find useful. After the testing was completed, the team compiled a list of actions and changes to be made to the existing system. They also realised that preparation before a test can be almost as important as the test itself.

 

Conclusion

This report has discussed and compared three user research methods: Diary Study, Card Sorting and Usability Testing. Even though they are classed as qualitative methods, all of them can gather both qualitative and quantitative data.

The diary study was explained along with its process and guidance on when to use this research technique. It is the best way to gain clear insight into users’ behaviour in their natural context and environment. Even though a large amount of data can be collected with this method, studies running longer than two weeks are less likely to be successful. 

The card sorting method followed, with a discussion of how the sorting is performed. It is a very cheap form of research, an easy technique for both users and clients to understand, and a good way to get user input on ideas early in a UX project. When the open variation is conducted, it is very valuable for creating an information architecture that matches users’ mental models. Besides qualitative data, this method can also collect quantitative data (e.g. by counting which cards appeared together most often).

Finally, usability testing was described along with its common variations and examples of where it can be used. This research method differs from the others in that it can collect a large amount of data and build strong empathy with users. However, it takes a significant amount of time to analyse each session: this can mean transcribing the sessions, identifying the problems, categorising them, and studying how frequently each category occurs.
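As a minimal sketch of that last analysis step (the coded findings and category names below are hypothetical), the frequency of each problem category can be tallied from a coded list of observations:

    from collections import Counter

    # Hypothetical coded observations from usability test sessions: each finding
    # has already been assigned a problem category by the researcher.
    findings = [
        "navigation", "terminology", "navigation", "feedback",
        "terminology", "navigation", "layout", "terminology",
    ]

    # Count how often each category occurs to see where users struggle most.
    for category, count in Counter(findings).most_common():
        print(f"{category}: {count} occurrences")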

The example case studies for each research method provide fairly good illustrations of how each method can be used within a project. 

The diary study answered the questions it set out to answer and therefore proved to be the right choice for that case. 

Even though the card sorting method helped provide a better insight into the users’ mental model, it did not yield perfect data, because users could not place the same card in more than one category. 

Usability testing proved to be the right method for this type of research because it gave a clear picture of the good and bad patterns in the design, and a clear indication of how to fix them.

 

References

  1. UX Booth Editorial Team. (n.d.). Complete Beginner’s Guide to UX Research. Retrieved November 18, 2018, from https://www.uxbooth.com/articles/complete-beginners-guide-to-design-research/
  2. Cooper, A. (2014). About Face: The essentials of interaction design. Indianapolis, IN: Wiley.
  3. Courage, C., & Baxter, K. (2005). Understanding your users: A practical guide to user requirements: Methods, tools, and techniques. San Francisco, CA: Morgan Kaufmann.
  4. Flaherty, K. (2016, June 5). Diary Studies: Understanding Long-Term User Behavior and Experiences. Retrieved November 17, 2018, from https://www.nngroup.com/articles/diary-studies/
  5. Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. Berkeley, CA: New Riders.
  6. Nielsen, J. (2009, August 24). Card Sorting: Pushing Users Beyond Terminology Matches. Retrieved November 17, 2018, from https://www.nngroup.com/articles/card-sorting-terminology-matches/?lm=card-sorting-definition&pt=article
  7. Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of ACM INTERCHI’93 Conference (Amsterdam, the Netherlands, 24-29 April), 206-213.
  8. Preece, J., Sharp, H., & Rogers, Y. (2017). Interaction design: Beyond human-computer interaction. Chichester (West Sussex, United Kingdom): Wiley.
  9. Rohrer, C. (2014, October 12). When to Use Which User-Experience Research Methods. Retrieved November 16, 2018, from https://www.nngroup.com/articles/which-ux-research-methods/
  10. Sherwin, K. (2018, March 18). Card Sorting: Uncover Users’ Mental Models for Better Information Architecture. Retrieved November 17, 2018, from https://www.nngroup.com/articles/card-sorting-definition/
  11. Stockwell, A. (2016, March 31). UX Foundations: Research. Retrieved November 15, 2018, from https://www.lynda.com/User-Experience-tutorials/UX-Research-Fundamentals/439418-2.html