Assignment #8
Pilot Usability Study
April 29, 2004

Table of Contents
  1. Introduction

  2. Method

  3. Test Measures

  4. Results

  5. Discussion

  6. Formal Experiment Design

  7. Appendix

  8. Work Distribution Table

 

Introduction

The system being evaluated in this usability study is a web interface to a repository of research documents and projects in the field of IT and Development. The purpose of the study is to determine how the changes we made to the UI affect users’ understanding of and interaction with the system. The study lets us evaluate whether the changes were effective and whether they introduced any new issues we need to address. Most of our changes came out of the heuristic evaluation of our system by the Road Sage group. We treated their evaluation as a list of suggestions and made changes to address the major issues they raised; the minor issues were discussed as a team, and some of them were also addressed in this new interface.

 

Method

Participants

We chose participants who had not taken part in our initial prototype testing. This gave us people with as little bias as possible who could offer a fresh perspective on our prototype. The selected candidates were also highly educated and had exposure to work in the field of IT and Development.

          | Participant 1                                                    | Participant 2                                                | Participant 3
Gender    | Male                                                             | Female                                                       | Male
Age       | 26                                                               | 27                                                           | 26
Brief     | Former scientist from MIT, now doing research as a UNIDO fellow | Background in South Asian Studies; former investment banker | Second-year SIMS Master’s student
Interests | Currently a Mechanical Engineering PhD student                   | Currently pursuing an M.A. in Asian Studies                  | Former Olympian
Home      | Berkeley                                                         | Oakland                                                      | San Francisco

 

Apparatus
  • Laptop running Windows XP with a wireless network connection
  • Interactive prototype preloaded in Internet Explorer 5.5
  • Touchpad pointing device
  • Location: the relatively quiet SIMS lounge

 

Tasks
  1. Task #1 Search: Search for all documents about for-profit organizations in India and select the one you think is most relevant.

    Please start from the homepage and determine what sequence of choices will allow you to accomplish this goal.

  2. Task #2 View Comments: View several comments in the document you found in Task #1.
     
  3. Task #3 Add Comment: Now we want you to add your own comment to this document. Find a piece of information you may find particularly notable and comment on it.
     
  4. Task #4 Edit Project: Assume you have already added your own project to the Collaboration Repository. Please find your project within the system and edit some information about it. The following information must be changed: URL, funding sponsor. In addition, ensure that your project is also categorized under “Morocco.”

 

Procedure
  • Introductions and signing of the consent form.
  • Explain the background and progress of the project.
  • Provide the task form to the participant and have them read the additional background information for the tasks.
  • Have the participant read each task before starting it, and clarify anything they may be confused or doubtful about.
  • When they are ready to proceed, start the timer.
  • Observe the participant and record the necessary test measures while they complete the tasks.
  • Administer the post-interview questionnaire and ask any additional questions.

 

 

Test Measures

We took several measures into account during and after each interview to better analyze our user interface. Some measures, such as error rate, were hard to record because we did not want the participant to feel uncomfortable or pressured when mistakes were made. The post-interview questions focused on the user experience and on identifying the areas we most need to improve. They incorporate a Likert rating scale, yes/no answers, and a few open-ended questions to gather additional commentary.

 

Interview Measures
  • Error rate per task: the number of errors the participant made while attempting each task
  • Average time per task: the amount of time needed to complete each task
  • Time between a user’s request for help and program feedback: how long the user takes to find an answer to their question through the system (when applicable)
  • Task difficulty: our general conclusion based on the participant’s questions and body language
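
As a concrete illustration, the sketch below (Python, not part of the original study materials; all field names are our own assumptions) shows the kind of per-task record these measures could be logged in.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TaskMeasure:
        """One participant's measurements for a single task (illustrative only)."""
        participant: str                      # e.g. "Participant 1"
        task: str                             # e.g. "Task 1: Search"
        errors: int = 0                       # number of errors observed during the task
        seconds: float = 0.0                  # time taken to complete the task
        help_latency: Optional[float] = None  # time between asking for help and finding it, if applicable
        judged_difficult: bool = False        # our subjective read of questions and body language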

 

Post-Interview Questions
  1. How pleased is the user with the prototype? (Likert Scale 1-5, 5 being best)
  2. Would you recommend this to an associate, assuming this was a fully working product? (Yes/No)
  3. What was the least enjoyable or most confusing task to do?
  4. Are you satisfied with the high-level categories (Add Project, Home, etc.)?
  5. It is easy to add a comment. (Likert Scale 1-5, 5 being best)
  6. It is easy to view a comment. (Likert Scale 1-5, 5 being best)

 

 

Results

Interview Results

 

Timing (in seconds)
        | Participant 1 | Participant 2 | Participant 3 | Average
Task 1  | 201.9         | 189.6         | 84.1          | 158.5
Task 2  | 53.0          | 21.2          | 14.7          | 29.6
Task 3  | 130.1         | 24.8          | 96.8          | 83.9
Task 4  | 317.9         | 74.4          | 261.2         | 217.8

 

Number of Errors per Task
        | Participant 1 | Participant 2 | Participant 3 | Average
Task 1  | 2             | 1             | 1             | 1.3
Task 2  | 1             | 1             | 0             | 0.7
Task 3  | 0             | 1             | 1             | 0.7
Task 4  | 1             | 2             | 2             | 1.7
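
The Average columns in the two tables above are simple means over the three participants; the short Python sketch below (ours, for illustration only) reproduces them from the raw values.

    # Raw per-participant values copied from the tables above.
    times = {   # seconds
        "Task 1": [201.9, 189.6, 84.1],
        "Task 2": [53.0, 21.2, 14.7],
        "Task 3": [130.1, 24.8, 96.8],
        "Task 4": [317.9, 74.4, 261.2],
    }
    errors = {
        "Task 1": [2, 1, 1],
        "Task 2": [1, 1, 0],
        "Task 3": [0, 1, 1],
        "Task 4": [1, 2, 2],
    }

    def mean(values):
        return sum(values) / len(values)

    for task in times:
        print(f"{task}: avg time {mean(times[task]):.1f} s, "
              f"avg errors {mean(errors[task]):.1f}")
    # Output matches the Average columns above, e.g.:
    # Task 1: avg time 158.5 s, avg errors 1.3
    # Task 4: avg time 217.8 s, avg errors 1.7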

 

Task Difficulty (was the task judged difficult for this participant, based on our observations?)
        | Participant 1 | Participant 2 | Participant 3
Task 1  | Yes           | Yes           | Yes
Task 2  | No            | No            | No
Task 3  | Yes           | No            | No
Task 4  | No            | Yes           | No

 

Post-Interview Results
Participant 1
  1. How pleased is the user with the prototype? (Likert Scale 1-5 (best))
    Answer: 4
     
  2. Would you recommend this to an associate assuming this was a full working product? (Yes/No)
    Answer: Yes
     
  3. What was the least enjoyable task or most confusing task to do?
    Answer: The 4th task was the hardest. Confusing to know which fields were required and which were not, and did not realize they had to hit ‘Continue.’
     
  4. Are you satisfied with the high level categories (Add Project, Home, etc)
    Answer: No. Did not understand the difference between Add Project and Add Document, or even what “Browse” means.
     
  5. It is easy to add a comment. (Likert Scale 1-5 (5 being best))
    Answer: 2
     
  6. It is easy to view a comment. (Likert Scale)
    Answer: 4

Participant 2
  1. How pleased is the user with the prototype? (Likert Scale 1-5 (best))
    Answer: 4
     
  2. Would you recommend this to an associate assuming this was a full working product? (Yes/No)
    Answer: Yes
     
  3. What was the least enjoyable task or most confusing task to do?
    Answer: The 4th task was the hardest. Confusing to know whether to choose My Stuff, Edit Project, or Edit Document.
     
  4. Are you satisfied with the high level categories (Add Project, Home, etc)
    Answer: No. Did not understand the difference between Add Project and Add Document, or what “My Stuff” means.
     
  5. It is easy to add a comment. (Likert Scale 1-5 (5 being best))
    Answer: 5
     
  6. It is easy to view a comment. (Likert Scale)
    Answer: 4

 

Participant 3
  1. How pleased is the user with the prototype? (Likert Scale 1-5 (best))
    Answer: 4
     
  2. Would you recommend this to an associate assuming this was a full working product? (Yes/No)
    Answer: Yes (says it was the only one of its kind he has heard of, and feels it would be better than Google since it's specialized).
     
  3. What was the least enjoyable task or most confusing task to do?
    Answer: The 4th task was the hardest. Confusing to know whether to choose My Stuff, Edit Project, or Edit Document.
     
  4. Are you satisfied with the high level categories (Add Project, Home, etc)
    Answer: Yes, though the project/document distinction was not clear. Suggested moving Help and About Us outside of the main task bar.
     
  5. It is easy to add a comment. (Likert Scale 1-5 (5 being best))
    Answer: 5
     
  6. It is easy to view a comment. (Likert Scale)
    Answer: 5
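
For reference, the Likert-scale answers above (questions 1, 5, and 6) can be summarized in a few lines; this sketch is ours and only restates the numbers already listed for the three participants.

    # Likert responses (1-5, 5 best) from the three participants above.
    likert = {
        "Q1 overall satisfaction": [4, 4, 4],
        "Q5 easy to add a comment": [2, 5, 5],
        "Q6 easy to view a comment": [4, 4, 5],
    }

    for question, scores in likert.items():
        print(f"{question}: mean {sum(scores) / len(scores):.2f}")
    # Q1 overall satisfaction: mean 4.00
    # Q5 easy to add a comment: mean 4.00
    # Q6 easy to view a comment: mean 4.33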

 

Discussion

We thought the pilot study was an excellent way to analyze our second prototype. It was very insightful and made it easier to determine which areas to focus on most and how to fix them. More specifically, the areas we should address immediately are:

  • Provide more clarity between top navigation links.
  • Move Help and About Us outside of the top navigation.
  • For Add/Edit Documents/Projects, put the required fields on the first page. Ensure the user knows there is more information but that it is optional.
  • Make the document-editing process more intuitive, since users are unlikely to read the help.
  • Indicate required fields on search pages.
  • Change the name of My Stuff to My Account.
  • Using categories for searching is still difficult. Users often get no results and have to try multiple searches. One interviewee expressed interest in browsing many documents themselves, since their topic may not be very clear. Another suggestion was to display the number of documents available for certain search options, such as for each country.
  • Viewing comments was particularly easy, though it was still unclear how to view comments for the whole page versus a particular section.
  • On the commenting page, change the pencil icon to mean Add Comment rather than View Comments.
  • Make the instructions for adding a comment more prominent.
  • Make documents easier to read (change background color).

From the error rates and average times we measured, we realized that providing instant feedback reduced both. For example, although many users did not read the instructions on how to add a comment, the pop-up error message provided enough information for them to quickly go back and complete the task. However, for searching and for editing document information, little or no feedback was given when errors were made, so these tasks took much longer and had higher error rates.

Overall, the prototype and the goals of the project were well received. Since our interviewees work in the field of IT and Development, they praised our work and expressed interest in using the tool.

 

Formal Experiment Design
Hypotheses

A frequent area of discussion within our project group, and with the users we have interviewed to date, is the granularity of commenting on documents. Our original idea was to allow users to select individual words or phrases in a document and comment on their selection. However, in our usability tests, some users indicated that this level of granularity was too specific and that the ability to comment on entire paragraphs or on the entire document was sufficient. Our first hypothesis going into a formal experiment is that users will prefer having the option to comment on individual paragraphs rather than being limited to commenting on the entire document only. Our second hypothesis is that commenting on individual words or phrases is not a feature users desire, and that they are satisfied with commenting at the paragraph level.

 

Factors and Levels

Factors (independent variables)

  • 3 interfaces:
    • Document-level only comments
    • Both document and paragraph level comments
    • Document, paragraph, and word/phrase level comments
  • 30 users: between-subjects testing, 10 for each interface

Response Variables (dependent variables)

  • Number of comments added
  • Satisfaction rating: how satisfied the user was with the available options for adding comments (e.g., for the document-level-only interface, whether users were dissatisfied because they could not attach their comment to an individual paragraph, phrase, or word)

 

Blocking and Repetitions

Each group of 10 would test one interface:

  • Group 1: Document-level only comments
  • Group 2: Both document and paragraph level comments
  • Group 3: Document, paragraph, and word/phrase level comments

Each group would be asked to add comments to three documents. We would count the number of comments added per document. For users in groups 2 and 3, whose interfaces allow more than one option for where to add a comment, we would note the level at which each comment was added. For users in groups 1 and 2, whose interfaces limit the granularity to document level only, or to document and paragraph level, we would ask whether they were satisfied with the available commenting level(s) or would have preferred to add comments at a more granular level.
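
A minimal sketch, assuming nothing beyond what is described above, of how the between-subjects assignment and the per-comment observations could be recorded; the names and record layout are illustrative, not part of the study design itself.

    import random
    from collections import Counter

    INTERFACES = {
        1: "document-level comments only",
        2: "document and paragraph level comments",
        3: "document, paragraph, and word/phrase level comments",
    }

    # Randomly assign 30 participants to the three groups, 10 per group
    # (between-subjects design).
    participants = [f"P{i:02d}" for i in range(1, 31)]
    random.shuffle(participants)
    groups = {g: participants[(g - 1) * 10 : g * 10] for g in INTERFACES}

    # Each observation is one comment added by a participant: their group,
    # which of the three documents they commented on, and the level the
    # comment was attached to ("document", "paragraph", or "phrase").
    observations: list[tuple[str, int, str, str]] = []

    def record_comment(participant: str, group: int, document: str, level: str) -> None:
        observations.append((participant, group, document, level))

    # After the sessions, a per-group tally of comment levels (together with
    # the satisfaction answers, recorded separately) is what the hypotheses
    # would be checked against.
    per_group_levels = {
        g: Counter(level for _, grp, _, level in observations if grp == g)
        for g in INTERFACES
    }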

If our hypotheses are correct, users will most frequently choose to add their comments at the paragraph level when given multiple options, and will express dissatisfaction when not given the option to comment at the paragraph level. Furthermore, users will not show any increased satisfaction when given the option to comment on individual words or phrases; they will choose to add their comments to paragraphs even when the word/phrase option is available.

   

 


 

© Copyright 2004 CollaboRepo Team. All Rights Reserved.