Assignment: Pilot Usability Study
i213, Spring 2007
Due on Tuesday, May 1
Overview and Goal:

    The goal of this assignment is to get experience performing an informal usability test on an interactive prototype, and incorporating the results of the test into design changes in your prototype. In practice, this "pilot" study would be used to redesign your evaluation before running the study with a larger pool of participants.

    You will also get some experience designing a formal usability study, including specifying hypotheses, independent and dependent variables, and the experiment design (although you will not carry out the formal experiment).

Prototype:

    When you are ready to start testing, freeze the interface and do not make changes to the system while you perform your tests.
Participants:

    Find three participants (volunteers who are not in your group) to work through your benchmark tasks. Have the participants sign an informed consent form. If you are going to use videotape or audiotape (see below), be sure to state this on the informed consent form.

    Collect relevant demographic information (e.g., age, gender, education level, major, and experience with your type of tasks and application).

Task Scenarios:

    Use the task scenarios that you have been using for the last few assignments. You may adjust them if your design has changed enough that the old ones no longer cover the design well. If you do change them, make a note of this in the write-up and describe the new scenarios.
Measurements and Observations:

    Although we cannot get statistically significant measurement data with only three participants and a rough prototype, you should measure some important response variables to get a feel for how it is done (e.g., task time and number of errors).

    Decide in advance what you are especially interested in measuring and observing for each task scenario.

    In order to facilitate your final redesign, concentrate on collecting useful process data. This will be similar to what you did for the assessment of your low-fi prototype. Instruct the participant to think aloud, and keep a log of critical incidents (both positive and negative events). Log when the participant begins each scenario, when they finish, and optionally when they complete subtasks. For most projects, the clock should be visible only to the observers, so that the participant is not overly aware of the time.
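
    If one observer logs on a laptop, even a few lines of code can produce consistently timestamped entries. Below is a minimal sketch in Python; the helper name and the sample events are purely illustrative, and a paper log plus a stopwatch works just as well.

      import time

      log = []  # list of (elapsed seconds, event) entries
      start = time.monotonic()

      def record(event):
          """Timestamp an event relative to the start of the session."""
          elapsed = time.monotonic() - start
          log.append((elapsed, event))
          print(f"{elapsed:7.1f}s  {event}")

      record("scenario 1: start")
      record("critical incident (-): missed the Save button")
      record("critical incident (+): recovered via the menu without help")
      record("scenario 1: done")

      # Task time is the difference between a scenario's start and done
      # entries; errors can be counted from the (-) incidents.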

    If you happen to have access to a video camera, and you have the participant's permission, it is fine to use it -- point it at the computer screen, and note the time that you start taping so that you can find your critical incidents later on tape. You may wish to use a tape recorder if you don't have a video camera, but neither is required.

Followup Interview:

    Design a followup interview that assesses user satisfaction and gives you further insight into the participants' responses to your design.
Procedure:

(Not all of these will apply to every project. For example, in some cases you may want to see if the participants understand the design without a demo. Use your judgement.)

    Give each participant a short demo of the system. Do not show them exactly how to perform the task scenarios; rather, show how the system works in general and give an example of something specific that differs from the scenarios. It is a good idea to write up a script of your demo and follow the same script with each participant.

    Then give the participant directions for the first task scenario. Tell them what they are trying to achieve, but not how to do it. When they are finished, give them the directions for the next task and so on. Allow them to take breaks if they seem to tire. Each participant should perform all 3 tasks.

    Finally, have the participant complete the followup interview. You can either have them answer the questions in writing, or have one observer interview them while another writes down or records their responses. The latter technique can yield more detailed responses, since people tend to speak more easily than they write. Or do a combination -- have them fill out a written questionnaire containing Likert scales, and then ask them to answer the more open-ended questions orally.
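
    One note for later analysis: Likert responses are ordinal, so medians and ranges summarize them more safely than means. A minimal sketch with hypothetical questionnaire items and made-up ratings:

      from statistics import median

      # Hypothetical Likert items (1 = strongly disagree ... 5 = strongly
      # agree), one rating per participant; the questions are examples only.
      responses = {
          "The system was easy to learn": [4, 5, 3],
          "I could recover from my mistakes": [2, 4, 3],
      }

      for question, ratings in responses.items():
          print(f"{question}: median {median(ratings)}, "
                f"range {min(ratings)}-{max(ratings)}")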

Results:

    Report your results: values of the response variables, summaries of those values, summaries of the process data, and a summary of the followup interview. In the "Discussion" section, draw some conclusions with respect to your interface prototype. You should also say how your system should change if those results hold with a larger user population. This is the most important part of the write-up, since you need to think about how you would fix your system as a result of what you observed.
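
    With only three participants these summaries are descriptive, not inferential, but reporting them in a consistent form makes the discussion easier to write. A minimal sketch of one way to summarize the response variables, using made-up numbers:

      from statistics import mean

      # Made-up data for illustration: per-participant task times
      # (seconds) and error counts for each scenario.
      task_times = {
          "scenario 1": [142, 210, 185],
          "scenario 2": [95, 130, 88],
          "scenario 3": [300, 250, 410],
      }
      errors = {
          "scenario 1": [1, 3, 0],
          "scenario 2": [0, 1, 0],
          "scenario 3": [2, 2, 4],
      }

      for task, times in task_times.items():
          print(f"{task}: mean time {mean(times):.0f}s "
                f"(range {min(times)}-{max(times)}s), "
                f"{sum(errors[task])} errors total")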

Formal Experiment Design:

    Although we are not running a formal study for this design, you should create a hypothetical formal study that you could run to evaluate your interface. This might involve comparing two variations on one of your design decisions, or comparing two different interfaces (but not interfaces so different that the comparison would not be valid).

    Hypotheses:

      Formulate some hypotheses for your interface that the formal study would allow you to test.

    Factors and Levels:

      Define the Factors (independent variables) that you would vary in order to assess their effects on the Response Variables. What levels would your factors take on? Describe the Response Variables (the dependent variables) that you would measure as well. Describe how these choices would allow you to accept or reject your hypotheses.
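
      As a concrete (and entirely hypothetical) example, crossing a two-level interface factor with a three-level task-type factor yields six conditions; substitute your own factors and levels:

        from itertools import product

        # Hypothetical factors and levels -- replace with your design.
        factors = {
            "interface": ["toolbar version", "menu version"],
            "task type": ["browse", "search", "edit"],
        }

        # Each element of the cross product is one experimental condition.
        for interface, task in product(*factors.values()):
            print(f"{interface} x {task}")
        # 2 x 3 = 6 conditions; the response variables (e.g., task time)
        # measured in each condition feed the hypothesis tests.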

    Blocking and Repetitions:

      Describe how you would block the experiment and how many repetitions of each combination of factors and levels you would need.
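
      For instance, if you block by participant (each participant sees every condition, i.e., a within-subjects design), you still need to counterbalance the order of conditions so that learning effects do not favor one level. A minimal sketch that rotates the order across participants, Latin-square style; the condition names are placeholders:

        # Rotate condition order across participants (a simple Latin
        # square); "A", "B", "C" stand in for your real conditions.
        conditions = ["A", "B", "C"]

        def rotated_order(participant_index, conditions):
            """Shift the condition list by the participant's index."""
            k = participant_index % len(conditions)
            return conditions[k:] + conditions[:k]

        for p in range(6):  # 6 participants = 2 repetitions per ordering
            print(f"participant {p + 1}: {rotated_order(p, conditions)}")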

Write-up:

Turn in the write-up on the web, including the following information:

  1. Introduction
    • Introduce the system being evaluated
    • State the purpose and rationale of the study
  2. Method:
    • Participants (who -- demographics -- and how were they selected)
    • Apparatus (describe the equipment you used and where)
    • Tasks (can link to earlier task descriptions if they haven't changed)
      • Describe what you looked for when each task scenario was performed. If you made new scenarios, describe them first; otherwise, a link to the earlier descriptions is fine.
    • Procedure
      • Describe what you did and how
    • Screenshots
      • In order to have a record of what the interface looked like when this study was performed, provide screenshots of key views of the interface in action.
  3. Test Measures
    • Describe what you measured and why
  4. Results
    • Results of the tests
  5. Description of Formal Experiment Design
  6. Discussion
    • What you learned from the pilot study
      • what you might change in your interface from these results
      • what you might change for the "real" experiment
  7. Appendices
    • Materials (everything you read to the participant -- demo script, instructions -- or handed to them -- task instructions).
    • Raw data (e.g., entire merged critical incident logs)

    Presentation:

    Each group will get 20 minutes to discuss their project. This leaves 5 minutes between groups for setup time. Be sure to make a link to your presentation from your project page, to save on setup time. The talk should cover at least the following:

    • Main points taken from the heuristic evaluation
    • Current design, including how/why it differs from the first interactive prototype
    • Demo of the current design
    • Results of the pilot study
    • Most interesting or useful lesson(s) learned
    • What you plan to do for the last iteration

      Presentation Schedule:

      Tuesday, May 1

      • Delphi
      • FAST

      Thursday, May 3

      • SkillShop
      • iNaturalist.org
      • Play it by ear.