SIMS 213
Assignment 9
Pilot User Study
Due on Tuesday, April 27, 1999
Overview and Goal:
The goal of this assignment is to get experience performing an
informal usability test on an interactive prototype, and incorporating
the results of the test into design changes in your prototype. In
practice, this "pilot" study would be used to redesign your evaluation
before running the study with a larger pool of participants.
This will be mainly an informal usability study, in order to
facilitate your final redesign. However, we will mix in
some formal elements as well, just to give you some practice.
Prototype:
Freeze the interface you produced from the second interactive prototype and
do not make changes to the system while you perform your tests.
Participants:
Find three participants (i.e., volunteers who are not in your
group) to work through your benchmark tasks. Have the participants
sign an informed consent form. If you are going to use videotape or
audiotape (see below), be sure to note this on the informed consent
form.
Collect relevant demographic information (e.g., age, gender, education
level, major, experience with your type of tasks & application,
etc.)
Task Scenarios:
Use the task scenarios that you have been using for the
last few assignments. You may adjust them if your design has changed
enough that the old ones no longer cover the design well. If you do
change them, make a note of this in the writeup and describe the new
scenarios.
Measurements and Observations:
Although we cannot get statistically significant measurement
data with only three participants and a rough prototype, you should
measure some important response variables to get a feel for how it
is done (i.e., task time, number of errors, etc.).
In order to facilitate your final redesign, concentrate on
collecting useful process data. Instruct your participant to think
aloud, and keep a log of critical incidents (both positive and
negative events). Examples of critical incidents are when the user
makes a mistake or when they see something they like and say
"cool". Log when the participant begins each scenario, when they
finish, and optionally, when they complete subtasks. You should
set up a clock that only the observers can see (one or more
of you should observe) so the participant is not overly aware of the
time. It is a good idea to keep the data for each task and
participant separate.
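Keeping the per-task, per-participant data separate is easy to do in code. The sketch below is one hypothetical way to structure a timestamped critical-incident log (the class and event names are illustrative, not part of the assignment):

```python
from dataclasses import dataclass, field
import time

@dataclass
class IncidentLog:
    """Per-participant, per-task log of timestamped events."""
    participant: str
    task: str
    events: list = field(default_factory=list)  # (timestamp, kind, note)

    def log(self, kind, note=""):
        # kind is free-form: "task_start", "task_end", "positive", "negative", ...
        self.events.append((time.time(), kind, note))

    def task_time(self):
        """Elapsed seconds from first task_start to last task_end, if both logged."""
        starts = [t for t, k, _ in self.events if k == "task_start"]
        ends = [t for t, k, _ in self.events if k == "task_end"]
        return ends[-1] - starts[0] if starts and ends else None
```

One log object per (participant, task) pair keeps the data separated as suggested above, and the timestamps let you line incidents up against a video or audio tape later.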
In advance, anticipate what you are especially interested in
measuring and observing for each task scenario.
If you happen to have access to a video camera, and you have the
participant's permission, it is fine to use it -- point it at the
computer screen and note the time that you start taping so that you can
find your critical incidents later on tape. You may wish to use a
tape recorder if you don't have a video camera, but neither is required.
Followup Interview:
Design a followup interview to assess user satisfaction with the
design and to gain further insight into the participants' responses to
your design. A good starting point is the QUIS survey, which can be
found in the Shneiderman chapter in the reader.
Procedure:
Give each participant a short demo of the system.
Do not show them exactly how to perform the task scenarios; rather
show how the system works in general and give an example of
something specific that is different from the scenarios.
It is a good idea to write up a script of your demo and follow the same
script with each participant.
Then give the participant directions for the first task scenario.
Tell them what they are trying to achieve, but not how to do
it. When they are finished, give them the directions for the next
task and so on. Allow them to take breaks if they seem to tire.
Each participant should perform all 3 tasks.
Finally, have the participant fill out the followup interview. You can
either have them answer the questions in writing or have one
observer interview them and another write down or record their responses. The
latter technique can yield more detailed responses since people tend
to speak more easily than they write. Or do a combination -- have them
fill out a written questionnaire containing Likert scales, and then
ask them to answer the more open-ended questions orally.
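If you collect the Likert-scale answers in writing, tallying them takes only a few lines. This is a minimal sketch; the question labels and ratings below are hypothetical, and the 9-point range follows QUIS-style scales:

```python
from statistics import mean

# Hypothetical ratings on QUIS-style 9-point scales, one per participant.
responses = {
    "Overall reaction (terrible .. wonderful)": [6, 7, 5],
    "Learning to operate the system (difficult .. easy)": [4, 6, 5],
}

for question, ratings in responses.items():
    print(f"{question}: mean={mean(ratings):.1f} (n={len(ratings)})")
```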
Results:
Report your results (values of response variables, summaries of
those values, and summaries of the process data and the
followup interview). In the "Discussion" section, draw some
conclusions with respect to your interface prototype. You should
also say how your system should change if those results hold with a
larger user population. This is the most important part of
the write-up, since you need to think about how you would fix your system
as a result of what you observed.
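Summarizing the response variables can be as simple as the sketch below. The task times here are invented placeholders purely for illustration; substitute the values you actually measured:

```python
from statistics import mean, median

# Placeholder task times in seconds (one value per participant);
# these numbers are made up for illustration only.
task_times = {
    "Task 1": [210.0, 185.5, 240.2],
    "Task 2": [95.3, 120.8, 88.1],
}

for task, times in task_times.items():
    print(f"{task}: mean={mean(times):.1f}s, median={median(times):.1f}s, "
          f"range={min(times):.1f}-{max(times):.1f}s")
```

With only three participants these summaries are suggestive, not statistically significant, as noted above.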
Formal Experiment Design:
Write-up:
Turn in the writeup on both paper and the web, including the following
information (page counts per section are approximate):
- Introduction
  - Introduce the system being evaluated (1 paragraph)
  - State the purpose and rationale of the study (1 paragraph)
- Method:
  - Participants (who -- demographics -- and how they were selected) (1/2 page)
  - Apparatus (describe the equipment you used and where) (1 paragraph)
  - Tasks (1/2 page)
    - Describe what you looked for when each task scenario was
      performed. If you made new scenarios, describe them first;
      otherwise a link to the earlier descriptions is fine.
  - Procedure (1 page)
    - Describe what you did and how
- Test Measures (1/2 page)
  - Describe what you measured and why
- Results (1 page)
- Discussion (1 page)
  - What you learned from the pilot study
  - What you might change for the "real" experiment
  - What you might change in your interface from these results
- Formal Experiment Design (1 page)
  - Describe the information requested in the description of the
    hypothetical formal experiment design described above
- Appendices
  - Materials (everything you read to the participants -- demo script,
    instructions -- or handed to them -- task instructions)
  - Raw data (e.g., entire merged critical incident logs)