The results of our usability testing pilot study were generally positive. All four participants completed the three tasks we gave them. We found the following positive results:
- Participants liked the overall interface design. One tester commented that the colors were pleasant because they did not shout at the user. A second tester favorably noted that the site was uncluttered and minimalist.
- Participants liked the idea of the site. One tester thought the site would fill a real need that is currently unmet. Another said that reading comments by other students gave real insight into students' feelings about the courses.
- Most participants understood the rating system. Only one participant had a question about which end of the scale meant "difficult" and which end meant "easy" for the course difficulty ratings; otherwise, everyone understood the scale and direction of the rating system. Another participant could not remember during the post-test interview whether the ratings were out of 5 or out of 10, but had no difficulty during task performance.
We also found the following negative results:
- The search mechanism finds only exact matches, not close matches. One user searched for IS213 and got no result because the text in the database is IS 213 (with a space between the letters and the numbers). A normalization fix is sketched in the first example after this list.
- When a user attempts to submit her username and password from the main login page, she must either tab to the submit button or click it with her mouse. Testers complained that focus is not placed on the submit button, which would let them hit Enter to submit (see the second example after this list).
- One tester thought she had chosen a professor rating when in fact she had not. This user previewed her submission before submitting it to the system, but the ratings were not very prominent on the preview and she did not notice that one was missing.
- The categories used in "browse by course category" were insufficient. We asked the testers to find a user interface course, which is in the Human Computer Interaction category. Although everyone found the course, they all found it by scanning the full list of courses rather than by using the category headings.
- The testers did not understand who had access to the site. Because they had to log in, they realized it was protected from general access, but they were unsure whether it was limited to SIMS students or to a wider or narrower community. They wanted to know who would read their comments before submitting them.
- Three of the four testers commented that the text box in which they typed comments did not wrap text at the end of a line; instead, it scrolled right and continued the text on a single line. This confused the users, although only one took action (manually adding carriage returns). See the third example after this list.
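
The exact-match failure could be addressed by normalizing course codes on both sides of the comparison before matching. The following is a minimal sketch, not the site's actual implementation; it assumes the search runs in application code over a list of course records, and the names Course, normalizeCode, and searchCourses are ours, for illustration only.

    interface Course {
      code: string;   // stored form, e.g. "IS 213"
      title: string;
    }

    // Uppercase and strip all whitespace so that "IS213", "is 213",
    // and "IS 213" all compare equal.
    function normalizeCode(code: string): string {
      return code.toUpperCase().replace(/\s+/g, "");
    }

    // Return every course whose normalized code matches the normalized query.
    function searchCourses(query: string, courses: Course[]): Course[] {
      const needle = normalizeCode(query);
      return courses.filter((course) => normalizeCode(course.code) === needle);
    }

    // A query for "IS213" now finds the stored "IS 213" record.
    const hits = searchCourses("IS213", [{ code: "IS 213", title: "example" }]);
    console.log(hits.length); // prints 1

The same idea works at the database layer: store a normalized copy of the code in an indexed column and normalize the query before the lookup.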
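
The login complaint can often be resolved in markup alone: when the username and password fields sit inside a single HTML form that contains a submit-type button, browsers submit the form when the user presses Enter in either field, with no tabbing required. Where restructuring the markup is not possible, a small script can wire the Enter key to submission. This sketch assumes hypothetical element ids (login-form, password); the real page's ids may differ.

    // Submit the login form when the user presses Enter in the
    // password field, so no tab or mouse click is required.
    window.addEventListener("load", () => {
      const form = document.getElementById("login-form") as HTMLFormElement;
      const password = document.getElementById("password") as HTMLInputElement;

      password.addEventListener("keydown", (event: KeyboardEvent) => {
        if (event.key === "Enter") {
          event.preventDefault();
          form.submit();
        }
      });
    });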
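
The wrapping behavior comes from the comment box's textarea configuration: the testers were likely seeing wrap="off" (or an equivalent), which scrolls right on a single line, while "soft" wraps visually without inserting line breaks into the submitted text. The fix is a one-attribute markup change; the equivalent in script, assuming a hypothetical id of comment-box:

    // Make the comment box wrap visually at the right edge instead of
    // scrolling; the submitted value still contains no added line breaks.
    const commentBox = document.getElementById("comment-box") as HTMLTextAreaElement;
    commentBox.wrap = "soft";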
The testers also suggested a number of additional functions they would be interested in seeing on the site:
- The testers wanted to see additional ratings. One tester wanted to see the distribution of the ratings, perhaps as a bar graph. The others all expressed interest in a new rating category for workload, i.e., how many hours per week are spent on course work.
- All of the testers expressed interest in an edit/delete function that would allow an individual user to edit or delete a comment he or she had written earlier. One tester said that an audit trail (a flag indicating that the comment had been modified) would be useful if this function were introduced.
- One tester expressed interest in a "rate the rater" system in which users could rate other users and particular comments.
In addition to these observations, we asked the testers about specific topics that had come up in the past: anonymity of users, the rating system, and threading.
- The testers generally disliked the idea of an anonymous system. One tester thought that anonymity, while possibly giving users the freedom to express their true thoughts, would also allow users to cross the line into unconstructive criticism and flaming. Another tester indicated that he would not trust the comments of an anonymous user as much as those of a named user.
- As mentioned above, the rating system, although only briefly explained, was for the most part correctly interpreted by all of the testers. No one suggested that the numeric scale be replaced by a pictorial representation. The only suggestion was to add a little more explanation next to the average ratings.
- This set of users did not express a great deal of interest in adding threading (the ability to respond directly to a previous comment). One tester was actually against organizing the site in a threaded manner. The others had no objection to threading, but they did not think they would reply to specific comments very often (for instance, they might reply if they strongly disagreed with what was said).
Finally, we made some general observations about tester behavior that was unexpected but not problematic:
- All of the testers seemed to prefer browsing over using the search function. Even when we pushed the testers toward searching for a course, they tended to browse.
- Only one of the users used the noun-verb method of posting a comment (navigating to a course page and then clicking "add a comment"). The other testers used the verb-noun method (clicking "rate a course..." and then choosing a course).
- No one modified a search unless zero results were returned. This may have been because the testers were satisfied with their results, or because they were already very familiar with the list of courses: when search results are returned, a user can easily recognize whether the expected courses are among them.
- Testers used both the preview and the direct submit options on the "add a comment" page, indicating that having both options is indeed useful (we had forced the user to preview in our first interactive prototype).