Low-fi Prototyping & Usability Testing
By Haydee Hernandez, Qun Liang, and Hailing Jiang
Introduction
Knowledge management (KM) is an emerging field in business studies that has seen explosive growth in recent years. Despite the many KM resources available on-line, many information needs in this field remain poorly served. There is strong user demand for a web site that offers substantial novice support and serves as a KM portal. The Gotcha system is meant to meet this demand.
In this experiment, we constructed a low-fi prototype and tested it on three potential users. Low-fi prototyping and testing detect interface problems during the early stages of UI design, at low cost. The purpose of this experiment is to see whether our interface is clear, logical, and intuitive to users, and whether it can guide them to complete their tasks efficiently. By observing the users' behavior and listening to their suggestions and comments, we can better understand the users and their needs and improve our interface accordingly.
Prototype
The prototype used in this experiment is our second-iteration prototype.
The initial prototype is also available on-line here.
The low-fi prototype is a paper representation of our system interface. It was made with paper, colored Post-it notes, pen, and tape. A sheet of paper showing a Netscape browser window served as the background. An upper frame stays consistent at the top of every page. It contains the title of our system, with "Help", "Contact Us", and "Your Folder" in the top right corner. It also has five navigation tabs (Home, About KM, Resources, Full Search, Browse Subjects).
Our prototype is comprised of the following pages:
Home
At the top of the "Home" page is a "Quick Search" that allows users to do a free-text keyword search. Users can choose to search print materials, online resources, or both. Below "Quick Search", the left frame contains "Press Releases" and information about our project; the right frame holds information about the process of developing the system and establishing the site as an authoritative resource.
About KM
In the left frame is a list of questions about KM. Initially, the right frame shows a brief description of Knowledge Management. When the user clicks a question in the left frame, the article answering it appears in the right frame.
Resources
In the left frame is the index of "Resources" and an explanation of the ranking the system assigns to resources. The right frame describes the different types of resources: web sites, publications, and organizations.
Full Search
Search Help is listed in the left frame. In the right frame are the search fields and options. Users can search by topic, title, author, subject, or a combination of these. They can also restrict their search by date, or limit the results to print materials, online resources, or both. When users click "Select Subject", they are given a hierarchical list of the top two levels of subject terms, from which they can choose the terms to include in their search and mark each as "Mandatory" or "Optional" (see Sketch).
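To make the search options concrete, here is a minimal sketch of how a "Full Search" query could be represented. All function and field names here are hypothetical illustrations, not the actual Gotcha implementation.

```python
def build_query(topic=None, title=None, author=None,
                date_from=None, date_to=None,
                material="both", subjects=None):
    """Assemble a query from the Full Search form fields.

    `subjects` maps a subject term to "Mandatory" or "Optional",
    mirroring the Select Subject dialog described above.
    `material` is "print", "online", or "both".
    """
    query = {
        "topic": topic,
        "title": title,
        "author": author,
        "date_range": (date_from, date_to),
        "material": material,
        "mandatory_subjects": [],
        "optional_subjects": [],
    }
    # Sort each chosen subject term into the mandatory or optional bucket.
    for term, mode in (subjects or {}).items():
        if mode == "Mandatory":
            query["mandatory_subjects"].append(term)
        else:
            query["optional_subjects"].append(term)
    return query

q = build_query(topic="knowledge sharing",
                material="online",
                subjects={"Intellectual Capital": "Mandatory",
                          "Communities of Practice": "Optional"})
print(q["mandatory_subjects"])  # → ['Intellectual Capital']
```

The point of the sketch is that a subject term carries a mode ("Mandatory"/"Optional") in addition to its name, which is what the "Select Subject" dialog has to capture.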
Search Results
After users type in search words and click the "Search" button, the system returns the search results. At the top of the "Search Results" page are the query words the user typed in. The left frame lists the categories and sub-categories that contain documents satisfying the query. In the right frame are the records. Users can click the check box to the left of each record to save it in their folder.
Browse Subjects
The left frame lists the top-level categories for Knowledge Management. Initially, the right frame shows an explanation of "Browse". After the user clicks a top-level category in the left frame, the sub-categories and records under it appear in the right frame (see Sketch). Users can continue browsing the sub-categories or read the records. To the right of each record is a check box. Users can select the records they are interested in and save them in the folder.
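The browse interaction above can be sketched as a walk down a category tree. The tree below is invented purely for illustration; the real Gotcha subject hierarchy is not reproduced in this report.

```python
# Hypothetical two-level subject tree: categories map to sub-categories,
# and sub-categories map to lists of records.
TREE = {
    "Knowledge Sharing": {
        "Communities of Practice": ["Record A", "Record B"],
        "Best Practices": ["Record C"],
    },
    "Knowledge Technology": {
        "Document Management": ["Record D"],
    },
}

def open_category(tree, path):
    """Return what the right frame would show after clicking down `path`:
    a dict of sub-categories, or a list of records at the deepest level."""
    node = tree
    for category in path:
        node = node[category]
    return node

print(open_category(TREE, ["Knowledge Sharing", "Best Practices"]))  # → ['Record C']
```

This also makes the user's complaint concrete: at every level of the path, the user must pick one branch without knowing what lies beneath it.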
Your Folder
"Your Folder" is a temporary place to save the user's search or browse results. In the left frame, users can choose "View Folder Contents" or "Email Folder Contents". The right frame explains how "Your Folder" works and how to use it.
Method
Participants
The three participants were chosen because they had reasonable experience surfing on-line and little familiarity with knowledge management (KM). Because the UI is directed at domain novices, it was important that participants have little domain familiarity. Being a domain novice, however, does not preclude proficiency in surfing the Web, using search engines, and browsing web pages. In fact, we assume that users will have a reasonable amount of web experience before coming to our site. Participant 1 is a master's student in Engineering. He had heard of knowledge management before and had a minimal understanding of KM. Participant 2 is a software engineer. He understood how KM differs from IT, but lacked a holistic understanding of the discipline. Participant 3 is a SIMS student in IS213. He had no knowledge of KM at all beyond recognizing the term.
Task Scenarios
The scenarios set forth in the task analysis were used as the basis for
the task scenarios. They were re-written using "you" instead of a fictitious
name to give the participant the feeling that he was assigned a task. Only
one scenario, scenario one, required the participant to achieve two goals
that built upon each other. As participants tried to achieve their tasks,
group members observed their behavior and actions. Group members were trying
to see whether participants followed the thought processes we expected.
The demo script labeled them "our guess". If users followed unexpected
paths, we asked them why. If users expected different content when they
viewed a selected page, we sought out their rationale. For instance, was
it a confusing label on the page? Approaching the testing this way provided insight into user thinking for improving this iteration. It also gives us tangible user experience to draw on when discussing new iterations. Now we can say, "I don't think that feature will work here. It would create the same kind of problems <Bob> had a hard time with in the last round of testing."
Procedure
The participant was escorted to the room where the testing would be held.
A verbal pre-test questionnaire was given to determine the participant's
understanding of knowledge management. The facilitator read the statements
on the demo script explaining the testing process and his responsibilities
as well as our inability to provide help during the testing. There was also time allotted to answer any questions he might have.
The participant was asked to sign the informed consent form. The participant
was handed a strip of paper with the goal he needed to achieve. In most
cases, the facilitator had to prompt the participant to "think aloud".
When the participant wanted to take an action, he asked the wizard to show
him the next screen. Because our project is content-intensive, it was infeasible to prepare paper versions of every possible route participants could take.
Instead, the wizard told the participant what would have been available
on the screen. For instance, green post-it notes represented textual content.
If the participant wanted to read it, he asked the wizard to summarize
the intended content on the page. Throughout the testing, the note taker
kept a log of critical incidents both positive and negative regarding the
participant's actions and thought process. When the participant stated
that he had achieved the task, the next scenario was handed to him. In
only one case did a user ask whether it would be fine to quit. He thought
he couldn't find the answer and he grew tired of looking. When the testing
was over, the group members spent time asking the participant questions
regarding his decisions. For instance, why he pursued one course of action
over another, why he didn't use certain features, and what general comments
the participant had. Group members also answered questions the participant
had during the testing. We ended by thanking the participant and telling
him that all prototypes had screen shots placed on-line. He could go there
to see what direction the prototype took after the testing if he was interested.
Test measures
In this experiment, we measured the following:
- First, we wanted to see if users followed the path we expected. This tells us whether our initial guesses were on target regarding the best path for task completion.
- We looked for troublesome features that caused users to slow down, pause, or ask questions. These may expose potential problems in our interface that should be reconsidered.
- We also wanted to measure the users' evaluation of our interface: what they like and what they don't like about it. This may help us better understand our users' preferences and improve the interface to meet their needs.
With users' feedback on our interface, we will be able to fine-tune our prototypes: cut functions that users never use or consider unnecessary, and add things users feel they need but we have not yet thought of. Some design elements may also be changed in response to negative user feedback.
Results
In general, the three evaluators completed the three required tasks without help from our group members. Task 1 seemed very easy; all three evaluators more or less followed the path we expected. Two users started from "Full Search" in completing Task 2, as we expected, but the paths they took after that varied. Task 3 was troublesome for all three evaluators, probably because the task question was vague. With little knowledge of Knowledge Management, they found it difficult to pick the right starting point.
All three users liked the "About KM" page and the "Resources" page. They found the question list on the "About KM" page very helpful. They also suggested adding other information, such as companies, to the "About KM" page or the "Resources" page.
The major interface problem exposed in the experiment was "Browse Subjects".
We expected that "Browse Subjects" would be very helpful for novice users,
especially when they don't know what search terms to use or which point
to start with. But it turned out that all 3 users felt confused or frustrated
with "Browse Subjects". First, they did not realize that "Subjects" come from a manually constructed thesaurus; some users confused subject terms with free-text keywords. Second, when presented with the top-level subjects, all three users were new to the KM domain and thus had no idea which one would contain the information they needed (when the top-level subject terms did not hint at anything related to what they were looking for). One user also expressed his fear that he would have to go down several sub-levels to find what he needed, and at each level would face the choice of which branch to follow.
Another problem was that none of the users found the "Quick Search" on the "Home" page useful. We thought it might help users who like searching quickly, but all of them went to "Full Search" when they wanted to search for something. Two users did not use "Quick Search" at all, and one was confused by it, wondering how it differed from "Full Search".
Some of the terminology used in our system also seemed confusing to users. One user thought that "Full Search" referred to "full-text search" instead of "full-feature search". On the "Full Search" page, all three users commented that the label "Topics" is confusing and that "Keyword" is a better choice; "Topics" and "Subjects" are hard to differentiate on the same page. Two users did not realize that they could click "Select Subject" to choose subject terms. Instead, they tried to type something into the "Selected subject" field.
The users also suggested adding a site map or site information to the "Home" page. None of the users understood the purpose of "Your Folder", so they did not use it in their search process. Once they figured it out, they thought it would be very helpful. A background explanation is necessary early on, perhaps on the home page.
Another thing we should point out is that our system is highly content-oriented. Some confusion was caused by the lack of content in our low-fi prototype.
Discussion
The biggest lesson we learned from the low-fi evaluation was that users do not necessarily interact with the interface in expected ways, and they often have a different understanding of a feature or term. Some of the functionality specifically designed for novice users was simply ignored. We also identified user information needs that are not currently supported by the system.
Changes to Make
The user feedback led us to make a number of changes.
- Eliminate the explanation on the "Home" page of how Gotcha was developed. Users showed no interest in this information. Instead, a link labeled "About Gotcha development" will be placed in a discreet location for interested parties (Home Page version 3).
- Eliminate the Quick Search function
from the "Home" page. Users found it useless and misleading (Home
Page version 3).
- Change the tab name of "About KM" to
"About Knowledge Management", because the acronym KM was not straightforward
to domain novices. We will only use the acronym after showing the full term
first (About KM Page
version 3).
- Add additional topics to the "Resources" page. Because novices were attracted to this page, we have added companies/products, case studies, and reading lists as topics. These represent other typical information needs that users mentioned. Rather than force users to formulate a query, they can refer to this page instead (Resources Page version 3).
- Define each of the topics on the "Resources"
page, e.g., websites, publications, organizations, companies/products. Users
had difficulty understanding what to expect. Therefore, a definition is critical
(Resources Page version
3).
- Rename "topics" to "keyword" on the
full search page, because users had a hard time distinguishing topics from
subjects. They all suggested using keywords as more consistent with other
search engines (Search
Page version 3).
- Delete the "Select Subjects" field from the "Full Search" page. Users either ignored the field entirely, used it incorrectly, or found it troublesome to use (Search Page version 3).
- Move the action buttons (search and
clear fields) to the bottom of the page. One user considered this more consistent
with traditional search interfaces (Search
Page version 3).
- Eliminate the "Browse Subjects" page. It was hard for users to determine which top-level category to click in order to find a lower-level category. In all cases, users preferred finding information through the "About KM", "Resources", or "Full Search" pages rather than exploring our hierarchy.
- Eliminate the folder. Even though most users said they might take advantage of the feature, it wasn't visible or intuitive enough for novices to begin using immediately. One user also stated that he just preferred bookmarking interesting sources.
- Rename "Organization" to "Professional
Organizations". The latter term was less confusing to users than the former
(Resources Page version
3).
- Eliminate the category view of the "Search Results" page. This change stems more from the implementation needs of the master's project than from usability problems. The group has decided to pull search results from other search engines rather than exclusively from our system, so it is not possible to display a category view (Search Results Page version ).
Unevaluated Aspects
There are several things that this evaluation cannot tell us. First, since much of the content was not made available to users, we are not sure whether they will regard the content to be provided as helpful. Second, there is no way for us to tell whether our thesaurus will facilitate searches. Third, we have no idea what color scheme would appeal to users. Fourth, we do not know what help messages, hints, or warnings will be helpful, since these are not available yet. Last, we are not clear about the effects of constraining contextual factors, such as connection speed, on user satisfaction with our site.
Appendices
Informed consent form
Demo Script
Task descriptions
Test raw log