IS 213 -- User Interface Design & Development

ReadingTree: Lo-Fi Prototyping / Usability Testing


[Image: lo-fi prototype]

Introduction
Prototype Description
Method
Test Measures
Results
Discussion

Appendices


Introduction

We developed a paper prototype of our online book community website for kids in grades 1-5. At this site, kids can find out about books, get personalized recommendations, write book reviews, and exchange messages through bulletin boards and chat.

The purpose of the experiment was to determine whether the navigation and structure of the site could be easily understood by children. We were specifically interested in testing how easily users could perform necessary and common tasks, determined by using our personas' task analyses and scenarios.

Prototype Description

Development
First, we gathered copies of our three preliminary designs, along with their interaction flows, and discussed the strengths and weaknesses of each: the first design had the basic structure we wanted, but its interaction flows showed that users had to back up and move around quite a bit to complete tasks; the second design allowed users to find books without logging in, but at the cost of reduced functionality; and the third design had the language we wanted to use.

With those ideas in mind we next discussed content structure and information architecture. While we agreed on icon-based navigation at the top of the screen, we were initially unsure whether content should be organized horizontally or vertically. After looking at sketches of both we decided on a vertical structure in order to allow users to see all of a page's content headings/titles without having to scroll. Next we listed the site's functionality and grouped items into task areas, which later became our site pages. At this point we determined which pages we needed to create by reviewing our test task list, writing down the pages necessary to complete each task, and noting each task number next to the page name so we would know how many of our tasks would require that page.

Next we blocked out the content of each screen on index cards, using pencil and markers. We were concerned about how well the child testers would respond to the paper prototype and, in particular, that they might be put off from completing the tasks if they had trouble reading our handwritten content cards. We therefore used Microsoft Word to type the contents of each screen section. We then pasted the printed sections onto the posterboard, using colored paper in the center section to help distinguish the three columns.

We practiced with the prototype by having first two of our team members, and then a classmate not involved with the project, play the child tester. Based on the results of these pilot sessions, we made several changes, mostly having to do with wording and navigational assistance. For details of what we changed, please see our notes in the Appendix.

We made an additional set of changes following the first two actual user tests.

Materials
We used paper, posterboard, index cards, colored paper, rubber cement, glue-stick, post-it notes, plastic, scissors, colored markers, and pencils to construct our paper prototype. In addition we gathered (but did not use) crayons, a compass, transparencies, rulers, watercolors, paint pens, feathers, colored pencils, and sequins. We used Microsoft Word to lay out the content in each screen section and the menubar.

Prototype
Our prototype consists of 10 pages, 5 dialogue boxes, 11 search topics, 3 sets of search results (containing 4 books each), and a pointer (to represent a mouse).

All pages have a title/navigation bar at the top of the page, consisting of site area icons and labels. We also developed post-it rollovers for this area, but used them for only the first two tests.

The prototype's Home Page provides links to and information about all other areas of the site. It also provides links to sign up and sign in to ReadingTree. As part of this function, we created a dialogue box to notify users that they must sign up or sign in to access password-protected areas.

The Find a Book main page has links to keyword, alphabetical, and subject search, as well as personalized book recommendations. We developed sample searches and search results based on our test tasks, as well as a book information page for the book users were instructed to find. From the Book Information page users can also review the book. The personalized recommendations page allows users to improve their recommendations by rating a book (successful completion of which returns a "thanks for rating" dialogue box) or answering a poll (which returns a "thanks for answering" dialogue box).

The TreeHouse main page provides a list of member "treehouse" names, a list of bulletin board topics, chats currently underway, chats scheduled for the future, and a link to reviewing or rating a book. Selecting a message board topic brings up a page with specific threads, and selecting a topic thread allows the user to see individual postings.

The What's New main page has listings and links to featured books, most popular and unpopular books, as well as the member of the week.

Method

Participants

We issued a general call to SIMS students and directly solicited the assistance of classmates with children. We tested 4 children in all, ranging in age from 6 to 10. All 4 testers were Caucasian boys who enjoy reading. Their levels of computer experience varied: two had almost no Internet experience, while two use the Internet on a regular basis. We also tested one proxy user who takes care of a 7-year-old boy.

Task Scenarios

For our first two user tests, we asked subjects to complete five tasks in the following order:

  1. Sign up to become a member of ReadingTree.
  2. Find Harry Potter and the Goblet of Fire and rate it.
  3. Find out which book is #2 on ReadingTree's "What's Hot" list.
  4. Find out what 2 kids think about Hermione from Harry Potter.
  5. Get personalized book recommendations.

The first tests showed us that these tasks were not arranged in a natural order. Therefore, we revised the task scenarios as follows:

  1. Sign up for ReadingTree.
  2. Find out what the number 1 and 2 books are on the ReadingTree "What's Hot" list.
  3. Go to the book club message board and find out if anyone is talking about Harry Potter. Find out what 2 kids think about it.
  4. Find Harry Potter and the Goblet of Fire.
  5. Rate (and review) Harry Potter.

Following the first user tests, we also developed a shorter list of alternative tasks, in case we encountered a test subject who seemed to lack the motivation of a Jenny or an Ayisha. We did not, in the end, use these tasks but we had the screens prepared just in case.

When the children performed these tasks, we were looking to see how quickly and easily the user accomplished each task. We were also alert to any signs of dissatisfaction with the task or confusion about its purpose. We focused on these tasks for the following reasons:

  1. Sign up should be extremely simple and quick. Otherwise, users may be discouraged from trying any of the "members only" features, or from exploring the site at all.
  2. The "What's Hot" task was primarily navigational, to see if users could easily locate this section of the site. A secondary purpose was to check the terminology--is the phrase "what's hot" meaningful to children?
  3. The message board task was also a navigational task. We also wanted to see how the testers received the bulletin board concept.
  4. Finding out about specific books is a central task for each of our personas. We were curious to see which search methods our testers would use and how they would deal with the search results.
  5. Rating and reviewing books are less central tasks for our personas but are essential for the collaborative filtering dimension of the site. We wanted to see whether our book rating dialog and book review form were simple enough for children to quickly master.

Procedure

We ran three separate test sessions. The first was in a team member's home, the second and third in the upstairs lab in South Hall. The testing process was similar in each session (see script for details). The facilitator greeted the children and explained the purpose of the test. She provided an overview of what would happen during the session, assured confidentiality, and explained that they could ask questions or stop at any time. The facilitator then asked the child and accompanying parent to sign a consent form (reviewed by the Committee for the Protection of Human Subjects).

Next the facilitator administered the pre-test questionnaire, which asked about the child's book-finding habits and computer experience. The facilitator, with assistance from the "computer" team member, explained how the paper computer would work and demonstrated how to use the pointer to click on items.

The facilitator asked the testers to complete the five tasks, one at a time. After each task, the facilitator provided minimal feedback (e.g. "good job") to signal that it was time to move on to the next task. At the end of the test, the facilitator asked 4 summary questions to find out what the testers did and did not like about the interface. Testers were also asked to assess whether a child 1-2 years younger than themselves would be able to use the interface easily. We also gave them an opportunity to make general comments about their experience with ReadingTree; none chose to comment.

Testers received $5 book gift certificates for their participation in our project.

Test Measures

  • Was the on-screen terminology clear?
  • Were the navigational icons intuitive?
  • Did the testers understand how to search for a book?
  • How did testers respond to the interface overall? Which aspects caught their attention?
  • Did testers seem interested in the idea of rating and reviewing books? Were they interested in reading messages on a bulletin board?

Results

Testing environment and process
Kids are easily distracted during testing. Although they easily took to the idea of paper and plastic standing in for a computer and an input device, it didn't seem to hold their attention the way a "live" website might. However, when the environment was more formal, kids seemed to "behave" better and dedicate more energy to focusing on the task at hand.

The task flow was also important. Users appeared to have a difficult time navigating the site when the flow from task to task seemed arbitrary and unnatural. When we made minor modifications to the tasks in order to achieve a more natural flow through the site, and to more closely map to the goals of at least one of our personas, the users seemed to have an easier time finding where to go next.

Internet/Web experience
Navigation of the site was difficult for kids without Internet experience; the conventions of browsers and hyperlinks were not clear. However, users who were familiar with the Internet had very little problem identifying underlined text and icons as links. The Web-savvy kids could also easily identify concepts such as "home" represented by a house icon and linking back to the home page.
Following from this, our users (with and without Internet experience) also saw icons as links to something else, or at the very least as "clickable" for some result.

Content and interaction
Our users had a difficult time differentiating between "sign up" (for the first time) and "sign in" (for returning visitors). Even when we modified the main page to try to make the difference clearer, users still had trouble. Additionally, the grouping of information within the site (i.e. what features were where) was not immediately apparent to the users. Again, when we modified the prototype main page to draw attention to the grouping of information, our users still had some difficulty working through it.

One important feature that our users wanted, addressed in our personas' goals and scenarios but not fully present in this design, is the ability to assess the "qualities" of the books on the site (i.e. to "flip through" them). This feature emerged as very important to decision-making, especially in the test where the parent helped the child, but with older children as well.

Finally, visual recollection and interaction seemed to be important. Too much explanatory text appeared to get in the way of using the site, while the ability to visually recognize and assess a book or other feature seemed to matter a great deal. Younger kids especially are drawn to graphics and icons.

Discussion

What we learned from the evaluation
First, we learned that given the special circumstances of our target user group, a controlled environment is necessary in order for a valid test to take place. It is particularly difficult to test two kids in succession while one is waiting for the other, unless it's possible to essentially isolate each. On a related topic, we learned the importance of striking the proper tone when interacting with a child tester; without assuming an intimidating position as an authority figure, the facilitator must make it clear that she is in charge of the test session and that she expects the child to make an earnest attempt to complete each task. Another aspect of testing with children was the need to balance providing enough positive feedback (children need much more than adults) without guiding them too much. The natural urge was to step in and help when we observed them struggling with a task.

We also learned that familiarity with the Internet and with navigating websites is essential for use and enjoyment of the ReadingTree site. This is somewhat different from simply having computer experience through games or educational software, which provides familiarity with input devices and point-and-click functionality but doesn't expose the user to browser conventions. We might consider screening for Internet experience when recruiting the next group of test users.

Through the testing, it also became clear that some of the information grouping and site terminology is derived from what we, the designers, know of the system's functionality and what we want the system to do, rather than a logical mapping to the user's tasks and expectations. For example, we know that the collaborative filtering mechanism is a "community" aspect (i.e. requires community participation in order to work) and so "rate a book" shows up in the TreeHouse. However, users would not be looking to rate a book when they were looking for chat features, and that feature's presence seems odd and confusing.
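To make the "community" reasoning above concrete, here is a minimal sketch of the user-based collaborative filtering idea behind "rate a book". This is purely illustrative, not our actual recommendation engine, and all names, titles, and ratings are hypothetical: each kid's ratings are compared with everyone else's, and unrated books are scored by similarity-weighted averages.

```python
# Illustrative user-based collaborative filtering sketch.
# All users, books, and ratings below are hypothetical.
from math import sqrt

ratings = {
    "amy": {"Goblet of Fire": 5, "Charlotte's Web": 4, "Holes": 2},
    "ben": {"Goblet of Fire": 5, "Charlotte's Web": 5, "Holes": 1},
    "cal": {"Goblet of Fire": 1, "Holes": 5},
}

def similarity(a, b):
    """Cosine similarity over the books both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][t] * ratings[b][t] for t in shared)
    norm_a = sqrt(sum(ratings[a][t] ** 2 for t in shared))
    norm_b = sqrt(sum(ratings[b][t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score books the user hasn't rated, weighted by rater similarity."""
    scores, weights = {}, {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for book, r in ratings[other].items():
            if book not in ratings[user]:
                scores[book] = scores.get(book, 0.0) + sim * r
                weights[book] = weights.get(book, 0.0) + sim
    # Highest predicted rating first.
    return sorted(
        ((scores[b] / weights[b], b) for b in scores if weights[b] > 0),
        reverse=True,
    )
```

The point of the sketch is the dependency it exposes: predictions exist only for books that similar users have rated, so the quality of every kid's recommendations depends on other kids rating books. That is why the rating dialog must be simple enough for children to master quickly, even though rating is not a central task for any one persona.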

In this same way, we noticed our design contained extraneous links and information, which occasionally got in the way of users pursuing both their tasks and goals. For example, since our database supports a number of features relating to "modify your account," our initial design contained an entire "Members Only" section, which gave this small functionality far too much importance and served to further confuse users as to what was where.

Our single biggest interface problem, however, was clearly the "sign in/sign up" confusion. The "sign up" function is something that users have to do only once; after that, they are "members" and just have to sign in. Unfortunately, since all of our users were first-time users, we never got to see how kids advance past the sign-up stage and whether the sign-in confusion disappears at that point.

Evaluation results: Intended changes to the interface
Note: Some of these changes have been partially implemented in the paper prototype.

We modified the home page to provide a clearer map to what features the site had, and where they were, as well as to try to alleviate the sign up/sign in problem. We also modified the persistent navigation bar to reduce the number of options to those most pertinent to users' pursuit of their goals.

We will attempt to include additional means to assess the "qualities" of a book on the book information page, in particular example pages from the book itself. We may also attempt to add a non-grade-based reading-level assessment of each book, such as "read with parent", "beginning reader", "chapter book" and so on. (This is similar to the reading level system that Scholastic employs in its young-reader catalog.)

We will attempt to revamp the underlying site structure to more clearly support our users' goals and to more clearly represent where a user might expect to find certain features. We may ask others to participate in a card-sorting exercise or other means of gaining outside perspective.

What we could not learn from the evaluation
Our lo-fi prototype evaluation did not answer some major questions about the design and viability of this site. First, the ability to interact with a computer is potentially problematic for younger or less experienced members of our user group. Using a mouse or a keyboard to input information or to navigate the site may be much more difficult than simply saying "click" to click and speaking out loud in order to type in a box, as our testers could with the paper prototype. Second, the lo-fi prototype evaluation could not tell us how exploratory learning might contribute to use and enjoyment of the site. A kid might much prefer the ability to simply click around and get acquainted with the site, rather than being limited to pages that actually exist.