NAMES ACCESS PROJECT (SIMS 213, UC Berkeley)


Pilot user study

1. Introduction
Tracing the individual histories of Holocaust survivors and victims requires considerable time and effort.  In an effort to improve access to a valuable collection of resources, an online search service was designed in cooperation with the U.S. Holocaust Memorial Museum in Washington, DC.  Search tools have been developed to help users formulate and expand their queries, and browse tools facilitate exploratory access to information.

To gather user feedback and identify problems before the service is made publicly available, a pilot study was conducted with seven potential users.  On-site participants were given three tasks, enabling us to observe their interactions with the search and browse tools.  Remote participants were given the same tasks and asked to complete a questionnaire afterwards.  Both the on-site and remote tests helped identify a number of areas that could be improved, such as the site menu and Revise Search.

2. Method
2.1 Participants
On-site participants included two Holocaust survivors from the East Bay and one SIMS student, while the remote participants included two survivors from the South Bay and two Museum researchers.  Participants were selected based on how closely their background fit the user profiles identified in the original task analysis.  For the remainder of this report, participants will be referred to as P1, P2, and so on.  See table below for additional demographic information.

Participant   Sex   Age   Background             Computer use   Internet use   Search experience*

On-site
P1            F     40    SIMS Student           Daily          Daily          High
P2            M     70    Survivor/Analyst       Daily          Daily          High
P3            F     70    Survivor/MLS           Monthly        Monthly        Medium

Remote
P4            F     60    Survivor/Therapist     Daily          Daily          Medium
P5            M     65    Survivor/Genealogist   Daily          Daily          High
P6            F     28    Museum Researcher      Daily          Daily          High
P7            M     26    Museum Researcher      Daily          Daily          High

* Self-reported; assessed using the questions described in Section 2.4.

2.2 Apparatus
On-site user tests were conducted in the computer lab on the second floor of South Hall.  The same workstation, a 350MHz Pentium running Windows NT, was used for all of the tests.  Observations and post-task interview results were noted in a journal.  Equipment used for the remote tests varied widely, with the majority of users on Windows 95, and one on a Macintosh Performa.  In terms of browsers, all of the on-site participants used Netscape 4.0, whereas the remote participants used AOL 4.0 (Internet Explorer) or Netscape 4.0.  Test instructions and post-task questions were sent to remote participants via email.

2.3 Tasks
Earlier scenarios were revised to include names of individuals in the database, and to exercise site features not considered previously, specifically date range queries.

Scenario 1
You work in the Survivors Registry division at the Holocaust Museum.  A letter arrives from a woman in Israel looking for information about her cousin, Chana Rubinstein, who may have married sometime after the war.  She was born in a town called Maciejow in Poland.  The town was also known as Matseev, but is now called Lukov and is currently part of the Ukraine.

Scenario 2
You are a survivor from the Netherlands.  You are looking for information about family friends who may have survived the War.  You can’t quite remember, but their last name was something like Frank, Frenkel, or Franken.  They were from Amsterdam and had a daughter named Elizabeth.  You recall hearing that they were in Rotterdam during the War.

Scenario 3
You are a historian writing a book about an underground group in Warsaw that was apparently led by men and women in their late teens (born between 1924 and 1926).  You want to try to locate all of the survivors from this city who could have been related to, or known about, this group of teenagers.

While participants were working through the tasks above, the note taker answered the following questions to help determine whether specific features needed improvement:
 

  • How do participants revise their searches? Do they go back to Home, use Revise Search, or use Advanced Search?
  • How long does it take for them to complete each task?
  • What kind of errors do they make? How often?
  • What tools do they use for each scenario? Browse? Search? Both? Were any search options used?
  • Do people get lost in the frames? If so, where?
  • Do they use the menu provided on each page?
  • Is Advanced Search immediately obvious, or do they need to poke around?


2.4 Procedure
On-site participants were brought to the computer lab in South Hall.  While the browser was being launched, participants signed the Informed Consent Document.  Their computer and Internet experience was roughly assessed by asking, “How often do you use a computer/the Internet? Daily? Weekly? Monthly?”, and their search experience was assessed by asking questions such as, “What types of search tools have you used before, e.g., Lexis-Nexis, AltaVista, or Yahoo!?” and “How often do you use these tools?”.  Next, they were told what aspects of the system were incomplete, such as the thesaurus and help resources.  After providing a brief description of the type and number of tasks, we presented participants with the first task described in 2.3 above.

In addition to the observations mentioned earlier, the paths taken to complete the tasks were noted.  When participants could not find a particular tool, this fact was noted and they were encouraged to continue the task.  If they reached a point where they seemed especially frustrated, they were given a small hint.  After the first task was completed, they were given an opportunity to rest before moving on to the next task.  Once all of the tasks were completed, participants were asked the following questions:

  1. Do you think the “short” results format was missing any vital information? If yes, what kind of information?  If no, do you think there was too much information?  Did you find the source information valuable?
  2. (Explain the difference between automatically expanding and selecting terms to expand your query.)  If given the option, do you think you would want to choose from the Soundex and thesaurus results, or have the results automatically added to your query?
  3. Did you find the number of records retrieved overwhelming? If yes, how many would be more manageable? If not, would you like more displayed?
  4. (Present alternative frame layouts.) Did you prefer the alternative frame layouts? Why or why not?
  5. What did you like most about the system? Least?
  6. Outside of search and browse, how do you envision using the service? Would you want to update your record?  Connect with other survivors?
  7. (Explain search by type, i.e., by maiden or alias name.)  Do you imagine wanting to search by these types?
  8. (Explain the problems associated with organizing towns by country.) What are your thoughts on the current organization of Browse Places? How would you search for a town, knowing that the borders and names might have changed significantly?
  9. (If they didn’t use Revise Search.) Why didn’t you use Revise Search?
  10. (Seek general comments.)

3. Test Measures
Response variables measured included the time to complete each task, and the number and type of relevant errors.  Each scenario was designed to be slightly more difficult than the previous one, so we predicted that task completion times would increase from the first to the last task.  If participants had difficulty finishing the earlier tasks within a reasonable period of time, this might suggest that some of the basic site functions require significant changes.  The number and type of relevant errors were noted to identify common interface problems and help prioritize interface improvements.  If the majority of the participants made several errors using the same feature, this aspect of the interface would take higher priority than a feature that caused problems for just one user.  Errors beyond the scope of the interface were noted; however, they were not counted as relevant errors.  For example, if participants had difficulty using the mouse or pull-down menus, this would be more a function of their computer experience than of the design of the user interface.

4. Results
Task completion times and the number of errors for each participant were relatively consistent for the first two scenarios but varied significantly for the last one.  With regard to process data, most of the participants experienced the same confusion while completing the second and third tasks, such as not being able to find Advanced Search or the link back to Home.  Interview results confirmed which aspects of the interface worked well, such as Browse names by Place and the number of records displayed in search results, and identified areas that could benefit from slight interface changes.

4.1 Response variables
Below are the approximate task completion times and errors, organized by task and participant.  Because many of the participants commented on the site while completing the tasks, or stopped to relay a story related to the task, the time spent actually working through each task is somewhat lower than the times shown.  In Section 5, we discuss how this problem could be alleviated in the formal experiment.

Task Completion Time (minutes)

      Task 1   Task 2   Task 3
P1    3        3        4
P2    1        2        4
P3    2        7        10
Task Errors

      Task 1   Task 2   Task 3
P1    2        0        1
P2    0        0        2
P3    2        3        2

As predicted, the completion times increased from the first to the last task.  P3 took longer than expected to finish all of the tasks, which was primarily a function of inexperience with the mouse.  Most task errors were related to navigation and entering information in the date range fields.  For example, P3 clicked on the browser Home button instead of the site menu Home link, and P2 initially entered the date query in two-digit instead of four-digit format.  See critical incident logs for detailed task errors.
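P2’s two-digit entry suggests the date range fields could normalize input rather than reject it.  Below is a minimal sketch, in Python, of that kind of normalization; the function name and the accepted year range are hypothetical, as the site’s actual validation logic was not examined in this study.

    # Hypothetical sketch: normalize a year typed into the date range
    # fields, accepting both two-digit ("24") and four-digit ("1924")
    # input. The 1850-1945 bounds are an illustrative assumption.
    def normalize_year(raw: str) -> int:
        """Return a four-digit year, mapping two-digit input to the 1900s."""
        year = int(raw.strip())
        if year < 100:
            year += 1900          # "24" becomes 1924
        if not 1850 <= year <= 1945:
            raise ValueError(f"Birth year out of range: {year}")
        return year

    # The Scenario 3 query (born between 1924 and 1926) then works
    # whether the user types "24" or "1924".
    assert normalize_year("24") == normalize_year("1924") == 1924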

4.2 Process data
All but one participant, P3, submitted a query from the start page when presented with the first two tasks, and then looked for additional tools if the result set was relatively large.  Although participants could initiate the first and second tasks from the start page, it was difficult to begin the last task without using Advanced Search.  When participants were presented with the last task, it took a significant amount of time even to find the links to Advanced Search.  One participant remarked, “You can’t search by date.”  Instead of eventually finding Advanced Search among the menu options, participants located the links within the page, near the start page search form or in the Revise Search frame.  Once they located Advanced Search, most of the participants filled out the form correctly, reading the “Search tips” as they entered the information.  When it came time to actually submit the form, P2 and P3 had difficulty finding the search button at the bottom of the page.  When one participant grew frustrated and started to wander off to other pages, they were encouraged to look through the entire form, and eventually found the button.

Similar to the Advanced Search problem, most participants didn’t take the shortest path back to the start page: nearly all of them used the browser Back button instead of clicking the Home link in the site menu.  The frames layout used in search results was not problematic, though one user, P3, was interested in “jumping ahead” to the end of the results.  For example, if 1,000 records were retrieved, P3 would have liked the option to skip to the last twenty-five.  This participant also assumed that Revise Search retained the previous search criteria, when in fact the form initiates a new search.
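Supporting P3’s “jump ahead” request is largely offset arithmetic.  The sketch below assumes the current twenty-five-record page size; the function name is hypothetical.

    # Hypothetical sketch: compute the offset of the final results page,
    # so a "last" link could sit alongside "back" and "next".
    PAGE_SIZE = 25  # records per page in the current interface

    def last_page_offset(total_records: int) -> int:
        """Zero-based index of the first record on the final page."""
        if total_records <= 0:
            return 0
        return ((total_records - 1) // PAGE_SIZE) * PAGE_SIZE

    # For P3's example of 1,000 retrieved records, the last page would
    # show records 976 through 1000 (offset 975).
    assert last_page_offset(1000) == 975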

4.3 Interview results
When asked about the content of the “short” search results, most of the participants said they would be interested in seeing the individual’s place of birth, but thought the source information would be more appropriate in the “long” format.  One participant, P3, also suggested that we provide a way to indicate when short records are for the same individual but under previous names, such as an alias or maiden name.  Similarly, within the long record, P3 thought the “other” names could be displayed more effectively.  Search is structured such that users can enter an alias or maiden name and still find the record containing the current name.  However, users became confused when they clicked on a previous name to view the long record, as the current name is set in large, bold letters, drawing attention away from the “other” names formatted in a smaller font.

With regard to the number of records displayed on each page, all of the participants thought the current amount was appropriate.  When presented with alternative search results layouts, combinations with and without the menu and/or Revise Search, all of the participants said they would not want Revise Search removed, but they weren’t sure about keeping the site menu.  Participants were split on whether they would like the Soundex and thesaurus results automatically added to their query.  The survivors were interested in searching names by type; however, the SIMS student thought users might be overwhelmed by additional search options.  All of the participants thought Browse Places was easy to navigate and organized in a logical manner.

In terms of general comments, one participant, P2, thought it would be helpful to indicate who registered individuals entered in the Survivors Registry, and to provide information about rescuers.
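Because participants were split on automatic expansion, a preview of the Soundex candidates (see “Develop Soundex preview option” in Section 5) may be a reasonable middle ground.  As background, here is a minimal Python sketch of the classic Soundex encoding; whether the site uses exactly this variant is an assumption.

    # Sketch of the classic Soundex algorithm: a letter followed by
    # three digits, grouping consonants that sound alike.
    def soundex(name: str) -> str:
        codes = {**dict.fromkeys("bfpv", "1"),
                 **dict.fromkeys("cgjkqsxz", "2"),
                 **dict.fromkeys("dt", "3"),
                 "l": "4",
                 **dict.fromkeys("mn", "5"),
                 "r": "6"}
        name = name.lower()
        encoded = name[0].upper()
        prev = codes.get(name[0], "")
        for ch in name[1:]:
            code = codes.get(ch, "")
            if code and code != prev:
                encoded += code
            if ch not in "hw":    # h and w do not reset the previous code
                prev = code
        return (encoded + "000")[:4]

    # Frank, Frenkel, and Franken from Scenario 2 all encode to F652,
    # which is how a Soundex search can retrieve all three spellings.
    print(soundex("Frank"), soundex("Frenkel"), soundex("Franken"))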
 

5. Discussion
One of the most valuable lessons learned from the pilot study was the importance of selecting participants who are truly representative of potential users.  While the SIMS student provided helpful feedback, she could not critique certain aspects of the system because she had limited knowledge of the domain.  For example, since the survivors understood the problems associated with place names, they could provide valuable feedback on the organization of the Browse names by Place interface.  Thus, to be most effective, the “real” experiment should include a larger group of representative users.

Since the interviews yielded more specific feedback than the comments submitted via email, on-site testing is definitely preferable.  For the formal experiment, it would also be helpful to videotape sessions and use logs to track search paths, as it was difficult to note down each and every click made by users.  Finally, computer experience should be assessed before the on-site visit.  One of the participants, P3, was a Macintosh user who had never used a two-button mouse.  This became evident when she attempted to select items from the pull-down menus.  If we had known this in advance, the first few minutes could have been spent teaching her the basic click functions.

Based on the pilot study results, several aspects of the interface will be changed:

Navigation

  • Make site menu more visible.
  • Provide additional Advanced Search links.
  • Include extra search button near the top of Advanced Search.
  • Provide access to search results by page, instead of just “back” and “next”.


Search

  • Add search by type in Advanced Search.
  • Reconsider Revise Search functionality.
  • Develop Soundex preview option.
  • Add clear button to start page search form.


Content

  • Provide way to indicate that multiple names are for the same individual.
  • Improve the display of “other” names in the “long” format.
  • Change “Contact Info” to “Contact Us”.
  • Consider removing Source from “short” search results.
  • Make Advanced Search features more evident.
  • Mention year-only searches in the search tips, and change the example date to MM/DD/YYYY format.
  • Suggest addition of rescuer information to Registry records.


6. Formal Experiment Design
Considering the navigation problems observed in the pilot study, it would be helpful to evaluate alternative site menus in the formal experiment.  We might hypothesize that changes to factors such as shading, position, alignment, and grouping would increase site menu use and significantly reduce task completion times.  The factors and levels under consideration are shown in the table below.

Factors and associated levels

Factors     Level 1        Level 2
Shading     None           Light-blue
Position    Above header   Below header
Alignment   Left           Center
Groupings   Two            Four

To help reduce the number of overall comparisons required, we would first try to identify the most important factors through blocking.  Blocking the factors and levels would allow us to observe the effects of combinations that we suspect will be most influential.

1. Shading/Alignment: 1) Shaded/Centered, 2) Shaded/Aligned Left, 3) No shade/Centered, 4) No shade/Aligned Left

2. Position/Alignment: 1) Above Header/Centered, 2) Above Header/Aligned Left, 3) Below Header/Centered, 4) Below Header/Aligned Left

3. Shading/Groupings: 1) Shaded/Two, 2) Shaded/Four, 3) No shade/Two, 4) No shade/Four

Based on the comparisons above, we should be able to isolate the important combinations.  For example, if alignment has no effect in 1) and 2) above, it should probably be eliminated from the study.  In contrast, if shading seems to affect task completion times in 1) and 3), that factor should be included in the formal study.  To reduce learning effects, a between-groups study would be preferable.  As discussed in Section 5 above, a video camera and logs would help us document user paths as they work through the tasks in Section 2.3.
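To make the blocking concrete, the short Python sketch below enumerates the three blocks.  Assuming the two factors not varied within a block are held at a default level, this yields three blocks of four conditions each (twelve menu variants), versus the sixteen a full 2x2x2x2 factorial would require.

    # Enumerate the blocked factor combinations described above.
    from itertools import product

    factors = {
        "Shading":   ["No shade", "Shaded"],
        "Position":  ["Above header", "Below header"],
        "Alignment": ["Centered", "Aligned left"],
        "Groupings": ["Two", "Four"],
    }

    blocks = [("Shading", "Alignment"),
              ("Position", "Alignment"),
              ("Shading", "Groupings")]

    for i, (a, b) in enumerate(blocks, start=1):
        print(f"Block {i}: {a} x {b}")
        for combo in product(factors[a], factors[b]):
            print("   ", " / ".join(combo))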

7. Appendices
Script
Tasks
Observation sheet
Interview questions
Observations and critical incident logs


Please send questions to Suzanne Ginsburg at ginsburg@sims.berkeley.edu
Last modified on April 27, 1999