Methods

Participants

The participants were selected from a list of attendees for an upcoming IMG workshop at JGI. The particular individuals we chose reflect the diversity of research interests among the users at whom our system is targeted (e.g., interest in a narrow range of organisms vs. a broad range, eukaryotes vs. prokaryotes). We also favored users with substantial experience with the IMG system or with other annotation systems. Summary profiles are given below:

User A

Sex: Male
Age: 35 - 40
Position: PhD Student
Department: Cellular and Molecular Biology
Computer Fluency: Average
Annotation experience: Very High
Fluency with IMG system: Low

User B

Sex: Male
Age: 25 - 30
Position: PhD Student
Department: Plant Biology (specialty: fungi)
Computer Fluency: High
Annotation experience: Low
Fluency with IMG system: Very High

User C

Sex: Male
Age: 30 - 35
Position: Postdoctoral Scientist
Department: Molecular Microbiology
Computer Fluency: Average
Annotation experience: High
Fluency with IMG system: High

Task Scenarios

We identified four main tasks that were loosely based on the three scenarios of assignment 2. To demonstrate how the paper computer worked, we walked users through the IMG system, starting at the home page, performing a search for a specific gene, and navigating to the Gene Details page for one of the search results.

Task 1 was to make a substantial addition to an annotation. Starting from the results page of the initial gene search, users were encouraged to locate and study a specific gene's annotation history, gather annotation data from its closest homologue, and use that information to update the GO Function, E.C. Number, COG Group, and name in the existing annotation. Once they were done, they were encouraged to mimic a "coffee-break" scenario, which required them to save the annotation as a draft and return to submit it later. The task was complete when the annotation had been updated. We looked for difficulties in finding the data needed to update an annotation, using the annotation controls, recognizing the "save as draft" option as such, and finding the saved draft after the "coffee break."

Task 2 was to provide feedback on an existing annotation. We specifically asked participants to "express [their] opposition," since we needed to couch the task in terms that would not evoke the word "agree." From the Gene Details page, the participant had the option of clicking "I Agree/I Disagree," leaving a comment on the discussion page, or even modifying the annotation. All were considered valid approaches, although we felt that the first option was the easiest and wanted to see how readily users would take advantage of it. The task was complete once the user submitted any feedback.

Task 3 was to make a small addition to an existing annotation, this time beginning from the Gene Cart page (which represents genes of interest intentionally placed in a participant's personal cart). The participant was instructed to fill in missing COG information for a particular gene. We felt that users could easily infer, from the other genes we placed in the Gene Cart, that the malate dehydrogenase in question should have the same COG annotation as the other malate dehydrogenases in the cart. However, users were given the freedom to turn to whatever resources they felt were necessary before navigating to the Annotation page to enter the COG ID. The task was complete when the user clicked "Submit."

Task 4 was to comment on a set of genes. From the Gene Cart page, the participant needed to select the genes of interest, click "discuss selected," enter a comment for the selected genes, and submit it. The task was complete upon submission. We looked to see whether users found the required controls easily and understood the "select, then click" approach.

Procedure

One team member played the role of note-taker, one the interviewer, and one the "computer." We met with users in their regular work environments and set up our paper prototype on a reasonably large conference or lunch table. While the computer was "booting up," we introduced our study, completed the consent form, assured the user that we were testing the interface rather than them, and demonstrated how the human "computer" would function. With the subject's consent, we recorded audio of the entire interview. For each task, we dictated the demo script included in the appendix. When users became lost or confused, we encouraged them to think aloud but tried to refrain from answering questions that could affect their use of the system. Once all the tasks had been completed, we conducted a less formal debriefing session, in which we requested the participants' subjective evaluations, suggestions, and opinions on general questions of concern to us. A summary of all of the debriefing sessions can also be found in the appendix.