Participants
We selected only participants who currently use the metadata tagging tool or would be interested in using it in the future. Participant 3 is a junior majoring in computer science at UC Berkeley. He was the head TA for CS3 (a computer science course taught using the UCWISE system) and worked as a developer for UCWISE. Participant 1 is a graduate student in the School of Education. He has taught CS3 for more than four semesters, and his research interest is in the area of computer science education. Participant 2 is a postdoctoral researcher on the UCWISE project and is currently the instructor for CS3. He is especially interested in using the metadata tagging tool and has many ideas about what it should do.
Task scenarios
We developed a testing script in which the user was to tag a specific quiz lesson unit, titled "Tail Recursion," with metadata. We asked them to perform the following tasks with this quiz in mind.
- Adding personal metadata that does not exist in the default metadata input form.
We tried to infer how intuitively the user associated this task with the Templates and Additional Metadata features, which are one click away from the default metadata input page.
- Deciding what to do when given a Basic Quiz Info metadata page that does not contain all the desired information to be attached to the quiz.
We wanted to find out how helpful the user found the Templates page in addressing this problem.
- Attaching a piece of metadata titled "Publisher" to the quiz.
We wanted to observe whether it was intuitive for the user to use the Administrative Metadata page to locate the Publisher metadata.
- Personalizing the arrangement of the metadata on the Basic Quiz Info input form.
We tried to see whether the user was able to quickly find the Edit option on the Metadata Template page as the way to complete this task.
- Tagging the quiz with domain-specific metadata (topics) titled "Decomposition" and "Identify Similarity."
We wanted to see whether the user was able to find the progress bar at the bottom of the interface and use it to navigate to the next step of the tagging process. We also wanted to find out whether the user liked the arrangement of the metadata and found the methods of adding metadata simple.
- Tagging the quiz with metadata containing personal notes, specifically "Discussion Benefits of Offline Activities" and "External Links to Lecture."
In this task, we wanted to see whether the user still perceived the natural progression of the tagging process and was inclined to go to the next step to achieve this goal. Here we also wanted to observe whether the user found the Flamenco-style metadata arrangement easy to use.
- Authoring the quiz.
We wanted to find out what the user thought of having the authoring interface at the end of the metadata tagging process: whether they found it inconvenient, whether it encouraged them to input more metadata, and whether they knew that they could jump around and start authoring at any stage of the tagging process.
Procedure
All of our team members were present and participated in carrying out each of the low-fi prototype usability tests, except for the last testing session, which Tofer could not attend due to a family emergency. One team member acted as the facilitator. He introduced the project, the goals of the low-fi prototype, and the logistics, and finally assigned the tasks for the participant to carry out. Each participant was instructed to think aloud, to say whether they were clicking on or browsing over a feature, and to feel free to comment on features they found good, clear, obscure, difficult to understand, etc. The participant was also given a mouse pointer and a hand pointer to simulate clicking and mouse-over actions, and was encouraged to physically interact with the screen prototype. The second team member acted as the computer: she displayed the paper screens and switched pop-ups, pages, navigation states, messages, etc. in response to the participant's mouse actions. The third team member acted as the observer. She noted the participant's comments and recorded which tasks and features the participant found easy or difficult. We also audio-taped each usability testing session.
Although we had planned a particular sequence of tasks and questions to ask during the test, each test evolved in a distinctly different way. In many cases we had to play along with what the participant tried to carry out in order to see which tasks or features were intuitive and drew attention. During the test, the participants were often reminded to think aloud, and the facilitator often asked whether they found a feature intuitive. This effort to keep the participants talking about the interface as they stepped through the tasks gave us richer feedback about our design.