UC Berkeley IS 213 Course Project, School of Information Management and Systems


Bin Xin
Rosa Ren
Monica Fernandes
Hong Cai
   

Method

Participants: Working with limited time and resources, we tried to find three participants who match our personae and target audience as closely as possible. We tested three male participants. Two of the participants are between 31-45 years old, while the third is between 21-30. All three enjoy going out to bars and clubs, sometimes for various events. Since our ultimate target customers are Internet/web savvy, we also looked for relevant experience from our testers. One tester has used the Internet for between 5-8 years, while the other two have used the Internet for more than 8 years. The two participants who have used the Internet for more than 8 years have used only Sfgate and have not created a personalized page with websites such as MyYahoo. The youngest participant, with 5-8 years of usage, turned out to be someone who has studied personalization. This user has not only used more than one entertainment site but has also created personalized pages with various portal and major websites, ranging from MyYahoo to MyCnn to MyAmazon. In addition, another participant has been working in the field of human-computer interaction design.
 
Apparatus and Testing Control: Since a substantial portion of the prototype has been implemented using Dreamweaver, Perl CGI, and a MySQL database, the usability test was conducted using the latest version of the interactive prototype on SIMS lab computers. We also controlled the experiment by using the same browser, Internet Explorer, which gave better results with the prototype. A small section of the second floor lab was blocked off with signs requesting quiet from passersby. Bagels and juices were also prepared to help sustain the participants and team members throughout the sessions.
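The prototype code itself is not reproduced in this report, but for readers unfamiliar with the stack described above, the sketch below illustrates how a Perl CGI page backed by MySQL might serve a user's My Picks list. This is an illustration only, not the team's actual implementation: the database name (sfnight), table (my_picks), column names, and credentials are hypothetical.

    #!/usr/bin/perl -w
    # Illustrative sketch only: a CGI script that lists a user's "My Picks"
    # entries from a MySQL table. All database, table, and column names
    # below are assumptions, not the real SFnight schema.
    use strict;
    use CGI;
    use DBI;

    my $q       = CGI->new;
    my $user_id = $q->param('user_id');

    # Connect to the (hypothetical) sfnight database.
    my $dbh = DBI->connect('DBI:mysql:database=sfnight;host=localhost',
                           'webuser', 'secret', { RaiseError => 1 });

    # Fetch this user's picks.
    my $sth = $dbh->prepare(
        'SELECT event_name, event_date FROM my_picks WHERE user_id = ?');
    $sth->execute($user_id);

    # Render the picks as a simple HTML list.
    print $q->header('text/html'),
          $q->start_html('MySFnight - My Picks'),
          '<ul>';
    while (my $row = $sth->fetchrow_hashref) {
        print '<li>', $row->{event_name}, ' (', $row->{event_date}, ')</li>';
    }
    print '</ul>', $q->end_html;

    $sth->finish;
    $dbh->disconnect;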

Task Scenarios: After reviewing the Experiment Kit for the Low-fi Prototype Usability testing, Task 2 and Task 3 were changed to help us better focus our efforts on the customization processes with higher priority. The original tasks were:

  1. Sending some options of things to do to friends
  2. Signing up for a newsletter
  3. Creating, customizing, and exploring the MySFnight page. 

The new tasks became:

  1. Sending some options of things to do to friends
  2. Signing up for MySFnight
  3. Customizing and using the MySFnight features to make plans. 

The team felt that by not testing the newsletter signup process and by separating the MySFnight signup process from the actual use of the MySFnight page, the tests would provide more useful information about the more critical features of MySFnight. The tasks are still in order of level of customization, and thus in order of expected usage. The first task is available to all users, requires no identifying information, and uses a feature that is temporary in nature. The second task requires complete registration and perhaps some preference selection. The third task is a follow-up exploration of the calendar planning and management functionality that MySFnight provides, in which users need to add specific events and venues to their My Picks area.

Task 1: We decided to first test how well a user can create a list of options and email the choices to friends. This feature is available to all visitors of SFnight, whether or not they have a customized account with SFnight. This feature is also intended to encourage users to start developing a meaningful relationship with SFnight. In previous tests, users had a difficult time understanding the name of the concept we were trying to present. This test is an opportunity to try a combination of text labeling and distinct icons. 

Task 2: To take full advantage of the services that SFnight has to offer, users need to sign up for a MySFnight account. Most sites lose customers because of the amount and nature of the information asked for, identifying and otherwise. On the other hand, we needed to strike a balance with the information that is needed in order for the service to be useful to a user. We wanted to see how easy it was for users to get to the signup page, how they felt about the amount of personal information required and the number of preference selections presented, and whether they felt a sense of completion once they clicked the submit button.

Task 3: This task involves the most interaction and a bit of exploration. We were especially interested in seeing how users perceive the calendar planning and management capabilities that the MySFnight page offers. We wanted to see if the different kinds of interactions for an individual event or venue selection were intuitive or at least easy to figure out. We also wanted to see if users could figure out the different ways an event or venue can be added to the My Picks area. 
 

Procedure: We prepared a test kit for each user. The kit included a consent form, a simple demographic and Internet/Web experience survey, instructions for each task, surveys for each task and for the overall site, reminder instructions for the team members, and two copies of the Observation Log form for the two observers. (See Test Kit, Observation Log forms)

We scheduled all three participants on a late Monday evening in one-hour slots. When a participant arrived, a facilitator greeted the individual and asked him to sign a consent form and complete the simple personal background survey. The facilitator then followed a script to introduce SFnight, the goals of the test, and the speak-out-loud technique. The facilitator also briefly showed the participant some general areas of the site that were working, while pointing out the parts that were not. The participant was informed that he would be notified whenever he ran into other areas of the site that were not implemented. Finally, the participant was encouraged to ask questions and reminded again of the speak-out-loud technique.

When the test began, the facilitator read the task instructions with the participant and explained any parts of the instructions that the participant was confused about. The participant was encouraged to speak out loud his perceptions of the system as he worked through each task. The facilitator intervened when the user tried to access a part of the prototype that had not been implemented. After the participant completed a task, he was asked to complete the task survey. When all tasks were completed and the participant had completed the site survey, the facilitator explained the specific goals of the test and the overall site. Then the facilitator and at least one observer discussed in depth with the participant various issues that came up during the test and also asked for suggestions for the design.

To ensure that each participant started on time and was given fair attention, two team members took on the facilitator role. The second facilitator started working with a new participant while the first facilitator finished with the previous one. When one participant took longer than planned, the two observers also split up to ensure that each participant session had at least one set of observation data.

Only one photo from the back was taken (see photo), and team members who were observing took notes in their log forms. 

 
 

 
Updated: Apr 25, 2001