A user-centered approach to designing, building,
  and implementing a

Digital Asset Management System

for the San Francisco Museum of Modern Art
      
Thoreau Lovell
Margo Dunlap

Joanna Plattner


IS213
 
Spring 2001


Site Contents

1. Project Proposal


2. Personas, Goals & Task Analysis

3. Scenarios, Competitive Analysis, and Preliminary Design

4. Low-fi Prototype
   Introduction
   Prototype Description
   Low-Fi Pictures
      Setup
      Home Page
      Object Record
      Image Request
   Usability Testing
      Participants
      Procedure
      Scenario 1
      Scenario 2
   Test Measures
   Results
   Discussion
   Materials
      Script
      Consent Form
      Task Scenario 1
      Task Scenario 2
      Incident Logs

5. Appendix

6. Vocabulary

7. Work Distribution




Low-Fidelity Prototype and Usability Testing


4.1 Introduction

The Digital Asset Management System (DAM) we are designing for SFMOMA will consist of a Web interface; a database for storing metadata about digital images, their creation, and use; and the scripts that will connect the two into a single functional system.
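The report leaves the implementation open, but as a very rough sketch, assuming SQLite and entirely hypothetical table and column names, the metadata store behind the Web interface might look something like this (the image classes and technical fields such as bit depth come from the prototype; everything else is our illustration):

```python
# A hypothetical sketch of the DAM metadata database. Table and column
# names are illustrative assumptions, not the system's actual schema.
import sqlite3

conn = sqlite3.connect("dam_sketch.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS art_object (
    object_id    INTEGER PRIMARY KEY,
    accession_no TEXT,    -- SFMOMA accession number, when one exists
    artist       TEXT,
    title        TEXT
);

CREATE TABLE IF NOT EXISTS digital_image (
    image_id    INTEGER PRIMARY KEY,
    object_id   INTEGER REFERENCES art_object(object_id),
    image_class TEXT,     -- e.g. 'master', 'source', 'browse', 'research'
    bit_depth   INTEGER,
    pixel_dims  TEXT,
    created_on  TEXT
);

CREATE TABLE IF NOT EXISTS image_request (
    request_id   INTEGER PRIMARY KEY,  -- the "Image Request number"
    requested_by TEXT,
    due_date     TEXT,
    status       TEXT     -- e.g. 'open', 'complete'
);
""")
conn.close()
```

The "scripts" layer of the system would sit between Web forms and queries against tables like these.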

There are two primary user groups for the DAM system: those who need to create and manage digital images, and those who want to find and request the use of digital images. We tried to address some of the needs of both groups in the two scenarios used to test our low-fidelity prototype.

Our goal was to test the initial interaction design for three components of the system: using search to determine if a digital image exists, creating and processing image requests, and entering the technical metadata associated with a set of newly created digital images. We also wanted to test the terms we'd chosen for menus, buttons, and navigation controls.

4.2 Prototype Description
Our prototype consisted of one sheet of heavy white cardboard (14"x11") onto which we glued the Netscape browser menu bar, an SFMOMA Digital Asset Management System banner, and the navigation bar we'd designed. Individual screens were created using the drawing tools in MS Word, printed in black and white, and placed on the cardboard sheet as needed. Dialog boxes were cut out of printed pieces of paper, and error messages were handwritten on note cards; both were placed on top of the screens as necessary. No color was used in the prototype. A pen was used as a pointing device.

Pictures of the Low-fi prototype


4.3 Usability Testing

Participants: We conducted three separate usability tests on the low-fi prototype, one on March 5 and two on March 7. Two of the tests were conducted in the Graphic Study office in the main SFMOMA building; the third was held in the Interactive Education offices in a nearby building. All three of the testers are staff members at SFMOMA, and each of them will be a regular user of the Digital Asset Management system. Their roles at the museum are: Manager of Visual Resources in the Collection Information and Access department, Image Coordinator in Collection Information and Access, and Production Manager for Interactive Educational Technologies.

Procedure: Before beginning the usability test, we had each subject sign an informed consent form. Then we reviewed the test procedure, explaining that they would work through two scenarios to test the system and that each of us would play a specific role: Greeter and Facilitator (Thoreau), Computer (Margo), and Observer (Joanna). We also encouraged them to "think aloud" as they worked through the scenarios. Finally, we let them know that the low-fi prototype contained only "screens" related to the tasks described in the scenarios, and that if they "clicked" on an option for which there was no screen we'd let them know verbally. They then ran through both scenarios. Afterwards there was a short debriefing session. During the tests and the debriefing, note cards were used to capture as much as possible of what the user, the facilitator, and sometimes even the "computer" said.


Task Scenario 1: "You are a multimedia content developer at SFMOMA's education department. You've been asked to create a digital interactive presentation to support an upcoming exhibit called Exits." This scenario included the following tasks:

  1. Request access to a master image of Still Life with Roses and Arrow, by David Ligare.

    What we tested: how do users try to find an image, and which menu options and search terms do they use; how do they respond to the search results page; how do they respond to the object record page; do they understand how to request access to an image; is the differentiation between requesting access and requesting digitization clear; does the term "master" class image make sense; does the Image Request form make sense.

  2. Request digitization of and access to a source file for Fire by Elmer Bischoff.

    What we tested: what do users do when there is not an existing digital image, do they understand how to request a new digitization; does the term "source" class image make sense.

  3. Process and submit the image request for Still Life . . . and Fire.

    What we tested: do users understand how to request that an Image Request be processed; do they understand the difference between Save and Save & Submit; is the tabbed display on the Image Request form (one for "digitization" and one for "access") clear.

  4. Review the image request. Determine what percentage of the Image Request is complete.

    What we tested: how will users try to view an existing Image Request; will they notice the "Percent Complete" box, or will they try to figure out the answer from what is displayed in the body of the Image Request form. (A rough sketch of this request-and-completion model follows this list.)
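The following is a minimal sketch, in Python, of the request model these tasks exercise: the distinction between access-only and digitization requests, and the "Percent Complete" figure from task 4. All class and field names are our own hypothetical illustrations; only the image classes and the scenario's two artworks come from the prototype.

```python
# Hypothetical model of an Image Request and its percent-complete figure.
from dataclasses import dataclass, field

@dataclass
class RequestItem:
    artwork_title: str
    image_class: str          # 'master', 'source', 'browse', or 'research'
    needs_digitization: bool  # False = access to an existing image only
    completed: bool = False

@dataclass
class ImageRequest:
    request_id: int
    items: list = field(default_factory=list)

    def percent_complete(self) -> int:
        # The "Percent Complete" box from task 4, derived from item status.
        if not self.items:
            return 0
        done = sum(1 for item in self.items if item.completed)
        return round(100 * done / len(self.items))

# Scenario 1 as data: access to an existing master of Still Life with
# Roses and Arrow, plus digitization of (and access to) a source file
# for Fire.
request = ImageRequest(request_id=1, items=[
    RequestItem("Still Life with Roses and Arrow", "master", needs_digitization=False),
    RequestItem("Fire", "source", needs_digitization=True),
])
request.items[0].completed = True
print(request.percent_complete())  # 50
```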


Task Scenario 2: You are a Digital Imaging specialist at SFMOMA and you spend virtually 100% of your time creating, cataloging, and managing digital images. You process a high volume of images and like to work quickly.

Robin in Interactive Educational Technologies needs Source Class images of a number of Richard Diebenkorn prints for a project called California Arts Revisited. She has requested that these images be created and made accessible to her for the project. The Image Request number is 123. This scenario included the following tasks:

  1. Review Image Request No. 123 and answer the following questions: what artwork do you need to digitize, what class image files are being requested, and what is the due date for the job?

    What we tested: how will users try to find Image Requests that they did not create themselves; is enough information presented for the image creator to be able to begin the digitization job.

  2. Assume that you have created the digital images for Image Request 123; your task now is to catalog them. (This task had a number of sub-tasks.)

    2.1 Begin the image catalog process

    What we tested: does "catalog" make sense; what path will the user follow to begin this process: searching for images, searching by Image Request number, or one of the menu items?

    2.2 Indicate that you created each master image and each derivative using the same devices and the same settings

    What we tested: (once the catalog process begins, users step through a series of three screens requesting information) is it clear that step 2.2 will result in applying the cataloging information requested in subsequent questions to all the images on Image Request 123; does the process of selecting from option lists work in this context.

    2.3 Indicate that you created master, source, and all browse class images, but not research class images for each object

    What we tested: does the use of Image Class categories make sense; does the use of check boxes make sense.

    2.4 Review the bit depth, pixel dimension, color correction, compression, and whether or not cropping was done. Edit the values for bit depth for the Master Class image.

    What we tested: are all the options on this screen clear; specifically, do users understand that they can either accept default settings, select pre-saved settings from a list, or edit the displayed settings; do the metadata terms make sense; are there other fields that should be included; do users wonder how to notify the person who requested the images that they are ready? (A rough sketch of this defaults-and-overrides model follows this list.)
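As a sketch of the settings choice tested in step 2.4, the code below models the three paths: accept the defaults, pick a pre-saved settings profile, or edit individual values. All names and values here are illustrative assumptions, not the prototype's actual data.

```python
# Hypothetical defaults-and-overrides model for technical metadata entry.
DEFAULT_SETTINGS = {
    "bit_depth": 24,
    "pixel_dimensions": "3000x2400",
    "color_correction": "none",
    "compression": "none",
    "cropping": False,
}

# A previously saved profile a high-volume image creator might reuse.
SAVED_SETTINGS = {
    "4x5 scan, archival": {**DEFAULT_SETTINGS, "bit_depth": 48},
}

def settings_for(image_class, profile=None, edits=None):
    """Start from the defaults, optionally apply a saved profile, then edits."""
    settings = dict(SAVED_SETTINGS[profile]) if profile else dict(DEFAULT_SETTINGS)
    settings.update(edits or {})
    return {"image_class": image_class, **settings}

# Step 2.4: the same settings apply to every image on the request, except
# that the bit depth is edited for the master class image.
catalog = [
    settings_for("master", edits={"bit_depth": 48}),
    settings_for("source"),
    settings_for("browse"),
]
```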


4.4 Test Measures

This section represents a summary and generalization of what we hoped to measure through the low-fidelity prototype testing.

What we looked for | Why
Degree to which the interface appeared to support user goals | To understand whether the basic ideas underlying the system design are sound
Effectiveness of the interaction design for the limited tasks defined in the scenarios | To solidify the basic interaction design before creating the interactive prototype
Attempts to use unsupported paths to complete tasks | To evaluate them for possible inclusion in the interactive prototype
Dead ends (points in the interface at which users didn't know how to proceed) | To eliminate them from the design before the interactive prototype
Articulation of missing features | To evaluate for possible inclusion in the interactive prototype
How well users understood the mapping of controls (buttons, menu items, hyperlinks) to intended actions | To understand whether the control vocabulary makes sense


4.5 Results (Observations)

General

Users seemed to ignore the options on the navigation bar, except for Home. This may reflect the fact that the tasks in both scenarios didn't require use of the navigation bar. Two users did question some of the navigation bar options, wondering whether users who didn't create digital images should see Update Object & Image Records, Manage Image Requests, and Imaging Guidelines. One user asked about the search format, wondering if there would really be only one box, and suggested that providing more explicit fields to search on would be a good idea.

Scenario 1 (3 users)

All users logged in without comment. One called the home page a "nice one." The users were unsure what would be displayed in the Digital Asset News area, and also didn't show much interest in the New Media area of the screen, which was meant to display recently created media.

One user immediately clicked on the search box and entered the artist's last name. Another user hesitated on the home page, asking "how do I get an object?" He finally clicked on the search box, but seemed unsure of what kind of terms he could enter into the search field, finally settling on the artwork title indicated in task 1.

The third user was confused about how to begin, saying later that he didn't notice the search box. When asked what he'd like to be able to do, he said "it seems like I should be able to create a new request. I've got several choices here, but it's not clear which one to pick." He decided to try Manage Image Requests, which we had not implemented. When we asked him what he'd expect to see after clicking on Manage Image Requests, he said that since he had to log in, he'd like to see only those things that "I'd requested, done, or managed." He did notice the Active Image Request area on the Home Page and suggested that we add a Create New Image Request option there.

One user wasn't sure how to interpret the Search results, which for his search displayed one record, as a thumbnail, with minimal descriptive metadata. He said "I'm not sure of the quality of this image, should I assume it's high quality? I want more information, but don't know how to get it." It wasn't immediately obvious to him that clicking on the image would lead to more information. We found out later that in the Collection Management system used at the museum, clicking on a thumbnail loads a larger version of that image, not additional information about it.

Another user, at the same search results page, selected the thumbnail and clicked on Manage Image Requests. When that didn't work, he clicked on the thumbnail, because "that's what I think I'm supposed to do, not because I have any expectation about what I'm going to get."

In our prototype, clicking on the thumbnail loads the Object Record screen for that artwork. One user found this screen pleasing: "Nice screen, I have choices and can see other images." All three were confused by the range of options for Requesting Image Access and Digitization under each image. One ignored all of these options and clicked directly on Add to Image Request, which prompted an error message. Two users said they thought that seeing this screen meant that they already had access to the master image. One user was confused by the image class terminology, particularly "Source" and "Research." She also wondered what the difference was between "Source" and "Master." Note that the other two test users had been involved in prior discussions about terminology for image classes.

Two users went directly to the Image Request form from the Object Record screen without adding the second image, Fire. Then they were both unsure about how to back up and add the image. One clicked on Home on the navigation bar and then said that maybe she should have tried the Search button on the navigation bar instead. The other went back to Home and then used the search box there, apparently never noticing the search option on the navigation bar.

After adding information in the Image Request form, one user asked "how do I know the system has acknowledged my actions?" He asked the same question again later in the session. Another user typed the required information into the top of the form and clicked on Save, but wondered what he was saving. Two of the users wondered why they only saw one image request on the form, not noticing that there were tabs for New Digitization and Image Access requests. There was general confusion around the distinction between requesting digitization on the one hand and access only on the other.

One of the users also felt that there was too much information displayed about the steps the image creator went through to create each image: "I just want to request an image and know if it is ready or not." But two of the users suggested additional information for this form: the SFMOMA accession number, when there is one; whether or not there are analog surrogates, for instance 4x5 transparencies; and the location of the artwork itself. One tester said that the museum was currently designing a paper image request form that we may be able to use as a model.

One user repeatedly typed the Image Request number into the search box in order to return to the Image Request form, rather than clicking on its link in the Open Image Request area of the home page, even though this resulted in an extra step. Clearly he had a hard time thinking of underlined black and white text as a hyperlink. This is most likely an artifact of the paper prototype.

It was difficult for two users to determine the percentage of completion of the Image Request. The image completion indicator was too small and positioned too far from the list of images that had been requested.

Scenario 2 (2 users)

Reviewing the Image Request form from the image creator perspective, one user thought there should be a note field on the Image Request form, to indicate, for instance, that "someone else wants an image of the same piece."

Both testers took some time to decide how to begin cataloging the images. One user tried to click on the Update Object and Image Records button before noticing the Catalog Images button.

First catalog images screen: one user said "it's nudging me to do my job correctly." Confirming the list of objects being cataloged seemed unnecessary, since it had just been displayed.

Second catalog images screen: one of the users wished that he could see the list of files again before indicating what derivatives had been created; otherwise there were no comments on this screen.

Third catalog images screen: both users were unsure about the source of the technical metadata settings describing how the images were created. The concept of saving custom settings was confusing. The buttons Finish and Save Settings sounded too similar to one user; he also thought that if Finish was the last step in the cataloging process, it should be the rightmost button in the list. Being able to edit the default settings was appreciated, but one user wanted to be able to click in the table and make changes without having to click Edit Displayed Settings first. He also thought that if users had to switch to an edit mode, the switch should be clearly indicated by a change in the interface. The button Catalog Images was also confusing, since it could be interpreted to mean images used in a catalog.

The one user who is most actively involved in creating digital images in his current job said that he'd want more flexibility in how and when he recorded information about each step in the image creation process; he could see often wanting to document the process along the way, rather than waiting until all steps are completed.

4.6 Discussion (what we learned and what we couldn't learn)

The low-fidelity prototype was our first attempt to visualize the SFMOMA Digital Asset Management system in terms of a user interface. Given this, we expected usability testing to uncover a large number of problems and to result in an equally large number of suggestions from our usability testers. Such was the case.

Based on this feedback we plan to make the following changes. First, there need to be two separate interfaces: one for users who primarily need to create and manage digital images, and one for users who primarily want to find and request use of digital images.

The options available on the navigation bar need to be completely redesigned to address the needs of both user groups. Other terminology confusion also needs to be addressed, for instance the use of "Catalog Images" to mean adding technical metadata to newly created images.

It's also clear that both types of users need to be able to quickly determine whether or not digital surrogates exist for a given set of objects. Accordingly, we need to present that information as soon as possible. In the low-fi prototype a user first has to do a search, then scan the search results, which are displayed as thumbnails, then click on a thumbnail to see a list of digital surrogates for that object. Clearly, this is too many steps.

The search interface needs to be greatly expanded to provide visual feedback to the users on the wide range of search choices available to them. For instance, image creators want to search on their names to bring up all the Image Requests that they have worked on.
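As an illustration of both points, the sketch below (reusing the hypothetical schema from the introduction) shows a single fielded query that returns, for each matching object, the image classes that have already been digitized, so surrogate availability is visible on the first results page. The field names are assumptions; a search of Image Requests by creator name would follow the same pattern.

```python
# A sketch of a fielded search that reports surrogate availability in one
# step, against the hypothetical schema sketched earlier.
import sqlite3

def search_objects(conn, artist=None, title=None):
    """Return matching objects plus the image classes already digitized."""
    clauses, params = [], []
    if artist:
        clauses.append("o.artist LIKE ?")
        params.append(f"%{artist}%")
    if title:
        clauses.append("o.title LIKE ?")
        params.append(f"%{title}%")
    where = " AND ".join(clauses) or "1=1"
    rows = conn.execute(f"""
        SELECT o.object_id, o.artist, o.title,
               GROUP_CONCAT(i.image_class) AS existing_classes
        FROM art_object o
        LEFT JOIN digital_image i ON i.object_id = o.object_id
        WHERE {where}
        GROUP BY o.object_id
    """, params).fetchall()
    # existing_classes is None when no digital surrogates exist yet, so the
    # results page can offer "request digitization" for that object directly.
    return rows
```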

We need to make it much easier for users to request digital images and not confuse them with the distinction between requesting access to, and requesting digitization of, digital images.

The Image Request form also needs to be radically simplified for these users. We need to make sure that it clearly asks for only a minimum amount of information about their request. The Image Request form should also be redesigned for the image creators. They need to see significantly more information than those users requesting images. In some cases we'll need to provide fields not present in the low-fi prototype, such as whether or not an analog surrogate exists, and the location of the art objects that need to be photographed. The use of tabs on this page needs to be rethought.
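As a sketch of this split, the lists below name the minimal fields a requester might fill in versus the fuller set an image creator would see. All field names are hypothetical, drawn from the testers' suggestions above.

```python
# Hypothetical field lists for the two redesigned views of the Image
# Request form; the extra creator fields come from the testers' requests.
REQUESTER_FIELDS = [
    "artwork",             # what to access or digitize
    "image_class",         # master, source, browse, or research
    "needs_digitization",  # new digitization vs. access only
    "due_date",
]

CREATOR_FIELDS = REQUESTER_FIELDS + [
    "accession_no",       # SFMOMA accession number, when there is one
    "analog_surrogates",  # e.g. an existing 4x5 transparency to scan from
    "artwork_location",   # where the object to be photographed is kept
    "notes",              # e.g. "someone else wants an image of the same piece"
]
```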

We'll also need to rethink the process of adding technical metadata about newly created images. It may be possible to integrate this process with the redesigned Image Request form for image creators. Regardless of where image creators enter the technical metadata, they should be able to do so with as much flexibility as possible. The use and source of default technical metadata values also needs to be clarified.

Testing the low-fi prototype, while quite valuable, couldn't help us answer a number of important questions. How learnable is the Digital Asset Management system? Will the design support daily users as well as occasional users? How important will the speed of searches be to user satisfaction? What can we expect the search performance to be, given that the number of images may quickly exceed 20,000?

In conclusion, while it's clear that many of the details of implementation need to be improved, it's also clear that the low-fi prototype supports, however crudely, the basic goals of the users. General users want to be able to determine what digital image surrogates exist, how to request the use (and creation) of digital image surrogates, and how to track the progress of those requests. Digital image surrogate creators want to quickly see who's requesting what images, indicate when they have created the requested images, and then easily document their creation by adding a minimum, but sufficient, amount of technical metadata to the image records.

