California Digital Library
[ Home | Introduction | Prototype | Method | Test Measures | Results | Discussion | Formal Design | Appendices ]
RESULTS
On the positive side, people used and returned to our new front-end interface pages far more in this experiment than in our lo-fi prototype testing, rather than spending most of their time in the existing CDL pages. People also liked the look of the home page, and found that the source lists gave them a concrete sense of the size and complexity of the CDL resources, a goal we had hoped to achieve.
However, several sources of confusion still exist in our redesigned interface. Some problems we inherited from the complexity and structure of the CDL resources; they represent our failure to find a means of simplifying and rendering transparent the source selection and search problems that confront CDL users. Another set of difficulties results from the limits of our ability to hook into or modify existing CDL functionality to match our conceptions for the interface, as in our inability to combine topic category and keyword search on the Select a Source page. A remaining group of problems, however, appears to be new difficulties we introduced in our effort to present the user with more information about the nature and scope of the resources CDL offers. The concrete sense of size and complexity mentioned above proved more overwhelming than helpful when we listed each and every source in Browse by Source Type. The list jumps as it expands, which jars users, and it is not evident to them that the branch has simply expanded into this enormous list. Perhaps a more graphical solution is in order here.
Most users ignored the text in the home page's central table that described the types of sources on the site. We attribute this to the fact that users usually scan a page vertically, while our layout relied on a horizontal scan.
Our users spent significant time browsing among specific electronic journals even when this was far from the best path to their goals, although to some degree this was due to a misunderstanding of the scenario: user 3 believed that the request to identify "sources" in scenario 3 excluded the Melvyl-affiliated databases, when our intent had been that those databases be included among "sources"; user 2 showed a similar, though less consequential for the test results, reluctance to identify Melvyl as a "source". This confusion led to a wide variation in participants' completion times for scenario 3.
Sources of confusion:
- all 3 users found the duplicate menus on both sides of the front page confusing
- each also cited the overwhelming length of the source lists
- the "jumping" source tree expansion and contraction confused 2 of our users
- one user cited the lack of a clear indication of which sources are full library catalogs and which are more limited databases
Users were also confused by CDL's choice of the names "Electronic Journals" and "Journal Article and Other Databases". The distinction between the two is not clear, so users kept returning to Electronic Journals to try to search rather than trying to find the relevant Searchable Database. Consequently, an average of 2 minutes was spent scanning lists of irrelevant resources.
The only truly quantifiable result we measured in this pilot test was the speed of the searches. While we wanted to recognize and assess the number of steps a user takes down a wrong path before recognizing the error and retreating, this was not always possible: users tended to start down a path and retreat more quickly than our notetakers could capture. A stopwatch and a videotape of each session might have yielded a better analysis of this variable, but, as noted before, we did not feel we had sufficient time to analyze such media.
See the Discussion for more on these and other results. The raw data can be found in Appendix B.