Assignment Nine: Link to our Final Prototype
Problem statement
Scheduling meeting times among many users is a tedious, inefficient process requiring numerous emails or phone calls. When one user has a conflict, the entire discussion must be restarted, requiring each user to again assert his or her availabilities and conflicts. Among large groups of users, the number of iterations required can become unwieldy and irritating. Adding to the frustration, some invited attendees forget or neglect to respond to meeting requests in a timely manner. Additionally, every semester professors must revisit the task of determining office hours while trying to accommodate personal and student schedules. Existing calendaring programs, including Outlook, handheld PCs, and others, pose a further problem: SIMS has not standardized, and is unlikely to standardize, on a common calendaring platform such as Outlook, nor is Berkeley likely to implement a centralized scheduling system. Moreover, inputting data into scheduling programs is often time-consuming and complicated by an excess of options. Having to re-enter email information for people participating in a meeting makes data tracking even more tedious. Meetings are sometimes scheduled during holidays or final exams because they are coordinated so far in advance. And when possible meeting times are displayed, the results tend not to be descriptive enough. Key information is missing, such as people's names, the flexibility of time slots, and identification of people without whom a meeting cannot occur. Providing a streamlined web-based tool to automate the scheduling process would reduce the effort and time required to organize a meeting, while providing an informative, easy-to-read display accessible to everyone in the SIMS community.

Solution Overview
VERN is a web-based meeting scheduling tool that automates and streamlines the time- and labor-intensive process of agreeing on meeting times. The objective of VERN is to reduce the effort and time required to organize a meeting while providing an informative, easy-to-read display accessible to everyone in the SIMS community. Our primary goal is to develop a tool simple, intuitive, and efficient enough that both non-technical users and non-calendar users are compelled to use the system. The challenge in reaching this goal is to determine how users currently go through the process of scheduling, to improve on the features offered by existing meeting scheduling systems, and to determine how the process itself can be improved. The core of the VERN solution is an intuitive graphical drag-and-drop display where the user can quickly select and manage meeting times with the mouse. This core is supported by a well-designed site offering additional functionality and features.

Personas and Scenarios
Personas described in Assignment #2
Scenarios described in Assignment #3

Final Interface Design
Description of Functionality
Our third interactive prototype provides the following functionalities:
Main interaction flow
INITIATOR
ATTENDEE
What was left unimplemented?
We were able to incorporate most of the functionalities on our specification "wish list". Some items we would have liked to implement were left out because we did not have enough time to code them. The items not included in this prototype are:
Tools used to develop the system
The VERN system was developed with a wide range of technical and user-interface-specific tools. The graphics were created in Photoshop and Illustrator and positioned with a variety of tools, including DreamWeaver and direct text editing. The site was then made dynamic using PHP tied into a mySQL back end. As development progressed, the team isolated the interaction with the database into a single PHP file to minimize the effort of changing the UI; a hypothetical sketch of this pattern appears below. Of these tools, the PHP and mySQL environment was provided by SIMS IT. The mySQL installation was an older version, which led to minor hiccups but no significant development hurdles. Some members of the team chose to develop locally using the XAMPP Apache bundle (www.apachefriends.org/en/xampp.html), a ready-to-run Windows configuration of Apache, PHP, and mySQL. DBDesigner4 was used to design the underlying data model. The applet interface was developed and debugged using Eclipse. Throughout the site, JavaScript was used to enhance the UI and to tie the voting applet into the database back end.
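The following is a minimal sketch of what that consolidated database file might have looked like. The file name, function names, table and column names, and credentials are all hypothetical, invented here for illustration; the sketch assumes the classic mysql_* API that shipped with the PHP 4/5 environments of the time.

<?php
// db.php -- hypothetical consolidated database layer for VERN.
// All SQL lives in this one file; the rest of the site calls
// these functions, so UI pages never embed queries directly.

function vern_db_connect() {
    // Placeholder credentials, not the real SIMS IT values.
    $link = mysql_connect('localhost', 'vern', 'secret');
    if (!$link) {
        die('Could not connect: ' . mysql_error());
    }
    mysql_select_db('vern', $link);
    return $link;
}

// Return all meetings a given user has been invited to.
function get_meetings_for_user($link, $user_id) {
    $user_id = mysql_real_escape_string($user_id, $link);
    $result  = mysql_query(
        "SELECT m.meeting_id, m.title, m.status
         FROM meetings m
         INNER JOIN invitations i ON m.meeting_id = i.meeting_id
         WHERE i.user_id = '$user_id'", $link);

    $meetings = array();
    while ($row = mysql_fetch_assoc($result)) {
        $meetings[] = $row;
    }
    return $meetings;
}
?>

With this arrangement, a redesign of a UI page changes only markup; the queries themselves stay in one place.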
Tools used for prototyping and implementing the UI
Other tools used in the development of the prototype met with varying levels of success. Our paper prototype worked wonderfully for our purposes: letting users scribble directly on the paper when marking up times confirmed that users have an intuitive feel for marking out blocks of time on a calendar. However, the paper prototype did make it harder to spot "gaps" in our interaction flow, something we discovered during initial user testing when people clicked somewhere unexpected and we were missing a confirmation screen. We also attempted to use DENIM; while its multi-level view of the site would have been useful, it did not run fast enough on the computer being used to be usable. It also appears geared toward a fully tablet-based workflow, which our designers were not accustomed to.

Pros and cons of these tools for your project
Enabling Dynamic Content: PHP/mySQL/JavaScript
PHP proved to be an excellent cross-platform environment and was very usable by our team members. Its similarity to Java was beneficial, as was the simplicity of writing loops and switch statements. We successfully consolidated the mySQL access into a single file, leaving a series of API calls for the rest of the site to use. There were difficulties with consistency in the code, but once our team started using PHP's define() construct to standardize references to information, we were able to significantly cut down on bugs and miscommunications; a sketch of this convention follows.
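The specific constant names and values below are hypothetical reconstructions, but they illustrate the define()-based convention described above: shared vocabulary is declared once and included everywhere, so the whole site speaks the same language.

<?php
// constants.php -- hypothetical shared definitions, included by
// every page so the whole site uses a single vocabulary.

define('VOTE_NO',    0);   // attendee cannot make this time
define('VOTE_MAYBE', 1);   // attendee could make it if necessary
define('VOTE_YES',   2);   // attendee can definitely make it

define('STATUS_OPEN',      'open');      // meeting still collecting votes
define('STATUS_CONFIRMED', 'confirmed'); // meeting time finalized
?>

A page then includes the file rather than repeating literal values:

<?php
require_once 'constants.php';

// A misspelled constant produces a visible PHP notice instead of
// a silent comparison against the wrong literal value.
if ($vote == VOTE_MAYBE) {
    echo 'This attendee is flexible for this time slot.';
}
?>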
Design Evolution
Changes from initial sketches, low-fi testing, HE, and final usability test
The user interface evolved gradually from our initial sketches, through low-fi testing and heuristic evaluation, to the final usability test:

Initial Sketches
There were three major design paths that we wanted to explore. They share many things in common:

Low-Fi Version
Our team built a "Lego-style" low-fi prototype for testing. Navigation and interaction elements were constructed out of colored paper and assembled live in front of the user. Separating out the design components in this way gave us increased flexibility in testing different navigation, naming, and interaction possibilities.
Heuristic Evaluation
Final Usability
Feedback from the Heuristic Evaluation was incorporated into the final usability iteration of VERN:
Log-In
Landing Page/Meetings Calendar
Propose a Meeting
Voting on Meeting Time Preferences
Voting Confirmation Page
Meeting History Page
Contacts Page

Major Changes and Why They Were Made
The relative values of Lo-Fi Prototyping, Heuristic Evaluation and Pilot Usability Testing
Comparing the three evaluation techniques can be metaphorically related to Goldilocks' experience in the home of the three bears:
What do we mean by "too soft" and "too hard"? We don't mean that the techniques produce results which take more or less effort (Heuristic Evaluation generated a lot of work for our team). In fact, all three approaches were valuable, providing feedback at different phases of the UI design, but HE seemed to occupy the sweet spot. By discussing our experiences with the different approaches, the meaning of "too soft" and "too hard" should become clearer.
Low Fidelity Prototyping
Our Lo Fi prototype was an extremely valuable way of testing out different UI design options. The flexibility of the paper-based approach allowed us to try out many options with very little effort. At the same time, because the user interface was such "low fidelity", we were only able to evaluate user interface issues in a coarse and fairly qualitative manner. For example, the quantitative measures of timing, error rates, and related metrics that Pilot Usability addresses were entirely impossible to collect using paper cutouts and a Wizard of Oz interface simulation. Compared to usability testing, Lo Fi is very soft (in the sense of soft == qualitative). Heuristic Evaluation is also highly qualitative, but compared to Lo Fi, the feedback from the HE was far more complete and concrete, in large measure because the evaluators tested against a more fully realized interface. Because the Lo Fi interface is very coarse, it is only reasonable to expect coarse feedback. As an example, from Lo Fi we discovered that users preferred the click-and-drag interface for selecting times over a menu-popup-based approach. However, from HE we discovered details about users' expectations of how click and drag operates, the importance of color selection, and the broader navigational issues that arise when there is a "live" interface responding in real time to a user's actions. The level of detail possible when an actual interface is being evaluated cannot be reasonably simulated with paper cutouts and a human UI puppetmaster. It could also be argued that many, if not all, of the insights derived from Lo Fi testing could have been achieved with HE as well. The big selling point of Lo Fi is the low cost of examining these different options. Lo Fi wins in terms of cost/benefit, but in terms of absolute benefits, it cannot compare to HE.
Pilot Usability Study
The pilot usability study was in many respects similar to heuristic evaluation because it was informal and, as a consequence, very qualitative. Our hypothetical formal experiment would have been far more quantitative and rigorous and would have involved many more metrics. Despite its informality, the usability study produced an amazing amount of useful feedback, but most of it was qualitative feedback on the interface. The formal metrics we measured did not give us the kind of insight into the interface that the open-ended discussions provided. As we discussed in the previous assignment, the Pilot Usability Study was effectively an informal HE, and the most valuable insights we derived from the exercise were similar to the results of a heuristic evaluation. A formal usability study has the potential for more rigorous results, but rigorous answers almost always require very specific and narrowly constrained questions. To perform a proper usability study, you need to set up the experiment carefully, conduct it properly, and then invest significant time processing the results. It is at the opposite end of the "cost" spectrum from Lo Fi prototyping. Given the expense, the quantitative results of a usability study usually answer only a relatively small number of formal hypotheses. The verbalization of a user's experience is very qualitative and more free-ranging, and in our experience it yielded more valuable design feedback. This brings us back toward a Heuristic Evaluation style approach. This conclusion is based on our own experience, which is probably not typical for groups conducting usability studies in the field: our user population is extremely cognizant of user interface issues and capable of articulating very specific and insightful criticisms of the user interface. But once again, this points to the value of a heuristic evaluation approach. Usability testing seems very rigorous, but by virtue of its rigor, it takes a lot of effort and only formally addresses very specific hypotheses. Compared to the results from heuristic evaluation, it seems "too hard". This is not to say we did not gather invaluable insights from the usability test, but the best results were often the consequence of ad hoc HE, not of hypothesis testing or quantitative metrics.
Heuristic Evaluation
It should come as no surprise that having user interface experts evaluate your interface is the most efficient method of getting feedback. The course readings indicated that HE was the most cost-effective technique and generated the most feedback among the usability methods compared (Lo Fi was not among the methods tested in the paper). Our own experience supported these findings: the SIMS Alumni Network group identified several dozen UI issues, resulting in a reworking of the VERN interface that substantially improved the design. In the follow-up Pilot Usability study, we received amazingly useful feedback from one of our testers, a SIMS student who also had experience in web development. What this points to is the value of expert feedback, especially against a fairly concrete UI design. The Lo Fi user interface involved a large amount of hand-waving, making it hard to get fine-grained UI feedback. The hard, rigorous approach of usability testing is useful when expert feedback is not available, but it is expensive and narrowly focused. Heuristic evaluation works because it relies on access to high-quality information (expert feedback). Because we had ready access to such information, HE was the most effective method for us. Under other circumstances it may be more difficult to get expert feedback, but for our project heuristic evaluation was "just right" compared to the "too soft" of Lo Fi and the "too hard" of a formal usability study.

Class Presentation
Final Presentation given on Tuesday, May 3.