Steve Lam

SIMS 213

Project Task Analysis

Problem and Solution Overview (Synopsis)

A facility that must update its anti-Spam filters needs to be highly efficient in terms of time and resources. Since an attack may occur at any moment and last an arbitrary duration, it is essential for the users to be organized and well prepared to update the anti-Spam filters quickly. To achieve this organization and efficiency, an interface must be developed that conforms to the organizational schemes of the user. In practice, this means placing the anti-Spam tools and Spam viewers within an environment easily accessible to the user, in this case the Spam Control Engineer.

The general proposed solutions for the new interface are the following:

Problem and Solution Overview (Extended)

The observed complexity of this issue is that the placement and behavior of the tools conflict with the user's own judgments.

Since multi-tasking and organization must be achieved at an optimum level here, the location and perceptible properties of the tools must appeal to the Engineer's own sense of the work. It has been observed that after the Spam-details page is retrieved, it replaces the previously displayed message body. This is a fault: a number of users choose to view the Spam-details page first, and if the judgment is "fuzzy" as to whether the message is unsolicited mail, the user then opens the message body and compares the two side by side. In addition, the storing and testing of a rule reside within the same viewer, so if the user needs to look at another piece of Spam and write and store a rule for it, he cannot; the window is still occupied, time is wasted, and updating is delayed. There is also a need to view rules that are "pending review" simultaneously with the Spam-Alert page. Lacking this feature, a user may waste valuable time writing against a piece of Spam that has already been analyzed by a co-worker.

Finally, the overall organization conflicts with the users' perception of how the system works. Visually, the system lacks the icons and pictorial representations that are necessary in this case. The user must have a model of the layout of the system, and the way to achieve this is through a visual representation. Users must know which components interact with which others, so that during a troubleshooting drill the problems can be attacked much more efficiently. They must see where the database sits relative to the Spam-wall, how the Spam-Alert page brings the Spam to the user, what layers a committed rule must pass through before it is active, and other such relationships.

Sometimes the user is "surprised" when an undesired action is performed. This "surprise" is the result of the interface not conforming to the user's expectations. The actions and functions of the system are just as important as the overall organization: not only must the system conform to the users' mental model, its actions must also be predictable. Each message should open within its own viewing area. If a rule is being tested, that process should not be interrupted by another action unless the user explicitly requests it. And for actions that are not hard-and-fast (where the system is unsure), the system should question or confirm the user's intent before proceeding.

Task Analysis

Target Users and Tasks

The analysis involved interviewing three potential users. An attempt was made to sample a broad range of the participants who must interface with the system. They included a novice (someone new to the company and still learning how to use the tools); an intermediate (someone who has used the tools for about two to three months and has relatively good insight into what the system performs in the background); and an advanced-to-expert user (one who has had over five months of experience with the interface and has also developed an anti-Spamming tool).

These users all perform the same tasks. They must view E-mail caught by certain probes and judge, from relevant criteria, whether it is Spam. Upon viewing the material, they may need the aid of other tools to make that determination. If it is Spam and no previous rule exists to block it, the user will compose a rule, test it, and store it until it can be reviewed by another user for propagation.
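The rule lifecycle described above (view, judge, compose, test, store for review) can be sketched in code. Every name here (Rule, handle_message, the pattern logic) is a hypothetical illustration, not part of the facility's actual toolset:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    pattern: str
    status: str = "draft"  # draft -> pending_review -> committed

def handle_message(message: str,
                   looks_like_spam: Callable[[str], bool]) -> Optional[Rule]:
    """Judge a captured message; if it is Spam, draft and test a rule."""
    if not looks_like_spam(message):
        return None                          # not Spam: nothing to block
    rule = Rule(pattern=message.split()[0])  # stand-in for rule composition
    if rule.pattern in message:              # stand-in for the real test step
        rule.status = "pending_review"       # held until a co-worker reviews it
    return rule

rule = handle_message("VIAGRA cheap now", looks_like_spam=lambda m: "VIAGRA" in m)
```

The point the task analysis makes is the final state: a rule is never propagated by its author alone, but held pending review by a second user.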

Interview Questions and Results of Interviews

The following were the questions presented to the participants:

Of the questions above, numbers 3, 4, 6, 8, 10, and 11 gave the most insight into the problems of the current interface. The answers were very consistent among the different levels of users. They were the following:

Three Scenarios of Example Task Sequences

Scenario One: The user logs in and the tools are shown to him. He initiates the Alert page, and an archive of captured E-mail is revealed. After evaluating the subject lines, the user clicks on a hypertext link, which opens a new window revealing the message body. The message body shows that it is Spam. He reactivates the Alert window and clicks on the hypertext link to the Spam-details page, which opens within the same window as the message body. The user categorizes the Spam, composes a rule, tests the rule, and stores it for review (all of these functions appear within the same window, each one linking to the next). He then waits for the system to finish processing the rule and continues from the Alert page again.

Scenario Two: The user logs in, a menu appears, and he initiates the Alert page. He clicks on the hypertext link for the message body, and it appears in a new window. It is hard for the user to judge whether the message is Spam, so he goes back to the Alert page and clicks on the link for the Spam-details page. The Spam-details page appears within the same window as the message body. This new page still does not give good insight into whether the message is Spam, so the user goes back to the window with the menu and initiates a trace-route. The trace-route tool opens in the same window as the menu and reveals to the user that it is Spam. He returns to the window already open with the Spam-details page so he can write a rule. The user categorizes it, composes a rule, tests the rule, and stores it for review.

Scenario Three: After the user logs in, the menu is brought up within the same window. A co-worker notifies him of rules waiting to be reviewed. He clicks on the Review Pending Rules link, and the rules that need approval are revealed within the same page. He checkmarks the rules he wants to commit and waits about two minutes for the system to process them. The rules are now ready for propagation. He clicks on the propagation button, and the system begins sending the rules to the filter. As is evident, two minutes were lost waiting for the rules to be committed. During that time, the user could have been examining other messages that might have been Spam.

Suggested Solution

Functionality

The improved interface would allow the user to view the whole system as a graphical representation analogous to the user's own mental model. The tools would then be categorized based on this new representation, making them much more accessible in locations that match the user's perception. Tools that fall into one category will all appear in one window, within separate frames (compartmentalized). Tools that perform specific tasks will appear within their own windows, as will tools that perform arduous and intensive tasks; this way, while a background operation is being performed, the user can continue his work without interrupting the system. Also, when necessary, the system will give feedback to the user. These feedback cues will take the form of notifying the user if the database is down, asking the user whether he wants to continue with an arduous task, and presenting the user with a representation of the background operations (e.g., showing that a rule is being sent, showing that a rule is being tested, showing how much is actually being filtered, etc.).

User Interface

After logging into the system, the user will be presented with a graphical map of the overall system. Each part of the system will carry a labeled list of the tools that fit within that category, or a picture that represents it. The basic parts, or icons, of the representation will be the following:

Within the same window, the Alert page will also be present. From here the user can view the message or the Spam-details page, each within its own window. If the user chooses to write a rule, he can do so from the Spam-details page. After composing the rule, and while it is being tested, the user can view another message by selecting one from the Alert page; doing so will not interrupt the testing process. If there are rules pending review, the pictorial representation will flash the Rule Updates icon. Selecting the flashing icon opens a new window presenting the unpropagated rules to the user. The user may choose to commit them, and while they are committing he can continue with his other duties.

Scenario One (Improved): After logging in, the user can choose whichever he wants to look at first: the message, the Spam-details page, or both. He can compose the rule within the Spam-details page, and while it is being tested he can continue to look at another message.

Scenario Two (Improved): After logging in, the user opens both the message window and the Spam-details window because he is not sure whether the message is Spam. This analysis still does not yield a decision, so he goes back to the system map and selects the trace-route tool located under BLOC. The tool tells the user that the message is Spam. All he has to do is maximize the Spam-details window, compose the rule, and test it.

Scenario Three (Improved): Upon logging in, the Rules Pending Review icon flashes. He selects it, and a new window with the rules is opened. He checkmarks the rules and commits them. After doing so, he can continue with his duties without interrupting the system's committing.

Experiment Outline

Informally Testing the Rough Prototype

Testing the prototype at this stage will consist of presenting the user with a hand-drawn representation of the interface. Ask the users whether the picture fits their mental model of the system. After doing so, check whether the tools are categorized according to the user's viewpoint. Also test whether the feedback intuitively conveys the system's background processes. Finally, when a tool is selected, does the outcome surprise the user (e.g., do they expect it to open within a new window; when an operation or process is being performed, do they expect it to interrupt that operation or not)?

Formal Study

Independent Variables: The tools will be placed within different categories in order to find the arrangement that yields the fastest tool-selection times. Different pictures will be used to represent the system, to see whether the user finds one more appealing than another.

Dependent Variables: I will be measuring how much faster or slower users find the needed tools and browsers. I will also be measuring the validity of their explanations (their mental representations of real-world processes in troubleshooting scenarios) to see whether the new representations are more intuitive or better conveyed.

Participants: This experiment will primarily involve the operations staff, its supervisors, and the tool implementers. These subjects have direct, everyday encounters with this interface.

Method: I shall use a within-groups design to approach this problem. Every operations staff member will have the same tasks and conditions. This method seems valid because learning plays a large role in the usage of this system. Participants will be asked to find a tool and perform an operation as quickly as possible, or will be given a scenario in which something has gone wrong and asked to convey a technically valid explanation to a supervisor, which will help test the intuitiveness of the interface.
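As a sketch of how the within-groups timings might be reduced to a single comparison, here is a minimal calculation. The participant labels and timings are invented placeholders, not experimental data:

```python
from statistics import mean

# Each participant performs the same tool-finding task on both interfaces;
# we compare per-participant completion times (seconds). The numbers below
# are invented placeholders used only to illustrate the arithmetic.
times_old = {"p1": 42.0, "p2": 55.0, "p3": 48.0}
times_new = {"p1": 30.0, "p2": 41.0, "p3": 35.0}

def mean_improvement(old: dict, new: dict) -> float:
    """Average per-participant speed-up in seconds (positive = faster)."""
    return mean(old[p] - new[p] for p in old)

print(mean_improvement(times_old, times_new))
```

Because each person serves as their own control, per-participant differences factor out individual skill, which matters here since learning effects are large.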

Results: I predict that efficiency will be noticeably improved in finding a needed tool. But the most dramatic result may come from the more informative representations of current processes shown to the user. Users may feel much more confident and develop a better cognitive understanding of what the system is undergoing, rather than seeing it as a mystery that only the system administrator would understand.