Encounter Bubbles


Final Class Paper by Zhanna Shamis and Sean Savage


Berkeley INFOSYS 247: Information Visualization      May 15, 2004



Background

Encounter Bubbles is the first visualization tool built on top of the Mobster sociolocative service framework. Mobster is a research and development project headed by Scott Lederer in Berkeley's Electrical Engineering and Computer Science department. This Encounter Bubbles prototype was built as a final project for Professor Marti Hearst's Information Visualization class in Berkeley's School of Information Management & Systems in Spring 2004.

Mobster is designed to be an open framework on which locative (meaning location-based) networking applications can be built. The system is built around the concept of "encounters." Each encounter consists of a pair of devices, a start time (when one device detects the other), and an end time (when the devices pass out of range of one another). An encounter can be registered when two users (represented by two mobile devices) come within range of one another, or between a user (represented by a mobile device) and a place (represented by a stationary device such as a wireless Internet access point). Mobster currently works with wi-fi (wireless Internet) devices, but it can be extended to work with Bluetooth devices, cellular telephones, and devices that use future wireless protocols.
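
For illustration, here is a minimal sketch (in ActionScript, the language we later used for the prototype) of how a single encounter record might be represented. The field names and values are our own invention, not the actual Mobster schema:

     // Hypothetical encounter record: two device IDs plus a start and end time.
     var encounter = {
         deviceA:   "00:0d:93:12:34:56",           // e.g., the MAC address of my wi-fi card
         deviceB:   "00:0d:93:ab:cd:ef",           // the other device (a person or a place)
         startTime: new Date(2004, 4, 10, 14, 5),  // when one device first detected the other
         endTime:   new Date(2004, 4, 10, 15, 30)  // when they passed out of range
     };

     // Length of the encounter, in minutes:
     var minutes = (encounter.endTime.getTime() - encounter.startTime.getTime()) / 60000;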

The Mobster client application lives on a user's mobile device or laptop computer, where it registers encounters. The client sends its encounter records to the centralized Mobster server application, which maintains a database of users, places, and encounters. On the server, each place can be "tagged" by users with a text place name. Users are recognized by unique identification numbers associated with their devices. For instance, if Mobster runs on my laptop, the Media Access Control (MAC) address of my wi-fi adapter card is recognized and recorded by the Mobster system, and by registering with Mobster via the Web I can associate my name and profile with that MAC address. Privacy mechanisms will enable users to mask some or all of the information they submit to the system from other users.
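
The bookkeeping this paragraph describes might be modeled roughly as follows; the registry structure and function name here are hypothetical sketches, not the real Mobster database interface:

     // Hypothetical in-memory registry mapping device IDs (e.g., MAC addresses)
     // to user profiles; the real Mobster server keeps this in a database.
     var users = {};
     users["00:0d:93:12:34:56"] = { name: "Sean", profileVisible: true };
     users["00:0d:93:ab:cd:ef"] = { name: null,   profileVisible: false };  // unregistered or masked

     // Look up the owner of a device; unknown or masked devices stay anonymous.
     function lookupUser(deviceId) {
         var u = users[deviceId];
         return (u != undefined && u.profileVisible) ? u : { name: null, profileVisible: false };
     }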

Encounter Bubbles uses Mobster's encounter data to produce an interactive visualization that enables users to explore their social encounters in new ways.


Early Encounter Bubbles Mockup


Goals

Our core goal is to help users invent and refine new capabilities: new uses for networked devices that can track their encounters over time.

Encounter Bubbles differs from typical commercial applications; our approach to this project does not involve tasks that users already pursue in daily life. Because the concept behind Mobster is relatively new and because the project is very exploratory, the goal of our Encounter Bubbles visualization is to allow users to explore and understand their social encounters, by offering a few minimal tasks to start and allowing people to create their own uses from there. We want to avoid an interface that appears too rigid or too strictly task-centered, because the premise behind this application is that users, not technologists, will invent the most compelling uses for locative technologies.

Historically, the inventors and early engineers of new technologies have often been wrong in their predictions of what people would use their creations for. When Thomas Edison invented the phonograph, he viewed it first and foremost as a dictation tool: he expected most of his customers to be businessmen using phonographs to record their own voices and letters. Almost immediately after phonographs hit the market, people began using them for completely different purposes, and before long it was obvious that phonographs were not really "for" recording dictation: most people used them for playing back prerecorded music.

On their own, technologists and academics are notoriously poor at predicting how new technologies will be used. It is users, with the assistance of designers and engineers, who define those uses. We think this pattern will repeat itself in the evolution of locative technologies, so for this project we departed from traditional usability design techniques by adopting a central precept: we will not presume to know which tasks will prove most important or most widely used.

From the start we assume that we can't predict what locative tech is "for," that we can't yet imagine the "killer applications" that might emerge from these services, and that we can't yet hope to know what core tasks these services will eventually support. The best we can do is to put minimal pieces into the hands of users, get out of the way, and watch as embryonic uses emerge around those pieces. Then we can iteratively develop and refine software and hardware to support and further explore those uses. This philosophy does not preclude testing the waters with features and affordances that we think users might enjoy, but it requires a focus on users' behaviors and needs, and a willingness to change initial designs based upon what we observe. We're building a launch pad.

By design the main initial task to be supported by Encounter Bubbles is a loosely-defined one: visual exploration of a user's relative proximity to other users and how this changes over time. Secondary tasks include the following: users can register with the system, post personal information that they wish to share with others, and identify other users as fellow members of groups. Users can also specify basic privacy options, including how "visible" they will be to others: "invisible," anonymous, or visible and identified. Each user can specify that different privacy levels apply at different times of day, and that they apply either to everyone or only to certain groups. Users will be able to view a visualization of other users near them over a continuous range of times; some of these other users will be clearly visible as belonging to groups to which the user also belongs. Some of these other people will be identified by name, and others will appear anonymously, according to each user's chosen privacy options.
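
To make these privacy options concrete, the sketch below shows one way a user's visibility to a particular viewer might be resolved. The rule structure, field names, and default behavior are invented for illustration only:

     // Hypothetical privacy rule: a visibility level ("invisible", "anonymous",
     // or "identified") that applies during certain hours and to a certain group.
     function visibilityFor(rule, hour, viewerGroup) {
         var inTimeRange = (hour >= rule.fromHour && hour < rule.toHour);
         var inAudience  = (rule.group == null || rule.group == viewerGroup);
         return (inTimeRange && inAudience) ? rule.level : "anonymous";
     }

     // Example: identified to "colleagues" between 9am and 6pm, anonymous otherwise.
     var rule = { level: "identified", fromHour: 9, toHour: 18, group: "colleagues" };
     trace(visibilityFor(rule, 14, "colleagues"));  // "identified"
     trace(visibilityFor(rule, 22, "colleagues"));  // "anonymous"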


Related Work

Location-aware or "locative" services are based upon networked mobile devices that "know" where they are. For more than 25 years technologies such as GPS (the satellite-based Global Positioning System) have allowed airplane pilots (and more recently, millions of civilian consumers) to know where on (or above) the Earth's surface they are. GPS only works reliably outdoors, away from large buildings and hills. Mobile devices that can tune in to GPS signals drain batteries relatively quickly, so a wide range of other methods of determining location (cellular telephone tower triangulation, wi-fi access point detection and triangulation, and so on) are being refined. But so far, no practical locative service can tell you precisely, simply and reliably where you are on the planet's surface, and do so in any environment.

For years mobile phone companies and service providers in Japan and Scandinavia (and more recently, in the US) have provided very basic locative services for users of their devices, mostly centered around direction-providing services that can be envisioned as Mapquest on the go, without the user having to specify their current location because the system already knows it. Some of these firms also provide simple “friend-finding” applications that notify users when they come into close proximity of other users whom they specify as friends. But these applications are mere extensions of already-existing applications in the desktop Internet realm.

Dreamers, developers and pundits such as Howard Rheingold and Ben Russell have predicted future uses of locative technology that go far beyond these applications, and conglomerations of independent engineers and academics in groups such as the Place Lab initiative (placelab.org) and the Locative Media Lab (http://www.locative.org/) have discussed and prototyped such next-generation locative services for years. Now Microsoft has joined the party with projects such as its World-Wide Media eXchange (http://www.wwmx.org/), which embodies many of the ideas put forth and experimented with by earlier players.

The locative-service space is still in its infancy, and the vast majority of locative work is focused on absolute location (coordinates on the Earth's surface). Tremendous unexplored potential lies within concepts of relative location, including people's proximity to one another. Two key areas of focus set Mobster and Encounter Bubbles apart from most other locative technology projects:

1. We're focused on the possibilities of representing spaces relative to people's use of those spaces, as opposed to shackling ourselves to literal maps of physical space.

2. We visualize people’s encounters and movements over time, while most of the other work so far that we’ve seen has focused on real-time use of space.

We are aware of several other projects that work within this realm and touch upon these issues:

  • Lovegety (lovegety.notlong.com), a line of small electronic toys that were popular in Japan in the late 1990s. Lovegetys are designed to be carried around by their owners throughout their day-to-day lives, and each can beep and flash when it detects another Lovegety within five meters whose owner is of the opposite sex.
  • The Familiar Stranger Project (fsp.notlong.com) by Elizabeth Goodman and Eric Paulos, carried out at Intel Research Berkeley in 2003. The project involved the design of electronic devices that notify users when they are close to groups of people whom they have been close to in the past, without providing them any detailed information about individuals within the groups.
  • Jabberwocky (jw2.notlong.com) by Eric Paulos and the Urban Atmospheres Group within Intel Research Berkeley, currently ongoing, which is an outgrowth of the Familiar Stranger Project. Jabberwocky software runs on Bluetooth-enabled mobile phones and notes when other Bluetooth phones pass within range of the user's phone. Jabberwocky keeps track of each encountered phone's identity and thereby recognizes whether or not a currently-detected phone has been detected in the past. A visualization that appears on the phone's screen denotes currently nearby devices and the degree of "familiarity" with those devices based on past encounters.
  • Blogger Bridges (bbp.notlong.com) by Joe McCarthy, Mike Perkowitz and Matthai Philipose at Intel Research Seattle, a new project designed to explore the possibilities of carrying over online relationships between members of weblog communities into face-to-face reality through the use of networked devices that can recognize their proximity to one another. (Disclaimer: Encounter Bubbles team member Sean Savage may work on the Blogger Bridges project during the summer of 2004.)

    These are all useful and ingenious projects; they have all inspired Encounter Bubbles, and our project shares some attributes with these efforts. But none of these projects shows nearly as many encounters as Encounter Bubbles does, and none is designed to deal with encounters at the level and sort of detail that we provide.


    Description of the Visualization

    The following screenshot shows a typical user's default Encounter Bubbles view. It shows data from the point of view of "me," a single user who is running Mobster software on a single device: in this case, a wi-fi enabled laptop computer.

    Each bubble denotes my laptop's most recent encounter with a specific other device.

    Just one bubble represents each device in this view. If one device is encountered repeatedly, only the most recent encounter appears as a bubble.

    The "now box" shows devices detected at this moment. In the view above, Karen and three unidentified people are all in the room with me now.

    In the now box, bubbles near the top arrived recently; those nearer the bottom have been "here" (in range of my device) longer.
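
    A rough sketch of this ordering, assuming each bubble object carries the start time of its most recent encounter (the field names are ours, not part of the prototype's actual code):

     // Hypothetical: sort the bubbles in the "now" box so the most recently
     // arrived encounter sits at the top and longer-standing ones sit lower.
     function sortNowBox(bubbles) {
         bubbles.sort(function (a, b) {
             // A later start time (more recent arrival) sorts toward the top.
             return b.startTime.getTime() - a.startTime.getTime();
         });
         return bubbles;
     }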


    Movement conveys additional information not visible in the static screenshots (a sketch of one way these rules might be computed appears after this list):

  • Most of the bubbles move downwards and become dimmer with time, slowly but constantly. The movement is faster towards the top because of the gradual perspective effect (that is, because periods of time are compressed progressively the further they lie from the moment focused upon by the now box).

  • Never-before-encountered devices enter the "now" box from the top.

  • Occasionally a bubble quickly floats to the top of the "now" box from somewhere below in the history polygon, getting larger and brighter as it rises. It gets larger in keeping with the perspective-wall metaphor (i.e., bubbles more distant in time appear smaller). It also gets a bit larger relative to other bubbles sharing the same vertical position: with each encounter (each entrance into the now box) a bubble's relative size increases slightly and it becomes slightly brighter.
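
    The following sketch shows one way these movement rules could be computed each frame for a bubble. The constants and layout values are invented for illustration and are not our final tuning:

     // Hypothetical per-frame update. hoursAgo: how long ago this bubble's
     // encounter ended; timesSeen: how many encounters we've had with its device.
     function updateBubble(bubble, hoursAgo, timesSeen) {
         // Perspective effect: older encounters are compressed toward the bottom,
         // so a bubble moves quickly while recent and ever more slowly as it ages.
         var topY = 60;                                             // just below the "now" box
         var travel = Stage.height - topY - 20;
         bubble._y = topY + travel * (hoursAgo / (hoursAgo + 24));  // 24 hours = invented "half-distance"

         // Dim with age, but grow and brighten a little with each repeat encounter.
         bubble._alpha  = Math.max(25, 100 - 2 * hoursAgo) + 5 * timesSeen;
         bubble._xscale = bubble._yscale = Math.max(30, 100 - 2 * hoursAgo) + 5 * timesSeen;
     }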


    Interactive functions are triggered by mouse activity (a condensed sketch of this wiring appears after the list):

  • Back in time: By default the focus box at the top of the screen is the "now" box, but the user can drag it down to focus in on a moment in the past.

    The visualization changes in fisheye-lens style: the word "now" stays at the top along with the current time, but the current bubbles shrink and crowd together as the focus box is pulled farther down, making the focused-upon past bubbles and those around them relatively larger.

  • Ghost bubbles: If a device "now" encountered was also encountered at the time in the past that's being focused upon, that device's primary bubble remains up top in the "now" area, but a ghost-image bubble for it appears in the focus lens denoting its past encounter.

    The ghost bubbles are connected to their corresponding focused bubble via a dotted line.

    There's another way to view ghost bubbles: if I click on any bubble, all of the previous encounters I've had with that device appear as ghost bubbles, as seen below. Also, clicking on a bubble will cause the corresponding profile window to appear, if the person represented by that bubble has entered a profile into the system, and if that person has made their profile visible to me. Below, Karen has posted a profile and made it visible to me, so this is what I see when I click on her bubble.

    Before adding a profile, Karen "identified" her device, elsewhere in the main Mobster software. This process consisted of specifying her name, and selecting other Mobster users with whom she wanted to share her identity and profile.

    (In the view represented by the first screen shot, my friends Jane and Waldo shared their device IDs with me. In the screen shot above, Karen is the only person whom I'm currently encountering whose identity is visible to me.)

  • Mouseovers: Users can also mouse over a bubble to see more information. The following screen shot illustrates a mouseover of Karen's bubble.

    Here, the device's name appears (only for devices whose Mobster identities have been shared with me), along with:

    • lines connecting bubbles of any people who have been in the same room with Karen (that is, lines between the selected encounter bubble and any other devices that my Mobster has ever seen in the "now" box together with the selected device).

    • (only in the case of colored bubbles, which are grouped bubbles) group name, and

    • the moused-over bubble's profile window, if one exists.

  • Encounter alerts: I may double-click any bubble to set (or remove) an "alert" for the corresponding device. I will then be alerted the next time that device is encountered.
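
    A condensed sketch of how these mouse interactions might be wired onto a bubble movie clip. The helper functions named here (showConnections, hideConnections, showGhostBubbles, showProfile, toggleAlert) are hypothetical placeholders, not functions in our prototype:

     // Hypothetical wiring of a bubble's mouse behavior.
     function wireBubble(bubble) {
         bubble.onRollOver = function () {
             showConnections(this.deviceId);      // lines to co-present bubbles, group name, profile
         };
         bubble.onRollOut = function () {
             hideConnections();
         };
         bubble.onPress = function () {
             var now = getTimer();
             if (now - this.lastClick < 400) {    // crude double-click detection
                 toggleAlert(this.deviceId);      // set or remove an encounter alert
             } else {
                 showGhostBubbles(this.deviceId); // all my past encounters with this device
                 showProfile(this.deviceId);      // profile window, if shared with me
             }
             this.lastClick = now;
         };
     }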


    Miscellaneous:

    "Make a group" function: at any time I can "group" all bubbles in the "now" box for future reference. When I activate this function I'm asked to choose a color (from just six to eight colors, probably) and I'm given the opportunity (but not required) to name this group of people/devices. All these bubbles turn the color I chose and will remain that color from now on (until I remove that group). At the family reunion I can group all my family members red; at the conference keynote I can group all my colleagues yellow. (I can also group people another way, by selecting from a list of all my identified friends' names).

    Replay mode: Users can also enter replay mode, whereby they can replay the basic bubble visualization backwards and forwards at various speeds. In this way they can quickly see patterns in their encounters, in the same way that time-lapse photography reveals patterns in the movements of stars and the growth of flowers.
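
    One simple way replay mode could be driven; the speed, tick interval, and the redrawBubblesAt routine are all invented for illustration:

     // Hypothetical replay loop: advance a simulated clock and redraw the bubbles.
     var replayTime = new Date(2004, 3, 1);    // where the replay starts
     var replaySpeed = 60 * 60 * 1000;         // one hour of history per tick

     function replayTick() {
         replayTime = new Date(replayTime.getTime() + replaySpeed);
         redrawBubblesAt(replayTime);          // hypothetical redraw routine for that moment
     }
     var replayInterval = setInterval(replayTick, 100);  // ten ticks per second
     // clearInterval(replayInterval) stops the replay; a negative replaySpeed runs it backwards.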

    Horizontal position conveys no information. In future versions, bubbles at each given time might be made to gravitate towards the center of the screen, so that alternating busy and quiet periods (periods with many encounters and those with very few encounters) will be visible at a glance as wave shapes in the placement of bubbles along the margins.


    Key Visualization Principles Used

    Given the black background that the visualization is set against, viewers perceive the brightest bubbles immediately. Variations in contrast are preattentive; this means that viewers recognize differences in contrast immediately upon viewing them, before consciously considering the image. Preattentive image properties powerfully convey information; the use of contrast here effectively conveys frequency of recent encounters.

    We presume that Encounter Bubbles users will be most interested in people whom they encounter frequently, and those whom they have encountered recently. This is why we use preattentive properties to highlight the bubbles corresponding to these people.

    Color is useful for showing distinctions among categories, and that's how it's used here: the Encounter Bubbles visualization uses color to label groups. These colored groups are easily recognized as distinct from one another and from the remaining ungrouped bubbles.

    The visualization also incorporates the gestalt property of connectedness, wherein solid and dashed lines connecting bubbles convey particular relationships between them.



    Data Used

    For the first visualization we used sample "dummy data," but the Mobster team is already testing early client software and populating the database, so we soon hope to create a "live" version of the visualization backed by real data. Right now we're focusing much more on after-the-fact exploration of historical data than on real-time data, so we expect most of the data to be "pre-stored."

    Core data used includes the following (a sketch of the shape of our dummy records appears after this list):

  • Unique identifiers for users’ devices.
  • User-entered profile information mapped to each user's device IDs.
  • Representations of groups (collections of users).
  • Media types for each supported access point and device (possibilities include Wi-Fi, Bluetooth, telephony, GPS). This is important because these separate media have different levels of location precision (for instance, GPS can pinpoint a location but proximity to a cellular telephone antenna might only tell us that a user is somewhere within a several-mile-radius circle).
  • (See "place identifiers" below in Future Work)
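
    For concreteness, the dummy records we work with look roughly like the following; every value here is made up:

     // Illustrative dummy data in the shape described above (all values invented).
     var devices = [
         { id: "00:0d:93:12:34:56", media: "wifi",      owner: "Karen" },
         { id: "00:40:96:aa:bb:cc", media: "wifi",      owner: null    },  // anonymous
         { id: "00:0a:95:77:88:99", media: "bluetooth", owner: "Waldo" }
     ];

     var groups = [
         { name: "colleagues", color: 0xFFFF00, members: ["Karen", "Waldo"] }
     ];

     var encounters = [
         { deviceA: "00:0d:93:12:34:56", deviceB: "00:40:96:aa:bb:cc",
           startTime: new Date(2004, 4, 3, 10, 0), endTime: new Date(2004, 4, 3, 11, 15) }
     ];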



    Tools Used

    We initially sketched design ideas on paper, and then explored them further with mockups created in Photoshop.

    We decided to implement Encounter Bubbles using Flash MX. Flash MX allowed us to quickly mock up storyboards for our visualization and iterate on our design. Our initial storyboards were static pages. We used ActionScript to make our interface interactive and to demonstrate some of the functionality Encounter Bubbles supports. We pieced together built-in ActionScript functions to display the current time in our prototype. Most of the interactive features in our prototype are triggered when a user clicks on or hovers over a part of the visualization.
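
    As a rough illustration of the sort of thing we mean, a clock pieced together from those built-in functions might look like this (the text field name is ours):

     // Minimal clock: create a text field and refresh it every second
     // using the built-in Date class and setInterval.
     this.createTextField("clockField", 1, 10, 10, 120, 20);

     function updateClock() {
         var now = new Date();
         var m = now.getMinutes();
         clockField.text = now.getHours() + ":" + (m < 10 ? "0" + m : String(m));
     }
     setInterval(updateClock, 1000);
     updateClock();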

    Originally we planned to simulate the dragging motion of the fisheye-lens focus box; however, we quickly realized that simulating this interaction properly would require more time and more expertise with Flash and ActionScript than we had. Instead of implementing a dragging motion, we made the focus box clickable and linked it to a different frame in the Flash movie.

    The interaction flow for clicking the focus box is displayed below, along with the basic ActionScript used to call different frames.


    Frame 1

     
     // Clicking (releasing) the focus box jumps to Frame 10, one month back.
     on (release) {
         gotoAndPlay(10);
     }


    When our Flash movie loads, the focus box is positioned at the top of the encounter funnel, revealing encounters happening now (Frame 1, above).


    Frame 10
     
     // Clicking the focus box again jumps to Frame 13, near the bottom of the funnel.
     on (release) {
         gotoAndPlay(13);
     }


    When a user clicks on the focus box, another frame in the Flash movie (Frame 10, above) is called and it shows the lens repositioned to reveal encounters from one month ago.


    Frame 13
     

    Clicking the focus box again calls yet another frame in the Flash movie (Frame 13, above) that shows the focus box near the bottom of the encounter funnel, revealing encounters from 11 months and 3 weeks ago.