
Roomba extension

Project Members: 
Jonathan Breitbart
Kathleen Lu
Srikanth Narayan

INFO 290-13 Midterm Project Proposal


Option 1: Roomba Vacuum Cleaner Modification

With this project, we propose extending the Roomba's functionality so that users can interact with the system via a tangible user interface. Currently the Roomba can either be controlled by a Wireless Command Center (a remote control) or by an algorithm for vacuuming a specific area. Under both circumstances, a Virtual Wall restricts the Roomba from crossing certain areas. We would like to design a system where users specify a cleaning path via a touch-screen map interface. The user draws a path with his/her finger on the map and the Roomba responds (remotely) by following that path in the actual area being cleaned. This modification would allow users to remotely monitor and control the activity through a real-time visual representation of the Roomba itself and its environment. The user would be able to regulate the Roomba's behavior by physically interacting with the virtual representation of the unit and its environment. The Roomba could also relay feedback to be displayed on the virtual representation and alert the user to possible problems, changes in conditions, task completion, etc. Using this information, the user could modify the behavior of the Roomba accordingly.

Possible System Design

Our new system would consist of a touch-screen control center.
Included in the control center would be a video representation of the
actual area where the Roomba is located. This projection could be as
simple as an overhead (two-dimensional) view of the floor pattern for
the room. A camera (or multiple cameras) mounted on the ceiling of an
area could provide this view. The camera(s) would transmit the video
image to the control center, where it would be displayed on the touch
screen. This would allow the user to remotely view the environment in
which the system is located and see any changes that occur in the
environment (e.g. people walking or moving furniture, spills, movement
of the Roomba, etc.).
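To make the map interactive, touch points on the displayed camera image would need to be translated into floor coordinates. As a rough sketch (the corner correspondences below are made-up calibration values, and the camera is assumed to be roughly overhead), a planar homography fitted from four known floor corners could perform this mapping:

```python
import numpy as np

# Hypothetical calibration: four floor corners as seen in the camera
# image (pixels) and their real-world floor positions (metres). These
# numbers are illustrative, not measured.
IMG_PTS = np.array([[40, 30], [600, 28], [610, 450], [35, 455]], float)
FLOOR_PTS = np.array([[0, 0], [4.0, 0], [4.0, 3.0], [0, 3.0]], float)

def fit_homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points
    (standard direct linear transform)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null-space vector of A (smallest singular value) gives H up
    # to scale.
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 3)

def touch_to_floor(H, px, py):
    """Map a touch-screen pixel to floor coordinates in metres."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

H = fit_homography(IMG_PTS, FLOOR_PTS)
```

With a calibration like this, a finger drag on the screen becomes a sequence of floor-coordinate waypoints that can be handed to the Roomba controller.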

The user would then direct the Roomba's movement by dragging a finger along the desired path on the touch-screen map. The user would use the information on the virtual map being displayed to choose an optimal path (e.g. the user could avoid furniture and other obstacles visible in the picture). The touch screen could draw the path traced by the user's finger on the screen and store the most recent path chosen. The user-drawn virtual route would then be transmitted to the Roomba, which would then physically follow the path in reality. This could be done by sending simple directional instructions to the Roomba. The Roomba would not necessarily need to know anything about its location, but would simply follow the instructions received from the control center. Once the Roomba completes the specified path, it could stop and optionally communicate task completion with an alert (currently a 2-note sound). Motion information could also be recorded of the areas the Roomba actually covers, and this information could be displayed (perhaps denoted as transparent shading on the control center screen) to inform the user what area of the environment the system has covered. This could subsequently influence the user's decisions for future Roomba movement. The user could also change or stop the Roomba mid-path, perhaps by simply drawing another path on the control center screen or by tapping the image of the Roomba on the screen. When the user wants to stop the Roomba from cleaning, he/she could tap the screen at the location of the Roomba's base, which would then transmit instructions telling the Roomba to return to the base.
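The "simple directional instructions" could be produced by breaking the drawn path into straight segments. The sketch below is our own illustration (the function names and the assumption that the Roomba starts facing the +x axis are ours, not part of the Roomba's actual command set); it turns a list of map waypoints into relative turn-and-drive steps:

```python
import math

def path_to_commands(waypoints):
    """Convert an ordered list of (x, y) floor points (metres) into
    simple (turn_degrees, drive_metres) instructions relative to the
    robot's current heading. Assumes the robot starts facing +x."""
    heading = 0.0  # radians, 0 = +x axis
    commands = []
    x, y = waypoints[0]
    for nx, ny in waypoints[1:]:
        target = math.atan2(ny - y, nx - x)
        turn = math.degrees(target - heading)
        # Normalise to [-180, 180] so the robot takes the shorter turn.
        turn = (turn + 180) % 360 - 180
        dist = math.hypot(nx - x, ny - y)
        commands.append((round(turn, 1), round(dist, 3)))
        heading, x, y = target, nx, ny
    return commands
```

For example, an L-shaped drag from (0, 0) to (1, 0) to (1, 1) becomes "drive 1 m, turn left 90 degrees, drive 1 m". Each step could then be issued over whatever control channel the remote already uses.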


Option 2: Green Thumb for Dummies

We propose a gardening device with indicators informing users of plant conditions. Unless a user is familiar with gardening, there is a tendency to either under-water or over-water plants. With a physical icon (e.g. a green thumb or miniature plant) or another representation of plant needs (water, fertilizer, sun, etc.), the device could inform users of the appropriate action (water plants, stop watering plants, add fertilizer, too much/not enough sun).

Possible System Design

The system would consist of several devices. A photocell would measure the amount of sunlight, while an analog sensor would measure the saturation level, or moisture density, of the soil. Based on the input from these devices, LEDs could serve as outputs, where color and/or blinking rate indicates the action required from the user (for example: (a) red, fast blinking = needs water now; (b) yellow blinking = too much sun). A final device, or an alternative to the LED output, would be a physical icon or other representation of the plant, such as a "green thumb" object or a miniature plant.
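A minimal sketch of the threshold logic, assuming sensor readings normalised to the range 0.0-1.0 (the cutoff values here are illustrative guesses, not horticultural recommendations):

```python
def plant_status(light, moisture):
    """Map normalised light and soil-moisture readings (0.0-1.0) to an
    LED signal of (color, pattern). Thresholds are illustrative only."""
    if moisture < 0.2:
        return ("red", "fast-blink")   # needs water now
    if moisture > 0.8:
        return ("red", "slow-blink")   # over-watered, stop watering
    if light > 0.9:
        return ("yellow", "blink")     # too much sun
    if light < 0.1:
        return ("yellow", "solid")     # not enough sun
    return ("green", "solid")          # conditions look fine
```

The same status tuple could just as easily drive the physical icon, e.g. raising or lowering a miniature "green thumb" instead of lighting an LED.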



Observation - Looks like you have two systems, so we'll make shorter comments on both of them! (1) The Roomba currently works pretty much autonomously (although one could argue that the appeal of the Roomba is just that). (2) People are unaware of the needs of certain plants, and you can build a system to help newbies turn their brown thumb into a green thumb.

Systems - (1) The Roomba proposal involves creating a touch-screen remote control for the Roomba, to correct potential problems with existing solutions. It might be interesting to consider additional modes of interaction with the Roomba. For example, the touch-screen system is in many ways like a traditional computer interface and still involves action at a distance -- are there other interactions more closely tied to the Roomba itself? Would it be possible to move away from screens entirely, or to consider a hybrid solution? (2) The green thumb project seems to be primarily a display system. How do you see a user interacting with this system? -- What type of input might the user give back into the system, either explicitly or implicitly? What are the tangible aspects of this system and how does it take advantage of digital information?

Related work -- You may want to talk to Elizabeth Goodman (in BID), who is doing research on community gardens and technology.
