Why, you might ask, are they doing this? Well, with a collection of 3D images representing digital versions of hundreds of household objects, researchers can teach robots to better understand the world around them. In essence, they are building a library to teach robots to be smarter than Roombas. For our robots to be intelligent, we need a digital, representational world filled with objects, each enriched with metadata (attributes).
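To make that idea concrete, here is a minimal sketch of what one metadata-enriched object record might look like. This is purely illustrative: Kinect@Home hasn't published a schema, so the class name, fields, and attribute keys below are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class HouseholdObject:
    """One entry in a hypothetical library of scanned household objects."""
    name: str          # e.g. "coffee mug"
    category: str      # e.g. "kitchenware"
    mesh_file: str     # path to the 3D scan (file format is an assumption)
    attributes: dict = field(default_factory=dict)  # the enriching metadata

# A robot could consult attributes like fragility before acting on an object.
mug = HouseholdObject(
    name="coffee mug",
    category="kitchenware",
    mesh_file="scans/mug_001.ply",
    attributes={"graspable": True, "fragile": True, "typical_location": "kitchen"},
)

if mug.attributes.get("fragile"):
    print(f"Handle the {mug.name} gently.")
```

The point isn't the particular fields; it's that a 3D scan alone only tells a robot what an object looks like, while the attached attributes tell it what the object *is* and how to treat it.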
The success of this project comes from the millions of Kinect systems already in people's homes. These Xbox sensors are perched above many living room TVs and are already capable of seeing all the objects in the room. The researchers hope to collect data on everything we see around us "from your couch to your TV to slippers, guitars, mugs, and toys." With all this information organized and shared, robots and services could understand our world and then be programmed to make it better.
Kinect@Home is just a small part of the digitization of the physical objects around us. In the foreseeable future, humans will have virtually mapped the physical world and enriched it with metadata, creating a digital model of our surroundings. The article also mentions Google's Open Graph, which aims to be a database for everything. If organizing systems like Kinect@Home and Open Graph prove effective, we will see a rapid expansion of devices and services that use this data to improve our lives (and maybe Roombas will finally know to avoid getting stuck under the bed).