Final Project: Interior Modeling and Planning

Posted by raghavchandra

Interior Modeling and Planning
(Architecture and Interior Designing)
We have evolved our idea from the midterm project; while we are sticking with the original "Lens" interface, the experience is entirely new.

Original midterm project: a digital magnifier that acts as a virtual window for interacting with remotes and cell phones whose controls are too small for elderly users to operate comfortably.

Feedback: The intended experience of using the lens with the metaphor of a magnifying glass is good. The problem is that the magnifier does not eliminate the poor interface of the device being operated (such as a TV remote); it simply reuses that interface.

Final Project Proposal:
Our project allows the user to visualize and interact with the interiors of a designed house.
Imagine your architect gives you a 3D plan of your new house. Before you finalize the plan, you want to know how it would really feel to move around inside the house. Our lens allows you to explore the house as if you are really in it, using intuitive gestures and motions.
To get the complete picture, you want the feel of a furnished house. Suppose you are at a furniture store, wondering how a nice-looking table would look in your kitchen. Our lens allows you to capture the table and place it in the virtual house at just the right location and orientation. Now you can walk around the table in the furniture shop, and the lens creates a 3D overlay of the table in the house. In essence, the lens transforms the surroundings of the table (i.e. the store) into a realistic representation of your house.

The Lens:
The lens is a light, portable screen. It can be used in two modes - Gesture and Walkthrough. The Gesture mode allows you to explore by moving the lens in different directions or back and forth. The Walkthrough mode allows you to actually walk around with the lens and get a life-size experience of viewing the house, as though seeing an invisible world using the lens as a portal.
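The mode distinction above can be illustrated with a minimal sketch. This is purely hypothetical pseudocode for the proposal, not an actual implementation: the class name, the gesture amplification factor, and the translation-only camera model are all assumptions.

```python
class LensNavigator:
    """Hypothetical sketch: maps physical lens motion to virtual camera motion.

    In Gesture mode, small hand motions are amplified so the user can explore
    without walking; in Walkthrough mode, motion maps one-to-one for a
    life-size experience. The scale factor is an assumed parameter.
    """

    GESTURE_SCALE = 5.0  # assumed amplification for Gesture mode

    def __init__(self):
        self.mode = "walkthrough"
        self.camera = [0.0, 0.0, 0.0]  # virtual camera position (x, y, z)

    def set_mode(self, mode):
        assert mode in ("gesture", "walkthrough")
        self.mode = mode

    def on_device_moved(self, dx, dy, dz):
        """Called with the lens's physical displacement since the last frame."""
        scale = self.GESTURE_SCALE if self.mode == "gesture" else 1.0
        self.camera[0] += dx * scale
        self.camera[1] += dy * scale
        self.camera[2] += dz * scale
        return tuple(self.camera)
```

A real version would track rotation as well as translation, but the sketch captures the key design choice: both modes share one camera, and only the motion mapping changes.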

Layering:
The user stands in front of the object they want to place in their house, then moves through the virtual house to the desired spot and freezes the virtual world. The lens then shows an overlay of the object on this frozen view, and the user walks around the object until it lines up with the frozen image in the desired position and orientation. On unfreezing the virtual world, the user can use the Walkthrough mode (see above) to keep both motions in sync, seeing a realistic depiction of the object in their house.
We are also considering recording each captured object as a layer that can be added onto the virtual house, allowing users to see multiple objects in the same space.
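The freeze/align/unfreeze workflow can be sketched in a few lines. This is a hypothetical, translation-only illustration of the idea (the class and method names are invented for this sketch); a real system would track full 6-DOF poses.

```python
class LayeringSession:
    """Hypothetical translation-only sketch of the layering workflow.

    While frozen, the virtual view stays fixed and the user walks around the
    physical object to align it with the frozen image. On unfreeze, the offset
    accumulated during alignment is recorded so that subsequent physical
    motion keeps the real object and the virtual house in sync.
    """

    def __init__(self):
        self.virtual_pose = (0.0, 0.0)   # user's position in the virtual house
        self.physical_pose = (0.0, 0.0)  # user's position in the real store
        self.frozen = False
        self.offset = (0.0, 0.0)         # physical-to-virtual correction

    def freeze(self):
        # Lock the virtual view; the user now aligns with the real object.
        self.frozen = True

    def unfreeze(self):
        # Record how far the user moved physically while the view was frozen,
        # so later physical motion maps back into the house consistently.
        self.frozen = False
        self.offset = (self.virtual_pose[0] - self.physical_pose[0],
                       self.virtual_pose[1] - self.physical_pose[1])

    def on_walk(self, dx, dy):
        px, py = self.physical_pose
        self.physical_pose = (px + dx, py + dy)
        if not self.frozen:
            self.virtual_pose = (self.physical_pose[0] + self.offset[0],
                                 self.physical_pose[1] + self.offset[1])
        return self.virtual_pose
```

The key property the sketch shows: after unfreezing, walking in the store moves the user through the house by the same amount, so the captured object appears anchored in place.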
Implementation:
A full implementation is not feasible in the time available for the final project. We have divided the roles among our three members across two categories - implementation and presentation.
Implementation and Research on Technical Viability:
Apoorva Sachdev: Insertion of the object
Raghav Chandra: Extraction of the object
Karthik Lakshmanan: Motion-based movement

Presentation:
Raghav Chandra: Physical mock-up of the lens
Karthik Lakshmanan: Concept video capture
Apoorva Sachdev: Interface design for the demo