P3 – Groupo Naidy – Group 1

 

Our group wanted to create a system that could make the interaction between customers, the kitchen, and servers smoother and more efficient. In our first prototype test, we hope to see whether our new interface is intuitive to use and actually improves efficiency, or whether it introduces too much information to servers and confuses them.

Mission Statement:

Our goal for this project is to create a system that aids the work of servers in a restaurant by easily providing them with information on the state of the tables they are waiting on. This information could help them make better decisions about how to order the tasks they complete, and complete those tasks more efficiently.

Avneesh, Kuni, and Yaared created the rough prototypes.

Joe and John reviewed and improved the prototypes. They also wrote the task descriptions.

All members participated in documenting the prototypes and writing up the discussion.

PROTOTYPE DOCUMENTATION

The main board shown to the servers.

The Motherboard displays the statuses of all the tables in a given section of the restaurant. The tables are arranged according to the floor plan of the section, and each table has indicators for cup statuses, a two-column table of all the orders, and a timer showing how long it has been since the order was placed. For cup statuses, we have three lights – green, yellow, and red – that correspond to full, half-full, and empty, respectively. A number beneath each of these lights indicates how many cups are in each state. Our table of orders highlights each order as either green or red, depending on whether the item has been prepared or not, respectively. When the item has been delivered, the entry becomes a white box. Finally, there is a timer for every table that shows how much time has passed since the orders were placed. If a table has had all green items for over 5 minutes, the table itself turns red, indicating that the food has been sitting for a while. Our coasters were simply cardboard squares for the moment.
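
To make the status rules above concrete, here is a minimal Python sketch of the per-table logic. Only the color meanings and the 5-minute rule come from the prototype; the class, field, and function names are our own placeholders.

from dataclasses import dataclass, field

# Hypothetical sketch of the Motherboard's per-table status logic.
READY_TIMEOUT_SECONDS = 5 * 60  # table turns red after food has sat ready for 5 minutes

@dataclass
class OrderItem:
    name: str
    prepared: bool = False   # rendered green when True, red when False
    delivered: bool = False  # rendered as a white box once delivered

@dataclass
class TableStatus:
    cup_counts: dict = field(default_factory=lambda: {"full": 0, "half": 0, "empty": 0})
    orders: list = field(default_factory=list)
    order_placed_at: float = 0.0
    all_ready_since: float | None = None

    def item_color(self, item: OrderItem) -> str:
        if item.delivered:
            return "white"
        return "green" if item.prepared else "red"

    def table_color(self, now: float) -> str:
        # The table turns red when every undelivered item has been ready (green) for too long.
        undelivered = [i for i in self.orders if not i.delivered]
        if undelivered and all(i.prepared for i in undelivered):
            if self.all_ready_since is None:
                self.all_ready_since = now
            if now - self.all_ready_since > READY_TIMEOUT_SECONDS:
                return "red"
        else:
            self.all_ready_since = None
        return "normal"

    def seconds_since_order(self, now: float) -> int:
        return int(now - self.order_placed_at)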


The start screen for entering a new order


The screen to enter the details of an order.


The screen to enter comments for a particular item ordered

The confirmation screen for changing an existing order

The confirmation screen for canceling an order

The confirmation screen for completing an order and sending it to the kitchen.

This is the device through which servers input orders. It has three functions – making new orders, changing existing orders, and canceling existing orders. The interface is a touch-display screen. The home screen simply allows users to pick a table and then either “Make a new order” or “Change the existing order” for that table. The order screens respectively display the “Current order” (plus any comments), the “Menu”, and the “Make new/change order” and “Cancel order” buttons. To add something to an order, a server simply ‘flicks’ an item from the menu to the left (this propels the item to the left and adds it to the current order; for more than one, simply flick again). To delete an item from the current order, flick it to the left again so it leaves the order (if there are x2 of an item in an order, flick left twice to remove both). Servers can also add comments (e.g. well done, spicy) by pressing the comment box next to each item in the current order, which navigates to a comment screen with a keyboard where comments can be attached or canceled. Once the order is done, press “Make/change order”. To cancel an order, simply press “Cancel order”. serverCenter is set up so that we don’t run into consistency issues with the information in the kitchen center/Motherboard.
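
As a rough illustration of the flick interaction described above, here is a minimal Python sketch of the order model behind serverCenter; the class and method names are our own, and the real device may store orders differently.

from collections import Counter

# Hypothetical sketch of serverCenter's order model. A left "flick" on a menu
# item adds one of that item; a left flick on an item already in the current
# order removes one. Comments are attached per item name.
class CurrentOrder:
    def __init__(self):
        self.items = Counter()   # item name -> quantity
        self.comments = {}       # item name -> comment text

    def flick_from_menu(self, item: str):
        """Flick a menu item left: add one of it to the current order."""
        self.items[item] += 1

    def flick_from_order(self, item: str):
        """Flick an ordered item left again: remove one (flick twice to remove x2)."""
        if self.items[item] > 0:
            self.items[item] -= 1
            if self.items[item] == 0:
                self.items.pop(item)
                self.comments.pop(item, None)

    def attach_comment(self, item: str, text: str):
        if item in self.items:
            self.comments[item] = text

# Example: two flicks add x2 fries; one more flick on the order removes one.
order = CurrentOrder()
order.flick_from_menu("fries")
order.flick_from_menu("fries")
order.attach_comment("fries", "extra crispy")
order.flick_from_order("fries")
print(order.items, order.comments)  # Counter({'fries': 1}) {'fries': 'extra crispy'}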

The help button

The help button after calling for help.

bridgeServer is a mobile application that helps servers notify their team if they are in trouble and require help with their tasks. Servers log in to bridgeServer and are authenticated against up-to-date establishment staff rosters. A server simply presses the “Send for help!” button when in need of assistance. This signal is picked up and displayed on the Motherboard to notify the other servers on duty (a “Help is on its way!” pop-up). One of the other servers then responds and assists the struggling waiter.

TASKS

EASY – Calling for Help

Yaared is serving a customer and sees that another one needs to be served.

Yaared calls for help from another waiter.

Avneesh sees the call for help on the Motherboard.

Avneesh helps Yaared’s other customers

Job well done team!

When extremely busy and unable to ask another waiter for assistance, a waiter may request help by pressing a button on his or her handheld device.  The other waiters then receive a notification on their handheld device that the given waiter needs help.  If the solution to the problem is obvious, a nearby waiter can address it directly.  Alternatively, they could consult the motherboard to see if anything is amiss with the busy waiter’s tables.  If the problem is less obvious, they can at least come to the vicinity of the troubled waiter so that they can receive instructions directly.  The original waiter can then clear the help requested status when the problem has been resolved.

MEDIUM – Checking customer status

The motherboard shows the “state” of each table so the server can infer customer impatience from this low-level data.

A medium-level task that a waiter may have to perform is checking to see which tables are waiting on orders, have drinks that need refilling, or may be requesting attention directly. Currently, serving staff in restaurants need to physically go to the area in question, survey each table in detail, and then report back to the kitchen with what is needed. The prototype app makes this task nearly trivial. Each table’s order data and drink levels are displayed on the prototype screen, along with the amount of time the group has been at the table. The user can easily see the number of drinks that need refilling by looking at the red and yellow lights. If a group has been waiting at a table for a significant amount of time without ordering food, this is also clearly visible because the table changes color. All of this data can be easily monitored from a central location instead of by manual survey.

HARD – Determining task order

Determining task order is much easier when there is information showing which issues are urgent.

Perhaps the hardest task for the wait staff is simply determining in what order to complete all of their other tasks. Different tasks take different amounts of time, and it is often difficult to complete them in an order that leaves no patron waiting too long. The most time-consuming of these is actually bringing out the food – since a waiter has to estimate how long it will take for the food to be prepared and may waste time standing around waiting for it – or, in the opposite case, miss it when it comes out because he or she is performing a different task. In the prototype, the red and green highlighting underneath the food ordered at each table shows whether or not it is ready. Using this system, the user no longer has to waste time going back to the kitchen to check, as the information is right in front of them on the prototype. This data, combined with the easier customer-status checking described above, should give the user an easier way to determine a task order and a greater margin of error when the chosen order is sub-optimal.

DISCUSSION

We created the layout for the motherboard in Adobe Illustrator, and all other parts were constructed from paper (the coasters were cardboard squares). Using Illustrator was a new technique, but it didn’t take too much time since we had a group member who knew how to use it. The most difficult part of our prototyping was figuring out a good interface for the input of the ordered items. We did not want to make the interface complicated and slow down the wait staff, but we wanted to log enough information for the Motherboard to be effective. We also did not want the waitstaff to have to record orders twice, once at the table and again when inputting them into the system. To solve this, our system could use a dedicated employee to take the waitstaff’s order notes and then enter them into the system. This way, the waitstaff have no “down time” when they can’t be serving or on the floor, and the pattern-breaking activity of order entry is concentrated in a single employee. We thought organizing the table information on the floor plan worked well for the motherboard, since we are presenting the information in a layout that people are already used to. Color coding various signals also provided a very simple way to convey certain information. We feel the mix of textual and color information prevents the motherboard from becoming too cluttered with text.

 

 

P3 – Life Hackers

Group 15: Prakhar Agarwal (pagarwal@), Colleen Carroll (cecarrol@), Gabriel Chen (gcthree@)

Mission Statement

Currently there is no suitable solution for using a touchscreen phone comfortably in cold weather. Users must either resort to using their bare hands in the cold or use unreliable “touchscreen compatible” gloves that often do not work as expected. Our mission is to create an off-the-screen UI for mobile users in cold weather. In our lo-fi prototype testing we hope to learn how simple and intuitive the gestures we have chosen for our glove really are for smartphone users. In addition to the off-the-screen UI, there is a phone application that lets you set a password for the phone.

We are all equally committed to this project, and we plan on dividing the roles evenly. Each member of our group contributed to portions of the writing and prototyping, and while testing the prototype we split up the three roles of videotaping, being the subject, and acting as the “wizard of Oz.”

Document the Prototype


Because our user interface is a glove with built-in sensors, we decided to prototype using a leather glove and cardboard. The cardboard is a low-fidelity representation of the sensors, intended to simulate and test whether the sensors would impede the motion or ability to make gestures with the actual glove. For the on-screen user interface, most of the functionality that we want the glove to work with is already built into the phone. For this reason, we decided to simply have test users interact with their phone while a “wizard of Oz” performed the “virtual” functionality by actually touching the phone screen. In addition, since the application for setting one’s password using our device has not yet been developed, we sketched a paper prototype for this functionality. By user-testing this prototype we hope to evaluate the overall ease of use of our interface.

Task 1: Setting the Password/Unlocking the Phone (Hard)


This task needs to be performed before using the other applications that we have implemented; therefore, it is important that it can be done with the gloves, so that users do not have to unlock their phone in the cold before each of our other tasks. The password is set using an onscreen interface in conjunction with the gesture glove. A user follows onscreen instructions – represented in the prototype with paper. They are told that they can only use finger flexes, unflexes, and pressing fingers together. Then they are told to hold a gesture on the glove and press a set button (with the unwired glove). The screen prints out what it interpreted as the gesture (for example, “Index and Middle finger flexed”). When the user is satisfied with the sequence of gestures, they can press the “Done Setting” button on screen. This task is labeled as hard because it involves a sequence of gestures mapping to a single function or action. In addition, users setting their gesture sequence need to interact with the application on screen.
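
To make the flow concrete, below is a minimal Python sketch of how the gesture password might be recorded and checked, assuming the glove’s sensor layer reduces each held gesture to a text label such as “Index and Middle finger flexed”; the class and method names are our own.

# Hypothetical sketch of recording and verifying a gesture password.
# Each gesture is assumed to arrive as a normalized string label produced
# by the glove's sensor layer (e.g. "index+middle flexed").
class GesturePassword:
    def __init__(self):
        self.sequence = []   # the stored password, a list of gesture labels
        self.recording = []  # gestures captured while setting the password

    def press_set(self, gesture: str) -> str:
        """User holds a gesture and presses 'Set': append it and echo it back."""
        self.recording.append(gesture)
        return f"Recorded: {gesture}"

    def press_done_setting(self):
        """User presses 'Done Setting': store the recorded sequence."""
        self.sequence = list(self.recording)
        self.recording.clear()

    def unlock(self, attempt: list[str]) -> bool:
        """Unlock succeeds only if the gesture sequence matches exactly."""
        return bool(self.sequence) and attempt == self.sequence

pw = GesturePassword()
pw.press_set("index+middle flexed")
pw.press_set("thumb+pinky pressed together")
pw.press_done_setting()
print(pw.unlock(["index+middle flexed", "thumb+pinky pressed together"]))  # True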

Task 2: Picking up and Hanging Up the Phone (Easy)


One of the most common tasks to perform outside while walking is talking on the phone, so it is a perfect interface to reinvent for our glove. Picking up and hanging up the phone have standard gestures, as opposed to the user-determined gestures used for setting a password. They use a familiar sign for picking up a phone, shown in the photo: thumb to the ear and pinky to the mouth, with the rest of the fingers folded. This is the easiest of the three tasks that we have defined. The user simply needs to perform the built-in gesture of making a phone with his or her hand and moving it accordingly.

Task 3: Play, Pause, and Skip Through Music (Medium)


From our contextual inquiries with users during the past assignment, we found that listening to music is one of the most common things people use their phone for while in transit. However, one currently needs specialized headphones to play/pause/change the music they are listening to without touching their screen. Our glove provides users with another simple interface to do so. To play music, users simply make the rock and roll sign as shown in the photo. To pause the music, they hold up their hand in a halt sign. To skip forward a track, users point their thumb to the right, while to skip backward they point their index finger to the left.
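
The gesture-to-action mapping could be expressed as a simple lookup table; here is a sketch, where the gesture labels are our own shorthand for whatever the glove’s recognizer would emit.

# Hypothetical dispatch table for the music gestures described above.
MUSIC_GESTURES = {
    "rock_and_roll": "play",
    "halt_sign": "pause",
    "thumb_right": "skip_forward",
    "index_left": "skip_backward",
}

def handle_gesture(gesture: str) -> str | None:
    """Map a recognized gesture to a music-player action, if any."""
    return MUSIC_GESTURES.get(gesture)

print(handle_gesture("rock_and_roll"))  # play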

Discuss the Prototype

We made our prototype by taping the cardboard “sensors” to the leather glove for the gesture-based component of our design. The phone interface was made partially by paper prototyping and partially by using the actual screen. We simulated virtual interaction using the “wizard of Oz” technique described in the assignment specifications. In doing so, we found a couple of things that worked well in our prototype. Our gestures were simple in that they mapped one-to-one with specific tasks. We believe the interface (for setting passwords specifically) proved simple, while hopefully conveying enough information for the user to understand it. The system relies on many gestures that are already familiar to the user – the rock and roll sign and the telephone sign. We also saw that when we were wearing the glove, we could generally complete most everyday tasks, even off the phone (e.g. winding up a laptop charger cord), without added difficulty.

There were, however, definitely some things that prototyping helped us realize we could improve. We realized that we will need to consider the sensitivity of the electrical elements in the glove and its fragility when we are constructing it. When Prakhar opened a heavy door, one of the cardboard pieces of the prototype became slightly torn, helping us realize just how much wear and tear the glove will have to withstand to be practical for daily use. We also realized that we will need different gloves for lefties and righties, since only one hand will have the gesture sensors in it and right-handed users will have different preferences than left-handed users. The app will be configured to recognize directional motions based on whether a righty or lefty is wearing the glove. For example, the movements for skip forward and skip backward would likely be different for lefties and righties because of the difference in orientation of the thumb and the forefinger on either hand. Another thing we realized is that instead of having the gesture control in the dominant hand as we initially supposed, we should consider the benefits of having gesture control in the non-dominant hand, freeing up the dominant hand for other tasks. This was especially noticeable when testing the functionality to set the password, which required users to simultaneously use the phone and the glove. In that case, it would be easier to make gestures with the non-dominant hand while using the phone with the dominant one.

P3: Group 8 – The Backend Cleaning Inspectors

Group Number and Name
8 – The Backend Cleaning Inspectors

Members

  • Tae Jun Ham (tae@)
  • Peter Yu (keunwooy@)
  • Dylan Bowman (dbowman@)
  • Green Choi (ghchoi@)

Mission Statement
We the Backend Cleaning Inspectors believe in a better world in which everyone can focus on the important things without the distraction and stress from mundane chores. This is why we decided to make our “Clean Safe” laundry security system. Stressing over laundry is by far one of the most annoying chores, and our “Clean Safe” laundry system will rescue students from that annoyance. Now Princeton students will be able to work without having to worry about the safety of their laundry.

Description of Our Prototype

Our prototype consists of two parts: 1) the user interface with a keypad and an LCD screen, 2) the lock. The user interface is where the two users, the Current User and the Next User, interact with the lock and with each other. It has a 4-by-4 keypad, three LED lights and a black-and-white LCD screen. The lock consists of two parts. One will be mounted on to the door of the washing machine, and the other on the body of the machine next to the door. There is a dowel that connects the two parts and acts as the lock. Also, there is a servo motor inside one of the parts that will lift/lower the dowel to release/lock. The servo motor will be controlled by the user interface.

Description of Tasks

  1. Locking the machine (Current User)
    Our project lets the current laundry machine user lock the machine right after starting the laundry. This task is a three-step process: (1) the user inputs his student ID on the keypad (on the unlocked machine), (2) the user presses the “Enter” button on the keypad, (3) the user confirms his identity on screen by pressing the “Enter” button again.




    As shown in the picture above, our prototype mimics user operations on our module with a keypad and LCD screen. The “Enter” button is located at the bottom right of the keypad.

    The video above shows the testing process for this task with our prototype. As you can see, it is very simple to lock the laundry machine for added security.

  2. Sending a message to the current user that the laundry is done and someone is waiting to use the machine (Next User)
    This task lets the next laundry user send a message to the current laundry machine user. It is very simple: the user presses the “Alert” button, and the screen shows that the message has been successfully sent to the current user.


    As shown in the picture above, our prototype mimics user operations on our module with a keypad and LCD screen. The user simply presses the “Alert” button on the right side of the keypad.

    The video above shows the testing process for this task with our prototype. Our system provides an easy way for the two users to interact with each other.

  3. Unlocking the machine (Current User)
    This task lets the current laundry machine user open the machine with his student ID. It is a three-step process: (1) the user inputs his student ID on the keypad (on the locked machine), (2) the user presses the “Enter” button, (3) the user confirms his wish to unlock the machine by pressing the “Enter” button again. (A rough sketch of this lock/unlock flow appears after this task list.)




    As shown in the picture above, our prototype mimics user operations on our module with a keypad and LCD screen. This task is very similar to Task 1, except that it is performed on an already locked machine. This similarity makes our user interface more intuitive.

    The video above shows the testing process for this task with our prototype. As shown in the video, it is very simple to unlock the door and retrieve the laundry. With the extra security our machine provides after the laundry is done, the Current User is unlikely to lose any of his/her laundry.
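
As a rough illustration of the three tasks above, here is a minimal Python sketch of the lock’s behavior; the class and method names are ours, the ID check is simplified, and the servo action is only indicated by comments.

# Hypothetical sketch of the "Clean Safe" flow: lock with a student ID,
# alert the Current User, and unlock with the same ID. The servo is stubbed.
class CleanSafeLock:
    def __init__(self):
        self.locked = False
        self.owner_id = None
        self.alerts = 0  # messages sent by the Next User

    def enter_id_and_confirm(self, student_id: str) -> str:
        """Typing an ID and pressing Enter twice locks or unlocks the machine."""
        if not self.locked:
            self.owner_id = student_id
            self.locked = True          # servo lowers the dowel here
            return "Machine locked."
        if student_id == self.owner_id:
            self.locked = False         # servo lifts the dowel here
            self.owner_id = None
            return "Machine unlocked."
        return "Wrong ID."

    def press_alert(self) -> str:
        """Next User presses Alert: notify the Current User that laundry is done."""
        self.alerts += 1
        return "Message sent to current user."

lock = CleanSafeLock()
print(lock.enter_id_and_confirm("123456"))  # Machine locked.
print(lock.press_alert())                   # Message sent to current user.
print(lock.enter_id_and_confirm("123456"))  # Machine unlocked.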

Discussion of Prototype
We made our prototype using cardboard and plain white paper. The cardboard forms the base of our prototype, and the white paper pieces work as components on the base. In general, making the prototype was a straightforward process. However, we had to modify a few parts of the prototype (e.g., the LCD screen interface) to make it intuitive enough for someone to test without too much explanation. After these modifications, many testers were satisfied with the prototype, and we now believe our low-fidelity prototype does its job effectively.

P3 – Epple (Group 16)

Group 16 – Epple

Member Names
Saswathi: Made the prototype & part of the Final document
Kevin: Design Idea & Part of the Final document
Brian:  Large part of the Final Document
Andrew: Created the Prototype environment & part of the Final Document

Mission Statement

The system being evaluated is titled the PORTAL. The Portal is an attempt at intuitive remote interaction, helping users separated by any distance to interact in as natural a manner as possible. Current interaction models like Skype, Google Hangouts, and Facetime rely entirely on users to maintain a useful camera orientation and afford each side no control over what they are seeing. We intend to naturalize camera control by implementing a video chatting feature that uses a Kinect to detect the orientation of the user and move a remote webcam accordingly. Meanwhile, the user looks at the camera feed through a mobile viewing screen, simulating the experience of looking through a movable window into a remote location. In our first evaluation of the prototype, we hope to learn how to make controlling the webcam as natural as possible. Our team mission is to make an interface through which controlling web cameras is intuitive.
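
As a rough sketch of the intended control loop, assuming the Kinect reports the user’s head offset from the screen center and the remote webcam accepts pan/tilt angles; the function name and scaling factors below are our own placeholders, not part of the design.

# Hypothetical mapping from the user's head position (reported by the Kinect)
# to pan/tilt commands for the remote webcam.
PAN_DEGREES_PER_METER = 40.0
TILT_DEGREES_PER_METER = 30.0

def head_to_camera_angles(head_x_m: float, head_y_m: float) -> tuple[float, float]:
    """Map head offset from the screen center (meters) to pan/tilt (degrees)."""
    pan = max(-90.0, min(90.0, head_x_m * PAN_DEGREES_PER_METER))
    tilt = max(-45.0, min(45.0, head_y_m * TILT_DEGREES_PER_METER))
    return pan, tilt

# Example: the user leans 0.5 m to the right and 0.1 m up.
print(head_to_camera_angles(0.5, 0.1))  # roughly (20.0, 3.0)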

Description of Prototype

Our prototype uses a piece of cardboard with a cut out square screen in it as the mobile viewing screen. The user simply looks through the cut out square to view the feed from a remote video camera. From the feed, the user can view our prototype environment. This consists of a room with people that the user web chats with. These people can either be real human beings, or in some cases printed images of human beings that are taped to the wall. We also have a prototype Kinect in the room that is simply a decorated cardboard box.

Prototype in use. User keeps a subject in the portal frame by moving their own body.

Cardboard Kinect. Tracks user’s motion and moves the remote webcam accordingly.

Stand-in tablet. The portal through which the user views the remote location’s webcam feed.

 

Three Tasks

Task 1 : Web chat while breaking the restriction of having the chat partner sit in front of the computer

Difficulty: Easy

Backstory:

A constant problem with web chats is the restriction that users must sit in front of the web camera to carry on the conversation; otherwise, the problem of off-screen speakers arises. With our system, if a chat partner moves out of the frame, the user can simply and intuitively change the camera view to follow the person around, eliminating the problem of off-screen speakers. The conversation can then continue naturally.

How user interacts with prototype to test:

We have the user look through the screen to see and talk to a target person. We then have the person move around the room. The user must move the screen to keep the target within view while maintaining the conversation.

Saswathi is walking and talking. Normally this would be a problem for standard webcam setups. Not so for the Portal! Brian is able to keep Saswathi in the viewing frame at all times as if he were actually in the room with her, providing a more natural and personal conversation experience.

 


Task 2 : Be able to search a distant location for a person through a web camera.

Difficulty: Medium

Backstory:

Another way in which web chat differs from physical interaction is the difference in the difficulty of initiation. While you might seek out a friend in Frist to initiate a conversation, in web chat, the best you can do is wait for said friend to get online. We intend to rectify this by allowing users to seek out friends in public spaces by searching with the camera, just as they would in person.

How user interacts with prototype to test:

The user plays a “Where’s Waldo?” game: various sketches of people are taped to the wall. The user looks through the screen and moves it around until he is able to find the Waldo target.

After looking over a wall filled with various people and characters, the user has finally found Waldo above a door frame.

 


Task 3 : Web chat with more than one person on the other side of the web camera.

Difficulty: Hard

Backstory:

A commonly observed problem with web chats is that even if there are multiple people on the other end, the chat is often limited to a one-on-one experience where chat partners wait for their turn in front of the web camera or crowd together to appear in the frame. Users will want to use our system to web chat seamlessly with all of the partners at once. When the user wants to address another web chat partner, he will intuitively change the camera view to face the target partner. This allows for dynamic, multi-way conversations not possible through normal web camera means.

How user interacts with prototype to test:

We have multiple people carrying a conversation with the user. The user is able to view the speakers only through the screen. He must turn the screen in order to address a particular conversation partner.

The webcam originally faces Drew, but Brian wants to speak with Kevin. After turning a bit, he finally rotates the webcam enough so that Kevin is in the frame.


Discussion

The prototype is mainly meant to help us understand the user’s experience, so we have a portable display screen that resembles an iPad, made from a cardboard box with a hole cut out for a screen. One can walk around with the mobile display and look through it at the environment. The Kinect is also modeled as a cardboard box with markings on it, placed in a convenient location where a real Kinect detecting user movement would be. The prototype environment is made from printouts of various characters so that one can search for “Waldo”.

In creating our prototype, we found that the standard prototyping techniques of using paper and cardboard were versatile enough for our needs. It was difficult to replicate the feature of the camera following a scene until we hit upon the idea of simply creating an iPad “frame” through which we would pretend to be remotely viewing a place. Otherwise, the power of imagination made our prototype rather easy to make. We felt that our prototype worked well because it was natural, mobile, easy to carry, and enhanced our interactions well (since there was literally nothing obstructing our interaction). Even with vision restricted to a frame, we found that our interactions were not in any way impaired.

P3 – Runway

Team CAKE (#13) – Connie (demos and writing), Angela (filming and writing), Kiran (demos and writing), Edward (writing and editing)

Mission Statement

People who deal with 3D data have always had the fundamental problem that they are not able to interact with the object of their work/study in its natural environment: 3D. It is always viewed and manipulated on a 2D screen with a two-degree-of-freedom mouse, which forces the user to do things in very unintuitive ways. We hope to change this by integrating a 3D display space with a colocated gestural space in which a user can edit 3D data as if it were situated in the real world.

With our prototype, we hope to solidify the gesture set to be used in our product by examining the intuitiveness and convenience of the gestures we have selected. We also want to see how efficient our interface and its fundamental operations are for performing the tasks that we selected, especially relative to how well current modelling software works.

We aim to make 3D modelling more intuitive by bringing virtual objects into the real world, allowing natural 3D interaction with models using gestures.

Prototype

Our prototype consists of a ball of homemade play dough to represent our 3D model, and a cardboard 3D coordinate-axis indicator to designate the origin of the scene’s coordinate system. We use a wizard of oz approach to the interface, where an assistant performs gesture recognition and modifies the positions, orientations, and shapes of the “displayed” object. Most of the work in this prototype is the design and analysis of the gesture choices.

Prototype

Discussion

Because of the nature of our interface, our prototype is very unconventional. It requires two major parts: a comprehensive gesture set, and a way to illustrate the effects of each gesture on a 3D object or model (neither of which is covered by the standard paper prototyping methods). For the former, we considered intuitive two-handed and one-handed gestures, and open-hand, fist, and pointing gestures. For the latter, we made homemade play dough. We spent a significant amount of time discussing and designing gestures, less time mixing ingredients for play dough, and a lot of time playing with it (including coloring it with Sriracha and soy sauce for the 3D painting task). In general, the planning was the most difficult and nuanced part, but the rest of building the system was easy and fun.

Gesture Set

We spent a considerable amount of time designing well-defined (for implementation) and intuitive (for use) gestures. In general, perspective manipulation gestures are done with fists, object manipulation gestures are done with a pointing finger, and 3D painting is done in a separate mode, also with a pointing finger. The gestures are the following (with videos below; a dispatch sketch follows the list):

  1. Neutral – 2 open hands: object/model is not affected by user motions
  2. Camera Rotation – 2 fists: tracks angle of the axis between the hands, rotates about the center of the object
  3. Camera Translation – 1 fist: tracks position of hand and moves camera accordingly (Zoom = translate toward user)
  4. Object Primitives Creation – press key for object (e.g. “C” = cube): creates the associated mesh in the center of the view
  5. Object Rotation – 2 pointing + change of angle: analogous to camera rotation
  6. Object Translation – 2 pointing + change of location: analogous to camera translation when fingers stay the same distance apart
  7. Object Scaling – 2 pointing + change of distance: track the distance between fingers and scale accordingly
  8. Object Vertex Translation – 1 pointing: tracks location of tip of finger and moves closest vertex accordingly
  9. Mesh Subdivision – “S” key: uses a standard subdivision method on the mesh
  10. 3D Painting – “P” key (mode change) + 1 pointing hand: color a face whenever fingertip intersects (change color by pressing keys)
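
As a rough illustration, the gesture set above might be dispatched as follows. This sketch assumes a recognizer that reports each hand’s pose as “open”, “fist”, or “point”, plus the current mode; all names and the structure are ours, not part of the prototype.

# Hypothetical dispatch for the gesture set above. Keys (e.g. "P") would switch modes.
def dispatch(left: str, right: str, mode: str) -> str:
    hands = sorted([left, right])
    if hands == ["open", "open"]:
        return "neutral"                      # gesture 1
    if hands == ["fist", "fist"]:
        return "camera_rotate"                # gesture 2
    if "fist" in hands and "open" in hands:
        return "camera_translate"             # gesture 3 (one fist)
    if hands == ["point", "point"]:
        # gestures 5-7: disambiguated by how the two fingertips move
        return "object_rotate/translate/scale"
    if "point" in hands:
        return "paint_face" if mode == "paint" else "vertex_translate"  # gesture 10 or 8
    return "unrecognized"

print(dispatch("fist", "fist", mode="object"))   # camera_rotate
print(dispatch("point", "open", mode="paint"))   # paint_face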

Display

Our play dough recipe is simply salt, flour, and water in about a 1:4:2 ratio (it eventually hardens, but is sufficient for our purposes). We use a cardboard cutout to represent the x, y, and z axes of the scene (to make camera movements distinct from object manipulation). Lastly, for the sake of 3D painting, we added Sriracha and soy sauce for color. We did not include a keyboard model for selecting modes, to avoid a mess – in general, a tap on the table with a spoken intent is sufficient to model this.

To represent our system, we have an operator manually moving the object/axes and adding to/removing from/stretching/etc. the play dough as the user makes gestures.

Neutral gesture:
[Video demo]

Perspective Manipulation (gestures 2 and 3):
[Two video demos]

Object Manipulation (gestures 4-8):
[Five video demos]

3D Painting (gesture 10):
[Video demo]

Not shown: gesture 9.

Task Descriptions

Perspective Manipulation

The fundamental operation people perform when interacting with 3D data is viewing it. To understand a 3D scene, they have to be able to see all of its sides and parts. In our prototype, users can manipulate the camera location using a set of gestures that are always available regardless of the editing mode. We allow the user to rotate and translate the camera around the scene using gestures 2 and 3, which use a closed fist; we also allow smaller natural viewpoint adjustments when users move their heads (to a limited degree).

Object Manipulation

For 3D modelling, artists and animators often want to create a model and define its precise shape. One simple way of object creation is starting with geometric primitives such as cubes, spheres, and cylinders (created using gesture 4) and reshaping them. The user can position the object by translating and rotating (gestures 5 and 6), or alter the mesh by scaling, translating vertices, or subdividing faces (gestures 7-9). These manipulations are a combination of single finger pointing gestures and keyboard button presses. Note that these gestures are only available in object manipulation mode.

3D Painting

When rendering 3D models, we need to define a color for every point on the displayed surface. 3D artists can accomplish this by setting the colors of vertices or faces, or by defining a texture mapping from an image to the surface. In our application, we have a 3D painting mode that allows users to define the appearance of surfaces. Users select a color or a texture using the keyboard or a tablet, and then “paint” the selected color/texture directly onto the model by using a single finger as a brush.

P3: Lab Group 14

a) Group Information: Group #14, Team Chewbacca

b) Group Members

Karena Cai (kcai@) – in charge of designing the paper prototype
Jean Choi (jeanchoi@) – in charge of designing the paper prototype
Stephen Cognetta (cognetta@) – in charge of writing about the paper prototype
Eugene Lee (eugenel@) – in charge of writing the mission statement and brainstorming ideas for the paper prototype

c) Mission Statement

The mission of this project is to create an integrated system that achieves the goal of helping our target users take care of their dog in a non-intrusive and intuitive way. The final system should be almost ready for consumer use, excepting crucial physical limitations such as size and durability.  The purpose of the system is to aid busy dog-owners who are concerned about their dogs’ health but who must often spend time away from their household due to business, vacation, etc. Our system does so by giving the user helpful information about their dog, even when they are away from home. It would help busy pet-owners keep their dogs healthy and happy and give them greater peace of mind.  By helping owners effectively care for their pets, it might even reduce the number of pets sent to the pound, where they are often euthanized.  In our first evaluation of the prototype system, we hope to learn about what users consider the most crucial part of the system. In addition, we will learn to what extent the information about their dog should be passively recorded or actively notified to the user. Finally, we will try to uncover as many flaws with our current design as possible at this early step. This will prevent us from using our limited time on a feature that does not actually provide any benefit to the user.

d) Description of prototype

Our prototype includes three components: a paper prototype of the mobile application, a prototype of the dog bowl, and a prototype of the dog collar.  The mobile application includes the home screen and screens for tracking the dog’s food intake, tracking its activity level, and creating a document that includes all pertinent data (that can be sent to a vet).  The dog bowl includes an “LED” and a “screen” that shows the time the bowl was last filled.  The collar includes an “LED”.


A prototype for the dog bowl (Task 1)


Prototype for the dog collar (Task 2)


Notification set-up for the application


Main page for the app, where it shows exercise, diet, time since last fed, a picture of the dog (would go in the circle), edit settings, and exporting the data.


Exercise information for the dog, shown over a day, week, or a month. (Task 2)


Diet information for the dog, where it shows the bowl’s current filled amount, and the average information for the dog’s intake. (Task 1)


The function for exporting data to the veterinarian or other caretakers for the dog. (Task 3)

e) Task Testing Descriptions

Task 1: Checking when the dog was last fed, and deciding when/whether to feed their dog.

The user will have two ways to perform this task. First, they may look at the color of an LED on the dog bowl (which indicates how long it has been since the bowl was filled) or at the exact time of the last feeding, which is also displayed on the bowl. Alternatively, they can look at the app, which will display the time the bowl was last filled.

If no feeding has been detected for a long time, the user will receive a direct alert warning them that they have not fed their dog. We intend for our prototype to be tested both with the bowl alone and with the mobile application and the bowl together. The “backstory” for the bowl alone is that the owner is at home and wishes to see if they should feed their dog, and/or whether someone else in their family has already fed the dog recently. The “backstory” for the mobile application + bowl prototype test is that the owner has gotten a notification that they have forgotten to feed their dog; they check the mobile application for more information and subsequently go fill their dog’s bowl.
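
As an illustration of the bowl’s indicator and alert logic, here is a small Python sketch; the prototype specifies the behavior but not these particular colors or thresholds, which are our own placeholders.

# Hypothetical thresholds for the bowl LED and the "forgot to feed" alert.
GREEN_HOURS = 4   # fed recently
YELLOW_HOURS = 8  # getting close to the next feeding
ALERT_HOURS = 12  # push an alert to the owner's phone

def bowl_led_color(hours_since_fill: float) -> str:
    if hours_since_fill < GREEN_HOURS:
        return "green"
    if hours_since_fill < YELLOW_HOURS:
        return "yellow"
    return "red"

def should_alert(hours_since_fill: float) -> bool:
    return hours_since_fill >= ALERT_HOURS

print(bowl_led_color(9.5), should_alert(9.5))  # red False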

Task 2: Checking and regulating the activity/healthiness of your dog

The user can check the activity level of his or her dog by looking at its collar – a single LED will only light if the dog has lower levels of activity than usual (for the past 24 hours). The user can also find more detailed information about their dog’s activity level by looking at the app, which shows the dog’s level of activity throughout the day, week, or month, and assigns a general “wellness” level according to the dog’s activity level that is displayed on the home screen as a meter. This prototype should be tested in two ways — using the collar alone or just the mobile application.  The backstory for testing only the collar prototype is that the owner has just arrived home from work and wants to know whether the dog needs to be taken on a walk (or whether it has received enough physical activity from being outside during the day when the owner was not home) — using the LED on the collar, the owner can make a decision.  The backstory for testing only the mobile prototype is that the owner has recently changed their work schedule and wishes to see whether this has adversely affected their ability to give their dog enough physical activity — they can check this by looking at the week/month views of the mobile app.
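
Similarly, the collar’s single LED could be driven by comparing the last 24 hours of activity against a rolling baseline; the 75% cutoff and the step counts in this sketch are our own placeholders.

# Hypothetical logic for the collar LED: light only when the past 24 hours of
# activity falls well below the dog's recent daily average.
def collar_led_on(last_24h_steps: int, daily_history: list[int]) -> bool:
    if not daily_history:
        return False
    baseline = sum(daily_history) / len(daily_history)
    return last_24h_steps < 0.75 * baseline

print(collar_led_on(3000, [8000, 9000, 7500]))  # True: well below usual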

Task 3: Generate, view, and share a summary of your dog’s health over a long period of time.

The user can generate, view, and share a summary of their dog’s health over a long period of time by using the “Export data” button on the application, which also has the option of sending the information to someone else (probably a veterinarian).  This mobile application prototype will be tested by having users interact with the relevant prototype screens.  The backstory for testing is that the user has a veterinarian appointment the next day, but does not remember exactly how much they have been feeding their dog/how much activity it has gotten, and would not be able to tell the vet much from memory.  Using the prototype, they can automatically send detailed information straight to the vet.

Video of the tasks here: https://www.youtube.com/watch?v=KIixVJ21zQ0

f) Discussion of Prototype

We started the process of making our prototype by first brainstorming the most convenient ways that a user could perform these tasks. Continuous revisions were made until we believed we had streamlined these tasks as much as possible within our technological limitations. Afterwards, we created an initial design for the application, and quickly created prototypes for the mobile application, collar, and bowl. While not particularly revolutionary, using a physical bowl (made out of paper) let us simulate the usage of the bowl. While we considered including some surrogate imitation of a dog, we decided against it, as all of our ideas (hand puppets, images, video, etc.) were considered too distracting for the tester. Because the collar is an interface ideally out of the hands of the user, we decided to simply show them a prototype of what they would see on the collar, as well as their data updating on the application.

Perhaps the most difficult aspect of making the prototype was figuring out how we could make the user “interact” with their dog without actually bringing in their dog. It was also difficult to design prototypes that had a minimal design (i.e., tested all of the relevant tasks while not distracting the user with “flashy” icons or features). We found that the paper prototypes worked well to help us envision how the app would look and how it could be improved. The prototypes for the bowl and collar were also helpful in identifying exactly what information the user would need to know and what was superfluous. Using very simple prototype materials/designs for the bowl and collar was helpful to our thinking/design process. While the paper prototypes submitted in this assignment were created through multiple revisions, the prototype will probably continue to be revised for P4.

P3: Prototyping the BlueCane

Mission Statement

Our mission is to improve the autonomy, safety, and overall level of comfort of blind users as they attempt to navigate their world using cane travel. Our system will accomplish this by solving many of the problems users face when using a traditional long, white cane. Specifically, we will allow users to interact with their GPS devices without having to dedicate their only other free hand to its use by integrating Bluetooth functionality into the cane itself, and our system of haptic feedback will allow users to receive guidance that is perfectly clear even in a noisy environment and does not distract them from listening to basic environmental cues. In addition, the compass functionality we add will allow users to always have an on-demand awareness of their cardinal orientation, even indoors where their GPS does not function. Finally, because we recognize the utility that traditional cane travel techniques have come to offer, our system will perform all of these functions without sacrificing any of the use or familiarity of the standard cane.

Description of tasks
1. Navigating an Unfamiliar Path While Carrying Items:
We will have our users perform the tests while carrying an item in their non-cane hand. To replicate how the task would actually be performed from start to finish, we will first have the user announce aloud the destination they are to reach (as they would using hands-free navigation), and then via “Wizard of Oz” techniques we will provide the turn-by-turn directions.

We did a few test-runs in the ELE lab and found that it was necessary to dampen the extra noise created by our wizardry. The video below is a quick example of the method we will use when testing the prototype with users.

And the same method while carrying an item in the other hand:
[Video]

2. Navigating in a Noisy Environment:
An important aspect of the design was to eliminate the user’s dependence on audio cues and allow them to pay attention to the environment around them. Likewise, we recognized that some environments (e.g. busy city streets) make navigating with audio cues difficult or impossible. In order to simulate this in our testing, we will ask the user to perform a similar navigation task as in Task 1 under less optimal conditions: the user will listen to the ambient noise of a busy city through headphones.

3. Navigating an Unfamiliar Indoor Space:
When navigating a large indoor space without “shorelinable” landmarks, the user uses the cane to maintain a straight heading as they traverse the space, and to maintain their orientation as they construct a mental map of the area. With our prototype, the user will be told that a landmark exists in a specific direction across an open space from their current location. They will attempt to reach the landmark by swinging their cane to maintain a constant heading. A tester will tap the cane each time the user swings it past geographic north, simulating the vibration of a higher-fidelity prototype. The user will also have the option to “set” a direction in which they’d like the tap to occur by initially pointing their cane in a direction, and will be asked to evaluate the effectiveness of the two methods. The user will be asked to explore the space for some time, and will afterwards be asked to evaluate the usefulness of the cane in forming their mental map of the area.

Description of prototype
Our prototype consists of a 4ft PVC pipe, and a padded stick meant to provide tactile feedback without giving additional auditory cues. The PVC pipe is meant to simulate the long white cane used by blind people. The intended functionality of the product is to have the cane vibrate when the user swings the cane over the correct direction (e.g., north). To simulate vibration of the cane when it passes over a cardinal direction, we use the padded stick to tap the PVC pipe when it passes over the intended direction.
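
Here is a small Python sketch of the heading-crossing check that the padded-stick tap simulates, assuming a higher-fidelity cane would report a compass heading in degrees; the function and variable names are ours, and the test assumes individual swings shorter than 180 degrees.

# Hypothetical sketch of the BlueCane vibration trigger: vibrate (here, print)
# whenever the cane's compass heading sweeps across the target bearing.
def crosses(prev_heading: float, heading: float, target: float) -> bool:
    """True if the swing from prev_heading to heading passes over target (degrees)."""
    # Work in the frame of the target so wrap-around at 0/360 is handled.
    a = (prev_heading - target + 180) % 360 - 180
    b = (heading - target + 180) % 360 - 180
    return a == 0 or b == 0 or (a < 0) != (b < 0)

target_bearing = 0.0  # geographic north by default; the user may "set" another bearing
prev = 340.0
for heading in [350.0, 5.0, 20.0, 10.0, 355.0]:  # simulated cane swing
    if crosses(prev, heading, target_bearing):
        print(f"vibrate at {heading:.0f} degrees")
    prev = heading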

How did you make it?
The PVC pipe is used as-is. The padded stick is just a stick with some foam taped to its end as padding.

Other prototyping techniques considered
We considered taping a vibrating motor to the PVC pipe and having a tester control the vibration of the motor through a long wire when the user is swinging the PVC pipe. However, we realized it would not work well since the user would be swinging the pipe quite quickly, and it would be hard for the tester to time the vibration such that the pipe vibrates when it’s pointing in the right direction.

What was difficult?
This prototype was really simple to build.

What worked well?
The foam padding worked well to provide tactile feedback without giving additional auditory feedback.

Group Members: Joseph Bolling, Jacob Simon, Evan Strasnick, Xin Yang Yak

P3 – Name Redacted

Group 20 – Name Redacted

Brian worked on the project summary and the discussion of the three prototypes.

Ed worked on the prototype that modeled the TOY programming machine.

Matt worked on the prototype corresponding to the binary lesson.

Josh worked on the prototype for the tutorial to teach instructors how to use this system.

We all worked on the mission statement.

Rationale for the Assignment: Most school districts do not teach computer science and are constrained by technology costs and teacher training.  Our project hopes to rectify these two problems by creating an easy-to-use, fun, and interactive computer program that allows students to learn the fundamentals of computer science without the need for expensive hardware and software.  Our end goal is to create a project that allows users to “code” without typing on a computer.  Thus, this prototype gives us a great opportunity to test the feasibility of this design.  From a user’s perspective there is very little difference between taping paper to a whiteboard and having us draw in the output, and putting magnets on the whiteboard and having a projector show the output generated by the computer.  Thus, we hope that the low-fidelity prototype can give us not only an accurate depiction of how the user will interact with the final system but also provide valuable insight into how to improve the overall user experience (especially since the goal is to create a fun and interactive experience for middle school students).

Mission Statement:  Our group strives to provide every student with a method to learn the fundamentals of computer science in a tangible, fun, and interactive way.  Most schools in the country do not teach computer science because of resource and financial limitations.  However, computer science is one of the fastest growing industries, creating a wide gap between the supply and demand of computer programmers.  By creating a cheap and effective way to teach  the fundamentals of computer science, we hope to give students from all socioeconomic backgrounds the ability to become computer programmers.

Updated Task: We switched our difficult task from the Simplified Turtle Graphics to the teaching of the TOY Lesson to instructors.  Since interviewing an instructor in P2, we realized that a large part of the success of our project relies on teaching instructors how to use the system.  Thus, since our user group expanded from only students to students and teachers, it made sense to focus one task on how instructors would use our interface.

Description of Our Prototype: Our prototype uses paper cards with “fake” AR tags and labels that look similar to those in a real system.  We used tape on the back of the cards to mimic how users will put magnetic or otherwise adhesive cards onto a whiteboard.  Our prototype obviously does not rely on a projector or web camera, so we used whiteboard markers to emulate the information that the program would project onto the whiteboard.  For the tutorial we drew what the whiteboard would look like as the user stepped through the program.  We have 16 number cards (for 0 to f), labels for binary, octal, hexadecimal, and decimal, and cards for LABEL, MOV, PRINT, ADD, SUB, and JNZ.

Completed program

This is where the program outputs.

This is where the memory is displayed. Our interviewed potential users said that understanding memory was a very difficult concept to grasp.

Completion of program with output.

This is where the code goes (the students place the magnetic blocks here).

Task 1 – Teaching Binary:

The objective of this task is to provide a simple, clean, and interactive teaching environment for students to learn the difference between number bases (specifically binary, octal, hex, and decimal).  For user testing, we would approach this from two angles.  First, we will test what it is like to teach with this tool.  For that, we would have the tester imagine they are teaching a classroom of students and using this as an aid in the lesson.  Second, we can see the test through the eyes of the students.  Our tool is meant to be interactive, so after a quick lesson on what the tool is and how it works, we might ask students to complete quick challenges like trying to write 10 in binary.  The point of this system in both cases is to simplify the teaching process and increase engagement through interactivity.

 

Imagine you are teaching a lesson.  The basic idea of our UI is that there are 4 numerical base cards:



And 16 number cards:



They have adhesive on the back and they stick to the white board.  Users then place these cards on the board.  In our real system, we would project output on top of the board and cards, but for this lo-fi prototype, we will use marker, written by the tester, instead.

 

The first way that you could use the system is to put one numerical base card down and place a string of numbers after it.  The system will interpret that string of numbers in the base and provide a representation of that quantity.  In the below example, it displays balls.



Another way of using the system is to put down more than one numerical base card and then place a string of numbers after just one of those cards.  The system will then populate the strings of numbers following the other base cards so that they are all equivalent.



If the user places down something that is not valid (such as a value beyond the base), we would use highlighting from the projector to let them know their error.
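
As a sketch of this interpretation step, the Python below parses the digit cards in the chosen base, flags any digit that is out of range, and populates the equivalent strings for the other base cards; the function names are ours, not part of the system.

# Hypothetical sketch of the binary-lesson logic.
BASES = {"binary": 2, "octal": 8, "decimal": 10, "hexadecimal": 16}
DIGITS = "0123456789abcdef"

def to_base(value: int, base: int) -> str:
    if value == 0:
        return "0"
    out = ""
    while value:
        out = DIGITS[value % base] + out
        value //= base
    return out

def interpret(base_name: str, digit_cards: str):
    base = BASES[base_name]
    value = 0
    for d in digit_cards.lower():
        if d not in DIGITS or DIGITS.index(d) >= base:
            return f"highlight '{d}': not a valid {base_name} digit"
        value = value * base + DIGITS.index(d)
    # Populate the equivalent representation for every base card on the board.
    return {name: to_base(value, b) for name, b in BASES.items()}

print(interpret("binary", "1010"))  # {'binary': '1010', 'octal': '12', 'decimal': '10', 'hexadecimal': 'a'}
print(interpret("octal", "18"))     # highlight '8': not a valid octal digit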


 

Task 2 – TOY Programming:

This lesson is used to teach the fundamentals of how all computers work. We provide users cards with a simplified version of assembly language and visual tags printed on them. The teacher or student will use this system to learn about computation and simple logic in a classroom setting. As the user places cards on the board, the projector will overlay syntax highlighting and other visual cues so the user gets feedback as he or she is writing the program. Then, when the user is done writing the program, they place the RUN card on the board. The system first checks if the program is syntactically correct and, if not, displays helpful messages on the board. Then, the system walks through the code step by step, showing the contents of memory and output of the program. As the commands are all very simple and straightforward, there is no confusing “magic” happening behind the scenes and it will be very easy for students to understand the core concepts of computation. However, the language is complete enough to make very complex programs. Our paper prototype very closely resembles the form of our final project. We created the visual tags out of paper and stuck them on a whiteboard using tape. We mimicked the actions of the projector by drawing on the board with markers.
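
To illustrate how the card language might be stepped through, here is a minimal interpreter sketch covering MOV, ADD, SUB, PRINT, and JNZ with labels; the operand forms and semantics are our own guesses at the simplified assembly, not the group’s exact specification.

# Hypothetical interpreter for the card language. Memory cells are named,
# and operands are either literal integers or cell names.
def run(program):
    labels = {arg: i for i, (op, *args) in enumerate(program) if op == "LABEL" for arg in args}
    mem, out, pc = {}, [], 0
    def val(x):
        return x if isinstance(x, int) else mem.get(x, 0)
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOV":
            mem[args[0]] = val(args[1])
        elif op == "ADD":
            mem[args[0]] = val(args[0]) + val(args[1])
        elif op == "SUB":
            mem[args[0]] = val(args[0]) - val(args[1])
        elif op == "PRINT":
            out.append(val(args[0]))
        elif op == "JNZ" and val(args[0]) != 0:
            pc = labels[args[1]]
            continue
        pc += 1
    return mem, out

# Example program: count down from 3, printing each value.
program = [
    ("MOV", "x", 3),
    ("LABEL", "loop"),
    ("PRINT", "x"),
    ("SUB", "x", 1),
    ("JNZ", "x", "loop"),
]
print(run(program))  # ({'x': 0}, [3, 2, 1])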


Task 3 – Teaching Tutorial:

The purpose of the tutorial is to familiarize users – the teacher in particular – with the main display interface and teach them how to properly use the instruction cards. Prior to starting the tutorial, the user will need to be given basic instructions on what the system is, why it is useful, and how to set up the system and run the program. Once started, the user should not need any additional help. Text and prompts will be projected on the screen to guide the user through the tutorial, teaching them what the different parts of the display represent and how to properly use each instruction card. The tutorial will advance to the next step after a certain period of time has elapsed, when the user has completed a designated task, or when the user presses a continue key such as the space bar on the computer. Mistakes can be recognized and corrected by the system itself if the user does something wrong.

Discussion of Prototype:

We made our prototype by first creating blocks similar to how they would appear on the magnets that users will use. The blocks have AR Tags (which for now we made up) and a label. There are blocks for all of the numbers from 0 to f, and blocks for the keywords that are supported in the “computer language” we are developing. Another part of our prototype was drawing on the whiteboard how it will look during a lesson. This meant creating the three sections that will be projected – code, memory, and output. We wanted to draw these sections on the whiteboard for our prototype since they change in real time in our project, so we could emulate with markers what the user will see when they use our program. By using the whiteboard, we tried a new prototyping technique. We believed it was more suitable than paper because of the increased flexibility it gives us. When we test our prototype with real users, we want the freedom to display the correct output, memory, and error messages; this would have required too much paper, since we have many different possible values in each of the four registers at any given moment, among other things. Also, since our system will rely on the whiteboard, it made sense to have the users interact with a whiteboard when testing our prototype.

One of the challenges we had to confront arises from the primary user group being younger students. We had to keep the tags simple and few enough that students could reasonably understand what they do, while still providing a reasonable amount of functionality. It was difficult to come up with a good whiteboard interface for the users; we wanted something simple that still conveyed all of the useful information. One idea that we considered was an input tag that would allow the user to input data to the TOY program. We decided, however, that this made the programming unnecessarily complex while not adding much benefit. Most of the difficulty in creating the prototype was similar: deciding what functionality to include so as to offer a complete introduction to the material without overly complicating the learning process. Even though using the whiteboard rather than paper presented some difficulties, we think it works very well in terms of simulating the program. It was also important that our prototype not lie flat on a surface, since the final project will use a projector image on a whiteboard. We believe our prototype very closely resembles how we currently envision the end product.

The Elite Four (#19) P3

The Elite Four (#19)
Jae (jyltwo)
Clay (cwhetung)
Jeff (jasnyder)
Michael (menewman)

Mission Statement:
We are developing a system that will ensure users do not leave their room/home without essential items such as keys, phones, or wallets. Our system will also assist users in locating lost tagged items. Currently, the burden of being prepared for the day is placed entirely on the user. Simple forgetfulness can often be troublesome in living situations with self-locking doors, such as dorms. Most users develop particular habits in order to try to remember their keys, but they often fail. By using a low-fidelity prototype, we hope to identify any obvious problems with our interface and establish how we want our system to generally be used. Hopefully, we can make this process easy and intuitive for the user.

Statement: We will develop a minimally inconvenient system to ensure that users remember to bring important items with them when they leave their residences; the system will also help users locate lost tagged items.

We all worked together to create the prototype and film the video. Jae provided the acting and product demo, Clay provided narration, Jeff was the wizard of Oz, and Michael was the cameraman. We answered the questions and wrote up the blog post together while we were still in the same room.

Prototype:
We created a cardboard prototype of our device. The device is meant to be mounted on the wall next to the exit door. Initially, the user will register a separate RFID tag for each item he or she wants to keep track of. After that, the entire process will be automated. The device lights up blue in its natural state, and when the user walks past the device with all the registered RFID tags, the device lights up green and plays a happy noise. When the user walks past the device without some or any of the registered RFID tags, the device lights up red and plays a warning noise. The device is just a case that holds the Arduino, breadboard, speakers, RFID receiver, LEDs, and buttons for “Sync” and “Find” modes. The Arduino handles all of the RFID communication and will be programmed to control the LEDs and speakers. “Sync” mode will only be toggled when registering an RFID tag for the first time. “Find” mode will only be toggled when removing the device from the door in order to locate lost items.

The blue- and red-lit versions of the prototype, plus the card form-factor RFID tag

The green- and red-lit versions of the prototype, plus the card form-factor RFID tag

Task 1 Description:
The first task (easy difficulty) is alerting the user if they try to leave the room without carrying their RFID-tagged items. For our prototype, the first step is syncing the important tagged item(s), which can be done by holding the tag near the device and holding the sync button until the lights change color. Next, the user can open the door with or without the tagged items in close proximity. If the tagged items are within the sensor’s range when the door is opened, the prototype is switched from its neutral color (blue) to its happy color (green), and the device emits happy noises (provided by Jeff). If the tagged items are not in range, the prototype is switched to its unhappy color (red), and unhappy noises are emitted (also provided by Jeff). This functionality can be seen in the first video.
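
For reference, the door-check flow the Arduino would eventually run could look roughly like the sketch below. The pin numbers and the doorOpened()/allTagsPresent() helpers are hypothetical placeholders for hardware we have not wired up yet, so this is only an illustration of the logic, not our firmware.

// Placeholder pin assignments for the lo-fi design.
const int GREEN_LED = 5;
const int RED_LED   = 6;
const int BUZZER    = 7;

// Stand-ins for the real sensors: a door switch, and the RFID receiver
// confirming that every synced tag is in range.
bool doorOpened()     { return false; }
bool allTagsPresent() { return false; }

void setup() {
  pinMode(GREEN_LED, OUTPUT);
  pinMode(RED_LED, OUTPUT);
  pinMode(BUZZER, OUTPUT);
}

void loop() {
  if (doorOpened()) {
    if (allTagsPresent()) {
      digitalWrite(GREEN_LED, HIGH);   // happy color
      tone(BUZZER, 880, 200);          // short happy beep
    } else {
      digitalWrite(RED_LED, HIGH);     // warning color
      tone(BUZZER, 220, 1000);         // longer warning tone
    }
    delay(2000);                       // hold the state while the user reacts
    digitalWrite(GREEN_LED, LOW);
    digitalWrite(RED_LED, LOW);
  }
}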

The device is in door-mounted "neutral mode"; the user has not opened the door yet

When the door is opened but tagged items are not in proximity, the device lights up red and plays a warning noise

When the tagged item(s) is/are in close proximity to the device and the door is opened, the device lights up green and plays a happy noise

Task 2 Description:
The second task (moderate difficulty) is finding lost tagged items within one’s room/home. For our prototype, this is accomplished by removing the device from the wall, pressing the “Find” button, and walking around with the device. The speed of beeping (provided by Jeff in the video) indicates the distance to the tagged item and increases as the user gets closer. This functionality is demonstrated in the first video.
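
The beep pacing for “Find” mode could be sketched roughly as follows; readSignalStrength() is a made-up stand-in for however we end up estimating tag proximity. The idea is simply that a stronger reading shortens the pause between beeps, while a slow beep continues even when no tag is detected.

const int BUZZER = 7;

// Placeholder for a proximity estimate from the RFID hardware:
// 0 means no tag detected, 100 means very close.
int readSignalStrength() { return 0; }

void setup() {
  pinMode(BUZZER, OUTPUT);
}

void loop() {
  int strength = readSignalStrength();
  if (strength == 0) {
    tone(BUZZER, 440, 50);                      // slow beep: still in find mode, nothing detected
    delay(2000);
  } else {
    tone(BUZZER, 880, 50);
    delay(map(strength, 1, 100, 1500, 100));    // closer tag -> shorter pause -> faster beeping
  }
}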

The device can be removed from the wall and used to locate missing tagged items

Task 3 Description:
The third task (hard difficulty) is finding lost items outside of one’s residence. As before, the user removes the device from the wall and uses the frequency of beeps to locate the item. This task presents the additional challenge that the item may not be within the range of our transmitter/receiver pair. To overcome this, the user must have a general idea of where the object is. Our system can then help them find the lost item, with a range of up to eight meters. This range should be sufficient for most cases. This functionality is shown in the second video.

(Visually, this is identical to Task 2, so no additional photos are provided.)

Video Documentation:
Tasks 1 & 2: Syncing, forgotten item notification, & local item-finding

Task 3: Remote item-finding

Discussion:
Our project has a very simple user interface, since the device is intended to require as little user interaction as possible. There are no screens, so we used cardboard to build a lo-fi prototype of the device itself. There are three versions of the device; they differ only in the color of the LEDs as we have described above. “The device” is just a case (not necessarily closed) that holds the Arduino, breadboard, speakers, RFID receiver, LEDs, and buttons for “Sync” and “Find” modes. The functionality of each of these is described in the photos and videos. For our prototype we did not exactly come up with any new ways of prototyping, but we did rely heavily on “wizard of Oz” style prototyping, where one of our members provided sound effects and swapped different versions of the prototype in and out based on the situation.

It was somewhat difficult to find a way to effectively represent our system using only ourselves and cardboard. Since our system is not screen-based, “paper prototyping” wasn’t as easy as drawing up a web or mobile interface. The system’s interface consists mainly of button-pressing and proximity (for input) and LEDs/sound (for output), so we used a combination of cardboard/colored pen craftsmanship and human sound effects. The physical nature of the prototype worked well. It helped us visualize and understand how our device’s form factor would affect its usage. For example, using a credit card as an RFID tag (which is roughly the same size as the one we ordered) helped us understand the possible use cases for different tag form factors. While experimenting with different visual/auditory feedback for our item-finding mode, we realized that when no tagged item is detected, a slow beep, rather than no beeping at all, could help remind users that the device is still in item-finding mode.

Group 10: Lab 3

Group Number and Name

Team X – Group 10

Group Members

  • Osman Khwaja (okhwaja)
  • Junjun Chen (junjunc)
  • Igor Zabukovec (iz)
  • (av)

Description of System

We built a crawler that propels itself forward by “punting”. We were inspired by punt boats, which are propelled by a punter pushing against the bottom of a shallow riverbed with a long pole. Our system works by having the crawler push against the surface of the table with a short pole (attached to a servo motor), sliding itself forward as a result. The crawler does not actually move in the direction we expected when we made it, and another problem we had is that there was no way to do the equivalent of lifting the pole out of the water to bring it back to its initial position. These two problems combined resulted in a crawler that worked differently from how we first intended, but nevertheless managed to scuttle forward fairly effectively. To improve this, we might try mounting the servo motor on a wheeled cart, which would make its movement more consistent with our intention.

Brainstorming

  1. Ballerina – DC motor that spins a pencil wearing something similar to a dress
  2. Rocking Chair – Uses servo motor to rotate between end positions, just like a rocking chair
  3. Unicycle – wheel surrounds a DC motor and the wheel rotates as the DC motor does
  4. Yarn Ball Retracer – DC motor winds up a string and pulls itself along it until all the string is wound up again
  5. Waddle Walker – Uses the servo motor to awkwardly waddle/walk in a semi-straight line by alternating leg movements
  6. Log roller – DC motor attached to a log-shaped object that is connected to the spinning axle of the motor
  7. Reversed roller – DC motor spins a gear which is connected to an object that moves along the ground. The direction of rotation of the DC motor is reversed due to the gear connection, and the object rolls in the reverse direction
  8. Hamster Ball – A DC motor is attached to a wheel (or small ball) inside a ball (the wheel is not connected to the ball, but pushes it).
  9. Wheelbarrow – A DC motor moves the front wheels of the vehicle, but it only moves if something is holding up the back.
  10. Lurcher – A two-legged robot; one leg is static and is dragged by the other, which takes steps using a servo motor.
  11. Propeller Robot – A robot on wheels that moves by using a propeller attached to the back that blows it along.
  12. Punt – We attach a punting pole to a servo motor and mount it on a wheeled cart. The motor pushes the pole into the ground periodically, pushing the cart forward.

Sketches

Circuit Sketch (taken from Arduino tutorial on Servo Motors):

 

Crawler Sketch:


 

Demonstration Video

Parts Used

  • Servo motor
  • Arduino
  • Cardboard Arduino box
  • Empty spool (to raise servo motor)
  • Straws (for punting pole)
  • Electrical tape
  • Jumper wires

Instructions

  1. Tape the empty spool vertically onto the empty Arduino Uno box.
  2. Tape the servo motor horizontally on top of the spool.
  3. Fashion an oar out of some small sticks by taping them together. At one end of the oar (the base), create a rounded end with electrical tape.
  4. Tape the oar to the rotating part of the servo motor such that the oar will hit the ground during rotation. Attach the oar so that it provides forward locomotion by angling it away from the box.
  5. Connect the servo motor to the breadboard and the Arduino as shown in the circuit sketch from the Adafruit Learning System tutorial.
  6. Connect the Arduino to power and let the crawler propel itself forward.

Source Code

#include <Servo.h>

int servoPin = 9;          // signal pin for the servo
Servo servo;

int angle = 0;             // current pole angle in degrees
int sweepStep = 1;         // sweep direction: +1 or -1

void setup()
{
  servo.attach(servoPin);
}

void loop()
{
  // Sweep the punting pole back and forth between 0 and 180 degrees so it
  // repeatedly pushes against the table instead of stopping at 180.
  angle += sweepStep;
  if (angle >= 180 || angle <= 0) {
    sweepStep = -sweepStep; // reverse direction at either end of the sweep
  }
  servo.write(angle);
  delay(15);                // pace the sweep so the pole has time to push
}