Lifehackers Final Project

Group 15: Lifehackers
Colleen, Gabriel, Prakhar

One-sentence description of your project

Our project is a glove that allows us to control (simulated) phone actions by sensing various hand gestures.

Hyperlinks to your blog posts for all previous project assignments (P1 to P6)

P1:
https://blogs.princeton.edu/humancomputerinterface/2013/02/22/life-hackers-p1/
P2:
https://blogs.princeton.edu/humancomputerinterface/2013/03/14/p2-life-hackers/
P3:
https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-life-hackers/
P4:
https://blogs.princeton.edu/humancomputerinterface/2013/04/08/group-15-p4-lo-fi-user-testing/
P5:
https://blogs.princeton.edu/humancomputerinterface/2013/04/29/group-15-p5-working-prototype/
P6:
https://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-user-testing-of-gesture-glove/

Videos

Answering a call:

Playing Music:

Setting a Password and Unlocking the Phone:

List of Changes Implemented Since P6

  • Updated the unlock sketch to make it more intuitive and efficient. Now, instead of holding each gesture for 3 seconds, the sequence advances automatically as soon as the user makes the correct gesture. The 3-second limit is still present for security purposes.
    This change was made because some users from P6 complained that the unlocking process was too long; they pointed out that unlocking should not be such a hassle.

  • Updated the music player application code to read gestures more frequently but register each gesture only once while it is pressed and held (see the sketch below).
    This was done because testers had difficulty using the music player: depending on how the delay between readings was set, it was either too sensitive or not sensitive enough. The new approach keeps a healthy level of responsiveness while removing those side effects.
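
As a rough illustration of the second change, here is a minimal Arduino-style sketch of the press-and-hold logic. readGesture() is a hypothetical placeholder for our sensor-decoding code, and the real dispatch happens in the Processing applications:

// Poll the sensors frequently, but register a gesture only on the
// transition from "not held" to "held", so holding a gesture does not
// retrigger the action.

const int GESTURE_NONE = 0;

int lastGesture = GESTURE_NONE;

// Placeholder: decode the flex/force sensor readings into a gesture ID.
int readGesture() {
  return GESTURE_NONE;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int gesture = readGesture();                // read every 20 ms
  if (gesture != lastGesture && gesture != GESTURE_NONE) {
    Serial.print("Gesture registered: ");     // fires once per hold
    Serial.println(gesture);
  }
  lastGesture = gesture;
  delay(20);
}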

Evolution of Goals and Design through the Semester

Our goals for this project evolved dramatically over the course of the semester. Our design originally grew out of an idea for a game to play on your phone while running on a treadmill, in order to encourage users to exercise. In an attempt to make it more practical, we then generalized the idea to all types of interaction on a treadmill, instead of limiting it to a game. Then, during the contextual inquiries in the second project assignment, we realized that we could expand the target user group: it didn’t make sense to limit the application to users running on a treadmill, since off-screen interaction could be useful in other situations as well. Thus, we arrived at the idea of generalizing off-screen interaction with a phone for people going to class in cold weather; the medium we chose was a glove, an article that currently interferes with smartphone interaction but is also necessary to wear in cold weather.

After deciding on this idea, we had much more confidence in our project and its practical implications. However, when we approached the working prototype assignment, we hit another roadblock: we realized how difficult it is to develop an application that alters the native functionality of smartphones. This infeasibility led us to modify our goal of designing a smartphone application; instead, we changed our design to run as a series of Processing applications on the computer. With further iteration and collaboration with mobile operating system designers, our system could be developed into a true mobile application.

Critical Evaluation of Project

While developing our prototype, we came across a number of design difficulties that we would need to overcome before our project could become a real-world system. First of all, the glove would need to be waterproof: we have proposed our system for use in the cold, which can mean precipitation and other elements of nature, a problem for an electronic system that is sensitive to water. Along the same lines, our product is a pair of gloves, and people use their hands very often for all sorts of tasks. In our very first paper prototype, a cardboard “sensor” came off the glove while a user was attempting to open a door. We realized just how robust our gloves would need to be to withstand everyday use. However, waterproofing and strengthening our system could potentially add a fair amount of bulk to the gloves. We already found that the bulk sometimes made it hard to move certain fingers, so a lot of work would have to go into weighing these trade-offs and finding or developing materials that meet all of the glove’s requirements.

One of the most important drawbacks of our current system is that it does not actually run on a phone, because smartphones do not allow third-party apps to access functionality such as answering a call. For our system to be viable, we would most likely need Apple and Google to enable support themselves, as a built-in function of the phone, as is done for controlling the phone over Bluetooth or by voice. We also found that applications that control native functionality are possible on a rooted or jailbroken device; for instance, the application CallController lets iPhone owners use the accelerometer to answer calls. However, that route obviously limits our potential user base. Finally, we can see that making a system that works for people with different hand shapes, sizes, and flexibilities would be a huge challenge. People’s hands, and the ways they make gestures, can be very different. We would need to develop a calibration mechanism to learn the threshold reading for when a user is flexing or pressing a sensor, and might even need to rely on machine learning to make the system truly adaptable to variation among users.

With all this said, we do believe that the gesture glove could be a real-world system. We learned a lot about the challenges of gesture interactions and glove controllers, but none of the obstacles listed above are impossible to overcome. We would mainly need to focus on making the gloves suitable for the way users handle them and make gestures, as well as on partnering with smartphone companies. There is definitely a need for a better alternative for controlling smartphones outdoors or in the cold. With more cycles of redesigning and testing, we imagine that our glove could be this alternative.

Future Steps

As mentioned above, one of the main issues with our prototype is that the tasks are currently done on simplified simulations of a phone that we created using Processing. We would solve this by approaching Apple and Google to discuss partnering on the project. We anticipate that this is not difficult to implement; it is more of a business problem. Some of our main implementation problems are in waterproofing and strengthening the gloves. To solve this without creating an overly bulky glove, we would use smaller sensors and integrate them into the glove’s fabric, with the outside of the glove made of a waterproof material. We would then test the product by letting users with various lifestyles use the gloves for some period of time, say a month, and then see what condition the gloves are in and why. We could then improve the resilience of our gloves and do another round of testing, until we are satisfied with their robustness.

The main challenge in user interaction with our system is creating gestures that work for everyone. The first step in solving this issue is to implement a calibration system: the user could open and close a fist and then press each finger to their thumb, which would let us quickly determine custom threshold values for a particular user and set of sensors, so that from then on the program can tell when the user is flexing or pressing a sensor (see the sketch below). Secondly, we would need to user-test the gestures we have chosen for the existing tasks and any more we may choose to implement. We found that many users had issues with the gestures we chose, but there was a lot of variation in their particular complaints: which gesture they were unhappy with, and for what reason. This suggests that we will need to test many different gesture options with a much larger sample of users. We may also experiment with ways to make the system more customizable, either by offering different sets of gestures to choose from or by letting users define their own gestures. We imagine that this would be the most difficult problem to solve.
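
To make the proposed calibration step concrete, here is a minimal Arduino-style sketch for a single flex sensor. The pin assignment (A0) and the 5-second calibration window are our assumptions, not part of the current prototype:

// During a 5-second window the user opens and closes a fist; we record the
// extremes of the flex reading and set the per-user threshold at the midpoint.

const int flexPin = A0;   // assumed wiring: one flex sensor on analog pin 0

int flexThreshold;

void setup() {
  Serial.begin(9600);
  int lo = 1023;
  int hi = 0;
  unsigned long start = millis();
  while (millis() - start < 5000) {
    int reading = analogRead(flexPin);
    lo = min(lo, reading);   // most extended position seen so far
    hi = max(hi, reading);   // most flexed position seen so far
    delay(10);
  }
  flexThreshold = (lo + hi) / 2;
  Serial.print("Flex threshold: ");
  Serial.println(flexThreshold);
}

void loop() {
  // From now on the program can tell a flexed finger from an open one.
  bool flexed = analogRead(flexPin) > flexThreshold;
  Serial.println(flexed ? "flexed" : "open");
  delay(100);
}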

Source Code

Source Code: https://www.dropbox.com/s/o6tqo7axfxsvlgp/FinalCode.zip

README: https://www.dropbox.com/s/ksej59d360c1t9b/README.txt?n=5352256

Third-Party Code Used

Demo Printed Materials

https://www.dropbox.com/sh/ang9lxss5ok4dx4/m7LcaaOP14

L3 – Waddlebot

Group 15: Prakhar Agarwal (pagarwal@), Colleen Carroll (cecarrol@), Gabriel Chen (gcthree@)

Description:

We built a robot that moves on the principle of vibration. The robot has two legs, attached at the top to a DC motor that spins an asymmetric propeller. The spinning propeller constantly shifts the robot’s center of gravity, causing it to vibrate and waddle in random directions of its choosing. Our motivation came from trying to emulate the motion of a penguin, and we think we were successful in meeting this goal. We were also impressed with the power of the propulsion system we chose. Although the overall level of motion we achieved was satisfactory, if we did it again we would like to have some degree of control over the motion; the current motion is undirected and rather random.

Brainstorming:

  • Make a robot hamster using a hamster ball
  • Food Car: donuts for wheels, Entenmann's box for body
  • Rolling robot which moves by controlling an inner ball inside the outside shell
  • Rolling robot with edges, so that it does flips
  • Collision-detecting robot using a photosensor to stop when it’s close to something
  • Row-boat robot: instead of swimming, use motor to run mini oars and row a boat
  • Robot that follows light with photosensors on all sides
  • Wind-up robot: wind up with rubber band on motor in one direction, and then let go
  • Robot that waddles based on vibration caused by an asymmetric propeller
  • Robot that starts the propeller of a light flight machine, release from motor to fly
  • Jumping robot that works by winding up a band and then letting go
  • Frog robot: synchronize servo motor movement to make a robot jump
  • Using servo motors to make something move like a worm
  • Fan propelled robot car
  • A robot that moves other items by catapulting them

Sketches/Video:

The inspiration for our design!

One of our design sketches. The first prototype had the breadboard hanging in the middle, which proved to be a bad idea.

Final and improved design sketch.

Materials:

  • 1 Small 6V DC motor
  • 1 Asymmetric propeller (eraser or servo arm both work well)
  • 2 Strips of cardboard
  • 1 Small breadboard
  • 1 PN2222 Transistor
  • 1 1N4001 Diode
  • 1 270 Ω Resistor
  • 1 Arduino Uno
  • 4 Rubber feet
  • Jumper wires
  • Tape

Instructions:

Using the breadboard, connect the DC motor, transistor, diode, resistor and Arduino as shown in the following diagram:

Circuit diagram for the robot.

Cut the cardboard strips so that they are about the same width as the motor and tape one to each side of the motor. Make sure that the motor is oriented vertically so that the robot can stand. Next, tape the breadboard to one of the cardboard strips. Then, bend the strips at the bottom, creating a flat surface for the rubber feet to attach to. Finally, attach the asymmetric propeller to the motor's shaft. Now you have a PenguinBot!

Source Code:

/*
Adafruit Arduino - Lesson 13. DC Motor
Drives the vibration motor at a speed (0-255) typed into the Serial monitor.
*/

int motorPin = 3; // PWM pin driving the transistor that switches the motor

void setup()
{
  pinMode(motorPin, OUTPUT);
  Serial.begin(9600);
  while (!Serial); // wait for the Serial monitor to open
  Serial.println("Speed 0 to 255");
}

void loop()
{
  if (Serial.available())
  {
    int speed = Serial.parseInt();
    if (speed >= 0 && speed <= 255)
    {
      analogWrite(motorPin, speed); // duty cycle sets vibration strength
    }
  }
}

P3 – Life Hackers

Group 15: Prakhar Agarwal (pagarwal@), Colleen Carroll (cecarrol@), Gabriel Chen (gcthree@)

Mission Statement

Currently there is no suitable solution for using a touchscreen phone comfortably in cold weather. Users must either resort to using their bare hands in the cold or use unreliable “touchscreen compatible” gloves that often do not work as expected. Our mission is to create an off-screen UI for mobile users in cold weather. In our lo-fi prototype testing, we hope to learn how simple and intuitive the gestures we have chosen for our glove really are for smartphone users. In addition to the off-screen UI, there is a phone application that lets users set a password for the phone.

We are all equally committed to this project, and we plan on dividing the roles evenly. Each member of our group contributed to portions of the writing and prototyping, and while testing the prototype we split up the three roles of videotaping, being the subject, and acting as the “wizard of Oz.”

Document the Prototype

Because our user interface is a glove with built-in sensors, we decided to prototype using a leather glove and cardboard. The cardboard is a low-fidelity representation of the sensors, intended to test whether the sensors would impede the motion or the ability to make gestures with the actual glove. For the on-screen user interface, we noted that most of the functionality we want the glove to control is already built into the phone. For this reason, we decided to simply have test users interact with their phone while a “wizard of Oz” performed the “virtual” functionality by actually touching the phone screen. In addition, since the application for setting one’s password using our device has not yet been developed, we sketched a paper prototype for this functionality. By user-testing this prototype we hope to evaluate the overall ease of use of our interface.

Task 1: Setting the Password/Unlocking the Phone (Hard)

This task must be performed before using the other applications we have implemented, so it is important that it be possible with the gloves on; otherwise, users would have to unlock their phone in the cold before each of our other tasks. The password is set using an on-screen interface in conjunction with the gesture glove. A user follows on-screen instructions, represented in the prototype with paper. They are told that they can only use finger flexes, unflexes, and pressing fingers together. They then hold a gesture on the glove and press a set button (with the unwired glove). The screen prints out what it interpreted as the gesture (for example, “Index and middle finger flexed”). When the user is satisfied with the sequence, they can press the Done Setting button on screen. This task is labeled as hard because it involves a sequence of gestures mapping to a single function or action; in addition, users setting their gesture sequence need to interact with the application on screen. A sketch of the unlock check appears below.
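
Assuming gesture IDs produced by a hypothetical readGesture(), a minimal Arduino-style sketch of the unlock check (with the auto-advance and 3-second limit described in the change list above) might look like this:

// Walk through the stored gesture sequence; advance as soon as the correct
// gesture is read, and reset the attempt if any step takes longer than 3 s.

const int password[] = {2, 5, 1};   // example stored sequence of gesture IDs
const int passwordLen = 3;

int step = 0;
unsigned long stepStart = 0;

// Placeholder for the sensor-decoding code (0 means no gesture).
int readGesture() {
  return 0;
}

void setup() {
  Serial.begin(9600);
  stepStart = millis();
}

void loop() {
  if (millis() - stepStart > 3000) {     // 3-second limit per gesture
    step = 0;                            // too slow: start the attempt over
    stepStart = millis();
  }
  if (readGesture() == password[step]) { // correct gesture: auto-advance
    step++;
    stepStart = millis();
    if (step == passwordLen) {
      Serial.println("Unlocked");
      step = 0;
    }
  }
  // A stricter version might also reset on an incorrect gesture.
  delay(20);
}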

Task 2: Picking up and Hanging Up the Phone (Easy)

One of the most common tasks to perform outside while walking is talking on the phone, so it is a perfect interface to reinvent for our glove. Picking up and hanging up the phone use standard gestures, as opposed to the user-determined gestures for setting a password. They use a familiar sign for picking up a phone, shown in the photo: thumb to the ear and pinky to the mouth, with the rest of the fingers folded. This is the easiest of the three tasks we have defined; the user simply performs the built-in gesture of making a phone with his or her hand and moving it accordingly.

Task 3: Play, Pause, and Skip Through Music (Medium)

From our contextual inquiries with users during the past assignment, we found that listening to music is one of the most common actions for which people use their phone while in transit. However, currently one needs specialized headphones to play, pause, or change the music they are listening to without touching the screen. The glove provides users with another simple interface to do so. To play music, users simply make the rock and roll sign shown in the photo. To pause the music, users hold up their hand in a halt sign. To skip forward a track, users point their thumb to the right; to skip back a track, they point their index finger to the left. A sketch of this mapping follows.
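
As a rough illustration of how these gestures map one-to-one to player actions, here is a minimal Arduino-style dispatch. The gesture IDs are hypothetical, and the printed actions stand in for commands sent to the on-screen music player:

// One-to-one mapping from recognized gestures to music-player actions.

enum Gesture { NONE, ROCK_SIGN, HALT_SIGN, THUMB_RIGHT, INDEX_LEFT };

void dispatch(Gesture g) {
  switch (g) {
    case ROCK_SIGN:   Serial.println("play");  break; // rock and roll sign
    case HALT_SIGN:   Serial.println("pause"); break; // halt sign
    case THUMB_RIGHT: Serial.println("next");  break; // thumb to the right
    case INDEX_LEFT:  Serial.println("prev");  break; // index finger to the left
    default:          break;                          // no gesture: do nothing
  }
}

void setup() {
  Serial.begin(9600);
  dispatch(ROCK_SIGN);   // example: start playback
}

void loop() {
}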

Discuss the Prototype

We made our prototype by taping the cardboard “sensors” to the leather glove for the gesture-based component of our design. The phone interface was made partially by paper prototyping and partially by using the actual screen. We simulated virtual interaction using the “wizard of Oz” technique described in the assignment specifications. From this, we found a couple of things that worked well in our prototype. Our gestures were simple in that they mapped one-to-one to specific tasks. We believe the interface (for setting passwords specifically) proved simple, while hopefully conveying enough information for the user to understand it. The system relies on gestures that are already familiar to the user, such as the rock and roll sign and the telephone sign. We also saw that while wearing the glove, we could generally complete most everyday tasks off the phone (e.g., winding up a laptop charger cord) without added difficulty.

There were, however, definitely some things that prototyping helped us realize we could improve. We will need to consider the sensitivity of the electrical elements in the glove and its fragility when we construct it. When Prakhar opened a heavy door, one of the cardboard pieces of the prototype became slightly torn, helping us realize just how much wear and tear the glove will have to withstand to be practical for daily use. We also realized that we will need different gloves for lefties and righties, since only one hand will have the gesture sensors in it, and right-handed users will have different preferences than left-handed users. The app will be configured to recognize directional motions based on whether a righty or a lefty is wearing the glove; for example, the movements for skip forward and skip backward would likely be mirrored because of the difference in orientation of the thumb and forefinger on either hand. Another thing we realized is that instead of putting gesture control in the dominant hand, as we initially supposed, we should consider the benefits of putting it in the non-dominant hand, freeing up the dominant hand for other tasks. This was especially noticeable when testing the password-setting functionality, which required users to use the phone and the glove simultaneously; it would be easier to make gestures with the non-dominant hand while using the phone with the dominant one.

P2 – Life Hackers

Group Number:
15

Group Members:

Prakhar Agarwal (pagarwal@)
Gabriel Chen (gcthree@)
Colleen Carroll (cecarrol@)

We all worked together, and equally, on all parts of the assignment. Each member did one contextual interview and storyboard, and we worked together on the other parts for a balanced effort.

Problem and Solution Overview:

The problem we have chosen to address is the difficulty of interacting with one’s phone when it is cold outside. Specifically, when we are wearing gloves, using a phone requires taking them off, because gloves block capacitive sensing and are clunky enough that pressing buttons is rather difficult. Our solution is to develop a glove with a variety of strategically placed motion and flex sensors that recognize hand movements and gestures and translate them into simple tasks on the phone, such as picking up a call or changing the music playing.

Description of Users:

Our target user group consisted of people walking around campus wearing gloves while holding or using a smartphone. On campus, we specifically looked for younger users, as they are the most likely to be technologically connected and dependent; in fact, we tried to interview an older man, and he said that he didn’t even own a cell phone. For observation of possible users beyond campus, we could also look at urban professionals and commuters. The first person we interviewed was on her way back from class; she was wearing knit white gloves and held a pink iPhone. From talking to her, we learned that she was from Georgia, preferred warmer weather, and wore gloves quite often in chillier weather. Her priority was functionality, and she seemed most interested in being able to use her phone effectively and conveniently. Our second interview was with a girl from California. She wore leather gloves and used an old-school smartphone with a very small screen. Her priority was cost, and because of this she was skeptical of the necessity of touch-sensitive gloves. She also mentioned that she didn’t especially dislike the cold. The third interviewee was chosen as a control: this person was inside, using their phone as they would without the hassle of cold weather and gloves. They were asked to do many of the same basic tasks, and notes were taken on the speed and comfort level of performing those tasks.

Contextual Inquiry:

For our contextual interviews, we stood outside of Frist on a cold afternoon and looked for users who fit the description above. Once we found someone who fit the description and was willing to answer a few questions, we asked them about their phone usage and asked them to perform several tasks on the phone. We were most interested in what they did to bypass the inconvenience of wearing gloves, and we made observations on their strategies. The third interviewee was asked to perform the same tasks and was observed as a baseline for the difficulty of those tasks in warm conditions, compared to those asked to perform them outside.

The tasks generally performed by the people we observed and interviewed were pretty standard and common. All interviewees had similar habits in terms of what they used their phones for while walking; the most common functions were phone calls, texts, music and email. Each of these tasks was often preceded by unlocking the phone, although not all of our contextual inquiry subjects had this function enabled. In addition, a common theme was that each interviewee admitted that cold weather deters them from interacting with their phone in certain situations. For comparison, the interviewee inside used their phone so often and easily that they were almost distracted from the interview. Between tasks, the interviewee would check their email and search things online. Switching between tasks was extremely simple, and the user seemed almost not to notice it. From this we conclude that the user needs an even simpler way to switch between tasks with the glove, to match the ease of use in warm conditions.

The interviewees differed in the strategies they used in order to cope with cold weather and phone usage. The first girl we interviewed used a method where she took off enough of her glove to expose her thumb, but left the rest of her glove on. In addition, if she could use her phone with one hand, she would leave the other glove completely on. The second girl took off both of her gloves in order to use her small smartphone. Perhaps the size of her phone required her to hold it with both hands. The third interviewee often switched between using one finger or hand to using several, with different orientations depending on the task that they were performing. Again, it was much easier for the user indoors to use their hands in whatever way they wanted than for those outside.

Task Analysis:

Part A

  1. Who is going to use the system?
    Our system is a glove that lets people perform simple tasks on their phone when it is cold outside, without actually touching the phone. The target user base is mobile, connected individuals who need to walk outside in colder climates. Those who want to wear gloves to keep their hands warm but do things on their phones at the same time would benefit from this system. We also found that younger people are more likely to use the system: when we tried to do a contextual interview with an older man we saw wearing gloves, we found that he didn’t even own a cell phone, whereas younger people are generally more technologically connected.
  2. What tasks do they now perform?
    The users currently perform a number of tasks on their phones. From the contextual interviews, we found that the most common activities performed with smartphones while walking outside are texting and checking emails. Some people also suggested that they enjoy listening to music on their phones while they are walking to and from class for which they will generally only perform simple tasks such as playing/pausing/skipping songs en route. Users who we talked to for contextual interviews also mentioned that they don’t generally talk to others on the phone, with texting and email being much more common alternatives. However, for communicating with family or for professional purposes, they said that phone calls are the communication medium of choice. At the moment, users have to take off their gloves or use “Smart Touch” gloves, which work on touch screens, to perform these tasks.
  3. What tasks are desired?
    During cold weather, people usually try to stay bundled up rather than interact with their phone too much. While someone may do more complicated tasks while walking on a nice day, when it is cold people generally want to get from point A to point B as quickly as possible, performing only essential tasks. The simpler, essential tasks that people do want to perform are picking up phone calls, responding to texts, turning on their phone, etc., without having to fumble clumsily with the phone or take off their gloves.
  4. How are the tasks learned?
    Smartphone interactions are usually learned from what is on screen: an effective smartphone UI is either intuitive or has written instructions. Many interactions, particularly the tasks we are interested in implementing, rely heavily on convention for users to learn them. For example, the keyboard is standard across all applications of the phone, so sending a message or typing a search term is the same everywhere; most keyboards resemble the QWERTY desktop keyboard, though there is considerable variation in how to type special characters. Picking up a phone is usually pressing or swiping a green button, possibly with a slightly different interaction when the phone is locked; this relates back to the standard answer buttons on landline phones. Music players rely on the play, pause, and skip symbols established by many of the earliest digital playback systems. Unlocking, however, varies greatly, from keyboard to number pad to the Android unlocking grid, with either visual or haptic feedback.
  5. Where are the tasks performed?
    The tasks can be performed in transit on a mobile device in the cold (really, anywhere).
  6. What’s the relationship between user & data?
    An intuitive or easy to learn UI is usually a positive user experience, in the sense that the interactions go mostly unnoticed and require little effort to remember and accomplish. Users are interested in their end goal such as answering the phone or listening to music, not the interaction. Therefore these tasks should be easy to perform and remember so that they disappear into the background.
  7. What other tools does the user have?
    As the most obvious solution, the user can take off the gloves, but using bare fingers in the cold is the core problem we are trying to solve. Using a headset is one possible solution, but it is inconvenient in noisy rooms and uncomfortable in most public areas (such as outside, where this glove is intended to be used). The other option is to use “Smart Touch” gloves, which work on touch screens but are clunky and make it difficult to press small buttons or links on the screen in comparison to not wearing gloves.
  8. How do users communicate with each other?
    Using our system, users experience improved communication, as they can send messages more conveniently and answer calls more easily.
  9. How often are the tasks performed?
    People are frequently in motion, and the tasks are intended to be performed every time they need to use their mobile devices. However, as people only need to wear gloves during the winter, the device might not be useful outside of the winter season; the system could potentially be used only seasonally.
  10. What are the time constraints on the tasks?
    Unlocking the phone should be performed rapidly, but can vary depending on the length of a password. Picking up a phone call or bringing up the voice recognition command for messages should be instantaneous. Interactions with the music player should be performed using a single gesture and should also be instant.
  11. What happens when things go wrong?
    When things go wrong with unlocking, such as a misread or unrecognizable gesture, the user can simply attempt to unlock the phone as they normally would, or try again using the gestures. With communication and music, the same logic applies. In any case, the user would not be terribly inconvenienced by these events, outside of being frustrated that the glove didn’t work as advertised.

Part B (Description of Three Tasks):

  1. Unlocking phone
    Current Difficulty: Medium
    Proposed Difficulty: Medium
    Unlocking a phone has considerable limitations currently. For one, the phone has limited space and users generally use only one or two fingers, so long, complicated passwords are even more cumbersome than they are on a desktop keyboard. Because users are limited in the speed at which they can enter passwords in an on-screen character-based system, even a secure password (based on randomness of characters) can be more easily observed by an onlooker. The Android unlocking grid is perhaps more convenient to use with one finger, but it is even more easily observed than character-based passwords. Cold weather and stiff fingers make unlocking even more difficult to perform. The task needs to be quick and simple to learn and perform, as it will be used often, and users (based on our CI) seem to rate convenience first when choosing passwords. In addition, the system should at least have the potential to be hidden, such as by keeping the glove in your pocket, for security.
  2. Communication
    Current Difficulty: High
    Proposed Difficulty: Medium
    A task users commonly face is communicating with other mobile phone users. In transit, communication commonly takes two forms: phone calls and text messages. The interactive tasks associated with phone calls are answering the call, rejecting the call, and hanging up. Each of these can be performed fairly easily as is, but removing the dependency on a touch screen can make things even easier, as gesture accuracy and ease are reduced when limited to pointing at a screen. The second task, sending text messages, can be quite complicated for mobile phone users: they need to type out a message and then send it. As an alternative, users can currently select the voice recognition command, which is a tiny button on the messages screen. With our system, users can easily select the voice recognition command with a gesture away from the screen. As another function, gestures can be mapped to characters that string into a message.
  3. Music
    Current Difficulty: Medium
    Proposed Difficulty: Low
    Another task users may choose to perform while out in the cold is listening to music, as this is generally a more passive activity. There are a couple of subtasks they might want to perform while doing so, including changing the volume and controlling playback (play, pause, switch songs). Currently, the iPhone is compatible with special headphones with a “control capsule” that performs similar tasks, but users with other phones do not have this option. We would provide a gesture-based means to do this. Users might pinch their fingers and move their hand up or down to change the volume. Flicking one’s wrist to the left or right might switch to the next or previous song. Other similarly simple gestures could pause and play. This would become a relatively simple interface for users, and the simplicity and limited number of tasks involved in interacting with music would make it relatively easy from the design side. As with the other tasks, we would need to establish a way to get the glove to start recognizing gestures (maybe a switch, or a location on the glove to press and hold), and then we could recognize these simple commands.

Interface Design

Description

Users can use our product to interact with their smartphone in cold weather. Rather than ineffective smartphone gloves, which attempt to let you interface with the smartphone screen through conductive fingertips, our gloves are an interface themselves. With small, simple gestures such as bending a finger or squeezing two fingers together, combined with a headset for voicing message text, users can accomplish the essential tasks one might have to perform while on the go in the winter. We will implement locking and unlocking the phone, answering the phone, sending a message, and playing music. Our design does not have the frustrating problems of fit and surface area that existing smartphone gloves have, nor does it put the user in the awkward situation of constantly using voice commands. By creating a winter-weather interface for smartphones, we can provide a simple, useful experience for smartphone users in the cold.

Storyboards

This storyboard shows the ease of picking up and then ending a phone call while wearing the smart gloves.

Toggling music is another action that would become much easier in the cold using the glove.

The final storyboard shows how the system could be used for the last task described, unlocking the phone.

System Sketches:

The glove contains flex and force sensors on each of the fingers and an accelerometer at the wrist in order to make it easy to read a variety of actions.

The glove could be controlled by an application on the phone that allows users to map gestures to tasks on the phone. Certain tasks would be premapped. Also, certain simple gestures would be preloaded as suggested gestures for users to map to functionality they may desire in order to deal with user concern about the difficulty of coming up with usable gestures.

A2: Prakhar Agarwal

Observations:

For the observations portion of this assignment, I both observed students in the classes I attended and interviewed a couple of my friends. My most extensive passive observation was during the previous HCI lecture, when I got to class about 10-15 minutes early. The last class was still leaving, so a number of us, including Professor Fiebrink, were standing outside waiting. Being someone who usually barely gets to class in time, I was surprised by the number of students who prefer to arrive with time to spare. I noticed a student sitting on the side resting his eyes and two students leaning against the wall and talking, but mostly, students were on their phones. The motif of electronically connected students continued after getting into class. A number of students sat down with friends, or at some distance from others they weren't especially familiar with, and started to pull out their notebooks or computers. Students tended to converse with one another, look over notes for this (or sometimes another) class, or go on the Internet (Facebook, Reddit, email, cellphone games, etc.) until the lecture started.

In interviewing friends, I tried to talk to those with different general habits. One of them told me that she usually prefers to get to class 5 to 7 minutes early and uses the time to get her things out and make sure that she has the needed binder and documents ready. She said that she generally tries to make productive use of the rest of the time by checking her email and responding as necessary. If she doesn't have emails, she will either read a book or spend time on Facebook. On the other side of this, another person I interviewed said that he is generally the type of person who gets to class barely on time, or often a bit late, especially in the mornings. Specifically, he said that while he would ideally prefer to avoid the awkwardness of walking in late, he always mis-budgets his time: he wants to leave on time, but gets caught up in something and then doesn't. Interestingly, he said that if he is less than 5 minutes late, he doesn't feel too bad; if he gets in later than that, he often just sits there for the first few minutes of class trying to stay as unnoticed as possible. From the interviews, I also got some interesting information about low-tech things some professors have done during these wait times: playing music, putting up review notes from the previous lecture on a projector or the board, or passing around a sign-in sheet.

Brainstorming (Worked with Vivian Qu):

  1. “Morning announcements” style information about campus activities, broadcasted on projector in the lecture hall prior to the start of lectures
  2. “Check-in” application using phone GPS to replace sign-in sheets
  3. Q&A mobile app forum, using voting to choose the most relevant questions for classes
  4. Provide feedback to inquiries the professor has, i-clicker style (results shown on projector in real-time)
  5. Application that allows students to compete against each other in educational games such as flashcard matching that are related to the class
  6. “Popcorn” questions — professors can pick random people to answer questions to facilitate class discussions; can make it so that all the students log in to an app which randomly selects one of the users to answer a question / provide feedback
  7. A mobile app where you can check the upcoming deadlines for a class (assignments, projects, etc), shown in a calendar or task list
  8. Mobile application that shows you which seats are full as a lecture starts to fill up (hopefully serving as motivation not to come to class late), and lets you see where your friends are sitting
  9. Mobile calendar which shows campus events daily which also shows how many of your friends are going, how many tiger tickets are left
  10. Mobile application that combines class schedule data + lecture room location to see which of your friends have class near you so you can walk there together
  11. A discussion board viewable on your mobile phone where students post what they learned from the previous lecture instead of the professor posting the content
  12. Audit Course Recommender — have extra time in between two classes, then finds nearby lectures that are starting and can go in and listen for a while
  13. Application that reminds you how much time is needed to walk to class from your current location, and reminds you when to leave (whether you need to speed up or can slow down)
  14. Grade calculator — takes the median/mean/grade data from blackboard and instantly computes a score. Also can project the scores needed on future assignments to achieve the desired grade.
  15. Map my schedule — at the beginning of the semester, it’s annoying to keep looking up where your next class is so this app will plan out a route for you through the whole day!
  16. Collaborative classroom playlist application for entertainment before class
  17. Collaborative easy to use music generation program (maybe everyone gets different instrument?)
  18. Live lecture broadcast for those enrolled in class so you don’t have to get out of bed in the morning

Favorite Ideas:

  • Get-to-Class: Application that reminds you how much time is needed to walk to class from current location, and reminds you when to leave
    • I chose this idea largely because, being someone who struggles to get to class on time, it would be personally beneficial to me and there really is no precedent for such a system.
  • Classroom Games: App that allows students to compete against each other in educational games such as flashcard matching that are related to the class
    • This idea appealed to me because it seems that students are generally pretty connected to the Internet almost all the time, and as several are already playing mobile games before class, I thought this could be channeled towards more productive gameplay.

Prototypes:

Get-to-Class!

The home page has a pretty clean interface.

First time you click enroll, you must search classes to add.

Possible choices populate as you type in a class name.

After choosing all of your classes, you need to press submit.

Check screen for enrolled classes.

First time you choose “Set Alarm,” none have been set.

Added an info page about setting alarms on the basis of the first test subject’s confusion.

Choose class to add an alarm for.

Edit settings for alarm.

“Set Alarm” page after alarms have been set.

“How Far Am I” page caused some confusion in initial tests (described below).

Classroom Games

The homepage lets you choose a game to play, the scoreboard, or the login page.

Login page with a simple interface.

Directions for the flashcard game.

Flashcard game interface.

Directions for the multiple choice game.

Multiple choice classroom game interface.

Directions for the game of concentration.

Interface for a game of concentration.

Scorecard page lets you toggle between tabs and see your own score at the top.

Feedback:

From the first test, I found out about a couple of kinks that made the application a bit confusing to use. I was surprised to find that the subject found the “How far am I?” portion of the application confusing: it wasn't clear what its purpose was. From feedback at the end, I found that it would be useful to change the labels on the selection page to something along the lines of “Start Location” and “End Destination,” and maybe even change the name “How far am I?” to something clearer like “Find Distance.” It was also unclear how the time alarm was to be set up. The app requires an understanding of three time blocks: how early you want to leave, how much time it will take to get to class from where you are (implicit), and how early you want to arrive at class (for example, with a 10-minute walk and a 5-minute early-arrival buffer, an alarm for a 10:00 class should fire by 9:45). This wasn't originally made clear, so for future test subjects I added a pop-up note the first time they tried to set a new alarm.

Note pops up when setting alarm to make the use of necessary fields clear to users.

I tested with two other subjects, and as some of these points of confusion were cleared up, I mostly got feedback about developing the application further. It was suggested that I add reminders about assignment due dates. Instead of having people “Enroll” in classes, it was suggested that the app sync with SCORE to make the user experience less involved. People suggested that the app would mostly be used during the first week of classes or for morning classes; to adapt to this, I was told to let people allot time for breakfast, getting coffee, or getting ready. One of the users suggested adding settings for whether the user has a bike and for the user's walking speed. An extension I had considered myself was also reinforced when one of the subjects suggested that I add a map that shows users the best path to class. Finally, as a less serious concern, I was told that I should add the tiger icon to the corner of all the pages because it was kind of cute.

Using the native keyboard to type in class preferences.

Users found use of the sliders and other common interactive aspects of the app to be pretty intuitive.

Insights:

I definitely see adding the map with directions to class as the first important development. It would add to the user experience, and more importantly, it would show users which path is being considered when calculating the time to get to class. Additionally, I think it is valuable to sync with SCORE or even ICE. This would let a user download the app, log in, and immediately start using it. The friends who tested this app had to sit down and go through the enrollment procedure; a general user might be discouraged from using the app just because of the hassle of adding classes manually. In general, though, I was happy with how users could mostly intuit how to use the app. I tried to use simple buttons, lists, and clear labels for the most part, similar to existing commonly used apps, and outside of the one or two concerns noted in the feedback, users said that this was successfully achieved.

L1: Electronic Anti-Intoxication (or Overflow Detecting) Cup

Group 15: Prakhar Agarwal (pagarwal@), Colleen Carroll (cecarrol@), Gabriel Chen (gcthree@)

Description:
We chose to build a pressure-sensitive cup because it was the most widely useful of all of our ideas, and we could imagine using it in everyday life for a number of different applications. The cup uses a force sensor to detect the amount of liquid in the cup and colored lights to indicate to the user how much they have filled or emptied it. The idea was originally inspired by wanting an easy way to track your drinking on a night out. The best part of our design is that it requires almost no new interactions for users to learn: you fill or drink from the cup like any other cup, and the color scheme of the lights follows standard green/yellow/red signaling for go/slow/stop. In our video, we show this applied to a light-signaling overflow detector, and our storyboard shows another application of this sort of technology to staying safe on a night out. As the cup stands now, it is easy to pour into but hard to pour out of (because it is hooked up to the Arduino), which complicates certain use cases. With more time, we would make it more mobile, and we would also like to create a way for it to keep track of when the cup is filled or emptied multiple times.

Photos of Sketches:

Air Bass Sketch

Fat Belt Sketch

Overflow Cup Sketch

Storyboard:

Live in Action:

http://www.youtube.com/watch?v=-wj1bbwfeVo

Photo of the full setup

Photo showing the outer cup with sensor and LEDs

Photo showing the bottom of the inner cup with a piece of Styrofoam used to apply pressure directly to the pressure sensor

Materials:

  • 2 plastic cups
  • FSR
  • 3 LEDs (green, yellow, red)
  • small piece of Styrofoam or cardboard
  • alligator clips
  • electrical tape
  • knife
  • Arduino
  • USB cable
  • Breadboard

Instructions:

  1. Start by cutting one cup to about half of its original height. This will make it easier to attach the electronic components to the bottom.
  2. Pierce a slit in the bottom of the cup. Put the FSR through the slit so that the round sensor is centered inside the cup and the long end is sticking outside.
  3. Next, pierce three sets of two small holes in the bottom of the cup. Stick one LED through each set so that the light is on the inside of the cup and the wires stick out.
  4. Fit the uncut cup inside the cut one. Notice the size of the gap between the bottom of the inside cup and the bottom of the outside cup.
  5. Cut a piece of Styrofoam or cardboard to the shape and size of the head of the FSR and then tape it over the FSR. This will serve as padding between the top cup and bottom cup so that the top cup will put pressure directly on the FSR, through the padding, as it is filled with water. (Note: Ensure that the padding is taller than the LEDs.)
  6. Set up the circuit as shown in the schematic so that each of the lights is connected independently to a digital output pin on the Arduino.
  7. Test the FSR. Note the reading when the cup is empty, halfway full, full, and about to overflow.
  8. Write your program so that: if you are sensing an overflowing cup, yellow lights up at halfway, green at full, and red at overflowing; if you are sensing a draining cup, green lights up at full, yellow at halfway, and red at empty.

Code:

const int ledYellow = 2;
const int ledRed = 4;
const int ledGreen = 7;
int fsrPin = 0;    // the FSR and 10K pulldown are connected to a0
int fsrReading;    // the analog reading from the FSR resistor divider

void setup(void) {
  // We'll send debugging information via the Serial monitor
  Serial.begin(9600);

  pinMode(ledRed, OUTPUT);
  pinMode(ledYellow, OUTPUT);
  pinMode(ledGreen, OUTPUT);
}

void loop(void) {
  fsrReading = analogRead(fsrPin);

  if (fsrReading <= 500) {          // empty to halfway: no lights
    digitalWrite(ledRed, LOW);
    digitalWrite(ledGreen, LOW);
    digitalWrite(ledYellow, LOW);
  } else if (fsrReading <= 600) {   // halfway full: yellow
    digitalWrite(ledRed, LOW);
    digitalWrite(ledGreen, LOW);
    digitalWrite(ledYellow, HIGH);
  } else if (fsrReading <= 680) {   // full: green
    digitalWrite(ledRed, LOW);
    digitalWrite(ledGreen, HIGH);
    digitalWrite(ledYellow, LOW);
  } else {                          // about to overflow: red
    digitalWrite(ledRed, HIGH);
    digitalWrite(ledGreen, LOW);
    digitalWrite(ledYellow, LOW);
  }

  Serial.print("Analog reading = ");
  Serial.println(fsrReading);       // the raw analog reading
  delay(100);
}