P6 User Testing of Gesture Glove

Group 15 : The LifeHackers

Prakhar Agarwal, Gabriel Chen, and Colleen Carroll

The Gesture Glove in a Sentence

Our project is a glove that allows us to control (simulated) phone actions by sensing various hand gestures.

Our System

The system being evaluated simulated commonly used phone functions that users could perform off-screen using the Gesture Glove we built in the previous assignment. Sensor readings mapped to built-in gestures let users perform three tasks (see the Tasks section under Method). The purpose was to see whether different users could easily and intuitively perform the three tasks, and to look for potential improvements that could be implemented in future iterations of our system. The rationale of the experiment was that if any user had difficulty with any of the tasks, then improvements would need to be made in order to let all users interface with the system comfortably and conveniently.

Implementation and Improvements

Our submission for P5 can be found here: http://goo.gl/DB4Sq. Since P5, we have left the general structure of the prototype the same. We did, however, make a couple of quick changes to the interface:

  • We changed the threshold values for a number of the built-in gestures to better match the different hand sizes and maneuverability of different people.
  • We lowered the delay between cycles of glove readings to allow for higher sensitivity.
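To make these two tweaks concrete, here is a minimal sketch of how threshold-based gesture matching and the polling delay might interact. The sensor names, threshold values, and the `read_sensors` function are hypothetical stand-ins for illustration, not our actual implementation.

```python
import time

# Hypothetical per-sensor thresholds; raising or lowering these is how the
# system accommodates different hand sizes and maneuverability.
THRESHOLDS = {"thumb": 300, "index": 280, "middle": 280}

# Delay between read cycles; lowering it makes the glove more responsive.
READ_DELAY_S = 0.05

def active_sensors(readings, thresholds):
    """Return the set of sensors whose reading exceeds its threshold."""
    return {name for name, value in readings.items() if value > thresholds[name]}

def poll_loop(read_sensors, on_gesture):
    """Poll the glove, mapping threshold crossings to gesture callbacks."""
    while True:
        pressed = active_sensors(read_sensors(), THRESHOLDS)
        if pressed == {"thumb", "index"}:  # e.g. the 'okay' gesture
            on_gesture("okay")
        time.sleep(READ_DELAY_S)
```

With this structure, per-user tuning only touches the `THRESHOLDS` table, and sensitivity only touches `READ_DELAY_S`.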

Method

Participants

Our users were chosen from students in public places. We tried to vary gender and technical background. The first was Josh, a 20-year-old male computer science major who was studying in Brown Hall. The second was Hannah, a 21-year-old female Chemical and Biological Engineering major with a certificate in the Woodrow Wilson School; she was using her iPhone in Frist. Lastly, we chose Eddie, a 20-year-old male Economics major who owns an iPhone and was studying in Frist.

Apparatus

We conducted our test in a secluded corner of Frist Campus Center. Our equipment consisted of our laptops, one of which ran the phone simulator, and the Gesture Glove. Two members of our team recorded critical incidents, while the third read the demo script to the user.

Tasks

The first and easiest task we implemented is picking up a phone call. A user simply puts his hand into the shape of a phone to pick up a call, and can then make the 'okay' hand motion in order to end the call.

Our second task is more difficult, as there are more commands and gestures to be recognized. This task allows users to control a music player with the glove. Using a preset set of hand motions, a user can play and pause music, navigate between a number of tracks, and adjust the volume of the music that is playing.

Finally, our last task involved allowing users to set a custom password represented by a series of three custom hand gestures, which the user can then use to unlock his phone. This is the most difficult task: it involves defining the gestures oneself and later remembering the sequence in order to unlock the phone.
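As a rough sketch of what "setting a gesture" could look like, the snippet below assumes each custom gesture is stored as the set of sensors active at the moment it is recorded; the `read_sensors` and `prompt` callables and the threshold dictionary are illustrative assumptions, not taken from our code.

```python
def record_gesture(read_sensors, thresholds):
    """Snapshot which sensors are currently active; that set is the gesture."""
    readings = read_sensors()
    return frozenset(s for s, v in readings.items() if v > thresholds[s])

def set_password(read_sensors, thresholds, prompt):
    """Record a sequence of three custom gestures to use as the password."""
    password = []
    for i in range(3):
        prompt("Make gesture %d and hold it" % (i + 1))
        password.append(record_gesture(read_sensors, thresholds))
    return password
```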

Procedure

Users were chosen from public areas and asked if they could spare five minutes for our project study. The study focused on the three main tasks described above. One team member prompted the user to complete these tasks using the following demo script: http://tinyurl.com/cqwktog. The users wore the Gesture Glove and interacted with a simulated smartphone screen on the computer, while two members noted the critical incidents of the testing session.

Test Measures

The bulk of our study used qualitative measures because of the nature of the tasks we asked the users to complete. Picking up the phone and hanging up take a trivial amount of time. The users were asked to experiment freely with the music player, so that task could take any amount of time. Lastly, unlocking the phone took exactly 9 seconds on each attempt (three gestures, each held for three seconds). For these reasons we did not measure time per task.

The following metrics were studied:

  • number of attempts to successfully unlock the phone
  • qualitative response to the system based on a Likert scale (sent to the users as a questionnaire at the following link: http://tinyurl.com/p6questionnaire, with these results: http://tinyurl.com/p6likert)
  • general feedback during study – positive or negative
  • observations of how users made gestures during session
  • time to set password

Results and Discussion

Some of the most common user feedback was that unlocking the screen was hard to do. Our original implementation has users enter a gesture, hold it for 3 seconds until a green dot shows up on screen, and then move on to the next gesture. The purpose of this was to keep the password secure: if the program reported success or failure as soon as each gesture was made, a thief could eventually unlock the phone by process of elimination. However, we received feedback that the 3-second hold felt too long for some users and too short for others, and that unlocking should not require looking at the screen. It was apparent that we were sacrificing usability for security. We also realized that on current unlocking systems, different people choose between a more complex password for security or a simple password for convenience. Considering all of the design trade-offs involved, we decided to choose a middle road: we will provide some basic security, but keep the system flexible enough for users to make and use a password according to their preferences.
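The security rationale above (no per-gesture feedback) boils down to comparing the attempt as a whole. A minimal sketch, assuming gestures are stored as comparable values; the function name is purely illustrative:

```python
def unlock_attempt(stored_password, entered_gestures):
    """Report a single pass/fail only after the whole sequence is entered.

    The user gets no feedback about *which* gesture was wrong, so an
    attacker cannot unlock the phone by per-gesture elimination.
    """
    if len(entered_gestures) != len(stored_password):
        return False
    # One overall comparison; the result is reported only at the end.
    return all(a == b for a, b in zip(stored_password, entered_gestures))
```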

Almost everyone had issues using the thumb sensors. Even our team members, who by this point are very used to the system, occasionally have issues with them. Upon closer observation during the usability testing, we realized that users do not always press with the same part of the thumb; this varies even for a particular individual. They may sometimes press the very tip, sometimes the middle of the area between the tip and the knuckle, and sometimes very close to the knuckle. Even harder than hitting the right spot on the thumb (without looking at it) is getting the forefinger and thumb sensors right on top of each other in order to activate both at once. Our conclusion is that the thumb needs larger pressure sensors. With proper equipment, we could imagine a sensor that covers the entire surface of the thumb above the knuckle. Because the thumb is critical in many gestures (it is used to activate the pressure sensors of the other fingers), we believe this would be a very important fix in future iterations.

The interface for setting a password clearly needed more instruction. The main problem was that users did not realize they needed to press 'set' before making the gesture and then press 'save'. This could be fixed with a simple line of instruction at the top of the screen. A more complicated problem is the visualization of the gesture that the system registers while the user is setting the password. One user's gesture was not registering as he intended; this was apparent to us from the visualization, but he did not notice. This could result in a user setting a password that he believes is one sequence of gestures while the machine has stored another, so that unlocking would hardly, if ever, succeed. This tells us that we may want a visualization that is even easier to understand, for example an image of a hand performing the gesture that the machine is registering. We also had a user who made his password, could not remember what he had done, and set a simpler one instead. This again suggests that he could not read the visualization easily enough to quickly recall what he had done, which is further justification for a more intuitive visualization.

Overall, the system got some very positive reactions. Though we definitely have a number of improvements to make, we received comments throughout user testing like "This is sick!" and "Awesome!" Both the recommendations for improvement and the positive reactions are reflected in the responses to the questionnaire we had participants submit after testing the interface (a link to the results can be found in the Appendix). Along with asking for subjective feedback, we had users rate how much they agreed with certain statements on a Likert scale, where 1 represented "Strongly Disagree" and 5 represented "Strongly Agree." The average results are shown below. As we can see, users had the most trouble with the unlocking interface. Through testing, we found that users' difficulties were sometimes caused by technical glitches (the wiring got unplugged) and other times by the issues discussed above, which we have tried to address as well as possible.

  • It was easy to use the gesture glove to pick up and end phone calls. (average: 4.67)
  • It was easy to use the glove to control the music player. (average: 3.67)
  • The interface to set a password with the glove was intuitive and easy to use. (average: 4.33)
  • The interface to unlock the phone with the glove was intuitive and easy to use. (average: 3)
  • The gestures chosen for built-in functionality made sense and would be easy to remember. (average: 4)

Appendices

Materials given to or read to users:

  • consent form and demographic info: http://tinyurl.com/p6consent
  • task script: http://tinyurl.com/cqwktog
  • post-interview questionnaire: http://tinyurl.com/p6questionnaire
  • demo script: We didn't demonstrate our system before letting users test it; instead, we demonstrated the built-in gestures that we wanted to exhibit before each task, while the user had the glove on. This way, the gestures could be explained more efficiently and the users could use them right away.

Raw Data