P6 – Dohan Yucht Cheong Saha

Group # 21

  • David
  • Miles
  • Andrew
  • Shubhro

Oz authenticates users to computer systems using sequences of basic hand gestures.

Introduction

In this experiment, we collected data on test usage of a hi-fi prototype of our handshake authentication system. The data we are interested in are the total time needed to complete each task, the number of errors during each use, and user-reported usability scores gathered after participation in the study. The hi-fi prototype has advanced considerably since P5 (see details below), so we can now gain a more precise understanding of how users interact with and learn our handshake authentication system.

Implementation and Improvements

Link to P5 Submission

Our working prototype has changed greatly between P5 and P6. There is no longer a Wizard of Oz step for checking the veracity of the handshake; the recognition algorithm now does this itself. In addition, we improved our box/camera design by adding a green background that makes hand gestures (made with a black glove) easier to distinguish. Finally, the experience is capped with full browser integration via a Chrome extension, so the PDF walkthrough presented in P5 is no longer needed; users now get the real experience. The main remaining limitation is that the recognizable hand gestures are restricted to a small set for the moment (each individual finger, a fist, a peace sign, and a handful of others). We also had users wear a black glove to simplify hand recognition, but this could be removed in future versions without significant technology changes.
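To make the recognition pipeline concrete, here is a minimal sketch of the kind of color-based segmentation and gesture classification that the green background and black glove enable. It assumes OpenCV; the thresholds, area cutoff, gesture labels, and function names are illustrative guesses, not values taken from our actual code.

    # A minimal sketch of the color-based recognition approach described above.
    # Thresholds, the area cutoff, and gesture labels are illustrative only.
    import cv2

    # Deep convexity defects roughly correspond to gaps between extended fingers.
    GESTURE_BY_GAPS = {0: "fist or single finger", 1: "peace sign", 4: "open hand"}

    def classify_frame(frame):
        """Return a coarse gesture label for one webcam frame, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # The black glove appears as low-value pixels against the green interior
        # of The Box, so a simple brightness threshold isolates the hand.
        glove_mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))
        glove_mask = cv2.medianBlur(glove_mask, 7)  # suppress shadow speckle

        contours, _ = cv2.findContours(glove_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        if cv2.contourArea(hand) < 5000:
            return None  # hand probably not inserted far enough into The Box

        # Count deep convexity defects between the contour and its convex hull.
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        gaps = 0
        if defects is not None:
            for start, end, far, depth in defects[:, 0]:
                if depth > 10000:  # depth is fixed-point (pixels * 256)
                    gaps += 1
        return GESTURE_BY_GAPS.get(gaps, "unknown")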

Participants

The participants for our prototype testing were Princeton University students whom we approached while walking through Whitman College. Based on our demographic questionnaire, participant 1 is a male Woodrow Wilson School major with no exposure to hand gesture recognition software. Participant 2 is a female who is also in the Woodrow Wilson School and has a little exposure to hand gesture recognition software. Participant 3 is a male Chemistry major with no exposure to hand gesture recognition software. All three are valid members of our target user group, which consists of people who log into computer clusters and web accounts on a daily basis.

Apparatus

Our study was carried out in a campus study room in which we had a computer set up with the Oz system. The main components of the system are a colored box and a webcam (replacing our previous Leap Motion), which together capture hand motions. As discussed above, we also had users wear a black glove to simplify hand capture.

Tasks

We changed our tasks from our low-fi prototype testing in P4. For this test, we used logging in, initial registration, and password reset as our easy, medium, and hard tasks respectively, and we created a test Facebook account for our participants to use. In the registration task, participants set and confirmed an initial handshake to go with the Facebook account; before setting the handshake, they were required to enter the account's original username/password combination (which we provided) to log in. In the password reset task, participants used our reset feature, which sends a verification code to the user's phone, to verify their identity and set a new handshake. The last task had participants complete a login from beginning to end using the handshake they had set in the previous steps. We chose to replace profile selection with initial registration because profile selection is already included in the login process.
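As background for these tasks, the sketch below shows one way a handshake (an ordered sequence of gesture labels) could be stored at registration and checked at login. The hashing scheme, gesture labels, and function names are our own illustrations, not the prototype's actual implementation.

    # Illustrative handshake storage: the raw gesture sequence is never kept,
    # only a salted hash, mirroring common password-handling practice.
    import hashlib
    import hmac
    import os

    def _digest(gestures, salt):
        """Hash the ordered gesture labels."""
        joined = "|".join(gestures).encode("utf-8")
        return hashlib.pbkdf2_hmac("sha256", joined, salt, 100_000)

    def register_handshake(gestures, confirmation):
        """Registration task: the user enters the handshake twice to confirm it."""
        if gestures != confirmation:
            raise ValueError("handshake confirmation did not match")
        salt = os.urandom(16)
        return {"salt": salt, "hash": _digest(gestures, salt)}

    def verify_handshake(record, attempt):
        """Login task: re-hash the attempted sequence and compare to the record."""
        return hmac.compare_digest(_digest(attempt, record["salt"]), record["hash"])

    # Password reset would simply re-run register_handshake for a user whose
    # identity was confirmed out of band (the SMS verification code in our task).
    record = register_handshake(["index finger", "fist", "peace sign"],
                                ["index finger", "fist", "peace sign"])
    assert verify_handshake(record, ["index finger", "fist", "peace sign"])
    assert not verify_handshake(record, ["fist", "index finger", "peace sign"])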

Procedure

For each participant, we began by obtaining informed consent and then explaining the goals of the system and the basic idea behind it, as described in our demo script. We then had them complete the tasks in the order of initial registration, password reset, and finally user login. Because our system is not completely integrated (switching between windows is required), David acted as the Wizard of Oz for steps that required switching between the browser and our recognition program. Andrew guided the participants through the tasks, Miles recorded a time-stamped transcript of incidents and anything the participants said, and Shubhro recorded and photographed the trials. At the end of the three tasks, each participant filled out a brief survey providing feedback on the system.

Test Measures

We’re measuring the following statistics for the indicated reasons:

  • Total time to complete each task
    • Speed is one of the top motivators for creating a handshake-based authentication system. The goal is, in many instances, to be faster than typing a password on a keyboard for the same level of security.
  • Number of errors in each task-participant instance
    • In tandem with speed is the number of points at which a user gets confused during the system’s use; this should be minimized as much as possible.
  • Participant scores from 1 to 5, with 5 being the highest, for:
    • Ease of use
      • We want users to leave the product feeling the experience was easy rather than challenging.
    • Speed of authentication
      • The user’s perceived speed of authentication matters, as it may differ from the actual time spent.
    • Clarity of expectations and experience
      • If the user is confused about what to do with the authentication system, this should be addressed with documentation and/or prototype adjustments.

Results and Discussion

Quantitative: We gathered both quantitative and qualitative results from our tests. The full qualitative results are available in our second-by-second notes and questionnaire responses (linked in the Appendices below). The quantitative results are summarized in the tables below:

Task (Time, # of Errors) | Participant 1 | Participant 2 | Participant 3 | Mean
Registration             | (1:55, 1)     | (2:04, 1)     | (1:53, 2)     | (1:57, 1.3)
Reset                    | (2:10, 2)     | (3:06, 2)     | (4:00, 4)     | (3:05, 2.6)
Login                    | (0:15, 0)     | (0:30, 0)     | (0:20, 0)     | (0:21, 0)

Metric                  | Participant 1 | Participant 2 | Participant 3 | Mean
Ease of Use             | 3             | 5             | 3             | 3.66
Speed of Authentication | 5             | 3             | 2             | 3.33
Clarity of Expectations | 5             | 4             | 3             | 4
Mean                    | 4.33          | 4             | 2.66          |
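For reference, the per-task means in the first table follow directly from the per-participant measurements; the short script below reproduces them (times and error counts copied from the table above).

    # Recomputes the "Mean" column of the task table from the per-participant
    # (time, error) pairs listed above.
    trials = {
        "Registration": [("1:55", 1), ("2:04", 1), ("1:53", 2)],
        "Reset":        [("2:10", 2), ("3:06", 2), ("4:00", 4)],
        "Login":        [("0:15", 0), ("0:30", 0), ("0:20", 0)],
    }

    def to_seconds(mmss):
        minutes, seconds = mmss.split(":")
        return int(minutes) * 60 + int(seconds)

    for task, results in trials.items():
        mean_secs = sum(to_seconds(t) for t, _ in results) / len(results)
        mean_errors = sum(e for _, e in results) / len(results)
        print(f"{task}: ({int(mean_secs // 60)}:{int(mean_secs % 60):02d}, "
              f"{mean_errors:.1f})")
    # Prints (1:57, 1.3), (3:05, 2.7), and (0:21, 0.0); the table's 2.6 for the
    # reset task truncates the exact mean of 2.67.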

The time measurements from these trials are in line with what we expected: handshake reset took the most time of the three tasks, followed by registration and then login. User login, at a mean of 21 seconds, is longer than we would like, but we expect this number to improve as users become more familiar with handshake authentication systems. That there were no errors during the login process is a testament to the general accuracy of our gesture recognition.

It is interesting that the first two participants rated their experience considerably higher than the final participant did. This can probably be attributed to the difficulties the final participant had with shadows in the box affecting the accuracy of gesture recognition in the non-login tasks.

General observations: Oz appeared to have a somewhat steep learning curve. As shown in our data, every test subject misentered a handshake at least once, for several reasons. First, our explanation in the script was not clear enough about the limitations of the current prototype (e.g., it cannot take hand rotation into account) or about how to use the prototype properly (e.g., participants did not insert their hands far enough into The Box). Consequently, Oz misinterpreted hand gestures fairly easily, and it often took several tries for users to enter the same handshake twice in a row. Additionally, during the testing we realized how important lighting is to the overall usability and accuracy of the system: shadows cast onto the hand or into The Box were often disruptive and resulted in inaccurate readings. However, over the course of the testing, all three users came to understand the prototype well enough to use it fluently for the final login task.

During the testing, users reacted to the system with remarks such as “Cool!,” “Awesome!,” and “Aw, sweet!,” even though they sometimes struggled to become acquainted with the handshake system. We suspect this is because, although hands-free peripherals (e.g., webcams, headsets) have existed for several years, hands-free interfaces for controlling computers remain relatively novel and rarely used. An interesting observation from our trials was that every user set exactly three hand gestures for their final handshake without any influence from us. It would appear that three gestures may be the ideal number for users, though more testing data is needed to justify this claim. It is also possible that users would use much longer handshakes if prompted to do so, just as many existing websites require a password of a certain length.

Possible changes: There are several steps we can take in response to our observations. In future versions, we intend to make The Box fully enclosed with consistent lighting so that environmental factors such as shadows cannot affect the gesture reading (a significant problem for participant 3). Additionally, if we could spend significant time revamping the technology, using a depth camera such as a Kinect would alleviate the lighting issues (and obviate the need for the black glove used in our current prototype). In the final version of the product, we want the Chrome plugin to be the single point of interaction; currently, a terminal window is required to start the handshake reading process. We would also like to add facial recognition to the user selection step, since typing in a username, as users must do now, is probably slower. One last interaction mechanism we should implement is feedback as users enter their handshake: participants appeared to rely on the terminal printout to signal when a gesture had been read, letting them move on to the next gesture. We would like the Chrome plugin to provide a visual “ping” indicating that a gesture has been recorded.
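As a sketch of that last change, the recognition script could push a “gesture recorded” event to the Chrome plugin over a local WebSocket, which the plugin would render as the visual ping. The snippet below assumes a recent version of the third-party websockets package; none of this exists in the current prototype, and the event format and port are placeholders.

    # Hypothetical bridge between the recognition script and the Chrome plugin:
    # each recorded gesture is broadcast to any connected plugin instance, which
    # could then flash a visual "ping". Assumes `pip install websockets`.
    import asyncio
    import json

    import websockets

    clients = set()

    async def handler(ws):
        """Keep track of connected plugin instances."""
        clients.add(ws)
        try:
            await ws.wait_closed()
        finally:
            clients.discard(ws)

    async def announce_gesture(label):
        """Broadcast a 'gesture recorded' event to every connected client."""
        message = json.dumps({"event": "gesture_recorded", "gesture": label})
        await asyncio.gather(*(ws.send(message) for ws in clients),
                             return_exceptions=True)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            # In the real system the camera loop would call announce_gesture();
            # here we just emit a placeholder event every few seconds.
            while True:
                await asyncio.sleep(3)
                await announce_gesture("peace sign")

    if __name__ == "__main__":
        asyncio.run(main())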

Appendices

Link to Consent Form

Link to Demographic Questionnaire

Link to Demo Script

Link to Questionnaire Responses

Link to Second-by-Second Notes

Link to Code