P5: Dohan Yucht Cheong Saha

Group #21: Dohan Yucht Cheong Saha


  • David Dohan

  • Miles Yucht

  • Andrew Cheong

  • Shubhro Saha


Oz authenticates individuals into computer systems using sequences of basic hand gestures.

 

Tasks Supported

 

Our easiest task is profile selection. To confirm a user’s “username,” he or she selects the image from a gallery of profiles that corresponds to him or her. The medium-difficulty task is handshake parsing, during which our system detects whether the pass-gesture is correct. The final, and most intricate, task is password reset, which involves sending the user an email with a handshake-reset link. In addition, training the system is a new task we are adding to this prototype: the user performs a series of gestures so that our system can recognize his or her hand accurately.
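As a minimal sketch of the handshake-parsing check, the core logic reduces to comparing the recognized gesture sequence against the stored pass-gesture; the gesture labels and helper function below are hypothetical, for illustration only.

# Minimal sketch of the handshake check: the recognized gesture sequence
# must exactly match the stored pass-gesture. Labels and the helper name
# are hypothetical.
def handshake_matches(recognized, stored):
    return len(recognized) == len(stored) and all(
        r == s for r, s in zip(recognized, stored))

stored = ["fist", "palm", "two_fingers"]
print(handshake_matches(["fist", "palm", "two_fingers"], stored))  # True
print(handshake_matches(["fist", "palm", "fist"], stored))         # False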

 

How Tasks Changed

 

One of our prior tasks was to detect the face of a user; however, that task’s purpose was to identify the “username” of the individual, which is already covered by the profile-selection task. Since the two tasks have identical functionality, there was no immediate need for both; facial recognition remains an interesting feature that could be implemented in the future. Instead, we realized that procuring training data for the hand gestures is a more significant task, since it dictates how effectively we can detect hand gestures. We also have not incorporated password recovery yet, because it is not crucial to the core functionality of Oz, but it is in the pipeline.

 

Revised Interface Design

 

We decided to move away from the Leap Motion and are now using a web camera to detect hand gestures. Motivations for this change include stepping away from a black-box technology and the ease of using OpenCV and scikit-learn with webcam data. Overall, we found that our webcam-based approach gave higher accuracy than the Leap Motion.
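To illustrate how little code the webcam route requires, a minimal OpenCV capture loop looks roughly like the following; the device index and window name are assumptions, not our exact code.

# Minimal sketch: pulling frames from a webcam with OpenCV.
# Device index 0 and the window name are assumptions.
import cv2

cap = cv2.VideoCapture(0)                   # open the default webcam
while True:
    ok, frame = cap.read()                  # grab one BGR frame
    if not ok:
        break
    cv2.imshow("oz preview", frame)         # show the live feed
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()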

 

As a result of the change in hardware (Leap to webcam), we must approach the box design very differently. The Leap does not need light to function, since it has IR LEDs built in. For our webcam, however, we need both a larger box and lights installed inside it. For this first functional prototype, we elected to use a large, open box with the webcam mounted at the top. Because the box is not sealed and is a similar color to skin, the first prototype also requires that the user wear a black glove so the hand can be differentiated from the background. In the future, the inside of the box will be black, so no glove will be necessary.
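The black-glove requirement maps directly onto the segmentation step: against a lighter background, a simple inverse intensity threshold is enough to separate the gloved hand. The threshold value below is an assumption chosen only for illustration.

# Minimal sketch: separating a dark-gloved hand from a lighter background.
# The threshold value (60) is an assumption, not a tuned parameter.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()                      # one still image from the box
cap.release()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Inverse binary threshold: dark glove pixels become white foreground,
# the lighter (skin-colored) background becomes black.
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("hand_mask.png", mask)          # inspect the segmentation result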

 

Updated Task Storyboard



Sketches for Unimplemented Features



Overview & Discussion of New Prototype

 

In this prototype, we have advanced beyond our previous prototype by turning the paper model into one that the user interacts with on the screen. In the background, we have also made advances in our underlying technology by developing a proof of concept for hand recognition with a regular webcam. Because we have not yet integrated this proof of concept into a browser-based plugin, however, we still require a wizard-of-oz routine to confirm that test users have entered their handshake sequence correctly.

 

There are two main things we left out of this prototype. The first is the browser plugin that will allow the user to use handshake authentication on a website such as Facebook. We left this out because we need more time to learn about browser plugins and because it requires the back-end hand-recognition system to be complete. The second is the integration of the hand-recognition technology that we have completed at an experimental stage (see above). Though the experimental results give us confidence in the proof of concept, we have yet to polish the software into something that can be integrated with the browser plugin.

 

Two wizard-of-oz techniques are required to make the prototype work today. The first is on the user-interface side, where someone needs to press an arrow key to advance the slides; the user can still tap the screen, as in a touch-screen interface, to proceed. The second is on the hand-recognition side: though we have a working proof of concept for hand recognition, it has not yet been integrated into the prototype, but it may be by P6.

 

We fully implemented the detection of basic hand gestures to demonstrate a proof of concept. Because training data is necessary for detection, we also implemented training of the hand gestures using the webcam and the APIs described below. We believe these two parts are the most important pieces of our prototype to implement immediately because they will require the most work to refine: we must both devise a novel algorithm for classifying hand shapes and provide an intuitive way of training it for use by multiple users.
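As a sketch of how the training task could look in code, the loop below saves a webcam frame to disk each time the user presses the spacebar while holding the gesture being trained; the label name, directory layout, sample count, and key bindings are assumptions rather than our exact interface.

# Minimal sketch of collecting labeled training images for one gesture.
# The label, output directory, sample count, and key bindings are assumptions.
import os
import cv2

label = "fist"                                  # gesture currently being trained
out_dir = os.path.join("training_data", label)
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

cap = cv2.VideoCapture(0)
count = 0
while count < 50:                               # collect 50 examples of this gesture
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("training", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord(' '):                         # spacebar: save the current frame
        cv2.imwrite(os.path.join(out_dir, "%s_%03d.png" % (label, count)), frame)
        count += 1
    elif key == ord('q'):                       # quit early
        break
cap.release()
cv2.destroyAllWindows()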

 

The outside code sources that we used were the OpenCV, NumPy (to handle OpenCV’s data structures), and scikit-learn APIs. The OpenCV API lets us obtain still images from a webcam, threshold the hand, find its contour, resize the image to fit the hand, and save the images to disk. Scikit-learn is used to create a support vector machine, train it on data captured from the webcam, and predict the label of the current hand from a new webcam image.
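A condensed sketch of that pipeline is below: each saved image is thresholded, cropped to the hand, resized to a fixed size, flattened into a feature vector, and used to train a scikit-learn SVM. The 32x32 image size, threshold value, directory layout, and default SVM parameters are assumptions rather than our tuned settings.

# Minimal sketch of the preprocessing + classification pipeline described above.
import os
import glob
import cv2
import numpy as np
from sklearn import svm

def preprocess(path, size=32):
    # Threshold the gloved hand, crop to its bounding box, resize, and flatten.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    ys, xs = np.nonzero(mask)                   # pixel coordinates of the hand
    if len(xs) == 0:
        return np.zeros(size * size)            # no hand found in this frame
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, (size, size)).ravel() / 255.0

# Build the training set from images saved as training_data/<gesture>/*.png
features, labels = [], []
for path in glob.glob(os.path.join("training_data", "*", "*.png")):
    features.append(preprocess(path))
    labels.append(os.path.basename(os.path.dirname(path)))  # folder name = label

clf = svm.SVC()                                 # support vector machine classifier
clf.fit(np.array(features), labels)             # train on the collected examples

# Predict the gesture in a newly captured frame (hypothetical file name).
print(clf.predict([preprocess("current_frame.png")])[0])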

 

Video capturing the proof-of-concept hand recognition:

https://www.dropbox.com/sh/nb91flk0v2fmml4/F7RhHS2sjN/p5?lst#f:classifier.mp4

 

PDF capturing user-interface screens to use during prototype testing:

https://docs.google.com/file/d/0B9RlBTXdYjXMQlBDXzJwa1ZsbzA/edit?usp=sharing

 

Proof of Concept Git Repository:

https://github.com/dmrd/oz/tree/master/camera