P6 – Dohan Yucht Cheong Saha

Group # 21

  • David
  • Miles
  • Andrew
  • Shubhro

Oz authenticates individuals into computer systems using sequences of basic hand gestures.

Introduction

In this experiment, we’re collecting data on test usage of a hi-fi prototype of our handshake authentication system. We are interested in the total time needed to complete each task, the number of errors during each use, and user-reported usability scores gathered after participation in the study. The hi-fi prototype has advanced quite a bit since P5 (see details below), so we now have a more precise understanding of how users tend to interact with and learn about our handshake authentication system.

Implementation and Improvements

Link to P5 Submission

Our working prototype has changed greatly between P5 and P6. There is no longer a Wizard of Oz step for checking the veracity of the handshake; the recognition algorithm now does it all. In addition, we improved our box/camera design with a green background that makes hand gestures (made with a black glove) easier to distinguish. Finally, the experience is capped with full browser integration via a Chrome extension, so the PDF mockup presented in P5 is no longer needed; users now interact with the real interface. The main remaining limitation is that we restrict hand gestures to a small set for the moment (each individual finger, a fist, a peace sign, and a handful of others).  We also had users wear a black glove to simplify hand recognition, but this could be removed in future versions without significant technology changes.
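
Our recognition code is not reproduced here, but the general idea (isolating the dark glove against the green backdrop and then classifying the hand shape) can be illustrated with a minimal sketch. The code below assumes OpenCV; the thresholds and the convexity-defect finger-counting heuristic are illustrative choices, not our actual implementation.

// Illustrative sketch only: segment a dark (black-gloved) hand against a
// lighter backdrop and estimate how many fingers are extended.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int countFingers(const cv::Mat& frameBGR) {
    // 1. Isolate the glove: dark pixels stand out against the green background.
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 0, 0), cv::Scalar(180, 255, 60), mask);  // low V = dark

    // 2. Take the largest dark blob as the hand.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0;
    const auto& hand = *std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    if (hand.size() < 5) return 0;

    // 3. Deep convexity defects correspond to gaps between extended fingers.
    std::vector<int> hullIdx;
    cv::convexHull(hand, hullIdx, false, false);
    if (hullIdx.size() < 3) return 0;
    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(hand, hullIdx, defects);

    int gaps = 0;
    for (const cv::Vec4i& d : defects)
        if (d[3] / 256.0 > 30.0)  // defect depth in pixels; 30 is an arbitrary cutoff
            ++gaps;
    return gaps > 0 ? gaps + 1 : 0;  // n gaps between fingers -> roughly n+1 fingers
}

int main() {
    cv::VideoCapture cap(0);  // webcam mounted on The Box
    cv::Mat frame;
    while (cap.isOpened() && cap.read(frame))
        std::cout << "fingers: " << countFingers(frame) << std::endl;
    return 0;
}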

Participants

Our participants for prototype testing were Princeton University students whom we approached while walking through Whitman College. Based on our demographic questionnaire, participant 1 is a male Woodrow Wilson School major with no exposure to hand gesture recognition software. Participant 2 is a female, also in the Woodrow Wilson School, with a little exposure to hand gesture recognition software. Lastly, participant 3 is a male Chemistry major with no exposure to hand gesture recognition software. These participants are valid candidates from our target user group, whose members often log into computer clusters and web accounts on a daily basis.

Apparatus

Our study was carried out in a campus study room in which we had a computer set up with the Oz system.  The main components of the system are a colored box and webcam (replacing our previous Leap Motion) to capture hand motions. As discussed above, we also have users wear a black glove to simplify hand capture.

Tasks

We changed our tasks from our low-fi prototype testing in P4.  For this test, we used logging in, initial registration, and password reset as our easy, medium, and hard tasks, respectively.  We created a test Facebook account for our participants to use.  In the registration task, participants set and confirmed an initial handshake to go with the Facebook account. Prior to setting the handshake, the user was required to enter the original username/password combination (which we provided) to log in. In the password reset task, participants used our reset feature, which sends a verification code to the user’s phone, to verify their identity and set a new handshake. The last task had participants complete a login from beginning to end using the handshake they set in the previous steps.  We chose to replace profile selection with initial registration because profile selection is already part of the login process.

Procedure

For each participant, we began by obtaining informed consent and then explaining the goals of the system and the basic idea behind it as described in our demo script.  We then had them complete the tasks in order of initial registration, password reset, and finally user login.  Because our system is not completely integrated (switching between windows is required), David acted as the Wizard of Oz for steps that required switching between the browser and our recognition program.  Andrew guided the participants through the tasks, Miles recorded a time-stamped transcript of incidents and anything the participants said, and Shubhro recorded and took pictures of our trials.  At the end of the three tasks, each participant was asked to fill out a brief survey providing feedback on the system.

Test Measures

We’re measuring the following statistics for the indicated reasons:

  • Total time to complete each task
    • Speed is one of the top motivators for creating a handshake based authentication system. The goal is to be, in many instances, faster than typing in passwords on a keyboard for the same level of security.
  • Number of errors in each task-participant instance
    • In tandem with speed is the number of points at which a user becomes confused during the system’s use. This should be minimized as much as possible.
  • Participant scores from 1 to 5, with 5 being the highest:
    • Ease of use
      • For obvious reasons, we want users to leave the product feeling the experience was easy rather than challenging.
    • Speed of authentication
      • The user’s sense of speed is important, as it may differ from the actual time spent.
    • Clarity of expectations and experience
      • If the user is confused about what to do with the authentication system, this should be addressed with documentation and/or prototype adjustments.

Results and Discussion

Quantitative: We gathered both quantitative and qualitative results from our tests. The qualitative results are captured in our second-by-second notes and questionnaire responses (linked below); the quantitative results are summarized in the tables that follow:

Task (Time, # of Errors)    Participant 1   Participant 2   Participant 3   Mean
Registration                (1:55, 1)       (2:04, 1)       (1:53, 2)       (1:57, 1.3)
Reset                       (2:10, 2)       (3:06, 2)       (4:00, 4)       (3:05, 2.6)
Login                       (0:15, 0)       (0:30, 0)       (0:20, 0)       (0:21, 0)

Metric                      Participant 1   Participant 2   Participant 3   Mean
Ease of Use                 3               5               3               3.66
Speed of Authentication     5               3               2               3.33
Clarity of Expectations     5               4               3               4
Mean (per participant)      4.33            4               2.66

The time measurements from these trials are in line with what we expected: handshake reset took the most time of the three tasks, followed by registration and then login. User login, at a mean of 21 seconds, is longer than we would like, but we expect this number to drop as users become more familiar with handshake authentication systems. That there were no errors during the login process is a testament to the general accuracy of our gesture recognition.

It is interesting that the first two participants rated their experience considerably higher than the final participant did. This can probably be attributed to the difficulties the final participant had with shadows in The Box affecting the accuracy of gesture recognition during the non-login tasks.

General observations: Oz appears to have a somewhat steep learning curve.  As shown in our data, all of our test subjects misentered a handshake at least once, for several reasons. First, our explanation in the script wasn’t clear enough about the limitations of the current prototype (e.g., it can’t take hand rotation into account) or about how to use the prototype properly (e.g., participants didn’t insert their hand far enough into The Box). Consequently, Oz misinterpreted hand gestures relatively easily, and it often took several tries for users to enter the same handshake twice in a row. Additionally, during the testing we realized how important lighting is to the overall usability and accuracy of the system: shadows cast onto the hand or into The Box were often disruptive and resulted in inaccurate readings. However, over the course of the testing, all three users came to understand the prototype well enough to use it fluently for the final task.

During the testing, users reacted to the system with phrases such as “Cool!,” “Awesome!,” and “Aw, sweet!,” even though they sometimes struggled to become acquainted with the handshake system. We suspect this is because, although hands-free peripherals for computers have existed for several years (e.g., webcams, headsets), hands-free interfaces for controlling computers are relatively novel and little used. An interesting observation from our test trials was that every user, without our influence, set three hand gestures for their final handshake. It would appear that three gestures may be the ideal number for users, though more testing data is needed to justify this claim. It is also possible that users would use much longer handshakes if prompted to do so, just as many existing websites require a password of a certain length.

Possible changes: There are several steps we can take in response to our observations.  In future versions, we intend to enclose The Box and give it consistent lighting, so that environmental factors such as shadows cannot affect the read gesture (a significant problem for participant 3).  Additionally, if we were able to spend significant time revamping the technology, a depth camera such as a Kinect would alleviate the lighting issues (and obviate the need for the black glove used in our current prototype). In the final version of the product, we want the Chrome plugin to be the single point of interaction; currently, a terminal window is required to start the handshake reading process. We would also like to add facial recognition to the user selection process, because typing in a username, as in the current design, is likely slower.  One last interaction mechanism we should implement is feedback as users enter their handshake: users appeared to rely on the terminal printout to signal when a gesture had been read in, allowing them to move on to the next gesture.  We would like the Chrome plugin to provide a visual “ping” that indicates a gesture has been recorded, as sketched below.
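
One way that “ping” could eventually be wired up (this is a sketch only, not something the current prototype does) is for the recognition program to act as a native messaging host for the Chrome extension. Chrome’s native messaging protocol frames each JSON message with a 4-byte length prefix on stdout; the event name and fields below are hypothetical.

// Sketch: notify the Chrome extension that a gesture was recorded, using
// Chrome's native messaging framing (32-bit message length in native byte
// order, followed by the UTF-8 JSON body, on stdout). The host would still
// need to be registered with a native messaging manifest; the message
// fields here are made up for illustration.
#include <cstdint>
#include <cstdio>
#include <string>

void sendToExtension(const std::string& json) {
    uint32_t len = static_cast<uint32_t>(json.size());
    std::fwrite(&len, sizeof(len), 1, stdout);          // length prefix
    std::fwrite(json.data(), 1, json.size(), stdout);   // message body
    std::fflush(stdout);
}

int main() {
    // e.g., called by the recognizer each time a gesture is read with confidence
    sendToExtension("{\"event\":\"gesture_recorded\",\"index\":1}");
    return 0;
}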

Appendices

Link to Consent Form

Link to Demographic Questionnaire

Link to Demo Script

Link to Questionnaire Responses

Link to Second-by-Second Notes

Link to Code

L2 – Dohan Yucht Cheong Saha

Team Members:

  • Andrew Cheong (acheong@)
  • David Dohan (ddohan@)
  • Shubhro Saha (saha@)
  • Miles Yucht (myucht@)

Group Number: 21

Description

For our project, we built an instrument that allows a person to conduct the tempo of a piece of music by walking. The primary components of the instrument are piezos attached to each foot and two buzzers. Taking a step moves the song along to the next note, and each song can have a bass and a treble part playing simultaneously. We built this project because we like the idea of being able to control existing music through motion. We are pleased overall with the final product because it works as we envisioned and is actually quite fun to use. There are a number of improvements we would like to add, including the ability to switch between songs and to read music from arbitrary MIDI files. Additionally, the device is somewhat difficult to attach to your body; it could be made more portable by integrating a piezo sensor into a shoe (along with a battery pack for the Arduino). We are also limited to songs that change on each tap of the foot, as opposed to songs that play several pitches per beat. Other possibilities would be to synthesize the sound in Processing, or to use a similar interface to create music (e.g., a different drum for each foot) rather than to control existing music.

Prototypes

Before building our final product, we built three separate prototypes, the third of which led to our final product.

Instrument 1 – Conducting a Choir

When building our first prototype, we set out to control the volume and pitch of a song by raising and lowering our hands (as if conducting a musical group). While the final result worked, it did not perform as well as we had hoped. The main issue is that it is difficult to estimate changes in position from an accelerometer: position must be recovered by integrating the acceleration twice, so noise and drift accumulate quickly (although the sensor naturally works very well for detecting actual accelerations).

Instrument 2 – Anger Management

We liked the idea of using the force of a knock on a piezo to control sound. In this project, hitting your computer results in it playing sound back at you that corresponds to how hard you hit it.

Instrument 3 – Walking Conductor

This prototype is the core of our final instrument and is composed of piezos attached to each foot and a beeper. Knocks (e.g. steps) on the piezos trigger the next note in a melody to play.

 

Final Instrument

We decided to modify the walking conductor instrument to allow multiple parts to play at once. The control mechanism remained the same, but the addition of a second beeper allows us to play a bass part at the same time as the melody.

Parts list:

  • 2 piezo elements
  • 2 beepers
  • 2 rotary potentiometers
  • 2 1-megaohm resistors
  • 1 Arduino Uno
  • 4+ alligator clip cables
  • Electrical tape

Assembly Instructions:

  1. On a breadboard, connect one end of one resistor to analog pin A0 and the other end to ground. Connect one end of the other resistor to analog pin A1 and the other end to ground.
  2. Connect one piezo element in parallel with each resistor, attaching it to the breadboard using the alligator clip cables.
  3. On a breadboard, connect pin 1 of one potentiometer to digital pin 3, pin 2 to one beeper, and pin 3 to ground. Connect pin 1 of the other potentiometer to digital pin 6, pin 2 to the other beeper, and pin 3 to ground.
  4. Connect the other pins on each beeper to ground.
  5. Attach the piezo elements to your shoes using electrical tape.
  6. Run around to your favorite song!

Source Code Instructions:

  1. This code makes use of the “Arduino Tone Library”, which allows one to play multiple notes simultaneously on the Arduino. It is required to run our code, so download it using the link above. The library comes from the Rogue Robotics project.
  2. Download our code below, and begin controlling Thus Spoke Zarathustra as you walk! Different music can be played by replacing the trebleNotes and bassNotes arrays with different note sequences.
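
For example, a hypothetical replacement melody (not part of this project) might look like the arrays below; any pair of equal-length arrays built from the NOTE_ constants in pitches.h, with REST for silence, will work:

// Hypothetical replacement: a one-octave C major scale over a simple bass.
// Both arrays must have the same number of entries; playNextNote() derives
// the note count from trebleNotes.
int trebleNotes[] = {NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4,
                     NOTE_G4, NOTE_A4, NOTE_B4, NOTE_C5};
int bassNotes[]   = {NOTE_C3, NOTE_C3, NOTE_C3, NOTE_C3,
                     NOTE_G3, NOTE_G3, NOTE_G3, NOTE_C3};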

Code:

/*
 * COS 436 Lab 2: Sensor Playground
 * Author: Miles Yucht, David Dohan
 * Plays "Thus Spoke Zarathustra" by Richard Strauss on beats measured by piezo elements.
 */
#include "pitches.h"
#include "Tone.h"
#ifdef REST
# undef REST
#endif
#define REST -1

Tone bassTone;
Tone trebleTone;

int treblePin = 3;
int bassPin = 6;
int piezoPin1 = A0;
int piezoPin2 = A1;

// Minimum analog reading on a piezo that counts as a step/knock
int threshold = 50;

// Index of the next note to play in trebleNotes/bassNotes
int currentNote = 0;

// Delay (ms) after each note so a single step does not trigger several notes
int pauseTime = 100;

//melody to Thus Spoke Zarathustra
int trebleNotes[] = {NOTE_C4, NOTE_G4, NOTE_C5, REST, NOTE_E5, NOTE_DS5, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, 
    NOTE_C4, NOTE_G4, NOTE_C5, REST, NOTE_DS5, NOTE_E5, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, NOTE_C4, NOTE_G4,
    NOTE_C5, REST, NOTE_E4, NOTE_A4, NOTE_A4, NOTE_B4, NOTE_C5, NOTE_D5, NOTE_E5,
    NOTE_F5, NOTE_G5, NOTE_G5, NOTE_G5, NOTE_E5, NOTE_F5, NOTE_G5, NOTE_A5, NOTE_B5, NOTE_C6, REST};

//bassline to Thus Spoke Zarathustra
int bassNotes[] = {NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, 
NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, 
NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_C4, NOTE_C4, NOTE_G4, NOTE_E4,
NOTE_C4, NOTE_G3, NOTE_G3, NOTE_E3, NOTE_A3, NOTE_G3, NOTE_C4, REST};

// Block until either piezo reads above the threshold (i.e., a step is taken).
void waitForKnock() {
  int sense1, sense2;  
  while (true) {
      sense1 = analogRead(piezoPin1);
      sense2 = analogRead(piezoPin2);
      if (sense1 > threshold || sense2 > threshold)
        break;
      /*
      Serial.print(threshold - sense1);
      Serial.print(", ");
      Serial.println(threshold - sense2);
      */
    }
    /*
    Serial.println(sense1);
    Serial.println(sense2);
    */
}

// Play the current note on the given tone generator, or silence it on a REST.
void playTone(Tone &tone, int notes[]) {  // pass by reference so the Tone object is not copied
  if (notes[currentNote] == REST)
    tone.stop();
  else
    tone.play(notes[currentNote]);
}

// Advance both voices by one note, wrapping back to the start after the last one.
void playNextNote() {
  int numNotes = sizeof(trebleNotes) / sizeof(trebleNotes[0]);  // melody and bassline have the same length
  if (currentNote < numNotes) {
    playTone(trebleTone, trebleNotes);
    playTone(bassTone, bassNotes);
  } else {
    currentNote = -1;  // reset; incremented back to 0 below
  }
  currentNote++;
}

void setup() {
  trebleTone.begin(treblePin);  // melody voice on pin 3
  bassTone.begin(bassPin);      // bass voice on pin 6
  /*
  Serial.begin(9600);
  */
}

void loop() {
  waitForKnock();    // block until a step is detected on either foot
  playNextNote();    // advance the melody and bassline by one note
  delay(pauseTime);  // short pause so one step registers only once
}

Assignment 2 – David Dohan

Observations

I observed students and professors before class over a two-week span, focusing on MUS103, SOC204, and COS340.

  • MUS103 lecture
    • Most people on laptops – lots of email clients open
    • People beeline to clusters of friends / just talk
    • Others read over the class handouts for the day
    • See several reddit windows open
    • Handful of people shuffling through handouts from past weeks
    •  Observed one student playing Tetris, another was reading manga
  • COS340
    • Extremely sparse until 5 minutes before class – most people talking to friends
    • Many people discussing pset with their group / talking with friends
  • SOC204 – 10am class – few people arrive early
    • Very few people arrive early for morning class
    • See two people reading news
    • One on a kindle reading a book
    • Several attempting to cram the assigned reading
    • Professor arrives about 5 minutes early. Lays out folders to collect homework. Spends remaining time setting up laptop, chatting with preceptors, and generally standing at front of room
  •  General observations
    • Reading over/reviewing past lecture slides/notes
    • Snacking – generally sandwiches
    • Lots of people looking at their phones. Seems to be little interaction outside friend groups.
    • See a few people napping right before some classes (primarily in afternoon classes)
    • Lots of community auditors in the back of classes
    • One or two students usually go ask the professor a question before class
    • See PFML pop up a few times
    • Talking to a few students showed that many were rushing between classes and did not have much free time for large blocks of the day

Brainstorming

With Shubhro Saha and Andrew Cheong

  1. Pair people up to review each other briefly before class
  2. Spaced repetition learning with flashcards tailored to collaborative card creation within a class.
  3. Competitive quiz app for students who arrive early.
  4. Class to-do list that pulls due dates and readings from Blackboard or course websites and presents them in order of due date.
  5. Competitive games (e.g. speed chess/checkers/etc.) with others inside the classroom
  6. Order food so it is ready to pick up on way to class
  7. Guided meditation app tailored to time you have before class begins.
  8. Workout app tailored to time before class begins.
  9. Collaborative playlist app for students who are in the class early. Could optionally interface with classroom speakers.
  10. Complete psych studies a few minutes at a time instead of going in for blocks of time.
  11. App that takes in the free food listserv and lets you know if there is anything along your way to class
  12. News summary app to catch up with the outside world
  13. Local chatroom for students in the class
  14. Campus wide virtual whiteboard for chat and other interactions. Provides a forum other than pfml.
  15. Students can give mini lectures on topics before class (not necessarily related to class).
  16. Collaborative puzzles for the classroom (crosswords etc.)
  17. App to facilitate students answering each other’s questions before class, with the teacher starting class by addressing any remaining questions.

Favorite Ideas

  • Notable – a class review app
    • A spaced repetition notecard app that makes it very easy to collaborate on cards within a class. Should be simple to find cards other people in the class created and share your own.
  • Tiger Nap – a guided meditation app
    • The app provides audio for guided meditation that lasts until the class begins and slowly wakes you up. Also has a nap mode that can generate sound to drown out distracting noises. Should be as simple as possible – only needs a single button to start nap/meditation.

Prototype Pictures

TigerNap: (prototype photo slideshow)

Notable: (prototype photo slideshow)

Feedback


  • David Bieber – COS ’14
    • Left out back button on the “deck” page [now added]
    • Need to make it clear how checks behave after going back – do they clear? Do they hold previous state?
    • On the find cards page, make it clear what each column does. Does clicking the user column list all cards from a user or just select that card?
    • Make it clear what a shared deck is
    • Add an option to cram certain cards by categories/topic/when they were added
    • Don’t mix metaphors – the Q and A on the quiz page don’t match with all card types
    • Possibly auto generate cards from notes and/or syllabus
    • Need a web/computer interface to make typing in cards easier if auto generation is not possible. Typing on a phone is tedious
    • Add a simple way to delete cards/decks [now added]
  • Harry Cape – CHM ’15
    • Confusion about buttons on the add/edit page. Does “share” share the current card? Does “add” add the current card and then bring up a blank card?
    • Easy/Hard/Repeat makes sense immediately
    • There are multiple ways to reach some pages, which can make it very confusing.
    • Find card page should allow user to ‘zoom into’ a card (pop it up in a larger size)
    • Allow to sort by username on find page
    • Way to share multiple cards at once
  • Clayton McDonald – MAT ’15
    • Like Harry, thought Easy/Hard/Repeat made intuitive sense
    • The back button on “add/edit” and the edit card list should always go back to the deck page. Currently it is defined in such a way that loops are possible, so a user could have to press back many times to return to the deck page.
    • Combine add/edit into a single listing
    • Review all – Should go directly to card review page instead of to deck page
    • Should have a clear button on the add/edit page to immediately clear current card
    • Add the ability to tag cards by keyword. This allows searching by keyword when trying to study a specific topic or finding cards from other people.

Insights

The major insight from testing is that I need to simplify my design wherever possible. While most of the buttons make sense, having multiple ways to reach the same page can confuse the user. Users also disagree about exactly what they want: Harry liked the combined add/edit pages, while Clayton suggested separating them.  The 5 buttons on the add/edit page were especially confusing since there are many possible behaviors that would make sense.

One feature that I would definitely want in the next iteration is the ability to tag cards and cram by tag and age. It is great to use spaced repetition techniques for people who do a little bit of studying every day (which is the main use case of the app), but the ability to concentrate studying is definitely helpful.