P6 – Team TFCS

Basic Information (a,b,c)

Group Number: 4
Group Name: Team TFCS
Group Members: Collin, Dale, Farhan, Raymond
Summary: We are making a hardware platform which receives and tracks data from sensors that users attach to objects around them, and sends them notifications, e.g. reminders to build and reinforce habits.

Introduction (d)

We are evaluating the user interface of an iOS application for setting up and managing sensors. (From a design perspective, this is the most design-dependent part of the project, in which we construct the appropriate affordances and mental models for users to interact with our sensors.) Our app uses Bluetooth Low Energy (BLE) to connect to physical sensors which users attach to everyday objects. Each sensor is associated with a task when it is set up within the app. Then, the app logs when the task is performed, sends reminders when the task is not performed, and displays to the user their history in completing the task – and by extension, how successful they have been at maintaining a habit or behavior. Our P6 experiment is targeted towards evaluating the intuitiveness and accessibility of the sensor setup user interface, and gaining a better understanding of what tasks users would track using our app. This builds upon our P4 evaluation, which revealed several problems with the reminder model we presented to users, motivating us to rethink and simplify how sensors are set up.

Implementation and Improvements (e)

Link to P5 blog post

There were some changes made to the prototype since P5:

– A view to add new tasks was added. This view allows users to choose a sensor tag and then specify the task they are trying to monitor, and the frequency with which they want to use it.
– A webview for viewing statistics on tasks was added. This view is used to show the logs of tasks completed/missed by the user as graphs and charts. This helps the user track their progress in creating the habit and performing the task.
– Our backend server uses Twilio’s text service instead of APNS to send reminder notifications to the user. This is a simplification that lets us get by without an Apple developer account.
– In addition to sending alerts when a task is ignored, our backend tracks how frequently the user does complete tasks. We hope to use this data in the future for a proof-of-concept gamification feature.
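As a sketch of the Twilio change, the reminder path might look like the following; the message wording, helper function names, and credentials are placeholders rather than our production code:

```python
# Sketch of the backend reminder path: format a message for a missed
# task, then send it as a text. The wording and credential names are
# illustrative placeholders, not our production values.

def reminder_body(task_name, hours_since_last):
    """Build the reminder text for a task the user has not completed."""
    return ("Taskly reminder: you haven't done '%s' in %d hours."
            % (task_name, hours_since_last))

def send_reminder(task_name, hours_since_last, to_number):
    body = reminder_body(task_name, hours_since_last)
    # Sending would use Twilio's Python helper library with real
    # account credentials, roughly:
    # from twilio.rest import Client
    # Client(ACCOUNT_SID, AUTH_TOKEN).messages.create(
    #     to=to_number, from_=TWILIO_NUMBER, body=body)
    return body
```

Because Twilio only needs an HTTP request with account credentials, this sidesteps the Apple developer account and device-token provisioning that APNS would require.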

Method (f)

Participants: Our participants were students who came to the Terrace library on the night of our test. A large number of students from different demographics were present at the time, and choosing one location enabled us to maintain a more consistent testing environment. Additionally, we found that the library was an excellent setting for finding target users: busy students who must maintain a consistent schedule amidst chaotic demands (see P4). To conduct the tests, we matched each task with a participant who expressed interest in developing a routine around that task. One participant wanted to make a habit of going to the gym, another wanted to do a better job of reading books on a regular basis, and a third was selected randomly, since we did not find an appropriate way to ask students about their medication usage before asking them to participate in our study. Two were club members and one was independent.

Apparatus: We conducted all tests on the large table in the Terrace library. Our equipment included:

– An iPhone with the app installed
– Three sensor tags
– Three custom-made enclosures for the sensor tags, one for connecting the tag to a book, another for clipping to a bag, and a third similar one for sticking onto a pill box

Custom-made enclosure for attaching sensor tag to a textbook.


Custom-made sensor tag enclosure to attach to bags.


The sensor tag is protected within the felt pouch.


– A textbook for our textbook tracking task
– A pencil case representing a pill box
– A computer which volunteers used to fill out our survey

Tasks: Our easy task tracks trips to the gym using the accelerometer built into our sensor tags. This task is relatively easy because the tracker only needs to be left in a gym bag, and it uses accelerometer motion to trigger an event. For our medium task, we track a user’s medication usage by tagging a pill box. This task is of medium difficulty because only certain changes in measured data actually correspond to task-related events; we have to filter our data to find out when events have really occurred, introducing ambiguity in the user experience. Finally, for our hard task, we track the user’s reading habits by tagging textbooks. This is the hardest task for us to implement because of the complexity of connecting a sensor tag to an arbitrary book in a way that is intuitive to the user. We also have to filter our data to trigger events only in the appropriate cases, just as in our medium task.

These three tasks are worth testing because they require different enclosures and setup procedures for pairing a sensor tag to the object of interest. Since each task’s setup process involves the same UI, we expected to learn something about the comparative difficulty of each physical sensor enclosure and physical interface, in addition to how intuitive users found the setup process and reminder model. (These tasks remain unchanged from P5.)
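The event filtering mentioned for the medium and hard tasks can be sketched as a threshold with a refractory period over accelerometer magnitudes; the threshold and window values here are illustrative, not our tuned parameters:

```python
def detect_events(samples, threshold=1.5, refractory=10):
    """Return the sample indices at which a task event is triggered.

    samples: accelerometer magnitudes over time. A reading above
    `threshold` triggers an event, and further triggers are suppressed
    for `refractory` samples, so one pill-box opening or book pickup
    is logged as a single event rather than a burst.
    """
    events = []
    cooldown = 0
    for i, magnitude in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1
        elif magnitude > threshold:
            events.append(i)
            cooldown = refractory
    return events
```

The refractory period is what reduces the ambiguity described above: a single physical action produces many raw readings, but only the first crossing is counted.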

Procedure: Our testing process followed the script provided in the appendix. First, we explain the experiment to the user and obtain consent for their participation and for video recording. Then we ask them to sign our consent form and take a brief demographic survey. Next, one of the team members explains what the app tries to accomplish and introduces the user to the physical sensor tags. Then the user goes through the three tasks outlined above. For each task, we explain the objective, for example: “the next task is to try and track how often you go to the gym by tracking movement of your gym bag”. The user is then given the sensor tags, the appropriate item (gym bag, book), and the iPhone with the app loaded. We then observe and record the user’s interaction with the system as they attach the sensor tag to the object, pair it with the app, and add a new task. Because the users were not told exactly how to use the app or set up the tags, we noted how their actions differed from what we expected as they went through the setup stages. To simulate receiving a notification, we allowed the user to leave, sent them a notification within a 20-minute window, and used that as a reference point to get feedback about the notification system. Finally, we gave users a post-evaluation survey, as linked in the appendix.

Test Measures (g)

Our dependent variables originated from two parts of our test – timing of user interaction during the evaluation, and user-reported difficulty in the post-evaluation survey.

– Time taken by users to complete setup of a sensor tag through the iPhone app (but not to physically attach the sensor tag)
– Time taken by users to attach the sensor tag (NB: users attached/set up tags in different orders for counterbalancing purposes)
– Setup accuracy. We broke the process of tracking a task into stages (eight on the first task, five on subsequent tasks) as follows:
Step 1: Begin Pairing Screen
Step 2: Enable Bluetooth Screen
Step 3: Check Bluetooth is Enabled (only on first task)
Step 4: Navigate Back to Taskly (only on first task)
Step 5: Pair sensor
Step 6: Select sensor (sensor succeeded in pairing)
Step 7: Complete new task screen
Step 8: Actually attach sensor

We then gave each user an “accuracy” rating: the ratio of correct actions taken to total actions, across all steps. This became a useful measurement in telling us which setup stages users found most confusing.

– Satisfaction with physical form and attachment of the sensor tags (were they too intrusive?).

– Satisfaction with the notification system. Specifically, we wanted to measure how users felt about the intrusiveness of the notifications when they forgot to do a task, or, conversely, whether the notifications were noticeable enough.

– The difficulty in setting up different tasks, as surveyed. It was important to test if the process of setting up tasks and attaching physical sensors was too complicated, since that was an issue we ran into during P4.
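As an illustration, the accuracy rating described above could be computed as follows; the per-step action counts are invented for the example, while the step names come from the setup stages listed earlier:

```python
def accuracy(step_actions):
    """Accuracy rating: correct actions over total actions, summed
    across all setup steps. `step_actions` maps step name -> (correct, total)."""
    correct = sum(c for c, t in step_actions.values())
    total = sum(t for c, t in step_actions.values())
    return correct / total

# Hypothetical single-user log (counts invented for illustration):
log = {
    "Begin Pairing Screen": (1, 1),
    "Enable Bluetooth Screen": (1, 2),   # one wrong tap before success
    "Pair sensor": (1, 1),
    "Select sensor": (1, 1),
    "Complete new task screen": (3, 4),  # one field filled incorrectly
    "Attach sensor": (1, 1),
}
# accuracy(log) gives 8/10 = 0.8 for this user
```

Breaking the rating down per step, rather than reporting only the overall ratio, is what lets us point at the specific screens users found most confusing.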

Video:  http://youtu.be/aJ6ZIN6jzLc

Results and Discussion (h)

– We found the time it took for users to set up each sensor to be within our desired ranges. Users took an average of 51.33 seconds to go through the iPhone application and set up tracking of a single task, with a standard deviation of 14.72 seconds. We also observed that the average setup time decreased with each task the user set up: users took on average 64 seconds to set up a task the first time, compared to 39.3 seconds on the third time through. Regardless of which task the user started their test with, the average time taken to set up the tasks decreased from the first task to the third. This reinforces our hypothesis from P5 that our application interface should be agnostic to the different tasks or actions being created. This was a change from P4, which had interface elements specific to each task, as opposed to P5, where we tried to create a unified interface for creating any task.

– To physically attach sensor tags, users took an average of 35.33 seconds (with a standard deviation of 12.91 seconds) across all tasks. We found, however, that while users very quickly set up the sensor tag to track reading a book, they often did so incorrectly. We gave users an elastic band enclosure that was designed to keep the sensor tag attached to the cover of the book, but users were confused and slipped the band around the entire book. Most users said they would have preferred attaching the book sensor with a double-sided sticker, as they had with the pill box sensor. This was confirmed in the survey, where all users indicated the book sensor tag was “Slightly Intrusive.”


– The notion of using different sensors on each tag to track individual tasks was confusing to most people; this was indicated in the “What did you find hardest to accomplish?” section of the survey. They were unable to understand what each individual option meant, since we only provided the name of the sensor (accelerometer, gyroscope, or magnetometer) and asked the user to choose from them. Based on the results, we will simplify the motion-tracking sensors, combining use of the accelerometer and gyroscope into the rough class of motion-triggered sensors, and offering the user the choice between motion-triggered and magnet-triggered sensors. We will also provide the user with approximately one sentence explaining the difference. A more advanced approach would be to allow the user to train a machine learning algorithm by performing the action to be tracked several times; we could then learn which sensor values correspond to each task. We could also allow the user to specify types of tasks (e.g. moving bags, book openings) to make training more effective. However, this introduces significant complexity for relatively little benefit in our project.

– Users found the physical tags to be generally unintrusive. One of the things we were trying to test was how comfortable people were with using these not-so-small tags on everyday objects, and the general consensus on this question suggests that users are willing to accommodate these tags despite their size and clunkiness. This might also be a result of the preferences of the users we tested: most of them currently use text files, the backs of their hands, and email as task reminder systems. Those are the lowest-friction and most primitive task reminder systems that we asked about, which suggests that these users would be interested in a low-friction system in which objects themselves remind the user that they have not been used.



– Almost everyone was confused by the physical setup process in which the sensor tag was attached to the book. We intended for users to wrap the tag enclosure’s band around the outer cover of the book. People responded that they found setting up each tag on a book very easy, but performed the setup incorrectly, putting the band around the whole book so that they had to remove the band in order to read it. Two did so vertically, and one horizontally; this suggests that users did not go through our thought process of determining how the sensor tag would be used while reading the book. This could indicate a disparity between their understanding of how the system works and what they were using it for, but from observations during the tests, we found it more likely that users were simply not paying close attention to the task. The result could be that users would fix the sensor attachment upon actually reading their books, or that they would remove the sensor enclosure when reading and forget to replace it. After the test, users suggested that we let them stick the tag onto the book instead of using the band. Based on this recommendation, and the fact that users set up the other two tasks exactly as expected, we will focus on lighter and simpler stick-on/clip-on enclosures for future applications. (This was one of the dependent variables we set out to measure.)

– No users indicated the setup process was “slightly hard” or “extremely hard”. However, users only indicated that their likelihood of using our system is “somewhat likely” or “not likely”. We would have benefited from offering a broader spectrum of choices on both of these questions. However, the results that we gathered suggest that we are close to providing a significant benefit to users, and that we no longer need to make significant changes to the premise of our system or our reminder model; we are now at a point where we should focus on increasing the overall ease of use of the application and sensors, and making the utility of individual use cases more apparent.

– Notification system

Another aspect of the product that we set out to test was the notification system. We asked users how intrusive they felt the reminders were, or whether, conversely, the reminders were not noticeable enough.


These survey responses indicated that people were generally satisfied with the information they received in reminders. One good suggestion was to include the time since the user’s last activity in the notification. Finally, the user who rated receiving reminders as “annoying” suggested that we use push notifications for reminders, since they did not expect to receive reminders as text messages. This is a change we plan on making for the final product: the notifications will carry the same information, but be sent to the user’s iPhone as a push notification (using Apple’s APNS) instead of a text message.

– Further tasks to incorporate

Finally, we tried to gain insight into additional use cases for our system. Users provided several good suggestions, including instrument cases and household appliances (e.g. lawnmowers). These could form the basis for future tasks that we could present as use cases for the application.

Appendix (i)

Consent Form
Demo Script
Raw Data
Demographic Survey
Post-Evaluation Survey

P6 – Dohan Yucht Cheong Saha

Group # 21

  • David
  • Miles
  • Andrew
  • Shubhro

Oz authenticates individuals into computer systems using sequences of basic hand gestures.


In this experiment, we’re collecting data regarding test usage of a hi-fi prototype of our handshake authentication system. The data we’re interested in collecting are total time necessary to complete the tasks, number of errors during each use, and user-indicated scores of usability after they participate in the study. The hi-fi prototype has advanced quite a bit since P5 (see details below), so we can now have a more precise understanding of how users tend to interact with and learn about our handshake authentication system.

Implementation and Improvements

Link to P5 Submission

Our working prototype has changed greatly between P5 and P6. There is no longer a Wizard-of-Oz requirement for checking the veracity of the handshake; the algorithm does it all. In addition, we made improvements in our box/camera design that involve a green background to better distinguish hand gestures (with a black glove). Finally, the experience is capped with full browser integration via a Chrome extension, so the PDF presented in P5 is now rendered irrelevant; we have the real experience. The main remaining limitation of our system is that we limit hand gestures to a small set for the moment (each individual finger, fist, peace sign, and a handful of others). We also had users wear a black glove for simplicity of hand recognition, but this can be removed in future versions without significant technology changes.
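To illustrate the veracity check at the core of the algorithm, here is a minimal sketch of how a recognized gesture sequence might be compared against the stored handshake; the gesture labels and the exact-match policy are simplifying assumptions, since the real system first classifies gestures from webcam frames of the gloved hand:

```python
# Sketch of the handshake check, after the recognizer has already
# labeled each captured pose. Gesture names are illustrative.

GESTURES = {"fist", "peace", "index", "thumb", "pinky", "open_palm"}

def verify_handshake(entered, stored):
    """Accept only an exact, in-order match of recognized gestures."""
    if any(g not in GESTURES for g in entered):
        raise ValueError("unrecognized gesture")
    return entered == stored

stored = ["fist", "peace", "index"]   # sequence set at registration
```

Registration would run the same capture path twice (set and confirm) and persist `stored` for the account; login then calls `verify_handshake` on the freshly captured sequence.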


Our participants for our prototype testing were students from Princeton University, whom we came across while walking through Whitman College. Based on our demographic questionnaire, we learned that participant 1 is a male Woodrow Wilson School major who has no exposure to hand gesture recognition software. Participant 2 is a female who is also in the Woodrow Wilson School and has a little exposure to hand gesture recognition software. Lastly, participant 3 is a male Chemistry major with no exposure to hand gesture recognition software. These participants are valid candidates from our target user group, whose members often log into computer clusters and web accounts on a daily basis.


Our study was carried out in a campus study room in which we had a computer set up with the Oz system.  The main components of the system are a colored box and webcam (replacing our previous Leap Motion) to capture hand motions. As discussed above, we also have users wear a black glove to simplify hand capture.


We changed our tasks from our low-fi prototype testing in P4.  For this test, we used logging in, initial registration, and password reset as our easy, medium, and hard tasks respectively.  We created a test Facebook account for our participants to use.  In the registration task, participants set and confirmed an initial handshake to go with a Facebook account. Prior to setting the handshake, the user was required to insert his/her original username/password combination (which we provided) to login. In the password recovery task, the participants used our password reset feature, which sends a verification code to the user’s phone, to verify their identity and to allow them to set another handshake. The last task had participants complete a login from beginning to end using the handshake they set in previous steps.  We chose to replace profile selection with initial registration because profile selection is included in the login process.


For each participant, we began by obtaining informed consent and then explaining the goals of the system and the basic idea behind it as described in our demo script.  We then had them complete the tasks in order of initial registration, password reset, and finally user login.  Because our system is not completely integrated (switching between windows is required), David acted as the Wizard of Oz for steps that required switching between the browser and our recognition program.  Andrew guided the participants through the tasks, Miles recorded a time-stamped transcript of incidents and anything the participants said, and Shubhro recorded and took pictures of our trials.  At the end of the three tasks, each participant was asked to fill out a brief survey providing feedback on the system.

Test Measures

We’re measuring the following statistics for the indicated reasons:

  • Total time to complete each task
    • Speed is one of the top motivators for creating a handshake based authentication system. The goal is to be, in many instances, faster than typing in passwords on a keyboard for the same level of security.
  • Number of errors in each task-participant instance
    • In tandem with speed is the number of points at which a user is confused during the system’s use. This should be minimized as much as possible.
  • Participant scores from 1 to 5, with 5 being the highest:
    • Ease of use
      • For obvious reasons, we want our users to leave the product feeling they had a non-challenging experience
    • Speed of authentication
      • The user’s sense of speed is important, as it may actually be different from real time spent
    • Clarity of expectations and experience
      • If the user is confused about what to do with the authentication system, this should be addressed with documentation and/or prototype adjustments.

Results and Discussion

Quantitative: We had both quantitative and qualitative results from our tests. The original qualitative results are linked to in our second-by-second notes and questionnaire responses (see below). The quantitative results are summarized below:

Task (Time, # of Errors)    Participant 1    Participant 2    Participant 3    Mean
Registration                (1:55, 1)        (2:04, 1)        (1:53, 2)        (1:57, 1.3)
Handshake Reset             (2:10, 2)        (3:06, 2)        (4:00, 4)        (3:05, 2.6)
Login                       (0:15, 0)        (0:30, 0)        (0:20, 0)        (0:21, 0)


[Chart: participant ratings, 1 to 5, for Ease of Use, Speed of Authentication, and Clarity of Expectations]
The time measurements from these trials are in line with what we expected. Handshake reset took the most time of the three tasks, followed by registration and then login. User login, at a mean of 21 seconds, is longer than we would like, but we expect this number would become more reasonable as users grow more familiar with handshake authentication systems. That there were no errors during the login process is a testament to the general accuracy of our gesture recognition.

It is interesting that the first two participants rated their experience considerably higher than the final participant. This can probably be attributed to the difficulties the final participant had with shadows in the box affecting the accuracy of gesture recognition in the non-login processes.

General observations: It seemed that Oz had a somewhat steep learning curve. As shown in our data, all of our test subjects mis-entered a handshake at least once, for several reasons. First, our explanation in the script wasn’t clear enough on the limitations of the current prototype (e.g., it can’t take into account hand rotation) or on how to use the prototype properly (e.g., subjects didn’t insert their hand into The Box far enough). Consequently, Oz misinterpreted hand gestures relatively easily, and it often took several tries for the users to enter the same handshake twice in a row. Additionally, during the testing we realized how important lighting is to the overall usability and accuracy of the system: shadows cast onto the hand or into The Box were often disruptive and resulted in inaccurate measurements. However, over the course of the testing, all three users came to understand the prototype well enough to use it fluently for the final test.

During the testing, users reacted to the system with phrases such as “Cool!,” “Awesome!,” and “Aw, sweet!,” even though they sometimes struggled to become acquainted with the handshake system. We suspect this is because, though hands-free peripherals for computers have existed for several years (e.g., webcams, headsets), hands-free interfaces for controlling computers are relatively novel and unused. An interesting observation from our test trials was that every user set three hand gestures for their final handshake without our influence. It would appear that three gestures may be the ideal number for users, though more testing data is needed to justify this claim. It is also possible that users would use much longer handshakes if prompted to do so, just as many existing websites require a password of a certain length.
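A quick back-of-envelope calculation shows why handshake length matters for security; the figure of roughly ten distinguishable gestures is an assumption for illustration (our current set is smaller):

```python
def handshake_space(num_gestures, length):
    """Number of possible handshakes: gestures raised to the length."""
    return num_gestures ** length

# With ~10 distinguishable gestures (an assumption), a 3-gesture
# handshake has 10**3 = 1000 possibilities, comparable to a 3-digit
# PIN. This is one argument for prompting users toward longer
# handshakes, much as websites enforce minimum password lengths.
```

Each added gesture multiplies the search space by the alphabet size, so even a modest minimum length requirement changes the security picture substantially.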

Possible changes: There are several steps we can take in response to our observations. In future versions, we intend to enclose The Box with consistent lighting to keep environmental factors such as shadows from affecting the read gesture (a significant problem for participant 3). Additionally, if we were able to spend significant time revamping the technology, using a depth camera such as a Kinect would alleviate issues surrounding lighting (and obviate the need for the black glove used in our current prototype). In the final version of the product, we definitely want to make the Chrome plugin the one point of interaction; currently, a terminal window is required to start the handshake reading process. We would also like to have facial recognition during the user selection process, because typing in a username, as we have it in the status quo, would probably be slower. One last interaction mechanism that we should implement is feedback as users enter the handshake: users appeared to rely on the terminal printout to signal when a gesture was read in, allowing them to move on to the next gesture. We would like the Chrome plugin to provide a visual “ping” indicating that a gesture has been recorded.


Link to Consent Form

Link to Demographic Questionnaire

Link to Demo Script

Link to Questionnaire Responses

Link to Second-by-Second Notes

Link to Code

P6 – Runway

Team CAKE (#13): Connie, Angela, Kiran, Edward

Project Summary

Runway is a 3D modeling application that makes 3D manipulation more intuitive by bringing virtual objects into the real world, allowing natural 3D interaction with models using gestures.


We describe here the methods and results of a user study designed to help us with the design of our 3D modelling system. As described in further detail below, our prototype system provides many of the fundamental operations necessary for performing 3D modelling and viewing tasks, all taking advantage of gesture tracking aligned with our mid-air display. The purpose of this experiment is to evaluate the usability of our prototype system, more specifically to determine if there are any unanticipated effects of using our stereoscopic and gesture components together. We are performing this user test because observing actual users interacting with our system in its intended use cases will provide more useful insights on how to improve our system, compared with more artificial prototypes like the low-fi prototype of P4 or with more directed experiments such as only testing stereoscopic perception or gesture recognition.


Our implementation does not significantly differ from its P5 state. We spent the week fixing minor bugs in mesh handling and performance, since these were most likely to affect the user experience.



Our three participants were all undergraduate students here at Princeton, from a variety of backgrounds. Unlike in P2 and P4, when we specifically sought out users who would be more familiar with 3D modeling applications, here we sought users with a more mundane (or unrelated) set of skills, in order to focus on the usability and intuitiveness of our system. None of our three users was intimately familiar with conventional 3D modeling software, nor did they share a particular field of study (although they did know each other prior to this experiment). From this we hoped to get a wider and perhaps less experienced/professional perspective on how approachable and intuitive our system is to someone who has not had to do these sorts of tasks before.


Hardware used for system:

  • 120Hz 3D Stereoscopic Monitor (Asus VG278H)
  • Nvidia 3D Vision Pro Kit (USB Emitter and Wireless Shutter Glasses)
  • Leap Motion gestural controller
  • Desktop with 3D-Vision compatible graphics card

Additional Hardware for experiment:

  • iPhone for taking photos

This experiment was performed at a desk in a dorm room (this being the location of the monitor and computer).


The tasks cover the fundamentals of navigation in 3D space, as well as 3D painting. The easiest task is translation and rotation of the camera; this allows the user to examine a 3D scene. Once the user can navigate through a scene, they may want to be able to edit it. Thus the second task is object manipulation. This involves object selection, and then translation and rotation of the selected object, thus allowing a user to modify a 3D scene. The third task is 3D painting, allowing users to add colour to objects. In this task, the user enters into a ‘paint mode’ in which they can paint faces various colours using their fingers as a virtual brush.

From user testing of our low-fi prototype, we found that our tasks were natural and understandable for the goal of 3D modelling with a 3D gestural interface. 3D modelling requires being able to navigate through the 3D space, which is our first task of camera (or view) manipulation. Object selection and manipulation (the second task) are natural functions in editing a 3D scene. Our third task of 3D painting allows an artist to add vibrancy and style to their models. Our tasks have remained the same from P5.
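As a sketch of what the first task requires of the software, the following toy camera maps fist gestures to view changes; the class and the gesture-to-delta mapping are illustrative assumptions, not our actual implementation:

```python
import math

class Camera:
    """Minimal camera for sketching our view-manipulation gestures:
    a fist drag translates the view; a fist twist rotates it."""
    def __init__(self):
        self.position = [0.0, 0.0, 5.0]
        self.yaw = 0.0  # rotation about the vertical axis, in radians

    def translate(self, dx, dy, dz):
        # Fist drag: move the camera by the hand's displacement.
        self.position[0] += dx
        self.position[1] += dy
        self.position[2] += dz

    def rotate(self, dyaw):
        # Fist twist: rotations accumulate, so a user can release and
        # re-grab to keep rotating instead of over-twisting one arm.
        self.yaw = (self.yaw + dyaw) % (2 * math.pi)

cam = Camera()
cam.translate(0.1, 0.0, -0.5)   # drag the view slightly closer
cam.rotate(math.pi / 2)         # quarter-turn of the scene
```

Object manipulation works analogously, applying the same kinds of deltas to the selected object's transform rather than to the camera.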


We started by emailing several suitable candidates for our experiment: acquaintances (but not classmates or close friends) who were technically savvy but not extremely experienced with our type of interface. Our email contained basic information about our system, but did not describe any of its specific capabilities in detail. For each of the three participants we obtained, we first gave them our consent form and pre-experiment survey (see below for original versions). The pre-experiment survey asked about demographic information as well as experience with stereoscopic displays, 3D modelling, and gestural interfaces. We then gave them a brief explanation and demo of how our system worked, in which we demonstrated the workflows and fundamental gestures that they had at their disposal. After making sure that they were able to perceive the objects floating in front of them, they performed the calibration workflow and began the three tasks. Throughout the tasks, we had one person “coaching” them through any difficulties, giving them suggestions anytime they seemed to get stuck for too long. This was necessary since it was sometimes difficult to understand the gestures from our demo alone (discussed in more detail below). After finishing the tasks, we performed a brief interview, asking specific questions in order to stimulate conversation and feedback about the system (questions included below).

Subject 1 preparing to paint the object magenta.

Subject 2 preparing to manipulate a vertex.

Subject 3 translating the camera view.

Test Measures

We measured mostly qualitative variables because, at this stage, a lot of quantitative analysis would not be particularly helpful; we are not yet fine-tuning, but rather still gathering information as to what a good interface would be.

  • Critical Incidents: We recorded several incidents that indicated both positive and negative characteristics of our system. This is the most important qualitative data we can collect because it shows us exactly how users interact with our system, and thus illustrates the benefits and drawbacks to our system.
  • Timing: The amount of time it takes for the user to complete the task. This variable works as a preliminary measure as to how intuitive/difficult each task is for the users.

The following measures were obtained through post-experiment interviews. We asked participants to rate them on a scale from 1 to 5, where 1 was the worst and 5 was the best.

  • Ease of Use User Rating: This measure was meant to evaluate how easy the users subjectively found the interface to use; what good is an interface if it’s very hard to use?
  • Difficulty with Stereoscopy User Rating: We are using a 3D screen in our interface. One problem that tends to crop up with 3D screens is that they sometimes hurt the eyes and/or are hard to use. For this reason, we had the users rate how difficult it was to perceive the 3D objects in their locations in front of the scene.
  • Intuitiveness User Rating: A main aspect and important measure of how good a gestural user interfaces is derives from the intuitiveness of the gestures use. This class of interface is called a Natural User Interface (NUI) for a reason, it should simply make sense to the user. For this reason, we included this measure in our assessment of quality.
  • Preference of interface User Rating: In order to truly succeed, the interface we created has to be better than existing user interfaces–if no user would want to actually use the interface we created, then there are clearly problems with the interface. For this reason, we wanted to know if the users thought the interface was useful compared to existing mouse and 2D monitor 3D interfaces.


Results and Discussion

First of all, from the preliminary survey it is apparent that, aside from Subject 2, the group in general had very little experience with gestural interfaces, which made them a good set of people on whom to test the intuitiveness of our gestures.

For the first task of view manipulation (translation and rotation of the camera view), all the users found translation to be easy and intuitive. Subject 1 found it confusing to distinguish between the effects of gestures using fists (view manipulation) and gestures using a pointed finger (object manipulation), and attempted to use object manipulation to complete the task. However, when reminded of the difference, she completed the task using the appropriate view manipulation gestures. She did attempt to rotate continually to achieve a large degree of rotation, which is an awkward gesture for one's arms; after a hint, she realized that she could stop and rotate again. Subject 2 picked up the gestures more quickly and easily for both translation and rotation, though she also attempted to rotate continually for large rotations. Subject 3 also found it a little confusing to distinguish between the fist and finger gestures at the start, and found the ability to rotate objects out of the field of view confusing (with regard to getting them back in view). All of the subjects reported that the interface was easy to use and intuitive; Subject 2 (who used the interface with the most ease) found the gestures to be very intuitive.

For the second task of object manipulation (rotation of the object and object deformation), all the subjects found rotation easier than in the first task, having gotten more used to the gestures. Vertex manipulation to deform the object was also grasped quickly and easily by Subjects 1 and 2; however, Subject 3 did not realize that he needed to point very close to a vertex to select it, though after selecting the vertex, manipulation was easy. Subjects 2 and 3 forgot some of the gestures and needed reminding of which gestures corresponded to which functionality. With regard to remembering gestures, Subject 1 pointed out that having one fist translate and two fists rotate was confusing.

For the third task of object painting (the user is required to color in the sides of the object, rotating it to paint the faces hidden from view), which we expected to be the hardest task, the users surprisingly found the interface very intuitive and easy! Perhaps this was because the task corresponded most directly to a task you would actually perform in the real world, like painting a model; changing the camera's view of the scene is not so much a real-world action, and could be more confusing. Subjects 1 and 2 did not initially realize that they could not paint on faces that weren't visible, and needed to rotate the view to see those faces and paint them.

All the users found it easy to see stereoscopically, which was a pleasant surprise, since in the past some adjustment time has been required before a user could see the stereoscopic 3D objects properly. They also all noted that the instability in detecting fists and fingers (the Leap would often detect a fist where there was a pointed finger) made the interface a bit more difficult to use. This significantly affected the difficulty of rotation, which Subject 3 found difficult enough to suggest that the learning curve might be steep enough that he would likely prefer a traditional mouse-and-keyboard interface for 3D modelling.

Overall, rotation seemed to be the hardest gesture to learn, suggesting that we need to improve our rotation gestures. However, rotation is also the gesture most affected by the instability in Leap gesture detection, which exacerbated its difficulty. Based on our experimentation with the Leap sensor, we have considered replacing our rotation gesture with a palm-orientation-based scheme. Another important issue to fix is that users commonly forget core gestures, especially the distinction between fist and finger gestures. We commented on this issue in P4 as well, but P6 revealed it to be a very important problem; a reminder system (perhaps a sign floating in the background, or a training course) could be very helpful in mitigating it.
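One way such a palm-orientation scheme might work is sketched below. This is purely illustrative (the angle names loosely mirror the Leap API's palm readings, and every constant is an assumption, not part of our implementation): camera rotation rate tracks the tilt of an open palm, with a dead zone so a roughly level hand produces no rotation.

```python
import math

DEAD_ZONE = 0.15  # radians of tilt to ignore, so a resting hand is stable

def rotation_rate(palm_roll, palm_pitch, gain=1.5):
    """Map palm tilt (radians) to a camera rotation rate (radians/second)."""
    def axis(angle):
        if abs(angle) <= DEAD_ZONE:
            return 0.0
        # Subtract the dead zone so the rate ramps up smoothly from zero.
        return gain * (angle - math.copysign(DEAD_ZONE, angle))
    return (axis(palm_roll), axis(palm_pitch))
```

Because holding a tilt keeps the view rotating, this would also avoid the arm strain our subjects ran into when they tried to rotate continually with the two-fist gesture.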


Consent Form
Pre-Experiment Survey
Post-Experiment Interview
Raw Data

P6 – Team Chewbacca

Group Number and Name
Group 14, Team Chewbacca

Group Members
Eugene, Jean, Karena, Stephen

Project Summary
Our project is a system consisting of a bowl, dog collar, and mobile app that helps busy owners take care of their dog by collecting and analyzing data about the dog's diet and fitness, and optionally sending the owner notifications when they should feed or exercise their dog.

This is a system consisting of a bowl, dog collar, and mobile app that helps busy dog-owners take care of their dog. The bowl tracks when and how much your dog is fed, and sends that data to the mobile app. The collar tracks the dog's activity level over time, and sends pedometer data to the mobile app. You can also check your dog's general health, and the app will suggest when it is important to feed or walk your dog. The mobile app links the two parts together and provides a hub of information. This data can be easily accessed and shared with people such as family members and veterinarians. The purpose of the experiment is to assess the ease of use of the application, bowl, and collar system, and to identify critical changes that must be made for its potential success.
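The suggestion logic described above might look like the following sketch. The thresholds are illustrative assumptions (the step target matches the 2600-step "recommended level" used in our test setup), not our actual Android implementation:

```python
from datetime import datetime, timedelta

# Assumed thresholds for illustration only.
FEED_INTERVAL = timedelta(hours=12)
RECOMMENDED_STEPS = 2600

def suggestions(last_fed, steps_today, now):
    """Return the list of actions the app would suggest to the owner."""
    out = []
    # Bowl data: flag the dog as needing food when too long has passed
    # since the bowl last reported a feeding.
    if now - last_fed > FEED_INTERVAL:
        out.append("feed your dog")
    # Collar data: flag a walk when today's step count is below target.
    if steps_today < RECOMMENDED_STEPS:
        out.append("walk your dog")
    return out
```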

Implementation and Improvements

  • Covered up the graph on the activities page, instead simply displaying the total recommended number of steps a dog should take in a day, and presetting the number of steps already taken to 2585, 15 steps below the "recommended level". We did this so that the user could assume their dog had already taken steps throughout the day, making it simpler for us to test whether the user could successfully register 15 steps taken by the dog.

Besides the above modification, our application was suitably prepared for the pilot usability test.


Link to P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-14-team-chewbacca/


Participants

Three participants tested our prototype. Master Michael Hecht is an academic advisor for Forbes Residential College, as well as a Professor in the Chemistry Department. He owns Caspian, a poodle. He was selected because he frequently brings Caspian to Forbes as a therapy dog and was willing to bring Caspian in to test our system. He also fit our target user group of "busy dog owners". Emily Hogan is a Princeton sophomore studying Politics. She owns a beagle named Shiloh. She was selected because she reported sharing responsibility for Shiloh with her family when she is at home. Christine Chien is a Princeton sophomore studying Computer Science. She owns a small Maltese-poodle mix. She was selected because she is responsible for taking care of her dog at home and, as a computer science major, is familiar with new technologies and Android apps. We chose both male and female participants of varying ages, areas of study, and levels of comfort with technology (particularly Android applications), which allowed us to gain diverse perspectives and see how different groups of users might react to our system.

Apparatus

We conducted our test with Master Hecht in the Forbes common room, and with Christine and Emily in a Butler common room. We did not use any special equipment apart from the bowl and collar in our system. We recorded critical incidents on a laptop during testing, and had users fill out an online questionnaire before and after the prototype test.


Tasks

Task 1: Exporting Data to a Veterinarian
This prototype allows users to send the mobile app's collected diet data, activity data, or both via email. This functionality is intended primarily for sending data to veterinarians. This is an easy task, as users need only enter the "Generate Data" screen, select which type(s) of data they wish to send, enter an email address to send it to, and press the "Send" button to complete the task.

Task 2: Monitoring a Dog’s Diet
Our working prototype allows users to check whether or not they need to feed their dog based on when and how much their dog has been fed.  They can view the time their dog was last fed as well as the weight of the food given at that time.  This is a medium-difficulty task, as the user needs to interact with the dog bowl (but only by feeding their dog as they usually do), then interpret the data and suggestions given to them by the mobile app.

Task 3: Tracking a Dog’s Activity Level
This working prototype allows users to track the number of steps their dog has taken over the course of a day, compare that value with a recommended number of daily steps, and view long-term aggregated activity data.  In order to carry out this task, they must put our dog collar on their dog so that the collar can send data to our mobile app via bluetooth.  This is a hard task for the user, as they must properly attach the dog-collar and interact with the live feedback given by the mobile app.

We have chosen these tasks because they are the primary tasks that we would like users to be able to complete using our finished system.  The three tasks (communicating with a veterinarian, monitoring a dog’s diet, monitoring a dog’s activity/fitness level) are tasks that all dog-owners carry out to make sure their dog is healthy, and we wish to simplify these essential tasks while increasing their accuracy and scope.  For example, while dog-owners already monitor a dog’s diet and activity by communicating with a dog’s other caretakers and using habit and short-term memory, we wish to simplify these tasks by putting the required data onto a mobile app, using quantitative measures, outputting simple suggestions based on quantitative analysis, and allowing long-term data storage.  We relied heavily on interviews with dog-owners in choosing these tasks.

Procedure

We conducted this study with three participants in the settings described above (see "Apparatus" section).  We first had each participant sign a consent form and fill out a short questionnaire that asked for basic demographic information.  Then, we read a scripted introduction to our system. After this introduction, we gave a short demonstration of our mobile app that involved sliding to each of the major pages and viewing the long-term activity level graphs.  We then read from the Task scripts, which required users to interact with all parts of our system and give us spoken answers to prompted questions.  During these tasks, we took notes on critical incidents.  After the tasks were completed, we asked for general thoughts and feedback from the participants, then asked them to complete a questionnaire that measured satisfaction and included space for open-ended feedback.

Test Measures

  • critical incidents: Critical incidents allowed us to see what parts of the system were unintuitive or difficult to use, what parts participants liked, if anything in our system might lead participants to make mistakes, etc.  It allowed us to see how participants might interact with our system for the first time, and how they adjusted to the system over a short period of time.
  • difficulty of each task (scale of 1-10): This allowed us to identify what parts of our system participants found most difficult to use, and whether we could simplify the procedure required to carry out these tasks.
  • usefulness of mobile app in accomplishing each task compared to usual procedure (scale of 1-10): This allowed us to measure whether and how much our system improved upon current task procedures, and identify areas for improvement/simplification.
  • relevancy of task to daily life (whether or not they perform this task as a dog owner): This allowed us to see whether any part of our system was superfluous, and which parts were essential.  This would allow us to decide whether to eliminate or improve certain parts of our system.
  • general feedback: This allowed users to give us feedback that might not be elicited by the specific questions in our questionnaire.  It allowed us to collect more qualitative data about general opinions about the system as a whole, and also gave us insight into possible improvements or new features that participants might like to see in the next version of our system.

Results and Discussion
Participant 1 (Master Hecht) had never used an Android phone, so he was unfamiliar with the swiping functionality of the home page, and was unsure how to access the settings page used to generate the report. He even mentioned that he would have spent about 1 minute learning the system had it been an iPhone app, but maybe about 3 minutes since it was an Android app. Our user was really excited about the idea of tracking the dog as opposed to tracking the number of steps his dog was taking. He thought the data should be continuous as opposed to binary, because that would provide greater functionality for the user and be more interesting in general. He thought the feeding data that told him whether or not the dog had been fed wasn't as useful as it could be. When he learned that the data actually had higher resolution (we could report the weight of the food in the bowl as well), he thought it would be more useful. In the post-questionnaire, our user said that the tasks were generally pretty easy, and that the only trouble came from navigating through the pages of the app. He also mentioned that he would probably not use the export feature (task 1), simply because he, as a pet owner, didn't have much interaction with his veterinarian since his dog was normally healthy. When we applied the device to the actual dog, the accelerometer recorded the steps fairly accurately. The only problem was that the physical device was pretty bulky and added a lot of weight to the little dog.

Participant 2 (Christine), an Android user, easily navigated the mobile application and completed all of the tasks in little time.  For task 1, she had some trouble finding the "export data" popup, and felt that a non-Android user might not know where to find this button.  For task 2, she pressed the "diet" and "food" columns on the homepage instead of swiping, which she said she found confusing.  She easily interpreted the suggestion on the homepage that the dog should be walked.  However, at first she confused the "already walked" step count with the recommended step count, though she quickly corrected her error.  For task 3, she quickly interpreted the homepage alert that her dog needed to be fed, easily found the time last fed, and successfully used the dog bowl so that this time updated.  In the post-task questionnaire, she reported that the tasks were all easy and that the app was useful in accomplishing all of them.  However, she said that Task 3 was not a task she would actually perform.  She also suggested that if one of the diet or activity columns on the homepage is "lit up" because your dog should be fed or exercised, a click should take the user to the appropriate page.  She thought that exporting data from the homepage was unintuitive, and that data should be exported from the page that displays it.  She reported that a useful feature would be allowing users to set up alarms for feeding, as that is what she would use the app for.  Overall, she thought the health score was not well explained, and that the "eating" functionality of the app was more useful to her.

Participant 3 (Emily) had little to no experience with the Android interface, and overall had more difficulty using the device. She commented that the Android interface in general was very difficult and unintuitive. In Task 2, we noticed that the pedometer was more sensitive than it should have been, as the count continued to increase even when the collar was barely moving. In Task 3, she had a bit of difficulty interpreting the time, since it was displayed in military time. In the post-questionnaire, she reported that she felt Task 1 wasn't as important as Tasks 2 and 3, and that the tasks were generally easy to perform. She noted, "If I knew more about using an Android it would have been much easier." She felt that she would use the diet component of the device far more than the activity component, since she often must coordinate with her siblings and parents to figure out whether the dog has already been fed. She also explained that she could benefit from notifications and calendar reminders, since she often relies on those for her daily routines.

Overall, our users offered valuable input that will prompt changes in the structure of our prototype. We definitely need to put the "generate report" option in a more visible location, possibly directly on the home screen, or include in our demo a demonstration of how to use the menu option. In addition, allowing users to navigate to the "diet" and "activity" pages by clicking buttons on the home screen (in addition to, or perhaps in place of, swiping) would make our mobile application more intuitive, as all of our users commented on this aspect of our interface. We could also develop our prototype to be accessible on both Android and iOS devices, so that users would feel more comfortable with the interface given their previous experience. We might also incorporate more data about the weight of food placed in the bowl; for example, keeping a growing list of all the times the dog has been fed and the weight of the food at each time. This data could then be used to give the user more relevant information about their dog's diet, so they could check when they missed a feeding and whether their dog is eating more or less than usual. Following suggestions given by both Emily and Christine, we are also considering adding a notification/alert system.  Finally, as the pedometer was a little more sensitive than expected when testing with a real dog, and was also a little heavy for the small frame of our test dog (Caspian), we hope to improve the pedometer's accuracy by updating its threshold and compacting the device.


Consent form:


Demographic Questionnaire:

Demo Script:

Raw Data

Critical Incident Logs:





Questionnaire Responses



P6 – The Elite Four

The Elite Four (#19)
Jae (jyltwo)
Clay (cwhetung)
Jeff (jasnyder)
Michael (menewman)

Project Summary

We have developed a minimally intrusive system to ensure that users remember to bring important items with them when they leave their residences; the system also helps users locate lost tagged items, either in their room or in the world at large.


The system we will be testing is a prototype with the basic functionality described in the project summary above. The system is able to detect when a user leaves the room, alert him/her if tagged items are missing (task 1), and help him/her find the items either inside (task 2) or outside (task 3) of his/her room. For our tests, we will have our users perform each of the three primary tasks. The goal of these tests is to ensure that our prototype is an intuitive and effective system for the average user.

Implementation and Improvements

P5 Blog Post: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/the-elite-four-19-p5/

We have not made any changes to our working prototype since submitting P5. However, we certainly plan on improving and adding features before the final submission, focusing on the feedback we receive from our participants.


i. Participants:

Participants were randomly selected from Terrace Club. At the time of testing (during the afternoon), students working in the dining room were approached and asked for their participation. Amongst the users were a senior MOL major, a senior COS major, and a senior HIS major. Each of the participants was a member of our target user group, as they all lived in on-campus dormitories with automatically locking doors. More specific demographic information can be found in the questionnaire data within the Appendix.

ii. Apparatus:

We conducted the tests using our prototype. The current prototype uses an Arduino Uno to control LEDs as well as our RFID receiver and transmitter. These components are connected with a breadboard and jumper wires, along with miscellaneous items like electrical tape. We conducted our tests at Terrace Club, which was not a dorm room per se but sufficed because it has doors.

iii. Tasks:

Users should be able to perform three tasks with our prototype. The first task (easy) is to identify when a door has been opened and alert the user if s/he tries to leave without tagged item(s). The second task (medium) is for the system to help the user locate lost tagged items in his/her own room. Our final task (hard) is to help the user locate lost tagged items outside of his/her room. This task is very similar to the second from the system's point of view, but for the user it is far more difficult, since the potential search area for the lost item(s) is much larger.

iv. Procedure:

We conducted the study by setting up in a semi-public area and asking random students if they would like to participate. At that point, we showed them the consent form, asked them to perform our prototype’s three tasks, and had them fill out the demographic and post-task questionnaires.

Test Measures

For task 1, we had each participant leave the room 10 times with the tagged item and 10 times without. The resulting raw data can be found in the Appendix. Our system identified the item 70% of the time when the user was carrying it. It's important to note that the RFID transmitter was generally put in the user's sweatshirt or pants pocket, which appeared to interfere with the signal strength. We also had User 2 experiment with the left vs. right pocket of his sweatshirt, since the sensor was on his right side in our setup. With the tag in his right pocket, our system was 5 for 5, but in his left pocket, it was only 2 for 5. We also tested our system without the tagged item to check for false positives. For Users 1 and 3, we varied where the tag actually was during this testing. Since a college dorm room isn't very big, there were realistic scenarios where the transmitter was in a pants pocket on the floor just a few feet from the door, and these were the situations where false positives occurred. This will be a difficult problem to solve.
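Pooling the per-user counts from the Appendix recovers the rates quoted above; a quick sketch of the arithmetic:

```python
# Task 1 counts from the Appendix ("right" = the system behaved
# correctly on that trial), 10 trials per condition per user.
with_tag = {"User 1": 7, "User 2": 7, "User 3": 7}       # correct out of 10
without_tag = {"User 1": 8, "User 2": 10, "User 3": 8}   # correct out of 10

hits = sum(with_tag.values())                 # 21 of 30 trials
detection_rate = hits / 30                    # 0.70, the 70% quoted above

false_alarms = sum(10 - v for v in without_tag.values())  # 4 of 30 trials
false_positive_rate = false_alarms / 30                   # about 13%

print(f"detection: {detection_rate:.0%}, false positives: {false_positive_rate:.0%}")
```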

For tasks 2 and 3, we measured the time required to find an object in each of 3 preset hidden locations. We made sure this test was double-blind by having one group member hide the transmitter and a different group member follow our participant as he or she tried to locate it. Throughout all 3 tasks, we made sure to take note of any qualitative comments or suggestions participants made.

In addition, we had each participant complete a more quantitative response form, in which they were asked to rate the intuitiveness and effectiveness of our system during each task.

Results and Discussion

Users were not satisfied with the current alert system, which uses only red and green LEDs to inform the user of tag presence. This proved inadequate for all of our users, who were somewhat unaware of the LEDs' presence during all three tasks. For the first task, users would often be past the door before they could see the LED light up, especially when they simulated being in a rush. For the second and third tasks, users had to spend a lot of time looking down at the LEDs to see how quickly they were blinking, and since they were watching where they were going, they would sometimes miss important changes in blinking speed. As a solution, we will be adapting our prototype to add audio notifications as well. We hope this change will make our system's alerts more intuitive to the user.

There were also some rather severe usability issues with the item-finding feature, which had trouble responding quickly and accurately to distance changes. This is a result of our RFID transmitter having a weak signal and only transmitting every ~2.4 seconds, which made proximity updates to the user too slow and caused confusion. To remedy this, we will be adding an antenna to increase signal strength and reduce the transmission interval, which means faster updates for our user and a much more usable system.

Users were also confused by the use of the LEDs while using the finding feature. In the original prototype, both red and green LEDs flashed at varying rates depending on tag proximity. However, users complained that they were unsure whether the tag was just far away or fully out of range. To fix this, we now use the green LED to display proximity and the red LED to indicate that the tag is out of range. One user even suggested including more LEDs to indicate proximity: for example, flashing red if the tag is out of range, orange if the tag is at the edge of its range, yellow if it is near the middle of its range, and green if it is very close. After implementing and testing the two-LED scheme, we decided that users are satisfied with it and that we do not need to add more LEDs.
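A minimal sketch of the revised two-LED scheme is below. All signal-strength values here are hypothetical placeholders (our real readings come from the RFID receiver on the Arduino), but the mapping illustrates the design choice: red is held steady so it can never be mistaken for a slow proximity blink.

```python
# Assumed signal-strength cutoffs, for illustration only.
OUT_OF_RANGE = 20    # below this reading, treat the tag as out of range
MAX_STRENGTH = 100   # strongest reading expected at close range

def led_state(signal_strength):
    """Return (led_colour, blink_interval_seconds) for one receiver reading."""
    if signal_strength < OUT_OF_RANGE:
        # Red LED held steady: unambiguously "out of range".
        return ("red", None)
    # Green LED blinks faster as the tag gets closer.
    closeness = min(signal_strength, MAX_STRENGTH) / MAX_STRENGTH
    interval = 1.0 - 0.9 * closeness  # 1.0 s when barely in range, 0.1 s up close
    return ("green", round(interval, 2))
```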

Another issue users observed was that the tag was not always detected by our system when the door was opened. This occurred when the tag was covered (i.e., in pants or a backpack), and it is a severe issue for us. It is caused by the weak signal strength of the transmitter, so the new antenna should solve this problem as well. We will also need to adjust the signal-strength threshold at which the tag is considered found, as the stronger signal from the antenna will mean the tag could be detected from further away, which could create more false positives.


Consent Form:


Testing Script:


Demographic Questionnaire:


Demographic Results:


Post-Task Questionnaire:


Post-Task Results:


Task 1 Data:

         With tagged item (10 trials)   Without tagged item (10 trials)
User 1   7 right, 3 wrong               8 right, 2 wrong
User 2   7 right, 3 wrong               10 right, 0 wrong
User 3   7 right, 3 wrong               8 right, 2 wrong

Task 2 Data:

         Location 1           Location 2   Location 3
User 1   Time limit reached                Time limit reached
User 2
User 3

Group 9 (VARPEX) – P6

Group Number and Name: Group 9, Varpex

Group Members: Abbi, Dillon, Prerna and Sam

Project Summary: Our project provides a new musical listening experience through a jacket that vibrates with the bass.


Over the past few months, we have developed a jacket that provides a new means of experiencing music. Twelve motors line the jacket and provide the sensation, driven by a control box that plugs into the jacket. A user puts on the jacket, plugs their headphones and MP3 player into the box, and connects the jacket to the box. Playing music on the MP3 player then drives the vibration of the motors. In this report, we evaluate the ease of setup of the system and how well it provides an enjoyable experience for our users. This experience includes listening to music with the sensation of the motors, both while sitting and while walking, as well as wearing the jacket as a normal jacket when the music is off.

Implementation and Improvements

Please click here for our P5 submission

Changes since P5:

  • New box: We switched to a plastic box because we were concerned about the possibility of the cardboard igniting from a spark should something go wrong with the electronics.

  • We labelled the ports on the box for clarity.


1. Participants

We solicited volunteers through our eating clubs and student organizations. The students who responded expressed interest in trying our prototype, knowing that its purpose was to provide a new way of experiencing music. The students who ultimately tried the jacket all enjoyed music, though often of different genres, and all enjoyed the physical feeling of music experienced when listening to it live. We tried to draw from the general student body, so these students had a wide range of technical and musical knowledge. Participants were all aged 19-21, students at the university, and living in dormitories.

2. Apparatus

The tests were conducted in the undergraduate ELE lab. We chose to perform the tests there so we could more easily troubleshoot any technical problems with our system. The equipment we used consisted of a set of headphones, a phone-based MP3 player, our jacket of embedded motors, and the box containing our system. These items together represent the complete set-up one would need to use our system in a non-lab setting, so we are confident that this testing represents well how someone might actually use our system to feel music in real life.

3. Tasks

As in the past, our system does not exactly conform to the "three tasks" paradigm. Our study, however, does try to give the user a sense of how they might use the jacket in one of the three settings we have described previously: listening to music in your room (easy), listening to music in a public library (medium), and listening to music while walking out and about (hard). These are little changed from P5, and represent three situations in which people commonly listen to music via headphones. In this study, participants listened to music first while sitting in a chair and interacting with us for the first part of the testing phase. We then had them stand up and walk around the lab a few times while listening to music. These scenarios gave us a good idea of how users would react in the three original tasks.

4. Procedure

We first welcomed our participants, who filled out the consent form and the first page of our Google form, which asked for demographic information and music preferences. After they filled out this initial form, we explained the system and asked them to set it up; we gave them an overall description but no specific setup instructions. Participants then set up the system, asking questions when necessary, and we asked for their initial feedback on the setup process and any confusing aspects of the design. Next, we had them listen to music for a few minutes however they felt comfortable. We then asked for their feedback about the jacket without any music playing (and no vibrations). We also went through a series of questions to understand how pleasurable the experience was and where it could be improved. As we interviewed participants, we observed their behavior for fidgeting and signs of discomfort. Participants then walked around with the system to test how it felt while walking, and we asked for feedback on comfort and sensation in this task as well. At the end, we opened a general conversation with the participant about the experience and how it could be improved.

Test Measures

As mentioned before, our prototype is not directed towards helping a user improve any sort of objective measure of a task. Therefore our testing sought to solicit subjective evaluations of things like comfort, sensation, and likelihood of using the jacket for one of the three tasks we thought would be good usage scenarios.

We ultimately collected a wide variety of subjective measures. Questions regarding ratings were asked on a scale of 0-5. For the full list of questions, see the Google form in the appendix. The subject matter of the data collected included:

Demographic Data:

  • Musical preferences/opinions of electronica
  • Frequency of concert-going
  • Frequency of music-listening outside of live music setting

Ease-of-use data:

  • Logs of users attempting to use jacket with little-to-no instruction
  • Self-evaluation of difficulty in putting on jacket
  • User perception of valuable information in jacket-use instructions

Comfort/Likelihood of use data:

  • Comfort of jacket relative to a non-vibrating jacket when jacket is off.
  • Comfort of jacket while sitting down and standing up.
  • Adjectives used to describe jacket.
  • Likelihood of using jacket relative to amount user currently listens to music in different settings.
  • Desire to own jacket.

Results and Discussion

Our test subjects came to us with a diverse array of musical interests: when asked for their top three musical genres, they named over 12 distinct genres among them. Interestingly, only one named “electronica” in their top three, which was one of the genres we had targeted as a good candidate for “feeling” music. Regardless of their reported tastes, we first had each user test the jacket with the song “Thunder Bay” by Hudson Mohawke. One user who described dubstep as “overrated, boring, tuneless” still reported a positive experience with the jacket, calling it “fun” and “interesting.” When we asked users to try the jacket with their preferred music, however, they responded even more positively: that same user later tried the jacket with music from her favorite artist (“Bad Romance” by Lady Gaga) and reported an even better experience. This is a promising development that could expand our target user base, but it will require some modification of the jacket. Currently, the jacket responds to frequencies below 160 Hz. The intensity of vibration is controlled by a comparator, which allows current to flow through the motors for a particular fraction of the given low-frequency wave. In genres like dubstep, with lots of heavy bass, this fraction can be fairly low and still produce plenty of sensation, since the dominant frequencies are below 160 Hz. In other genres the concentration of bass is lower, so this fraction needs to be increased to produce a comparable sensation. We can give the user the ability to adjust this fraction with a potentiometer, so they can calibrate the sensitivity to their preferred music themselves.
(Perhaps this could even be done in a future iteration on a microcontroller like the Arduino, where a simple pre-processing step could analyze a song to find the ideal threshold level with no overhead during actual playback.) It is certainly functionality that people seemed to expect already: in our usability test, users would play with the volume knob expecting it to modify the bass sensitivity, so adding that function is clearly appropriate.
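To make the comparator idea concrete, here is a small sketch, in Python purely as a simulation (the actual jacket uses analog hardware), of how the fraction of a low-frequency wave above a threshold determines the motors' duty cycle, and why bass-light genres need a lower threshold:

```python
import math

def motor_duty_cycle(samples, threshold):
    """Fraction of time the comparator lets current through the motors:
    the share of samples whose absolute amplitude exceeds the threshold."""
    on = sum(1 for s in samples if abs(s) > threshold)
    return on / len(samples)

# Two hypothetical low-passed (<160 Hz) signals: one bass-heavy, one not.
t = [i / 1000 for i in range(1000)]
bass_heavy = [0.9 * math.sin(2 * math.pi * 60 * x) for x in t]  # strong 60 Hz content
bass_light = [0.3 * math.sin(2 * math.pi * 60 * x) for x in t]  # weaker bass

# With the same threshold, the bass-light signal never drives the motors
# (its 0.3 peak never crosses 0.5); lowering the threshold restores sensation.
print(motor_duty_cycle(bass_heavy, 0.5))
print(motor_duty_cycle(bass_light, 0.5))
print(motor_duty_cycle(bass_light, 0.2))
```

Turning the proposed potentiometer would correspond to changing the `threshold` argument here.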

We were also pleased to see that users rated the comfort of the jacket highly: on a scale of 0-5, with 0 being not at all comfortable and 5 being as comfortable as any other jacket, users rated the jacket on average a 4.1 when worn while sitting down and a 3.5 when standing up and walking. There were other factors, however, that affected the experience of wearing the jacket. Some users, generally women of shorter stature, reported that the jacket’s loose fit may have kept them from feeling the full effect of the vibrating motors. Further, no one said they would describe the jacket as “fashionable,” and three users went as far as to call it “ugly.” Fit was always going to vary between users, since a men’s hoodie can only fit so many people. In the next iteration, however, we will try to make the vibrating motor assembly attach to the jacket with velcro rather than sewn-on pockets, so it can be more easily placed in jackets of varying style and size. Another factor inhibiting comfort was that, while up and walking, users had to carry the box containing our system. This was unfortunately a limitation of our prototype, which was implemented on two small breadboards attached to the three 9-volt batteries needed. For our next iteration we are exploring soldering the project to a protoboard to allow a more portable form factor. In the meantime, we treat the large box as our own version of a “wizard-of-oz” setup, since we never expected that users would ultimately have to carry it around.

People generally found the jacket easy and intuitive to use, but certain aspects of the prototype were noted to be cumbersome. The test subjects all said that having to carry around a bulky box was inconvenient, and that a plug-and-play interface built into the jacket would be preferable. Test subjects also found the direction of the volume knob unintuitive, since it rotated opposite to standard knobs. We plan to address both of these interface issues in the next prototype. Another important design idea that emerged from this stage of testing was customizability. Fit of the jacket was a problem for some test subjects; we believe it can be addressed by redesigning the jacket into a frame of motors that could fit many different jacket sizes. While this isn’t something we can implement in the next iteration, it is a good direction for future design changes. In summary, we would like to make the following changes to our jacket in the next iteration:

  • Change the direction of the volume knob to make it more intuitive.
  • Integrate the circuitry into the jacket so the wearer is not required to carry the box.
  • Add a threshold knob so the sensations can be customized based on individual comfort levels.
  • Make the power switches more accessible to ease the plug-and-play interface design.

The three main tasks we listed when thinking of use cases for our jacket were as follows:

  1. Listening to music while walking to class
  2. Listening to music while studying in the dorm room, library or other such quiet place.
  3. Listening to music while in a silent disco.

Our test subjects said they would use the jacket in all three listed situations. The idea of using it at the Silent Disco was very popular, but when thinking about everyday tasks, testers were concerned about storing the jacket when not in use. One tester was concerned about walking around with the jacket and how that would work with motors and wires inside it. Additional concerns included wearing a jacket in warm weather, its durability, and its washability. These are secondary to our primary goal of creating a new music-listening experience, but will be key in future iterations of the jacket.


Raw Data


Biblio-File P6

a. Group num­ber and name

Group num­ber: 18
Group name: %eiip

b. Mem­ber names

Erica (eportnoy@), Mario (mmcgil@), Bon­nie (bmeisenm@), Valya (vbarboy@)

c. Project summary

Our project is a smart book­shelf sys­tem that keeps track of which books are in it. You can see the web appli­ca­tion por­tion of our project here, and our source code here!

d. Introduction

In this user test, we evaluated both the hardware and software components of our system. We had users actually use Biblio-File to add books to their shelf, search for books (both those that were and were not on the shelf), and play with the system. In particular, we were testing whether our system actually made searching for books easier. To test this, we had users search for a book both with and without Biblio-File. Moreover, we wanted to see whether adding many books was annoying or tedious for users, and whether the delay due to RFID reading was particularly frustrating.

e. Implementation and Improvement

Our P5 post can be found here. We also made a few changes since P5, which are listed below:

  • Completed implementation of the server-client-daemon architecture – we can now user-test using only our system, without needing “Wizard-Of-Oz” simulation
  • Modified the RFID library to enable low-latency use of multiple scanners (original code didn’t support multiple scanners)
  • Alphabetized entries by title in the web app to let users scroll through books in a sensible order
  • Added magic administrative functions to help with user testing: auto-populating bookshelf, clearing database, etc.
  • Computer now beeps loudly after a sensor detects an RFID to give users feedback
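The multiple-scanner change can be pictured as non-blocking, round-robin polling of the readers. The sketch below is an illustrative mock in Python; `MockScanner`, `poll`, and `poll_all` are hypothetical names for this illustration, not our modified library’s actual API:

```python
class MockScanner:
    """Stand-in for one RFID reader attached to one shelf."""
    def __init__(self, shelf_id, queued_tags=()):
        self.shelf_id = shelf_id
        self._queue = list(queued_tags)

    def poll(self):
        """Non-blocking read: return a tag ID if one is waiting, else None."""
        return self._queue.pop(0) if self._queue else None

def poll_all(scanners):
    """Visit every scanner in turn so no single reader blocks the rest;
    returns (shelf_id, tag) events in the order they are seen."""
    events = []
    for s in scanners:
        tag = s.poll()
        if tag is not None:
            events.append((s.shelf_id, tag))
    return events

shelves = [MockScanner(0, ["tag-42"]), MockScanner(1), MockScanner(2, ["tag-7"])]
events = poll_all(shelves)
print(events)  # [(0, 'tag-42'), (2, 'tag-7')]
```

The key point is that each `poll` returns immediately, so the daemon can sweep all shelves at low latency instead of waiting on one reader at a time.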

f. Method

i. Participants
For our user test we chose three undergraduate students with varying levels of technical expertise. Our first tester was an engineer who often uses technology and was very interested in how our system worked. Our other testers were less experienced with technology as a whole, and were completely unfamiliar with the Arduino and the other tools that we used in our bookshelf. We chose them to see how intuitive our system was, and how annoying or frustrating the delays might be. People who are less familiar with these tools are also less used to delays, and will therefore have a more natural reaction to them. Similarly, people with less technical expertise are less likely to assume anything about the app, so we can see how they use it, what they try to do but cannot, etc.

ii. Apparatus
In order to conduct the test we used a stack of shelves in the TV room of Charter. The location might not have been optimal, because there were other people there playing video games, so there was a lot of noise and distractions. That being said, it was cool that our system was so mobile, and could be applied to any bookshelf anywhere. We attached our system to the shelves, and brought our own books to use for the tasks themselves. We let the users use one of our own phones to test it, to avoid the need to download pic2shop for barcode scanning. That being said, they could have easily used their own mobile device if they chose to.

This is the version of the system that we used for our user tests

iii. Tasks
The easy task is finding a book on the shelf, or searching for a book that is not present. Users can choose to use our mobile app to search for a book, or they may attempt to manually search the shelf. If our system provides added value, we hope that they will opt to consult our mobile app. For this task we provided the users with a very full bookshelf. Each user saw exactly the same books on exactly the same shelves. We timed how long it took the users to find a book that was on the shelf, and one that wasn’t, both with Biblio-File and without.

Our medium task is adding a single new book to the system; it consists of adding an RFID tag to the book, adding it to the system using our mobile interface, and then placing the book on the shelf. The purpose of this task was to test how easy our system is to use, and what a user would intuitively want to do given a system like ours.

Our hard task is adding an existing book collection to the system; this consisted of four books, for testing purposes. This is the last task a user would have to complete with our system, and it is very similar to the previous task: it consists of using the mobile interface and RFID tags to add books to the bookshelf. The main purpose of this task was to test the tediousness of adding many books to a collection.

A video of one of our user tests can be found here!

iv. Procedure
To conduct the study, we first introduced our team and had the user read and sign the consent form, and fill out the demographic questionnaire. We then explained the concept of a Think-Aloud Study, and practiced the methodology on an unrelated problem (see script in appendices). After this, we demonstrated how our system works (in general, without showing them any of our tasks). We then ran the easy task, and timed it, followed by the medium task and then the hard task. Afterwards, we told the users how our system was meant to work (if they didn’t understand it to begin with), and asked some follow-up questions to check how annoying our system was (if it was at all) and how the user felt using it. The answers to these questions are included in the appendices. The users were encouraged to think aloud and ask questions throughout the study.

g. Test Measures

We measured two within-subjects variables related to book access and retrieval: whether or not a book was on the shelf, and whether or not the user was using our system. We chose these to see whether our system gave users any quantitative speedup in common book-interaction tasks.

  • Time taken to retrieve a book on the shelf.
  • Time taken to realize a book is not on the shelf.
  • Time taken to interact with a book using only the physical bookshelf.
  • Time taken to interact with a book using our system.
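These four measures fit together as a 2x2 within-subjects design (book on/off shelf, with/without Biblio-File). As a sketch of how the resulting timings can be organized and summarized per condition, the numbers below are hypothetical placeholders, not our measured data:

```python
from statistics import mean

# Hypothetical retrieval times in seconds, one value per participant,
# laid out by the 2x2 design: (book on shelf?, using Biblio-File?).
times = {
    ("on_shelf", "with_system"):       [12.0, 15.0, 10.0],
    ("on_shelf", "without_system"):    [25.0, 40.0, 18.0],
    ("not_on_shelf", "with_system"):    [8.0, 9.0, 7.0],
    ("not_on_shelf", "without_system"): [60.0, 45.0, 75.0],
}

# Per-condition means; with a real dataset these would feed into a
# repeated-measures comparison across the same three participants.
for condition, samples in times.items():
    print(condition, round(mean(samples), 1))
```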

h. Results and Discussion

Our tests showed that in general, our design is sound, although a repeated-measures ANOVA with a sample size of 3 showed no significant difference between using and not using our system (p > .05, see ANOVA output in appendices). Many users were enthusiastic about what we were able to do; in particular, many were delighted that we could gain a lot of information from a single photo of the ISBN barcode. We deliberately did not give users a complete demo of our system because we wanted to judge its intuitiveness. Even without complete instructions, our testers largely understood the system, which we’re very proud of. However, for some tasks, such as removing books from a shelf, it’s clear that more specific instructions would be helpful.

Users relied heavily on receiving some sort of feedback that an RFID tag had been sensed, which we implemented in the form of a loud beeping sound. This worked well. Also, users did not seem impatient when we asked them to add many books to the system at once.

There are some small changes that we’d like to make. For example, when there are no books on the shelf, we shouldn’t display the search bar, since users often attempt to add a book by typing the title into the search bar first. We may also want buttons to display a “not on shelf” message instead of a greyed-out “Light up!” button when a book is not on the shelf, since 1 in 3 users did not recognize the design motif of a disabled button. There are also some bugs we still need to fix, such as the barcode scanner occasionally redirecting to iTunes, and the lag in the LEDs on the shelf.

We also need to clarify the tap-in/tap-out process to the user. While it is not the most intuitive, it stems from the technical limitations of our hardware, so we will have to compensate with instructions. Since some users attempted to tap a book in before adding it via the software, we should enable our software to accept either ordering. We should also change the instructions to say “place the bookmark inside the front cover,” since some users placed the bookmark too deep inside the book to be read.
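Accepting either ordering amounts to completing the add-book action whenever both the RFID tap and the book info have arrived, in whatever order. A minimal sketch of this idea (an illustration of the proposed change, not our actual implementation; all names are hypothetical):

```python
class AddBookFlow:
    """Pairs an RFID tap with entered book info, in either order."""
    def __init__(self):
        self.pending_tag = None
        self.pending_info = None
        self.shelf = {}  # tag -> book info

    def _try_finish(self):
        # Commit the book once both halves of the add action have arrived.
        if self.pending_tag is not None and self.pending_info is not None:
            self.shelf[self.pending_tag] = self.pending_info
            self.pending_tag = self.pending_info = None

    def tag_tapped(self, tag):
        self.pending_tag = tag
        self._try_finish()

    def info_entered(self, info):
        self.pending_info = info
        self._try_finish()

# Either ordering yields the same shelf state.
a = AddBookFlow(); a.tag_tapped("rfid-1"); a.info_entered("Moby-Dick")
b = AddBookFlow(); b.info_entered("Moby-Dick"); b.tag_tapped("rfid-1")
print(a.shelf == b.shelf)  # True
```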

i. Appendices

i. All things read or handed to participant

ii. Raw data

Group 17 – P6

Group Number: 17

Names: Evan, Jacob, Joseph, Xin Yang

Project Summary: We are testing an add-on device for the cane of a blind user which integrates GPS functionality via Bluetooth and gives cardinal and route-guided directions via haptic feedback.

Introduction: Over the course of our project, we have prototyped an attachment for the long white cane used by the blind. Intended to work as a bluetooth extension to a GPS device, the BlueCane provides haptic and touch-based navigation guidance and features a passive mode in which it haptically gives an intuitive compass orientation to the user. We are now testing this prototype with blind users. Our purpose is to determine the usability of our current prototype, in terms of how much of an improvement (if any) it would provide over current systems, and to determine which features promote the usability and which should be improved or removed. We hope to understand the usefulness of haptic and touch-based guidance in a navigation interface for the blind.


Link to our P5 prototype:


Since our P5 submission, we have made the following changes:

  • Cut down on the size of the PVC apparatus to facilitate easier attachment to individual canes

  • Consolidated and organized wiring to prevent shorts, breaks, and entanglements

  • Added an accelerometer and experimented with gravitational tilt compensation for the compass unit


Participants: All three participants were blind or visually impaired individuals, with varying levels of mobility and experience with cane travel, living in the Mercer county area. They were recruited via a notice sent over the listserv for the New Jersey Foundation for the Blind which advertised an opportunity to help test a prototype for new technologies in the area of navigational tools for the blind. Participant #1 was a completely blind, retired female, who, despite having above-average mobility and confidence, was primarily a seeing-eye dog user and thus had limited experience with cane travel. She used a GPS device regularly. Participant #2 was a blind woman who held a full-time job, but primarily used transit services to get around. Though she had far more experience with cane travel, she had limited experience with GPS technology. Participant #3 was a legally blind, working male who had moderate experience with cane travel. He worked in the field of technology and had experience with GPS. No participant had any physical issues with mobility, and all seemed to understand well the nature of the task and were excited about the advancements that we were proposing.

Apparatus: Our prototype was essentially the same as demoed in our P5 video, with the small modification that we carved out a portion of the cane handle to give the attachment a better form factor. An accelerometer had been added to allow more accurate directional calculations in the final version, but it had not yet been integrated at the time of testing. We also used a small blue briefcase for the third task, along with audio tracks of city background noise. To minimize the demand on our participants, who had difficulty traveling, all testing was performed at each individual’s house, typically outdoors in their yard or neighborhood because of space requirements. As a result, the participants all had a fair degree of familiarity with their environment, which, while perhaps letting them rely less on purely external directional instructions, lessened the already considerable stress associated with their participation.

Tasks: Our first, easiest task arises whenever the user is in an unfamiliar space, such as a shopping mall or store, but does not have a well-defined destination, as is often the case when browsing or shopping. As they mentally map their surroundings, it’s imperative that the user maintain a sense of direction and orientation. Failure to do so can reduce the user’s ability to do the things they want, and can even be a safety concern if the user becomes severely lost. Our cane will allow users to find and maintain an accurate sense of north when disoriented by providing intuitive haptic cues, increasing the reliability of their mental maps.

Our second and third tasks both confront the problems that arise when a user must rely on maps constructed by someone else in order to navigate an unfamiliar space with the intent of reaching a specific destination, as is the case with navigation software and GPS walking guidance. In the second task (medium difficulty), our cane would assist users on their afternoon walk by providing haptic and tactile GPS directions, allowing users to explore new areas and discover new places, much the way a sighted person might visit a new street, building, or park on a leisurely stroll.

In our third and most difficult task, our cane alleviates the stress of navigation under difficult circumstances, such as frequently occur when running errands in an urban environment. In noisy, unfamiliar territory, the BlueCane would allow users to travel unimpaired by environmental noise or hand baggage, which can make it very difficult to use traditional GPS systems.

Procedure: Upon arriving at the participants’ houses, we began by explaining who we were and what our system hoped to accomplish. After obtaining their informed consent, we introduced them to the prototype and its features, and let them get familiar with it before explaining each task. We performed the tasks sequentially, gathering data during the trial itself (success in responding to cues and reaching destinations appropriately), and obtained intermediate feedback after each task. After all tasks were completed, we reminded each participant to be as honest as possible and read out the survey questions, allowing them to qualify their answers as freely as they wished (after stating a value on the Likert-style questions). Finally, when all of our predetermined questions had been answered, we opened the conversation to a full discussion of any questions or other feedback they had.

Test Measures:

Task 1:

  • Whether the user was able to turn himself/herself towards a given cardinal direction using the tactile feedback the cane gives when it points.
  • If unsuccessful, approximate angle at which user deviated from the correct direction.
  • Qualitative Feedback

Task 2:

  • Without any additional cues, we gave the user a few turns to follow using the raised ridges in our navigational hardware. Out of these, we counted how many they were able to follow.
  • Qualitative Feedback

Task 3:

  • Same as task 2.


Participants who succeeded in task 1: 3/3

Overall fraction of turns followed in task 2: 1/3, 0/5, 2/5

Overall fraction of turns followed in task 3: 3/3, 3/5, 2/2

Likert ratings: (1 for “Strongly Disagree”, 5 for “Strongly Agree”)

I found the vibration in the direction of North useful in maintaining a sense of my orientation.  4 + 5 + 4 (avg: 4.33)

I found the vibration in the direction of North intuitive and easy to use. 5 + 5 + 4 (avg: 4.67)

I found the turn-by-turn commands useful in navigating to a destination. 4 + 4 + 5 (avg: 4.33)

I found the turn-by-turn commands intuitive and easy to use. 3 + 4 + 5 (avg: 4)

I would prefer to have directions read to me aloud instead of or in addition to haptically (as in the current system). 3 + 2 + 4 (avg: 3)

I would prefer to use (a refined version of) this system over a standard cane. 5 + 5 + 4 (avg: 4.67)

I feel that having such a system available to me would increase my confidence or feeling of autonomy. 2 + 5 + 4 (avg: 3.67)

I feel that (a refined version of) such a system would help me navigate indoor spaces. 5 + 4 + 4 (avg: 4.33)

I feel that (a refined version of) such a system would help me navigate outdoors (with or without GPS navigation). 5 + 4 + 4 (avg: 4.33)
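The averages above can be reproduced directly from the listed scores; a few lines of Python (question labels abbreviated for brevity) make the arithmetic easy to check:

```python
from statistics import mean

# Likert scores (1-5), one per participant, copied from the list above.
ratings = {
    "north vibration useful":    [4, 5, 4],
    "north vibration intuitive": [5, 5, 4],
    "turn-by-turn useful":       [4, 4, 5],
    "turn-by-turn intuitive":    [3, 4, 5],
    "prefer spoken directions":  [3, 2, 4],
    "prefer over standard cane": [5, 5, 4],
    "would increase autonomy":   [2, 5, 4],
    "would help indoors":        [5, 4, 4],
    "would help outdoors":       [5, 4, 4],
}

for question, scores in ratings.items():
    print(f"{question}: {mean(scores):.2f}")
```

Note that 2 + 5 + 4 averages to 3.67, not 3.33.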


Given that the profiles of our 3 users varied considerably, it is likely that there are other profiles of blind users we have not considered or encountered. This makes us hesitant to make strong assumptions about our external validity.

Variations between blind users:

– Amount of experience in cane travel

– Cane travel technique (how they like to hold the cane)

– How good their sense of direction is (none of them normally think in terms of cardinal directions, but they generally know how much a 90 degree turn is)

– Experience with assistive technologies

– Sense of autonomy.

Our system was generally well received (as indicated by the Likert feedback). All users were enthusiastic about our developments and asked to be informed of future opportunities to test the system.

Salient points from post-task discussions:

– Our method for indicating the turn-by-turn instructions needs to be more ergonomic – the current placement makes it difficult to detect both left and right signals with a single finger.

– Because of the many variations in hand placement, users are not always aware of when turn signals are passed.

– When a user missed a turn, it was hard to recover using the current system.

– We will need a way to adapt the layout of the ridges to many different hand placements and holding styles.

Discussion: For this round of testing, we were fortunate enough to work with visually impaired individuals and receive their feedback. We found that demonstrating, testing, and discussing our prototype with them was highly informative, affirming some features of our prototype and challenging others. The three individuals we visited had varying amounts of experience with cane travel, degrees of autonomy, navigational techniques, experience with technology, and senses of direction. Each participant acknowledged his or her degree of autonomy or “mobility,” as well as how age has affected their ability to navigate independently. Furthermore, they lived in different environments and performed different tasks on a day-to-day basis. Together they provided a variety of responses to our questions and offered alternatives to some of our presumptions in the design process.

Naturally, it was more difficult to control the testing process, and our results were almost certainly influenced by testing location, individual preferences, and level of visual impairment. Whereas previously we performed the tasks in the confines of the electrical engineering lab with blindfolded students, this round of testing required traveling to participants’ neighborhoods. Even so, this revealed a range of use cases for our device and was ultimately helpful.

Participants’ performance on the three tasks helped reveal differences between individuals, owing in part to their particular impairments. All three fared well on the first, cardinal-direction task. They understood the task and were able to identify the direction of north using haptic feedback from the cane; they also identified other cardinal directions using north as a point of reference, within an acceptable degree of error. When asked if this feature was useful and intuitive, all three participants (as well as one participant’s husband) responded either “Agree” or “Strongly Agree”. One participant expressed an interest in being able to set the direction indicated by the cane, which affirmed our original intention. Interestingly, few if any of the participants currently navigate with respect to cardinal directions; they prefer to think of their environment as a series of relative turns and paths. This challenged one of our presumptions about users’ perception of their environment: we originally suspected that blind people discarded relative direction in favor of absolute direction, but this turned out to be incorrect. Nevertheless, all participants indicated that they were open to the idea of using the device to learn cardinal directions, and they acknowledged that the feature would be helpful in unfamiliar environments.

The turn-by-turn navigation task was more challenging and ultimately more informative. The task relied on the user’s ability to perceive and respond to instructions sent from our laptop. Variation in grip technique and hand size led to some difficulty performing the task or accomplishing a turn in an adequate time frame. We found that users were better at the task when they were walking on well-defined paths (i.e. a sidewalk) where the location of the turn is already demarcated along the path itself. Navigation in the user’s backyard was more difficult because it lacked these cues, and so the user had to infer the timing and magnitude of turns.

The first two users gripped the cane the way that we had anticipated in design, but the third user preferred the less-frequent “pencil grip,” perhaps owing to height or cane length. As a result, we learned that the design of the cane handle should be more ergonomic—not only more comfortable but flexible to different preferences, or at least designed to suggest the intended grip more clearly. We were also told that the distance between the turn indicators was too long and made it difficult to receive instructions exogenously (i.e. without attending to the device directly). Perhaps for this reason, most users agreed that they would prefer to use the cane in conjunction with an optional auditory GPS program. Despite these difficulties and qualifications, users still reported the turn-by-turn navigation feature as intuitive and easy to use in our survey questions. Two of the participants were especially optimistic about the potential for the device in indoor environments, and the third said that he would prefer to use a normal cane indoors.

In the third task—as in our previous round of testing—users were not hindered by the addition of background noise and even demonstrated a notable improvement over the second task. We were also informed by one user about the concept of “parallel traffic” noise, which is used for inferring traffic patterns and deciding when to cross roads. With this in mind, the ability to navigate without aural distractions seems more important than ever.

We also asked about the desired form factor for the final product, and participants gave varying responses. Some preferred the idea of a built-in, integrated navigational cane, but others decided that a device that attaches to their existing cane would be preferable (in case the cane breaks, for example). Most of the users expressed a desire simply to see more affordable technology, since existing screen-readers and navigational devices cost thousands of dollars and aren’t covered by health insurance. Overall, the participants were gracious with their feedback and asked to stay informed about the future of the project.


Document 1: Demo script  https://docs.google.com/file/d/0B8SotZYUIJw4V3pPTWhNRDJRaVU/edit?usp=sharing

Document 2: Consent form  https://docs.google.com/file/d/0B8SotZYUIJw4bEhyMzdtbXR0LVE/edit?usp=sharing

Document 3: Post-task questionnaire  https://docs.google.com/file/d/0B75b-7tqGKTkbUpwdU9SeHFQY3M/edit?usp=sharing


Figure 1: Participants were introduced to the system and shown its relevant features.

Figure 2: Participants were tested on their ability to use the cardinal features of the BlueCane in task #1.

Figure 3: Participants followed directional cues in task #2.


Figure 4: Participants completed the same navigational task, but with the added distraction of background noise and luggage to carry.


Figure 5: A video of a participant undergoing testing is hosted at the link above.