P6 – Team Chewbacca

Group Number and Name
Group 14, Team Chewbacca

Group Members
Eugene, Jean, Karena, Stephen

Project Summary
Our project is a system consisting of a bowl, dog collar, and mobile app that helps busy owners take care of their dog by collecting and analyzing data about the dog's diet and fitness, and optionally sending the owner notifications when they should feed or exercise their dog.

Introduction
This is a system consisting of a bowl, dog collar, and mobile app that helps busy dog-owners take care of their dog. The bowl tracks when and how much your dog is fed, and sends that data to the mobile app. The collar tracks the dog's activity level over time, and sends pedometer data to the mobile app. You can also check your dog's general health, and the app gives suggestions for when you should feed or walk your dog. The mobile app links the two parts together and provides a hub of information. This data can be easily accessed and shared with people such as family members and veterinarians. The purpose of the experiment is to assess the ease of use of the application, bowl, and collar system, and to identify critical changes that must be made for its potential success.

Implementation and Improvements

  • Covered up the graph on the activities page, instead simply displaying the total recommended number of steps a dog should take in a day, and presetting the number of steps already taken to 2585, 15 steps below the 'recommended level'. We did this so that the user could assume their dog had already taken steps throughout the day, and so that it would be simpler for us to test whether the user could successfully register 15 more steps taken by the dog.

Overall, besides the above modification, our application was suitably prepared for the pilot usability test.

 

Link to P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-14-team-chewbacca/

Method

Participants
Three participants tested our prototype. Master Michael Hecht is an academic advisor for Forbes Residential College, as well as a Professor in the Chemistry Department. He owns Caspian, a poodle.  He was selected because he frequently brings Caspian to Forbes as a therapy dog and was willing to bring Caspian in to test our system.  He also fit our target user group of "busy dog owners". Emily Hogan is a Princeton sophomore studying Politics. She owns a beagle named Shiloh.  She was selected because she reported sharing responsibility for Shiloh with her family when she is at home. Christine Chien is a Princeton sophomore studying Computer Science.  She owns a small Maltese-poodle mix.  She was selected because she was responsible for taking care of her dog at home, and, as a computer science major, was familiar with new technologies and Android apps.  We chose both male and female participants of varying ages, areas of study, and levels of comfort with technology (particularly Android applications), which allowed us to gain diverse perspectives and see how different groups of users might react to our system.

Apparatus
We conducted our test with Master Hecht in the Forbes common room, and with Christine and Emily in a Butler common room.  We did not use any special equipment apart from the bowl and collar in our system.  We recorded critical incidents on a laptop during testing, and had users fill out an online questionnaire before and after the prototype test.

Tasks

Task 1: Exporting Data to a Veterinarian
This prototype allows users to send the mobile app's collected diet data, activity data, or both via email. This functionality is intended primarily for sending data to veterinarians.  This is an easy task, as users need only enter the "Generate Data" screen, select which type(s) of data they wish to send, enter an email address to send it to, and press the "Send" button to complete the task.

Task 2: Monitoring a Dog’s Diet
Our working prototype allows users to check whether or not they need to feed their dog based on when and how much their dog has been fed.  They can view the time their dog was last fed as well as the weight of the food given at that time.  This is a medium-difficulty task, as the user needs to interact with the dog bowl (but only by feeding their dog as they usually do), then interpret the data and suggestions given to them by the mobile app.

Task 3: Tracking a Dog’s Activity Level
This working prototype allows users to track the number of steps their dog has taken over the course of a day, compare that value with a recommended number of daily steps, and view long-term aggregated activity data.  In order to carry out this task, they must put our dog collar on their dog so that the collar can send data to our mobile app via Bluetooth.  This is a hard task for the user, as they must properly attach the dog collar and interact with the live feedback given by the mobile app.

We have chosen these tasks because they are the primary tasks that we would like users to be able to complete using our finished system.  The three tasks (communicating with a veterinarian, monitoring a dog’s diet, monitoring a dog’s activity/fitness level) are tasks that all dog-owners carry out to make sure their dog is healthy, and we wish to simplify these essential tasks while increasing their accuracy and scope.  For example, while dog-owners already monitor a dog’s diet and activity by communicating with a dog’s other caretakers and using habit and short-term memory, we wish to simplify these tasks by putting the required data onto a mobile app, using quantitative measures, outputting simple suggestions based on quantitative analysis, and allowing long-term data storage.  We relied heavily on interviews with dog-owners in choosing these tasks.

Procedure
We conducted this study with three participants in the settings described above (see “Apparatus” section).  We first had each participant sign a consent form and fill out a short questionnaire that asked for basic demographic information.  Then, we read a scripted introduction to our system. After this introduction, we gave a short demonstration of our mobile app that involved sliding to each of the major pages and viewing the long-term activity level graphs.  We then read from the Task scripts, which required users to interact with all parts of our system and give us spoken answers to prompted questions.  During these tasks, we took notes on critical incidents.  After the tasks were completed, we asked for general thoughts and feedback from the participants, then asked them to complete a questionnaire that measured satisfaction and included space for open-ended feedback.

Test Measures

  • critical incidents: Critical incidents allowed us to see what parts of the system were unintuitive or difficult to use, what parts participants liked, if anything in our system might lead participants to make mistakes, etc.  It allowed us to see how participants might interact with our system for the first time, and how they adjusted to the system over a short period of time.
  • difficulty of each task (scale of 1-10): This allowed us to identify what parts of our system participants found most difficult to use, and whether we could simplify the procedure required to carry out these tasks.
  • usefulness of mobile app in accomplishing each task compared to usual procedure (scale of 1-10): This allowed us to measure whether and how much our system improved upon current task procedures, and identify areas for improvement/simplification.
  • relevancy of task to daily life (whether or not they perform this task as a dog owner): This allowed us to see whether any part of our system was superfluous, and which parts were essential.  This would allow us to decide whether to eliminate or improve certain parts of our system.
  • general feedback: This allowed users to give us feedback that might not be elicited by the specific questions in our questionnaire.  It allowed us to collect more qualitative data about general opinions about the system as a whole, and also gave us insight into possible improvements or new features that participants might like to see in the next version of our system.

Results and Discussion
Participant 1 (Master Hecht) had never used an Android phone, so he was unfamiliar with the swiping functionality of the home page and was unsure how to access the settings page that helped to generate the report. He mentioned that he would have spent about one minute learning the system if it had been an iPhone app, but about three minutes since it was an Android app. He was really excited about the idea of tracking the dog itself, as opposed to just tracking the number of steps his dog was taking. He thought that the data should be continuous as opposed to binary, because it would provide greater functionality for the user and be more interesting in general. He thought the feeding data that told him only whether or not the dog had been fed wasn't as useful as it could be; when he learned that the data actually had higher resolution (we could report the weight of the food in the bowl as well), he thought it would be more useful. In the post-questionnaire, he said that the tasks were generally pretty easy, and the only trouble came from navigating through the pages of the app. He also mentioned that he would probably not use the export feature (task 1), simply because he, as a pet owner, didn't have much interaction with his veterinarian since his dog was normally healthy. When we put the device on the actual dog, the accelerometer recorded the steps pretty accurately. The only problem was that the physical device was pretty bulky and added a lot of weight for the little dog.

Participant 2 (Christine), an Android user, easily navigated the mobile application and completed all of the tasks in little time.  For task 1, she had some trouble finding the "export data" popup, and felt that a non-Android user might not know where to find this button.  For task 2, she quickly interpreted the homepage alert that her dog needed to be fed, easily found the time last fed, and successfully used the dog bowl so that this time updated.  For task 3, she pressed the "diet" and "food" columns on the homepage instead of swiping, which she said she found confusing.  She easily interpreted the suggestion on the homepage that the dog should be walked, though at first she confused the "already walked" step count with the recommended step count; she quickly corrected her error.  In the post-task questionnaire, she reported that the tasks were all easy and that the app was useful in accomplishing all of them.  However, she said that Task 3 was not a task she would actually perform.  She also suggested that if one of the diet or activity columns on the homepage is "lit up" because your dog should be fed/exercised, a click should take the user to the appropriate page.  She thought that exporting data from the homepage was unintuitive, and that data should be exported from the page that displays that data.  She reported that a useful feature would be allowing users to set up alarms for feeding, as that is what she would use the app for.  Overall, she thought the health score was not well explained, and that the "eating" functionality of the app was more useful to her.

Participant 3 (Emily) had little to no experience with the Android interface, and overall had more difficulty using the device. She commented that the Android interface was very difficult and non-intuitive. In Task 3, we noticed that the pedometer was more sensitive than it should have been, as the step count continued to increase even when the collar was barely moving. In Task 2, she had a bit of difficulty interpreting the time last fed, since it was displayed in military time. In the post-questionnaire, she reported that she felt Task 1 wasn't as important as Tasks 2 and 3, and that the tasks were generally easy to perform. She noted, "If I knew more about using an Android it would have been much easier." She felt that she would use the diet component of the device far more than the activity component, since she often must collaborate with her siblings and parents to figure out whether the dog has already been fed. She also explained that she could benefit from notifications and calendar reminders, since she often relies on those for her daily routines.

Overall, our users offered us valuable input that could prompt changes in the structure of our prototype. We definitely need to move the "generate report" option to a more visible location, possibly directly on the home screen, or include in our demo a demonstration of how to use the menu option. In addition, allowing users to navigate to the "diet" and "activity" pages by clicking buttons on the home screen (in addition to, or perhaps in place of, swiping) would make our mobile application interface more intuitive, as all of our users commented on this aspect of our interface. We could also develop our prototype to be accessible on both Android and iOS devices, so that users would feel more comfortable with the interface given their previous experience. We might also incorporate more data about the weight of food placed in the bowl, for example by keeping a running list of every time the dog has been fed and the weight of the food at each feeding. This data could then be used to give the user more relevant information about their dog's diet, so they could check when they missed a feeding, and whether their dog is eating more or less than usual. Following suggestions given by both Emily and Christine, we are also considering adding a notification/alert system.  Finally, as the pedometer was more sensitive than expected when testing with a real dog, and was also a little heavy for the small frame of our test dog (Caspian), we hope to improve the pedometer's accuracy by updating the step-detection threshold and compacting the device.
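To make the planned threshold fix concrete, here is a minimal sketch (not our actual code) of the kind of debounced step detection we have in mind for the collar; the pin assignment, threshold, and debounce interval are hypothetical placeholders rather than calibrated values.

// Hypothetical step-detection sketch for the collar's Arduino.
// Counts a step when the accelerometer reading spikes above a
// threshold, with a debounce interval so that one shake of the
// collar is not counted as several steps.
const int ACCEL_PIN = A0;               // analog accelerometer output (assumed wiring)
const int STEP_THRESHOLD = 520;         // raw ADC value treated as a step (placeholder)
const unsigned long DEBOUNCE_MS = 250;  // minimum time between counted steps

long stepCount = 0;
unsigned long lastStepTime = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(ACCEL_PIN);
  unsigned long now = millis();
  // Raising STEP_THRESHOLD or DEBOUNCE_MS makes the pedometer less
  // sensitive, which is the adjustment our test with Caspian suggested.
  if (reading > STEP_THRESHOLD && (now - lastStepTime) > DEBOUNCE_MS) {
    stepCount++;
    lastStepTime = now;
    Serial.println(stepCount);
  }
}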

Appendices

Consent form:

https://docs.google.com/document/d/1FK5_7VmPZ3os32Q3CmF_3rFTRB_H8OUPCLy7xBNQ26k/edit?usp=sharing

Demographic Questionnaire:
https://docs.google.com/forms/d/1Cujqbl2Q_7GA9PFGzrrYe83vKs-CAeJY1Qso5g5WHuY/viewform

Demo Script:
https://docs.google.com/document/d/1W8unBcnXdMdZxjiDg7FT-lADs9LcMw_aOve7ByvXfi8/edit?usp=sharing

Raw Data

Critical Incident Logs:

https://docs.google.com/document/d/1Nzn7i44Cz2lIhsteAuCyvUOuY1HuCOEzsg17reM3GPM/edit?usp=sharing

https://docs.google.com/document/d/1hIF7L-jMsIHH6_Askf-Yz04eAQ4i4sPR3Ltu_DIaJoU/edit?usp=sharing

https://docs.google.com/document/d/1aNa13LMWkd9lv0nK7VLrjnOWwVPH3vR9l0fikY071hs/edit?usp=sharing

Questionnaire
https://docs.google.com/forms/d/138hWAz_omSI9VxOTm9TTANcGSCIwtzWcIX03HaKWMIQ/viewform

Questionnaire Responses
https://docs.google.com/spreadsheet/ccc?key=0AvIsHnhsuA4QdENhY1B3RlJPVFlGdzZYWENwalpJcnc&usp=sharing

Pictures:

[Photos from our usability tests]


P5: Group 14 (Team Chewbacca)

Group number and name
Group 14: Team Chewbacca

First names of everyone in your group
Karena, Jean, Stephen, Eugene

Project summary
Our project is a system consisting of a bowl, dog collar, and mobile app that helps busy owners take care of their dog by collecting and analyzing data about the dog's diet and fitness, and optionally sending the owner notifications when they should feed or exercise their dog.

Tasks supported by prototype

1. Sending diet and activity data to vet
The user will be able to generate a detailed profile using the data collected over extended periods of time by our device, and send this report to their veterinarian. Although collecting and sending the data is difficult on the backend, generating this report should be very easy for the user, requiring only a few button clicks. This report will be useful for the veterinarian and other caretakers of the dog in managing the dog's overall fitness.

2. Monitoring diet
The user will be able to check whether or not they need to feed their dog based on when and how much their dog has been fed. Based on this information the user can see if they forgot to feed their dog, and ensure that their dog is not overeating or undereating. This is a medium-difficulty task for the user, as they might check both the homepage and the "diet" page of the app.

3. Monitoring activity
The user will be able to check whether or not they need to exercise their dog, based on their dog's recent activity level. More detailed analytics on the dog's activity level can be viewed in real time over a Bluetooth connection, or with graphs that display long-term data. This is a difficult task for the user, as they might check three different screens in order to see all of this information (the homepage, the general activity page, and the detailed activity graph page).


Changes in Tasks
We changed our tasks slightly from the previous step, based on our interviews and the user feedback we received. From our understanding, users wanted the app to give them direct suggestions based on the collected data, such as whether or not they should feed or walk their dog at the current moment. Therefore, we made our tasks more directed toward finding information to guide the user's actions: deciding whether or not to feed your dog, deciding whether to walk your dog, and generating and sharing your dog's fitness report. In addition, we changed Task 1 from "Export data" (in P4) to "Send data", as a user who tested our prototype said that they would not use an exported data document for themselves, and found it confusing that this task was for "exporting" when they would only use it for sending reports to the vet.

Discussion

Design Changes from P4
We made several changes to our design as a result of P4.  First, we got rid of the LED and time display on the dog bowl, as two out of the three users who tested our P4 prototype found the LED color display confusing, and these users also said that both the LED and the time display were superfluous (see the first paragraph of the "discussion of results" section of our P4 blog post).  We thought that this change also made our system much more unobtrusive, and we want our system to be seamlessly integrated into a user's normal routine.

In addition, we made several changes to the mobile app interface.  For example, we made the homepage of the mobile app display only suggestions to the user about their dog's diet and activity (i.e. whether or not they should feed or walk their dog).  We changed this after two of our users in P4 said that they didn't care about the detailed quantitative data and/or had trouble interpreting it (see the first paragraph of the "discussion of results" section of our P4 blog post).  As a result of this user feedback, we also created "secondary" pages in our app that display the most pertinent quantitative information about the dog's diet and activity, with buttons on these secondary pages that take the user to the more detailed data.  This keeps the app from displaying an overwhelming amount of data to users who do not need it, and overall makes the app more user-friendly. Some other small changes we made to our app were to display the "health score" as a score out of 100 (which one of the users in our prototype test suggested), and to make the "export data" page specialized toward sending the data to a veterinarian.

Storyboards

[Storyboard photos]

Sketches

[Sketch photos]

Overview and discussion

Implemented functionality
We have currently implemented a dog bowl that detects the weight of food/water that is added to it and records the time at which the contents were added, a dog collar with an attached accelerometer that records the number of steps taken, and a mobile app that collects the data from these devices via Bluetooth and displays it in real time.  Currently, the app can display a real-time graph of the number of steps taken over time, the total number of steps taken, and the time the bowl was last filled.
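As a rough sketch of the bowl-side logic (illustrative, not our exact code), the following Arduino snippet watches for a jump in the weight-sensor reading and records the feeding time with the Time library; the pin number, jump threshold, and clock-setting values are placeholders.

// Hypothetical bowl sketch: detect that food was added by watching for
// a jump in the weight reading, then report the amount and the time.
#include <Time.h>

const int SCALE_PIN = A1;       // analog output of the weight sensor (assumed)
const int JUMP_THRESHOLD = 30;  // ADC change treated as "food was added" (placeholder)

int lastReading = 0;

void setup() {
  Serial.begin(9600);
  setTime(12, 0, 0, 1, 5, 2013);  // placeholder clock setting (hr, min, sec, day, month, yr)
  lastReading = analogRead(SCALE_PIN);
}

void loop() {
  int reading = analogRead(SCALE_PIN);
  if (reading - lastReading > JUMP_THRESHOLD) {
    // A feeding event: report the raw amount added and the current time.
    Serial.print("Fed at ");
    Serial.print(hour());
    Serial.print(":");
    Serial.print(minute());
    Serial.print(", amount (raw units): ");
    Serial.println(reading - lastReading);
  }
  lastReading = reading;
  delay(500);
}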

Unimplemented Functionality
We left some of the proposed functionality of the complete system out of this prototype.  For example, we left the “diet log” page of the app out of the prototype, as this is not a crucial part of the system and will not be accessed often by actual users.  In addition, the crucial data (time and amount of last feeding) are displayed already on the general activity page, and are sufficient for user testing.  We also did not implement the push notifications telling the user that they need to feed/exercise their dog, as these suggestions are already displayed on the homepage, and this is sufficient for testing.  Finally, we did not implement the settings page that would allow users to customize their experience on the app.  This settings page would also set their preferences for push notifications, i.e. what time they usually feed their dog/how long after this time they wish to be notified, and after what duration of low activity they wish to be notified.  Both the settings page and its interaction with the push notifications are complex for both the user and the back-end implementation, so we decided to leave this until the next stage of development.

Wizard-of-oz techniques
Though the hardware aspects of our project (dog bowl weight sensor and dog collar accelerometer) are currently implemented, we use several wizard-of-oz techniques in our mobile app.  For example, the suggestions displayed on the homepage of the app for whether or not the user should feed or walk their dog only change after the suggestion boxes have been pressed five times.  In the future, these suggestions will change based on the collected data and the deviation of this data from the suggested amounts of food/activity. However, we felt that this implementation still allowed the user to see how they could make decisions based on the suggestions given by the app.

In addition, we use a wizard-of-oz technique for the "send data" page of our app: currently, it allows the user to select options and type in their vet's email address, and displays a success notification when the "send" button is pressed.  However, nothing is actually sent; in the future we wish to actually create a document and send it via email.  We felt that the wizard-of-oz technique allowed the user to experience how they would interact with the app when actually sending data. Finally, the activity graphs shown on the "activity" page of the app are wizard-of-oz, as we currently display graph images that are not updated or based on the actual collected data.  We used wizard-of-oz for this aspect of the app because we are not able to test our system over a long period of time, and thus have no long-term data with which to create these graphs.

Code

For Arduino:
MeetAndroid.h library
Time.h library

For Android:
From Amarino:
– Amarino libraries
– SensorGraph
– GraphView
– Amarino application to connect to bluetooth
DateFormat library
Date library

Tutorial for Sensor Graph
http://www.bygriz.com/portfolio/sensor-graph/#more-1344

Tutorial for Android Beginners
http://developer.android.com/guide/topics/ui/controls.html
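
For reference, a minimal example of how an Arduino sketch can push sensor readings to the Android app through Amarino's MeetAndroid library might look like the following; the sensor pin and send rate here are illustrative, not our exact protocol.

// Minimal Amarino example: stream an analog sensor value to the phone.
#include <MeetAndroid.h>

MeetAndroid meetAndroid;
const int SENSOR_PIN = A0;  // accelerometer or weight sensor (assumed)

void setup() {
  Serial.begin(57600);  // Amarino's default Bluetooth baud rate
}

void loop() {
  meetAndroid.receive();                     // handle any events sent from the phone
  meetAndroid.send(analogRead(SENSOR_PIN));  // push the latest reading to the app
  delay(100);                                // roughly 10 samples per second
}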

Video and Image Documentation

Demo Video: http://www.youtube.com/watch?v=jyTRtAOslgg

[Screenshots of the mobile app interface]

P4: Team Chewbacca

Group number and name

Group 14, Team Chewbacca

Group members

Eugene, Jean, Karena, Stephen

Project Summary

Our project is a system consisting of a bowl, dog collar, and mobile app that helps busy owners take care of their dog by collecting and analyzing data about the dog’s diet and fitness, and optionally sending the owner notifications when they should feed or exercise their dog.

Description of test method

Procedure for obtaining informed consent

We gave each participant an informed consent form to peruse and sign before the test began. This consent form was based on a standard consent form template. We believe this is appropriate because the consent form covers the standards for an experiment.  The consent form can be found here.

Participants

The participants in the experiment were all female college students who have dogs at home. They were selected mainly because they have experience actually caring for a dog, which allowed them to give valuable insight into how useful the system would be. They were also busy and away from their pets while at college, making them optimal test subjects, as this app is particularly useful for busy pet owners who stay away from home for long periods at a time.

 Testing environment

The testing environment was always an empty study room. Two users were tested in a study room in the basement of Rockefeller College, and one in a study room in Butler College. We do not believe the environment had any specific impact on the subjects. In addition, using the exact same environment for everyone could have been problematic, as it would have felt familiar to two of the subjects but unfamiliar to the third.

The prototype was set up as a metallic dog bowl and a bag of Chex Mix (as dog food) that sat on the table in front of the user. The “homepage” of the paper prototype was placed in front of the user, with the remaining slides set down as users interacted with the prototype.

Testing procedure

The scripts for each task can be found here.

Eugene introduced the system and read the scripts for each task.  He also asked the general questions we had prepared for the end of the testing procedure. Jean handled the paper prototype, setting down the correct mobile app slides and drop-down menus as users interacted with the app.  She also handled updating the “LED” and time display on the dog bowl after users filled the bowl.

Stephen completed the “demonstration” task using the mobile app prototype.  He also asked more user-specific questions at the end of testing, following up on critical incidents or comments the users had made during testing.  Karena served as a full-time scribe and photographer during the testing.

The tasks were performed in the following order:

1. Interaction with the Dog Bowl Over Two Days

2. Checking Activity Information and Choosing to Walk the Dog

3. Sending Data to the Vet

These tasks were chosen to loosely cover the entirety of the system (bowl, collar, and app), and to obtain specific information. They were completed in order of decreasing frequency of real-life use (we imagine that users will use this system primarily for feeding their dog/getting notifications when they forget to feed their dog, somewhat less frequently for checking its activity level, and occasionally for sending data to the vet).  Task 1 was used to obtain user opinion on the dog bowl interface, the front page of the app, and the importance of notifications. Task 2 was used to obtain user opinion on the collar interface, the data panes of the app, navigation through the app, and how important they found this task in real life.  Task 3 was used to obtain user opinion on the data exporting page of the mobile app.

[Photo: User 1 completing task 1 with the prototype]

[Photo: User 2 completing task 2 with the prototype]

[Photo: User 3 completing task 1 with the prototype]

Results summary

We noticed several instances where the user did not know what to do with the data and would simply stare at the app. Most users thought there should be additional information explicitly recommending what they should do based on the data that was collected. Because they all eventually figured out what to do, this can be categorized as a minor usability problem; it occurred for two users when they looked at the line graph that mapped the activity level of the dog. Two users were uncertain of what the LED on the dog bowl indicated; this can also be categorized as a minor usability problem, as the color of the LED alone was not enough to convey information to the user. Another minor usability problem was that two out of three users did not see that the "time last fed" was on the homepage, and instead navigated to the "diet" page of the mobile app when we asked them to complete the feeding task; users were always able to find this information, but tended to take unnecessary steps even though it was right on the front page. There was a major usability issue with exporting data to the veterinarian: two users expressed not knowing whether an e-mail or text message was sent to the veterinarian, or whether the information was sent at all, and one user was confused by the fact that the export data button said "Create" instead of "Send" (when the task was to send the data to the veterinarian).  Another major usability issue, pointed out by one user, was that the "total health score" displayed on the app was just a number, and she didn't know what scale it was on (it was out of 100, but that was not written on the app).  There were no significant usability issues with the dog collar; most users found the interface intuitive.

Discussion of results

The biggest takeaway from user testing was that the users wanted digestible bits of data. They didn't want static information that told them how much their dog was walking, but rather a recommendation telling them exactly how much they should walk their dog based on its activity level. Because of this feedback, we will most likely redesign our interface to include fewer numbers and line graphs and more direct recommendations.  Furthermore, we became more aware of the variability in our users. We found that our first user would be very comfortable with getting notifications that their dog had not been fed or was getting less activity than usual, while our second user would not want to be constantly bothered by such notifications. This gave us the idea of introducing a settings feature that would allow users to choose whether or not they want notifications. From our observations, we also noticed that it would be a good idea to give the user confirmation that tasks were achieved, especially in the case of exporting data to the veterinarian.

Some small changes we will make as a result of our user tests are redesigning our "export data" page so that it is primarily geared toward sending the data to a veterinarian (with a "Send" button), and using a more intuitive metric for the total health of the user's dog (possibly a score out of 100).  In addition, because two out of three of the users found the LED on the dog bowl confusing, and the remaining user told us that the LED was redundant given the time display, we will be getting rid of this feature on the dog bowl.  Because our project goal is to create a useful but unobtrusive system, we feel that getting rid of the LED aligns with both the project goals and our test observations.  However, we will be keeping the time display, because one user said it was very useful.  We will also be keeping the dog collar's design the same, as the users did not have a problem with it.

Subsequent testing

We feel that we are ready to proceed without subsequent testing.  Two out of three parts of our system (the dog bowl and the collar) did not show any major issues during our tests.  The only usability issue that we encountered with these components was that the LED on the dog bowl was confusing and redundant, so we will be removing it from our high-fidelity prototype.  None of the three users expressed any other problems with these two components of the system, so we feel comfortable proceeding to the high-fidelity prototypes.  In addition, we feel that we are ready to proceed to a high-fidelity prototype of our mobile application, as all users seemed to have problems with the same parts of the mobile app, and the feedback we got in this initial round of testing gave us a clear plan for how to redesign it.  Finally, the problems that users faced are ones we think can all be fixed in our high-fidelity prototype.

 

 

Assignment 2: Jean Choi

Observation Notes:

I conducted my observations before my MUS 220 lecture (after arriving early), at Frist in the computer cluster/printer area from 1:20-1:30pm, and while walking from Wu Dining Hall to the Friend Center between 7:20-7:30pm.
Student 1: MUS 220 lecture
Student 1 read a book for the ~5 minutes she had left before lecture started.  The flags in the book made it likely that she was working on class-assigned reading.  The student was very focused on the book, and continued reading until after the professor opened the class. She then hurriedly stored away the reading without using a bookmark, took out her laptop, logged in, opened up a word document, and started taking notes.
Student 2: Frist Printer
Student 2 arrived at the Frist 100-level printer around 1:20.  There were already a couple students at the printer, and he went to use one of the cluster computers to print his document. When he was done sending the document to the printer, there were 2 students in the line for the printer.  He asked the student immediately in front of the printer how many pages she had left to print, and when she didn’t have a definite answer, he asked, “More than 5 pages?”, and she said that was probably the case.  Instead of waiting, he ran toward the stairs, presumably to use the 200- or 300-level printer instead.
Student 3: Walking to the Friend Center
I walked with student 3 to the Friend Center from Wu Dining Hall.  After finishing dinner at a little after 7:20, we were pretty rushed.  While we walked, he took out his iPhone to check his emails, and quickly flipped through Facebook and some urban planning/architecture websites.  When I asked him, he said that he usually flips through the same few websites when he’s on his way to class (mostly Facebook and the Civil Engineering websites he was flipping through, and sometimes online news blogs).  Though after long breaks like lunch and dinner he is sometimes late to class, the rest of his classes are pretty close together, so he usually gets to class early.  He said he works on problem sets in that extra time, and actually makes pretty good progress on them throughout the course of the day.
Observation Insights
Observing these students helped me to identify some possible design focuses.  I thought that it was interesting that two of the students I observed did schoolwork between classes — I thought that the changing period would be too short to get anything done.  However, especially for Student 1, I thought that the work seemed to be disjointed, and the students might have problems remembering where they were and refreshing themselves before starting work. Maybe an app to “bookmark” their work for quick review would be helpful.  Student 2’s experience with printing was also similar to bad experiences I’ve had with printing, so I thought that this was a possible design opportunity.  Finally, Student 3 showed me what I initially thought most students would do — looking at websites and email.  I was surprised that he regularly visited civil engineering websites, and it made me think “outside the box” in terms of web browsing (I had previously thought of apps to only integrate Facebook and Gmail, etc).
Brainstorming
1. An app for printing documents for classes quickly and finding the nearest printer.
2. An app that learns over time how long it takes to walk to your classes, then gives you approximate times to help you avoid being late.
3. An app that helps you find where your friends are sitting in large lecture halls, possibly with an interactive map
4. An app for late students, that provides a transcript or notes on the first few minutes of lecture
5. An app that shows what you’ve missed on social media/news sites during the previous class
6. An app that allows teacher/student or student/student interaction after classes for questions and answers
7. An app that keeps track of the distance, speed, and calories you burn walking to classes
8. An app that suggests easy ways to get more exercise out of your between-class walk (speed-walking, walking backwards, etc)
9. An app that suggests people to call and keeps track of how long/often you talk
10. An app for pre-lecture preparation: the professor can upload study questions or notes
11. An app that stores the questions of psets you are working on and allows you to think about them while walking
12. An app that keeps track of the number of people in different libraries and study spaces, allowing you to decide where to go
13. An app that combines sound bites of breaking world news that plays for 10 minutes.
14. An app that lets you know which coffee machines on campus are empty.
15. An app where you can input your food/drink preferences, and it suggests places with menus you would like.
16. An app that consolidates events on campus that you would be interested in (you set the filters)
Prototyped Ideas
 
– “Quick-Printer” — I chose this idea because I’ve been frustrated by trying to print between classes many times, and I thought it was an idea that would appeal to many students and be very practical, allowing them to be more prompt and prepared for class.
Captions:
1. From left to right: The welcome screen for QuickPrint, which uses netid and password to sync with ICE and BlackBoard; The personal “home” screen, which lists your classes (from ICE) in the order of closeness to the current time and a button to the “find printer” map and interface; the Blackboard course materials list, which is the default location when the user picks one of his/her classes.
2.  The print pop-up window that meets users every time they print a document.  The options are similar to the options that come up when one tries to print a document from Blackboard using a normal internet  browser.
3. The sequence of screens that faces a user trying to obtain documents from a website other than Blackboard. From left to right: The user types the URL into the new tab (with the + sign); a pop-up window allows users to select whether they want that link to be “remembered” by that class (for example, remembering the COS 226 course website to print out lecture notes); the new website, with the printable links.
4.  The screens related to the map part of the application. From left to right, the main map screen that uses the user’s GPS location; the popup window with printer locations for Frist (accessed when the user presses on the picture of Frist on the map); the popup window with printer locations for Woolworth.
[Prototype screens 1-4, as captioned above]
– “World News in 10 minutes” — I chose this idea because I’m often concerned about how little I hear about world events while in the Orange Bubble, and this app would not only be very informative, but also make my usually boring walks more interesting.
Captions:
1. The home page and login page for “World News in 10 Minutes”.  Login can also be synced with Facebook.
2. Left: The main page that includes the 10-minute podcast and allows users to comment. Right: Options for people who want to learn more: links to the original news stories, photos, and expert opinions.
3. The pages for the extra options: A slideshow of pictures from around the world from that day, and expert commentary on the news stories that are mentioned.
[Prototype screens 1-3, as captioned above]
Paper Prototypes
“QuickPrint”
QuickPrint is a mobile application that makes printing between classes faster and less frustrating.  Using data from ICE, it finds which classes you are enrolled in and which ones you are most likely to want to print documents for at a given time.  It allows you to navigate to each class's Blackboard Course Materials page at the click of a button, or go to another website (e.g. Piazza or a course website) where documents are located, then print the documents there. It "learns" the links of web pages other than Blackboard that contain class documents, so it will be easy to print documents from that location at a later time.  Finally, it uses data from your phone's GPS to locate the nearest buildings containing printers, then displays the locations of those printers.
World News in 10 Minutes
This is an application that makes a 10-minute summary of world news every day, for students to listen to on their way to class. It has a simple interface with a 10-minute story that updates every day.  Students can listen through their phones or iPods.  There is also a comment feature that lets listeners express their opinions and engage more actively with the news.
User Testing
 
Rishita Patlolla testing my prototype. She found navigation fairly intuitive, but sometimes looked for back buttons where I had not implemented them.  She also suggested that I add a “home” button, or make the top bar with the “logo” a link to the homepage.
Rishita Patlolla:
Rishita navigated through the user interface very quickly, and did not have many problems using the application.  She tried to click on the printer options, which I had not accounted for in my paper prototype.  She made the suggestion that it be able to sync automatically with various library printers, which are not on the same network as the rest of the campus printers.  She also suggested that I include a "recently printed" link on the homepage in case a user wants to print a certain document (or different sections from the same document) multiple times.  A very immediate change that she suggested was that I add a "successful printing" popup window to confirm that users had printed a certain document.  Finally, we talked about whether or not it would be useful to make an internet browser accessible on the homepage (so you don't have to first click on a class to access the internet); we thought that though it was a useful feature, it would be good to think about where to draw the line with internet accessibility (at what point does it become too much like a regular web page?)
Yolanda navigates the popup windows from the map, which list the locations of the printers. She thought they were clickable, and suggested that I make them clickable and have them show each printer's status.
Yolanda starts to use my prototype
Yolanda Yeh:
Yolanda had many useful suggestions and insights about the prototype, which gave me many ideas for future improvement.  She asked why I had a back button on certain screens and not others (such as the popup window for the locations of printers), and pointed out that Android apps automatically have back buttons built in, so I would only need to add them for an iPhone app.  She also said that it would be very helpful to let the user know whether certain printers were broken, as is done by the Point app currently implemented for Princeton; maybe I could sync my app with Point to add this feature. In addition, Yolanda commented on her experience with accessing Blackboard from her iPhone.  Blackboard is often slow and has a frustrating sign-in process, and its process for opening and printing documents is not very user-friendly, so I might want to find a different way to implement this for my app instead of just using Blackboard's functions.  Finally, she asked if I had considered adding a print preview for documents, and we talked about how feasible this would be given the small size of smartphone screens.  I think this is an important trade-off that comes up in mobile apps (more information vs. a more cluttered interface), and I should spend more time thinking about how to make this decision in my app.  A couple of non-intuitive things about my interface that I noticed as she navigated were the setup of the tabs on each class page (she seemed confused about what the tabs actually were, asking if they were just internet browsers, and wasn't sure what it meant when the app asked her if she wanted to "save" that location), which made me think again about how similar or different my app is from a web browser (a similar issue to the one that came up when Rishita tested my prototype).  In addition, on the second page (pick class, or find printer), she thought that the classes were "options" and the "Find Printer" button was the only real button, when actually they are all buttons that lead to new pages.  I might want to make that clearer in my prototype.
Karena using my prototype. She found the alternation between “back” buttons and x boxes confusing, and tried to slide screens I had not expected would slide.
Karena Cai:
Karena also pointed out many opportunities for improvement in my app.  She said that though my app might make it faster for students to send documents to the printer and find nearby printers, sometimes the real problem occurred when the student actually got to the printer and found a line of students there. She suggested that I add a feature to my app that shows how busy each printer is.  She also asked how my app would deal with different types of documents (Word documents, PDFs, etc.), and whether it would show the document before printing.  This had also come up when Yolanda tested my prototype, so it's definitely a part of my app that I should think about.  One thing I noticed during my observations was that she used the "slide" feature of iPhones frequently, and I had not built it into some of my screens.  She also did not see some of the x boxes on the popup windows, and was looking for "back" buttons instead; I think I should be more consistent in using x boxes vs. back buttons.

P1: Team Chewbacca

Brainstormed Ideas:

1. Posture helper—Detects and notifies you when you do not have the right posture.

2. Push-up monitor—detects motion along your body as you do push-up and ensures that your push-up form is at tip-top shape; can also add some sort of motivational game that helps improve your form

3. Built-in shoe sensor—help improve the monitoring of a runner; detects how many steps you have taken, how fast you have taken them, distance, and impact, and compares to previous runs/makes suggestions about how to avoid injury

4. Alarm clock stimulator—Uses a combination of sensory features: sound, touch, and smell to guarantee that the user wakes up

5. Sleep-movement regulator— detects lapses in the REM cycle; plays soothing sounds when movement is detected to help user sleep better

6. Classroom alarm clock— detects when a student is beginning to fall asleep and uses sensory stimulation such as vibration to keep the student awake

7. Pen that does not write, but senses pressure and shapes and identifies writing so you can write on any surface and it will compile it into a digital document.

8. Proximity reminder—signals to you what types of errands or tasks you should do when you're within a certain area (or within proximity of something like the grocery store)

9. Office-hour scheduler—helps coordinate meetings with other people (allow them to see your location)

10. Parking-locator— Indicates where you have parked and sends you the location; also can issue an alarm that emits from the car if button is pressed (alarm button attached to key only works within small distance)

11. Item-locator—helps locate most valuable misplaced items simultaneously (cell-phone, wallet, keys, etc.)

12. Smart shower: before the shower, collects data from skin contact and determines the water temperature and duration needed for the skin's dryness and dirtiness

13. Expanding on #12, during the shower, a skin sensor prevents the skin from becoming too hot (proven to be unhealthy) and gives you and the shower feedback

14. Small wearable sensor worn on skin that detects skin’s dryness and suggests when lotion should be applied.

15.  Better sensing techniques for a blind person—modify a cane with additional detection of distance and sound/touch feedback

16.  Breath detection—resolves the problem of bad breath; detects the chemical composition of breath and warns user whether or not they need mints or breath-refreshers

17.  Hydration detection— building off of #16, resolves the problem of dehydration; detects the chemical composition of breath and warns the user whether or not they need to drink water

18.  Bacteria detecting utensils—prevent user from ever getting stomach-viruses by warning user whether the food they are picking up with their utensil has potentially harmful bacteria.

19. Interactive learning game (e.g. for multiplication tables) with sensory feedback such as vibration, sounds, or smells that correspond to the associations (helps user remember by using multiple senses)

20. Visual note-taking: Records what you write, detects circled terms, and creates a digital version of your notes with images of the terms you circled. Could also automatically detect keywords in your notes and create a “collage” for visual learners/quick refreshing.

21. Glasses that display notifications on the lenses, could be used for emails, schedule notifications.

22. Discreet texting device: personal projector/video camera that projects a keyboard onto the surface in front of you, detects which letters you press, and allows you to send texts discreetly during classes, meetings, etc.

23. Discreet device attached to arm or clothing that warns you when your stocks are going down with vibrations, includes a quick sell button or a more complex interface based on personal projector (see texting device).

24. An accelerometer or flex sensor for hand and arm motions that senses and remembers if you’ve done certain actions (locking doors, closing door, etc)

25. Commute to-do list — user uploads to-do list (quick calls, small articles/memos to read), and device detects traffic/stop-and-go motion of car and suggests appropriate to-do list for commute (may give warning/turn off when car is in motion)

26. Shoe/step sensor — worn inside shoe, gives information about step style, detects pronation and supination and recommends appropriate shoes/insoles

27. Smart closet — takes data about weather and suggests appropriate clothing (if complete integrated system, might move those pieces of clothing to front/middle of closet)

28. Habit-fixer — a small accelerometer device that is affixed to appropriate part of body and detects habits that you want to fix (biting nails, jiggling legs)

29. Cell phone as Mouse – attach your cell phone to your computer, and then use that as a mouse / menu options to reduce clutter on the screen, use ultrasonic information too.

30. Calorie detector – probes food items and determines the nutrition information.

31. Smart Room – changes music and TV upon entering a room according to predetermined tastes of the users who enter (using device carried by user or mobile data). If more than one user enters, it will figure out the common interests among them all.

32. Athletic-form helper – for any type of athletic activity, a certain technique is necessary; use some sensors that detect the motion of the user, and the movement can be shown through a video feed, and thus, the form of the athlete can be improved (ice skating, pitching a baseball, etc.)

33. Hand sensor for playing violin, guitar, etc, that detects hand posture and gives advice on improvement, also can download specific pieces and it can tell you which finger intervals were wrong

34. Orchestra section communication device: attaches to music stands, detects when changes are made to music of principal stand, and notifies rest of stands about type of change (bowing, etc) and location (measure number)

35. Building off of #34, Kinect-type device attached to music stand that detects when bowings within section are not coordinated.

36. Contextual music page-turner: detects the notes that are being played or sung and lights up the corresponding notes on the sheet music; mechanically flips the pages when the end of the sheet music page is detected

37. Instrument upkeep device — small device that can be run over stringed instruments to detect moisture of wood, rosin/oil buildup, and state of varnish, to determine when a humidifier should be used or cleaning/varnishing should be done (analog devices already exist for humidity, but a more exact/holistic sensor is needed)

38.  Urine characterization – takes input of color, chemicals, etc. and determines certain characteristics about your healthiness, and suggests improvements.

39.  Virtual mouse – acts like a mouse, you can move around empty space to control the mouse on a computer.

40.  Smart baking pan — detects which areas have been greased, so you can make sure the whole surface is greased sufficiently

41. Smart beat – Creates music based on your interactions with the environment (if you are walking, takes up the beat of your steps)

42. Remote button pusher – you can have a button that will adapt to whatever device you want to click (camera so you don’t need a self-timer, turn off a light from far away, etc.)

43. Handheld scanner — scan a physical page (notes, etc), and uploads it with good design (checks alignment, spacing, etc)

44. Interactive scanner for physical books — looks like (and may act as) a clip-on reading lamp, but detects when words are underlined or notes are taken, and scans those pages for easy recap of significant passages

45. Individual temperature-sensing thermostat — has infrared sensor that detects the temperature of individuals inside the room and adjusts (if someone is exercising within the room, the heater would turn off/air conditioner would come on).  Right now thermostats only detect the overall temperature of the whole room, which adjusts very slowly to individuals’ movement or activities.

46.  Instant feedback for lecturers — projector-like device that can be mounted in front of blackboard, is given information about dimensions of lecture hall, and immediately gives feedback to instructor about whether they are writing too small for those in farthest seats.  Could also be implemented with device that measures microphone input/speaker output to tell if lecturer is speaking loud enough to be heard

47.  Light-detecting window blinds — detects brightness of sunlight and opens/closes automatically in order to create appropriate amount of light in room and prevent overheating in room (saves energy)

48.  Kinect-like device in doorway of retail stores, connects to mobile device to tell customers as they walk in what size they should look at for different articles of clothing (shirt, pants, etc), specific garments or styles that would be flattering to their body type (sizing for different stores vary greatly, would reduce time trying on different sizes, helps marketing for each store)

49.  Accompaniment-producer: Sound-detecting device that listens to the piece you are playing, identifies it, detects the speed at which you are playing it, and prepares/plays accompaniment at appropriate speed.  Works for singing (karaoke soundtrack) or instrumental pieces/ concertos (orchestral or piano accompaniment recording), and can speed up/slow down recording to match your speed.

50.  Plates/cups that detect content volume and give information to central device (can be used by restaurants to detect which diners need drink refills/are done with their courses)

51. Alternative to #50: Proximity sensors in restaurant tables that detect when devices worn/carried by waiters stop near the table for an extended period of time, gives data about which tables are being neglected/allows waiters to see which tables have already been visited by waiters who were not assigned that specific table

52. A wearable device that creates your own personal soundtrack – all of your movements are converted to sounds. For example, as you’re walking, your rate may be converted to the tempo of the song.


Selected Project:

Target Users:
Our target users are musicians, who often use sheet music but have difficulty turning the pages at the appropriate times. This is particularly true for pianists, who often have people turn their pages for them during performances.  In addition, page turns often do not occur at appropriate times in the music, and often fall during complex or fast passages that require a player's full attention and do not lend themselves well to pauses.  Turning pages yourself or using a human page-turner can introduce error into musical performances. Musicians would therefore benefit from an automated page-turning device, which would eliminate the need for extra people during performances and make page-turns more accurate and dependable.
Problem Description:
The problem is the lack of an automated way to reliably and robustly turn pages of sheet music.  This problem occurs most often for solo performers, who are currently forced to break the music’s rhythm and their concentration in order to flip these pages. Because this problem is most important for live performances, the solution must be discreet, and not disrupt the performance (quiet, be able to detect a single user’s instrument even when playing with accompaniment, and function even if performer does not follow music exactly).  Currently, there are some mechanical page-turners implemented primarily for organ players that respond to mechanical input (pushing pedal), but the process has not been completely automated.
Why the chosen platform/technology?
An Arduino-based solution would be ideal for this type of problem. Beforehand, the Arduino must have access to an online version of the sheet music, from which it can read in the notes and learn where the page breaks are. The sound played by the musician would then be detected by a microphone connected to the Arduino, so that the Arduino could detect the different pitches and speeds of the notes played. It would then compare each detected note to the sheet music stored in its online database and identify where the user currently is in the music. When the user reaches a note near the end of the current page, it will automatically turn the page for the musician. The Arduino is a good platform for these functionalities, specifically for powering the motor that mechanically flips pages and for detecting analog inputs from sound changes.
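As a proof of concept of the sensing step, a sketch along the following lines (with placeholder pin, midpoint, and window values) could estimate the frequency of the note being played by counting zero crossings of the microphone signal; a real score-following system would need far more robust pitch detection than this.

// Hypothetical pitch-estimation sketch: count zero crossings of the
// microphone signal over a fixed window to approximate the frequency
// of the note being played.
const int MIC_PIN = A0;               // analog microphone input (assumed wiring)
const int MIDPOINT = 512;             // ADC midpoint for a biased mic signal
const unsigned long WINDOW_MS = 100;  // length of each sampling window

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long start = millis();
  int crossings = 0;
  bool wasAbove = analogRead(MIC_PIN) > MIDPOINT;
  while (millis() - start < WINDOW_MS) {
    bool isAbove = analogRead(MIC_PIN) > MIDPOINT;
    if (isAbove != wasAbove) {
      crossings++;
      wasAbove = isAbove;
    }
  }
  // Two zero crossings per cycle, so frequency is roughly
  // (crossings / 2) cycles per window.
  float freqHz = (crossings / 2.0) * (1000.0 / WINDOW_MS);
  Serial.println(freqHz);
  // Comparing freqHz against the expected notes near a page break would
  // tell the page-turning motor when to flip.
}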