Do You Even Lift? - P5

a. Group Information

Group 12
Do You Even Lift?

b. Group Names

Adam, Andrew, Matt, Peter

c. Project Summary

Our project is a Kinect-based system that watches people lift weights and gives instructional feedback to help them lift more safely and effectively.

d. Project Tasks

Our first task is to provide users feedback on their lifting form. In this prototype, we implemented this in the “Quick Lift” mode for the back squat. Here, the user performs a set of exercises and the system gives them feedback about their performance on each repetition. Our second task is to create a full guided tutorial for new lifters. In this prototype, we implemented this in the “Teach Me” mode for the back squat. Here, the system takes the user through each step of the exercise until the user demonstrates competence and is ready to perform the exercise in “Quick Lift” mode. Our third task is to track users between sessions. For our prototype, we implemented this task using the “Wizard of Oz” technique for user login. Our intention is to allow users to log in so that they can reference data from previous uses of the system.

e. Changes in Tasks

We originally intended to allow users to log in to a web interface to view their performance from previous workouts. We thought there were more interesting interface questions to attack with the Kinect, though, so we decided to forgo creating the web interface for our initial prototype. For now, we will use “Wizard of Oz” techniques to track users between sessions but intend to develop a more sophisticated mechanism in further iterations of the system prototype.

f. Revised Interface Design Discussion

We revamped the design of the TeachMe feature to include user suggestions. Nearly all of the users who tested our system suggested that an interactive training portion would be useful, as opposed to the step-by-step graphics we had originally incorporated. We have implemented the interactive TeachMe feature and are very pleased with how it turned out. The previous testing was very helpful in reimagining the design.

Other small changes were made as well. We changed much of the text throughout the system, again based on feedback provided by the users. For example, we switched the text to read “Just Lift” instead of “Quick Lift,” which we believe better describes the functionality offered by the accompanying page. Similarly, we changed “What is this?” to “About the system,” again giving a clearer description of what the corresponding page does.

Our original sketches can be seen here: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2/

And our new interface can be seen below.

Updated Storyboards of 3 Tasks

Just Lift

1) OH YEAH! Squats today!
2) Haven’t lifted in a while…I hope my form’s OK…
3) Guy: WOAH!, System: Virtual Lifting Coach
4) Guy: Wow! It’ll tell me about my form!, System: Bend Lower!
5) How great!! (victory stars appear everywhere to help our friend celebrate.)

 

Teach Me

1) I want to lift weights!
2) But I don’t know how 🙁
3) And everyone here is soooo intimidating….
4) Guy: WOAH! System: Learn to Lift!
5) What a great intro to lifting!

 

User Tracking

1) January (some weight)
2) February (more weight)
3) March (even more weight)
4) If only I knew how I’ve been doing…
5) Guy: Just what I needed! System: Your progress

 

Sketches For Unimplemented Portions of the System

We still have to implement a mechanism for displaying a user’s previous lifting data. In our prototype we implemented login using “Wizard of Oz” techniques. Once the user is logged in, though, we intend to have a “History” page that would display summary data of a user’s performance in a graph or in a table like the one below.
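For illustration, such a table might contain entries like these (the numbers are made up):

    Date        Exercise      Sets x Reps   Weight   Worst Error
    3/4/2013    Back Squat    3 x 5         135 lb   leaning forward
    3/11/2013   Back Squat    3 x 5         145 lb   none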


Additionally, we would like to go even further with the idea of making the “Teach Me” feature of the system more of a step-by-step interaction. We found in our personal tests of the system that it was not very fun to read large blocks of text. We want to present users with information in small bits and then have them either wave their hand to confirm that they read the message or have them demonstrate the requested exercise to advance to the next step of the tutorial.

System gives user a small bit of information. User waves hand after reading.

System tells user to do some action 3 times. User does that action 3 times.

After doing the action 3 times, user waves hand to go to next step of tutorial.
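To make this flow concrete, here is a small Python sketch of the tutorial loop we have in mind. The wave detection and rep checking are hypothetical stand-ins for the Kinect skeleton checks, simulated here with keyboard input; none of this is our final implementation.

    # Sketch of the proposed step-by-step "Teach Me" flow.
    TUTORIAL_STEPS = [
        ("Rest the bar across your upper back.", None),
        ("Squat until your thighs are parallel to the floor.", "squat"),
        ("Drive through your heels to stand back up.", "squat"),
    ]

    def wait_for_wave():
        # Real version: detect a hand-wave gesture in the skeleton stream.
        input("(wave hand -- press Enter to simulate) ")

    def watch_one_rep(action):
        # Real version: check joint angles from the Kinect skeleton stream.
        return input("perform one " + action + " (y if correct): ") == "y"

    def run_tutorial():
        for text, action in TUTORIAL_STEPS:
            print(text)                  # one small bit of information
            wait_for_wave()              # user waves to confirm they read it
            reps_done = 0
            while action and reps_done < 3:
                if watch_one_rep(action):
                    reps_done += 1       # advance only after 3 correct demos
        print("Tutorial complete -- try Just Lift mode!")

    run_tutorial()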

g. Overview and Discussion of New Prototype

As described above, we radically changed the TeachMe section of the prototype. We originally implemented a presentation-style interface, with static images accompanied by text that the user read through. In the second revision of the prototype, we adopted a much more interactive style, where the steps are presented one at a time. After the presentation of each step, the user is then allowed to practice the step, and cannot progress until they have perfected their technique. We believe this style is more engaging as well as more effective, and it has the added benefit of not letting users with poor technique (whether through laziness or lack of understanding) slip through the cracks. This emphasis on developing strong technique early on is important when safety is a factor; users can be severely injured if they use incorrect technique.

We have also implemented the Just Lift feature. The interface has been refined from the previous iteration to the current one, including streamlined text. We are pleased with how the interface has developed. The system automatically tracks and grades the user’s reps, with sets defined by when the user steps out of the camera’s view. There was no precedent for this interface decision, but we believe we made a fairly intuitive choice: after watching how people lift in Dillon, we noticed that they frequently step aside to check their phone or get a drink after a set.
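A minimal sketch of that set-splitting rule, assuming the skeleton tracker gives us a per-frame flag saying whether a user is in view (the flag, the frame list, and the rep grades below are placeholders, not our actual data):

    # A new set starts each time the user re-enters the camera's view.
    def segment_sets(frames):
        """frames: iterable of (user_in_view, rep_grade_or_None) pairs."""
        sets, current = [], []
        was_in_view = False
        for user_in_view, rep_grade in frames:
            if user_in_view and not was_in_view and current:
                sets.append(current)      # user stepped back in: close old set
                current = []
            if user_in_view and rep_grade is not None:
                current.append(rep_grade) # record each graded rep
            was_in_view = user_in_view
        if current:
            sets.append(current)
        return sets

    # Two reps, the user steps away, then one more rep -> two sets.
    print(segment_sets([(True, "good"), (True, "fair"), (False, None),
                        (True, "good")]))   # [['good', 'fair'], ['good']]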

As described above, we decided not to implement the web interface, and instead adopted an advanced user login feature. The web interface, because it required a new modality (web), would have required a lot of groundwork for a relatively small return. Additionally, we believe the user login feature (OCR recognition of gym card or prox) offers the chance to explore significantly more interesting and novel interface decisions than the web-based interface would allow. We will include a “History” page in the Kinect system to take the place of the web interface and allow users to see summary data of previous workouts.

For now, the OCR user login will be implemented using Wizard of Oz techniques. We are investigating OCR packages for the Kinect, including open source alternatives. Again, this is a feature that’s a prime candidate for Wizard of Oz techniques: it provides a rich space to explore what “interfaces of the future” might look like, keeping in line with the spirit of the Kinect. OCR packages exist, but they may be difficult to connect with the Kinect. Until we have time to further explore the subject, the Wizard of Oz technique is a great way to explore the interface possibilities without being bogged down by the technology. In addition to login, our audio cues were generated with Wizard of Oz techniques: we typed statements into Google Translate and had them spoken aloud to us. We “Wizard of Oz-ed” this functionality because we believed it was non-essential for the prototype, believe that audio cues will not be too difficult to implement with the proper libraries, and wanted to tie some of the audio cues in with the system login, which is also yet to be implemented.
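Once we move past Wizard of Oz, the spoken cues could probably come from an off-the-shelf text-to-speech library. As a minimal sketch, here is how that might look with the pyttsx3 Python package (one candidate we could try, not a committed choice):

    # Speak a few form cues aloud with pyttsx3 (pip install pyttsx3).
    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)    # slow the voice down slightly
    for cue in ["Good job!", "Keep your back straight!", "Bend lower!"]:
        engine.say(cue)                # queue each cue
    engine.runAndWait()                # blocks until speech finishes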

References/Sources

To get ourselves set up, we used the sample code that came with the Kinect SDK. Much of this sample code remains integrated with our system. We did not use code from sources outside of this.

h. Video and Images of prototype 

System Demo

*Note: in the video we refer to “Just Lift” as “Quick Lift.” This is an error; we were using the old name, which has since been updated based on feedback from user testing.

The first screen the user sees: Select an Exercise

The user then chooses between “Just Lift” and “Teach Me.”

“Teach Me” takes the user through a step by step tutorial. Instructions update as the user completes tasks.

The “Just Lift” page keeps track of each of the user’s repetitions. For each repetition, it dynamically adds an item to the list with a color corresponding to the quality of that repetition. When the user clicks each repetition, the interface displays feedback and advice to help the user improve. When the user exits and re-enters the view of the camera, the system creates a new set.
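For illustration, the grading rule behind those colors might look like the sketch below; the error names, 0-10 severity scale, and thresholds are hypothetical placeholders rather than our tuned values.

    # Map a repetition's detected errors to a list-item color plus advice.
    ADVICE = {
        "shallow_depth": "Bend lower! Aim for thighs parallel to the floor.",
        "rounded_back": "Keep your back straight to protect your spine.",
        "knees_inward": "Push your knees out over your toes.",
    }

    def grade_rep(errors):
        """errors: dict of error name -> severity (0-10). Returns (color, advice)."""
        worst = max(errors.values(), default=0)
        if worst < 3:
            color = "green"            # clean rep
        elif worst < 7:
            color = "yellow"           # minor form break
        else:
            color = "red"              # serious form break
        advice = [ADVICE[name] for name, sev in errors.items() if sev >= 3]
        return color, advice

    print(grade_rep({"shallow_depth": 8, "rounded_back": 2}))
    # -> ('red', ['Bend lower! Aim for thighs parallel to the floor.'])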

Do You Even Lift? - P3

Group Number and Name

Group 12 — Do You Even Lift?

First Names of Team

Andrew, Matt, Adam, and Peter

Mission Statement

We are evaluating a system designed to monitor the form of athletes lifting free weights and offer solutions to identified problems in technique.

Some lifts are difficult to do correctly, and errors in form can make those lifts ineffective and dangerous. Some people are able to address this by lifting with an experienced partner or personal trainer, but many gym-goers do not have anyone knowledgeable to watch their form and suffer from errors as a result. Our system seeks to help these gym-goers with nowhere else to turn. In this regard, we want our prototype to offer an approachable interface and to offer advice that is understandable and useful to lifters of all experience levels.

Concise Mission Statement: We aim to help athletes of all experience levels lift free weights with correct technique, enabling them to safely and effectively build fitness and good health.

Member Roles: This was a collaborative process. We basically worked together on a shared document in the same room for most of it.

Adam: Compiled blog, wrote descriptions of 3 tasks, discussion questions…

Andrew: Drew tutorial and feedback interface, mission statement, narrated videos…

Matt: Drew web interface, took pictures, mission statement…

Peter: Mission statement, discussion questions, filmed videos…

Clear Description of the Prototype w/ Images

Our prototype is a paper prototype of the touch screen interface for our system. Users first interact with our paper prototype to select the appropriate system function. They then perform whatever exercise that system function entails. Finally, users receive feedback on their performance.

We envision our system consisting of a Kinect, a computer for processing, and a touch screen display. Our touch screen display will be the only component with which users physically interact. If we do not have a touch screen computer for our prototype, we will substitute an ordinary laptop computer.

This is our proposed startup page. From this page, users can select the exercise which they are about to perform. They also have the option to click the “What is This?” button which will give them information about the system.
After selecting an exercise, users can enter either “Quick Lift” mode or “Teach Me” mode. In “Quick Lift,” our system will watch users lift weights and then provide technical feedback about their form at the end of each set. In “Teach Me” mode, the system will give the user instructions on how to perform the selected lift. This page of the display will also have a live camera feed to show users that the system is interactive.

In the top right corner of the display, users can also see that they have the option to log in. If they log in, we will track their progress so that they can view it in our web interface and so the system can remember their common mistakes for future workouts.
In “Quick Lift” mode, users have the option of receiving audio cues from our system (like “Good Job!” or “Keep your back straight!”). Users will then start performing the exercise (either receiving audio cues or not). Once they are finished with a set, we will show a screen like the one below. On the screen we will show users our analysis of each repetition in their previous set of exercises. We will highlight their worst mistakes and will allow them to see a video of themselves in action. This screen will also allow them to see their results from previous sets. Likewise, if a user is logged in, this information will be saved so that they can later reference it on a web interface.

If a user selects “Teach Me”, they are taken to a screen like the one below. This screen gives a text description, photos, and a video of the exercise. After reading the page, the user can press the “Got it!” button. The system will then encourage the user to try the exercise themselves using the unweighted bar. After the user successfully performs the exercise a number of times, the system will prompt the user to try that exercise in “Quick Lift” mode.

The picture below is our web interface. Here, workout data is organized by date. By clicking on a date, users can unfold the accordion-style menu to view detailed data from their workout such as weight lifted, number of sets and repetitions, and video replay. Users can filter the data for a specific exercise using the search bar at the top. Searching for a specific exercise reveals a graph of the user’s performance on that exercise.


Descriptions of 3 Tasks

Provide users feedback on their lifting form

We intend the paper prototype to be used in this task by having users select an exercise and then the “quick lift” option. The user will then navigate through the on screen instructions until he or she has completed the exercise. When the user performs the exercise, we will give them feedback by voice as well as simulate the type of data that would appear on the interface.

The opening page for “Quick Lift.” The Kinect is watching the user and will give feedback when the user begins exercising.

The user begins exercising.

As the user exercises, the side menu accumulates data. A live video of the user is displayed in the video pane.

After the lift, users can navigate through data from each repetition of their set of exercises. The text box tells users their errors, the severity of the errors, and explanations as to why those errors are bad.

Track users between sessions

We intend the paper prototype to be used in this task by having a user interact with the web interface. First, a user will see the homepage of the web interface. The user will then click through the items on the page and the page elements will unfold to reveal more content.

The web interface is a long page of unfolding accordion menus.

In the web interface, users can look back at data from each set and repetition and evaluate their performance.

Accordion menus unfold when the user clicks to display more data.

Create a full guided tutorial for new lifters

We intend the paper prototype to be used in this task by having users select an exercise and then the “teach me” option. The user will then navigate through the on screen instructions until he or she has read through all the instructive content on the screen. Then, when the user is ready to perform the exercise, he or she will press the “got it!” button.

The user selects the “teach me” option

The user can go through the steps of the exercise to see pictures and descriptions of each step.

Discussion of Prototype

i. How did you make it?

We made the prototype by evaluating tasks users perform with the system and coming up with an interface to allow for the completion of those tasks. It made the most sense for us to create a paper interface that would display the information users would see on the display monitor. The idea is that users would use the paper interface to interact with the system as they would with the touch screen, and we would use our own voice commands to provide users with audio feedback about their form if they wanted it.

ii. Did you come up with any new prototyping techniques to make something more suitable for your assignment than straight-up paper? (It’s fine if you did not, as long as paper is suitable for your system.)

We used a combination of the standard paper prototype with “Wizard of Oz” style audio and visual cues.

iii. What was difficult?

It was difficult to compactly represent the functionality offered by the Kinect in a paper prototype. As described above, we were able to partially account for this by adopting “Wizard of Oz” style audio and visual techniques. However, our system relies on users taking advice from a computer, and it was difficult to test how receptive a user would be to our advice.

iv. What worked well?

We think our paper prototype interface is pretty intuitive and makes it easy for users to choose the functionality they want. The design seems pretty self explanatory which is especially helpful when new users interact with the system. We also were pleased with the methods we chose to give users realtime feedback without distracting them from their lifts.

A3 – Score


Names: Adam Suczewski, Matt Drabick, Eugene Lee, Green Choi
I. Usability Problems

  • “Other Academics”
    • This seems like a violation of “recognition rather than recall.” This principle states that the interface should minimize the user’s memory load by making information visible. The “Other Academics” tab contains some of the most important information on the page, yet it is extremely difficult and unintuitive to find.
    • One way to fix this would be to rearrange the home page to showcase the most important information. Making important links for grades and enrollment more prominent would improve discoverability.
  • Information overly pagified
    • This is a severe Efficiency of Use issue: information is broken up into too many pages, and in a non-intuitive way. It seems like information such as grades or quintile information would intuitively be found on the “My Academics” tab. Instead it is a link on the enrollment -> term information tab. The ‘term information’ subtab is not visible when the user is on the “My Academics” tab, where they would expect to find such information.
    • Tabs could be merged together to display more information at once. For example, the ‘My Academics’ tab is just a single page, while ‘Enroll’ has several subtabs. Therefore, all of these tabs could be placed in one row, with My Academics highlighted to show its greater importance
    • Term information and My Academics could also be merged together. They are closely related by purpose.
    • While minimalism is important, it is equally important to not hide relevant information
  • Favorites doesn’t work
    • When you click the ‘add to favorites’ button (located in two places) on a subpage of the Student Center, it adds the Student Center main page to favorites.
    • Making pages like ‘My Grades’ linkable from favorites would be a quick way to increase navigation efficiency without changing the underlying design.
  • “My Class Schedule”
    • The user is not given access to their own course schedule, while the front page shows a list of classes and times.
    • This seems to violate H9: Help users recognize, diagnose and recover from errors
    • It seems like the only way to fix this would be to actually allow access to the class schedule it shows during an enrollment period.
  • Enrollment appointments
    • When you click “View my enrollment dates” under “Term Information” in “Enroll,” it displays the error message “You do not have access to enrollment at this time.”
    • This would again be a violation of H9: Help users recognize, diagnose and recover from errors
    • Instead it could actually tell you your valid enrollment dates or direct you to a Princeton website that shows when you can enroll in classes.

 

  • Lack of User Control and Freedom/Recovering from Errors
    • SCORE very frequently lacked simple, intuitive options for returning to previous menus or the main “Student Center” menu, especially when user interaction flow came to a halt in the case of errors or invalid requests.
      • This could be easily solved with simple “Return” or “Back” buttons on error screens or halts in user interaction stages.

II. Nielsen’s Heuristics

– The Help and Documentation problem was made much easier to find by Nielsen’s heuristics. We would probably not have thought to analyze this aspect were it not for Nielsen’s guidelines. We are all very accustomed to seeing unclear error messages on Score, so we may not have second-guessed them.

 

– Nielsen’s heuristics on Error Prevention as well as Recognition, Diagnosis, and Recovery in handling errors addressed issues that we might easily have overlooked, such as the ease of recovering from an error or issue on the site and the ease of returning to normal interaction.

III. Non-Nielsen Problems
Privacy of information did not seem to really fall under any of the 10 heuristics. It seems like sensitive information (like SSN) was displayed on pages where the user would not expect that type of information to be displayed.

Likewise, the fact that Score is closed every night between 2am and 7am is something a user might expect from a convenience store, not a website.

The site often has trouble allowing users to log in, giving the error message: “You are not authorized to log in to PeopleSoft HCM/CS”. This doesn’t seem to fit into an exact category, as one expects a website to give them consistent results when they try to log in and they are authorized.

IV. Useful Heuristic Questions
– A discussion of the most common and most easily fixed heuristic problems may be useful. It could be worth compiling a sort of FAQ document for website projects, etc.
– A good final exam question may be to provide a screenshot of a website/app with some obvious and not-so-obvious heuristic errors, allowing students to highlight and explain them using Nielsen’s heuristics.
– Non-Nielsen problems may also be added for additional credit.
– A final exam question may be to provide improvable examples of interfaces and to ask for easy or creative ways to improve them given background knowledge of the user base and the nature of the service.

Links to our blogs:
Adam Suczewski: www.princeton.edu/~asuczews/A3.pdf
Green Choi: https://dl.dropbox.com/u/34418361/greenheuristics.pdf
Matthew Drabick: https://www.dropbox.com/s/8ozjibpzpdzq9y1/mdrabick%20-%20A3.pdf
Eugene Lee: http://www.princeton.edu/~eugenel/eugenel_A3.pdf

P2

GROUP NUMBER: 12
GROUP NAME: Do you even lift?
GROUP MEMBERS:

All of us observed people in Dillon Gym. We worked collaboratively on a Google Doc as we sat together in the same room, so we overlapped on basically every task.

Adam Suczewski: Final story board sketches, interface sketch descriptions, 3 task descriptions, interface text description, compiled blog…
Andrew Callahan: Drew final interface sketches, wrote problem overview, contextual inquiry, task analysis, rewrote many parts to improve coherence…
Matt Drabick: Drafted story boards and interface sketches, interviewed people in Dillon Gym, task analysis questions, interview transcripts…
Peter Grabowski: Interviews in Dillon Gym, task analysis questions, storyboard idea compilation…

PROBLEM AND SOLUTION OVERVIEW:

Many people lift weights to build fitness and good health. Some lifts are difficult to do correctly, and errors in form can make those lifts ineffective and dangerous. Some people are able to address this by lifting with an experienced partner or personal trainer, but many gym-goers do not have anyone knowledgeable to watch their form and suffer from errors as a result. We aim to solve this problem by having a computer with a Kinect watch trainees as they perform lifts, and point out errors in form and potential fixes.

DESCRIPTION OF USERS YOU OBSERVED IN THE CONTEXTUAL INQUIRY:

Our target user group is the gym-goer with an interest in lifting or learning how to lift. This seems like a valid choice, as we envision our system serving as an instructional tool for lifters of all skill levels.

We started out our contextual inquiry by going to the weight room in Dillon Gym and watching students lift weights, paying attention to how they and their friends monitored their form. Most people were at the gym for personal gains (i.e. they were not compelled to be there by a team). These personal gains varied among people but included goals like losing weight or building muscle mass. People ranged in apparent expertise from beginners to very advanced, and were in groups of 1-3 (i.e. some were alone). More details about users are given below in the CI Interview Descriptions.

CI INTERVIEW DESCRIPTIONS:

We conducted interviews with acquaintances that we encountered while lifting at Dillon Gym. We asked a short list of questions about their history in weightlifting, whether they went alone or in groups, and how they went about keeping their form correct. We found that people sometimes lifted on their own, or sometimes with friends. People find going with friends useful for motivation, for getting feedback on form, and for spotting in certain exercises. However, this comes with the downside of having to change weights more frequently and of finding a mutually agreeable time to meet.

People lifting with friends will sometimes get feedback on their form from the partner, depending on their relative expertise at the lift (as well as how vocal the partner is). This will usually come in the form of the friend giving cues in the middle of a set (“keep the back in!”) or more detailed feedback after the set is over, often with the friend attempting to demonstrate with their body what the problem was and how it should look instead. Trainees lifting by themselves do not get this feedback, and they self-report ignoring minor problems in form and noticing more severe problems only when they sense discomfort/pain.

The biggest difference we noticed was that people who lifted alone were much less likely to be concerned about form than those that went in groups. It might be that people lift in groups because they want to be careful about their form, and people who are less concerned just lift alone. It could also be that not having friends around nagging you about subtle problems leads people to just let subtle problems persist.

We also interviewed people in Dillon who do not lift but use the machines and cardio equipment. We were interested in asking these people why it is that they do not lift. We found that the main reasons were that they do not know how to lift, they are afraid of getting too big (particularly girls), or they find free weights intimidating. Most people said that they would lift if they had someone to teach them.

ANSWERS TO 11 TASK ANALYSIS QUESTIONS:

1. Who is going to use system?
People lifting weights will use the system. Lifters of all experience levels can use the system to provide cues and feedback while lifting, and people new to a specific lift could receive a full guided tutorial from the system on that lift. Lifters encountering the system will range from eagerly seeking and heeding the advice of the system to ignoring and even being annoyed by cues from the system (preferring their conception of how the lift should be executed). We need to strike a balance in presenting crucial information to lifters while noticing when they want the system to stay out of the way.

2. What tasks do they now perform?
Users can be split into two groups – those who lift alone and those who lift in pairs/groups or with a dedicated trainer. Users who are alone do not receive any feedback on their form, and will either ignore their form, or look at themselves in a mirror when available to check their form. Lifters in a group will sometimes receive cues from their friends when their form is flawed. However, having a partner is no guarantee of useful feedback – partners are observed and self-report sometimes being too inexperienced, distracted, or misinformed to help.

3. What tasks are desired?
We would like trainees to be able to confidently achieve good form and know when they’ve made mistakes, even if they’re lifting alone. We would also like these users to be able to track their performance over time in detail, including being able to watch video of themselves doing a set from any point in the past.

4. How are the tasks learned?
Currently, our potential user receives instructions from a knowledgeable trainer, who will demonstrate a lift and then provide feedback about their form. Personal trainers are often very expensive, so users sometimes have friends teach them lifts. The friends might not have perfect form or be very critical of the user, so bad habits can develop from the beginning. Our system display will provide instructions for the user. Users will follow the prompts from the system to select the exercise they want to perform. The system will give accurate feedback and keep track of it between sessions. Lifters often learn from others to keep track of their lifts in a notebook or on a website.

5. Where are the tasks performed?
The lifts we are focusing on are usually performed in a school, team, or commercial gym. Lifts are performed in various dedicated stations in the gym, and are usually done with few interruptions. We could have a system at each station dedicated to the lift, or place one or more systems on a movable cart that the user could position. Lifts can also be done in the home, if the user has the right equipment. Our system will be an addition to their home gym set-up.

6. What’s the relationship between user & data?
The system will collect data from the user’s lifts, including their repetitions, weights, date and time of workout, as well as any flaws in the user’s form. The system (if the user elects to pay for an account) will upload this data to a companion site and provide a detailed record of their history and flaws, as well as allowing the user to watch a wireframe replay of their lifts.
Privacy may be a user concern, although information about users’ weight-lifting habits is certainly less sensitive than their health (HIPAA) or education (FERPA) data. Of course, there are always exceptions (such as professional weightlifters, who may want to keep their training data private), but a simple username/password system with basic encryption should provide more than enough security for online access. A more basic approach might be to have users log into the kiosk by holding their gym card up to the camera (combined with face matching). Users can share their data with other users at their discretion.

7. What other tools does the user have?
Users currently have few options available to them for acquiring reliable, high quality feedback on their weightlifting form. Methods include watching themselves in a mirror (although the very process of twisting their head to watch may negatively affect form). Users can also ask peers for feedback, although as mentioned above, users may be hesitant to ask strangers for help. Finally, users can think about their own body mechanics, although this method is far from accurate. The user can also take notes about their lifts and keep track of those as well as their reps and weights. Several applications make that easy, such as Fitocracy, which has additional space for the user to enter notes relevant to the workout, although Fitocracy does not monitor your form.

8. How do users communicate with each other?
Many users go alone, in which case it’s unlikely they communicate with anyone else. From time to time, one user may ask another to spot them during a set, but it’s very rare for one user to ask a stranger to provide feedback on their form. If users go with a partner, they’ll occasionally provide spoken feedback to one another, either during or after a set. However, this feedback is of unknown quality. Users may also engage with trainers, whom they pay to provide feedback. In this case, the trainer provides frequent spoken feedback of high quality after every set, but the service is very expensive.

9. How often are the tasks performed?
As often as the user goes to the gym to lift. This could be anywhere from once a week to every day. Our “Just Lift” mode addresses those users who are in a rush and allows them to get in and out of the gym quickly, while still identifying major flaws and providing feedback. Our “Teach Me” mode provides more feedback to those users who need it, whether they use the system more infrequently or have more time to spend at the gym. Users can switch between each mode seamlessly, allowing them to pick the one that best suits that day’s needs. Users might look back over old workouts every month or two in order to decide how to adjust their workouts or to make a whole new workout plan. This process might take 15 to 20 minutes or as much as a few hours depending on how focused the user is on lifts.

10. What are the time constraints on the tasks?
As long as the user wants to spend at the gym. There are no set time constraints across all users, but each individual user may have their own constraints depending on their schedule. An average session at the gym is about 90 minutes, although this could range anywhere from 30 to 120 minutes depending on the user. Frequent constraints seen among users are needing to get to work/class on time (if lifting beforehand) or not wanting to get home too late (if lifting in the evening). As a result, the same task could be hurried or could wait, depending on the individual user’s time frame. There’s no timing relationship between tasks — users pick one of the available tasks and complete it in their preferred order.

11. What happens when things go wrong?
Serious injuries across the entire body are some of the more grave potential problems, but bad form can also lead to reduced performance in the lift. The only backup system would be a spotter who can “rescue” the lifter if they cannot complete the lift by helping them drop the weight safely. This is especially important in a bench press, where a spotter can stand above and take some of the weight off the lifter. In a squat, the lifter is more responsible for being able to drop the weight and step away if necessary.

DESCRIPTION OF 3 TASKS:

Our first task is to provide users feedback on their lifting form. We will do this by capturing their lifts with a Kinect, processing the data associated with their movements, and outputting the result. We expect this to be moderately difficult, but we are confident that we will be able to figure the Kinect out and build an accurate, useful device.

Our second task is to track users between sessions. The idea here is that users will be able to log in by holding their ID card up to the Kinect camera. The system will then associate that session’s data with that user so it can track lifting history. Users who log in will likewise be able to log in to a web interface at home and view their lifting data. We expect this to be challenging but believe that getting the core functionality down should not be a problem. It may be hard to develop our entire system and then build a web interface on top of it, but it should not be a problem to incorporate some sort of user recognition/history into the system.

Our third task is to create a full guided tutorial for new lifters. Here, we plan to show the user pictures, videos, and text descriptions of the exercise. We will then encourage the user to try the lift themselves while we monitor their movements with the Kinect and provide realtime feedback. After implementing the first task, we do not foresee too much difficulty with this one. It seems to only involve creating instructional content as well as creating a user experience better suited to a first-time lifter.

Details of the implementations of these 3 tasks are described below.  

INTERFACE DESIGN:

Text Description:

Our system is an implementation of the 3 tasks stated above. Using a touch screen display, users will choose to either get feedback on their lifts or learn how to lift. Likewise, they will choose whether or not we keep their data for future access by choosing whether or not to log in. Once they have selected what they want to do, they will either perform their lifts or follow our tutorial on how to lift. This is the core functionality and the scope of the system. The benefit of this system is that we intend for it to give the kind of advice people typically get from a personal trainer. By providing users with this advice, we can help them maximize their health by maximizing their workouts and helping them avoid injuries associated with bad form. There do not seem to be any similar automated systems in existence. While our system may not initially have the credibility of a human trainer, it has the advantage that it is always available to any person using the piece of equipment it is integrated with, gives objective feedback, and tracks user progress.

Story Boards:

1) Monday? More Like Squat Day!
2) Squats! All Right!
3) How’d I Do?
4) Monitor: Good… but you look like you’re leaning back a bit
5) Ahhhh. Thanks.
6) I’ll nail it in the next set. (Next set starts in 1:29).

1) Bob Here!
2) Kinect: Woah! You’re leaning back!
3) Later… What did I do today? Oh yeah! I had sloppy curls.
4) Better do my stretches!
5) Kinect: Hey Bob! Watch out for lean back on those curls today! Bob: Gosh! Thanks!
6) Kinect: Great Bob!

1) I want to lift but I don’t know how 🙁
2) Monitor: Learn to Lift!
Guy: What could it be?!
3) Woah! It’s teaching me!
4) 1 month later… I feel so fit! So healthy!

 

Sketches:

We envision our system consisting of a Kinect, a computer for processing, and a touch screen display. Our touch screen display will be the only component with which users physically interact. If we do not have a touch screen computer for our prototype, we will substitute an ordinary laptop computer.

This is our proposed startup page. From this page, users can select the exercise which they are about to perform. They also have the option to click the “What is This?” button which will give them information about the system.
After selecting an exercise, users can enter either “Quick Lift” mode or “Teach Me” mode. In “Quick Lift,” our system will watch users lift weights and then provide technical feedback about their form at the end of each set. In “Teach Me” mode, the system will give the user instructions on how to perform the selected lift. This page of the display will also have a live camera feed to show users that the system is interactive.

In the top right corner of the display, users can also see that they have the option to log in. If they log in, we will track their progress so that they can view it in our web interface and so the system can remember their common mistakes for future workouts.
In “Quick Lift” mode, users have the option of receiving audio cues from our system (like “Good Job!” or “Keep your back straight!”). Users will then start performing the exercise (either receiving audio cues or not). Once they are finished with a set, we will show a screen like the one below. On the screen we will show users our analysis of each repetition in their previous set of exercises. We will highlight their worst mistakes and will allow them to see a video of themselves in action. This screen will also allow them to see their results from previous sets. Likewise, if a user is logged in, this information will be saved so that they can later reference it on a web interface.

If a user selects “Teach Me”, they are taken to a screen like the one below. This screen gives a text description, photos, and a video of the exercise. After reading the page, the user can press the “Got it!” button. The system will then encourage the user to try the exercise themselves using the unweighted bar. After the user successfully performs the exercise a number of times, the system will prompt the user to try that exercise in “Quick Lift” mode.

Assignment 2

1. Observations.

I conducted observations on Tuesday in the CS building room 102 at 9:50 am, Friday in the Friend Center Lobby at 10:50 am, and Monday at 1:20pm in Frist.

On Tuesday in CS 102, I observed people in Programming Languages before class started. I also spoke with people to ask what they were doing. My observations were:

  • Most people were quiet. When interviewed, the most likely reasons given were:
    • “It’s early.”
    • Students do not know each other, in part due to the grad/undergrad split of the class.
  • Many students were on their laptops checking email, performing other small tasks, or looking at our electronic textbook.
  • Many students were either texting, sending email, or reading the news on their phones

On Friday in the Friend Center Lobby, I observed traffic as it passed through and interviewed people about what they were doing at that time. Popular activities included:

  • Standing and talking with friends
  • Printing documents in the Friend center library
  • Using the bathroom
  • Checking texts/emails on phone

I also spoke with students and found that most had checked their phone for email/text in the last 10 minutes.

On Monday in Frist, I observed traffic and interviewed people about what they were doing. I also walked around the building to observe different parts. Common activities included:

  • Getting late meal (long line)
  • Picking up a package from the package center (long line)
  • Checking mailbox or looking up mailbox combination on computer.
  • Staff was cleaning full garbage cans
  • Checking texts/emails on phone
  • Working at tables upstairs

The outside of Frist was also crowded with pedestrian traffic.

From interviewing people and observing these 3 situations, I think that my interface should probably focus on the student who is in a hurry to do something between classes. It would likewise make sense for it to be a web, desktop, or mobile application, as most students use a computer or phone in the 10 minutes between classes. Speaking to students, I found that many are “trying to get little things done” or trying “not to be late for class.” Students felt like time between classes was “short,” so an application that helped people streamline their activities during this time would make sense.

2. Brainstormed Ideas:

  1. An attendance taking application using prox scanning to check students into a class before that class starts. Takes advantage of earliness so class time can be more productive.
  2. Flashcard application for teachers to learn students names.
  3. System of lights (turn signals, brakes, etc) for bikes to help bike users better communicate with pedestrians in crowded areas. Implement interface on handle bars of bike.
  4. Lights that notify people on bikes/skateboards when there are people around a blind corner (like near the pillars in front of Frist, or the corner of the Dinky waiting area building)
  5. Bike interface that lets you know how fast to go to get to class on time.
  6. Map services application mounted onto bike/skateboard to create a route to location that avoids stairs.
  7. Map services application to map routes like quickest, most scenic, least crowded, most squirrels, etc.. between buildings
  8. Touch screen interface in lobby of building for indoor map and room availability.
  9. Mobile application to quickly order “take out” from Frist or dining halls to save time if trying to grab food between class.
  10. Touch screen interface to quickly look up Frist mailbox lock combination.
  11. Mobile application to check printer status and release jobs from print queue remotely to save time printing before class.
  12. Resource finder application to find objects like printers, pencil sharpeners, staplers that actually work
  13. System for Frist late meal workers to monitor traffic in order to have more checkout stations during peak hours (which often coincide with the 10 mins between classes)
  14. Trash can sensors for facilities workers to respond to full trash cans at peak times between classes.
  15. Course-oriented chat interface to allow classmates to ask other classmates course-related questions (What’s today’s lecture about? Where did we leave off last time?…)
  16. Platform for “highlight video” of previous lecture.
  17. Web interface for instructors to post course-related articles or cool things to look at before class.
  18. Mobile based course related quiz game before class that gives you feedback on how well you know the material in that class relative to your classmates.
  19. Location-based interface to help introduce you to people with similar interests in your lecture before class starts
  20. To-do list interface that automatically opens applications that are needed to complete tasks (ex. “email Joe” on the todo list would automatically open a new Gmail message w/ Joe as the recipient). Application would proceed down the list sequentially.
  21. Application that learns the user’s computer behavior to suggest tasks.
  22. Mobile device plays music based on mood/weather
  23. Change music selections with interface on bike handle bars or by hitting a large button in your jacket.
  24. Application to allow for collaborative selection of room temperature / lighting before class starts.
  25. Application that generates ideas of things to do at night so students can make plans

Two Favorites:
Item 11, an application to check printer status and release items from print cluster remotely

Item 20, a to-do list that automatically opens applications that are needed to complete tasks.


3. Reasons for selecting each prototype:

-This is both a feasible and useful app that solves two persistent problems: Princeton students often are in a rush to print before class and printers are often unreliable.

– This app would be a straightforward way to help streamline the process of completing the “busy work” Princeton students often do between classes.

4. Photos and descriptions of prototypes:

Printer App

The Printer App allows students to remotely release print jobs when they are near a printer. This saves them time both in finding a working printer and in printing their documents. This application seems like it would be best implemented as a mobile app.

The main page of the remote printing app has 4 buttons: a “My Queue” button to view all of the items currently in your print queue; a “Printer List” button to view all of the printers nearby, listed in order of closest to farthest; a “Check Map” button to view printers in a map view; and a “Print!” button that will prompt you to select a document to print and a printer to print to.

The Main Page

Three landing pages that follow the main page

From the main page, the buttons for “My Queue”, “Printer List,” and “Check Map” correspond to one of the three drawings in the column on the right. From the “My Queue” page, users can select documents to print from their queue and delete documents from their queue. From either the list or map view of the printers, users can select which printers they want to print from. After the user selects a document in the “My Queue” page, the user is directed to the “Select Printer” page. After the user selects a printer from the “Printer List” or “Map” pages, the user is directed to the “Select Document” page.

A dialogue appears if the user clicks on a printer in the map

If a user clicks on a printer in the map, the user is prompted to print from that printer.

 

A user can select documents or printers from these menus.

Below are the two menus from which the users can select a printer or a document. The user reaches these pages after selecting “Print!” on the home page or by landing here after viewing the “My Queue,” “Printer List,” or “Map” page and selecting the options described above.

The success menu.

A display appears to notify the user that their document is successfully printing.


The user has various ways of getting from the main page to the “Printing” page depending on how they interact with the intermediate menus.

An overview of the application

To-Do List App

The main feature of the To-Do List app is that it helps students quickly navigate through the tasks that they typically complete between classes. When students add a task to the to-do list, in addition to the title, they must also give the name of the application needed to complete that task. For example, sending an email might use Mail or Thunderbird, while editing a document might use Microsoft Word. After selecting an application, the user must select the file or URL of the resource they wish to modify. For example, if the task involved editing a document in Microsoft Word, the user would enter the file path of the document. After creating a to-do list, the user presses “Start!”; the program then traverses the to-do list, automatically opening the necessary resources. The to-do list application maintains a small dialogue box in the corner of the user’s screen so that the user can move between tasks and can navigate back to the main page.
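A rough Python sketch of that “Start!” traversal (the example tasks, file paths, and platform commands are illustrative; this is not a real build of the app):

    # Walk the to-do list, opening each task's resource with the platform's
    # default handler, and wait for the user's "next" between tasks.
    import os, subprocess, sys, webbrowser

    tasks = [
        {"title": "Email Joe", "resource": "mailto:joe@example.com"},
        {"title": "Proofread HW", "resource": "hw.doc"},
    ]

    def open_resource(resource):
        if "://" in resource or resource.startswith("mailto:"):
            webbrowser.open(resource)              # URLs: browser/mail client
        elif sys.platform == "darwin":
            subprocess.run(["open", resource])     # macOS default application
        elif sys.platform.startswith("win"):
            os.startfile(resource)                 # Windows default application
        else:
            subprocess.run(["xdg-open", resource]) # Linux default application

    for task in tasks:
        print("Now starting:", task["title"])
        open_resource(task["resource"])
        input("Press Enter for the next task... ")  # the dialogue box's "next"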

The main page of the todo list app consists of a list of the user’s tasks, with the options to add an entry or to start the to-do list.

The main page

When users click the “New Entry” button, they are prompted with a form in which they fill in information about their task. The information requested is the title of the task, the application needed to complete the task, and the file or URL where the task can be accessed. Ideally, the file/URL entry section would update once the application section is filled in, so the to-do list knows which resource it needs.

A form to create a new task

To choose an application, the user selects from a drop-down menu. To choose a file/URL, the user either enters a URL directly or browses their file system to find the correct address.

A drop down menu appears to allow the user to select an application.
A file system browser allows the user to select a file.

From the main menu, the user also has the option to edit an entry in the to-do list. Likewise, certain local applications, like Mail, may not need a file/URL in order to start up.

This menu allows a user to edit an entry in the task list

Tasks Read: “Email Joe,” “Check Bank Account,” “Fill Out Survey,” “Proof Read HW”

When the user presses start, the correct application programs start. For example, the first drawing shows a new email window so the user can email Joe. The to-do list displays a small dialog box in the corner of the screen to remind the user that it is running and to allow the user to move to the next item in the list. Pressing the “next” button, for example, could automatically open Microsoft Word with the user’s homework so that they can edit it.

A view of the todo list application while the user is completing tasks.

5. Photos, descriptions, and detailed notes from user testing:

Test Structure: The structure of the test was that I presented the paper prototype to users telling them to interact with the paper as if it were a computer application. I tried to let them interact with it on their own at first, but offered help if help was needed. I started by giving them a card with a blank todo list queue. They then had to add tasks to the todo list and click “Start!” once tasks were added. I say they “had” to do this because this is really the only useful way to use the application.

1) Test 1

-Sarah clicking the “Application” button in the “New Entry” form.
-One common mistake was clicking “Start!” when the todo list was empty

For my first test, I demoed the application to Sarah. During her demo, the first thing she did was hit the “Start!” button. I then told her that the “Start!” button does nothing because there is nothing in her todo list queue. She then clicked the “New Entry” button. This brought up the new entry form. She then entered the title of the task, the application, and the file path. After entering tasks, she was directed back to the todo list queue. She then clicked the “Start!” button. She was slightly confused when the email interface popped up, but she completed her first task and clicked the “next” button. The second task then appeared and she clicked the “next” button. The simulation ended.

2) Test 2

-Mike attempting to start the to do list by clicking an individual task

For my second test, I demoed the application to Mike. Like Sarah, the first thing he did was hit the “Start!” button. I also told him that the “Start!” button does nothing because there is nothing in his todo list queue. After questioning the purpose of the application, he then clicked the “New Entry” button. This brought up the new entry form. He entered the title of the task and then entered the application. He asked about the “File/Url” field, for he was a bit confused as to why he would need to enter this information for a todo list.  He then entered the file path. After adding tasks to his queue, he was directed to the todo list queue. Instead of clicking “Start!” though, he clicked on an individual task, in an attempt to complete that task. Clicking an individual task, however,  does nothing. He then clicked the “Start!” button. His first task appeared, he completed the task, and he clicked the “next” button. The second task then appeared, he completed the task and he clicked the “next” button. The simulation ended.

At the end of the demo when I spoke with Mike he said he didn’t think he would personally use this application. He said he would “just do the task” rather than take time to add it to a list of small things to do later.

3) Test 3

Kyle looking over the main page of the todo list app

For my third test, I demoed the application to Kyle. Like Mike and Sarah, the first thing he did was hit the “Start!” button. I again told him that the “Start!” button does nothing because there is nothing in his todo list queue. He then clicked the “New Entry” button, which brought up the new entry form. He entered the title of the task, the application, and the file path/URL. Like Mike, he also asked why the File/URL is needed. After adding tasks to his queue, he was directed to the todo list queue. He then clicked the “Start!” button. Like Sarah, he wasn’t exactly sure why the email interface appeared, but after some direction, he completed the task. He did not, however, immediately find the “next” button. I pointed out the “next” button and he clicked it. The second task then appeared, he completed the task, and he clicked the “next” button. The simulation ended.

6. List of insights from testing.

  1. All 3 test users clicked the “Start!” button while the queue was empty
    1. My first thought to solve this problem is to eliminate the “Start!” button from the main page when the todo list is empty. I think this could work, although it’s possible that the word “Start!” could be confusing even when there are items in the list.
    2. A second solution would be to change the word “Start!” to “Execute To Do List”, or “Do!” or something more descriptive and straightforward than “Start!”.
  2. Users questioned why a file or url was necessary for a todo list
    1. To solve this problem, I think it would be helpful for the user to have more background information on the application. The user is supposed to go into this kind of test blind, so it might be helpful to add a more descriptive welcome page to the application to introduce its basic concept.
  3. Users were surprised when the page changed from the todo list queue to email when they pressed “Start!”
    1. Once again, I think it would be helpful for users to have more background of the purpose of the application. One idea would be to change the page sequence and make a “loading screen” appear when the user presses “Start!” that says something like “Now starting your todo list…opening HW.doc with Microsoft Word…”
    2. I think changing the word “Start!” to something more descriptive would help with this problem as well.
  4. Users had trouble finding the “next” button or needed me to point out the small todo list dialogue box in the corner.
    1. I think I could make the dialogue box more prominent so that users are aware of its presence and can use the “next” button to go to the next task.
    2. After the user clicks “Start!” and the proposed loading screen appears, I could have the message in the loading screen make note of the fact that a dialogue box will appear.
  5. One user clicked on an individual task in order to attempt to complete that task
    1. I think that I should make clicking individual tasks have some functionality. I think it makes sense that users should be able to sort tasks in the todo list by dragging them.
  6. Users questioned the purpose of the application
    1. Though the application was somewhat confusing on its own, when I described the application to users myself, they saw its use. I think I should find a way to work my description of the app into the interface itself. Maybe some progress bar or instruction set like “Add Entries -> Sort Entries -> Execute List!” on the main page would be helpful.
    2. I think the fact that the application was made of paper was unusual for my users. I think this may have made the app seem more strange.

In all, I think my app was successful in that users were able to navigate through it reasonably quickly and understood its purpose with a little help from me. The biggest problem is that it relies too much on me for explanation. The main goal of the revisions above is to make the application and interface explain themselves more clearly through better wording, button options, and page sequences.

Lab1

Group Members: Matthew Drabick, Adam Suczewski, Andrew Callahan, Peter Grabowski

High Level Description:

Access to inexpensive, portable medical equipment is a serious problem in many developing countries. There are currently solutions available, but many ignore the diagnosis of neurodegenerative disorders. We aim to provide a compact, easy to use package to aid in the diagnosis of such diseases using 3 basic tests.

First, our diagnostic suite includes a spirometer to measure lung capacity and expulsion rate. This is an important metric, for shortness of breath can be an indication of increased brain pressure, stroke, or tumors (Shortness of Breath).

Second, our diagnostic suite includes a measure of reaction time. Many patients with Parkinson’s disease have difficulty initiating movement, and as a result exhibit an increase in reaction time (Movement Disorders). This test can help diagnose some of those cases.

Third, our diagnostic suite includes a digital strength test. Difficulty with finger movements or strength can be an indicator of polyneuropathy (Physical Examination). Our strength test attempts to give an objective evaluation of finger strength.

Physicians or volunteers can use our simple device as a preliminary test for many neurological conditions. While the diagnosis is by no means certain, it can be a useful tool for identifying patients who are good candidates for further diagnostic testing. Below is an image of the system.

 

A look at the final product

 

Works Cited

Bozkurt, Biykem, and Douglas Mann. “Shortness of Breath.” Shortness of Breath. N.p., n.d. Web. 23 Feb. 2013.

“Movement Disorders.” American Association of Neurological Surgeons, n.d. Web. 23 Feb. 2013.

“Physical Examination: Diagnosis of Brain, Spinal Cord, and Nerve Disorders.” Physical Examination: Diagnosis of Brain, Spinal Cord, and Nerve Disorders: Merck Manual Home Edition. Merck, n.d. Web. 23 Feb. 2013.

Story Board:


High Level Design Process:

During the brainstorming stage we came up with a few ideas including:

-A system to control a lamp based on lighting in the room (turn the lamp off if the room is light, turn the lamp on if the room is dark)

-A system to detect different types of coins using an FSR.

-A flex-sensor-controlled Etch A Sketch that draws to a computer monitor.

We decided to go with the medical testing suite because it’s flexible, potentially useful, and uses a wide array of the resistive sensors in our arsenal.

Our initial design looked something like this:

The idea was to have the 3 tests stationed radially on a platform, with the user selecting a test using the pot in the center. After writing our code and implementing the tests on breadboards, though, we soon encountered difficulty with this setup: without soldering, we could not connect wires to our circuit components. To overcome this, we changed our design to something closer to the sketches below, using several breadboards. In this setup, we moved the pot off to the side and gave each test its own independent “station.”

 

The tests are spread out as far as possible using 3 breadboards. The pot is used to select the test.

Another design sketch from when we realized we needed to rearrange circuit components.

The reaction time test “station”

Though this implementation was not as smooth or intuitive as the one we imagined, we were satisfied with it given the materials we were working with. It took a bit of tweaking and trying different layouts to get our system in order, but in the end the system was structurally stable, functional, and reasonably easy to understand.

In addition to the physical layout of the testing system, we also had to work out the design of the tests themselves. We had the most trouble getting the spirometer working. Sometimes the spirometer would not register when a user started blowing; other times it would think the user had stopped blowing when they had not. To overcome these problems we tried directing the user’s breath through tubes, changing the position and width of the spirometer fan, and adjusting the delay right before the start of the test. In the end it took the right combination of all three to get the test working. Aside from the spirometer, we also had some difficulty detecting when the user is not touching the softpot. This was mitigated to a large degree by adding a 10K pull-down resistor between the output and ground, which pulls the reading near zero when nothing is touching the sensor.
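As a rough illustration of that detection logic, here is a minimal sketch (the pin number and cutoff value are illustrative; our full program below uses analog pin 1 and a threshold of 100 for the same purpose):

// With the 10K pull-down in place, a floating softpot wiper reads
// near 0, so touch detection reduces to a simple threshold test.
const int softpotPin = 1;   // analog pin the softpot output uses
const int touchCutoff = 50; // illustrative cutoff

bool fingerOnSoftpot() {
  return analogRead(softpotPin) > touchCutoff;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(fingerOnSoftpot() ? "touched" : "not touched");
  delay(200);
}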

Overall, we were pleased with the way the design process went. By testing a series of different layouts, we think we arrived at an interface that provides a straightforward method of interacting with the system. The project could use a little more polish to make it more intuitive for first-time users, but much of that polish would come with being able to solder and permanently install the components into some housing.

Technical Design Description:

Reaction Time Test:

In the reaction time test, the user places their finger in the center of a softpot. Then one of two LEDs, placed to the right and left of the softpot, turns on, and the user must move their finger in the corresponding direction as quickly as possible. If the user is quick enough and passes the test, the green feedback light flashes. If the user is too slow and fails the test, the red feedback light flashes.

Basic information on softpots can be found here: http://bildr.org/2012/11/touch-sliders-with-a-softpot-arduino/
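The full program below measures the reaction time by accumulating fixed delay() steps while polling; a millis()-based timer is a common alternative that avoids quantizing the measurement to the step size. A rough sketch of that approach (timeUntilBelow() is our illustrative helper, not a function from the program below):

const int slidePin = 1;             // softpot output, as in the program below
const int slideLeftThreshold = 100; // same constant as below

// Time how long until the softpot reading drops below a threshold.
unsigned long timeUntilBelow(int pin, int threshold) {
  unsigned long start = millis();
  while (analogRead(pin) >= threshold) {
    // poll until the finger reaches the target end of the softpot
  }
  return millis() - start;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // wait for a finger (the pull-down keeps an untouched softpot near 0)
  while (analogRead(slidePin) < 100) { }
  Serial.println(timeUntilBelow(slidePin, slideLeftThreshold));
}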

Spirometer:

In the spirometer test, the user blows on the flex sensor for as long as possible. Our device times how long the user can keep the flex sensor bent beyond some minimum threshold. If the user blows long and hard enough and passes the test, the green feedback light flashes. If the user fails the test, the red feedback light flashes.

Basic information on flex sensors can be found here: http://bildr.org/2012/11/flex-sensor-arduino/ 

Strength Test:

In the strength test, the user squeezes the FSR as hard as possible. The code below checks that the reading stays above a strength threshold for long enough (about three seconds); a variant that records the peak reading instead is sketched after the link below. If the user squeezes hard enough for long enough and passes the test, the green feedback light flashes. If the user fails the test, the red feedback light flashes.

Basic information on FSRs can be found here: http://bildr.org/2012/11/force-sensitive-resistor-arduino/
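For reference, here is a sketch of the peak-reading variant mentioned above (peakOverWindow() is our illustrative name; the pin number matches the program below):

// Alternative metric: capture the peak FSR reading over a fixed window,
// rather than timing how long the squeeze is held (as the code below does).
const int squeezePin = 3; // FSR, as in the program below

int peakOverWindow(unsigned long windowMs) {
  int peak = 0;
  unsigned long start = millis();
  while (millis() - start < windowMs) {
    int r = analogRead(squeezePin);
    if (r > peak) peak = r;
  }
  return peak;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(peakOverWindow(3000)); // 3-second squeeze window
}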

Test Selection:

The device detects which test the user has selected by reading the value from the pot. The range of pot values is divided into 3 sections, each of which corresponds to a test.

Basic information on a pot can be found here: http://www.arduino.cc/en/Tutorial/Potentiometer
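The same idea generalizes to any number of choices. A small sketch (zoneForReading() is our illustrative name and is not part of the program below):

// Map a 0-1023 pot reading onto one of numZones equally sized zones.
// This generalizes the hard-coded 3-way split in readPot() below.
int zoneForReading(int reading, int numZones) {
  return (int)((long)reading * numZones / 1024); // 0 .. numZones-1
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(zoneForReading(analogRead(2), 3)); // pot on analog pin 2
  delay(250);
}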

How to:

1) Install the FSR, pot, softpot, and flex sensor on a breadboard using the tutorials linked to above. Test each sensor by printing its value to Serial output (a minimal test sketch for this is shown after step 5).

2) Install 4 LEDs to be used in the tests. Write test code to make sure each one is wired correctly. (Refer to the Blink tutorial if you need help.)

3) Upload the code below, making sure the input pins of your sensors and LEDs correspond to the pins in the code.

4) Test the system. If everything is set up correctly, the system should work as it does in the video. If it does not work, check your wire connections. If all of your wires seem to be connected correctly, write Serial.print() statements in the code to help debug your system.

5) Make your interface more usable by constructing housing out of cardboard or other material, and adding any other features such as a fan for the spirometer.
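Here is the minimal sensor-check sketch referenced in step 1. The analog pin numbers are the ones used in the full program below; adjust them to match your wiring:

// Print every sensor reading so each one can be verified independently.
void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("flex: ");
  Serial.print(analogRead(0));   // flex sensor (spirometer)
  Serial.print("  softpot: ");
  Serial.print(analogRead(1));   // softpot (reaction test)
  Serial.print("  pot: ");
  Serial.print(analogRead(2));   // pot (test selection)
  Serial.print("  FSR: ");
  Serial.println(analogRead(3)); // FSR (strength test)
  delay(250);
}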

The Code:

typedef enum {
  LUNG_MODE,
  REACTION_MODE,
  SQUEEZE_MODE,
} game_mode_e;

game_mode_e gameMode;
int time_msec;

int redLedPin = 2;
int greenLedPin = 3;
int slideRightLedPin = 4;
int slideLeftLedPin = 5;

int flexPin = 0;
int flexReading;

int maxDelay = 30;
int delay_ms;

int maxFlexThresh = 315;
int flexTarget = 315;

int middleOfRT = 600;
int minRT= 100;
int slideLeftThreshold = 100;
int slideRightThreshold = 900;
int targetRT = 150;

int slidePin = 1;
int slideReading;

int potPin = 2;
int potMax = 1023;
int numChoices = 3;

int randChoice;

int squeezeReading;
int squeezePin = 3;
int maxSqueezeThresh = 400;
int squeezeTarget = 400;

// flash a given led
void flashLed (int ledNum) {
  analogWrite(ledNum, 0);
  delay(50);
  analogWrite(ledNum, 255);
  delay(200);
  analogWrite(ledNum, 0);
  delay(50);
}

void readPot() { 
  int potReading = analogRead(potPin);

  if (potReading < 340) {
   gameMode = LUNG_MODE;
  }
  else if (potReading < 681) {
    gameMode = REACTION_MODE;
  }
  else {
    gameMode = SQUEEZE_MODE;
  }
  Serial.print("Game Mode: ");
  Serial.println(gameMode);
}

void setup(void) {
  // We'll send debugging information via the Serial monitor
  Serial.begin(9600);
  gameMode = REACTION_MODE;
  flashLed(redLedPin);
  flashLed(greenLedPin);
  randomSeed(analogRead(0));
}

void loop(void) {
  readPot();
  switch (gameMode) {
  case LUNG_MODE:
    flexReading = analogRead(flexPin);
    Serial.print("Flex reading = ");
    Serial.println(flexReading);
    // detect blowing start
    while (flexReading > maxFlexThresh){
      Serial.println(flexReading);
      delay(50);
      flexReading = analogRead(flexPin);  
    }
    time_msec = 0;
    while (flexReading < flexTarget){
      delay(100);
      time_msec += 100;
      flexReading = analogRead(flexPin);  
    }
    if (time_msec < 1000){
      flashLed(redLedPin);
    } 
    else {
      flashLed(greenLedPin);
    }
    break;
  case REACTION_MODE:
    flashLed(slideLeftLedPin);
    flashLed(slideRightLedPin);
    slideReading = analogRead(slidePin);  

    // wait for them to put their finger in the middle
    while (slideReading < minRT){
      slideReading = analogRead(slidePin);
      flashLed(redLedPin);
    }
    // ready to go
    flashLed(greenLedPin);
    flashLed(greenLedPin);
    delay(500);
    // calculate random delay
    delay_ms = random(maxDelay) * 100;
    time_msec = 0;
    delay(delay_ms);
    randChoice = random(2);
    // based on previously calculated randomness
    if (randChoice == 0){
      flashLed(slideLeftLedPin);
      // while their finger isn't all the way on the left
      while (slideReading > slideLeftThreshold){
        delay(10);
        time_msec += 10;
        slideReading = analogRead(slidePin);  
      }
    } 
    else{
      flashLed(slideRightLedPin);

      // while their finger isn't all the way on the right
      while (slideReading < slideRightThreshold){
        delay(10);
        time_msec += 10;
        slideReading = analogRead(slidePin);  
      }
    }

    // if they beat the target time
    if (time_msec < targetRT){
      flashLed(greenLedPin);
      flashLed(greenLedPin);
    }
    else{
      flashLed(redLedPin);
      flashLed(redLedPin);
    }

    break;
  case SQUEEZE_MODE:
    squeezeReading = analogRead(squeezePin);  
    // detect squeeze start
    while (squeezeReading < maxSqueezeThresh){
      delay(50);
      squeezeReading = analogRead(squeezePin);
    }
    time_msec = 0;
    while (squeezeReading > squeezeTarget){
      delay(100);
      time_msec += 100;
      squeezeReading = analogRead(squeezePin);  

    }
    if (time_msec < 3000){
      flashLed(redLedPin);
    } 
    else {
      flashLed(greenLedPin);
    }
    break;

  default:
    Serial.println("Unsupported mode");
  }
  delay(250);
}

Do You Even Lift?

Team Name:

Do You Even Lift?

Group Members:

Adam Suczewski, Andrew Callahan, Matt Drabick, Peter Grabowski

Brainstorming:

  1. Live music aid — Music device that changes based on movement/attitude of the crowd. Uses Kinect to sense motion, adjusts lights/sound/fog accordingly.
  2. Skateboard odometer/speedometer to monitor travel data for the curious skateboarder. Communicates to a watch display, or alternatively to an LCD display on the board.
  3. Device to display airplane information (flight number, etc) by pointing to an airplane.
  4. Gesture based interface for surgeons in the OR — they can’t touch anything during the operation for risk of losing sterility, so a hands free interface for browsing patient records would be useful — also useful for other professions like butchers and potters
  5. Emergency condom delivery system — when you’re in an intimate situation where it may be inappropriate to get up and leave, send out a (private message) to a trusted contact that you need a special delivery.
  6. Display/Orb that indicates what club to go to based on where your friends are going. One dial (with marks for each of the clubs) that integrates GPS data from friends and points to the club with the most friends
  7. There are frequently used commands in Photoshop (any program, really). Create a pedal based interface mapped to these commands, allowing users to enter input with their feet as well and increase throughput
  8. Secret knock or password to open a door without a key or prox – motor pulls down on the door handle
  9. Arduino in a house hears fire alarm and shuts off the gas line
  10. Full, new and improved smoke detector. Chemical sensor/light trap detects smoke. Alerts security company, calls your phone, takes pictures inside for insurance company
  11. Kinect interface that allows you to make gestures to change the channel, up the volume, turn it on/off. Voice recognition for Netflix commands, etc.
  12. Hardware interface for convenient ordering from Seamless – does online ordering for a particular meal. One button for your favorite chicken parm, one button for sushi, etc.
    1. We could use mechanical turk to map orders to buttons so that the user doesn’t have to dig through the API – he can just say “2 slices pepperoni pizza from Dominos” and an anon does the hard work for 10 cents
  13. Automated shocking when user bites nails to help break bad habit. Could be kinect based, although that might be overkill. May be too hard to differentiate between “hand near face” and “biting nails”
  14. Device to wear while running that vibrates on your left or right side to indicate a turn coming up. Useful for novel routes — think about downloading other’s routes from a site like runkeeper and following them for the first time.
  15. Kinect/webcam at the gym watches you lift and identifies problems in your form
  16. Monitor lifting technique using force sensors in hands and feet to check if force is equal. Also use wrist watch with sensors on each leg/arm. Choose which exercise you are about to perform and get feedback
    1. Could integrate with Fitocracy or other account. Keep track of your progress
  17. Weight lifting equipment with sensors built in to check the balance of the bar. Sensors detect balance and movement of the bar itself to give feedback.
  18. Previous points assume you already know proper form and are looking to improve it. Kinect based system could teach you step by step, overlaying a video of your exercise with what a correct exercise would look like. Also uses voice cues.
  19. Help runners with running form using a treadmill with force sensors to detect foot striking. Display foot strike data on a monitor to help the runner’s technique.
  20. Detect when tired runners’ form degrades, using sensors on the legs and arms that can detect when they pass each other.
  21. Arduino detects when it’s too bright/dark in room and adjusts shades on windows. Useful for classrooms
  22. Arduino raises/lowers blinds to act as an alarm clock
  23. Arduino works as a physical thermostat by opening windows when the room is too hot and closing them when too cold. Could also turn fan on and off.
  24. Combine above three ideas for room maintenance system. Potential for cross communication — for example, it might be ok for the room to be dark and cold if you’re sleeping, but not during the day.
  25. System that photographs people entering lecture, builds real-time attendance list, and can select a random attendee to answer a question. Solves problem of tracking attendance (students can sign in other students on sign in sheet), as well as wanting to pick a random student to answer a question
  26. When you’re low on some household supply, scan your barcode on device, and it gets added to an online shopping list/shopping cart, such as fresh direct or amazon.
  27. “Smart Container.” Have plastic containers that can be filled with water, flour, etc. that automatically adds that ingredient to your shopping list when the container is less than a quarter full.
  28. Integrate the above two into automated kitchen. Containers and to-do list converse, to answer more complicated questions like “Do I have what I need to make cookies tonight?” or “What do I need to pick up at the store if I want to make chicken parm tonight?”
  29. Arduino locks your door while you’re busy programming (while you have an IDE open or while you’re typing code, etc). Or, more realistically, illuminates a “Do not disturb” light
  30. Single button to open IDE, enable warning light, and initialize brew of favorite coffee. Enter “programming mode”
  31. Arduino turns off toaster when toast is done. Can be done with a light sensor that detects the change in color as the toast browns, or a light trap/smoke sensor (depending on how dark you like your toast).
  32. Arduino listens while you make popcorn and turns off microwave when popcorn is done
  33. One better – Arduino uses netflix API to detect that you’re starting a movie, and makes popcorn for you.
  34. Automated smoothie making with arduino. Preset options to streamline process.
  35. Light controls for cyclists — button to signal left/right turns, as well as brake lights
  36. Arduino measures out ingredients for you in the kitchen — you enter in a recipe, and it dispenses the correct amount of flour, sugar, etc for you.
  37. “Weasley clock” — location of various family members on dial, if they’re in predefined locations
  38. Location sensors on valuable items, with dial indicator of where they are in the house
  39. Device analyzes tempo of sex and plays appropriate music
  40. Arduino resets your WRT-54G (router). Have an infrared or radio remote so you can do it from the comfort of your couch upstairs.
    1. Also keeps you from having to go under the desk or somewhere inconvenient.
  41. Device adjusts oven temperature based on meat thermometer and maybe color of steak
  42. Arduino power control for dishwasher. Monitors power grid data, and turns on dishwasher when power cost is lowest (i.e., middle of the night). Reduces peak (max) power usage, and more evenly distributes power usage over the course of the day.
  43. An Arduino based system that works as an air freshener that detects bad smells and activates a burst of delicious smell. Could be useful for public restrooms, and potentially more efficient than timer based systems.
  44. In a car, Arduino continually monitors the air coming through vents. If it detects bad air quality it rapidly shuts the vents. It could check for skunk or nasty north Jersey refineries or smelly truck brakes.
  45. Arduino brews tea for you by removing the teabag at the appropriate time. Useful if you need to brew many bags of tea at once, with specific brew times (consider Teavana or Infini-tea)
  46. Bartenders place bottles on LED stand. When they go to make a drink, the appropriate bottles light up. Decreases time required to find the bottles, increasing bar’s throughput. Especially good for new bartenders. Also looks pretty cool.
  47. Above could hook in with DJ system, for song-themed drinks. For example, when Margaritaville comes on, you could offer a half price special on margaritas for the duration of the song. “Flash sale” mentality would potentially increase drink sales. Bartending system could flash or otherwise indicate ahead of time, so the bartender could prepare for the imminent rush.
  48. Place microphones around room or other venue to sense volume. If speaker is too loud, lower volume levels. If speaker is too soft, arduino aims microphone to maximize sound captured (or raises volume levels). Could use tracking system to make sure microphone is pointed optimally at face
  49. Glove for conductor of an orchestra. They can’t leave the podium or speak while conducting, but may need to communicate during the performance with audio/house staff or parts of the orchestra. Glove has buttons to send predefined messages, or alternatively has a one handed keyboard system
  50. Washing machine status light (for the living room, kitchen, etc.). Make it easy to see if your permanent press load is finished, so it doesn’t wrinkle while sitting in the dryer.

Sketches:

IMG_2095

1) Live music aid
2) Skateboard odometer/speedometer watch
4) Gesture interface for messy hands
7) Computer foot pedals for commonly used commands.

 

IMG_2100

11) Gestures to change tv settings
36) Auto measuring of kitchen ingredients
41) Oven meat temperature adjuster

IMG_2098

Clockwise from top left:
29) Door lock when busy
32) Popcorn / Toast monitor
41) Auto adjust thermostat
45) Auto tea-brewing
46) LED bartender helper
Bike brake lights/ turn signals

IMG_2097

21) Window shade adjuster
25) Camera for attendance
26) Household barcode scanner
27) Smart containers

IMG_2096

Clockwise from top left:
6) Friend finding orb
9) Gas line cutoff in fire
13) Bad habit/nail biting breaker
20) Running form detector
15) Lifting technique monitor
12) Convenient food ordering
8) Secret knock to get in room

IMG_2099

17) Weight sensors in lifting equipment to check technique
19) Treadmill foot strike detector
42) Run appliances (dishwasher) at low times on power grid.

Short description:

Our first choice is a kinect-based trainer for the weight room. This device would use a camera and body-tracking APIs to identify problems in form and point out fixes. It could also provide a full tutorial on how to do the lift, useful for a complete novice. We like the idea because:

  • We can approach the problem from a few different areas
  • Kinect APIs should make it easy to watch the lifter and identify problems
  • The advice would be very useful to the end-user, as it would be a very inexpensive (free, probably) way to check your form and avoid mistakes that could cause injury
  • Commercial gyms are always looking to buy the latest gimmicky things


If we can’t get a Kinect through the course, our second choice is to try using a standard webcam-based system to the same effect, or to just spend all of our budget on a Kinect of our own.

If none of the above pans out, another project we are interested in is an Arduino-based window thermostat, ideal for dorm rooms. The device could open and shut the windows when the room is too hot or cold, and could roll up the shades in the morning when it’s time to wake up. We think this could be accomplished pretty efficiently and would be useful for your typical lazy college student. It has room for extra goals in the form of learning and adapting to the user’s patterns.

Full Description:

Target User Group – The system would be aimed primarily at novice weightlifters. It could start from scratch, assuming no user knowledge of the lift, and teach it in stages. The system’s ability to identify subtle problems in form could be useful for trainees of all skill levels, though, as some common form problems are very tricky to notice on your own (e.g. not having the bar in the right spot on the back in a squat, or allowing the lumbar spine to round in a deadlift).

Problem Description & Context – We are looking to solve the problem of inexperienced weightlifters performing lifts inefficiently or dangerously by compromising their form. This solution could offer guidance on how to correct these problems, as well as more general advice on routines and workout programming. We envision addressing this in the weight room, right where the user does his/her lifts. Users will have varying experience levels, and we don’t want to interrupt experienced lifters who would rather not be bothered. Some users might have different or misconceived opinions on how a lift should be done, and might choose to ignore some or all of the advice given. Some users might have friends or (human) trainers with them to help out.

Technology Platform – We imagine this working with a Windows PC (probably one of our laptops initially) hooked up to a Kinect. This gives us body-tracking APIs that will make it much easier to identify what the user is doing. Until such systems saturate the gym, the PC and Kinect could be placed on something like a wheeled cart, so the setup can monitor different exercises or watch from different angles. If the idea takes off, the technology could be incorporated directly into exercise equipment like squat racks.

More Detailed Design Sketches:

IMG_2102

-Using a kinect, monitor lifting technique and give the user feedback
-System can evaluate technique and also give instructions to user
-Can be built in to lifting equipment or implemented separately

IMG_2101

-User will be able to select from different lifts they want to perform.
-Visual and auditory display will give feedback (“Good Job!”, “Keep Arms Even”…) as well as act as the interface for selecting lifts or receiving instruction.
-Monitor/audio can show “ideal” lifts to teach the user.

IMG_2103

-Kinect watches technique.
-Force sensors check balance/evenness

 

Simon!

Group Members: Matthew Drabick, Adam Suczewski, Andrew Callahan, Peter Grabowski

High Level Description:

For part 3 of the lab, our group decided to build a “Simon” game. Our game setup uses 3 buttons and 4 LEDs. Each button corresponds to one LED, and the 4th LED is used to indicate an error. The game starts with the Arduino flashing one of the 3 lights, chosen randomly. The user must then press the button corresponding to that light. If the user’s input is correct, the Arduino extends the pattern by one and the user must then match the extended pattern. If the user’s input is incorrect, the error light flashes and the user loses the game. This process repeats until the pattern reaches a set maximum length (maxLength in the code below), at which point the user wins the game.

Our game is mounted on a paper plate for support and uses origami balloons to diffuse the light from the LEDs.

Here is a video of the game in action:


High Level Design Process:

We began with brainstorming. Our initial ideas included both interactive and non-interactive designs: pre-set light display patterns (Morse code or musical patterns), diffusers using various translucent paper and plastic covers, and a binary counter.

We decided to first make the binary counter, as we thought it would be both technically and visually interesting. We also would have the opportunity to use our origami balloon/lantern diffusers which we thought were pretty cool.

The binary counter consisted of two buttons (an increment and a decrement) as well as four LEDs to display a 4-bit number. With those four LEDs, we could count from 0 to 15, and display a fun pattern on overflow or underflow. A minimal sketch of the counter logic is given below.
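Since the counter code itself isn’t reproduced in this post, here is that sketch (the pin numbers are illustrative, and the overflow/underflow pattern is omitted):

// 4-bit binary counter: increment/decrement buttons, 4 LEDs show the bits.
const int ledPins[4] = {9, 10, 11, 12};
const int incPin = 2;
const int decPin = 3;
int count = 0;

void setup() {
  for (int i = 0; i < 4; i++) pinMode(ledPins[i], OUTPUT);
  pinMode(incPin, INPUT);
  pinMode(decPin, INPUT);
}

// light LED i if bit i of count is set
void showCount() {
  for (int i = 0; i < 4; i++) {
    digitalWrite(ledPins[i], (count >> i) & 1);
  }
}

void loop() {
  if (digitalRead(incPin) == HIGH) count = (count + 1) & 15; // wraps 15 -> 0
  if (digitalRead(decPin) == HIGH) count = (count - 1) & 15; // wraps 0 -> 15
  showCount();
  delay(200); // crude debounce
}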

We began by sketching our design and drawing the circuitry. Here are our initial brainstorming sketches:

binary_drawing_1

Drawing of the binary counter interface.

binary_drawing_2

Diagram of the binary counter circuitry.

We then assembled our circuit and wrote the code to power our binary counter (technical details given below). In the end we built this:


After completing the binary counter, though, we considered our design choices and thought about what we could do to make our circuit better. After making modifications and iterating through different design choices, we decided that what our circuit lacked was an interesting method of interacting with the counter. We liked that the binary counter was interactive, but it was limited to single presses doing the same thing every time. With this in mind, we considered various ways of expanding on our counter, such as using it to select and play one of 16 different pre-set light patterns (which could be Morse code messages or other interesting displays) or to play a game. In the end we decided to create the Simon game described above.

simon_drawing1

An initial drawing of the simon interface

Initial design decisions for Simon included how to organize the user interface and how many lights and buttons to include. We decided to use a paper plate as the body of our game, as it was easy to manipulate but still gave sufficient support. We initially planned to make the game with 4 lights and 4 buttons, but reduced those numbers to 3 (plus the central error light) as we continued in the design process and faced limitations due to the availability of resources and the bulkiness of alligator clips.

A look at the alligator clips connecting the LEDs and buttons to the breadboard

Once the basic layout of the game was implemented, we made gameplay decisions like how long to wait between flashes of light and how long the pattern should be for the user to win. We made these decisions by playing the game ourselves, and by having other people play our game. We also had to work out bugs such as a single button press being registered twice. After trying our game with different parameters, we arrived at our final design.

A top view of the final Simon game

Technical Documentation / Technical Design Choices:

There were 2 main circuit components that we used to power our game: LEDs and buttons (each used with resistors, as needed). In the first 2 parts of the lab, we became familiar with using LEDs. Helpful information about using LEDs with Arduinos can be found at http://arduino.cc/en/Tutorial/blink.

LEDs are implemented by creating a connection between an Arduino pin and ground. (Image from arduino.cc/en/tutorial/blink)

We then looked up how to use buttons with the Arduino at http://arduino.cc/en/tutorial/button. To use a button, we needed to provide a path from 5v through the button to an input pin, with a pull-down resistor from the pin to ground, so the pin reads HIGH only when the button is closed.

Buttons are implemented by creating a path between ground, 5v, and an input pin. (Image from arduino.cc/en/tutorial/button)
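As a quick check that a button wired this way works, something like the following mirrors the button state onto an LED (the pin numbers match the ones we use later, but any free pins will do):

// LED on pin 9 lights while the button on pin 2 is held down.
const int ledPin = 9;
const int buttonPin = 2;

void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(buttonPin, INPUT);
}

void loop() {
  digitalWrite(ledPin, digitalRead(buttonPin));
}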

With an understanding of buttons and LEDs, we were able to get started with the technical side of the design process.

We started by drawing the circuitry for our game. There are four LEDs and three buttons in the final implementation, but only 3 LEDs in the diagram below. The fourth LED is an indicator light for when you incorrectly input a sequence.

simon_drawing_2

An initial drawing of simon circuitry

With these design sketches, we were then able to implement our design on the breadboard and mount the buttons and lights on the plate. Though the organization of wires in our pictures looks complicated, it is essentially 4 LED/resistor pairs each connected to a digital output pin and ground, and 3 button/resistor pairs each connected to a digital input pin, ground, and 5v.

The wiring of the simon interface.

We did not reach our final design immediately, though. Our initial design had the LEDs on the inside; we eventually decided to move them to the outside of the paper plate for the final prototype. This allowed the diffusers to fit better and also made room for our center failure-indication LED. We also attempted to incorporate a 4th LED/button pair but found we were limited on resources.

With the circuitry in place, we then focused on the physical construction. For the LED diffusers, we chose to use the origami balloons/lanterns that we had used previously for the binary counter. We used these origami instructions to make four balloons out of Dum-Dum wrappers.

single_lantern

A single origami balloon used to diffuse light.

For the base of the game, we took a paper plate and poked four holes for each button (one for each leg) and a single hole for each LED. We then could insert the switches and LEDs and attach alligator clips to the bottom to make the circuits displayed in the tutorials and our diagrams. By following those design diagrams, we constructed what you see in the picture below.

Wiring of the Simon interface to the Arduino

As you can see, we supported the plate with three stationary clamps (“helping hands”). This allows the alligator clips to hang down, making the circuits easy to connect and preventing them from touching accidentally. It also gave us easy access during debugging. After some cleaning up of our wires, we finished our Simon design. Here is a walkthrough of the finished project:



How to:

1) After reading over the blink tutorial, connect LEDs to your breadboard so that each runs from one of digital output pins 9, 10, 11, and 12 through a resistor to ground.

2) After reading over the button tutorial, connect your buttons to your breadboard so they connect to digital input pins 2, 3, and 4, ground, and 5v.

Our Simon breadboard.

3) Run some test code on your Arduino. Start by running blink on each LED to check that your LEDs and connections are working. Then try the button code to test that each button is working. Finally, upload the code below to check that the Simon game works.

4) Construct the base of the game using a paper plate or similar material. Arrange the buttons/LEDs in the manner shown below, with the electrical contacts sticking through the plate.

5) Construct 4 origami balloons to act as diffusers.

6) Move the 4 LEDs and 3 buttons on your breadboard over to the plate setup one by one by running alligator clips from the plate setup to the breadboard.

7) Repeat step 3.

8) Cover each LED with an origami balloon diffuser. Hot glue the balloons to the plate for support.

Hot gluing origami balloon diffusers to the LEDs

9) You’re done!

 

The Code:

Our code starts by initializing a random array with length equal to the number of button presses needed to win. It flashes the first color in the sequence and then falls into the main loop, continually checking for button presses. When a button is pressed, the Arduino checks whether it corresponds to the next color in the sequence. If it is the expected color, it advances the position in the sequence by 1; otherwise, it flashes the error LED and resets. When the user has re-entered the sequence correctly, the next element is added to the sequence, and the game repeats until the user has succeeded on the maximum level.

The most significant problem we had was single button presses being recognized as double presses. We had trouble eliminating this behavior completely, but found some success by introducing a delay of about 250 milliseconds after handling each press.
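For reference, a millis()-based debounce is a common alternative to a fixed delay: a state change only counts once the input has been stable for a set interval. A rough sketch of that approach (the names and the 50 ms window are illustrative, not from our project code):

const int buttonPin = 2;              // illustrative pin
const unsigned long DEBOUNCE_MS = 50; // how long the input must be stable

int lastReading = LOW;                // last raw read
int stableState = LOW;                // last accepted (debounced) state
unsigned long lastChangeTime = 0;

// Returns true exactly once per debounced press.
bool pressed() {
  int reading = digitalRead(buttonPin);
  if (reading != lastReading) {
    lastChangeTime = millis();        // still bouncing; restart the timer
    lastReading = reading;
  }
  if (millis() - lastChangeTime > DEBOUNCE_MS && reading != stableState) {
    stableState = reading;
    return stableState == HIGH;       // report only the press edge
  }
  return false;
}

void setup() {
  pinMode(buttonPin, INPUT);
}

void loop() {
  if (pressed()) {
    // handle one clean press here
  }
}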

Here is the code we used for the project:

const int numButtons   = 3;
const int buttonPins[] = {2,3,4};   // arduino input pins
const int ledPins[]    = {9,10,11}; // arduino output pins
const int errorLed     = 12;

const int maxLength    = 5;

int randSeq[maxLength]; // contains randomly gen'd game seq
int level;              // current level
int numCorrect;         // how many pins you've pressed correctly so far
                        // always less than level

int lastStates[numButtons]; // to check button input

void setup() {
  randomSeed(analogRead(0)); // seed before resetState() builds the first sequence
  victory();
  delay(250);
  resetState();
}

// initialize randSeq with random values between 0 and numButtons-1
void makeArray(void) {
  int i;
  for (i = 0; i < maxLength; i++) {
    randSeq[i] = random(numButtons);
  }
}

// flash a given led
void flashLed (int ledNum) {
  analogWrite(ledNum, 0);
  delay(50);
  analogWrite(ledNum, 255);
  delay(200);
  analogWrite(ledNum, 0);
  delay(50);
}

// handle input
void checkInput(int i) {

  // wrong button was pressed
  if (randSeq[numCorrect] != i) {
    flashLed(errorLed);
    flashLed(errorLed);
    flashLed(errorLed);
    delay(250);
    resetState();
    return;
  }
  // correct button was pressed
  else {
    numCorrect++;
    // check for last button in level
    if (numCorrect == level) {
      // check for last level in game
      if (level == maxLength) {
        victory();
        delay(250);
        resetState();
      }
      // not last level in game
      else {
        delay(500);
        numCorrect = 0;
        level++;
        flashSeq();
      }
    }
    // not last button in level
    else {
      delay(100);
    }
  }
}

// determine which button was pressed
void checkButtons() {
  int i;
  for (i = 0; i < numButtons; i++) {
    int state = digitalRead(buttonPins[i]);
    if (state != lastStates[i]) {
      if (state == HIGH) {
        checkInput(i);
        delay(100);
      }
      lastStates[i] = state;
    }
  }
}

// flash the sequence of leds up to current level
void flashSeq(){
  int i;
  for (i = 0; i < level; i++){
    flashLed(ledPins[randSeq[i]]);
    delay(250);
  }
}

// turn all leds off
void setAllOff() {
  analogWrite(ledPins[0], 0);
  analogWrite(ledPins[1], 0);
  analogWrite(ledPins[2], 0);
}

// turn all leds on
void setAllOn() {
  analogWrite(ledPins[0], 255);
  analogWrite(ledPins[1], 255);
  analogWrite(ledPins[2], 255);
}

// flash all leds
void flashLeds(){
  setAllOff();
  delay(100);
  setAllOn();
  delay(100);
  setAllOff();
}

// flash the leds in a circle
void circle() {
  setAllOff();
  delay(100);
  analogWrite(ledPins[0], 255);
  delay(100);
  analogWrite(ledPins[0], 0);
  analogWrite(ledPins[1], 255);
  delay(100);
  analogWrite(ledPins[1], 0);
  analogWrite(ledPins[2], 255);
  delay(100);
  analogWrite(ledPins[2], 0);
  delay(100);
}

// special victory flash sequence
void victory() {
  flashLeds();
  flashLeds();
  flashLeds();
  circle();
  circle();
  circle();
  flashLeds();
  flashLeds();
  flashLeds();
}

// reset for a new game
void resetState() {
  makeArray();
  level = 1;
  numCorrect = 0;
  // delay(500);
  flashSeq();
}

// main loop
void loop() {
  checkButtons();
}