a. Group Information
Group 12
Do You Even Lift?
b. Group Names
Adam, Andrew, Matt, Peter
c. Project Summary
Our project is a Kinect-based system that watches people lift weights and gives instructional feedback to help them lift more safely and effectively.
d. Project Tasks
Our first task is to give users feedback on their lifting form. In this prototype, we implemented this in the “Just Lift” mode for the back squat: the user performs a set of exercises and the system gives feedback on their performance on each repetition. Our second task is to create a full guided tutorial for new lifters. In this prototype, we implemented this in the “Teach Me” mode for the back squat: the system takes the user through each step of the exercise until the user demonstrates competence and is ready to perform the exercise in “Just Lift” mode. Our third task is to track users between sessions. For our prototype, we implemented this task using the “Wizard of Oz” technique for user login. Our intention is to allow users to log in so that they can reference data from previous uses of the system.
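As an illustration of the kind of per-rep check “Just Lift” performs, here is a minimal sketch that grades back-squat depth from hip and knee heights. The joint representation, function name, and thresholds are hypothetical stand-ins for illustration, not the Kinect SDK types our prototype actually uses.

```python
# Illustrative sketch only: grades a single back-squat repetition by depth.
# The joint data is a hypothetical dict of vertical (y) positions in meters,
# not the actual Kinect SDK skeleton types used in our prototype.

def grade_squat_rep(frames):
    """frames: list of dicts mapping joint name -> vertical (y) position."""
    # Judge depth at the lowest point of the rep: the hips should reach
    # (or drop below) knee height for a full-depth squat.
    lowest = min(frames, key=lambda f: f["hip_center"])
    knee_height = (lowest["knee_left"] + lowest["knee_right"]) / 2.0
    hip_above_knee = lowest["hip_center"] - knee_height

    if hip_above_knee <= 0.0:
        return "Good rep: you hit full depth."
    elif hip_above_knee < 0.10:  # within ~10 cm of parallel (arbitrary threshold)
        return "Almost there: squat a little deeper."
    else:
        return "Too shallow: lower your hips to at least knee height."
```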
e. Changes in Tasks
We originally intended to allow users to log in to a web interface to view their performance from previous workouts. However, we thought there were more interesting interface questions to attack with the Kinect itself, so we decided to forgo the web interface for our initial prototype. For now, we use “Wizard of Oz” techniques to track users between sessions, but we intend to develop a more sophisticated mechanism in later iterations of the prototype.
f. Revised Interface Design Discussion
We revamped the design of the Teach Me feature to incorporate user suggestions. Nearly all of the users who tested our system suggested that an interactive training portion would be more useful than the step-by-step graphics we had originally incorporated. We have implemented the interactive Teach Me feature and are very pleased with how it turned out. The earlier testing was very helpful in reimagining the design.
We made other small changes as well. We revised much of the text throughout the system, again based on feedback from users. For example, we renamed “Quick Lift” to “Just Lift,” which we believe better describes the functionality offered by that page. Similarly, we changed “What is this?” to “About the system,” giving a clearer description of what that page does.
Original sketches can be seen here: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2/
And our new interface can be seen below.
Updated Storyboards of 3 Tasks
Just Lift
Teach Me
User Tracking
Sketches For Unimplemented Portions of the System
We still have to implement a mechanism for displaying a user’s previous lifting data. In our prototype, login is handled with “Wizard of Oz” techniques. Once the user is logged in, though, we intend to provide a “History” page that displays summary data of the user’s performance in a graph or in a table like the one below.
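As a rough sketch of what the “History” page might compute, the snippet below aggregates per-rep records into one summary row per session. The record format and example data are made up for illustration; the real data would come from the reps graded by the Kinect system for the logged-in user.

```python
# Sketch of the kind of per-session summary the planned "History" page
# could show. Record format and example data are hypothetical.
from collections import defaultdict

def summarize_history(reps):
    """reps: list of (date, exercise, grade) tuples, grade in 0..1."""
    sessions = defaultdict(list)
    for date, exercise, grade in reps:
        sessions[(date, exercise)].append(grade)

    # One summary row per session: date, exercise, rep count, average form grade.
    return [(date, exercise, len(g), sum(g) / len(g))
            for (date, exercise), g in sorted(sessions.items())]

# Example with made-up data:
history = [("2013-03-01", "Back Squat", 0.9),
           ("2013-03-01", "Back Squat", 0.7),
           ("2013-03-08", "Back Squat", 0.95)]
for date, exercise, count, avg in summarize_history(history):
    print(f"{date}  {exercise}: {count} reps, avg form {avg:.0%}")
```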
Additionally, we would like to go even further with the idea of making the “Teach Me” feature more of a step-by-step interaction. We found in our own tests of the system that it was not very fun to read large blocks of text. We want to present users with information in small pieces and then have them either wave a hand to confirm that they have read the message or demonstrate the requested exercise to advance to the next step of the tutorial.
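One way the wave-to-confirm gesture could be detected from skeleton data is sketched below; the joint layout and thresholds are assumptions for illustration only, not code from our prototype.

```python
# Illustrative sketch of detecting the hand-wave confirmation described above.
# Joint positions are hypothetical (joint name -> (x, y)); the real system
# would read them from the Kinect skeleton stream.

def detect_wave(frames, min_crossings=3):
    """Return True if the right hand, held above the elbow, swings left and
    right past the elbow at least `min_crossings` times."""
    crossings = 0
    last_side = None
    for f in frames:
        hand_x, hand_y = f["hand_right"]
        elbow_x, elbow_y = f["elbow_right"]
        if hand_y <= elbow_y:        # hand dropped below the elbow: reset
            last_side = None
            continue
        side = "left" if hand_x < elbow_x else "right"
        if last_side is not None and side != last_side:
            crossings += 1
        last_side = side
    return crossings >= min_crossings
```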
g. Overview and Discussion of New Prototype
As described above, we radically changed the Teach Me section of the prototype. We originally implemented a presentation-style interface, with static images accompanied by text that the user read through. In the second revision of the prototype, we adopted a much more interactive style, where the steps are presented one at a time. After each step is presented, the user practices it and cannot progress until they have perfected their technique. We believe this style is more engaging as well as more effective, and it has the added benefit of not letting users with poor technique (whether through laziness or lack of understanding) slip through the cracks. This emphasis on developing strong technique early on is important when safety is a factor; users can be severely injured by incorrect technique.
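The gating logic behind this interactive style is simple; the sketch below illustrates it with stand-in functions for the Kinect UI and skeleton tracking (none of these names come from our actual code).

```python
# Minimal sketch of the Teach Me gating logic: each step is shown, then the
# user must demonstrate it correctly before the tutorial advances.
# show_step, watch_attempt, and give_feedback are hypothetical stand-ins.

class Step:
    """One small piece of the tutorial: an instruction plus a form check."""
    def __init__(self, instruction, check):
        self.instruction = instruction
        self.check = check            # check(attempt) -> (passed, feedback)

def run_tutorial(steps, show_step, watch_attempt, give_feedback):
    for step in steps:
        show_step(step.instruction)   # present one step at a time
        while True:
            attempt = watch_attempt() # capture one practice attempt
            passed, feedback = step.check(attempt)
            give_feedback(feedback)
            if passed:                # only advance once the form is correct
                break
```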
We have also implemented the Just Lift feature. The interface has been refined since the previous iteration, including streamlined text, and we are pleased with how it has developed. The system automatically tracks and grades your reps, with sets defined by when the user steps out of the frame. There was no precedent for this interface decision, but we believe it is a fairly intuitive choice: after watching how people lift in Dillon, we noticed that they frequently step aside to check their phone or get a drink after a set.
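The “stepping out of frame ends the set” rule can be expressed compactly; the sketch below shows the idea using hypothetical per-frame visibility and rep-completion flags rather than our actual tracking code.

```python
# Sketch of the "stepping out of frame ends the set" rule used by Just Lift.
# frames is a hypothetical list of (user_visible, rep_completed) flags per
# Kinect frame; the real implementation reads these from skeleton tracking.

def split_into_sets(frames):
    """Return the rep count of each set, where leaving the frame ends a set."""
    sets, current = [], 0
    for user_visible, rep_completed in frames:
        if not user_visible:
            if current:               # user walked away: close out the set
                sets.append(current)
                current = 0
        elif rep_completed:
            current += 1
    if current:                       # set still in progress at the end
        sets.append(current)
    return sets
```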
As described above, we decided not to implement the web interface and instead adopted an advanced user login feature. The web interface, because it required a new modality (the web), would have demanded a lot of groundwork for relatively little return. Additionally, we believe the user login feature (OCR recognition of a gym card or prox) offers the chance to explore significantly more interesting and novel interface decisions than the web interface would allow. We will include a “History” page in the Kinect system to take the place of the web interface and allow users to see summary data of previous workouts.
For now, the OCR user login is implemented using Wizard of Oz techniques. We are investigating OCR packages for the Kinect, including open-source alternatives. This feature is a prime candidate for Wizard of Oz techniques: it provides a rich space to explore what “interfaces of the future” might look like, in keeping with the spirit of the Kinect. OCR packages exist, but they may be difficult to connect to the Kinect, and until we have time to explore the subject further, the Wizard of Oz technique lets us explore the interface possibilities without getting bogged down by the technology.

In addition to login, our audio cues were generated with Wizard of Oz techniques: we typed statements into Google Translate and had them spoken aloud to us. We “Wizard of Oz-ed” this functionality because we believed it was non-essential for the prototype, because we expect audio cues will not be too difficult to implement with the proper libraries, and because we wanted to tie some of the audio cues to the system login, which is also yet to be implemented.
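When we do replace the Wizard of Oz audio, an offline text-to-speech library is one option. The sketch below uses the pyttsx3 package as an example; we have not integrated it, so treat this as an assumption rather than part of the prototype.

```python
# Sketch of generating spoken feedback cues with an offline TTS library.
# pyttsx3 is one candidate package; this is an assumption, not something
# that is currently wired into our prototype.
import pyttsx3

def speak(message):
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()

speak("Nice depth. Keep your chest up on the next rep.")
```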
References/Sources
To get ourselves set up, we used the sample code that came with the Kinect SDK. Much of this sample code remains integrated with our system. We did not use code from any other sources.
h. Video and Images of Prototype
System Demo
*Note: in the video we refer to “Just Lift” as “Quick Lift.” This is an error; we were using the old name, which was updated after feedback from user testing.