Do You Even Lift? (Final Submission)

Group: Do You Even Lift?

Group Number: 12

Adam, Andrew, Matt, Peter

Our project is a Kinect-based system that watches people lift weights and gives instructional feedback to help them lift more safely and effectively.

Previous Posts:



Video Demonstrations

Part 1 – “Teach Me” task

Part 2 – “Quick Lift” task

Part 3 – “Past Performance” task



Changes since P6

  • Addition of voice cues — voice cues were always part of the design, and we used the time we had after P6 to add them

  • Stricter TeachMe voice commands — users were frustrated by “false positives” in the voice recognition system, which moved them to the next lesson too quickly. We made the voice recognition more stringent, reducing false positives and leaving users much happier

  • The system now supports user login and user data is accurately displayed on the graph.

  • We’ve made aesthetic adjustments to the graph and fixed some small bugs.

  • When a non-authenticated user goes to “Just Lift,” a list of tips is displayed to introduce basic system functionality.

Evolution of goals and design

Our original goal was to provide gym users, both novice and veteran, with a safe and effective way to learn how to lift weights. This goal has remained largely constant throughout the semester, although the methods we’ve used to address it have shifted. Many of the implementation details stayed the same throughout the process, with only surface aesthetic changes. The area that underwent the greatest number of iterations was the TeachMe feature. Originally, we had designed a system based around static pages, with animated GIFs illustrating the individual stages of the workout. This design would have worked, but it did not engage the user or take advantage of the interactive nature of the Kinect. After testing with the paper prototype, we drastically redesigned the TeachMe functionality. Our new design uses the Kinect to track the user’s lift and provide real-time feedback. We found that the redesign allowed users to interact with the system much more fluidly. After the redesign, testing showed that people made fewer mistakes after learning the squat; the information seemed to “stick” with them.

However, the redesign introduced new pain points. Initially, the TeachMe sequence automatically progressed to the next step once the current step had been completed. This had two advantages. First, we believed it would be convenient for users: they wouldn’t need to do anything to progress through the steps. Second, with heavy weight on their back, it’s important for safety that users remain focused on their lift. In practice, though, the automatic step progression turned out to be incredibly frustrating. The steps frequently changed before users could finish reading them, leading some users to abandon the system. We needed a way for users to indicate they had completed the current step without altering their form or re-racking their weight. We found a simple but elegant solution: voice commands that let users signify they are ready to move to the next step. This addressed our safety concerns while giving users finer-grained control over their interaction with the system.
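The voice-gated step progression described above can be sketched as a small state machine. This is an illustrative sketch, not our actual implementation: the names (`TeachMeSession`, `CONFIDENCE_THRESHOLD`) are hypothetical, and the real speech recognition comes from the Kinect, which is omitted here — the sketch only shows the “advance on a confident, exact command” logic that made the commands stricter.

```python
# Hypothetical sketch of voice-gated TeachMe progression.
# A real system would feed this from the Kinect's speech recognizer.

CONFIDENCE_THRESHOLD = 0.8  # stricter threshold reduces false positives


class TeachMeSession:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current_step(self):
        return self.steps[self.index]

    def on_speech(self, phrase, confidence):
        """Advance to the next step only on a confident, exact 'next' command.

        Returns True if the session advanced, False if the utterance
        was ignored (wrong phrase, low confidence, or already at the end).
        """
        if phrase.strip().lower() != "next":
            return False
        if confidence < CONFIDENCE_THRESHOLD:
            return False
        if self.index >= len(self.steps) - 1:
            return False
        self.index += 1
        return True
```

Requiring both an exact phrase match and a minimum recognizer confidence is what keeps a stray word at the gym from skipping a lesson mid-lift.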


Evaluation of Project

We believe that with further iteration, this system could be turned into a useful real-world product. The market for such a system is large – many people are interested in personal fitness and in improving their weight lifting skills. This system provides a lightweight and unobtrusive way to further that goal. While we have not been able to test our system with actual weights, user feedback from testing outside the gym has been very positive.

We now have a much better understanding of our application space. First, we learned that subtle body movements can be difficult to identify, which is perhaps one reason people value personal trainers. Additionally, differences in body proportions between trainees can make it difficult to judge form. Many parameters matter in back squat evaluation, and it is difficult to cover all of these cases accurately in code. We’ve also learned more about using the Kinect in an activity like weight lifting. The Kinect is sensitive to user position: when the user is not standing 6–10 feet from the sensor, it has trouble accurately picking up their skeleton. The Kinect is also intolerant of mirrors or other people in the frame. These problems could make it difficult to use the system in a location like a crowded gym, where people do not have a secluded lifting space.
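Two of the constraints above lend themselves to simple checks. A common way to make form evaluation independent of body proportions is to compare joint *angles* (e.g. hip–knee–ankle) rather than raw joint coordinates, and the sensor-distance limit is a straightforward range test. The sketch below is illustrative, not our actual code: it uses 2D points for simplicity (Kinect skeletons are 3D), and the function names are our own.

```python
import math


def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c,
    e.g. hip-knee-ankle for knee flexion. Comparing angles instead of
    raw coordinates makes the check independent of body proportions."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle))


def in_kinect_range(z_meters):
    """Kinect skeleton tracking is reliable at roughly 6-10 ft (~1.8-3.0 m)."""
    return 1.8 <= z_meters <= 3.0
```

A form rule like “knees should not cave in” or “thighs reach parallel” then becomes a threshold on one of these angles, which is exactly the kind of per-parameter check that is hard to get right across every body type.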


Further Work

We are pleased with the overall proof of concept of our system. Our current prototype handles just one type of lift (the back squat), and tracks only some aspects of form for that lift. Given more time, we would want to support a wider range of exercises and to have an experienced lifter train the system on proper form. Along those lines, if the project were scaled up, it could be fruitful to create a modular training mode, which could use PCA or other machine learning to recognize correct form.
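The training-mode idea could work roughly as follows: an experienced lifter records a set of good reps, the system learns what those reps have in common, and later reps are flagged where they deviate. The sketch below is a deliberately simplified stand-in: it learns per-feature mean and standard deviation and flags large z-scores, whereas a fuller version would use PCA to capture correlated axes of acceptable variation. All names and the feature encoding (a vector of joint angles per rep) are hypothetical.

```python
# Simplified stand-in for a learned "correct form" model.
# A scaled-up version might replace the per-feature statistics with PCA.

from statistics import mean, stdev


def train(good_reps):
    """Learn per-feature (mean, stdev) from reps with known-good form.

    good_reps: list of feature vectors (e.g. joint angles at the
    bottom of the squat), one vector per recorded rep.
    """
    columns = list(zip(*good_reps))
    return [(mean(col), stdev(col)) for col in columns]


def form_errors(model, rep, tolerance=2.0):
    """Return indices of features deviating more than `tolerance`
    standard deviations from the trained norm."""
    return [i for i, ((m, s), x) in enumerate(zip(model, rep))
            if s > 0 and abs(x - m) / s > tolerance]
```

The appeal of this structure is modularity: adding a new exercise only requires recording good reps for it, rather than hand-coding a new set of form rules.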

In our original proposal, we also included a web interface with our system design. For our prototype, we implemented the core functionality of what would be on the web interface in the system interface itself. If we were to fully carry out this project, we would build a web interface so users can view their lifting data remotely.


Our Code (zip)


Third-Party Code Used


Demo Materials