Final Project – Team X

Group 10 — Team X
– Junjun Chen (junjunc),
– Osman Khwaja (okhwaja),
– Igor Zabukovec (iz),
– Alejandro Van Zandt-Escobar (av)

Description:
A “Kinect Jukebox” that lets you control music using gestures.

Previous Posts:
P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-team-x/
P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-team-x/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/group-10-p3/
P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/09/group-10-p4/
P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-10-team-x/
P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/06/6085/

Video Demo:

Our system uses gesture recognition to control music. The user chooses a music file with our Netbeans GUI, then uses gestures for controls such as pause and play. We also support setting “breakpoints” with a gesture: when the dancer reaches a point in the music that they may want to return to, they perform a gesture to set a breakpoint; later, another gesture takes them back to that point in the music easily. The user can also change the speed of the music on the fly, with gestures for speed up, slow down, and return to normal. Each time the slow-down or speed-up gesture is performed, the music incrementally slows down or speeds up.

YouTube Video Link
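To make the control flow concrete, here is a minimal sketch of the gesture-to-playback mapping described above. It is an illustration only, not our actual implementation (playback really runs through Max/MSP); the MusicPlayer interface, the gesture names, and the 0.1 rate increment are all hypothetical.

```java
/** Hypothetical sketch of the gesture-to-playback mapping described above.
 *  The MusicPlayer interface and the 0.1 rate step are illustrative only. */
public class GestureJukebox {
    private double speed = 1.0;         // 1.0 = normal playback rate
    private double breakpointSec = -1;  // -1 means no breakpoint set yet
    private final MusicPlayer player;

    public GestureJukebox(MusicPlayer player) { this.player = player; }

    /** Called with the name of each gesture the recognizer reports. */
    public void onGesture(String name) {
        switch (name) {
            case "play":          player.play(); break;
            case "pause":         player.pause(); break;
            case "setBreakpoint": breakpointSec = player.positionSec(); break;
            case "goToBreakpoint":
                if (breakpointSec >= 0) player.seek(breakpointSec);
                break;
            case "speedUp":   speed += 0.1; player.setRate(speed); break;
            case "slowDown":  speed -= 0.1; player.setRate(speed); break;
            case "normal":    speed = 1.0;  player.setRate(speed); break;
        }
    }
}

/** Minimal playback backend the sketch assumes. */
interface MusicPlayer {
    void play();
    void pause();
    void seek(double seconds);
    double positionSec();
    void setRate(double rate);
}
```

A recognizer would call onGesture with each detected gesture’s name; the controller keeps the current rate and breakpoint as its only state.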

Changes:

  • Improved the weights for predefined gestures in Kinetic Space, as our testing in P6 indicated that our system struggled to recognize some gestures, and this was the main source of frustration for our users. By changing the weights Kinetic Space places on certain parts of the body (for example, weighing the arms more when a gesture is mainly arm movement), we made recognition noticeably better (a rough sketch of this weighted-matching idea follows the list).
  • Finished an improved GUI made in Netbeans and connected it to our Max/MSP controller. We want the interface to be as simple and easy to use as possible (a sketch of one way to pass messages from a Java GUI to Max/MSP also follows the list).
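For the first change, here is a rough sketch of the weighted-matching idea behind those body-part weights. This is not Kinetic Space’s actual code (we only tuned its configuration); the array shapes, joint ordering, and threshold are hypothetical.

```java
/** Sketch of weighted gesture matching: each joint gets a weight, so an
 *  arm-driven gesture can weigh the arm joints more heavily, mirroring the
 *  body-part weights we tuned in Kinetic Space. Shapes are hypothetical. */
public class WeightedMatcher {
    /** pose and template: [joint][x, y, z]; weights: one per joint. */
    public static double weightedDistance(double[][] pose, double[][] template,
                                          double[] weights) {
        double total = 0;
        for (int j = 0; j < pose.length; j++) {
            double dx = pose[j][0] - template[j][0];
            double dy = pose[j][1] - template[j][1];
            double dz = pose[j][2] - template[j][2];
            total += weights[j] * Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
        return total;  // a score below some tuned threshold counts as a match
    }
}
```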
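For the second change, one common way to connect a Java GUI to Max/MSP is to send OSC messages over UDP, which a Max patch can receive with a [udpreceive] object. The sketch below hand-rolls a minimal argument-free OSC message; the "/transport/pause" address and port 7400 are made-up examples, not necessarily what our patch uses.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/** Minimal OSC-over-UDP sender. A Max patch containing [udpreceive 7400]
 *  would receive "/transport/pause" as a message. Names are examples. */
public class MaxBridge {
    // OSC strings are NUL-terminated and padded to a 4-byte boundary.
    private static byte[] oscString(String s) {
        byte[] raw = s.getBytes();
        byte[] out = new byte[(raw.length / 4 + 1) * 4];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    /** Send an argument-free OSC message to the given host and port. */
    public static void send(String address, String host, int port) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        buf.write(oscString(address));
        buf.write(oscString(","));  // empty type-tag string: no arguments
        byte[] data = buf.toByteArray();
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(host), port));
        }
    }

    public static void main(String[] args) throws IOException {
        send("/transport/pause", "127.0.0.1", 7400);
    }
}
```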

Goals and Design Evolution:
While our main goal (to create a system making it easier for dancers to interact with their music during practice) and design (using a Kinect and gesture recognition) have not changed much over the semester, there has been some evolution in the lower-level tasks we wanted the system to accomplish. The main reasons for this have been technical: we found early on through testing that for the system to be useful, rather than a hindrance, it must recognize gestures with very high fidelity. This task is further complicated by the fact that the dancer is moving a lot during practice.
For example, one of our original goals was to have the system follow the speed of the dancer without the dancer having to make any specific gestures. We found that this was not feasible within the timeframe of the semester, however, as many of the recognition algorithms we looked at used machine learning (so they worked better with many examples of a gesture), and many required knowing, at least roughly, the beginning and end of the gesture (so they would not work well with gestures embedded in a dance, for example).

Also, we had to essentially abandon one of our proposed functionalities. We thought we would be able to implement a system that would make configuring a recognizable gesture a simple task, but after working with the gesture recognition software, we saw that setting up a gesture requires finely tuning the customizable weights of the different body parts to get even basic functionality. Implementing a system that automated that customization, we quickly realized, would take a very long time.

Critical Evaluation:
Based on the positive feedback we received during testing, we feel that this could be turned into a useful real-world system. Many of our users have said that being able to control music easily would be useful for dancers and choreographers, and as a proof of concept, we believe our prototype has worked well. However, from our final testing in P6, we found that users’ frustration levels increased if they had to repeat a gesture even once. Therefore, there is a large gap between our current prototype and a real-world system. Despite the users’ frustrations during testing, they did indicate in the post-evaluation survey that they would be interested in trying an improved iteration of our system and that they thought it could be useful for dancers.
We’ve learned several things from our design, implementation, and evaluation efforts. First, we’ve learned that although the Kinect was launched in 2010, there actually isn’t great familiarity with it in the general population. Second, we’ve found that the Kinect development community, while not small, is quite new. Microsoft’s development support, with official SDKs, is mainly for Windows, though there are community SDKs for Mac. From testing, we’ve found that users are less familiar with this application space than with conventional window-based GUIs, but that they are generally very interested in gesture-based applications.

Next Steps:

  • In moving forward, we’d like to make the system more customizable. We have not found a way to do so with the Kinetic Space gesture recognition software we’re using (we don’t see any way to pass information, such as user-defined gestures, into the system), so we may have to implement our own gesture recognition. The gesture recognition algorithms we looked at seemed to share a basic structure: track the x, y, z positions of various points and limbs, and compare their movement against a template, with margins of error perhaps determined through machine learning (a sketch of this frame-comparison idea follows the list). We did not tackle this implementation challenge for the prototype, as we realized that the gesture recognition would need to be rather sophisticated for the system to work well. With more time, however, we would like to do our gesture recognition and recording in Max/MSP so that we could integrate it with our music-playing software, and then perhaps embed the Kinect video feed in the Netbeans interface.
  • We still like the idea of having the music follow the dancer without any other input, and we would like to implement that if we had more time. To do so, we would need the user to provide a “prototype” of the dance at regular speed. Then, we might extract specific “gestures” or moves from the dance and change the speed of the music according to the speed of those gestures (the playbackRate function in the sketch below illustrates this mapping).
  • As mentioned before, we would also like to implement a configure-gesture functionality. It may even be easier after we move to Max/MSP for gesture recognition, but at this point, that’s only speculation.
  • We’d also like to do some further testing. In the testing we’ve done so far, we’ve had users come to a room we’ve set up and use the system as we’ve directed. We’d like to ask dancers if we can go to their actual practice sessions and see how they use the system without our guidance. It would even be informative to leave the system with users for a few days, have them use it, and then collect whatever feedback they have.
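As a sketch of the frame-comparison idea from the first bullet (and of the speed mapping from the second), the following compares a live window of joint positions against a recorded template with a per-joint margin of error. Equal frame counts are assumed for simplicity; real dance input would need time alignment such as dynamic time warping. All shapes, margins, and names are hypothetical.

```java
/** Sketch of gesture matching by frame comparison: a gesture template is a
 *  sequence of [frame][joint][x, y, z] positions, and a live window matches
 *  when every joint stays within a per-joint margin of error. */
public class TrajectoryMatcher {
    public static boolean matches(double[][][] template, double[][][] live,
                                  double[] marginPerJoint) {
        if (template.length != live.length) return false;  // simplifying assumption
        for (int f = 0; f < template.length; f++) {
            for (int j = 0; j < template[f].length; j++) {
                double dx = template[f][j][0] - live[f][j][0];
                double dy = template[f][j][1] - live[f][j][1];
                double dz = template[f][j][2] - live[f][j][2];
                if (Math.sqrt(dx * dx + dy * dy + dz * dz) > marginPerJoint[j]) {
                    return false;
                }
            }
        }
        return true;
    }

    /** "Music follows the dancer": if a move took protoSec in the recorded
     *  prototype and liveSec live, play at protoSec / liveSec of normal
     *  speed, so slower dancing yields slower music. */
    public static double playbackRate(double protoSec, double liveSec) {
        return protoSec / liveSec;
    }
}
```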

Source Code:
Source Code

Third Party Code:

Demo Materials:
Demo Materials

A3 – Score

Names:

Osman Khwaja, Jae Lee, Prakhar Agarwal

Links to Individual Posts:

Jae Lee – Jae’s Notes

Osman Khwaja – Osman’s Notes

Prakhar Agarwal – Prakhar’s Notes

Discussion:

Question 1:
– The menu system is convoluted and not representative of high-volume usage. The most frequently used element (the pulldown menu) takes up very little space, whereas things that one almost never considers (contact info, holds, more services) take up the majority of the screen.

– Navigation is wacky. The Student Center is listed in multiple spots (Favorites, Self Service, and Main Menu), and it also has its own link. It is extremely redundant, and navigating this site without practice is not intuitive.

– Fix the display usage problem. The pulldown menu (which is used most of the time) should be displayed prominently on the screen, while things like holds and campus connections should be minimized or collapsed.

– Clean up the navigation by including a highlighted top-level menu. It would let you switch between categories, making sub-options easier to find. For example, you should be able to select Courses, which would then let you select add, swap, and drop. You should also be able to switch into Payroll, which would give you access to the variety of options that are currently hard to find.

– Get rid of the home page, and potentially make the Student Center the home page, after making some UI updates.

Question 2:
For the most part, we found issues which we then matched to Nielsen’s heuristics. However, in some cases it did help to have the list in front of us. In particular, I only thought of the problem with the error message in the add/drop/swap section after reading about how a proper error message should be constructed.

Question 3:
The standards are pretty solid, but we would suggest some improvements. Some of the heuristics are quite general and apply to a wide variety of different errors. For example, “Consistency and Standards” is a pretty broad category. One way to break it down would be to replace it with two new heuristics: “Follows current, successful trends for layouts of presented options” and “Presented options operate as described.”

Question 4:
– Does the heuristic system properly describe the errors of all systems, or does the nature of some application interfaces make them prone to only a certain set of issues?

– Do you think the heuristics are too broad or too specific? Explain.

Assignment A2: Osman Khwaja

Observations:

On Thursday, I stood inside and around the Friend Center for two different class change periods to observe how different people use the 10 minutes. Of the various people I saw, three in particular interested me; I took their actions and generalized them to groups of people for whom I could design a product.

Candidate 1: Hurried bicyclist

This speed demon is trying to bike as fast as he can through a bunch of people who crowd the walkways during class change. He gets stuck behind a crowd of people and is forced to slow down significantly which probably bothers him. His motivation is unclear (could be late to a class, forgot something somewhere, or just has the need for speed), but his frustration with slow pedestrians isn’t. The one I saw struggled to navigate the walkway between Fisher and Friend, got stuck behind a group of walkers, and nearly hit someone trying to weave through. Maybe I could design something that allows him to navigate crowds better.

Candidate 2: The Early Bird

This individual is the one you see trying to kill time outside the classroom. The one that I observed came out of the Friend Library, went downstairs to the tables, and pulled out his phone. Eventually two students walked into one of the classrooms, and our subject followed them a minute or two later. My guess is that he didn’t want to be the first into the classroom (it might have been a little awkward to be alone with the professor). Maybe we could design something for this candidate that lets him kill time or even lets him know if people are in the classroom.

Candidate 3: The Kobayashi (google it if you don’t know!)

This student is the one who has unfortunately scheduled class such that she can’t enjoy a proper lunch break on certain days. Walking and eating quickly proves challenging as this person struggles to juggle the bunch of things in her hands. The one I saw walking into the Friend Center was trying to eat from a takeout tray, hold a water bottle against her side with her arm, and open the door. Needless to say, she had to wait until someone came by to get into the building. Maybe there’s some tool that would better enable her to enjoy her quick lunch, receive her lunch more quickly, or interact with her surroundings hands-free.

Brainstorm:

1. Real-time pedestrian traffic monitor and route suggester
2. Bike horn that sounds when it senses proximity to pedestrian
3. Pedestrian avoidance system with sensor and intelligent controller
4. Optimal path navigator based on location and end destination
5. A handle bar shrinking system to enable better weaving
6. Fellow class student locator to see if there’s an empty classroom
7. Refresher material application based on classroom proximity
8. A betting application based on which students arrive to class earliest
9. A scenic route suggesting app to kill time walking to class
10. An estimator app that predicts the time needed to eat a given meal
11. An app to order lunch for pickup at a given time
12. An app that suggests how best to hold your objects
13. An automated backpack zipper opener for hands-free opening/storing
14. A help signaler device that notifies people to help open doors
15. A food carrying tray that gently heats food as you walk

Favorite Ideas:

– I chose the pedestrian avoidance system (#3) because it has the most upside (helping bikers everywhere, avoiding accidents, etc.) and is doable given current technology (see Google’s self-driving cars).

– I also chose the student locator (#6) because I thought it was pretty neat and potentially doable given the prevalence of smartphones and OIT’s registered database of devices.

Quick Prototypes:

Pedestrian Avoidance

Description: The above picture shows the screen of the device that you’d attach to the front of your bike. The horizontal dashes with arrows show the detected obstacles and their trajectories. You are represented by the arrow, with your direction of travel indicated by its orientation. Using an intelligent system that takes in the velocities of the sensed obstacles, the device displays a suggested route through the crowd, signified by the dotted line.

Student Locator

Description: The above picture shows the app interface that you’d open on your phone. People, including yourself, are represented as dots against the map layout of the building or area you are in. By looking at the map, you can see if anybody is in the classroom or on their way to the room. In this picture, two people are already in the classroom and four are on their way to the building.

Testing and Feedback:

I chose to test the pedestrian avoidance system because it’s my personal favorite and I was really interested to see what people would think of it. I managed to catch up with three people who were extremely kind and gave me 5 minutes of their time.

– Person 1: Jason – I met up with Jason in the Prospect House garden. I put the device on his bike, as shown in Picture 1, and asked him what he thought he should do. He was a little confused at first, but after I told him to imagine the horizontal lines as people, he quickly figured out that he was the arrow and that he should follow the projected path. Clearly, the graphic wasn’t intuitive enough for him to pick up without a little nudging. He also said, “It looks ugly. I would throw it away.”

– Person 2: Stephen – I met up with Stephen outside Brown Hall. I put the device on his bike and asked him how he would use it. Unlike Jason, Stephen immediately knew what to do and commented that he was familiar with this type of interface from GPS devices. Unfortunately, Stephen didn’t see the need for the device, saying something to the effect of “why would I let the device guide me when I can do it better with my own eyes?” He also commented that it wasn’t pretty to look at.

– Person 3: Roy – When I showed Roy the device and asked him to use it without telling him how, he was initially confused. He soon figured out that he was the arrow but couldn’t figure out what the horizontal lines were. Once I told him, he thought it was a cool idea and started telling me about how he could use it. He also asked some pretty insightful questions about how safe this would be if multiple people were using it, and whether a device that encourages weaving through traffic really promotes safety.

Pictures of Testing:

I took some pictures (some of them staged) of the user testing process to show how the testing was conducted and how the prototype was used.

Picture 2

This picture gets at the essence of the problem. Hurried bicyclists often struggle to navigate through pedestrians on walkways, especially when they’re crowded during class change. I designed a tool that I hoped would make that experience less frustrating.

Picture 1

This picture shows how I mounted the prototype to the bicycle for user testing. Typically, I had the tester sit on the bike while I held the device in place by hand and asked them to interact with it. They would think through it, ask some questions, and eventually figure out how it worked. Then, I got their feedback.

Picture 3

Here’s the ideal usage of the prototype in action. Given a set of obstacles, the prototype maps out an optimal course through them in real time, and the bicyclist follows the path until he clears the obstacles. When I had my users try out the prototype, I made sure we waited until the walkway was crowded and then asked them to use the bike with the prototype.

Insights:

– Using simple symbols to represent complex objects adds a layer of abstraction that can take away from the intuitiveness of your design. In my example, using horizontal lines to represent people or objects caused two of my users to initially struggle to figure out how to use the device. In my next iteration, I could create a representative symbol, like a stick figure, to show an incoming person, and something else to show an inanimate object. Horizontal lines, while easy to draw, received pretty negative feedback.

– Aesthetics are extremely important. As two of my users noted, my device was definitely not the prettiest interface they’ve ever seen. In my next iteration, I would look to create something much more enjoyable. Color-coded objects and a 3D looking arrow are just some of the things I could use to improve how my device looks.

– Provide something unique. As Stephen pointed out, my device simply does something that a conditioned human can do pretty easily. While there is some value in that, it isn’t as likely to be as successful as a product that does something humans can’t easily do. Maybe adding some small features to the device, like a flashlight, a speedometer, or a video camera to capture some cool footage, could help push my device over the top. While that may stray from the original purpose of the device, these changes could make it a great product.