Group 10 – P4

Group 10 – Team X

Group Members

-Junjun Chen (junjunc)

-Osman Khwaja (okhwaja)

-Igor Zabukovec (iz)

-Alejandro (av)

One Sentence Project Summary

A “Kinect Jukebox” that lets you control music using gestures.

Test Method

i. We read the informed consent script to the users and asked for verbal consent. We felt this was appropriate because the testing process did not involve any sensitive or potentially controversial issues, and because the only photos we took did not reveal the participants’ identities. LINK

ii. One of our participants had taken part in our previous research, so we felt it was appropriate to see how she interacted with a prototype that her feedback had helped shape. The other two were new: a man and a woman with a more amateur interest in dance than our previous testers. We chose them because they had previously expressed interest in using the system, but would also provide a slightly different perspective from the much more experienced subjects we had tested before.

iii. We tested in Alejandro’s room, because it had a large clear area for the dancers as well as a stereo for playing the music. The prototype setup was mostly just a computer (with software for slowing down and playing music) connected to the stereo. We also had some paper prototypes for configuration. One person sat behind the computer, controlling the audio playback. The placement of the computer also represented the direction the Kinect was pointing.

iv. For testing, we had each user come into the room and perform the tasks. Osman wrote the scripts, Alejandro acted as the “Wizard of Oz”, and Junjun and Igor observed and took notes. First we read the description script to the user, then showed them a short demo. We then asked if they had any questions and clarified any issues. After that, we followed the task scripts in increasing order of difficulty. We started with the easiest task, setting and using pause and play gestures, to help the user get used to the idea. This way, the users were more comfortable with the harder tasks (which we wanted the most feedback on).

Links: DEMO. TASK 1. TASK 2. TASK 3.

Results

All of our users had little trouble using our system once we explained the setup. They were initially confused about what the “system” actually was, since our prototype was essentially a Wizard of Oz setup. However, after completing the first task, they quickly became comfortable with it.

For two of our users, we had to explain the field of view of the Kinect and that it would not be able to see gestures that were obscured. In our real system, there would be less confusion about this, as users would be able to see the physical Kinect camera. When we visualized our gestures, we thought of them as static (“holding a pose”). However, all of our users used moving gestures instead, which suggests those may be more natural. Additionally, the first gestures our users selected were dramatic and, as one user later commented, “not very convenient”, suggesting that they did not fully understand what the gestures would be used for (that they would have to repeat one every time to play or pause, etc.). Their later gestures were more pragmatic. Our second user asked for several additional features during the test, such as additional breakpoints and more than one move for “follow mode”. He also commented on how the breakpoints task would be useful in dance studios and for dance companies.

Discussion

The Kinect field of view problem was something we knew about, but we had not considered it a major issue before now. However, the fact that two of our testers tried to set gestures that included moving a hand behind their body suggests that it may be. We had originally planned to use static gestures, which would be easier to capture, but since all of our users used moving gestures, it may be best to allow them. We had also thought that allowing only one breakpoint would be enough, but after testing that task, our second user immediately asked how to set more than one. This suggests that our system should support multiple breakpoints. For follow mode, all of our users were confused when we asked them to perform only one single move, and they felt awkward repeating that move many times. In the words of user two, “You can’t just isolate one move.” Ideally, we would be able to follow a series of moves, but implementing this may prove difficult. We are therefore considering changing this functionality to a gesture for fast-forwarding/slowing down instead.

Plan

Given that the usability of our prototype will depend on what functionality the Kinect can give us, we feel it would be best to start building a higher-fidelity prototype with the Kinect itself.

Images and Video of Testing

[kaltura-widget uiconfid="1727958" entryid="0_vd25l755" width="400" height="360" addpermission="0" editpermission="0" /]