Group 10 – P6

Group 10 — Team X

-Junjun Chen (junjunc),

-Osman Khwaja (okhwaja),

-Igor Zabukovec (iz),

-(av)

Summary:

A “Kinect Jukebox” that lets you control music using gestures.

Introduction:

Our project aims to provide a way for dancers to interact with recorded music through gesture recognition. By using a Kinect, we can eliminate any need for the dancers to press buttons, speak commands, or generally interrupt their movement when they want to modify the music's playback in some way. Our motivation for developing this system is twofold: first, it can make practice routines for dancers more efficient; second, it has the potential to be integrated into improvisatory dance performances, as the gestural control can be seamlessly included as part of the dancer's movement. In this experiment, we use three tasks to determine how well our system accomplishes those goals, measuring the general frustration level of users as they use our system, the difficulty they have picking it up, and how well the system recognizes and responds to gestures.

Implementation and Improvements

P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-10-team-x/

Changes:

  • Implemented the connection between the Kinect gesture recognition and music processing components using OSC messages, which we were not able to do for P5. This means that we did not need to use any Wizard of Oz techniques for P6 testing. (A sketch of how this OSC handoff might look is shown after this list.)

  • Implemented the GUI for selecting music (this was just a mock up in P5).
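
Below is a minimal sketch of the kind of OSC handoff described above, written in Processing with the oscP5 library. The /gesture/... address pattern, the port numbers, and the onGestureDetected hook are illustrative assumptions, not our exact implementation.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress musicApp;

void setup() {
  oscP5 = new OscP5(this, 12000);                 // listen for OSC on port 12000
  musicApp = new NetAddress("127.0.0.1", 12001);  // address of the music component
}

void draw() {
  // nothing to draw; OSC messages are sent and received asynchronously
}

// Hypothetical hook called when the Kinect component recognizes a gesture.
void onGestureDetected(String gestureName) {
  OscMessage msg = new OscMessage("/gesture/" + gestureName);  // e.g. /gesture/play
  oscP5.send(msg, musicApp);
}

// On the music side, incoming messages would be handled something like this:
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/gesture/play")) {
    // start playback
  } else if (msg.checkAddrPattern("/gesture/pause")) {
    // pause playback
  }
}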

Method:

Participants:

The participants were selected at random (from Brown Hall). We pulled people aside, and asked if they had any dancing experience. We tried to select users who had experience, but since we want our system to be intuitive and useful for dancers of all levels, we did not require them to have too much experience.

Apparatus:

The equipment used included a Kinect, a laptop, and speakers. The test was conducted in a room in Brown Hall, where there was privacy, as well as a large clear area for the dancers.

Tasks:

1 (Easy). The first task we have chosen to support is the ability to play and pause music with specific gestures. This is our easy task, and we’ve found through testing that it is a good way to introduce users to our system.

2 (Medium). We changed the second task from “setting breakpoints” to “choosing music” as we wanted to make sure to include our GUI in the testing. We thought that the first and third tasks were adequate for testing the gesture control (especially since the gesture control components are still being modified). We had participants go through the process of selecting a song using our graphic interface.

3 (Hard). The third task is to be able to change the speed of the music on the fly. We had originally wanted to have the music just follow the user’s moves, but we found that the gesture recognition for this would have to be incredibly accurate for that to be useful (and not just frustrating). So, instead, the speed of the music will just be controlled with gestures (for speeding up, slowing down, and returning to normal). A sketch of how this kind of speed control could be implemented is shown below.
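
As a rough illustration of how this could work on the playback side, the sketch below uses Minim's FilePlayer and TickRate UGens in Processing to change playback speed when a gesture is recognized. The file name, the rate values, and the onSpeedGesture hook are placeholders rather than our actual code.

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
FilePlayer player;
TickRate rate;

void setup() {
  minim = new Minim(this);
  out = minim.getLineOut();
  player = new FilePlayer(minim.loadFileStream("song.mp3"));  // placeholder file
  rate = new TickRate(1.0f);          // 1.0 = normal speed
  player.patch(rate).patch(out);      // player -> rate control -> speakers
  player.loop();
}

void draw() {
  // nothing to draw; speed changes happen in the gesture hook below
}

// Hypothetical hook called when a speed-control gesture is recognized.
void onSpeedGesture(String gesture) {
  if (gesture.equals("speedUp"))   rate.value.setLastValue(1.25f);  // faster
  if (gesture.equals("slowDown"))  rate.value.setLastValue(0.75f);  // slower
  if (gesture.equals("normalize")) rate.value.setLastValue(1.0f);   // back to normal
}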

Procedure:

For testing, we had each user come into the room and do the tasks. We first read the general description script to the user, then gave them the informed consent form, and asked for verbal consent. We then showed them a short demo. We then asked if they had any questions and clarified any issues. After that, we followed the task scripts in increasing order of difficulty. We used the easiest task, setting and using pause and play gestures, to help the user get used to the idea. This way, the users were more comfortable with the harder tasks (which we wanted the most feedback on). While the users were completing the tasks, we took notes on general observations, but also measured the statistics described below. At the end, we had them fill out the brief post-test survey.

Test Measures:

Besides making general observations as the users performed our tasks, we measured the following:

  • Frustration Level (1 to 5, judged by body language, facial expressions, etc.), as the main purpose of our project is to make the processes of controlling music while practicing as easy as possible.

  • Initial pickup – how many questions were asked at the beginning? Did the user figure out how to use the system? We wanted to see how intuitive and easy to use our system was, and whether we explained our tasks well.

  • Number of times reviewing gestures – (how many times did they have to look back at the screen?) We wanted to see how well preset gestures would work (is it easy to convey a gesture to a user?)

  • Attempts per gesture (APG) – The number of tries it took the user to get the gesture to work. We wanted to see how difficult it would be for users to copy gestures and for the system to recognize those gestures, since getting gestures recognized is an integral part of the system.

  • Success – (0 or 1) Did the user complete the task?

Results and Discussion

Here are statistical averages for the parameters measured for each task.

                        Task 1    Task 2    Task 3
Frustration Level       1.67      0.33      3
Initial Pickup          0         0         0.33
Reviewing Gestures      1                   1.33
Success                 1         1         0.66
Play APG                3
Pause APG               1.33
Normalize Speed APG                         1.5 on success; 10 on failure
Slow Down APG                               2.33
Speed Up APG                                4.33

From these statistics, we found that the APG for each task was relatively high, and high APGs correlated with higher frustration levels. Also, unfortunately, the APG didn’t seem to go down with practice (from task 1 to task 3, and across the 3 different gestures in task 3). The gestures with the highest APG (10+ on normalize speed, and 8 on speed up) both happened towards the end of the sessions.

These issues seem to stem from misinterpretation of the details of the gesture on the screen (users tried to mirror the gestures, when in fact the gestures should have been performed in the opposite direction), as well as general issues in the consistency of the gesture recognition. All three users found the idea interesting, but as users 1 and 3 commented, gesture recognition needs to be robust for the system to be useful.

Changes we plan to make include a better visualization of the gestures, so that users have less trouble following them. Based on the trouble some of our users had with certain gestures, we’ve decided that custom gestures are an important part of the system (users would be able to set gestures that they are more comfortable with, and will be able to reproduce more readily). This should alleviate some of the user errors we saw in testing. On the technical side, we also have to make our gesture recognition system more robust.

Appendices

i. Items Provided / Read to Participants:

ii. Raw Data

Links to Participant Test Results:

Individual Statistics:

Participant 1:

                        Task 1    Task 2    Task 3
Frustration Level       2         0         4
Initial Pickup          0         0         1
Reviewing Gestures      3                   2
Success                 1         1         0
Play APG                5
Pause APG               2
Normalize Speed APG                         10 (failure)
Slow Down APG                               3
Speed Up APG                                2

Participant 2:

                        Task 1    Task 2    Task 3
Frustration Level       1         0         2
Initial Pickup          0         0         0
Reviewing Gestures      0                   1
Success                 1         1         1
Play APG                1
Pause APG               1
Normalize Speed APG                         1 (success)
Slow Down APG                               2
Speed Up APG                                3

Participant 3:

                        Task 1    Task 2    Task 3
Frustration Level       2         1         3
Initial Pickup          0         0         0
Reviewing Gestures      1                   1
Success                 1         1         1
Play APG                3
Pause APG               1
Normalize Speed APG                         2 (success)
Slow Down APG                               2
Speed Up APG                                8

Video

 

Group 10 – P4

Group 10 – Team X

Group Members

-Junjun Chen (junjunc)

-Osman Khwaja (okhwaja)

-Igor Zabukovec (iz)

-(av)

One Sentence Project Summary

A “Kinect Jukebox” that lets you control music using gestures.

Test Method

i. We read the informed consent script to the users and asked for verbal consent. We felt that this was appropriate because the testing process did not involve any sensitive or potentially controversial issues, and because the only photos we took did not reveal the participants’ identities, avoiding any issues with that. LINK

ii. One of our participants was a participant that we had conducted previous research on, and therefore we felt that it was appropriate to see how she interacted with a prototype that her feedback had played a part in developing. The other two were new: a man and a woman who have a more amateur interest in dance than our previous testers. We decided to use them because they had previously expressed interest in using the system, but also would provide a slightly different perspective from the much more experienced subjects that we had used before.

iii. The testing environment was in Alejandro’s room, because he had a large clear area for the dancers, as well as a stereo for playing the music. The prototype setup was mostly just a computer (with software for slowing down and playing music) connected to a stereo. We also had some paper prototypes for configuration. One person was behind the computer, controlling the audio playback. The placement of the computer also represented the direction the Kinect was pointing in.

iv. For testing, we had each user come into the room and do the tasks. Osman made the scripts, Alejandro was the “Wizard of Oz”, and Junjun and Igor observed/took notes. First we read the description script to the user, then showed them a short demo. We then asked if they had any questions and clarified any issues. After that, we followed the task scripts in increasing order of difficulty. We used the easiest task, setting and using pause and play gestures, to help the user get used to the idea. This way, the users were more comfortable with the harder tasks (which we wanted the most feedback on).

Links: DEMO. TASK 1. TASK 2. TASK 3.

Results

All of our users had little trouble using our system once we explained the setup. They were a little confused about what the “system” actually was at the beginning, as our prototype was basically just Wizard of Oz. However, after doing the first task, they quickly became comfortable with the setup.

For two of our users, we had to explain the field of view of the Kinect and that it wouldn’t be able to see gestures if they were obscured. For our real system, there would be less confusion about this, as users would be able to see the physical Kinect camera. When we visualized our gestures, we thought of them as being static (“holding a pose”). However, all of our users used moving gestures instead, which suggests that these may be more natural. Additionally, the first gestures our users selected were dramatic and, as one user later commented, “not very convenient”, suggesting that they did not exactly understand what the gestures would be used for (that they would have to repeat them every time to play/pause, etc.). Their later gestures were more pragmatic. Our second user asked for several additional functionalities during the test, such as additional breakpoints and more than one move for “follow mode”. He also commented on how the breakpoints task would be useful in dance studios and for dance companies.

Discussion

The Kinect field of view problem was something we knew about, but we had not considered this a major issue before now. However, the fact that two of our testers tried to set gestures that included moving a hand behind their body suggests that this may be an issue. We had originally thought to use static gestures, which would be easier to capture, but since all of our users used moving gestures, it may be best to allow that. We had originally thought that allowing only one breakpoint would be enough, but after testing that task with our second user, he immediately asked about how to set more than one. This suggests that we should include the ability to set multiple breakpoints in our system. For follow mode, all of our users were confused when we asked them to perform only one single move, and they felt awkward repeating that move many times. In the words of user two, “You can’t just isolate one move.” Ideally, we would be able to follow a series of moves, but the implementation of this may prove difficult. Therefore, we are considering changing this functionality to just allow a gesture for fast forwarding/slowing down.

Plan

Given that the usability of our prototype would depend on what functionality the Kinect can give us, we feel that it would be best to start building a higher-fidelity prototype with the Kinect.

Images of Testing


Group 10: Lab 3

Group Number and Name

Team X – Group 10

Group Members

  • Osman Khwaja (okhwaja)
  • Junjun Chen (junjunc)
  • Igor Zabukovec (iz)
  • (av)

Description of System

We built a crawler that propels itself forward by “punting”. We were inspired by punt boats, which are propelled by a punter pushing against the bottom of a shallow riverbed with a long pole. Our system works by having the crawler push against the surface of the table with a short pole (attached to a servo motor), sliding itself forward as a result. The crawler does not actually move in the direction that we expected when we made it, and another problem we had is that there was no way to do the equivalent of lifting the pole out of the water in order to bring it back to the initial position. These two problems combined resulted in a crawler that worked differently from how we first intended, but nevertheless managed to scuttle forward fairly effectively. To improve this, we might try mounting the servo motor on a wheeled cart, which would help its movement be more consistent with our intention.

Brainstorming

  1. Ballerina – DC motor that spins a pencil wearing something similar to a dress
  2. Rocking Chair – Uses servo motor to rotate between end positions, just like a rocking chair
  3. Unicycle – wheel surrounds a DC motor and the wheel rotates as the DC motor does
  4. Yarn Ball Retracer – DC motor spins up along a string and moves eventually until all the string is wound up again
  5. Waddle Walker – Uses the servo motor to awkwardly waddle/walk in a semi-straight line by alternating leg movements
  6. Log roller – DC motor attached to a log-shaped object that is connected to the spinning axle of the motor
  7. Reversed roller – DC motor spins a gear which is connected to an object that moves along the ground. The direction of rotation of the DC motor is reversed due to the gear connection, and the object rolls in the reverse direction
  8. Hamster Ball – A DC motor is attached to a wheel (or small ball) inside a ball (the wheel is not connected to the ball, but pushes it).
  9. Wheelbarrow – A DC motor moves the front wheels of the vehicle, but it only moves if something is holding up the back.
  10. Lurcher – A two legged robot, one leg is static and being dragged by the other, which takes steps using a servo motor.
  11. Propellor Robot: a robot on wheels that moves by using a propellor attached to the back that blows it along.
  12. Punt: We attach a punting pole to a servo motor, and have it in a wheeled cart. The motor pushes the pole into the ground periodically, pushing itself forward.

Sketches

Circuit Sketch (taken from Arduino tutorial on Servo Motors):

 

Crawler Sketch:

sketch

 

Demonstration Video

Parts Used

  • Servo motor
  • Arduino
  • Cardboard Arduino box
  • Empty spool (to raise servo motor)
  • Straws (for punting pole)
  • Electrical tape
  • Jumper wires

Instructions

  1. Tape the empty spool vertically onto the empty Arduino Uno box.
  2. Tape the servo motor horizontally on top of the spool.
  3. Fashion an oar out of some small sticks by taping them together. At one end of the oar (the base), create a rounded end with electrical tape.
  4. Tape the oar to the rotating part of the servo motor such that the oar will hit the ground during rotation. Attach the oar such that it provides forward locomotion by angling it away from the box.
  5. Connect the servo motor to the breadboard and the Arduino as shown in the picture from the Adafruit learning system tutorial.
  6. Connect the Arduino to power and let it propel itself forward.

Source Code

#include <Servo.h>

int servoPin = 9;   // servo signal wire on digital pin 9
Servo servo;

int angle = 0;      // current arm angle in degrees

void setup()
{
  servo.attach(servoPin);
}

void loop()
{
  // Sweep the punting pole from 0 to 180 degrees, then start over.
  angle++;
  if (angle > 180) {
    angle = 0;
  }
  servo.write(angle);
  delay(15);   // give the servo time to move before the next step
}

Group 10: P3

Group Number and Name

Group 10 – Team X

Group Members

  • Osman Khwaja (okhwaja)
  • JunJun Chen (junjunc)
  • Igor Zabukovec (iz)
  •  (av)

Mission Statement

Our project aims to provide a way for dancers to interact with recorded music through gesture recognition. By using a Kinect, we can eliminate any need for the dancers to press buttons, speak commands, or generally interrupt their movement when they want to modify the music’s playback in some way. Our motivation for developing this system is twofold: first of all, it can be used to make practice routines for dancers more efficient; second of all, it will have the potential to be integrated into improvisatory dance performances, as the gestural control can be seamlessly included as part of the dancer’s movement.

Prototype Description

Our prototype includes a few screens of a computer interface which would allow the user to set up/customize the software, as well as view current settings (and initial instructions, gestures/commands). The rest of the prototype depends heavily on Wizard of Oz components, in which one member of our team would act as the Kinect and recognize gestures, and then respond to them by playing music on their laptop (using a standard music player, such as iTunes).

Our prototype will have three different screens to set-up the gestures. Screen 1 will be a list of preprogrammed actions that the user can do. These include stop, start, move to last breakpoint, set breakpoint, go to breakpoint, start “follow mode”, etc.

     set_gesture_main

Once the user selects a function, another screen pops up that instructs the user to go make a gesture in front of the Kinect and hold it for 3 seconds or so.

capturing

Once the user creates a gesture, there will be a verification screen that basically reviews what functionality is being created and prompts the user to verify its correctness or re-try to create the gesture.

save_gesture

Tasks

(So that we can also test our setup interface, we will have the user test/customize the gestures of each task beforehand, as a “task 0”. In real use, the user would only have to do this as an initial setup.)

The user selects the function that they want to set the gesture for:

choose_gesture

The user holds the position for 3 seconds:

pause_gesture

The user confirms that the desired gesture has been recorded, and saves:

save_osman

An illustration of how our prototype will be tested is shown in the video below. For task 1, we will have the users set gestures for “play” and “pause”, using the simple menus shown. Then we will have them dance to a recorded piece of music, and pause / play it as desired. For task 2, we will have them set gestures for “set breakpoint” and “go to breakpoint”. Then they will dance to the piece of music, set a breakpoint (which will not interrupt the playback), and then, whenever desired, go back to that breakpoint. For task 3, we will have the users set a gesture to “start following”, and record a gesture at normal speed. We will then have the users dance to the piece of music, start the following when desired, and then adapt the tempo of the music playing according to the speed of the repeated gesture.

Our “Wizard in the Box”, controlling the audio playback:

wizard_in_box

Discussion

i. We made our initialization screens using paper, but the bulk of it was “Wizard of Oz”, and just responding to gestures.

ii. Since our project doesn’t really have a graphic user interface, except for setup and initial instructions, we relied heavily on the Wizard of Oz technique, to recognize and respond to gestures and voice commands. Since what the user would mostly be interacting with is music and sound, which can’t be represented well on paper, we felt it was appropriate to have our prototype play music (the “wizard” would just push play/pause, etc on a laptop).

iii. It was a little awkward to try to prototype without having the kinect or even having a chance to get started creating an interface. Since users would interface with our system almost completely through the kinect, paper prototypes didn’t work well for us. We had to figure out how to show interactions with the kinect and music.

iv. The Wizard of Oz technique worked well, as we could recognize and respond to gestures. It helped us get an idea of how tasks 1 and 2 work, and we feel that those can definitely be implemented. However, task 3 might be too complicated to implement, and it might be better to replace it with a simpler “fast-forward / rewind” function.

A3 – iTunes Group

Sam, Yared, Connie, Alejandro

i. Major problems that we found include:

  • Inconsistencies in the left-hand menus (H4). We can fix this by keeping a left-hand menu present the whole time, rather than changing it constantly (this would also be consistent with the Finder).
  • Difficulty finding the Help file in Windows, and Help isn’t available when offline (H10). Have a question mark icon in the upper left-hand bar.
  • Inconsistency between top-bar menus (Music, Videos, etc.) (H4). Have “List” do the same thing in both, add an “Unplayed” option under Music, parallel to “Unwatched”.
  • The status window is used for all statuses, and will block out other information (H1). This can be fixed by splitting the status window into several windows with specific purposes.
  • Users can’t stop the “Up Next” playlist from adding the next songs on a list to it by default (H3). We could simply have this be something that the user can deactivate through the Preferences menu.

ii.

  • We wouldn’t necessarily have looked for “help” if we hadn’t been told to.
  • Most of the problems we found were issues that we encountered throughout extended use of the program in a normal setting. The challenge was then to see under which Nielsen heuristic these issues fit. In that sense, the Nielsen heuristics made us think more about what category these problems fit under and how we could change them.

iii.

  • Many users found “problems” related to the system changing from generation to generation. These are more about getting used to the changes than real “problems”, and do not fit within the Nielsen heuristics.
  • A heuristic we could use is “Generational Consistency” and “Cross-Platform Consistency”. 

iv.

  • Name five Nielsen heuristics.
  • How important do you think it is to keep programs consistent from one generation to the next? Should this be considered when judging the heuristics of a program?

Links to our PDFs:

Alejandro: https://www.dropbox.com/s/qobyn9i3zmoe59k/av_notes.pdf

Connie: https://docs.google.com/file/d/0B6ZddJLaXAx7N3duWUx5WVhRVTQ/edit?usp=sharing

Yared: https://www.dropbox.com/s/9g7frrclv2wc9hi/Heuristic%20Evaluation.pdf

Sam Payne: https://www.dropbox.com/s/9qgkodjdrnasy9u/SamPayneA3.pdf

P2 – Team X

Group 10:  “Team X”

Osman Khwaja (okhwaja)
Junjun Chen (junjunc)
Igor Zabukovec (iz)
(av)

Group Members:

We all observed dance rehearsals together, and afterwards met and discussed what features we’d want our project to have. Alejandro did some more in-depth interviews of dancers, Junjun and Osman took care of the storyboarding part, and Igor sketched out a user interface.

Problem and Solution Overview

We are addressing the problem of a dancer having to go back and forth to whatever device they are using to play music while practicing or choreographing, which results in inefficiency. Our system will recognize user-defined gestures to start or stop the music, to loop a section of the music, and to speed it up or slow it down, so that dancers can control these parameters remotely, while retaining complete control over their movement.

Users Observed in Contextual Inquiry

Our target user group is dancers who practice or choreograph using previously recorded music. Obviously, if dancers do not use recorded music, then they do not need to control it. Furthermore, we did not feel the need to limit our system to specific styles of dance, because it can apply to all dance where recorded music is used. We conducted contextual inquiry on three different dancers. Dancers 1 and 2 are seniors in the Comparative Literature department and both are pursuing Certificates in Dance. They both do contemporary styles of dance and Dancer 2 also works as a choreographer. Dancer 1 is interested in the relationship between dance, theatre and music.  Dancer 3 is a graduate student in political economy who danced professionally for several years before going to college, primarily hip hop and some contemporary, and continues to dance for her own enjoyment. All three seem very interested in improvised dance and contemporary choreography, and as described they have different levels of commitment to dance.

Interview Descriptions

We observed Dancer 2 in rehearsals in New South Dance Studios for a thesis production, Dancer 3 practicing and improvising to music by herself, and Dancer 1 practicing for a dance sequence in a play. We watched these rehearsals taking place, asked the dancers how they felt that their movements connected to the music that they were using, watched how often they felt the need to change how the music was playing, and watched how they undertook that task. We interviewed them about their general practice routine, with a variety of questions that aimed to have them propose ways in which they thought it could be improved. We then described our design idea to them and asked them more specifically what they thought of it and how they thought it might fit into their practice and choreographing routines.

A lot of what we observed and were told by the dancers was fairly unexpected. Although it was clear that starting/stopping music was necessary, overall it seemed that they used those moments to rest or think more carefully about their moves, or to discuss what they were doing with someone else if they were not dancing alone. Furthermore, when asked, they said they felt that this was only somewhat annoying, and that it was a natural and important part of their practice routine. It seemed obvious to us that, although the system might improve efficiency, this wasn’t something crucial for dancers and therefore would not necessarily interest them enough to have them adopt a new system.

When we explicitly discussed the system with them, all three of the dancers agreed that it would be a useful and interesting system to use. However, they seemed to be interested in using it in different ways. Dancer 1 thought that it would be most interesting as an actual performance tool, something with which a dancer would interact in a live setting, rather than a practice setting. Dancer 2 thought it would be a good tool for practicing but did not think it would be interesting for choreographing, as she prefers to choreograph without music, and then see how the music will fit with the dance (this is the same thing we observed when watching her rehearsal: it was clear that the music was added afterward, and it didn’t seem like our system would be particularly applicable in that case). Like Dancer 1, she thought it would be most interesting when applied to a live setting, which would allow dancers to interact with a computer playing recorded sound in a similar way to how they could interact with live musicians. Dancer 3 suggested that it would be good both as a practice and a choreographing tool, and thought that it could even be extended to edit music on the fly when trying to develop both the dance and musical aspects of a performance.

In conclusion, these observations led us to see our project’s development in the following way: we will start from a practical perspective, where a dancer can use the system to practice, and the system will accomplish the very specific tasks that we will describe later. However, we hope that this will be the basis of what can then become an interactive system which can be introduced into improvised dance performances, allowing dancers to interact with the music on the fly, something that they seemed to think would be even more valuable than the improvement of the practice routine that we have in mind as a starting point.

Task Analysis Questions

  1. Who is going to use system?
    Dancers, and people teaching dancers, who want to practice or choreograph their dance to a recorded song.
  2. What tasks do they now perform?
    Currently, dancers use some sort of jukebox/stereo/computer to play their recording. They start the song from the beginning or some relevant point and then get into position. The music plays at its normal pace and they dance. When they mess up, they have to head over to the computer and start it again. This process repeats
  3. What tasks are desired?
    Dancers want to be able to play a relevant part of music that they’re trying to learn and practice until they mess up. If they do, they want an easy way to go back and restart the music from that point. Additionally, if the music or moves are too fast to learn at the normal pace, they could use some way to slow down the music.
  4. How are the tasks learned?
    The process of creating a methodology of learning how to dance basically comes down to one’s personal experience trying to learn dances. There isn’t a formalized approach that all dancers adhere to.
  5. Where are the tasks performed?
    Normally performed in a dance studio
  6. What’s the relationship between user & data?
    There isn’t necessarily user data created by the process of learning to dance. Optionally, one can create a video of the dance and watch it, but that’s not always necessary because dance studios usually have mirrors
  7. What other tools does the user have?
    Normally the user has a stereo, iPod, computer, or something that can play music and allows them to start and restart it
  8. How do users communicate with each other?
    There isn’t a relevant user to user interaction that needs to be addressed
  9. How often are the tasks performed?
    Normally, Dancers practice dances with a show upcoming. Often, they also just do it for fun. Dancers practice as often as necessary to learn the dance.
  10. What are the time constraints on the tasks?
    Dancers normally have a limited amount of time to learn a dance (before a show or something) so they want to have the most efficient practice.
  11. What happens when things go wrong? (Pretty sure this question is not relevant)
    It is rarely the case that there is some sort of problem with the music playing that causes the dance practice to come to a halt. I guess if something happens, then they have to reboot, find a new way to play music, or just reschedule the dance practice. I don’t think this question is very relevant to our specific inquiry

 Description of three tasks

  1. (Easy) The user would be able to stop and start the music. Currently, the user would have to either: have someone else control the music, walk over to a computer or other sound system and push start/stop (with a delay to get into back into position), or control the music with a remote (which they would have to carry with them, hindering their movements). With our system, there would be gesture and voice commands for pause and play. Ideally, the system would also know to pause the music when the user stops dancing, and start when the user starts. This would make this task incredibly easy with the proposed system, as user would not have to think about the task at all.
  2. (Medium) The user would be able to repeat sections of the music. When practicing anything, repetition is key, and dancers often have to repeat sections of a dance to learn it. Currently, to go back to a section, the user would need to press buttons to skip to the proper section, or if the music isn’t divided into sections, they may have to fast forward or backward to seek to the proper place in the song. It is difficult to stop the music, seek to the right place, and then start it. With our proposed system, the dancer would be able to set “breakpoints” in the music with gestures while running through it, and then go back to that point later. This would make the process much easier (a sketch of how such breakpoints could work is shown after this list).
  3. (Hard) The speed of the music would change to follow the dancer. Currently, there is no way to do this on the spot. Either the speed has to be changed beforehand, with some software, or the dancer has to follow the music (perhaps stopping and rewinding a section many times until they can follow that tempo). With our system, the speed of the music would change based on the dancer. This would give dancers who are just learning the choreography an easier, and perhaps less frustrating, way of practicing with music, and it would give a dancer who knows the choreography more freedom of expression. It would be very easy to do on our system, as it could happen automatically (the user would indicate beforehand whether they want this to happen).
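
As a sketch of how the breakpoint mechanic in task 2 could work on the playback side, the snippet below uses Minim's AudioPlayer in Processing: one gesture stores the current position, another cues back to it. The file name and the two gesture hooks are hypothetical, not part of our actual design.

import ddf.minim.*;

Minim minim;
AudioPlayer player;
int breakpointMs = 0;  // last stored breakpoint, in milliseconds

void setup() {
  minim = new Minim(this);
  player = minim.loadFile("song.mp3");  // placeholder file
  player.play();
}

void draw() {
  // nothing to draw; breakpoints are handled in the hooks below
}

// Hypothetical hooks called when the corresponding gestures are recognized.
void onSetBreakpoint() {
  breakpointMs = player.position();  // remember where we are, without stopping playback
}

void onGotoBreakpoint() {
  player.cue(breakpointMs);          // jump back to the stored point
}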

Interface Design

TEXT DESCRIPTION

Our system would provide a way for dancers to interact with music without having to think about it. The system would provide defined gesture and voice commands for the user to stop, start, and repeat a section of music, but additionally, the system would also try to automatically follow the dancer’s intentions without additional gestures (the music will stop when the dancer stops moving, etc.). The system would also allow the user to set the pace of the music, by following the dancer’s steps. All the user would have to do is tell the system to enter this “follow” mode, and the rest would be automatic. This allows the user to basically focus on their practice without having to think about the technology. There are existing systems for interacting with music, including keys on a computer, remotes, and even voice commands, but what makes our idea different is that it would be more “intelligent” and responsive, making the routine more fluid and efficient. Some other systems similar to ours are those which use the Kinect to interact with music. One example is a system that translates a user’s moves into music (http://www.mendeley.com/catalog/creating-musical-expression-using-kinect/). Our system would be different in that it focuses on practicing: the user group and aim are different.

STORYBOARDS

TASK 1:

IMG_20130311_154800 IMG_20130311_154756 IMG_20130311_154750_1

 

TASK 2:

photo 1 photo 2 photo 3 photo 4 photo 5

TASK 3:

photo 1 photo 2 photo 3 photo 4 photo 5

SKETCHES

photo photo2 photo3

Team X – Lab 2

Group 10:

  • Osman Khwaja (okhwaja)
  •  (av)
  • Igor Zabukovec (iz)
  • Junjun Chen (junjunc)

Prototype 1: The Buzz Drum

The piezo element picks up knocks (of varying loudness), causing the buzzer to buzz.

This prototype uses the Piezo sensor and a buzzer. When the piezo sensor senses a vibration, like when we knock on the surface, it sends a signal to the Arduino which then causes the buzzer to buzz. So, essentially, when you drum it, it buzzes.

 

Prototype 2: The Flex Buzz

The flex sensor changes the frequency of the buzzer tone.

This prototype uses the buzzer and the flex sensor. When the sensor is flexed, the tone of the buzzer changes. In theory, it sounded like a good idea. In reality, we didn’t have much control over the flex sensor’s output values and the change in the buzzer was not as apparent as we had hoped.

Final Prototype/Prototype 3: DJ Breadboard

Our last prototype uses a potentiometer, accelerometer, and a button all located on the breadboard to control the output sound from the computer.

We thought this would be really cool because, using a couple of sensors, we could control many different aspects of the sound. The potentiometer controls the frequency of the synthesized sound (just a triangle wave and a sine wave together). The x and y axes of the accelerometer control the left-right panning of the triangle and sine wave, respectively. Lastly, the button fades the sound in or out. We used Arduino and the Minim sound library for Processing.
Final System Parts:

  1. Arduino + USB cord + Breadboard
  2. Wires
  3. 1 10 kohm resistor
  4. 1 ADXL335 accelerometer
  5. 1 Button
  6. 1 Rotary Potentiometer

Instructions:

Connect the accelerometer to the Arduino (Vin to A0, 3Vo to A1, Gnd to A2, Yout to A4, Xout to A5). Note that we don’t connect Zout to the Arduino, as we won’t be using the Z axis, and we need A3 for the rotary pot. Connect the button to digital pin 2 (as well as 5V and the 10kohm resistor to ground). Connect the rotary pot to A3 (and 5V and ground). Run the following code in Arduino and Processing.

Arduino Code:

// these constants describe the pins. They won't change:
const int groundpin = A2; // analog input pin 2
const int powerpin = A0;  // analog input pin 0
const int xpin = A5;      // x-axis of the accelerometer
const int ypin = A4;      // y-axis
// const int zpin = A3;   // z-axis
const int pot = A3;
const int button = 2;

void setup()
{
  // initialize the serial communications:
  Serial.begin(9600);

  // Provide ground and power by using the analog inputs as normal
  // digital pins. This makes it possible to directly connect the
  // breakout board to the Arduino. If you use the normal 5V and
  // GND pins on the Arduino, you can remove these lines.
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  pinMode(button, INPUT);
  pinMode(pot, INPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);

  establishContact();
}

void loop()
{
  if (Serial.available() > 0) {
    // print the sensor values:
    double b = (digitalRead(button) == HIGH) ? 1 : 0;
    double a = analogRead(pot) / 1023.0;
    double x = (343.0 - analogRead(xpin)) / (413.0 - 343.0);
    double y = (341.0 - analogRead(ypin)) / (411.0 - 341.0);

    Serial.print(x);
    // print a comma between values:
    Serial.print(",");
    Serial.print(y);
    Serial.print(",");
    Serial.print(b);
    Serial.print(",");
    Serial.println(a);
  }
}

// Establish contact with processing.
void establishContact() {
  // Send an initial string until connection made
  while (Serial.available() <= 0) {
    Serial.println("0,0,0");
    delay(300);
  }
}

Processing Code:

import ddf.minim.*;
import ddf.minim.signals.*;
import controlP5.*;
import processing.serial.* ;

Serial port;

Minim minim;
AudioOutput out;
SineWave sine;
TriangleWave tri;
ControlP5 gui;

public float triAmp = 0.5;
public float triPan = 0;
public float triFreq = 880;

public float sinAmp = 0.5;
public float sinPan = 0.5;
public float sinFreq = 440;

void setup()
{
size(512, 400, P3D);
minim = new Minim(this);
// get a stereo line out from Minim with a 2048 sample buffer, default sample rate is 44100, bit depth is 16
out = minim.getLineOut(Minim.STEREO, 2048);
// create a sine wave Oscillator, set to 440 Hz, at 0.5 amplitude, sample rate 44100 to match the line out
sine = new SineWave(440, 0.5, out.sampleRate());
// set the portamento speed on the oscillator to 200 milliseconds
sine.portamento(200);
// add the oscillator to the line out
out.addSignal(sine);
// create a triangle wave Oscillator
tri = new TriangleWave(880, 0.5, out.sampleRate());
tri.portamento(200);
out.addSignal(tri);

// set up the gui
gui = new ControlP5(this);
gui.addKnob("triFreq", 40, 5000, 10, 250, 60);
gui.addKnob("triAmp", 0, 1, 0.5, 85, 250, 60);
gui.addKnob("triPan", -1, 1, 0, 165, 250, 60);
gui.addKnob("sinFreq", 40, 5000, 280, 250, 60);
gui.addKnob("sinAmp", 0, 1, 0.5, 355, 250, 60);
gui.addKnob("sinPan", -1, 1, 0, 430, 250, 60);

/* Get serial data. */
println("Available serial ports:");
println(Serial.list());

port = new Serial(this, Serial.list()[0], 9600);

port.bufferUntil('\n'); /* Read in one line at a time. */
delay(1000);

}

void draw()
{
background(0);
gui.draw();
stroke(255);
// draw the waveforms

}

void stop()
{
out.close();
minim.stop();

super.stop();
}

void serialEvent(Serial port) {

String myString = port.readStringUntil('\n'); /* Read in a line. */
myString = trim(myString); /* Remove leading / trailing whitespace. */
float sensors[] = float(split(myString, ','));

if (sensors.length == 4) {
tri.setFreq(sensors[3] * 1000);
tri.setAmp(triAmp);
tri.setPan(sensors[0]);
sine.setFreq(sensors[3] * 1000);
sine.setAmp(sinAmp);
sine.setPan(sensors[1]);

if (sensors[2] < 0.5) {
tri.setAmp(0);
sine.setAmp(0);
}
if (sensors[2] > 0.5) {
tri.setAmp(1);
sine.setAmp(1);
}

// Print out sensor values:
for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
}
println();
}

port.write("A"); // Send byte to get more data from Arduino program.
}

P1 – Team X

Group 10:

  • Osman Khwaja (okhwaja)
  • (av)
  • Igor Zabukovec (iz)
  • Junjun Chen (junjunc)

Brainstorming List

  1. Fingerprint sensing bike lock opener.
  2. Password gesture for opening a door.
    s4
  3. Darkness detecting light system for dorm rooms.
  4. Mood Lighting based on music.
  5. Device that automatically scrolls Facebook based on your eye position.
  6. Kindle that flips the page based on your eye position.
  7. Switch computer windows with gestures.
  8. Shades that close based on how bright it is outside.
    s5
  9. Shades on a timer for taking naps.
  10. Alarm clock that opens blinds.
  11. Alarm clock that senses if you’re still in bed.
  12. Drawers that open pants/shorts, t-shirts/long sleeve based on the weather.
  13. Golf swing analyzer sunglasses.
  14. Smart foot stool that softens when you put your foot on it but stays rigid when other things are on it.
  15. Automatic coiling headphones that coil when you tug on them 3 times above a certain tension threshold.
  16. A mouthguard type sensor that tells if, after brushing your teeth, your mouth has an acceptable amount of plaque.
  17. A laptop monitor sensor that adjusts the angle of the screen based on your positioning in bed.
  18. A smartphone keyboard that adjusts to how you’re holding it (i.e. if holding it with one hand, will make a viable keyboard that caters to your one-handedness).
  19. When connecting external monitors to your laptop, you have to set your screen’s relative position. How about creating a system that adjusts the external screen based on your position relative to the screen?
  20. A running shoe step monitor that tells a runner if they’re pronating their foot properly and recommends what type of shoe to buy.
  21. A gadget that lets you compare prices of a certain good while you’re at the store. Imagine you walk into Walmart and want to know if you’re getting a good deal on these headphones. Pull out smartphone/gadget, take picture, and a list of competitor prices shows up. (Kayak for real life).
  22. A toothbrush that tells you if you’re brushing your teeth too hard.
  23. Shoes that, as you’re jumping in the air, brace for impact by increasing cushion (I think laptops have something similar).
    s6
  24. A phone, that when placed on your bed, senses that you’re asleep and automatically moves calls to voicemail.
  25. Developing an alternative for knobs for music production programs (knobs are terrible to use with the mouse … we could just use physical knobs, but is there something more interesting?).
  26. Turn a laptop into a touch screen.
  27. Water dispenser that measure the amount of water in your cup (so it can stop pouring).
  28. Refrigerator that tracks what you have (and how much) and tells you what you need to buy.
  29. Sensor that reminds you to water your plants when the soil is dry
  30. Clothing that tells you if you’re slouching
  31. Clothing that indicates when it is stained / wrinkled / untucked, etc.
  32. Glasses that indicate whether or not you are straining your eyes too much (for example if you spend a lot of time in front of your computer screen).
  33. Headwear to help blind people navigate (senses when there is something in their path)
  34. Sound visualizer: Turn sounds to colors (for deaf people, or just to visualize sounds)
  35. System for dancers to learn moves that is synchronized to music. i.e. if they practice moves slowly, the music will play slowly, if they stop, it stops, etc.
    s2
  36. Similarly, an interactive system that responds to a dancer’s movements by generating both sound and visuals, allowing the dancer to control an entire multimedia performance (not necessarily just for practice).
  37. A similar system that compares a dancer’s moves against a previously recorded prototype (say by a teacher), so it can show mistakes in practice.
  38. A similar system for musicians to practice with an accompaniment track (slows down when they do, etc.)
  39. System that tells you if you have everything you need when you leave your room
  40. System that helps you find stuff in your room
  41. Live responsive software to music for a party : visuals are created according to the music played by the band/dj : sound, patterns, volumes. Possibility to use machine learning algorithms so that the software could recognize patterns in music
  42. Live responsive software to people’s movement in space for party: create visuals according to their movements: combining micros and kinects and use effects.
  43. Combine 40 and 41 : create a whole integrated systems for parties, where the experience keeps changing.
  44. Tools on shopping carts that reads your shopping list on a USB key. Then minimize the distance in the shopping mall and tells you where to go exactly on a screen placed on the shopping cart.
    s3
  45. Shopping cart as above, but instead of just telling you where to go, drives itself following the calculated path.
  46. Bike padlocks that have heating function (using a battery) so that the lock does not get stuck when it freezes (this happened to me a few times and it is very annoying).
  47. Electronic remote that can be used for all sorts of appliances: control your coffee machine, tv radio, etc.
  48. Same idea, except using voice and gesture recognition:  by saying “coffee”, “TV”, etc. turn things on or off, and by raising or lowering your arm change the volume (for example).
  49. Tool to plug and heat the coffee machine and make your coffee when your alarm clock goes off or when you go out of bed (in connection with idea 11) so that you do not have to wait in front of your coffee machine.
    s1
  50. System to drive a car using only eye movements, for paraplegics.
  51. Program that allows you to change the source code file that you are working on without using the keyboard, so that you separate the functions of writing code and of choosing the file.
  52. A new way of accessing different directories on the computer that simulates files organized in a 3d space.

Idea Chosen

We chose idea #35: System for dancers to learn moves that is synchronized to music (i.e. if they practice moves slowly, the music will play slowly, if they stop, it stops, etc). We chose this because this (more than many of our other ideas) applies to a specific target user group. We also think that learning moves is a real world problem that our system could realistically help solve. Our system wouldn’t interfere much with users’ normal practice, which means it wouldn’t be hard for them to use. Also, since there are many dancers on campus, we would have access to the target user group for testing. It seems feasible in terms of budget, as it wouldn’t require very many parts. It would be a Kinect-based application, but as a second idea, we could also use tethers and other sensors (large flex sensors, etc). It also seems feasible in terms of work, and it is also a good starting point from which we can build on if we have time to extend the project (see ideas #36 and #37).

Target User Group

Dancers are our target user group. Although this system would ideally be useful for all dancers, it will be particularly interesting to work with dancers who want to introduce non-traditional practices in their performance, where their craft consists of an interaction with the music, instead of being treated as a response to the music. A very important aspect of our system is to minimize any limitations to their movement, and to allow the system to work in a large space.

Project Description and Context

A system to allow dancers to practice moves without disturbing their practice. Currently, if  dancers want to change their music during practice, they must stop and go to a device that controls the music (like a computer or iPod). This would disrupt their practice, as it would require a lot of back and forth movement unrelated to the piece that they wish to perform. Additionally, since there is no easy way of controlling the tempo of the music on the spot, a dancer must adjust their moves to the predefined tempo. This may work for performances, but is not good for practice, when the dancer might want to practice some moves slower (but still to the music). This whole process might be improved with a technical solution that makes the music follow the person, instead of the other way around. Dancers must spend a lot of time practicing and they generally have a set practice location. This means that the problem solution does not have to be mobile, and that the user group would be motivated to use it. One related solution is using a remote control, but this means that the dancer must carry the remote with them. There are also other gesture based control systems, but they require the dancer to interrupt their practice in order to display these gestures to control the system. Our aim to develop a more “intelligent” and responsive system, to make the practice routine as fluid and efficient as possible.

Technology Platform

We considered several sensors:
– A Kinect would probably be the best and most useful, since it can easily capture movements. This would capture the whole body movement of the dancer, as well as allow the dancer to move around the room. Also, since we can use gesture controls with the Kinect, the dancer wouldn’t have to walk from their practice location to, say, a computer or other physical interface to control the music. A Kinect platform would be the least disruptive to a dancer’s normal practice.
– In case no Kinect is available, we could use tethers: with them it is possible to determine the position of the dancer and their movements. However, the dancers would have to be attached to physical objects on stage, which is not as practical, but could create interesting performance possibilities.
– Flex sensors with an Arduino could also be used, though this seems more difficult.

Sketches

sk1sk2

Lab 1 – DigiSketch

Group 10:

  • Junjun Chen (junjunc),
  • Igor Zabukovec (iz),
  •  (av),
  • Osman Khwaja (okhwaja).

Description: 

This DigiSketch is intended to renew the famous Etch-a-Sketch, providing a new way to write on the computer. The principle is quite simple: use the two potentiometers to draw lines of circles and the photo-sensor to change their color. You can basically write and draw whatever you want. Start by writing letters and basic forms, and continue by increasing the complexity of your drawings; to test how skillful you are, try to draw a circle. If the user is adventurous, the Processing program can be modified in order to use a different color palette, a different sized canvas, or a different pen shape.

Our system overall achieved its goal. However, the calibration of the photocell could be improved, in order to allow the user more precise control over the color. Perhaps a different sensor could be used for this purpose. To make the system more sophisticated, we could have three different sensors controlling red/green/blue, rather than a single sensor controlling the shade of one color, as we have now. In general, our system proves how easy it is to replicate the Etch-a-Sketch concept; the possibilities for refinement are vast.

Photos of sketches:

A system which allows you to draw in Processing using two rotary potentiometers to control  position, and using a photocell to control color.

photo2

A system which allows you to control the playback speed of a sound file by flexing your finger.

photo

An LED night-light system: turns on the night light when it is dark.

photo3

Storyboard: photo

DigiSketch in Action:

The Arduino and the circuit:

photo7

An example of different colors that can be obtained by varying input to the photocell:

photo6

An attempt at drawing a circle:

photo5

List of parts

  • Arduino, wires, USB cord, breadboard
  • 10 k ohm resistor
  • Photocell
  • 2 rotary pots

Instructions for Recreation

Connect a photocell to the breadboard, with a 10kOhm pull-down resistor from one leg of the photocell to ground. Connect the other leg to 5V. Connect the point between the photocell and the resistor to an analog pin on the Arduino (we used A0). Connect a rotary potentiometer to the breadboard so that one outer pin is connected to 5V and the other outer pin is connected to ground. Connect the middle pin to another analog pin (A1). Do the same for the other rotary pot to connect it to A2. Use the attached source code for Processing and Arduino – the Processing program receives messages from Arduino in order to draw on the screen.

Circuit diagram:

photo4

Arduino Source Code:

/* COS 436 - Lab 1 - Group 10 */
/* A Processing drawing system using two rotary potentiometers
   and a photocell. */
/* The Arduino program communicates with Processing with Serial port. */

int photoPin = 0; // the photocell is connected to A0
int pot1Pin = 1; // the 1st pot is connected to A1
int pot2Pin = 2; // the 2nd pot is connected to A2

int photoReading; // the analog reading from the photocell
int pot1Reading; // the analog reading from pot1
int pot2Reading; // the analog reading from pot2

// minimum and maximum values for calibration of photocell
int photoMin = 0;
int photoMax = 0;

void setup(void) {

  Serial.begin(9600);

  // Calibrate photocell during the first five seconds
  while (millis() < 5000) {
    // Read in value from photocell
    photoReading = analogRead(photoPin);
    // Record the maximum sensor value
    if (photoReading > photoMax) {
      photoMax = photoReading;
    }
    // Record the minimum sensor value
    if (photoReading < photoMin) {
      photoMin = photoReading;
    }
  }

  // Establish contact with Processing
  establishContact();
}

void loop(void) {
  // Only send messages when there is a connection
  if (Serial.available() > 0) {
    // Read input from Arduino's analog pins
    photoReading = analogRead(photoPin);
    pot1Reading = analogRead(pot1Pin);
    pot2Reading = analogRead(pot2Pin);

    // Map sensor value to desired range
    photoReading = map(photoReading, photoMin, photoMax, 0, 255);
    photoReading = constrain(photoReading, 0, 255);

    // Send sensor reading as comma separated values.
    Serial.print(photoReading);
    Serial.print(",");
    Serial.print(pot1Reading);
    Serial.print(",");
    Serial.println(pot2Reading);
  }
}

// Establish contact with processing.
void establishContact() {
  // Send an initial string until connection made
  while (Serial.available() <= 0) {
    Serial.println("0,0,0");  
    delay(300);
  }
}

Processing Source Code:

/* COS 436 - Lab 1 - Group 10 */
/* The Processing program recieves input from sensors which the 
   Arduino sends by serial port. */

import processing.serial.* ;

/* Serial port for message communication. */
Serial port;

/* Drawing parameters */
float ypos;
float xpos;
float fgcolor;

void setup() {
  size(500, 500); /* Canvas dimensions */
  background(255); /* White background. */

  println("Available serial ports:");
  println(Serial.list());

  port = new Serial(this, Serial.list()[0], 9600);  

  port.bufferUntil('\n'); /* Read in one line at a time. */
}

void draw() {  
  fill(fgcolor, fgcolor, fgcolor); /* Set color. */
  ellipse(xpos, ypos, 20, 20); /* Draw circle with a center determined by pots. */
}

void serialEvent(Serial port) {
  String myString = port.readStringUntil('\n'); /* Read in a line. */
  myString = trim(myString); /* Remove leading / tailing whitespace. */

  int sensors[] = int(split(myString, ',')); /* Split CSV input into int array. */

  // Print out sensor values:
  for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
    print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
  }
  println();

  if (sensors.length > 1) { /* else input is incorrectly formatted */
    fgcolor = 255 - sensors[0]; /* Set color of circles */
    xpos = map(sensors[1], 0, 1023, 0, width); /* Set x-position of circle center. */
    ypos = map(sensors[2], 0, 1023, 0, height); /* Set y-position of circle center. */
  }
  port.write("A"); // Send byte to get more data from Arduino program.
}