P6: AcaKinect

Your group number and name

Group 25 — Deep Thought

First names of everyone in your group

Neil, Harvest, Vivian, Alan

1-sentence project summary

AcaKinect is voice recording software that uses a Kinect for gesture-based control, allowing content creators to record multiple loops and beat sequences and organize them into four sections on the screen; this makes for a more efficient and intuitive music recording interface for those less experienced with the technical side of music production.

Introduction

We are evaluating the ability of musicians with various levels of technical and musical expertise to use the AcaKinect system to make music. AcaKinect’s functionality is a subset of what most commercial loop pedals and loop pedal emulators offer; it is intentionally designed not to provide a “kitchen-sink experience,” so that musicians (many of whom are not also technicians or recording engineers) can easily jump in and learn looping and sequencing without climbing the steep learning curve of commercially available looping products. This experiment therefore models what real users would do when faced with this system for the first time; we are much more concerned with letting users jump straight in and start making music than with enabling experienced users to access advanced functionality.

Implementation and Improvements

The implementation can be found here. We have fixed several instabilities and exceptions that could cause the program to crash, and we have added prototype indicators so that the user knows how many loops are in each column.

Method

Participants:

Participant 1 is an English major who has some technical music production background and has used GarageBand and Audacity to record raps. Participant 2 is a literature major with formal classical training in violin and piano, but no prior experience in the technical aspects of music production. Participant 3 is an ORFE major with formal training in flute and piano and also no prior experience with technical music production. Participants 2 and 3 were chosen as baseline users: how well can musicians with no background in recording or producing music use this system without any prior training? Participant 1, who has some recording experience, lets us determine whether a little prior technical knowledge makes the learning process much quicker.

Apparatus:

A laptop running Processing is connected to a Kinect and a microphone, and optionally a set of speakers; the user stands an appropriate distance from the Kinect and sings into the microphone. Note that the microphone should ideally be highly directional, so that the live-monitored output from the speakers does not generate feedback. Ideally, the user would monitor the sound through a pair of headphones, which would eliminate the feedback issue entirely. In our tests, however, we simply used the laptop’s speakers, which were quiet enough not to be picked up excessively by our omnidirectional microphone. All experiments were conducted in a music classroom in Woolworth, so that testers could feel free to sing loudly.
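For concreteness, here is a minimal sketch of this audio path as a Processing sketch using the Minim library; the buffer size and file name are illustrative, and the structure of our actual implementation differs.

import ddf.minim.*;

Minim minim;
AudioInput in;          // live signal from the microphone
AudioRecorder recorder; // writes the current loop to disk

void setup() {
  size(640, 480);
  minim = new Minim(this);
  // Open the default recording device (see the note in the raw data about
  // explicitly selecting the external mic rather than the laptop's built-in one).
  in = minim.getLineIn(Minim.MONO, 1024);
  recorder = minim.createRecorder(in, "loop.wav", true);  // true = buffer samples until save()
}

void draw() {
  background(0);
  // Skeleton drawing, gesture handling, and loop playback go here.
}

// A "begin recording" gesture would call recorder.beginRecord();
// the matching stop would call recorder.endRecord() and recorder.save().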

Tasks:

Task 1: Record a simple loop in any one of the sections. It can have whistling, percussion, singing, etc. (This is meant to test the user’s ability to grasp the basic mechanics of recording music using this system; calibrating and then signaling to record a track are tested here.)

Task 2: Record any number of loops in three different sections and incorporate the delete feature. (This adds in the core functionality provided by the column abstraction; we would like to see if users structure their recordings by grouping similar parts into the same column. Deletion is added into the core gestures.)

Task 3: Create organization in the loop recording. For example, record two percussive sections in the first section, one vocal section in the second section, and maybe a whistling part in the third section. (This encourages users who may not have used columns to structure their recordings to do so; we want to see whether they pick up on this design choice or ignore it entirely.)

Procedure:

First, the users read and signed the consent form, and then filled out the demographic questionnaire. We then demonstrated the main workings of the system, including the concept of columns and the requisite gestures for interacting with it. Next, we read the description of each task and let the user attempt the task uninterrupted; if the user asked questions, we answered them, but we did not prompt when the user was struggling with some aspect of the task without explicitly asking for assistance. Finally, once the user had completed the tasks, we asked a few general questions about the experience to get overall opinions.

The test setup. Here, you can see the positions of the laptop, Kinect sensor, user, and microphone.

We tested in Woolworth music hall, to allow users to be loud in recording music.

Test Measures:

  • Timestamps of significant events (e.g. starting and stopping recordings): we want to gain a general picture of the amount of time users spent in certain states of the system, to see if they are dwelling on any action or having trouble with certain parts of the system.
  • Timestamps of significant observations (e.g. struggling with the delete gesture): we want to track how often and how long users have problems or otherwise interesting interactions with the system, so we can identify potential areas of improvement.
  • Length of time needed to perform a “begin recording” gesture. Since this is one of the most fundamental gestures needed, we want to make sure that users are able to do this quickly and effortlessly; long delays here would be catastrophic for usability.
  • Length of time needed to perform a “delete track” gesture. This isn’t quite as central as the recording gesture, but it is one that may still be frequently used, especially if users want to rerecord a loop that wasn’t perfect; we also want this to be fast and accurate, and if the user has to repeat the gesture multiple times, it will take too long.

Results and Discussion

  • We discovered that a primary problem was text visibility: users were simply unable to read and distinguish the various on-screen messages indicating when a recording was about to start, so in certain cases, while the screen still read “get ready…”, the user would start singing immediately after the recording gesture. In some ways, this is a good problem to have, since it is fairly simple to fix: more contrast and more prominent textual cues that stand out against the background and the rest of the UI would let users know exactly when to start recording. This also applies to the beat loops, which are currently just shown as numbers on the screen; once the block metaphor is implemented, this should be less of a problem as well.

  • There were several issues with using the Kinect to interact with the system. We found that for a gesture to be accurately recognized, the user must be standing reasonably upright and facing the Kinect; tilting the body can cause gestures to go unrecognized. While we do provide the skeletal outline onscreen, the users either did not recognize what it was or simply did not look at it to figure out whether gestures would be picked up; thus, some gestures (like deletes) were performed repeatedly in ways that the Kinect would just not pick up. We found that slow delete gestures worked much better than fast ones, but the users did not seem to realize this even after a few attempts. To fix this, we could provide some indication that the Kinect saw some sort of gesture but did not know what to make of it; a message along the lines of “Sorry, didn’t catch that gesture!” might go a long way toward helping users perform gestures in a way that is more consistently recognizable. In addition, users had some trouble moving side to side through columns, as there is a considerable physical distance to cover if the user is standing far enough back from the Kinect. We do not really consider this a problem, but rather something the user needs to acclimate to; a clearer indication of which column the user is in would help send the message that recording in a given column depends on the user’s physical location, which also reinforces the idea that different structural parts of the music belong in different spatial locations. (A sketch of this position-to-column mapping appears after this list.)
  • A more fundamental issue is that our code and gestures do not properly account for the microphone itself; in this case, we did not use a mic stand (a reasonable assumption, since most home users who have never touched music recording software probably do not own one either). When the user holds the microphone naturally, the recording gesture is less intuitive to perform but still possible, since it is one-handed; the two-handed delete gesture is far more difficult to perform while simultaneously holding a microphone attached to a long cable. A reasonable fix would be to adapt the gestures to be one-handed, so that one hand can always hold the microphone in a comfortable position and so that a hand holding a mic in front of the face does not confuse the Kinect’s skeleton tracking.

  • We also have to work out some technical issues with the Kinect recognizing the user; one problem we saw was that the calibration pose can easily be mistaken for a record gesture if the user has already calibrated. Similar problems occur if the user ducks out of the frame and then reenters, or, to a lesser extent, when the user’s skeleton data is temporarily lost and then reacquired; however, these are purely technical issues, not design issues.

  • We believe that repeating these tests with a larger population of testers would not produce vastly different results, since most of the problem spots and observations we found were reported by multiple users and acknowledged as potential areas for improvement. In addition, our test users approximated very well the type of users we want using this system, having a good amount of musical experience but very limited technical experience; their feedback corroborated one another’s, which suggests that their suggestions and problems correspond fairly well with what we would have seen given a larger population.
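For reference, here is a rough sketch of the position-to-column mapping mentioned above, assuming SimpleOpenNI for skeleton tracking; the torso joint, the four-column count, and the simple linear mapping are illustrative choices, not a description of our exact implementation.

import SimpleOpenNI.*;

SimpleOpenNI kinect;
int numColumns = 4;

// Map the tracked user's horizontal position to a column index (0..numColumns-1).
int columnForUser(int userId) {
  if (!kinect.isTrackingSkeleton(userId)) return -1;
  PVector torso = new PVector();
  PVector screen = new PVector();
  // Real-world torso position of the tracked user...
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_TORSO, torso);
  // ...projected into depth-image coordinates (x runs across the frame).
  kinect.convertRealWorldToProjective(torso, screen);
  int col = (int) map(screen.x, 0, kinect.depthWidth(), 0, numColumns);
  return constrain(col, 0, numColumns - 1);
}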

Appendices:

Consent form:

PROTOTYPE TESTING CONSENT FORM

PRINCETON UNIVERSITY

TITLE OF PROJECT: Aca-Kinect

INVESTIGATING GROUP: Group 25 (Neil C., Harvest Z., Alan T., Vivian Q.)

        The following informed consent is required by Princeton University for any research study conducted at the University.  This study is for the testing of a project prototype for the class “Human Computer Interaction” (COS 436) of the Spring 2013 semester.

Purpose of Research:

        The purpose of this prototype test is to evaluate the ease of use and the functionality of our gesture-based music recording software. The initial motivation for our project was to make simpler software for beat sequencing, allowing one person to easily make complex a cappella recordings. We will be interviewing three students for prototype testing. You are being asked to participate in this study because we want to get more insight on usability and how to further improve our gesture-based system.

Procedures:

You will be asked to listen to our basic tutorial on how to use the gesture-based system and perform three tasks using the prototype. The tasks require that you sing and/or make noises to simulate music recording. We expect your participation to take about 10-15 minutes of your time, and you will not be compensated for your participation but will earn our eternal gratitude.

Confidentiality:

Your answers will be confidential. The records collected during this study will be kept private. We will not include any information that will make it possible to identify you (such as name/age/etc). Research records will be kept in my desk drawer and only select individuals will have access to your records. If the interview is audio or video recorded, we will destroy the recording after it has been transcribed.

Risks or Discomforts/Benefits:

The potential risks associated with the study include potential embarrassment if the participant is not good at singing; however, we will not judge musical quality. Additionally, since we are using a gesture-based system, the participant may strain their muscles or injure themselves if they fall.

Benefits:

        We expect the project to benefit you by giving the group feedback in order to create a simpler, more efficient system. Musicians, including the participant, will be able to use the system in the future. We expect this project to contribute to the open-source community for Kinect music-making applications.

I understand that:

        A.     My participation is voluntary, and I may withdraw my consent and discontinue participation in the project at any time.  My refusal to participate will not result in any penalty.

        B.     By signing this agreement, I do not waive any legal rights or release Princeton University, its agents, or you from liability for negligence.

I hereby give my consent to be a participant in your prototype test.

______________________________________

Signature

______________________________________

Date

Audio/Video Recordings:

With your permission, we would also like to record the interview. Please sign below if you agree to be photographed and/or audio or video recorded.

I hereby give my consent for audio/video recording:

______________________________________

Signature

______________________________________

Date

 Demographic questionnaire:

Here’s a questionnaire for you.

 

Age  _____

 

Gender  _____

 

Education (e.g. Princeton) __________________

 

Major (ARC, FRE, PHI…) _________

 

 

Have you ever in your life played a musical instrument? (Voice counts) List the instruments you have played.

 

__________________________________________________________________________________________

 

 

Tell us about any formal musical training you’ve had or any groups you are involved in (private lessons, conservatory programs, Princeton music classes, music organizations on campus, etc).

 

__________________________________________________________________________________________

 

 

Have you ever used music recording or live performance software (Audacity, Garageband, Logic, ProTools, Ableton, Cubase, etc)? List all that you’ve used and describe how you’ve used them.

 

__________________________________________________________________________________________

 

 

Have you ever used music recording/performance hardware (various guitar pedals like loop/delay/fx, mixer boards, MIDI synthesizers, sequencers, etc)? List all that you’ve used and describe how you’ve used them.

 

__________________________________________________________________________________________

 

 

Are you a musical prodigy?

 

Yes ______      No ______

 

 

Are you awesome?

 

Yes ______

 Demo script:

Your Job…

 

Using the information from our tutorial, we’d like you to complete a few tasks.

 

 

First, we’d like you to record a simple loop in any one of the sections. It can have whistling, percussion, singing, etc.

 

Next, we’d like you to record any number of loops in three different sections and incorporate the delete feature.

 

Finally, we’d like you to create an organization to your loop recording. For example, record two percussive sections in the first section, one vocal section in the second section, and maybe a whistling part in the third section. The specifics don’t matter, but try to incorporate structure into your creation.

 

Additionally, we’d like you to complete the final task another time, to see how quickly and fluidly you can use our system. We’ll tell you exactly what to record this time: 2 percussive sections, 1 vocal (singing section), and 1 whistling section.

Post-task questionnaire:

AcaKinect Post Trial Survey

 

1. What is your overall opinion of the device?

 

2. How did you feel about the gestures? Were they difficult or confusing in any way?

 

3. Are there any gestures you think we should incorporate or replace?

 

4. Are there any functionalities you think our product would benefit from having?

 

5. Do you feel that with practice, one would be able to fluidly control this device?

 

6. Any final comments or questions?

Raw data:

Lessons learned from the testing:

  • Main problem — text visibility! Need to make the text color more visible against the background and UI clearer, so the users know exactly when to start recording and can see the beat loops.

  • Realized our code did not deal with the addition of the microphone properly, such as how the user naturally holds the mic (interferes with the recording gesture), or the delete function (when holding the mic, hands do not touch)

  • Person’s head needs to be perpendicular to the Kinect or their gestures won’t record as well

  • People didn’t utilize the skeletal information displayed on screen, seems like they didn’t understand how the gestures translated

  • Slow delete gestures were best, compared to fast gestures which were hard to track by the system

  • Calibration pose mistaken for a record gesture if it is not the first time calibrating (user leaves screen and then returns), need to check for that.

  • We found out that the Minim library we use requires us to manually set the microphone as the audio input; during the second subject test we discovered that we were actually using the laptop’s built-in mic, which accounted for the reduced sound quality and for feedback from previous loop recordings (a sketch of the fix is below)
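Here is a sketch of the fix for that last point, using Minim’s setInputMixer() together with javax.sound.sampled to pick the external microphone explicitly; the “USB” name match is just a placeholder for whatever the external mic shows up as on a given machine.

import ddf.minim.*;
import javax.sound.sampled.*;

Minim minim;
AudioInput in;

void setup() {
  minim = new Minim(this);
  // Enumerate the available mixers and pick the one for the external mic.
  for (Mixer.Info info : AudioSystem.getMixerInfo()) {
    println(info.getName());                // inspect this output once to learn the device names
    if (info.getName().contains("USB")) {   // placeholder match for the external microphone
      minim.setInputMixer(AudioSystem.getMixer(info));
    }
  }
  // This now opens the selected device instead of the built-in mic.
  in = minim.getLineIn(Minim.MONO, 1024);
}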

 

 

Test Subject #1:

 

Demographics:

  1. 20

  2. M

  3. Princeton

  4. English

  5. Nope

  6. N/A

  7. Yes; garageband, Audacity (Recording Raps)

  8. No

  9. No

  10. No

 

00.00 Begin reading of the demo script

00:13 Subject agrees current music recording interfaces are hard to use

02:30 Question about why there are different boxes. We explain that it is a way to organize music.

04:09 Begin testing for Task 1

04:25 Not clear about how to calibrate, asks a question

04:45 Tries to start singing before the recording starts, can’t see messages or numbers on the screen very well, doesn’t think the blue text color is visible

05:01 Begin testing for Task 2

05:37 Attempt to do the same delete feature six times. Subject thinks the delete function is hard to use. The quick motions of the subject are hard for the Kinect to track. Also, the subject was holding the mike so their hands were not perfectly together. This caused our gesture detection to not register the control request.

06:45 Subject says, “I can’t see when it’s recording” referring to the text. Color is hard to distinguish against the background.

06:50 Begin testing for Task 3

07:00 Gestures are harder to detect on edges, subject’s attempt to delete on the edge needs to be repeated several times

07:42   Recording did not register when holding the mike up at the same time. Need more flexibility with the gesture command.

Time testing (Recording tracks): 5s, 6s, 8s, 9s

Time testing (Deleting tracks):  20s, 10s, 15s, 11s,

 

 

Test Subject #2:

 

Demographics:

  1. 18

  2. F

  3. Rutgers

  4. Literature

  5. Yes; violin and piano

  6. Private lessons

  7. No

  8. No

  9. No

  10. Yes

 

00:00 Begin reading of the demo script

02:26 Begin testing for Task 1

02:35 Ask about how to configure (get the Kinect to recognize their body)

02:48 Subject says that they can’t see the information on the screen

03:28 Subject starts recording and begins singing immediately, because the “Countdown…” message was not clearly visible.

04:07 Subject repeats that the information on screen is hard to see

04:46 Begin testing for Task 2

05:03 Subject missed the start of the recording because they couldn’t see the on-screen text

05:17 Second attempt to record in the same column

05:46 Delete action was not successful

05:51 Delete succeeded

06:25 Begin testing for Task 3

07:44 Needed to stop and reconfigure because the Kinect lost tracking of body

09:32 The subject did not put their arm high enough so the recording command was not detected

Time testing (Recording tracks): 5s, 4s, 5s, 5s

Time testing (Deleting tracks):  7s, 10s, 15s, 11s,

Test Subject #3:

 

Demographics:

  1. 20

  2. F

  3. Princeton

  4. ORFE

  5. Yes; piano and flute

  6. Private lessons, high school orchestra

  7. No

  8. No

  9. No

  10. Yes

 

00:00 Begin reading of the demo script

02:10 Begin testing for Task 1

02:11 Subject didn’t remember to calibrate before attempting to record

02:20 Calibration

02:22 Subject records

02:54 Begin testing for Task 2

03:30 Subject easily uses the delete command

03:41 All recorded tracks deleted

Summary: not much trouble, only calibration

04:42 Begin testing for Task 3

04:50 Records first (leftmost) section

05:11 Records second section

05:30 Records third section

05:56 Records fourth section. Problem in the last section because subject only used right hand to signal the recording command, but in the last section the right hand was off the screen. The hand was out of the image and not picked up, so after a few tries the subject had to switch to the left hand.

Time testing (Recording tracks): 7s, 8s, 8s, 8s

Time testing (Deleting tracks):  7s, 8s, 11s, 12s,

 

Deduprinceton: Deduplication of learning

Observations

Student 1: Xavier treks to Algorithms

Friend Center is a good walk from most of the rest of campus, so Xavier needs about seven or so minutes — five if walking quickly and looking a little silly — to make it to 11 am class on time. This isn’t really helped by the fact that the 10 am class often overruns its lecture time, so by the time Xavier is out the door of Frist it’s already 10:56 and he’s going to be a little late no matter what. There is no listening to music, checking mail, or talking to friends. There is only speedwalking like an Olympic speedwalker (sorry) to get to class just as lecture begins. I noticed that this sort of thing happens a lot, especially with unfortunately spaced out classes across campus, so far from having the time to do things between classes, most people are just rushing not to be late.

Student 2: Yancey waits for Graphics

Yancey gets to class a good 15 minutes early, mostly because the classroom is close to lunch. Sits down, pulls out laptop, starts looking through emails. Writes one and sends it. Deletes a bunch, sorts a bunch more. Checks out Hacker News for the latest buzz. Yancey is going to pay attention in lecture, so when the class starts the laptop closes. Triaging email seems to be a pretty popular task to do in the ten or fifteen minutes before class starts; since everyone seems to get 50-60 emails a day, triaging between classes is generally a good idea to avoid an inbox explosion later on.

Student 3: Zeus shows up to Sociology

Zeus gets to a 10 am class at 9:59, looking tired but not rushed. Probably didn’t have a class at 9. Probably just woke up ten minutes ago, actually. Flips open laptop, gets out notebook, starts looking at a problem set that has math on it and is therefore probably not sociology related. Has a minute to ask a neighbor about one of the problems, but then the professor starts teaching on time and Zeus turns back to his own work. Appears to only marginally be paying attention to the sociology lecture; mostly working on an assignment for another class. This happens quite a lot in this large lecture, which has material that is either uninteresting or identical to the assigned readings.

15 ideas

  1. Mailfree: an app that delays delivery of email, text messages, Facebook notifications, and anything else that rings or buzzes during lectures, to reduce distraction. It delivers them all in a bunch right after class ends, to allow students to triage between classes.
  2. Something that lets friends coordinate walking schedules to make those ten minute walks a little more interesting. If friends are both going from Lewis to EQuad, the app might suggest being walking buddies.
  3. Optimization of walking routes in order to figure out the fastest way to get from one building to another.
  4. Database of times it takes to go from buildings to other buildings, socially sourced from experience. Gets more accurate over time and tailors itself to your walking pace.
  5. Something to set an x-minute reminder that lets you know when to leave lunch in order to get to class on time.
  6. A visualization app for the professor, displaying how many people are going to be late to their next class if the lecture runs over. Knows student schedules.
  7. A way for friends who have a few bikes between them to coordinate bike sharing in order to cover long distances between classes more efficiently with fewer bikes.
  8. An app to help students who have no time for lunch and students coming from lunch to coordinate bringing bagged lunches to class.
  9. A social tool that rates classes for usefulness in order to enable students to make informed decisions about whether to attend lecture.
  10. A collaborative synopsis and summary of each lecture that students write up after class, to aid in review.
  11. An app that updates students on the latest and upcoming campus events and happenings, to add a more modern way of advertising to students than posters on lampposts.
  12. An attendance utility for the professor that scans the seats in an auditorium and tallies how many people attended lecture.
  13. A questions tool, so that students can immediately write down a bunch of questions from lecture before forgetting so that they can later ask them either on Piazza or office hours.
  14. Announcement time – before relevant classes (WWS for a politics talk, COS for NVidia, for example), students promoting these events can give a 30-second spiel on each event in the front of the room.
  15. A way for students to provide feedback on workloads to the professor, so that assignments and due dates can be tweaked if the professor sees heavy imbalances.

Mailfree:

Mailfree (which probably shouldn’t be called that, since it handles more than just mail) is an app that delays delivery of email, text messages, Facebook notifications, and anything else that rings or buzzes, until after lecture is over. It delivers them all in a bunch right after class ends, to allow students to triage between classes — and the schedule is ideally pulled from Google Calendar or ICE, so the user won’t even have to worry about inputting schedules manually. Exceptions can be made for certain people, from whom you’d want to receive notifications regardless of whether you were in class. This would dramatically cut down on the amount of buzzing in classes, which — despite not making as much noise as ringtones — is still distracting.
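To make the behavior concrete, here is a small, purely hypothetical sketch of Mailfree’s core decision logic; none of the class or method names below come from a real implementation, and the calendar would actually be populated from Google Calendar or ICE rather than by hand.

import java.util.*;

public class Mailfree {
  // One calendar entry: a lecture running from start (inclusive) to end (exclusive).
  static class Event {
    final Date start, end;
    Event(Date start, Date end) { this.start = start; this.end = end; }
    boolean contains(Date t) { return !t.before(start) && t.before(end); }
  }

  List<Event> calendar = new ArrayList<Event>();   // would be pulled from Google Calendar / ICE
  Set<String> exceptions = new HashSet<String>();  // senders who always get through
  List<String> held = new ArrayList<String>();     // notifications queued during class

  // Core decision: hold the notification if it arrives during a lecture and the
  // sender is not on the exception list; otherwise deliver it immediately.
  void onNotification(String sender, String message, Date now) {
    boolean inClass = false;
    for (Event e : calendar) {
      if (e.contains(now)) { inClass = true; break; }
    }
    if (inClass && !exceptions.contains(sender)) {
      held.add(message);       // stay silent during lecture
    } else {
      deliver(message);        // outside class, behave normally
    }
  }

  // Called when the current lecture ends: dump the whole batch at once.
  void onClassEnd() {
    for (String m : held) deliver(m);
    held.clear();
  }

  void deliver(String message) { System.out.println("notify: " + message); }

  // Tiny demo: a 50-minute lecture starting now; a text during class is held,
  // a message from the recruiter gets through, and the text arrives when class ends.
  public static void main(String[] args) {
    Mailfree mf = new Mailfree();
    Date start = new Date();
    mf.calendar.add(new Event(start, new Date(start.getTime() + 50L * 60 * 1000)));
    mf.exceptions.add("Recruiter");
    Date during = new Date(start.getTime() + 10L * 60 * 1000);
    mf.onNotification("Friend", "lunch later?", during);     // held
    mf.onNotification("Recruiter", "job offer!", during);    // delivered immediately
    mf.onClassEnd();                                          // "lunch later?" delivered now
  }
}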

Left: what we have now. Lots of bzz and vrrrm and so on in lecture, and people being distracted by various Facebook messages and so on. Right: with mailfree, you could hear a cricket in the classroom (not that there would be one, that would be weird). Since students use in-between time for triaging emails anyway, why not just dump the previous hours’ worth of emails and messages at the end of lecture, and keep silent during class?

Mailfree is meant to be invisible — it recedes into the background, automating everything, so there is no daily user interface. This is the settings panel, from which users can add calendars for Mailfree to use to determine whether it should delay notifications, and it also allows for setting exceptions (calls, emails, messages for each contact) so that if you don’t want to miss a call from your summer internship recruiter offering you a job, you can set that. There’s also a big button that’s always present in the settings that allows you to either activate or deactivate it. The hope is, of course, that you’ll leave it activated most of the time.

Deduprinceton:

This is an app meant to optimize the time students spend working and in lecture. Using data collected from previous offerings of a course, Deduprinceton compiles data on whether a lecture is redundant (repeats a lot of the assigned reading), whether it is interesting, and possibly other factors that help a student on the fence about attending lecture (for any number of reasons: a lot of other work, boring lectures, already did the reading, or just plain laziness) decide whether to go to a particular lecture. After lecture, students can give their take immediately by dragging a few sliders and optionally adding a couple of comments, while the content is still fresh in mind. Professors could potentially also see the aggregate data, in order to gauge whether lectures are effective and possibly to adjust future years’ curricula to better engage students and boost lecture attendance.

(1) This is the main interface you look at when deciding whether to attend lecture or not. Afterwards, you click the appropriate button to give feedback about the lecture. (2) The flyout menu gives a list of all the classes you’re taking, pulled from SCORE and authenticated. If you’re just sitting in or auditing classes, or if you don’t want to give feedback, you can delete or add more classes, but your results won’t be aggregated with the authenticated students’ results for accuracy purposes.

(1) You attended lecture! Therefore, you know how it went: was it a repeat of what was in the book? Was it super interesting and engaging? Was it new material, but so boring that you fell asleep? Drag the sliders to match, and optionally add a comment. (2) You didn’t go. But that’s still useful data — why didn’t you go? If it was just because you overslept because you were too tired, that may not be a negative for the lecture itself. If, on the other hand, you didn’t attend because the previous reviews were poor, then that says something else.

 

Sometimes, a few numbers and bar charts just won’t cut it. A comment or two can go a long way in detailing exactly what was great (or what wasn’t) during lecture.

User testing

Users AA and SY had just finished lunch and were off to class in the EQuad. One of them always goes to this particular class, but pretended that it was a class she might consider skipping.

As it turns out, she would also ask her friend (who’s going to the same class) whether it’s worth going. Suggestion: Maybe a realtime updating chart of who’s going or not might help, especially if the course is significantly different from previous offerings (different professor, different schedule of topics) and some students are aware of that.

The bar chart format appeared to be quite effective at visualizing the various parameters, although sometimes they looked a little bit like they should also be sliders. After finding the slider interface, it was noted that the mirroring of the sliders and the bars for viewing and adjusting the ratings helped a lot in allowing for accurate feedback.

User NP also goes to just about every class, but tried it out anyway.

While she probably wouldn’t actively use the app all that much, since she goes to class, she noted that it would make sense for those who treat lectures as optional. Also, the information provided does give a good overview of whether the lecture will be interesting or engaging, which is a good indicator of whether it is a class in which homework can be done without missing any of the material. The interface didn’t provide any significant struggle, and navigation (designed to be similar to most iPhone apps) was fairly easy and discovered without much explanation.

User AS tried out the app while heading to class in Friend.

AS got quite confused about why the “Did you attend?” Yes/No” was presented on the same screen as the ratings. I hadn’t thought of that, and he’s completely right — you’d want to present the ratings before class, and the “Did you attend?” screen after class.

 

AS also cited an incentive issue: at the beginning, there will be very little data in the app, so people won’t be inclined to contribute. One way to perhaps increase user engagement and input is to not show the ratings for the next class until the user rates the current one (either rating the class if attended, or explaining why if did not attend). Still doesn’t quite solve the bootstrapping issue, but over time, this will become quite useful in keeping user engagement and input high.

A disembodied hand (belonging to KO) has just pressed the “More information” button next to “Redundancy” for class XYZ123, at which point we are now examining comments from previous years. Since comment threads can get very long and annoying to look through quickly, an upvote system or some other way of promoting useful comments was suggested, so that only the top few, most accurate comments are shown. Also, this once again demonstrates that students trust each other’s judgment about whether to attend class; a common situation that was mentioned was the text “Are you going to XYZ 123 tomorrow?”, which influences whether the asker decides to attend as well.

Distilled insights:

  • Social does work in the context of attending class. If a bunch of people didn’t attend a previous year, there was probably a good reason for it, and ratings help elucidate that. Comments provide the nuance that is sometimes missing from a number.
  • Comments get really overwhelming really quickly. They need to be limited to only the most important, most relevant, most accurate few. This requires the creation of a very robust reputation system and a good recommendation system.
  • A lot of people always try to attend class. This is a very good thing, but it doesn’t really bode well for incentivizing the use of this app; those who go all the time probably don’t see a good reason to use it, so those who don’t go won’t have as much data off of which to base their decisions.
  • The temporal factor is confusing. Since comments and ratings made this year will be seen by students next semester or next year, it is unclear in the current interface what lecture I’m looking at and which year rated it this way. This also raises issues when classes are taught by different professors, or the syllabus is changed, or the professor updates the lecture material in response to user feedback. There needs to be an accurate way to account for all this without getting too complicated; otherwise, this app will just be very unreliable for any class that isn’t exactly the same every single year (at which point conventional wisdom and word of mouth work quite well also).
  • The original idea was to allow students to add/drop classes to rate without authentication, which was quickly pointed out as a horrible idea. There needs to be some sort of authentication with SCORE, so that you can only rate classes you’re actually enrolled in. You should, however, be able to view ratings for other classes, but at this point functionality starts to overlap with ICE and course ratings done by the registrar, so that isn’t really the main focus of this app.

It’s the Arduino Carabinieri!

Group members: Alan Thorne, Dylan Bowman, Harvest Zhang, Tae Jun Ham

Sketches:

Three sketches. The first, (and the one we built,) a police siren complete with flashing lights and wailing speaker, with no input other than a button (used like a switch) to turn it on or off. The second idea was a whack-a-LED game, where we use light sensors along with LEDs poking out of a grid; to play, you cover up the grid hole where the LED is lit, and the light sensor placed there detects that and gives you points. The final idea is a traffic light that responds to a walk signal, but is otherwise a normal traffic light that is programmed to hold the same durations for red and green every cycle.

The Blues and Tunes Police Siren:

The Blues and Tunes police siren has an LED light bar, a single auxiliary LED flasher, and a speaker in order to flag down speeding Arduinos. It executes two types of flashing and wailing — one in the style of American police vehicles and the other more European (the European siren mode was added primarily because we ran out of blue and red LEDs, and used two yellow ones, which do show up in some European police vehicles but almost never in US vehicles). It has a pushbutton that starts and stops the siren.

Visible here is the large breadboard, containing the main LED light bar as well as the auxiliary one, mounted on the plastic blister packaging that also serves as a crude sort of resonator for the piezo speaker underneath.

Visible here: the red breadboard containing the on/off button, the Arduino, and the main breadboard.

Here we see the lights in action. This particular implementation features two separate light and sound patterns.

 

For the most part, it works pretty well; the siren isn’t particularly loud, but it is annoying enough to earn looks from other lab groups if left on. The diffusers are acceptable but not very nice. A better light bar with a bunch of fresnel lenses would be ideal, but no one had miniature police light bar diffusers lying around. We could have also programmed the pushbutton to cycle through various types of sirens, but in this implementation we just hardcoded two alternating patterns.

It’s the Arduino Carabinieri! (video)

The components:

  • (1) Large blue LED
  • (2) Small red LED
  • (2) Small yellow LED
  • (1) RGB LED
  • (6) 330 Ohm resistors
  • (1) Pushbutton
  • (1) Piezo speaker
  • (1) Arduino
  • (1) Large breadboard
  • (1) Small breadboard
  • (n) A bunch of jump wires
  • (1) Translucent pen cap or similar diffuser
  • (1) Blister packaging strip covered in scotch tape for the main light bar

Make one yourself:

  1. Arrange all but the RGB LED in a visually pleasing manner on the large breadboard in the style of a police light bar.
  2. Add one 330 Ohm resistor between each LED and the Arduino. Connect the yellow LEDs to the same I/O pin and the red LEDs to the same I/O pin. Give the blue one its own pin. These should all be PWM-capable digital I/O pins, since the code fades them with analogWrite.
  3. Place the RGB LED in a prominent location on the large breadboard. We will be using only the red and blue pins (not the green one) from the RGB LED. Connect those (through resistors) to two I/O pins.
  4. Mount the pushbutton on the small breadboard and wire it up to the 5V pin. (You could also put it on the large breadboard with everything else, but there’s a lot of stuff on that one already.)
  5. Wire up the piezo speaker to an I/O pin.
  6. Make an appropriately light-bar-styled diffuser out of something translucent, and make another one for the RGB LED. Put them over the LEDs.
  7. The piezo speaker also gets much louder if you give it something larger to vibrate, so tape the piezo onto a blister package or something similar to make it sound more like a siren and less like a singing holiday card.
  8. Write up the code, and play the siren!

The code:

// ENUMS
int OFF = 0;
int ON = 1;
int TRANSITION = 2;

// Names for things
int button = 2;
int y = 3;
int r = 5;
int b = 6;
int tri_r = 9;
int tri_b = 10;
int speaker = 11;
int state = ON;
int current_state = ON;

// the setup routine runs once when you press reset:
void setup() {
   pinMode(button, INPUT);
   pinMode(speaker, OUTPUT);  // make sure the piezo pin is driven as a real output
}

// the loop routine runs over and over again forever:
void loop() {
  button_mode();

  if (current_state == ON) {
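    // Siren pattern 1: sweep the tone up and then back down (a wailing siren),
    // cross-fading the LEDs between their two color states as the pitch changes.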
    for (int j = 0; j < 4; j++) {
      for (int i = 0; i <= 255; i++) {
        digitalWrite(speaker, HIGH);
        delayMicroseconds(300 - i);
        digitalWrite(speaker, LOW);
        delayMicroseconds(300 - i);
        analogWrite(speaker, 255);

        analogWrite(y, i);
        analogWrite(r, 255-i);
        analogWrite(b, i);
        analogWrite(tri_r, i);
        analogWrite(tri_b, 255-i);
        button_mode();
        delay(1);
      }

      for (int i = 0; i <= 255; i++) {
        digitalWrite(speaker, HIGH);
        delayMicroseconds(i + 45);
        digitalWrite(speaker, LOW);
        delayMicroseconds(i + 45);
        analogWrite(speaker, 255);

        analogWrite(y, 255-i);
        analogWrite(r, i);
        analogWrite(b, 255-i);
        analogWrite(tri_r, 255-i);
        analogWrite(tri_b, i);
        button_mode();
        delay(1);
      }
    }

    // Siren pattern 2: alternate a fixed high tone with a longer fixed low tone
    // (a two-tone "hi-lo" siren), switching the lights hard on each change.
    for (int j = 0; j < 4; j++) {
      button_mode();

      for (int i = 0; i <= 200; i++) {
        digitalWrite(speaker, HIGH);
        delayMicroseconds(100);
        digitalWrite(speaker, LOW);
        delayMicroseconds(100);
        analogWrite(speaker, 255);
        button_mode();
        delay(1);
      }

      analogWrite(y, 0);
      analogWrite(r, 255);
      analogWrite(b, 0);
      analogWrite(tri_r, 255);
      analogWrite(tri_b, 0);

      for (int i = 0; i <= 1802; i++) {  
        digitalWrite(speaker, HIGH);
        delayMicroseconds(300);
        digitalWrite(speaker, LOW);
        delayMicroseconds(300);
        analogWrite(speaker, 255);
        button_mode();
        delay(1);
      }

      analogWrite(y, 255);
      analogWrite(r, 0);
      analogWrite(b, 255);
      analogWrite(tri_r, 0);
      analogWrite(tri_b, 255);
    }

  }
  else {
    analogWrite(tri_b, 0);
    analogWrite(tri_r, 0);
    analogWrite(y, 0);
    analogWrite(r, 0);
    analogWrite(b, 0);
  }
}

// Poll the pushbutton: a press followed by a release toggles current_state
// between ON and OFF (a simple edge-triggered toggle).
void button_mode() {
  if (digitalRead(button) == HIGH) {
    state = TRANSITION;
  } else {
    if (state == TRANSITION) {
      if (current_state == OFF) {
        current_state = ON;
      } else if (current_state == ON) {
        current_state = OFF;
      }
      state = OFF;
    }
  }
}

Now all you need to do is add wheels and a motor and some more sensors and a body and you’ll have an Arduino-based police car!