P2 – Name Redacted

Group Number: 20

Members:

Brian conducted one interview on his own, partnered on another, and helped write the paragraphs on the interviews.

Matt conducted one interview on his own, partnered on another, and helped write the paragraphs on the interviews.

Josh answered the 11 task analysis questions and helped Ed write the three storyboards and answer the interface design questions.

Ed helped Josh answer the 11 task analysis questions, wrote the three storyboards, and answered the interface design questions.

Problem and Solution Overview
Currently, very little computer science is taught in middle schools and high schools. Schools are often prevented from teaching CS because they have insufficient computing resources to provide each student with a development environment. Even in college, some students have difficulty learning basic concepts such as binary number representation, memory, and recursion. The barrier to entry for learning CS is simply too high. We hope to create an interactive method for teaching the fundamentals of computer science that does not rely on writing software at a computer. We wish to extend the idea of whiteboard programming: using computer vision and a projector, we want to let users place “blocks” of code on the board, and our tool will respond with visual feedback. Our solution could provide a low-cost, interactive interface that could revolutionize technology teaching for the masses.

Description of Users Observed
Our target users fall into two main groups: teachers using our tool for pedagogy and students learning from it. We selected three users across both groups to get the most balanced perspective.

  • The first user that we observed created and teaches an introductory CS course targeted at students with almost no technical background (often students from humanities departments). We think he was a good choice not only because he has years of teaching experience, but also because his students often recognize him as an extremely effective and engaging educator. He is extremely interested in CS education, although his background and interests tend to focus on a slightly older age group than we are targeting.
  • The second user that we interviewed was a student who took the above course. He came into the course with no technical background and a strong humanities background. We thought he was a good user to interview because we could talk to him about the frustrations and confusions of learning the most basic CS concepts, and we could readily compare his interview with the one above.
  • Our final interviewee was a member of the school board of a large suburban district near Los Angeles. He has been extremely interested and involved in education reform. We thought he would provide an interesting insight into the administrative side of education, and he helped us understand how teachers might adopt and use this technology from an institutional standpoint.

CI Interview Description
Our first interview was conducted in the office of the user. We started out by explaining a little about who we were and what the goals of our project were. We then asked him to walk us through a couple of his earliest lessons to help us get a picture of current teaching practices. Our second interview was conducted in a classroom with a student who had taken the aforementioned introductory computer science course. The student is concentrating in a nontechnical field but took the computer science course to fill a distribution requirement and learn some fundamental computer concepts. We asked the student to walk through some of his previous code from the course, explaining what he did in the assignment. Our final interview took place in a common space. It was tricky to approach this interview as a contextual inquiry: as we discuss later, professional development for teachers is extremely individual and therefore hard to observe. We therefore made sure to talk about specifics from a more removed perspective.

Our first user, the professor, provided us with an outline of the first few weeks of lectures, and discussed how he explains various concepts ranging from binary to memory to logical loops. Our second user, the student, talked briefly about what he liked and disliked about the introductory computer science class he took. In particular, he liked learning about internet security issues as well as coding. However, the coding portion of the class was the most difficult for him to learn. As part of the interview, I asked the student to go through some of his code from the class. When going through the HTML code that he had written, he noted many places where he had to fix syntactical errors. Most of his errors in this particular program came from omitting the ‘/’ character from end tags. Similarly, his JavaScript errors were almost entirely syntactical, usually because of a missing or extra ‘}’ character. Both the professor and the student agreed that binary is a very hard concept to teach and to learn. In particular, the student suggested that most students struggle with how many decimal numbers can be represented by n bits. Both also agreed that a very difficult concept in computer science is how all computers are actually equivalent, and how all languages can perform the same computations. The professor also mentioned that most students have an incredibly difficult time understanding the abstractions of the computing process. In particular, the student had trouble understanding compiling, assembling, and linking because the concepts were too abstract. However, the professor and student disagreed on some issues. The professor believed that students struggle most with the idea of memory, and more specifically with how memory physically holds values. While the student admitted that memory was difficult, he believed that more students had trouble with binary representations of numbers. The student also emphasized that he was most frustrated by the syntax problems he encountered while coding.

After having looked into a teaching lesson itself, we wanted to take a step back and gain more perspective on what it’s like to be a teacher. This is where we turned to our last interview. Our final interviewee, as a member of a school board, has big-picture knowledge of what it is fundamentally like to learn how to teach. The main takeaway from the interview is that current pedagogical education is extremely individual. He walked us through all of the professional development activities that a teacher does in a given year (including professional development conferences and school events). There were very few of them (~5 days per year), and they were individually focused. Teachers make their own lesson plans and learn new technological tools on their own. He talked us through the process of integrating iPads into his district and how poorly they are used by many teachers. This individuality contradicts our first interview slightly: our first user actually published his class notes as a book so other teachers could use them to shape their lessons. I would explain that difference as the difference between K-12 and college education. Ultimately, I think that for the context of our project, we have to understand that K-12 teachers wouldn’t have much support in learning how to use our tool. It must be self-contained, self-teaching, and extremely foolproof if we hope to gain traction with this audience.

Task Analysis Questions
1. Who is going to use the system?
Our target audience is the typical classroom. We envision the product being used with about 20 students and a single teacher. The students don’t need to have any background in CS or experience coding.

2. What tasks do they now perform?
Current CS education tends to be done with “whiteboard coding”: the practice of writing code on a whiteboard to demonstrate language features. Other common practices include simple lecture-style teaching and, for certain concepts, abstract drawings. For example, when students first learn about memory, it is often represented as a box to be filled with data. These abstractions are very important, as students can quickly get confused when exposed to the more complicated principles that lie beneath them.

3. What tasks are desired?
Typically, educators are looking for a tool to boost the understanding of their students in a classroom setting. The ability to present basic CS concepts to students in a way that is easy to visualize, grasp, and interact with is very desirable. Additionally, a teaching tool that students can use and get feedback from without needing the teacher on hand to give direct instructions is very useful, as it allows students to continue to learn on their own and at their own pace.

4. How are the tasks learned?
Currently, teaching is an extremely self-taught process: teachers individually come up with their own lesson plans. In our system, tasks could be learned through a simple tutorial lesson that introduces students to the basic operations and principles behind our system. Concepts would be introduced one at a time, and the interactions between them could be demonstrated as students make progress.

5. Where are the tasks performed?
The tasks are performed in classrooms with a projector and a whiteboard, typically with only a few computers for many students. Our system could potentially operate on any smooth surface to which ID tags can be easily attached and moved around.

6. What’s the relationship between user & data?
Users could potentially have personal profiles that track their progress through the lessons. When the system is used in a group setting, it could be controlled by the teacher to set up lessons and track the progress of the class as a whole.

7. What other tools does the user have?
Textbooks, the internet, and toy programming languages (e.g., Scratch).

8. How do users communicate with each other?
Teachers communicate with their students in a classroom setting. Teachers can give their students guidance and instructions as they use the system, and students can freely ask questions about the current lesson.

9. How often are the tasks performed?
Students would go through a lesson about 2-3 times a week. Multiple lessons could potentially be completed in one class session if they are short.

10. What are the time constraints on the tasks?
Class sessions usually last about an hour. This should be plenty of time to go through most lessons. However, more difficult and complex tasks may need to be broken up into multiple lessons spanning multiple days.

11. What happens when things go wrong?
Ideally, any errors encountered would be a result of improper setup by the user, not flaws in the code itself. If something does go wrong, such as the projector getting out of alignment with the whiteboard or the camera having trouble reading the ID tags, users could simply tell the system to recalibrate. All data and current lesson progress would be saved. Data should be saved periodically regardless, to ensure that if the system fails completely, the users’ current progress is still preserved.

Description of Four Tasks
Tutorial – How to Teach a Lesson (Storyboard not included)
The purpose of this lesson would be to provide a groundwork for the teacher and students to build on and to teach them the basic principles of using our system. This lesson could include walkthroughs of specific lessons they might teach, highlighting blocks and stepping the user through each part of the process. Currently, this is an individual learning task done by teachers: they read textbooks and compile a lesson plan. We hope to significantly reduce the difficulty of this process.

Teaching Number Base (e.g. Decimal, Binary, Octal, Hex)
Our first two interviewees both said that one of the hardest things for early tech students to learn is binary. This concept is fundamental to CS but can be very challenging for students. We hope to help students better understand number bases through visualization. We imagine they would place binary/octal/hex/etc. tiles on the board and then see how the different number systems represent numbers. This would be a moderately difficult task to perform with our system, but one that is currently very hard to teach.
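To give a flavor of the visual feedback, here is a minimal C++ sketch (purely illustrative; the real tool would project this on the whiteboard rather than print text) that shows one value in each of the number systems a student might lay out as tiles:

#include <bitset>
#include <iostream>

// Show one value in the number systems a student might place as tiles.
// For a "13" tile, the projected feedback could carry this information.
int main() {
    unsigned int value = 13;  // example: whatever tile the student places
    std::cout << "decimal: " << value << '\n'
              << "binary:  " << std::bitset<8>(value) << '\n'   // 00001101
              << "octal:   " << std::oct << value << '\n'       // 15
              << "hex:     " << std::hex << value << '\n';      // d
    return 0;
}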

Teach First Toy Program (I/O and intro looping)
Another suggested teaching example from our interviews was an extremely simple toy programming lesson. The lesson essentially works through each of the fundamental parts of a computer and provides a simplified example of each:

Read in a number from input and print (intro. to I/O)
Read in a number from input, do an operation, and print (intro. to processing)
Read in two numbers from input, print the product (intro. to memory)
Read in numbers from input until a 0 is given. Calculate the sum and print (intro. to loops)
We would create these programs with simple code placed on the board, with inputs placed on the board as well. This lesson (especially the memory portion) covers material that is currently difficult to teach and learn, but we hope to reduce that difficulty.
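For concreteness, here is roughly what the final exercise computes, written as ordinary C++ (our own illustration; in the classroom the same program would be assembled from physical code and input blocks rather than typed):

#include <iostream>

// Toy program 4: read numbers until a 0 is given, then print the sum.
int main() {
    int sum = 0;    // "memory" block: one stored value
    int input = 0;  // "input" block: the number read from the user

    std::cin >> input;        // intro to I/O
    while (input != 0) {      // intro to loops
        sum += input;         // intro to processing
        std::cin >> input;
    }
    std::cout << sum << std::endl;
    return 0;
}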

Simplified Turtle Graphics
Our final lesson is the most complex. We want to create a tactile version of the Logo programming language library. Users could control the movement of a virtual turtle and see how its path changes as they change various parts of the code on the board. This would be a great way to introduce functions and show the execution of a program in a visual manner. It would:

Include “Function” cards that allow users to define custom operations
Use only a basic set of commands, such as forward, turn, and pen color
Logo is well established as a good way to teach new students programming, but the difficulty is that it usually requires a computer for each student. We plan to reduce that difficulty by adapting the program to our tool, requiring only one computer per classroom.
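As a sketch of the mechanics (the names and structure here are our own, not a committed design), a minimal turtle plus a “Function” card might look like this in C++:

#include <cmath>
#include <iostream>

// A minimal turtle: enough to show how "forward" and "turn" cards drive
// the projected drawing. Line segments are printed instead of drawn.
struct Turtle {
    double x = 0, y = 0, headingDeg = 0;

    void forward(double dist) {
        const double rad = headingDeg * 3.14159265358979 / 180.0;
        x += dist * std::cos(rad);
        y += dist * std::sin(rad);
        std::cout << "line to (" << x << ", " << y << ")\n";
    }
    void turn(double degrees) { headingDeg += degrees; }
};

// A "Function" card: a named sequence of basic commands.
void squareCard(Turtle& t, double side) {
    for (int i = 0; i < 4; i++) {
        t.forward(side);
        t.turn(90);
    }
}

int main() {
    Turtle t;
    squareCard(t, 50);  // placing the card on the board runs the sequence
    return 0;
}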

Interface Design
The user can teach and interact with a variety of lessons on computer science topics. By placing a “lesson” card on the board, the system reacts and projects the interface for that lesson. Then, the user can add physical cards to the board in the relevant areas (e.g., binary values, code, input), and the system will react by showing the execution and output of the program. Unlike any current solution, our system allows a teacher or professor to visually demonstrate these computer science topics to an entire class. The interactivity of our system allows the teacher to make quick, visual changes to the program or input to demonstrate how the output will change. We believe that this will be an engaging, fun, and flexible way to teach students important topics in CS.

Storyboards
Teaching Number Base


Teach First TOY Program (I/O and intro looping)


Simplified Turtle Graphics


Sketch of the System


P2 — Elite Four

Group Name and Number: The Elite Four; Group 24

Names and Contributions: Clay Whetung, Jae Young Lee, Jeff Snyder, Michael Newman

Clay conducted one of the interviews and helped answer many of the questions, especially regarding contextual inquiry.
Jae conducted one of the interviews and helped answer many of the questions, especially regarding contextual inquiry.
Jeff conducted one of the interviews and helped answer many of the questions, especially regarding task analysis and interface design.
Michael drew up the storyboards and sketches and helped answer many of the questions, especially regarding task analysis and interface design.

Problem and Solution Overview

We are addressing the problem of people (especially students) leaving important items (e.g., keys, wallet, phone) behind when they leave their rooms/buildings. This can lead to various further problems, like being locked out, being unable to start one’s car, or being unable to send/receive texts or voice calls. Our proposed solution is a system that uses proximity sensors to detect when a user is attempting to leave a room without their keys/phone/wallet and alerts them (visually or audibly) before it’s too late. Our sensors can also help a user find missing items around the house or even out in the world. This addresses the problem by both preventing and dealing with the aftermath of user forgetfulness.

Description of Users Observed in Contextual Inquiry

Our target user group includes all people who leave important items behind. Specifically, we are focusing on rooms/buildings with self-locking doors, since the consequences of forgetting something are often more severe. A great example of this user group is college students. We also have easy access to these students and can make first-hand observations without much trouble. The first user we observed lives in a quad in Brown Hall. He is a junior in the COS department who describes himself as fairly organized. He spends a large amount of time in his room since it is one of his main places for studying and doing work. The second user we observed was a CBE senior living in a triple in 1903 Hall. He was generally organized, but his room had become quite messy in the past few weeks as his thesis workload increased. He often leaves his room to work, go to the gym, do laundry, and attend various extra-curriculars. He would like to be able to ensure that he is completely prepared when he leaves the room, with minimal effort on his part. The third user we interviewed lives in a quad in Little Hall. She is a senior in the PSY department and described herself as organized, though her room was somewhat messy. She would like a solution to help prevent expensive lock-outs.

Contextual Inquiry Interview Descriptions

We interviewed the three users in the environment where our system would be employed: their dorm rooms. Each of the three was observed leaving their room, and we logged the habits that they had formed for preparing to leave. Afterwards, we discussed with each participant their routine in detail and how they thought their current system could be improved. We also asked them what they would be willing to sacrifice for an entirely new system, and provided them with some of our initial ideas for the project in order to help focus and guide the discussion.

There were several common experiences that the users shared. They all commented on the habits they use to ensure they are ready to leave their rooms; this universally involved checking pockets/bags for wallets, phones, and keys. However, these systems fail catastrophically when key items are missing from the wallet, or when items are mistaken for each other. For example, users would remove their prox from their wallets to do laundry or go to the gym, and when they forgot to place it back into their wallet they were locked out. Along the same lines, users would lock themselves out if they felt their phone in their bag but mistook it for their wallet. These were by far the most common lockout cases. Users also commonly propped their doors to prevent lockouts, but this can lead to fines from fire safety. Two of our users also expressed interest in a device that would help them find lost items. Such a device would preferably be stationary, so that it does not become lost as well. Interviewees also stated that they would only use a system that was easy to install and did not change the form factor of their necessities very much.

Interestingly, one of the users we interviewed had investigated permanent solutions to the door-locking problem. They had set up a variety of systems in their room to allow access without their prox, so they could never be locked out. However, this was a massive security issue, and they were fined multiple times for their efforts. One user was willing to go to great lengths for such a system, even briefly considering a total room overhaul for an optimal setup, though they did acknowledge that such changes might be unfeasible.

Answers to Task Analysis Questions

1. Who is going to use the system?
We’re focusing the system on students, but it’s usable by pretty much anyone who lives in a house/dorm/apartment and owns keys, a phone, a wallet, a purse, or other important items.

2. What tasks do they now perform?
Currently, users are simply forced to remember everything they need to bring when they leave home. When they want to find missing items, they must perform either an inefficient sweep over their entire room(s) or a slightly better search of possibly incorrect “last seen” locations. Finding missing items outside of one’s home is pretty much a hopeless task.

3. What tasks are desired?
Ideally, users wouldn’t have to rack their brains for missing items every time they leave the home. Instead, they will be automatically reminded if they’re about to leave something behind. Searching for missing items, particularly in one’s own home, should be faster and easier than simply looking everywhere.

4. How are the tasks learned?
These tasks are mostly learned by habit; they are performed several times a day every day by users. Most users develop strategies for finding lost items early in life, though they may not confront the problem of remembering important items until they live on their own for the first time.

5. Where are the tasks performed?
A user will generally attempt to determine whether they have all necessary items immediately before leaving their abode, usually when close to the exit. Searching for missing items can occur anywhere, both inside the home and out.

6. What’s the relationship between user & data?
We don’t collect personalized information or create a centralized data store; therefore privacy concerns are minimal. The only data we collect is proximity data (e.g., are the user’s keys nearby?), which doesn’t need to be stored.

7. What other tools does the user have?
The user’s memory is their primary tool, both for remembering not to leave things behind and for trying to find missing items. However, memory is a fickle and unreliable tool. For finding one’s phone in particular, there do exist apps that use GPS or other mechanisms to track a missing phone. However, these tools may not work in all situations — for example, if the phone is powered off. For remembering items and being let back into a room after being locked out, a roommate might be a useful “tool.” Many users use hacked solutions — makeshift door stops, door mechanism modifications, and the like — to prevent their doors from ever closing.

8. How do users communicate with each other?
When locked out, Princeton students will call Public Safety with their phone, if they have it. If phoneless, they will generally attempt to borrow a phone from a neighbor or kind stranger. Roommates who are locked out may contact each other via cell-phone or email to borrow each other’s proxes. If users realize they have forgotten other important devices while away from their rooms, they may contact their roommates or significant others and request that they meet up and bring the forgotten devices.

9. How often are the tasks performed?
Leaving a room is done multiple times per day. Finding missing items (hopefully) is done less frequently, depending on the forgetfulness of the user. On average, the users interviewed and our group members get locked out on the order of once every 2-3 months.

10. What are the time constraints on the tasks?
There isn’t really a “time constraint” on not forgetting to take one’s keys/wallet/phone out of the room, although usually this is a process that occurs within just a few seconds. On the other hand, finding lost items (especially for important things like phones) is a task that is best accomplished within a short time frame — such as a half-hour to a couple of hours. If a user ends up locked out, they may need access within a short time frame if they have left necessary items in their abode, such as completed assignments.

11. What happens when things go wrong?
At worst, a user will be locked out of their room (potentially wearing nothing but a towel) with no way to call someone else for help. Less drastic scenarios include simply being locked out, being unable to start a car, being unable to open a door or a bike lock, being unable to make calls or send texts, and/or being unable to purchase things. Some of these scenarios could be seriously problematic; others are merely annoying.

Description of Three Tasks

Task 1: Being reminded to take keys/phones/wallet along when leaving a room.
Our system will have a proximity sensor and be located (by default) near the door. When a user opens the door to leave their room, our system will flash red LEDs and play a warning melody if certain items (e.g., keys, phone) are not detected. If these items are present, the system will light up green (and may also play a happy tune).
Current difficulty: Easy
Proposed difficulty: Trivial

Task 2: Being able to use the proximity detector in one’s house to find missing items.
Though our system will typically be located near the door, users will also be able to carry it around (with some kind of battery pack) in order to detect important items in their home. Since the area being searched is relatively small and confined, it should be relatively easy to find objects simply using proximity sensing. The device will have an “item-detecting” mode that the user can activate, and it will light up green and beep more quickly as it gets nearer to the missing item(s).
Current difficulty: Moderate
Proposed difficulty: Easy

Task 3: Using the proximity sensors to find missing items out in the world.
This task is very similar to the previous task in that it involves detection of missing items; however, finding missing items outside of the home environment is far more difficult due to the increased size of the area being searched. For this, the user will need to have a general sense of where their item(s) might be; unlike in the previous task, a broad sweep over the entire searchable area will not be feasible. Battery life and detection range are also more of a consideration for this task; we may need to use sensing devices with a longer range than (for example) RFID tags.
Current difficulty: Hard
Proposed difficulty: Moderate

Interface Design

Our system consists of several parts: tags of various form factors that users can attach to important items, and a door-frame-mounted sensor that checks for items when the user leaves the room and can also be detached and used as a handheld proximity sensor to locate lost items. The sensor will have a Li-ion battery, rechargeable over USB, so it can be used as a portable device. We believe that for wallets, a credit-card form factor tag would be minimally intrusive, whereas for keys and cell phones, a small fob (about the size of a quarter) would be more appropriate. Each of these tags would contain a battery and an antenna.

Our system alerts users when they attempt to leave their room or abode without every item they’ve tagged by playing an alert noise and flashing red LEDs. When a user leaves the room with all important items, the system plays a happy noise and flashes green LEDs. To add a new item to the set tracked by our system, the user presses a button on the door sensor and holds the new tag up to it until the sync completes. A user can remove an item from this set by a similar process. The visual and audible reminder provided by our system will help even users in an altered mental state remember their important items. The system can handle multiple users by maintaining separate device profiles for each inhabitant: it will alert a user leaving if it does not detect that user’s complete set of devices, and users can separately add and remove devices.

By removing the sensor from its door frame mount, the user automatically switches the system to proximity-sensing mode, so that it can be used to locate missing objects — both in the user’s room and elsewhere. When no device tags are detected, the system flashes red and beeps slowly; as the user nears the missing item(s), the system flashes green and beeps with increasing frequency. As far as we know, no other system grants this kind of functionality — at best, there exist ways to find lost phones using GPS, but only when the phones are powered on. Our system is far more versatile, especially since it implements preventative as well as remedial solutions to the problem of forgetting important items.
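To make the handheld mode concrete, here is a minimal Arduino-style sketch of the feedback loop. It assumes the tag hardware exposes some signal-strength reading; readTagStrength() is a hypothetical placeholder for that reading, and the pin choices are arbitrary:

const int ledPin = 9;     // stands in for the red/green LEDs
const int buzzerPin = 8;

int readTagStrength() {
    // Hypothetical: 0 means no tag detected, 1023 means a tag is very close.
    // A real implementation would query the tag reader hardware here.
    return analogRead(A0);
}

void setup() {
    pinMode(ledPin, OUTPUT);
    pinMode(buzzerPin, OUTPUT);
}

void loop() {
    int strength = readTagStrength();
    if (strength == 0) {
        // nothing detected: flash and beep slowly
        tone(buzzerPin, 1000, 50);
        digitalWrite(ledPin, HIGH); delay(500);
        digitalWrite(ledPin, LOW);  delay(500);
    } else {
        // the closer the tag, the shorter the gap between beeps
        int gap = map(strength, 1, 1023, 800, 50);
        tone(buzzerPin, 2000, 50);
        digitalWrite(ledPin, HIGH); delay(50);
        digitalWrite(ledPin, LOW);  delay(gap);
    }
}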

Storyboards

Task 1: Being reminded to take keys/phones/wallet along when leaving a room.

A clueless user about to leave his room without his keys.

Our system notifies the user that he has forgotten his keys.

Task 2: Being able to use the proximity detector in one’s house to find missing items.

A sad user unable to find his keys.

Our system can be used as a proximity sensor to help locate the missing keys.

Task 3: Using the proximity sensors to find missing items out in the world.

A helpless user who has just realized that he lost his phone at the beach.

Our system can be used as a proximity detector anywhere to find the missing phone.

Design Sketches

A sketch of the proximity sensing device.

How different items would be tagged by the device.

The door-mounted user interface.

The portable user interface.

 

Lab 2 – Name Redacted

Names: Brian, Matt, Joshua, Ed

Group Number: 20

We built a capacitive piano with four keys. In the final iteration, you can play one note at a time, and the sound is continuous for as long as a key is pressed. Our creative add-on is a knob (potentiometer) that allows a user to change the pitch. The project was successful in that it creates interesting tones and lets us push the idea of duets on the same instrument: in the final performance, the two performers performed different duties while simultaneously working together to create a beautiful sound. In the future, we would like to add more dynamic music instead of a single tone. We would also create more roles for additional people, to further our idea of multiple individuals performing on the same instrument. Finally, we would like to add some tactile feedback to the capacitive keys so that there is a physical response for the individual playing the music.

Prototype 1: Four capacitive sensors for four different notes. The notes are distinct and played one at a time, but if two keys are pressed, they are played sequentially to create pseudo-chords.

http://www.youtube.com/watch?v=m4fnaxtYZm0

// Graphing sketch
//Adapted from http://arduino.cc/en/Tutorial/Graph 

import processing.serial.*;

 Serial myPort;        // The serial port
 int xPos = 1;         // horizontal position of the graph

 int zero_pos;
 int one_pos;
 float max_pos;
 float min_pos;


 
 void setup () {
	 // set the window size:
	 size(1080, 720);
	 max_pos = height; 
	 min_pos = 0;       

	 // List all the available serial ports
	 // println(Serial.list());
	 // I know that the first port in the serial list on my mac
	 // is always my  Arduino, so I open Serial.list()[0].
	 // Open whatever port is the one you're using.
	 myPort = new Serial(this, Serial.list()[0], 9600);
	 // don't generate a serialEvent() unless you get a newline character:
	 myPort.bufferUntil('\n');
	 // set initial background:
	 background(224,228,204); 
	}
	void draw () {
 	// everything happens in the serialEvent()
 }
 
 void serialEvent (Serial myPort) {
	 // get the ASCII string:
	 String inString = myPort.readStringUntil('\n');

	 if (inString != null) {
		 // trim off any whitespace:
		 inString = trim(inString);

		 if (inString.length() != 0) {
		 	String[] sensor_strings = inString.split(" ");

		 	if (sensor_strings.length < 3) {
		 		println("RETURN");
		 		return;
		 	}

		 	float inByte = float(sensor_strings[0]); 
		 	inByte = map(inByte, 0, 1023, 0, height/3);

		 	float yPos = height;
		// draw the line:
		stroke(105,210,231);
		line(xPos, yPos, xPos, yPos - inByte);
		yPos -= (inByte + 1);

		inByte = float(sensor_strings[1]); 
		inByte = map(inByte, 0, 1023, 0, height/3);

		stroke(167,219,216);
		line(xPos, yPos, xPos, yPos - inByte);
		yPos -= (inByte + 1);


		inByte = float(sensor_strings[2]); 
		inByte = map(inByte, 0, 1023, 0, height/3);

		stroke(250, 105, 0);
		line(xPos, yPos, xPos, yPos - inByte);


		// reconstructed comparisons (assumption): track the extremes of the
		// plotted signal; note that y grows downward on screen
		if ((yPos-inByte) < max_pos) {
			max_pos = yPos-inByte;
		}
		if ((yPos-inByte) > min_pos) {
			min_pos = yPos-inByte;
		}
		drawMax(max_pos);
		drawMax(min_pos);


		 // at the edge of the screen, go back to the beginning:
		 if (xPos >= width) {
		 	xPos = 0;
		 	max_pos = yPos-inByte;
		 	min_pos = yPos-inByte;
		 	background(224,228,204); 
		 } 
		 else {
		 // increment the horizontal position:
		 xPos++;
		}
	}
}
}

void drawMax(float max_pos) {
	stroke(255, 0, 0);
	ellipse(xPos, max_pos, 2, 2);
}

Prototype 2: We are extending prototype 1 but now have a potentiometer that shifts the tone of all of the notes. This allows the instrument to have a larger range than just the four notes presented.

http://www.youtube.com/watch?v=LjhAQpXjqIY

#include <CapacitiveSensor.h>


/*
 * CapitiveSense Library Demo Sketch
 * Paul Badger 2008
 * Uses a high value resistor e.g. 10 megohm between send pin and receive pin
 * Resistor effects sensitivity, experiment with values, 50 kilohm - 50 megohm. Larger resistor values yield larger sensor values.
 * Receive pin is the sensor pin - try different amounts of foil/metal on this pin
 * Best results are obtained if sensor foil and wire is covered with an insulator such as paper or plastic sheet
 */
 
int speakerPin = 7;
int knobPin = 1;
int knobVal = 0;

int threshold = 3000;


CapacitiveSensor   cs_0 = CapacitiveSensor(3,8);        // 10 megohm resistor between pins 3 & 8, pin 8 is sensor pin, add wire, foil

CapacitiveSensor   cs_2 = CapacitiveSensor(2,10);       // 10 megohm resistor between pins 2 & 10, pin 10 is sensor pin

CapacitiveSensor   cs_4 = CapacitiveSensor(4,12);       // 10 megohm resistor between pins 4 & 12, pin 12 is sensor pin

CapacitiveSensor   cs_6 = CapacitiveSensor(6,13);       // 10 megohm resistor between pins 6 & 13, pin 13 is sensor pin

void playTone(int tone, int duration) {
  for (long i = 0; i < duration * 1000L; i += tone * 2) {
    digitalWrite(speakerPin, HIGH);
    delayMicroseconds(tone);
    digitalWrite(speakerPin, LOW);
    delayMicroseconds(tone);
  }
}

void playNote(char note, int duration) {
  char names[] = { 'c', 'd', 'e', 'f', 'g', 'a', 'b', 'C' };
  int tones[] = { 1915, 1700, 1519, 1432, 1275, 1136, 1014, 956 };

  // play the tone corresponding to the note name
  // reconstructed from here through the start of loop() (assumption);
  // applying knobVal as a pitch offset follows the prototype description
  for (int i = 0; i < 8; i++) {
    if (names[i] == note) {
      playTone(tones[i] + knobVal, duration);  // knob shifts every note
    }
  }
}

void setup() {
  pinMode(speakerPin, OUTPUT);
}

void loop() {
    long total_0 = cs_0.capacitiveSensor(30);
    long total_2 = cs_2.capacitiveSensor(30);
    long total_4 = cs_4.capacitiveSensor(30);
    long total_6 = cs_6.capacitiveSensor(30);

    knobVal = analogRead(knobPin);

    if (total_0 > threshold) {
     playNote('c', 100); 
    }
    
    if (total_2 > threshold) {
     playNote('d', 100); 
    }
    
    if (total_4 > threshold) {
     playNote('e', 100); 
    }
    
    if (total_6 > threshold) {
     playNote('f', 100); 
    }
    

//    Serial.print(millis() - start);        // check on performance in milliseconds
//    Serial.print("\t");                    // tab character for debug windown spacing
//
//    Serial.println(total);                  // print sensor output 1
    

    delay(10);                             // arbitrary delay to limit data to serial port 
}

Prototype 3: We extended prototypes 1 and 2 so that the notes now play continuously until the user releases his/her finger from the key. The potentiometer can shift a note while it is playing.

http://www.youtube.com/watch?v=QjxADA7NE1E

#include <CapacitiveSensor.h>


/*
 * CapitiveSense Library Demo Sketch
 * Paul Badger 2008
 * Uses a high value resistor e.g. 10 megohm between send pin and receive pin
 * Resistor effects sensitivity, experiment with values, 50 kilohm - 50 megohm. Larger resistor values yield larger sensor values.
 * Receive pin is the sensor pin - try different amounts of foil/metal on this pin
 * Best results are obtained if sensor foil and wire is covered with an insulator such as paper or plastic sheet
 */
 
int speakerPin = 7;

int knobPin = 1;

int threshold = 5000;

int knobVal = 0;


CapacitiveSensor   cs_0 = CapacitiveSensor(3,8);        // 10 megohm resistor between pins 3 & 8, pin 8 is sensor pin, add wire, foil

CapacitiveSensor   cs_2 = CapacitiveSensor(2,10);       // 10 megohm resistor between pins 2 & 10, pin 10 is sensor pin

CapacitiveSensor   cs_4 = CapacitiveSensor(4,12);       // 10 megohm resistor between pins 4 & 12, pin 12 is sensor pin

CapacitiveSensor   cs_6 = CapacitiveSensor(6,13);       // 10 megohm resistor between pins 6 & 13, pin 13 is sensor pin

void playTone(int tone, int duration) {
  for (long i = 0; i < duration * 1000L; i += tone * 2) {
    digitalWrite(speakerPin, HIGH);
    delayMicroseconds(tone);
    digitalWrite(speakerPin, LOW);
    delayMicroseconds(tone);
  }
}

void playNote(char note, int duration) {
  char names[] = { 'c', 'd', 'e', 'f', 'g', 'a', 'b', 'C' };
  int tones[] = { 1915, 1700, 1519, 1432, 1275, 1136, 1014, 956 };

  // play the tone corresponding to the note name
  // reconstructed as in prototype 2 (assumption); the declaration of
  // note_playing and the sensor/knob reads in loop() are assumed
  for (int i = 0; i < 8; i++) {
    if (names[i] == note) {
      playTone(tones[i] + knobVal, duration);  // knob shifts the note live
    }
  }
}

boolean note_playing = false;

void setup() {
  pinMode(speakerPin, OUTPUT);
}

void loop() {
    long total_0 = cs_0.capacitiveSensor(30);
    long total_2 = cs_2.capacitiveSensor(30);
    long total_4 = cs_4.capacitiveSensor(30);
    long total_6 = cs_6.capacitiveSensor(30);

    knobVal = analogRead(knobPin);
    note_playing = false;

    if (total_0 > threshold) {
     note_playing = true;
     playNote('c', 100); 
    }
    
    if (total_2 > threshold) {
     note_playing = true;
     playNote('d', 100); 
    }
    
    if (total_4 > threshold) {
     note_playing = true;
     playNote('e', 100); 
    }
    
    if (total_6 > threshold) {
     note_playing = true;
     playNote('f', 100); 
    }
    
    if (!note_playing) {
     noTone(speakerPin); 
    }
    

//    Serial.print(millis() - start);        // check on performance in milliseconds
//    Serial.print("\t");                    // tab character for debug windown spacing
//
//    Serial.println(total);                  // print sensor output 1
    

    delay(10);                             // arbitrary delay to limit data to serial port 
}

List of parts:
4 1-megaohm resistors
1 potentiometer
4 pieces of aluminum foil
4 alligator clips
1 Arduino
1 buzzer

Detailed Instructions:
To make the keyboard, tape four strips of foil to a piece of paper, making sure to leave one end loose so you can connect it later, and leave the top of the other end exposed so you can actually touch the foil. To make each key sensor, take a 10-megohm resistor and connect it to a digital output pin; then connect the other end to a digital input pin. Connect one end of an alligator clip to the input side of the resistor and the other end to the exposed end of one of the pieces of foil. Repeat this process for the three other keys.
For the pitch control, take a potentiometer and connect one pin to 5V, the middle pin to an analog input pin, and the last pin to ground.
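One practical note: the threshold values in the sketches above (3000 and 5000) depend heavily on the foil size, wiring, and resistor values, so they should be calibrated per build. A small sketch like the following, using the same CapacitiveSensor library, prints raw readings over serial so you can watch the values jump when a key is touched and choose a threshold between the idle and touched readings:

#include <CapacitiveSensor.h>

// Print raw readings for one key so a sensible threshold can be chosen.
CapacitiveSensor cs_key = CapacitiveSensor(3, 8);  // send pin 3, sense pin 8

void setup() {
    Serial.begin(9600);
}

void loop() {
    long reading = cs_key.capacitiveSensor(30);  // 30 samples per reading
    Serial.println(reading);  // watch this jump when the foil is touched
    delay(100);
}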

%eiip – P2 (Interviews)

Group Number: 18

Names:

Mario Alvarez ’14, Valya Barboy ’15, Bonnie Eisenman ’14, Erica Portnoy ’15.


Problem and Solution Overview:

Organizing books, finding them, and remembering where specific books are can be difficult. It becomes increasingly difficult as the number of books grows, and as people begin using books both for work and pleasure. Moreover, people who have multiple locations that contain books (multiple rooms, home and office, etc.) have trouble remembering which books are where, and continually end up needing a book that is inaccessible. Our solution is a bookshelf with an RFID scanner on it; each book on the shelf has an RFID tag placed on it. The shelf knows which books it contains and can tell the user that a book is on it when the user queries a software system. An important aspect of our system is that it doesn’t require the user to make lasting modifications to their books, which was a concern of most of the potential users we interviewed.


Descriptions of Users Observed:

The target user group we ended up interviewing was researchers who own a lot of books. The idea behind this target user group was to have people who use books in multiple ways, and who could therefore have more specific organizational needs. Moreover, these users tended to have multiple locations for storing books — at home, in the office, study carrels — and tended to also have borrowed books, which would need to be returned. This makes it more problematic if they lose or misplace books. We ended up interviewing some graduate students, in mathematics and physics, and some professors of comparative literature, English, and computer science. We wanted to see how people in different disciplines use books. For example, we noticed that those in the arts tended to have significantly larger numbers of books (by orders of magnitude), and therefore were forced to already have established organizational systems. Additionally, users in technical fields favored e-books for pleasure reading, but avoided them for research due to impracticality; the users we interviewed in the arts and humanities expressed extreme dislike for e-books. Based on this, we realized that our system would be better for a target audience of people who own no more than a few hundred books.


CI Interview Descriptions:

We observed two mathematics graduate students in their respective offices. They both kept their math books in the office, and their pleasure reading at home. The first student had some problems remembering which of her books were at home and which were in her office. The second graduate student had everything perfectly organized — fiction was organized by what he had or hadn’t read and then by author; mathematics was organized by subject — and knew exactly where everything was. He generally can locate a book without a problem (and demonstrated this by finding any book we asked for), but sometimes forgets he has certain books and buys multiple copies. Both used e-books only when the material was otherwise inaccessible, or when traveling, but preferred to do their reading using physical copies. Both also kept their library books on a separate bookshelf, so as not to misplace them. Finally, both said that an organizational system would be nice to have, especially to remember which books are where, but that they do not want to take the time to scan or otherwise document every book they own.

We also observed a professor of Visual Arts who has thousands of books. He had them organized by topic, language, and time period. He could find any given book within seconds. He also had an expansive library at home, which he used for research and for pleasure. The books he kept in his office were those that could be relevant to the classes he teaches. A problem he acknowledged having was moving books from one location to another (e.g., from his house to his office), but he was very good at remembering where his books were. He also never borrows books, and keeps his library books separate from everything else. Next, we observed an English professor in his home, where he keeps his largest collection of books (he has several personal libraries across his residences and offices). He keeps his books using a loose, informal system of organization, with books roughly organized by topic, and books he is currently using stacked on or near his desk. His book collection is so important to him personally – and he spends so much time with it – that he is more or less able to keep track of where everything is. This is a feat, since he estimates he has a few thousand books in his home alone. This system appeared to work well for him: for instance, after he mentioned Turing in passing, we asked him if he could find a book about Turing for us, and he was immediately able to show us where the book had been, as well as recall that he had moved it to one of his offices some time ago. He feels that strict organizational schemes “demystify” the process of finding books, and that the serendipity of finding a book different from the one he was originally looking for is important to his research.

We also went to the Princeton Public Library and observed people trying to find books on the shelves. They would look up a book on a computer, look along the edges of the bookshelves to find the section, and walk along the aisle until they found the right first number, and then they would look more closely once they were nearby. In order to find the specific book, people looked very closely at the titles and numbers on the shelves. We then observed people shopping at CVS, and noted that they had a very similar tactic: they would look at the aisle sections to determine which aisle to go to, and then look more closely once there. One thing we noticed was that when people look for things they tend to physically run their fingers along the spines, shelves, or objects. If they don’t want to touch the objects, they’ll hover next to them, scanning physically as well as visually.


Answers to Eleven Task Analysis Questions:

1. Who is going to use the system?

Our system could be used by anyone with a personal collection of books. However, we envision it being most useful to people who use physical books for research in technical fields. There is very little chance that people who already have thousands of books will catalog all of them, so they would be less likely to use our system; our system will likely be more amenable to a medium-size collection (up to a few hundred books) than to an extremely large collection anyhow. Though we are designing the system with researchers in mind, it will likely be useful to those outside research fields as well.

2. What tasks do they now perform?

Our target group performs a few key tasks while interacting with their book collections. They search for books that they know are in their collection (though they may not know the books’ exact location). After they are done using a book, they need to be able to return it to their collection. They sometimes need to go through their collection to determine whether they own a particular book. Finally, they need to be able to add new books to their collection. All of these tasks generally operate within the frame of a particular organizational scheme; therefore, implicit in each action is that the user should be able to do it while maintaining the invariants necessary to keep their organizational scheme usable. If they do not do this, the tasks of finding books become difficult.

3. What tasks are desired?

Essentially, the users desire to perform the same set of tasks more efficiently. It can take a long time for a user to determine whether they already own a book, since, without a comprehensive catalog of their collection, they need to examine all their books to determine that they do not have a particular one. Additionally, users need to be able to find books consistently and efficiently; as mentioned above, this is not always the case. Finally, researchers tend to have their book collections spread across multiple locations (for example, their home and their office), so determining which books are in which locations is another important task (this was particularly true of the English professor we interviewed).

4. How are the tasks learned?

Currently users employ some combination of two complementary strategies in order to keep their books organized (i.e., learn and remember where books are). First, they can impose a formal system of organization, grouping books together based on commonalities (as the film professor we interviewed did). In an extreme case, they might use an industrial-strength system such as Dewey Decimal. Second, they can use spatial memory, connecting these groups of books to the parts of the space in which they are stored for more rapid retrieval. Both have drawbacks: maintaining a formal system requires discipline and an investment of time each time a book is stored or retrieved, while using spatial memory effectively requires an intimate familiarity with one’s collection and the space that collection occupies, which can take years to develop.

5. Where are the tasks performed?

Wherever the users keep their book collections – so the home and office at a minimum, and possibly other locations for users with more widely dispersed collections.

6. What’s the relationship between user & data?

In this case, the data is the set of books the user owns, and where those books are. Users have an expectation of privacy for their book collections, as these can be highly personal in nature. Though sharing the full list of books users own will not be required, users may want to be able to share certain information about their collections with friends (for instance, if their friends want to borrow a book from their collection). Users should be able to access data about their own collections remotely – for example, if a user is at a bookstore, she should be able to look up whether she already owns a copy of a book before buying a new one.

7. What other tools does the user have?

The user currently has standard bookshelves. The user may also have software applications to aid in cataloging, such as BookCrawler, but as the film professor we interviewed mentioned, their utility is questionable because they are not tightly integrated with the physical locations of the books like our system is. Additionally, the user may use a traditional cataloging system (such as Dewey Decimal) in order to precisely specify the correct location for each book; however, this requires knowledge of the system and a large amount of effort to maintain, with the result that few actually use such systems.

8. How do users communicate with each other?

Users may borrow books from each other and lend books to each other. Additionally, they must negotiate with others who share their space to ensure that their system of organization is maintained properly. In the case of the home, this may be with other family members who share shelf space (and who may share the collection, making them users as well). In the case of the office, this may be with co-workers or colleagues (researchers in particular often have shared office space). Communication tends to be informal, verbal, and in-person.

9. How often are the tasks performed?

This varies greatly from user to user and from collection to collection, depending on the user’s habits and the purpose of the collection (for instance, whether it serves as an archive of old, seldom-used material or an active repository for currently-relevant research). Generally, the tasks of storing and retrieving books already in the collection are performed more frequently (several times a day to a couple of times a week) than adding books or checking whether a book is in the collection (a couple of times a week to a few times a year).

10. What are the time constraints on the tasks?

There are no time constraints per se, but if it takes too long to find a book, the individual may give up (although some, like the English professor we interviewed, value the experience of finding a different book than one originally sought). Doing research efficiently depends on finding research materials quickly, so time is likely most important for professional researchers, who want to free up time to do actual research rather than looking for books.

11. What happens when things go wrong?

If the system is used and then stops working (or stops being used), the user’s book collection may become extremely disorganized. As a result the user may not be able to find her books any longer. This is not a catastrophic result, but may make it difficult for the user to do her job if she is a researcher. The user may also lose books if their organizational system loses track of them. This is a potential problem with our system: since it does not require a particular physical layout of books, if users rely on our system and no longer organize their book collection in conventional ways, their collections will be left in a state of disarray if our system stops working.


Description of Three Tasks:

1. Retrieving a book from the collection

The difficulty of this task currently depends on a number of factors, such as how many different places the user keeps books. None of the users we talked to used a strict organizational scheme, though many had their books roughly sorted based on whether or not they were for research, as well as by topic or by time period. With such a scheme, most users are able to find books with moderate difficulty.

Under our proposed system, the user would be able to simply query our system for the book, which would then indicate its physical location. Our system would therefore make it easy to perform this task.

2. Shelving a new book, or returning a book to the collection

Currently, the difficulty of this task is proportional to both the user’s number of books and the strictness of their organization system. If their system is lax and flexible it might be easy to file a book – but this will make retrieving books difficult. If their system is strictly organized, keeping it organized requires significant effort and planning on the user’s part. On average, this tends to be a moderately difficult task.

Our system would keep track of where each book is, so you can put them wherever you like – either maintaining a conventional organizational scheme on top of BiblioFile, or not. The task becomes easy.

3. Determining if you own a book, or if a borrowed book is in your possession

Currently, if you do own a book, determining that you do own it is as easy as retrieving it – that is, of moderate difficulty. However, if you don’t own the book, you have to go through your entire collection to make sure that it is not in your collection at all, as opposed to simply being present but out of place. In this case, the task becomes hard.

With our system, once a user has entered all her books into the system, she can know with confidence whether or not she already owns a book, simply by looking that book up using our interface. This task is now easy.


Interface Design:

1. Provide a text description of the functionality of your system. What can a user do with it, and how?

Our system keeps track of a user’s books and their locations. A user can ask the system for the approximate location of a book, and the system will indicate where on the shelf it can be found. Additionally, a user could ask the system whether or not they own a particular book. We envision the user interacting with our system through a web application that lets them search for books as well as view all books in the collection. To add a book to the bookshelf system, we envision the user adding a physical “tag” to the book, which will then allow the book/tag combo to be automatically registered via our website.
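As an illustration of the catalog logic behind such a web application (a sketch under our own assumptions; the names here are hypothetical, not the actual implementation), the core is a mapping from tag IDs to book records that shelf scans keep up to date:

#include <iostream>
#include <map>
#include <string>

struct BookRecord {
    std::string title;
    int shelf;  // which modular shelf last detected the tag
};

std::map<std::string, BookRecord> catalog;  // tag ID -> book record

// Called when the user registers a new tag/book pair via the website.
void registerBook(const std::string& tagId, const std::string& title, int shelf) {
    catalog[tagId] = {title, shelf};
}

// Called whenever a shelf's RFID scan reports a tag it can see.
void tagSeen(const std::string& tagId, int shelf) {
    auto it = catalog.find(tagId);
    if (it != catalog.end()) it->second.shelf = shelf;
}

// Answers both "where is this book?" and "do I own this book?".
void locate(const std::string& title) {
    for (const auto& entry : catalog) {
        if (entry.second.title == title) {
            // the matching shelf's LEDs would light up here
            std::cout << title << " is on shelf " << entry.second.shelf << "\n";
            return;
        }
    }
    std::cout << "No copy of \"" << title << "\" in the collection\n";
}

int main() {
    registerBook("tag-42", "Ender's Game", 1);
    tagSeen("tag-42", 2);    // the book was moved to shelf 2
    locate("Ender's Game");  // -> shelf 2
    locate("Snow Crash");    // -> not owned
    return 0;
}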

2. What benefits does the application offer to the user? What is the scope of functions offered? How will your idea differ from existing systems or applications?

Our application allows the user to quickly find their books and determine what books they own without using rigid cataloging systems. Traditional organizational schemes are hard to maintain, and even harder to reconfigure. Our system allows users to place their books in any configuration within the special bookshelf, which gives the users more flexibility. Existing systems require the user to either adhere strictly to an organizational scheme such as Dewey Decimal, or to perform some specific action each time a book is moved (such as scanning a barcode).

3. Provide 3 storyboards. Each one should show how someone would use your system to accomplish one of the three tasks you chose above. Show motivation for using the system, as well as steps the users will go through to accomplish the task.


Thanks to his Biblio-File system, this user is easily able to find and return his friend’s copy of Ender’s Game.


Freed from the constraints of a rigid organizational system, this user lets his imagination run wild!


Using his Biblio-File system, this scholar is able to quickly find the esoteric works he needs to conduct his research productively.

4. Provide a few sketches of the system itself. What might it look like? How might a user interact with it?

Hardware – Shelving: Our system will consist of one or more stacked modular shelves. Each shelf will contain an RFID sensor that will detect books contained within it and transmit that information to the central server. It will also contain LED lights that will light up when a user selects a book in the software.

A modular shelving unit for BiblioFile

Hardware – RFID Stickers: The books’ presence will be sensed via RFID stickers that can be placed either inside a book (if the user owns the book) or on a bookmark that is placed inside the book (if the user does not own the book). These RFID tags can be disassociated from books once they are permanently removed from the collection, and reused with other books.

Usage of the RFID stickers

 

Software – Lookup: Using an interface similar to that of Music on iOS, users will be able to search the collection or browse by any of several types of metadata that the system tracks for each book. When they find the book they want, they tap its title, and its shelf will light up underneath its approximate location. To edit the collection (delete books), users tap the edit button.

Software – Adding a Book: The system will initially try to use OCR to get the book’s information from the cover, after which the user will have the opportunity to edit or add information. The user then places the book on the shelf, so that the system knows which book is associated with which tag.

Interface mockup for looking up a book and for adding a new book to the collection

Software – Locating a Book: When a user taps on the book from the Lookup screen, she will be taken to a page containing the book’s information and tags associated with the location of the shelf that it is currently on. If the user desires, she may tap the illuminate button to have the shelf light up.

Interface mockup for locating a book in the collection
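As a rough sketch of how the illuminate feature might reach the hardware, each shelf module could listen for a one-byte command from the central server. The protocol, pin choice, and timing below are assumptions for illustration, not the actual firmware:

// Hypothetical shelf-module sketch; 'L' = "light up this shelf"
const int ledPin = 9;               // LED strip under the shelf lip
const unsigned long GLOW_MS = 5000; // how long to stay lit

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);               // link back to the central server
}

void loop() {
  if (Serial.available() > 0) {
    if (Serial.read() == 'L') {     // server asked us to illuminate
      digitalWrite(ledPin, HIGH);
      delay(GLOW_MS);
      digitalWrite(ledPin, LOW);
    }
  }
}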

  

 

P2 – Runway

Team 13 – CAKE

Connie (task description, interface design, blog post), Angela (task analysis, interface design), Kiran (contextual inquiries, interface design), Edward (contextual inquiries, task description)
(in general, we collaborated on everything, with one person in charge of directing each major component)

Problem/Solution

3D modeling is often highly unintuitive because of the disparity between the 3D workspace and the 2D interface used to access and manipulate it. Currently, 3D modeling is typically done on a 2D display, with a mouse that works on a 2D plane. This combination of two 2D interfaces for interacting with a 3D one (with the help of keyboard shortcuts and application tools) is hard to learn and unintuitive to work with. A 3D, gesture-based interface for 3D modeling combines virtual space and physical space in a way that is much more natural and intuitive to view and interact with. Such a system would have a gentler learning curve and facilitate more efficient interactions in 3D modeling.

Contextual Inquiry

Users

Our target user group is limited to individuals who regularly interact with 3d data and representations, and who seek to manipulate it in some way that depends on visual feedback. These are the people who would find the most practical use out of our system, and who would benefit most in their workflow.

Our first user was a professor in Computer Graphics. He has been doing 3D graphics for many years and has experience with many different programs and interfaces for manipulating 3D data. He stated that he no longer personally performs heavy 3D tasks (at least with the interface he showed us), though he often shows new students how to use these interfaces. He likes exploring applications of 3D graphics to the humanities, such as developing systems for archaeological reconstruction. He does not focus much on interfaces, as he believes that a good spatial sense is more useful than an accessible interface (though for the applications that he develops, he is happy to alter the interface based on user feedback).

Our second user was a graduate student in Computer Graphics who has been at Princeton for several years. He has experience in 3D modelling using Maya, and his primary research project relates to designing interfaces for viewing and labelling 3D data. Because of his research and undergraduate background, he had a lot of experience with basic 3D viewing manipulations and was very adept at getting the views he wanted. His priority for his interfaces was making them accessible to other, inexperienced users; this was because his research is supposed to be used for labelling objects in point clouds (which could be outsourced). In terms of likes and dislikes, he stated that he did not really get into modelling because it required too much refinement time and effort.

Our third interviewee was an undergraduate computer science student from another university. He has experience in modelling and animation in Maya, and did so as a hobby several years ago. He has taken courses in graphics and game design, and has a lot of gaming experience. However, his experience with Maya was entirely self-taught, and he acknowledged that his workflows were probably very unlike those of expert users. Because of this, his priorities were not speed or accuracy of tasks, but being able to create interesting effects and results. He was happiest when experimenting with the sophisticated, “expert” features.

Interviews

We observed our first two CI subjects in their offices, with their own computers, monitors, mice, and keyboards. They were usually alone and focused on their task while using their systems. The last user did his modelling in his home or dorm room, generally alone or working with friends. We asked our subjects to show us the programs that they used to do 3D manipulation/modelling, and perform some of the tasks that they normally would. Following the master-apprentice model, we then asked them to go over the tasks again, and explain what they were doing as if they were training us to perform the same things. After this, we tried to delve a bit deeper into some of the manipulations that they seemed to take for granted (such as using the mouse to rotate and zoom) and have them try to break down how they did it, and why they might have glossed over it. Finally, we had a general discussion about their impressions about 3D manipulation in general, especially focusing on other interfaces they had used in the past and the benefits/downsides compared to their current programs.

The most common universal task was viewing the 3D models from different perspectives – each user was constantly rotating the view of the data (much less frequently panning or dollying). Another fairly common task was selecting something in the 3D space (whether a surface, a point, or a model). After selection, the two common themes were some sort of spatial manipulation of the selected object (like moving or rotating it), or non-spatial property editing of the selected object (like setting a label or changing a color). It seems that the constant rotation was helpful for maintaining a spatial sense of the object as truly 3D data, since without the movement it could simply look like a 2D picture of it (this is especially true because you can translate or zoom a single 2D image without gaining any more 3D information, whereas rotation is a uniquely 3D operation).

Essentially all of the operations we observed being performed fell into the categories mentioned above.

Our first user’s program was used for aligning several 3D scans to form a single 3D model; his operations involved viewing several unaligned scans, selecting a single one, and moving/rotating it until it was approximately aligned with another model. We noted that he could not simultaneously change views and move the scans, so he often had to switch back and forth between changing views and manipulating the object.

Our second interviewee showed us his research interface for labeling point cloud data. The point cloud labeling application was primarily about viewing manipulation: the program showed the user a region of a point cloud, and the user would indicate whether it was a car, streetlight, tree, etc. The user only had to rotate/pan/zoom a little bit, if necessary, to see different views of the object under consideration, and sometimes select pre-segmented regions of the point cloud.

The Maya workflow was far more complex. For example, the skinning process consisted of creating a skeleton (with joints and bones), associating points on the model surface with sections of the skeleton, and then moving the joints. The first part was very imprecise, since it involved repeatedly creating joints at specific locations in space. After creating a joint, the user had to carefully drag it into the desired position (using the top, side, and front views). The user thought this was rather tedious, although the multiple views made it easy to be very precise. He did not go into much detail about the skinning process, using a smart skinning tool instead of “painting skinning weights” on the surface (which basically looked like spray-painting the surface with the mouse). Finally, joint manipulation just involved a small sphere around the joint that had to be rotated using the mouse. He also described how the advanced features generally worked (but didn’t actually show them) and described them as generally involving selecting a point and dragging it elsewhere, or setting properties of the selected object.

Task Analysis

  1. Who is going to use the system?
    Our system can be used in 3D Modelling: Game asset designers, film animators/special effects people, architectural designers, manufacturing, etc. More generally, we can use our system for basic 3D Manipulation and viewing – Graphics research, medical data examination (e.g. MRI scans), biology (examining protein/molecular models), etc. People in these fields generally are college educated (since the application fields are in research, industrial design or media), and because of the nature of the jobs they will probably have experience with software involving 3D manipulation already. They certainly must have the spatial reasoning abilities to even conceptualize the tasks, let alone perform them. It may be common for users to have more artistic/design background in non-research application contexts. We imagine that there is no ideal age for our system; because of the scope of the tasks we expect that the typical user will be teenaged or older. Additionally, our system requires two functioning eyes for the coupling of input-output spaces to be meaningful.
  2. What tasks do they now perform?
    In 3D modeling, they take 3D scans of models and put them together to form coherent 3D models of the scanned object. They also interact with 3D data in other ways (maps, point cloud representations of real-life settings), create and edit 3D objects (models for feature films and games: characters, playing fields), define interactions between 3D objects (through physics engines, for example, defining actions that happen when collisions occur), and navigate through 3D point clouds (that act as maps of, say, a living room: see floored.com).
  3. What tasks are desired?
    On a high level, the desired tasks are the same as the tasks they now perform; however, performing some of these tasks can be slow or unintuitive, especially for those who are not heavy users. In particular, manipulation of perspective is integral to all of the tasks mentioned above, and manipulating 3D space with 2D controls is not intuitive and often difficult to learn so that operations run smoothly.
  4. How are the tasks learned?
    Our first interviewee highlighted the difference between understanding an interface (getting it) and being good at using it intuitively (grokking it). The first of these involves learning how things work: right click and drag to translate, click and drag to rotate, t for translation tool, etc. These facts are learned through courses, online training and tutorials, and mentorship. The second part can only be done through lots of practice and actual usage to achieve familiarity with the 3D manipulations. Our first interviewee noted that, after gaining the spatial intuition for one type of interface, other such tasks and interfaces often become much easier to grok.
  5. Where are the tasks performed?
    Tasks are performed on a computer, typically in offices, as they are often a part of the user’s job. Animators generally work in very low lighting, to simulate a cinematic environment. People are free to stop by to ask questions, converse, etc. — which interrupts the task at hand, but does not actively harm it, just as with other computer-based tasks. Some people also design at home, for learning or for personal app or game development.
  6. What’s the relationship between user & data?
    The user wants to manipulate the data (view, edit, and create it). Currently, most 3D data is commercial, and the user is handling the data in the interests of a company (e.g. animated models for films), or for research purposes with permission given by a company (e.g. Google Streetview data). Some of it can be personally acquired, e.g. with kinect. Some users create the data: for example, designing game objects in video games, or 3D artwork for films.
  7. What other tools does the user have?
    Manipulation of 3D data requires tools with computing ability — with a computer, there are other interfaces, such as through the command line, to select and move objects (specifying exact locations). With animation, animators can use stop-motion animation, moving actual objects incrementally between individually photographed frames. In general, a good, high-resolution display is a must. Most users do their input via mouse and keyboard. There are some experimental interfaces that use specialized devices such as trackballs, 3D joysticks, and in-air pens. Many modellers also use physical models or proxies to approximate their design, e.g. animators might have clay models and architects might have small-scale paper designs.
  8. How do users communicate with each other?
    Users communicate with each other both online and in person (depending on how busy they are and how close they are located) for help and advice; they also report to managers with progress on their tasks (for instance, there are developer communities for specific brands of technology, such as http://forum.unity3d.com/forum.php?s=23162f8f60e0b03682623bf37fd27a46).
    In general, modelling has not been a heavily collaborative task. 3D modellers might have to discuss with concept artists on how to bring their ideas into the computer. Different animators might be working on different effects on the same scene in parallel, such as texturing and animating, or postprocessing effects.
  9. How often are the tasks performed?
    Animators undertake 3D manipulation jobs daily — for almost the entire day (and during this time, continually manipulating the view and selecting objects to create and edit the 3D data). Researchers, on average, tend to perform the tasks less frequently. The professor we interviewed rarely manipulates the 3D data himself (except for demos); the graduate student still interacts with the data daily, as his research project is to design an interface that involves 3D interaction. In terms of individual tasks, the function performed most frequently is by far changing the perspective. This happens essentially continuously during the course of a 3D manipulation session, almost to the point that it doesn’t feel like a separate task. Next most common is selecting points or regions, as these are necessary for most actual manipulation operations.
  10. What are the time constraints on the tasks?
    As changing perspective is such a common task, required for most operations, it is a task which users will not want to spend much time on (given that their goals are larger-scale operations). For operations which they are aware require a significant amount of time (e.g. physical simulations, rendering), they are willing to wait, but would certainly prefer them to be faster — they are also more willing to wait for something like a fluid simulation if they are aware of what the end result will generally be like (which is another problem).
  11. What happens when things go wrong?
    Errors in modelling can generally be undone, as the major software keeps track of change history for each individual operation. Practical difficulties may arise in amount of computer resources required to create a model of high resolution.

Task Descriptions

  1. (Easy) Perspective Manipulation
    This task classically involves rotating, panning, and zooming the user’s view of the scene to achieve a desired perspective or to view a specific part of the scene/model. Perspective manipulation is a fundamental, frequent task for every user we observed. It seems to serve the dual purpose of preserving the user’s mental model of the 3D object, as well as presenting a view of the scene where they can see/manipulate all the points necessary to complete their more complex tasks.
    Currently, this task has a large learning curve, but once people are used to it, it becomes easy and natural; the only hurdle is that with current interfaces, it is not possible to change this while performing another mouse-based task. With our proposed interface, we first propose to separate the two purposes of preserving spatial sense and presenting manipulable views. With a stereoscopic, co-located display, we make preserving spatial sense almost a non-issue with no learning curve, especially with head-tracking. We also believe that using a gestural, high degree-of-freedom input method can allow for more intuitive camera control, equating to an easier learning curve. We also note that using gestural inputs will allow users to perform perspective manipulation simultaneously with other operations, which to us is more in line with how the users conceive of this (as a non-task, rather than a separate task).
  2. (Medium) Model Creation
    This task is to create a 3D mesh of some desired appearance. Depending on the geometry of the desired model, it can be created by starting off by combining and editing some simple geometry (spheres, ellipsoids, rectangular prisms), or modelled off of a reference image, from which an outline of the model can be drawn, extruded, and refined to create a model. The task involves object (vertex, edge, face) selection, creation, and movement (using perspective manipulation to navigate to the location of the point of interest), and typically involves many iterations to achieve the desired look or structure.
    Game designers and movie animators perform this task very often, and a current flaw of the system is that the creation of a 3D shape happens in a 2D environment. We anticipate that creating a 3D model in 3D space will be much more intuitive.
  3. (Hard) 3D Painting
    Color and texture on 3D models gives a sense of added style and presence. Many artists use existing images to texture a model, e.g. an image of metal for the armor of a model of a medieval knight, as it is relatively easy and efficient. However, when existing images do not suffice for texturing an object, an artist can paint the desired texture onto the object. Such 3D painting is a very specialized skill, as painting onto a 3D object from a 2D plane is very different from traditional 2D painting (many artists who are skilled in 2D painting are not in 3D painting, and vice versa) and can be unintuitive. Major platforms for 3D painting project a 2D image onto the object, which can cause unexpected distortions for those unfamiliar with the sense of operation. In our interface, we intend for users to be able to paint onto an object in 3D space with their fingers.

Interface Design

Functionality

We intend to include functionality for most basic types of object manipulation (rotation, translation, scale), which will be mapped to specific gestures, typically involving both hands and arms. We also want to allow for more precise manipulation such as selection/distortion and 3D painting, which involve gestures using specific fingers in more precise movements. To add control and selection capabilities, we hope to incorporate a tablet or keyboard, perhaps to change interaction modes or select properties such as colors and objects. Together, these will encompass a considerable portion of the functionality that basic 2D interfaces currently provide, while making the interaction more intuitive because it happens in 3D space.
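As a sketch of how such a mapping might be structured in code, one possible dispatch from tracked hand poses to manipulation modes is shown below. Every type and function name here is hypothetical; this is one way it could work, not a committed design:

#include <cmath>

enum class Mode { Rotate, Translate, Scale };

struct Hand  { float x, y, z; };
struct Model {
    void translate(float dx, float dy, float dz) { /* update transform */ }
    void rotate(float yaw, float pitch)          { /* update transform */ }
    void scaleBy(float s)                        { /* update transform */ }
};

static float dist(const Hand& a, const Hand& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Called once per tracking frame with current and previous hand poses;
// the mode would be switched via the tablet or keyboard.
void applyGesture(Mode mode, Model& m,
                  const Hand& left, const Hand& right,
                  const Hand& prevLeft, const Hand& prevRight) {
    switch (mode) {
    case Mode::Translate:   // model follows the dominant hand
        m.translate(right.x - prevRight.x,
                    right.y - prevRight.y,
                    right.z - prevRight.z);
        break;
    case Mode::Rotate:      // sweep of the off hand maps to yaw/pitch
        m.rotate(left.x - prevLeft.x, left.y - prevLeft.y);
        break;
    case Mode::Scale: {     // spread or pinch both hands to scale
        float d0 = dist(prevLeft, prevRight);
        if (d0 > 0) m.scaleBy(dist(left, right) / d0);
        break;
    }
    }
}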

Storyboards

Easy task: Perspective Manipulation


Medium task: Model Creation


Hard task: 3D Painting

System Sketches

Top: Front view of the system, including all component devices. Middle: Tablet (or keyboard) for additional selection and control, like mode or color change. Bottom: Side view of system, showing the devices and basic form of interaction.

Top: basic ways for the user to interact with the system using hand gestures. Bottom: More precise gestures and interactions that require greater accuracy. (Not shown: control mechanisms such as a tablet or keyboard to switch between them, or provide extra choices for functionality)

L2 – Dohan Yucht Cheong Saha

Team Members:

  • Andrew Cheong (acheong@)
  • David Dohan (ddohan@)
  • Shubhro Saha (saha@)
  • Miles Yucht (myucht@)

Group Number: 21

Description

For our project, we built an instrument that allows a person to conduct the tempo of a piece of music by walking. The primary components of the instrument are piezos attached to each foot and two buzzers. Taking a step moves the song along to the next note, and each song can have both a bass and treble part play simultaneously. We built this project because we like the idea of being able to control existing music through motion. We are pleased overall with the final product because it works as we envisioned it and is actually pretty fun to use. There are a number of possible improvements that we would like to add, including the ability to change to different songs and read music from arbitrary midi files. Additionally, the device is somewhat difficult to attach to your body: perhaps this could be made more portable by the use of a piezo sensor integrated into your shoes (along with a battery pack for the Arduino). We are also limited to songs which change on each tap of the foot, as opposed to songs that might have several pitches play per beat. Other possibilities would be to synthesize the sound using processing or use a similar interface to create music (e.g. a different drum for each foot) as opposed to controlling existing music.

Prototypes

Before building our final product, we built three separate prototypes, with the third leading to our final design.

Instrument 1 – Conducting a Choir

When building our first prototype, we set out to be able to control the volume and pitch of a song by raising and lowering our hands (as if conducting a musical group). While the final result worked, it did not perform as well as we had hoped. The main issue with the instrument is that it is difficult to properly estimate changes in position from the accelerometer (although it naturally works very well for detecting actual accelerations).
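To illustrate the problem, here is a toy example of the drift (not our prototype code): integrating acceleration twice means that any small, constant bias in the reading grows quadratically in the position estimate.

const float dt = 0.01;          // assume a 10 ms sampling interval
const float bias = 0.02;        // a tiny constant sensor error

float velocity = 0, position = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  float accel = 0.0 + bias;     // hand is actually still; only bias remains
  velocity += accel * dt;       // bias accumulates linearly in velocity
  position += velocity * dt;    // and quadratically in position
  Serial.println(position);     // watch the "position" estimate wander off
  delay(10);
}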

Instrument 2 – Anger Management

We liked the idea of using the force of a knock on a piezo to control sound. In this project, hitting your computer results in it playing sound back at you that corresponds to how hard you hit it.

Instrument 3 – Walking Conductor

This prototype is the core of our final instrument and is composed of piezos attached to each foot and a beeper. Knocks (e.g. steps) on the piezos trigger the next note in a melody to play.

 

Final Instrument

We decided to modify the walking conductor instrument to allow multiple parts to play at once. The control mechanism remained the same, but the addition of a second beeper allows us to play a bass part at the same time as the melody.

Parts list:

  • 2 piezo elements
  • 2 beepers
  • 2 rotary potentiometers
  • 2 1-megaohm resistors
  • 1 Arduino Uno
  • 4+ alligator clip cables
  • Electrical tape

Assembly Instructions:

  1. On a breadboard, connect one end of one resistor to analog pin A0 and the other end to ground. Connect one end of the other resistor to analog pin A1 and the other end to ground.
  2. Connect one piezo element in parallel with each resistor, attaching it to the breadboard using the alligator clip cables.
  3. On a breadboard, connect pin 1 of one potentiometer to digital pin 3, pin 2 to one beeper, and pin 3 to ground. Connect pin 1 of the other potentiometer to digital pin 6, pin 2 to the other beeper, and pin 3 to ground.
  4. Connect the other pins on each beeper to ground.
  5. Attach the piezo elements to your shoes using electrical tape.
  6. Run around to your favorite song!

Source Code Instructions:

  1. This code makes use of the “Arduino Tone Library”, which allows one to play multiple notes simultaneously using the Arduino. It is required to run our code, so download it using the link above. The library comes from the Rogue Robotics project.
  2. Download our code below, and begin controlling Thus Spoke Zarathustra as you walk! Different music can be played by replacing the trebleNotes and bassNotes arrays with different note sequences.
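For example, assuming the standard note constants from pitches.h, the two arrays could be swapped for the opening of “Ode to Joy”. Note that playNextNote() hard-codes the last index of the arrays (currently 57), so that bound must be updated to match the new arrays as well:

// Example replacement melody: the opening of "Ode to Joy".
// Both arrays have 16 entries, so the bound in playNextNote()
// would change from 57 to 15.
int trebleNotes[] = {NOTE_E4, NOTE_E4, NOTE_F4, NOTE_G4,
                     NOTE_G4, NOTE_F4, NOTE_E4, NOTE_D4,
                     NOTE_C4, NOTE_C4, NOTE_D4, NOTE_E4,
                     NOTE_E4, NOTE_D4, NOTE_D4, REST};
int bassNotes[]   = {NOTE_C3, NOTE_G3, NOTE_C3, NOTE_G3,
                     NOTE_C3, NOTE_G3, NOTE_C3, NOTE_G3,
                     NOTE_C3, NOTE_G3, NOTE_C3, NOTE_G3,
                     NOTE_C3, NOTE_G3, NOTE_C3, REST};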

Code:

/*
 * COS 436 Lab 2: Sensor Playground
 * Author: Miles Yucht, David Dohan
 * Plays "Thus Spoke Zarathustra" by Richard Strauss on beats measured by piezo elements.
 */
#include "pitches.h"
#include "Tone.h"
#ifdef REST
# undef REST
#endif
#define REST -1

Tone bassTone;
Tone trebleTone;

int treblePin = 3;
int bassPin = 6;
int piezoPin1 = A0;
int piezoPin2 = A1;

int threshold = 50;

int currentNote = 0;

int pauseTime = 100;

//melody to Thus Spoke Zarathustra
int trebleNotes[] = {NOTE_C4, NOTE_G4, NOTE_C5, REST, NOTE_E5, NOTE_DS5, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, 
    NOTE_C4, NOTE_G4, NOTE_C5, REST, NOTE_DS5, NOTE_E5, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, REST, NOTE_C4, NOTE_G4,
    NOTE_C5, REST, NOTE_E4, NOTE_A4, NOTE_A4, NOTE_B4, NOTE_C5, NOTE_D5, NOTE_E5,
    NOTE_F5, NOTE_G5, NOTE_G5, NOTE_G5, NOTE_E5, NOTE_F5, NOTE_G5, NOTE_A5, NOTE_B5, NOTE_C6, REST};

//bassline to Thus Spoke Zarathustra
int bassNotes[] = {NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, 
NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, 
NOTE_C4, NOTE_G3, NOTE_C4, NOTE_G3, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_C4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_C4, NOTE_C4, NOTE_G4, NOTE_E4,
NOTE_C4, NOTE_G3, NOTE_G3, NOTE_E3, NOTE_A3, NOTE_G3, NOTE_C4, REST};

void waitForKnock() {
  int sense1, sense2;
  while (true) {
    sense1 = analogRead(piezoPin1);
    sense2 = analogRead(piezoPin2);
    if (sense1 > threshold || sense2 > threshold)
      break;
    /*
    Serial.print(threshold - sense1);
    Serial.print(", ");
    Serial.println(threshold - sense2);
    */
  }
  /*
  Serial.println(sense1);
  Serial.println(sense2);
  */
}

// Plays (or rests) the current note on the given speaker;
// the Tone object is passed by reference so we drive the
// original hardware-bound instance rather than a copy
void playTone(Tone &speaker, int notes[]) {
  if (notes[currentNote] == REST)
    speaker.stop();
  else
    speaker.play(notes[currentNote]);
}

void playNextNote() {
  // 57 is the last index of the trebleNotes/bassNotes arrays
  if (currentNote <= 57) {
    playTone(trebleTone, trebleNotes);
    playTone(bassTone, bassNotes);
  } else
    currentNote = -1;  // wrap around to restart the song
  currentNote++;
}

void setup() {
  trebleTone.begin(treblePin);  // treble beeper on pin 3
  bassTone.begin(bassPin);      // bass beeper on pin 6
  /*
  Serial.begin(9600);
  */
}

void loop() {
  waitForKnock();
  playNextNote();
  delay(pauseTime);
}

P2 – Feel the Music!

Group Name: VARPEX

Group Members and Contributions: Abbi Ward, Dillon Reisman, Prerna Ramachandra, Sam Payne

Dillon recorded participant answers and wrote up information about our participants/people we observed and their responses and gathered test subjects.
Sam provided the music and music equipment, recorded the answers to the task analysis questions and gathered test subjects.
Abbi drew some of the sketches and wrote up information about the contextual interviews.
Prerna drew the storyboards, gathered some of the test subjects and helped with conducting interviews and compiled the blog post.

Group Number: 9

Problem and Solution Overview

We are addressing the problem that the concert experience can only be replicated by expensive, non-portable hardware. Further, these systems cause noise pollution in tight living quarters. We will create an article of clothing worn on the torso that uses vibrating motors to generate a stimulus similar to that created by loud bass – that is, a vibrating sensation on the skin. Our solution is portable, relatively inexpensive, and avoids the noise pollution problem.

Description of Users Observed in Contextual Enquiry

The target group of users we observed were students aged 18-22. They all have enjoyed listening to music at eating clubs on weekends or going to the occasional concert. They do these things, however, for a variety of different reasons. Some go to clubs for the social aspect of it (their friends are there, their friends are performing, etc.). Most, however, find the act of listening to the music enjoyable in its own right. The observed users enjoy many forms of music, and for most that includes some form of electronic music, though they had varying opinions on the quality of dubstep specifically. It helps that what is played in clubs is mostly of this type, as opposed to rock or other genres.

Contextual Inquiry Interview Descriptions

We invited six individuals to Sam Payne’s room on a Saturday afternoon. Sam has a good-quality subwoofer, which we used to experiment with different bass frequencies. We invited individual participants to the room at different times. We introduced ourselves and described the class and the purpose of the questions we were going to ask. For each participant, we walked through a series of questions about their music-listening habits and tried to get a feel for how, when, and why these students listen to music. We then played a series of frequencies for them (from 110 Hz down to 50 Hz) while they stood next to the subwoofer. We asked them to describe the sensation and where they felt it. We then put a foam chair near the subwoofer and had them listen to the tones again. We asked them to describe the sensations again, where they felt them, and how they liked it.

As participants answered our questions, we recorded their answers in this form, which gives a basic outline of the sorts of questions we asked.

Most of our users primarily listen to music in their dorms with laptop speakers. If they go outside, they’ll typically use headphones. These headphones are generally low-quality earbuds rather than slightly higher quality over-the-ear headphones. Many students will listen to music on speakers in whatever lab they’re working in. The participants we interviewed enjoyed a variety of genres, from dubstep and electronica to pop and opera. Even if the participants did not report that electronica or dubstep was very high on their list of favorable genres, they often at the very least appreciated it for its diverse and interesting sound and the “intense” sensation they often got from it.

Users felt the vibrations in very different locations. Some of this difference may have been due to different body types. The tall, thin men we interviewed felt vibrations very strongly in the top of their chest, but other participants did not feel these vibrations as strongly. Another area where sensations were strongly felt was in the mid-back, along the spine. Users enjoyed the chair, as opposed to just feeling the vibrations in the air; the chair produces a more intense diffusion of vibration. People likened this feeling to a massage of the back and legs. This suggests that users enjoy the intensity of the response, and that similarity to the raw in-air experience may not be as important.

Answers to Task Analysis Questions

1. Who is going to use the system?
This system would be for people ages 16-34 who enjoy concerts. They enjoy concerts because of the sensation derived from loud music, and would like to have this same sensation when they listen to music at home.

2. What tasks do they now perform?
They go to concerts when they can, and they play loud music in their home (if they can). Some individuals, such as college students living in dormitories, must listen to music through headphones due to noise pollution.

3. What tasks are desired?
Users would like the concert experience on-the-go and without disturbing their neighbors.

4. How are the tasks learned?
Users already know how to use their headphones and feel music. Our system would be an extension of simply plugging in your headphones.

5. Where are the tasks performed?
People feel music at concerts, parties and in dorms/apartments, if they have good speakers. People listen to music everywhere.

6. What’s the relationship between user & data?
Feeling music is a subjective experience, so our data is the reported experience of users. There may be some loss of data if users cannot describe or are not aware of factors that affect their experience.

7. What other tools does the user have?
Depending on the situation, the user may have headphones, or speakers and subwoofers. They also have music-playing devices, such as iPods and computers. This equipment varies in expense.

8. How do users communicate with each other?
At concerts, people communicate through gesture and yelling.

9. How often are the tasks performed?
How often tasks are performed varies between individuals because it depends on the type of equipment they have. Those without subwoofers only feel music at concerts so they may feel music a few times per year. Those with subwoofers may feel music on a daily basis.

10. What are the time constraints on the tasks?
At a concert, the time of the concert constrains when users can listen and feel music. For those with good speakers and subwoofers, the constraint may be the time it takes for your neighbors to call public safety.

11. What happens when things go wrong?
If the concert is cancelled, the police are called, or your speaker breaks, you don’t listen and feel your music.

Descriptions of the Three Tasks

1. Many times, people want to listen to music at a loud volume in their room while their roommate is studying, or without disturbing their neighbors. Many students will listen to music on headphones. However, the physical sensation of listening to music is lost while listening on headphones. This product would allow people to feel that sensation without disturbing those around them. The task of listening to music loudly in your room is easier than the other tasks, but it becomes more difficult as you take into consideration the possibility of disturbing your neighbors.

  • Users working outside of their rooms often listen to music while working. The portability of this device allows users to feel music while studying without disturbing those around them.

2. Many students listen to music while walking to class. This of course is done on headphones since speakers are not portable. However, again, the physical sensations of listening to music are lost while listening on headphones. The portability of this product would allow students to listen and feel their music in any setting, and on the go. The task of listening to loud music while moving is difficult, and is even more difficult when you take into consideration the possibility of disturbing other persons.

3. While at concerts, many people enjoy the feeling of music, and some songs are even written to promote these sensations. While producing music, artists could use this product to feel their music and gauge how their music will feel in a concert setting. Attending concerts is difficult due to availability and cost.

Storyboards

Storyboard 1 


Listening to music while walking to class and getting the live music experience

Storyboard 2


Listening to music in the library or lab while doing work


Getting the live music experience while doing work!

Storyboard 3


Listening to music in your room, and getting the live music experience without disturbing your roommate

Design Sketches


Overview of the jacket, fit and spot to place iPod


Front and back of jacket, and sketch of the motors


Parts of the body affected by the jacket

Interface Design Description

Since our system will be wearable, it will grant the user a lot of freedom over where and when they can ‘feel’ their music. Ideally it will also look inconspicuous; a user should be comfortable simply wearing it as a jacket or other article of clothing. It will still enable the user to simply listen to music should they wish, as they did before, but with the added ability to activate the more immersive ‘feeling’ system through vibrating motors. Currently, this ability is not offered by any product on the market: headphones do not offer users the concert-going experience of ‘feeling music’.
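As a very rough sketch of the core feedback loop we have in mind (the wiring here, with an envelope of the audio signal on A0 and a motor driven through a transistor on PWM pin 9, is purely our assumption, since the jacket is not yet built):

// Hypothetical sketch: stronger bass -> stronger vibration
const int audioPin = A0;   // rectified/low-passed copy of the headphone signal
const int motorPin = 9;    // PWM pin driving one vibration motor via a transistor

void setup() {
  pinMode(motorPin, OUTPUT);
}

void loop() {
  int level = analogRead(audioPin);            // 0-1023 envelope reading
  int strength = map(level, 0, 1023, 0, 255);  // scale to PWM range
  analogWrite(motorPin, strength);             // drive the motor proportionally
  delay(10);
}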

Lab 2 – Team Chewie (Chewbacca)

Team Members
Stephen Cognetta (cognetta@)
Eugene Lee (eugenel@)
Jean Choi (jeanchoi@)
Karena Cai (kcai@)

Group Number: 14

Description
Our musical instrument was a head-banging music maker. The accelerometer, which is attached to a hat, detects when the user has done a sudden motion (like head banging). When this motion is detected, the tempo of the song that is playing will match the rate at which you are banging your head. By varying the location of touch along a linear sensor, the pitch of the song can also be changed. We thought that our project was a success because we were able to become familiar with the accelerometer while also making a cool project that created music with the motion of the user’s body. As an extension of this project, we could somehow attach the accelerometer to someone’s feet so that when they exercised, the beat of the music would match the rate at which they were running. We also thought that our design could be improved so it was less visible and less bulky.

PROTOTYPE 1 : Ambient Music Generator
http://www.youtube.com/watch?v=_C6pe1Skn1w
This instrument by default emits an ambient noise. When a sudden jerk is detected by the accelerometer, it emits a dolphin noise. Such a device could be attached to children’s toys to make playing with them even more fun!
Built using ChucK and Processing.
PROTOTYPE 2 : Two-Dimensional Music Player

This instrument uses a two-dimensional sensing technique to change both the pitch and the tempo of a song. If force is applied to the FSR, the tempo decreases. Meanwhile, by sliding the FSR along the soft potentiometer (linear sensor), the pitch will vary.
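A minimal sketch of this control scheme follows (assuming the FSR on A0, the soft potentiometer on A1, and a buzzer on pin 8; this is illustrative, not our exact prototype code):

const int fsrPin = A0;      // force-sensitive resistor
const int softPotPin = A1;  // linear soft potentiometer
const int buzzerPin = 8;

void setup() {
}

void loop() {
  int force = analogRead(fsrPin);
  int position = analogRead(softPotPin);
  int pitch = map(position, 0, 1023, 200, 2000);  // slide position -> frequency (Hz)
  int beatDelay = map(force, 0, 1023, 100, 600);  // more force -> slower tempo
  tone(buzzerPin, pitch, beatDelay / 2);
  delay(beatDelay);
}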

FINAL PROTOTYPE : Head-Banging Music Maker

This was our final project for the lab. We created a hat that responds to the movements of the head: if the head jerks quickly, the music plays faster, and vice versa. It won’t play at all if the head isn’t moved.

List of Parts
– Arduino Uno
– jumper wires
– accelerometer
– Buzzer
– Hat

Instructions on how to recreate the Head-Banging Music Maker
We attach the buzzer and accelerometer to the device as shown in the picture below (basically, the accelerometer is wired in as was done in Step 5 of the lab, using the AREF pin, and the buzzer is hooked up to the Arduino board). Then attach both to the hat as shown. Most of the work is performed in the code, which is shown below.



Head Bangin’ Source Code

/*
  Head-Banging Music Maker

  Reads a head-mounted accelerometer and advances a melody
  ("Twinkle Twinkle Little Star") by one note each time a head
  bang is detected, so the song's tempo follows the rate of
  head banging. A linear sensor input (linPin) is also read
  for pitch control.

  Adapted from the Arduino "Graph" example (created 2006 by
  David A. Mellis; modified 9 Apr 2012 by Tom Igoe and Scott
  Fitzgerald), which is in the public domain.
  http://www.arduino.cc/en/Tutorial/Graph
 */

// ******************************************************
// CHORD INITIALIZATION
// ******************************************************

const int C = 24,
          D = 27,
          E = 30,
          F = 32,
          G = 36,
          A = 40,
          B = 45,
          C2 = 48,
          H = 0;

int twinkleStar[] = { C, C, G, G, A, A, G, H,
                      F, F, E, E, D, D, C, H,
                      G, G, F, F, E, E, D, H,
                      G, G, F, F, E, E, D, H,
                      C, C, G, G, A, A, G, H,
                      F, F, E, E, D, D, C, H};
int songLength = 48;
int songStep = 0;

int singleNote[] = { 1, 1, 1, 1 };
int majorChord[] = { 4, 5, 6, 0 };
int minorChord[] = { 10, 12, 15, 0 };
int seventhChord[] = { 20, 25, 30, 36 };
int majorChordLength = 3;
int *chords[] = { singleNote, majorChord, minorChord, seventhChord };
const int chordsLength = 4;

int chordType = 0;                // changes between chords[]
int arpStep = 0;                  // changes between chord frequencies

// ******************************************************
// INPUT INITIALIZATION
// ******************************************************
const int linPin = A1;
const int xPin = A5;
const int yPin = A4;
const int zPin = A3;
const int tonePin = 10;
const int groundpin = A2; // analog input pin 2
const int powerpin = A0; // analog input pin 0
const int proxPin = A6; // analog input pin 6

// ******************************************************
// MAIN LOOP
// ******************************************************

unsigned int tempo;
unsigned int frequency;
unsigned int chordFrequency;
int currentBang;
int bangInterval = 0;
int minimumBangInterval = 70;
int maximumBangInterval = 200;

void setup() {
  // initialize the serial communication:
  Serial.begin(9600);

  // power the accelerometer from two analog pins
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);
}

int prevX = 400;
boolean rising = false;
boolean falling = false;
int mini, maxi, diff;

void loop() {
  int linReading = analogRead(linPin);
  int xReading = analogRead(xPin);
  int yReading = analogRead(yPin);
  int zReading = analogRead(zPin);

  // send the value of analog input 0:
  //int potReading = analogRead(potPin);
 // int btnReading = digitalRead(btnPin);
  // send the value of analog input 0:

  // wait a bit for the analog-to-digital converter 
  // to stabilize after the last reading:
  delay(2);

  int x, y, z;
  x = xReading;
  y = yReading;
  z = zReading;
  if (prevX < x) {
    rising = true;
    falling = false;
  } else {
    rising = false;
    falling = true;
  }
  prevX = x;
  if (falling == true)    mini = x;
  if (rising == true)     maxi = x;
  diff = maxi - mini;

  /*Serial.print(x);
  Serial.print("\t");
  Serial.print(prevX);
  Serial.print("\t");
  Serial.print(mini);
  Serial.print("\t");
  Serial.print(maxi);
  Serial.print("\t");
  Serial.println(diff);*/

  int * chord = twinkleStar;

  //measure the time since the last bang
  if (bangInterval < maximumBangInterval)
    bangInterval++;

  //when head bang is detected
  if (diff > 80 && bangInterval > minimumBangInterval) {
    currentBang = bangInterval;
    bangInterval = 0;

  //  Serial.print(tempo);
  //  Serial.print(",");
    Serial.println(chordFrequency);

    unsigned int tempo = currentBang*2;
    unsigned int duration = tempo - tempo / 20;

    float chordFactor = (float)chord[songStep] / (float)chord[0];
    if (linReading < 1020)
      frequency = 500;
    chordFrequency = frequency * chordFactor;
    tone(tonePin, chordFrequency, duration);

     Serial.print(songStep);
    Serial.print(" ");
    Serial.print(chordFrequency);
    Serial.print(" ");
    Serial.print(chordFactor);
    Serial.print(" ");
    Serial.println(currentBang);

    delay(tempo);
    songStep = songStep < songLength - 1 ? songStep + 1 : 0;

     chordFactor = (float)chord[songStep] / (float)chord[0];
    /*if (linReading < 1020)
      frequency = linReading;*/
      frequency = 500;
    chordFrequency = frequency * chordFactor;
    tone(tonePin, chordFrequency, duration);

    Serial.print(songStep);
    Serial.print(" ");
    Serial.print(chordFrequency);
    Serial.print(" ");
    Serial.print(chordFactor);
    Serial.print(" ");
    Serial.println(currentBang);

    songStep = songStep < songLength - 1 ? songStep + 1 : 0;
  }

}

L2 – Team EyeWrist

Team Members: Evan Strasnick, Joseph Bolling, Xin Yang Yak, Jacob Simon

Group Number: 17

What We Built: We built a system that overlays tunes on top of a bass beat, using an accelerometer to select notes from within a given chord so that the result is always in tune for a specific key. A rotary pot allows the user to control the tempo; a push button on the larger breadboard varies the chord being used, either toggling to the next set of notes or modulating the sound through a continuous hold; the accelerometer picks out a note from within the currently selected chord; and the push button on the hand-held component plays the tune. To play our musical instrument, the user simply holds the hand-held controller (the smaller breadboard) in one hand and presses the button to create sound while controlling pitch via rotation. The user can also vary the tempo with the Tempo Knob and change the chord by pressing the Chord Progression button. For our demonstration, we chose a CM, FM, GM chord progression, but the instrument could theoretically support any arrangement or number of chords the user desires. This instrument was not only a success in terms of offering a number of ways in which the user could create an original beat, but it was also surprisingly addictive to play. We would have liked to use more speaker elements to offer different instrumental options as well.

Prototypes:

1) Our first prototype was a simple mapping of the accelerometer tilt to allow the user to play melodies over a certain range of musical pitches. While this offered the most control over pitch in theory, we wanted the user to not have to worry so much about tilting correctly for the pitches that they wanted, allowing them to focus more on the beat that they were dropping. This philosophy inspired our next design…  http://www.youtube.com/watch?v=2g6CM7rNnuk&feature=youtu.be

2) Our second prototype instead preserved the fun of using the accelerometer to play, but aided the user by mapping its output to the notes within specific chords. After adding in the push button users were able to program in their own chord progressions for a given song and then simply iterate through them. http://www.youtube.com/watch?v=VkYfAKPzM4Q&feature=youtu.be

3) Our third prototype added in a constant bassline, which was always tuned to match the accelerometer’s output, keeping things easy for the user. Further, we added a potentiometer which allowed the user to control tempo. http://www.youtube.com/watch?v=C6CM0272D44&feature=player_detailpage#t=23s

 

Final Performance: Entitled “Lab 2 in A Minor,” this piece is a satire on satire, and a celebration of celebratory pieces (and poor percussionists). Enjoy: http://www.youtube.com/watch?v=QrFFuBdYg8k&feature=youtu.be

 

Parts Used:

Wires + Crocodile clips
Arduino
Accelerometer
Buzzer
Piezo element
2 push buttons
1 rotary pot
1 * 220 ohm resistor
1 * 330 ohm resistor

Instructions to Recreate:

To build our instrument, mount the accelerometer and a push button on a small breadboard or other device that can be held and manipulated with one hand.  Connect the accelerometer Vin, 3Vo, Gnd, and Yout to the A0, A1, A2, and A4 pins on the Arduino, respectively.  Ground one terminal on the pushbutton, and connect the other to the Arduino’s digital pin 2.  Be sure to keep your wiring tight in this section. You may want to bundle your wires out of the way and secure the connections with electrical tape or solder, as this part of the device will be moving a lot as you play.

Connect the buzzer in series with a 220Ω resistor between ground and digital pin 5 (per the pin constants in the source code below, the buzzer carries the bass on pin 5 and the piezo carries the melody on pin 3).  Use alligator clips to connect the piezo element between digital pin 3 and ground.  These two components will produce your sound, so feel free to experiment with different physical setups to see what kind of tonality you can get; we found that covering the sound hole on our buzzer actually made it considerably louder.

Mount the second pushbutton and the potentiometer on a separate breadboard.  Ground one leg of the pushbutton, and connect the other to digital pin 4 on the Arduino. Connect the center leg of the potentiometer to pin A5, and connect the outer two legs to the Arduino’s +5V and Gnd pins.  When wiring this section, try to keep your wiring tight and out of the way; you’ll want uninhibited access to the pushbutton and potentiometer as you play. Then just upload our code to your Arduino and have fun!

 

Source Code:

/*
::::::::::::::
PIN LAYOUT
::::::::::::::
*/

// Accelerometer's power & ground pins
const int groundpin = A2;
const int powerpin = A0;

// Accelerometer's Y axis
const int ypin = A4;
const int tempopin = A5;

// Speaker pins ===
const int piezopin = 3;
const int buzzerpin = 5;

// Button pins ===
const int buttonpin = 2;
const int chordpin = 4;

int** progression;
int chordindex;
int basscounter;

boolean basson;
boolean buttonpressed;

/*
::::::::::::::
MUSIC VARS
::::::::::::::
*/

int scaleN = 8;
int chordN = 4;

// C Major Scale: c, d, e, f, g, a, b, c
int CMajorScale[] = {3830, 3400, 3038, 2864, 2550, 2272, 2028, 1915};
char CMajorScaleNotes[] = "cdefgabc";

// C Major Chord: c, e, g, c
int CMajor[] = {3830, 3038, 2550, 1915};
char CMajorNotes[] = "cegc";

// G Major Chord: g, b, d, f
int GMajor[] = {5100, 4056, 3400, 2864};
char GMajorNotes[] = "gbdf";

// F Major Chord: f, a, c, f
int FMajor[] = {2864, 2272, 1915, 1432};
char FMajorNotes[] = "facf";

// A Minor Chord: a, c, e, g
int AMinor[] = {4544, 3830, 3038, 2550};
char AMinorNotes[] = "aceg";

int* CFCAC[] = {CMajor, FMajor, CMajor, AMinor, CMajor};
int* CFCGC[] = {CMajor, FMajor, CMajor, GMajor, CMajor};

/*
::::::::::::::
SETUP
::::::::::::::
*/

void setup() {

  // initialize the serial communications:
  Serial.begin(9600);

  // power the accelerometer from two analog pins
  pinMode(groundpin, OUTPUT);
  pinMode(powerpin, OUTPUT);
  digitalWrite(groundpin, LOW);
  digitalWrite(powerpin, HIGH);

  // Input pins
  pinMode(ypin, INPUT);

  pinMode(buttonpin, INPUT);
  digitalWrite(buttonpin, HIGH);  // enable internal pull-up
  pinMode(chordpin, INPUT);
  digitalWrite(chordpin, HIGH);   // enable internal pull-up

  // Output pins
  pinMode(buzzerpin, OUTPUT);
  pinMode(piezopin, OUTPUT);

  progression = CFCGC;
  chordindex = 0;
  basson = true;
  basscounter = 0;

}

/*
::::::::::::::
FUNCTIONS
::::::::::::::
*/

#define THRESHOLD 270

// Returns a value between lo and hi
// by taking in and subtracting the threshold
int mapToRange(int in, int lo, int hi) {

  // Subtract threshold value
  int result = in - THRESHOLD;

  if (result > hi) return hi;
  else if (result < lo) return lo;
  else return result;

}

// Maps a value in [0, range) onto one of the n notes
// (tone periods) of the given chord
int convertToTone(float value, float range, int* notes, int n) {

  // do this until we find something better

  if (value >= range)
    return notes[n-1];

  int i = (value / range) * n;
  return notes[i];

}

// Plays a note with period through speaker on pin
void playTone(int period, int duration, int pin) {

  // Time between tone high and low
  int timeDelay = period / 2;

  for (long i = 0; i < duration * 1000L; i += period * 2) {
    digitalWrite(pin, HIGH);
    delayMicroseconds(timeDelay);
    digitalWrite(pin, LOW);
    delayMicroseconds(timeDelay);
  }
}

// Plays the melody and bass parts together, driving each
// speaker only while its part is active
void playTwoTones(int period, int duration, int pin, int bassperiod, int basspin) {

  // Time between tone high and low
  int timeDelay = period / 2;

  for (long i = 0; i < duration * 1000L; i += period * 2) {
    if (buttonpressed) {
      digitalWrite(pin, HIGH);
    }
    if (basson) {
      digitalWrite(basspin, HIGH);
    }
    delayMicroseconds(timeDelay);
    if (basson) {
      digitalWrite(basspin, LOW);
    }
    if (buttonpressed) {
      digitalWrite(pin, LOW);
    }
    delayMicroseconds(timeDelay);
  }
}

// Not using this right now but keeping for reference
void playNote(char note, int duration) {

  char names[] = { 'c', 'd', 'e', 'f', 'g', 'a', 'b', 'C' };
  int tones[] = { 1915, 1700, 1519, 1432, 1275, 1136, 1014, 956 };

  // play the tone corresponding to the note name
  for (int i = 0; i < 8; i++) {
    if (names[i] == note) {
      playTone(tones[i], duration, buzzerpin);
    }
  }
}

/*
::::::::::::::
MAIN LOOP
::::::::::::::
*/

void loop() {

  // the bass plays on every fourth pass through the loop
  basscounter++;
  if (basscounter == 4) {
    basscounter = 0;
  }
  if (basscounter == 0) {
    basson = true;
  }
  else basson = false;

  // advance the chord progression when the chord button is held
  if (digitalRead(chordpin) == LOW) {
    chordindex++;
    if (chordindex == 4) chordindex = 0;
  }

  // Play sounds if the button is pressed
  buttonpressed = (digitalRead(buttonpin) == LOW);

  if (buttonpressed) {
    Serial.println("Button pressed!");
  }

  // Analog Read
  int analog = analogRead(ypin);
  int tempo = analogRead(tempopin) + 10;

  // Debug: Print the analogRead value
  Serial.print("AnalogRead: ");
  Serial.println(analog);

  // Map the Y-axis value to the range [0, 135]
  float yValue = mapToRange(analog, 0, 135);

  // Calculate the tone (period value)
  int tone = convertToTone(yValue, 135, progression[chordindex], chordN);
  int basstone = progression[chordindex][0];

  // Debug: Print out the yValue and tone (period)
  Serial.print("yValue: ");
  Serial.println(yValue);
  Serial.print("tone: ");
  Serial.println(tone);
  Serial.println("------------");

  // Actually produce some sound!
  playTwoTones(tone, 100, piezopin, basstone, buzzerpin);

  // Delay before next reading:
  delay(tempo);

}

Slider Piano: group 24

Andrew Ferg
Bereket Abraham
Lauren Berdick
Ryan Soussan

Group 24

Before we tell you about our amazing resistor piano, we just wanted to show you this awesome picture of the acceleration picked up by the accelerometer in Part 5.

Data picked up from the accelerometer

Three ideas:
– Make a drum out of the piezo. The piezo readings can be run through Processing to play a tune/melody each time the piezo is hit.


– A theremin made from an accelerometer. As you rotate the accelerometer through one angle, the pitch changes; as you rotate through a different angle, the tempo changes.


– A piano made out of a slimpot. As you press different areas of the slimpot, the buzzer plays different notes.

Piano Slider

Final result:

In the end, we chose the piano. However, for the final design we ditched the buzzer and played the notes through the computer. Using the serial port, we sent the notes to a Processing script. Using the SoundCipher library, we were able to play computer-generated notes and simulate the piano keys visually. We thought it was a success. The graphics synced well with the hardware and it sounded realistic. We did have some issues with the “debouncing” of the keys — if you held a key down it played the note twice. But it was not that noticeable.


Parts used in final system

Arduino
Wires
Slimpot
Resistors

Instructions to recreate

A slimpot has variable resistance depending on where the user presses it. Seeing this, we decided to segment the resistance range, so each interval of readings was assigned to a different piano key. We sent this piano key number to Processing over the serial port. We then downloaded the SoundCipher library, which plays a note given a MIDI pitch value. This allowed us to play the keys. We used rectangles to visualize the keys of the keyboard on the screen (using Processing for the graphics). The keys dimmed in colour when they were pressed. The 32-bit version of Processing had to be used in order to access the serial port.

Source code

Arduino Code:

// these constants won't change:
const int pin1 = A0; // the slimpot is connected to analog pin 0

// Voltage thresholds of the piano keys
// 15,20,33,50, 130, 200,
// these variables will change:
int sensorReading = 0;      // variable to store the value read from the sensor pin

void setup() {
  Serial.begin(9600);       // use the serial port
}

void loop() {
  sensorReading = analogRead(pin1);    
  
  if (sensorReading > 200) Serial.write((uint8_t)0);
  else if (sensorReading > 130) Serial.write((uint8_t)1);
  else if (sensorReading > 50) Serial.write((uint8_t)2);
  else if (sensorReading > 33) Serial.write((uint8_t)3);
  else if (sensorReading > 20) Serial.write((uint8_t)4);
  else if (sensorReading > 15) Serial.write((uint8_t)5);
  else if (sensorReading > 2) Serial.write((uint8_t)6);
  else;
  
  delay(100);  // delay to avoid overloading the serial port buffer
}

Processing Code:

import arb.soundcipher.*;

SoundCipher sc;
int keysNoteMap[] = {59, 60, 63, 64, 65, 67, 69};
import processing.serial.*;
Serial myPort;  // The serial port

void setup(){
  sc = new SoundCipher(this);
  //keysNoteMap = new int[7];
  
  //keysNoteMap[7] = {59, 60, 63, 64, 65, 67, 69};
  size (250,150);
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
  println("starting....");
}

// keep processing 'alive'
void draw() {
  boolean pressed = false;
  int note = 9;
  while (myPort.available() > 0) {
    note = myPort.read();
    pressed = true;
    println(note);
    sc.playNote(keysNoteMap[note], 100, 1);
  }
  
  // white keys: dim the key that was just pressed
  fill(255);
  if (pressed && note == 6) fill(204);
  rect(10, 10, 30, 100);

  fill(255);
  if (pressed && note == 5) fill(204);
  rect(40, 10, 30, 100);

  fill(255);
  if (pressed && note == 4) fill(204);
  rect(70, 10, 30, 100);

  fill(255);
  if (pressed && note == 3) fill(204);
  rect(100, 10, 30, 100);

  fill(255);
  if (pressed && note == 2) fill(204);
  rect(130, 10, 30, 100);

  fill(255);
  if (pressed && note == 1) fill(204);
  rect(160, 10, 30, 100);

  fill(255);
  if (pressed && note == 0) fill(204);
  rect(190, 10, 30, 100);

  // black keys
  fill(0);
  rect(32, 10, 15, 60);
  rect(62, 10, 15, 60);
  rect(122, 10, 15, 60);
  rect(152, 10, 15, 60);
  rect(182, 10, 15, 60);
  
  if (pressed) {
    // this second playNote is likely the source of the double-play
    // ("debouncing") issue mentioned above
    sc.playNote(keysNoteMap[note], 100, 4.0);
    delay(100);
  }
  pressed = false;
}