P5 – Cereal Killers (Team 24)

Cereal Killers, Group 24

Bereket, Andrew, Ryan, Lauren

We are making a gesture bracelet that triggers computer shortcuts.

 

Tasks:

We continued with our three tasks from last time, with modifications based on our testing in P4:

1. Controlling computer hooked up to TV without having to get up

This task is our easiest, and has a person using the gesture bracelet to control a video stream on their laptop hooked up to a TV.  They can do things such as pause, play, and adjust volume.

2. Using an online recipe while cooking

This is our medium task.  It requires the user to follow an online recipe and do things such as zoom and scroll up and down while they are cooking and their hands are occupied and/or dirty.  The gesture bracelet prevents the laptop from getting dirty and makes the experience easier for the experimenting cook, who may be cooking for a family or a group of roommates.

3. Assisting an injured person to aid their computing experience

This is our hard task, primarily because of the challenge for the user to pick useful shortcuts.  The injured user (who can otherwise only use one hand) uses the bracelet to do things such as hold the shift key when typing, scroll through pages, and go back and forward on their web browser.

Our tasks did not change.  Instead, aspects of them were adjusted to incorporate revisions to the gesture bracelet.  We felt that our tasks did a good job of targeting users, and based on our user tests, users agreed.

Revised Interface:

We changed our interface in two main ways.  First, we realized from user testing that users often had trouble coming up with gestures, so we created predefined gestures that they can choose from.  Second, we added a quick side-to-side shake of the bracelet to start and stop a gesture, so that we only have to recognize gestures after a shake instead of looking for them at all times.
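To give a concrete picture of the shake idea, here is a rough sketch (not our actual implementation) of how a quick side-to-side shake could be detected once accelerometer samples from the bracelet are streaming into Python; the window size and thresholds are made-up placeholder values.

<pre>
from collections import deque

WINDOW = 20            # recent samples to inspect (roughly 1 s, assuming ~20 Hz)
SWING_THRESHOLD = 1.5  # minimum jump between samples to count as a swing (placeholder)
MIN_SWINGS = 4         # a quick side-to-side shake produces several alternating swings

recent_x = deque(maxlen=WINDOW)

def is_shake(samples):
    # True if the window holds several large swings that alternate direction
    samples = list(samples)
    swings, last_sign = 0, 0
    for prev, cur in zip(samples, samples[1:]):
        delta = cur - prev
        if abs(delta) < SWING_THRESHOLD:
            continue
        sign = 1 if delta > 0 else -1
        if sign != last_sign:
            swings += 1
            last_sign = sign
    return swings >= MIN_SWINGS

def on_new_sample(x):
    # feed each x-axis accelerometer reading here; True means "toggle gesture mode"
    recent_x.append(x)
    if len(recent_x) == WINDOW and is_shake(recent_x):
        recent_x.clear()   # avoid re-triggering on the same shake
        return True
    return False
</pre>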

Each task is storyboarded below, updated to include our revisions:

Task 1, part 1 of 2

Task 1, part 2 of 2

Task 2, part 1 of 2

Task 2, part 2 of 2

Task 3, part 1 of 2

Task 3, part 2 of 2

As can be seen, we added a list of predefined gestures that the user can choose from.  This will be included in our final version.

New Prototype:

We implemented keyboard and mouse recording and playback: a user can input a keyboard sequence and map it to a gesture, or input a mouse click sequence and map it to a gesture.  Videos are added below.  Right now the sequences work on Windows only, but we hope to support all operating systems eventually.
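As a rough sketch of the record/replay idea (not our full implementation: special keys, modifiers, and mouse clicks are left out, and it assumes Windows with Python 2, since pyHook and pywin32 target it):

<pre>
import time
import pyHook        # keyboard hook for logging
import pythoncom     # Windows message pump, required by pyHook
import autopy        # keyboard/mouse emulation for playback

recorded = []            # (delay_in_seconds, key_name) pairs
_last = [time.time()]

def on_key_down(event):
    # pyHook callback: store each key press and the delay since the previous one
    now = time.time()
    recorded.append((now - _last[0], event.Key))
    _last[0] = now
    return True          # let the keystroke through to the focused application

def play_back(sequence):
    # replay a recorded sequence by retyping the characters with autopy
    for delay, key in sequence:
        time.sleep(delay)
        if len(key) == 1:                    # plain letters/digits only in this sketch
            autopy.key.type_string(key.lower())

if __name__ == "__main__":
    hm = pyHook.HookManager()
    hm.KeyDown = on_key_down                 # register the logging callback
    hm.HookKeyboard()
    pythoncom.PumpMessages()                 # blocks while recording
</pre>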

We also made a GUI for the program.  It is a template for now, and we will run our code through it for the final project.  The GUI is pictured below.

We left gesture recognition out of our prototype, since we can wizard-of-oz the recognition and still get good feedback from users: a group member will type a number into the terminal when the user performs a gesture, triggering the matching computer shortcut.  Our GUI is also not polished yet, so we will use a command line interface for user testing.  This lets us simulate all of the functionality of the system while keeping it believable and close to how it would actually perform.  Since we were time constrained, we focused our efforts on recording and replaying keyboard and mouse input and on making a believable system for user testing.  We can already get data from the bracelet into Python, where the gesture recognition will run, and we will have this and our full GUI ready for our final demonstration.
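For the wizard-of-oz testing itself, the loop can be as simple as the hypothetical sketch below: a group member types a number, and the matching shortcut fires through autopy (the gesture/shortcut pairs shown are illustrative, not our final mapping; Python 2 to match the libraries above).

<pre>
import autopy

def pause_play():
    autopy.key.type_string("k")      # "k" toggles play/pause on YouTube

def page_down():
    autopy.key.type_string(" ")      # space pages down in the browser's recipe view

WIZARD_TABLE = {
    "1": pause_play,
    "2": page_down,
}

if __name__ == "__main__":
    while True:
        choice = raw_input("gesture number (q to quit) > ").strip()
        if choice == "q":
            break
        action = WIZARD_TABLE.get(choice)
        if action is not None:
            action()                 # trigger the shortcut the user's gesture maps to
        else:
            print "no shortcut mapped to %r" % choice
</pre>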

The external libraries we used were pyHook, pywin32, and autopy.

Videos/Images:

Logging computer input:

Emulating user input:

 

GUI:

GUI Main Menu

GUI Screen 2

Assignment 2 – Ryan Soussan

1. Observations:

All of my observations were from my history course in McCosh, taken on three separate days.

a.  I first interviewed a senior girl when class ended, after getting her permission when she arrived before lecture.  She got to class five minutes early and spent her time chatting and playing with her iPhone.  She was sitting next to a friend, and played Angry Birds until lecture started.

b.  My second interview was with a junior male.  He got to class six minutes early and went over the course syllabus for a minute before checking Facebook and ESPN on his computer for the rest of the time.

c.  My last interview was with a sophomore male.  He got to class two minutes before lecture and prepared a Word document for class notes.  He spent the last minute waiting silently for the professor to start lecturing.

 

From my observations and questions I gathered the following:

– Most students arrived before lecture started, and roughly half the class was there 3-5 minutes before lecture.

– Many people spent time on computers and iPhones.

– Some people sat quietly with notebooks out.

– A small percentage were chatting.

– Lots of people were looking around.

Generally, people were either waiting for lecture, chatting with friends, or using a smartphone/computer to surf the web, check email, and use Facebook.

From my questions:

a)  Do you have anything specific you like to do before lecture?

– “I like to play games on my iPhone, and review notes for class”

–  “it’s a good time to check email”

–   “Not really, just unwind between classes”

b)  Do you have a smartphone or computer to use?

– “I don’t bring my computer to class, but I have an iPhone”

– “Yes, I like to take notes on my computer.”

– “I carry my computer to class; I don’t use my smartphone much except for texts though”

c) Would you like something else to do before class?

– “Yes, it can be a nice time to be productive but I am generally unoccupied for a decent amount of time”

– “I don’t know, it’s not really that long, but maybe, it depends on what it was”

– “Mmm, possibly, Facebook normally does the trick though.”

From my observations I got the feeling that people kept themselves occupied before lecture, but had time for something else.  Many students have internet access, and the class generally tended to be divided between boredom, productivity, and entertainment.  From this, I decided to aim for ideas that offer utility and entertainment while not being too intensive, so students can still relax before lecture.

2.  Ideas

I brainstormed with Bereket Abraham, Andrew Ferg and Lauren Berdick.

We came up with the following list:
1. A name game to get to know your neighbors

2. A talent show with students performing as they wish

3. Q and A with the professor, based on students’ questions and a voting system to choose the questions asked

4. A joke-telling contest with students rotating telling jokes before class

5. An application that shows a video of what is going to be presented during the lecture, to give people an overview and draw interest

6. Crowdsourced music making, with each student getting to contribute a beat to a song that builds up until lecture starts

7. Current events related to the class, such as campus speakers, new books, important findings, etc.

8. One tough problem that everyone collaborates on; if they solve it, the entire class gets extra credit

9. A crowdsourced, collaborative art project that everyone contributes to until the semester ends

10. A personal blog for the class where students can post and view other posts as they wish

11. The professor or preceptor prepares a story to tell the class that day

12. The professor gives a brief summary/update of his current research or the state of his research field

13. A riddle of the day that the class works together to solve

14. The professor/preceptor gives potential interview questions for jobs or graduate school related to the field

15. A map application that shows the shortest path to your next class from your current location, with a stop for food along the way

16. An app for perpetually late students that provides a live audio broadcast of the first 10 minutes of large lectures

3.  Favorite Ideas

My two favorite ideas were the question and answer session with the professor, and the collaborative art project.  I chose the Q and A idea because it lets students pick productivity or entertainment based on their questions, and does not require all of the students to participate, so others can carry on with activities they enjoy more.  I chose the collaborative art project for similar reasons, since only interested students need to participate, and because I thought having a final piece that many people had worked on would be a neat touch to a course.

 

Pictures:

 

Q and A Ranker:

Q and A Homepage

Q and A Login

 

 

Q and A Past Questions

 

Q and A Question Submission

Q and A Flag

Q and A Flag Submission

 

The Q and A Ranker is designed as a website.  Students enter their netID and select their course to go to the course’s Q and A page.  From there, students use a Reddit-style system to submit their own questions for the professor, upvote and downvote other questions, and flag posts.  Posts can be flagged if they are offensive, already asked, or related to homework; this is designed to keep questions interesting and prevent the session from turning into a homework help session.  Additionally, students’ names are listed next to their submissions to discourage offensive or rude material.
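Since this was a paper prototype, there is no real backend; purely as a hypothetical sketch, the data model the pages imply might look like the following (the class, fields, and the “hide after three flags” rule are made-up placeholders, not part of the prototype).

<pre>
FLAG_REASONS = ("offensive", "already asked", "homework-related")

class Question(object):
    def __init__(self, netid, text):
        self.netid = netid   # shown next to the post to discourage rude material
        self.text = text
        self.votes = 0       # net score shown beside the up/down arrows
        self.flags = []      # (netid, reason) pairs

    def vote(self, up):
        self.votes += 1 if up else -1

    def flag(self, netid, reason):
        if reason in FLAG_REASONS:
            self.flags.append((netid, reason))

def homepage_order(questions):
    # highest net votes first, hiding posts with several flags (placeholder rule)
    visible = [q for q in questions if len(q.flags) < 3]
    return sorted(visible, key=lambda q: q.votes, reverse=True)
</pre>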

Class Graffiti:

Class Graffiti Homepage

Class Graffiti Draw Page

Class Graffiti Update/History

The Class Graffiti idea is also a website: students go to the homepage for their course to work on a drawing project.  Students can contribute to the art, look at others’ recent submissions, and view the previous year’s final work.  The drawing panel allows the student to pick colors and brush sizes, and to add to the drawing wherever they want.

Testing:

For testing, I decided to go with the Q and A ranker website.

I first tested with a male student.  I gave him the login page, which he followed, acting as if he picked his course.  He then went to the homepage (which I gave him) and was initially confused, unsure what to make of the arrows and questions.  He clicked on submission and pretended to make his own submission.  He then went back to the homepage and looked at the questions again.  He clicked on an arrow and realized it was a voting mechanism.  He then viewed the already asked questions.

When I asked him if he liked the website, he told me he thought it was a neat idea, but wasn’t sure the professor would have time to answer the questions.  He liked that people’s names were linked to questions, and also worried that it would turn into a homework session (I then showed him the flag option, and he agreed this would help prevent that).  Overall, he found that it could be useful for some classes, and agreed that he might use it if he got to class early.

 

For my next tester, I gave him the login and then the homepage, and he recognized the similarities to Reddit.  He upvoted two posts, downvoted one, laughed, and then went to view older questions.  He then returned to the homepage and made his own submission.  Again, the user did not click the flag option.  When I asked what he thought of the site, he said he liked it and called it a “reddit site”.  He thought he would use it, but was worried questions would get out of hand (I then showed him the flag option).  Overall, he found the site a little silly, but agreed it could be a nice way to pass time, and commented that he would enjoy asking some of his professors questions.  He wasn’t sure professors would agree to participate or might be busy setting up for class, but said that if they weren’t, he could see it being a nice addition to the 10 minute break in between classes.

My last tester also drew comparisons to Reddit, and was excited to use the site.  He logged in, laughed at the class selection, and went right to upvoting and downvoting posts.  He asked if he could click on the posts, and said it might be neat to let users attach a note on what motivated them to ask the question, to help convince others to upvote or downvote it.  He liked that he could see the net result of the vote next to the post.  He then went to view the past questions.  He went back to the homepage and pressed flag, but decided the post was not flag-worthy, so he submitted his own post instead.

The user enthusiastically felt that the site would be useful, and began to brainstorm questions he would ask his professors.  He said the site was simple and suited the idea, and he would use it for classes if he had time and wasn’t running late.

Tester submitting their post

Tester flagging a post for being offensive

Tester clicking to view already asked questions

Insights:

From my testing, I realized that the professor needs to be on board with answering questions before the website is deployed.  This could come in the form of asking professors what types of questions they are comfortable with and only allowing submissions of those types, and giving the professor overriding power to delete posts before they can even be viewed by students.  Additionally, there may need to be more instructions for using the arrows: users who frequented sites such as Reddit immediately knew what to do when given the prototype, but other users were not sure how the voting system worked.  Colors would help make things clearer, and more instructions or a help section on the website could be provided for users unfamiliar with the format.  Another option would be to let submitters attach a short paragraph explaining what motivated their question, linked from the submitted question; this would help students pick questions and make up their minds if they are torn or ambivalent.  Testers seemed to like the ability to ask professors questions, and it appears that this website could be a viable option for the ten-minute break in between classes.

The Cereal Killers L1 – Viewer 360

Andrew Ferg

Bereket Abraham

Lauren Berdick

Ryan Soussan

 

 

Our project:

 

We built a “joystick” controller which, when used, controls the rotation of a 3D box on the screen.  The joystick is made up of two rotary potentiometers: one rotates the cube in the y direction, the other in the z direction.  At first we tried to add a thin-pot sensor to move the box up and down, but the thin pot was not cooperating with the system and its readings were being affected by the readings of the potentiometers, so we took it out because it was causing unexpected movement.  Despite this, overall we believe the project was quite successful.  We were able to get rid of jerkiness in the image, so rotation is a smooth, fluid motion.  There was a problem with oversensitivity due to small fluctuations in the readings, but we fixed that by ignoring changes below a small threshold.  Given more time, we believe this could be made into a reasonable interactive interface.  It has AutoCAD applications: 3D designs can be rotated and fully examined with easy movement, and 3D models of buildings, monuments, molecules, etc. (the opposite of 360-degree camera tours) can be viewed from all angles, e.g. for educational purposes or tours.  It could also be extended to control, instead of a box, a 3D model of a car in a game, or other objects.

 

Sketches:

Alarm system using FSR, buzzer, LED and photocell

An alarm system which uses an FSR and a photocell to detect unexpected motion and set off a buzzer and an LED as an alarm.

 

Joystick using flex sensor and potentiometer

 

3D viewer or Driving Game

Uses a potentiometer to rotate a graphic.  Input from the sensors goes through to Processing.

Storyboard:

Viewer 360

Can be used for AutoCAD purposes.

 

Architectural Viewing

Can be used to examine ideas for building design.

 

 

3D Tetris

Can be used for 3D gaming purposes, e.g. making Tetris more intense in 3D.

 

3D Viewing

Can be used for educational purposes. For example viewing DNA molecules; the 3D graphics and rotating manipulation allows 360 degrees of examination and studying.

 

Final system:

Video uploaded on youtube at:

http://www.youtube.com/watch?v=WnXExTsq5nc&feature=youtu.be

 

Parts used:

 

– 2 potentiometers

– 1 Arduino board

– 8 wires

– 1 breadboard

 

 

To recreate the design:

 

To build the 3D Viewer, we used two potentiometers with the Arduino and fed their values to Processing.  We brought 5V and ground from the Arduino to a breadboard and fed them to each potentiometer.  The potentiometers were wired the same way: 5V on the pin on side ‘3’ of the potentiometer, an analog pin from the Arduino to the middle pin (analog pin 0 for one pot and analog pin 1 for the other), and ground to the ‘1’ side of each pot.  In all we used — jumper cables between the breadboard and Arduino.  When we turn a pot, the corresponding analog input on the Arduino changes.  Each potentiometer has a full-scale resistance of 10k ohms.

To summarize the code: we fetch values from the analog inputs, convert them (stored initially as integers) to bytes, and send them to Processing over the USB serial connection.  Processing then scales the value of each pin to a number between 0 and 2π, and these values determine the rotation of the 3D object.  We chose to rotate the object about the y and z axes, and both rotations can happen at the same time.  This lets the user view every part of the 3D object and rotate it to preferred positions.

 

 

Source Code:

 

//ARDUINO CODE

<pre>

/* Adapted from the FSR simple testing sketch

(www.ladyada.net/learn/sensors/fsr.html).

Here the two analog inputs read the joystick potentiometers:

one wiper on Analog 0, the other on Analog 1. */

int fsrPin = 0;

int fsrReading;

int scndPin = 1;

int scndReading;

byte fsrByte;

byte scndByte;

 

// the two potentiometer wipers are connected to analog 0 and analog 1

// (variable names kept from the original FSR example)

void setup() {

// We’ll send debugging information via the Serial monitor

Serial.begin(9600);

}

 

void loop() {

fsrReading = analogRead(fsrPin);

scndReading = analogRead(scndPin);

 

fsrReading /= 4;   // scale 0-1023 down to 0-255 so each reading fits in one byte

scndReading /= 4;  // same for the second potentiometer

 

fsrByte = (byte)fsrReading;

scndByte = (byte)scndReading;

 

Serial.write(fsrByte);

Serial.write(scndByte);

delay(100);  // send a new pair of readings roughly 10 times per second

}

 

</pre>

 

<pre>

//PROCESSING CODE

 

import processing.serial.*;

 

float alpha;

float theta;

float alphaN;

float thetaN;

Serial port;

 

void setup_port() {

port = new Serial(this, Serial.list()[4], 9600);  // pick the serial port the Arduino is on (index varies by machine)

}

 

void setup() {

size(500,500,P3D);

alpha = 0.0;

theta = 0.0;

alphaN = 0.0;

thetaN= 0.0;

setup_port();

}

 

void draw() {

background(255);

stroke(0);

fill(175);

 

while (port.available() < 2) { };  // wait until a full two-byte reading pair has arrived

 

if (port.available() > 0) {

thetaN = port.read();

alphaN = port.read();

thetaN *= 0.02463994238;  // scale the 0-255 byte to 0-2*pi (2*pi/255)

alphaN *= 0.02463994238;

if (abs(thetaN - theta) > .2) {  // ignore jitter smaller than 0.2 rad

theta = thetaN; }

if (abs(alphaN - alpha) > .2) {

alpha = alphaN; }

}

 

translate(250, 250); // move the origin to the center of the window

rotateX(theta); // rotate about the x axis (first pot)

rotateY(alpha); // then about the y axis (second pot)

box(200); // draw the box

}

 

//void keyPressed() {

//  if (key == 'w') {

//    theta += .1;

//  }

//  if (key == 's') {

//    theta -= .1;

//  }

//  if (key == 'a') {

//    alpha -= .1;

//  }

//  if (key == 'd') {

//    alpha += .1;

//  }

//  if (key == 'i') {

//    y -= 5;

//  }

//  if (key == 'k') {

//    y += 5;

//  }

//}

 

</pre>