P3: Dohan Yucht Cheong Saha

Group # 21: “Dohan Yucht Cheong Saha”

 

 

  • Miles Yucht

  • David Dohan

  • Shubhro Saha

  • Andrew Cheong

 

Mission Statement

 

The system we’re evaluating is user authentication with our system (hereon called “Oz”). To use Oz, users make a unique series of hand gestures in front of their computer. If the hand gesture sequence (hereon called a “handshake”) is correct, the user is successfully authenticated. The prototype we’re building attempts to recreate the experience of our final product. Using paper and cardboard prototypes, we present the user with screens that ask them to enter their handshake. Upon successful authentication, the Facebook News Feed is shown as a toy example.
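To make the handshake idea concrete, here is a minimal sketch (our own illustration, not code from the prototype) of how a captured gesture sequence could be checked against a stored handshake. The gesture names and the exact-match rule are placeholders; the real system would take gesture data from the Leap Motion SDK and would store handshakes securely rather than in plain form.

#include <iostream>
#include <vector>

// Placeholder gesture vocabulary; the real system would derive these from
// Leap Motion frame data (this enum is our invention for illustration).
enum Gesture { FIST, OPEN_PALM, SWIPE_LEFT, SWIPE_RIGHT, CIRCLE };

// A handshake matches only if the same gestures occur in the same order.
// A real system would add timing tolerances and retry limits.
bool handshakeMatches(const std::vector<Gesture>& stored,
                      const std::vector<Gesture>& attempt) {
    return stored == attempt;
}

int main() {
    std::vector<Gesture> stored  = {FIST, SWIPE_LEFT, CIRCLE};
    std::vector<Gesture> attempt = {FIST, SWIPE_LEFT, CIRCLE};
    std::cout << (handshakeMatches(stored, attempt) ? "Authenticated" : "Rejected")
              << std::endl;
    return 0;
}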

 

Our mission in this project is to make user authentication on computers faster and more secure. We want to do away with text passwords, which are prone to hacking by brute force. At the same time, we’d like to make the process of logging into a computer system faster than typing on a keyboard. In this assignment, David Dohan and Miles Yucht will lead the LEAP application development. Andrew Cheong will head secure password management. Shubhro Saha will lead development of the interface between LEAP and common web sites for logging in during the demo.

 

Description of Prototype

 

Our prototype is composed of a box and a Leap Controller. The box is shaped so that more volume is covered at the top. The Leap Controller is placed at the bottom of the box so that it can detect the handshake gestures. The motivation behind this particular box design is to encourage users to place their hands slightly higher: with more volume covered at the top, people naturally hold their hands higher up. For initial authentication, the user selects his or her profile either manually on screen or via facial recognition. They can also reset their handshake through their computer.



Here is the box with the Leap Controller at the bottom. More volume is covered at the top of the box; therefore, the user naturally places his/her hand higher up in the box.

 

Using the Prototype for the Three Tasks

 

Task One: User Profile Selection / Handshake Authentication — In this scheme, most applicable to students at a university computer cluster, the user approaches the system and selects the user profile he/she wishes to authenticate into.

 

Our sample user is prepared to log in to Facebook

 

The user selects his/her account profile

 

Oz asks the user to enter their handshake

 

The user executes his/her handshake sequence inside a box that contains the LEAP controller

 

Our user is now happily logged in to Facebook.

 

Task Two: Facial Recognition / Handshake Authentication — As an alternative to user profile selection from the screen, Oz might automatically identify the user by facial recognition and ask them to enter their handshake.

 

The user walks up to the computer, and his/her profile is automatically pulled up

 

From this point on, interaction continues as described in Task One above.

 

Task Three: Handshake Reset — In this task, the user resets his/her secret handshake sequence for one of usually two reasons: (1) they forgot their previous handshake, or (2) they seem to remember the handshake, but the system is not recognizing it correctly.

 

At the handshake reset screen, the user is asked to check their email for reset instructions

 

Upon clicking the link in the email, the user is asked to enter their new handshake sequence

 

Prototype Discussion

 

We grabbed a file holder and made paper linings for the sides. Because this box is meant to prevent others from seeing your handshake, we had to cover up the holes along the sides of the file holder with the paper linings. These were taped on, and the Leap Controller was placed at the bottom of the box.

 

No major prototyping innovations were created during this project. The file holder we found had a pretty neat form factor, though.

 

A few things were difficult. We had to procure a properly shaped box for Oz users to put their hand in that could also accommodate the LEAP Motion controller. Out of convenience, our first consideration was a FedEx shipping envelope (1.8 oz., 9.252”x13.189”). This solution was quickly ruled out because of its odd shape. Second, we found a box for WB Mason printing paper. This too was ruled out, this time because of bulkiness. Finally, we found a plastic file holder in the ELE lab that had an attractive form for our application. This solution was an instant hit.

 

Once we found the box, it worked really well for our application. In addition, putting the LEAP inside it was relatively straightforward. Black-marker sketches are always an enjoyable task. All in all, things came together quite well.

 

Group 10: P3

Group Number and Name

Group 10 – Team X

Group Members

  • Osman Khwaja (okhwaja)
  • JunJun Chen (junjunc)
  • Igor Zabukovec (iz)
  •  (av)

Mission Statement

Our project aims to provide a way for dancers to interact with recorded music through gesture recognition. By using a Kinect, we can eliminate any need for the dancers to press buttons, speak commands, or generally interrupt their movement when they want to modify the music’s playback in some way. Our motivation for developing this system is twofold: first of all, it can be used to make practice routines for dancers more efficient; second of all, it will have the potential to be integrated into improvisatory dance performances, as the gestural control can be seamlessly included as part of the dancer’s movement.

Prototype Description

Our prototype includes a few screens of a computer interface which would allow the user to set up/customize the software, as well as view current settings (and initial instructions, gestures/commands). The rest of the prototype depends heavily on Wizard of Oz components, in which one member of our team acts as the Kinect, recognizing gestures and then responding to them by playing music on a laptop (using a standard music player, such as iTunes).

Our prototype will have three different screens to set up the gestures. Screen 1 will be a list of preprogrammed actions that the user can do. These include stop, start, move to last breakpoint, set breakpoint, go to breakpoint, start “follow mode”, etc.

     set_gesture_main

Once the user selects a function, another screen pops up that instructs the user to go make a gesture in front of the Kinect and hold it for 3 seconds or so.

capturing

Once the user creates a gesture, there will be a verification screen that basically reviews what functionality is being created and prompts the user to verify its correctness or re-try to create the gesture.

save_gesture

Tasks

(So that we can also test our setup interface, we will have the user test/customize the gestures of each task beforehand, as a “task 0”. In real use, the user would only have to do this as an initial setup.)

The user selects the function that they want to set the gesture for:

choose_gesture

The user holds the position for 3 seconds:

pause_gesture

The user confirms that the desired gesture has been recorded, and saves:

save_osman

An illustration of how our prototype will be tested is shown in the video below. For task 1, we will have the users set gestures for “play” and “pause”, using the simple menus shown. Then we will have them dance to a recorded piece of music, and pause/play it as desired. For task 2, we will have them set gestures for “set breakpoint” and “go to breakpoint”. Then they will dance to the piece of music, set a breakpoint (which will not interrupt the playback), and then, whenever desired, return to that breakpoint. For task 3, we will have the users set a gesture for “start following”, and record a gesture at normal speed. We will then have the users dance to the piece of music, start the following mode when desired, and then adapt the tempo of the music playing according to the speed of the repeated gesture.
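As a rough illustration of how the “follow mode” in task 3 could eventually work (this is our own sketch, not something the Wizard of Oz prototype implements), the playback rate could be scaled by the ratio between the gesture period recorded at normal speed and the period at which the dancer currently repeats it. The function name and the 0.5x–2x clamp below are invented for illustration:

#include <iostream>

// Returns the factor by which to scale the music's playback speed, given the
// gesture period recorded at normal tempo and the period currently observed.
double playbackRate(double referencePeriodSec, double observedPeriodSec) {
    if (observedPeriodSec <= 0.0) return 1.0;   // no valid observation: normal speed
    double rate = referencePeriodSec / observedPeriodSec;
    if (rate < 0.5) rate = 0.5;                 // clamp to a sane range
    if (rate > 2.0) rate = 2.0;
    return rate;
}

int main() {
    // Gesture recorded once every 2.0 s; dancer now repeats it every 1.6 s,
    // so the music should play about 1.25x faster.
    std::cout << playbackRate(2.0, 1.6) << std::endl;
    return 0;
}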

Our “Wizard in the Box”, controlling the audio playback:

wizard_in_box

Discussion

i. We made our initialization screens using paper, but the bulk of it was “Wizard of Oz”, and just responding to gestures.

ii. Since our project doesn’t really have a graphic user interface, except for setup and initial instructions, we relied heavily on the Wizard of Oz technique, to recognize and respond to gestures and voice commands. Since what the user would mostly be interacting with is music and sound, which can’t be represented well on paper, we felt it was appropriate to have our prototype play music (the “wizard” would just push play/pause, etc on a laptop).

iii. It was a little awkward to try to prototype without having the kinect or even having a chance to get started creating an interface. Since users would interface with our system almost completely through the kinect, paper prototypes didn’t work well for us. We had to figure out how to show interactions with the kinect and music.

iv. The Wizard of Oz technique worked well, as we could recognize and respond to gestures. It helped us get an idea of how tasks 1 and 2 work, and we feel that those can definitely be implemented. However, task 3 might be too complicated to implement, and it might be better to replace it with a simpler “fast-forward / rewind” function.

Do You Even Lift- P3

Group Number and Name

Group 12 — Do you Even Lift?

First Names of Team

Andrew, Matt, Adam, and Peter

Mission Statement

We are evaluating a system designed to monitor the form of athletes lifting free weights and offer solutions to identified problems in technique.

Some lifts are difficult to do correctly, and errors in form can make those lifts ineffective and dangerous. Some people are able to address this by lifting with an experienced partner or personal trainer, but many gym-goers do not have anyone knowledgeable to watch their form and suffer from errors as a result. Our system seeks to help these gym-goers with nowhere else to turn. In this regard, we want our prototype to offer an approachable interface and to offer advice that is understandable and useful to lifters of all experience levels.

Concise Mission Statement: We aim to help athletes of all experience levels lift free weights with correct technique, enabling them to safely and effectively build fitness and good health.

Member Roles: This was a collaborative process. We basically worked together on a shared document in the same room for most of it.

Adam: Compiled blog, wrote descriptions of 3 tasks, discussion questions…

Andrew: Drew tutorial and feedback interface, mission statement, narrated videos…

Matt: Drew web interface, took pictures, mission statement…

Peter: Mission statement, discussion questions, filmed videos…

Clear Description of the Prototype w/ Images

Our prototype is a paper prototype of the touch screen interface for our system. Users first interact with our paper prototype to select the appropriate system function. They then perform whatever exercise that system function entails. Finally, users receive feedback on their performance.

We envision our system consisting of a Kinect, a computer for processing, and a touch screen display. Our touch screen display will be the only component with which users physically interact. If we do not have a touch screen computer for our prototype, we will substitute an ordinary laptop computer.

This is our proposed startup page. From this page, users can select the exercise which they are about to perform. They also have the option to click the “What is This?” button which will give them information about the system.
After selecting an exercise, users can enter either “Quick Lift” mode or “Teach Me” mode. In “Quick Lift,” our system will watch users lift weights and then provide technical feedback about their form at the end of each set. In “Teach Me” mode, the system will give the user instructions on how to perform the lift they selected. This page of the display will also have a live camera feed to show users that the system is interactive.

In the top right corner of the display, users can also see that they have the option to log in. If they log in, we will track their progress so that they can view it in our web interface and so the system can remember their common mistakes for future workouts.
In “Quick Lift” mode, users have the option of receiving audio cues from our system (like “Good Job!” or “Keep your back straight!”). Users will then start performing the exercise (either receiving audio cues or not). Once they are finished with a set, we will show a screen like the one below. On the screen we will show users our analysis of each repetition in their previous set of exercises. We will highlight their worst mistakes and will allow them to see a video of themselves in action. This screen will also allow them to see their results from previous sets. Likewise, if a user was logged in, this information would be saved so that they could later reference it on a web interface.
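As a sketch of how the per-repetition analysis could be computed once the Kinect is in the loop (our own illustration; the joint heights, thresholds, and function name below are made up), repetitions can be counted by watching a tracked joint rise above a top threshold and fall back below a bottom one:

#include <iostream>
#include <vector>

// Counts a repetition each time the tracked height rises above `high` and then
// falls back below `low` (a simple hysteresis, so jitter near one threshold
// is not double-counted).
int countReps(const std::vector<double>& heights, double low, double high) {
    int reps = 0;
    bool atTop = false;
    for (double h : heights) {
        if (!atTop && h > high) {
            atTop = true;            // hands/bar passed the top of the rep
        } else if (atTop && h < low) {
            atTop = false;           // returned to the bottom: one full rep
            ++reps;
        }
    }
    return reps;
}

int main() {
    // Made-up hand heights (meters) sampled over two overhead presses.
    std::vector<double> heights = {1.0, 1.3, 1.6, 1.2, 0.9, 1.1, 1.7, 1.0, 0.8};
    std::cout << countReps(heights, 1.0, 1.5) << " reps" << std::endl;  // prints "2 reps"
    return 0;
}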

If a user selects “Teach Me”, they are taken to a screen like the one below. This screen gives a text description, photos, and a video of the exercise. After reading the page, the user can press the “Got it!” button. The system will then encourage the user to try the exercise themselves using the unweighted bar. After the user successfully performs the exercise a number of times, the system will prompt the user to try that exercise in “Quick Lift” mode.

The picture below is our web interface. Here, workout data is organized by date. By clicking on a date, users can unfold the accordion-style menu to view detailed data from their workout, such as weight lifted, number of sets and repetitions, and video replay. Users can filter the data for a specific exercise using the search bar at the top. Searching for a specific exercise reveals a graph of the user’s performance on that exercise.


Descriptions of 3 Tasks

Provide users feedback in their lifting form

We intend the paper prototype to be used in this task by having users select an exercise and then the “quick lift” option. The user will then navigate through the on screen instructions until he or she has completed the exercise. When the user performs the exercise, we will give them feedback by voice as well as simulate the type of data that would appear on the interface.

The opening page for “Quick Lift.” The Kinect is watching the user and will give feedback when the user begins exercising.

The user begins exercising.

As the user exercises, the side menu accumulates data. A live video of the user is displayed in the video pane.

After the lift, users can navigate through data from each repetition in their set of exercises. The text box tells users their errors, the severity of those errors, and explanations of why those errors are bad.

Track users between sessions

We intend the paper prototype to be used in this task by having a user interact with the web interface. First, a user will see the homepage of the web interface. The user will then click through the items on the page, and the page elements will unfold to reveal more content.

The web interface is a long page of unfolding accordion menus.

In the web interface, users can look back at data from each set and repetition and evaluate their performance.

Accordion menus unfold when the user clicks to display more data.

Create a full guided tutorial for new lifters

We intend the paper prototype to be used in this task by having users select an exercise and then the “teach me” option. The user will then navigate through the on-screen instructions until he or she has read through all the instructive content on the screen. Then, when the user is ready to perform the exercise, he or she will press the “got it!” button.

The user selects the “teach me” option

The user can go through the steps of the exercise to see pictures and descriptions of each step.

Discussion of Prototype

i. How did you make it?

We made the prototype by evaluating the tasks users perform with the system and coming up with an interface to allow for the completion of those tasks. It made the most sense for us to create a paper interface that would display the information users would see on the display monitor. The idea is that users would use the paper interface to interact with the system as they would with the touch screen, and we would use our own voice commands to provide users with audio feedback about their form if they wanted it.

ii. Did you come up with any new prototyping techniques to make something more suitable for your assignment than straight-up paper? (It’s fine if you did not, as long as paper is suitable for your system.)

We used a combination of the standard paper prototype with “Wizard of Oz” style audio and visual cues.

iii. What was difficult?

 It was difficult to compactly represent the functionality offered by the Kinect on a paper prototype. As described above, we were able to partially account for this by adopting “Wizard of Oz” style audio and visual techniques. However, our system relies on users taking advice from a computer and it was difficult to test how receptive a user would be to our advice.

iv. What worked well?

We think our paper prototype interface is pretty intuitive and makes it easy for users to choose the functionality they want. The design seems pretty self-explanatory, which is especially helpful when new users interact with the system. We were also pleased with the methods we chose to give users real-time feedback without distracting them from their lifts.

L3 – Team Colonial Club

Team Colonial Club, Group #7

David Lackey
John O’Neill
Horia Radoi

Description

This robot tries to simulate walking using two servomotors attached to opposing sides of the body. It uses friction to thrust itself forward.

This robot is a precursor to the infamous AT-AT, present in the (good) episodes of the Star Wars saga. As opposed to the robot that helped conquer Hoth, our model uses only two legs, on opposing sides, and upon careful calibration, the device can walk. Since our project was struggling with this last step, we decided to add a flag on top of it and call it a marvel of Empire Engineering.

List of Brainstorming Ideas

  1. oscillating helicopter
  2. creature that moves until it finds a sunspot / light
  3. boat that submerges itself when it hears a sound
  4. tank creature (multiple motors within on tread)
  5. worm
  6. creature that travels by rapid, random vibrations
  7. hovercraft
  8. robot that quickly rotates a flag
  9. swimmer
  10. walks around while spinning a disco ball
  11. robot that goes slower in the cold
  12. samurai robot that twirls a staff (we could have two battle)
  13. tug of war robots, each moving away from one another

Photos of Design Sketches

photo 1 photo 2

Final System Media    

IMG_20130326_214035

Breadboard

IMG_20130326_213955

The creature, complete with flag

Creature
Video of Moving Robot Carrying Flag
Moves from point A to point B, ever so slowly…

List of Parts

  1. 1 Arduino + jumper wires + USB connection/power
  2. 2 servo motors (for legs)
  3. 1 DC motor
  4. Electrical tape (to attach pieces together)
  5. Transistor
  6. Diode
  7. Capacitor
  8. 330 ohm resistor
  9. Custom flag to represent your team

Recreation Instructions

After acquiring the appropriate materials, wire the DC motor to digital pin 8 as well as two servo motors to pin 9 (so that they move in conjunction with one another). Next, tape together the two servo motors as demonstrated in the video, then orient the DC motor with the pin facing upward and tape it to the two servo motors (making sure to attach your own personalized flag to the top of the DC motor). Once all of these components are assembled, upload the attached code and watch your creature strut its stuff!

Source Code

#include <Servo.h>

int servoPin = 9;
int motorPin = 8; 

Servo servo;  

int angle = 0;   // servo position in degrees 

void setup() 
{ 
  pinMode(motorPin, OUTPUT);
  servo.attach(servoPin); 
} 

void loop() 
{ 
  // sweep the servo from 0 to 180 degrees and back, keeping the DC motor on
  for(angle = 0; angle < 180; angle++)
  {
    servo.write(angle);
    analogWrite(motorPin, 250);
    delay(5);
  }
  for(angle = 180; angle > 0; angle--)
  {
    servo.write(angle);
    analogWrite(motorPin, 250);
    delay(5);
  }
}

Team CAKE – Lab 3

Connie, Angela, Kiran, Edward
Group #13

Robot Brainstorming

  1. Peristalsis robot: Use servo motors with rubber bands to get it to move with elastic energy somehow
  2. Helicopter robot: spinning rotor to make it fly (like the nano quadrotor except less cool)
  3. Puppet robot: use motor or servos to control puppet strings
  4. Crab style robot: crawling on six legs
  5. Robo-ball: omnidirectional rolling
  6. 3- or 4-wheeled robot: like a car
  7. fixed magnitude offset in motor speed of 2-wheel robot — move in squiggles
  8. Magnet-flinging robot: has a string with a magnet attached to it, uses a motor catapult to throw it forward, latches on to nearest magnet, and then has another motor to reel in the string. rinse and repeat
  9. Flashlight-controlled robot: use photosensors and it only moves if a lot of light is shone on it
  10. Tank robot: use treads around wheels
  11. Hopping robot: uses a servo to wind up a spring and fling itself forward
  12. Inchworm robot: moves like an inchworm
  13. Sidewinder robot: moves like a sidewinder
  14. Hot air balloon: make a fan that blows air past a heated element into a balloon (might be unsafe)
  15. Sculpture: moves linked magnets in constrained area with a magnet on motor (more of an art piece than a robot)

Red light Green light Robot

Our robot is a two-wheeled vehicle made of two paper plates. It plays red light green light with the user: when the user shines a flashlight on the vehicle, it moves forwards. It stops moving when the light stops shining on it.
We made this because it was a simple way to make a robot that was still interactive, rather than just moving arbitrarily on its own. While our electronics worked exactly as planned, it was very difficult to create a chassis that would allow the motor to drive the wheels while being able to support the weight of the battery pack, arduino, and breadboard. In fact, our robot didn’t really work – it just shuddered slightly but didn’t move. This was primarily due to the weight of the components; we’d need a more specialized set of parts like Lego or some structural kit with gears instead of sticks, plates, and tape. It was especially difficult to find a way to connect the smooth motor shaft with the plate (although we did get a very good attachment with just one plate and a motor).

Here is a shot of our robot in action, or to be more accurate, robot inaction.


In this picture you can see the electronics as well as the attachments of the components and dowels to the wheels.


This is the design sketch for the Red light Green light robot


Parts List

  • Arduino Uno with battery pack
  • Breadboard (as small as possible!)
  • Wires
  • Photoresistor
  • PN2222 Transistor
  • 1x DC motor
  • 1x 1N4001 diode
  • 1x 270Ω resistor
  • 1x 3KΩ resistor
  • 2x paper plates
  • Wooden dowel, at least 40cm long
  • Tape
  • Paperclips

Instructions

  1. Attach the photoresistor from 5V to analog pin 5 through a 3KΩ pulldown resistor
  2. Attach the motor as shown in http://learn.adafruit.com/adafruit-arduino-lesson-13-dc-motors/breadboard-layout; use the diode between the two sides of the motor, attaching the middle pin of the transistor to digital port 3 through the 270Ω resistor
  3. Measure the ambient light reading from the photoresistor, and then the flashlight reading, and set the threshold appropriately between the two readings
  4. Punch holes as appropriate in the paper plate wheels (small holes in the center, two larger ones opposite each other).
  5. Unfold a paperclip and wind one half around the spinning axle of the motor. Tape the other half flat on the outside of one wheel.
  6. Break the dowel in half and poke the halves through the larger holes in the wheels, tape them in place.
  7. Securely attach arduino, breadboard, and battery pack in a solid block. Connect the motor appropriately, and make sure the photoresistor faces upward.
  8. Unfold a paperclip and securely tape half onto the arduino/breadboard/batteries contraption. Unfold the other half and poke a straight prong through the paper plate not attached to the motor.

Source Code

/* Red light Green light robot
 * COS436 Lab 3, group 13
 */
int motorPin = 3;
int photoPin = A5;
int motorspeed = 0;
int threshold = 830;
 
void setup() 
{ 
  Serial.begin(9600);
  pinMode(motorPin, OUTPUT);
  pinMode(photoPin, INPUT);
} 
 
 
void loop() 
{
  Serial.println(analogRead(photoPin));
  if (analogRead(photoPin) > threshold) {
    motorspeed = 200;
  }
  else {
    motorspeed = 0;
  }  
  analogWrite(motorPin, motorspeed);
  delay(40);
} 

GaitKeeper

a) Group number and name

Group 6 – GARP

b) Group first names and roles

Gene, Alice, Rodrigo, Phil

Team Member Roles:
Gene – Editor, D.J.
Alice – Writer, Punmaster
Rodrigo – Idea man/Photographer
Phil – Artiste/Builder

c) Mission Statement

Runners frequently get injuries. Some of these injuries could be fixed or prevented by proper gait and form. The problem is how to test a person’s running form and gait. Current tools for gait analysis do not provide a holistic picture of the user’s gait. Insole pressure sensors fail to account for other biomechanical factors, such as leg length differences and posture. While running, users have very little access to data — they are entirely dependent on their own perception of their form while running. This perception can be rather inaccurate, especially if the runner is fatigued. We will build a system that solves these problems while causing minimal alteration to natural running movements. We hope to develop our plans for sensor placement, location of wearable components, and data visualization. We hope to discover whether or not people think this has enough or too many components. We will evaluate our product for comfort based on the intended size, weight, and shape. We will evaluate the effectiveness of depictions of the recorded data. Our mission is to build a low-impact gait-analysis system that generates a more meaningful representation of data than the existing systems. These metrics will facilitate sports medicine and the process of buying specialized athletic footwear.

d) Prototype Description

We implemented a basic version of the foot sensor, the ankle device, the wrist display, and the screens for the computer once the device has been hooked up. The foot sensor is made out of cardboard and is about the size and thickness that our insert will be. We drew on top the general layout of where the sensors will likely be. The ankle device is made out of foam, some weights, and material from a bubble wrap envelope as the strap. The foam and weights were used to create a device similar in size and feel to an Arduino with a battery pack and accelerometer. The wrist display is just three circles drawn onto a strap, again made from a bubble wrap envelope. The circles represent LEDs that will light up to indicate correct, so-so, and bad gait (tentatively chosen as green, yellow, and red).

For the screens we have mapped out a welcome screen which can take you to either load new data or view past analyses. Selecting to load new data will first take you to a waiting screen. Once the device is connected, the listed device status will change to connected and the progress bar will begin to fill. From there it will go to the page that displays the analysis tools. We have depicted a heat map image of the foot showing the pressure, and information about each sensor, the runner, the data, and any additional notes the user wants to input.

Selecting history from the welcome screen will take you to a page with a list of users. You can either scroll through them or select the top text bar to type in a name to narrow down the results. Clicking on a user will open a drop-down menu of dates; selecting a date will take you to the analysis from that day, which is basically the same page as the one you go to after loading new data, but will load any previously made notes or user input.
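Although the low-fidelity prototype only simulates the LEDs, the sketch below illustrates the kind of logic the real wrist indicator might run. It is our own illustration: the heel/toe pressure ratio used as a stand-in “gait score,” the pin assignments, and the thresholds are all placeholders that would have to come from the actual sensor layout and real gait data.

// Three-LED wrist indicator driven by a placeholder gait metric.
int greenPin = 2, yellowPin = 3, redPin = 4;   // assumed LED pins
int heelPin = A0, toePin = A1;                 // assumed analog pressure sensor pins

void setup() {
  pinMode(greenPin, OUTPUT);
  pinMode(yellowPin, OUTPUT);
  pinMode(redPin, OUTPUT);
}

void loop() {
  int heel = analogRead(heelPin);
  int toe  = analogRead(toePin);
  float ratio = (toe == 0) ? 0.0 : (float)heel / (float)toe;

  bool good = (ratio > 0.8 && ratio < 1.5);                             // "correct" gait
  bool soso = (ratio >= 1.5 && ratio < 2.5) || (ratio > 0.5 && ratio <= 0.8);

  digitalWrite(greenPin,  good ? HIGH : LOW);
  digitalWrite(yellowPin, soso ? HIGH : LOW);
  digitalWrite(redPin,    (!good && !soso) ? HIGH : LOW);               // "bad" gait

  delay(100);
}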

The foot pad for GaitKeeper

The ankle holder for our prototype.

e) Prototype Testing Tasks

TASK 1: While running, assess gait and technique.

The motivating story for this test is the story of a runner who is worried that their form while running is unhealthy, placing them in danger of injuring themselves.

We will have the athlete install the system on themselves to determine the intuitiveness of placement of the various components. To facilitate testing, the subject will run on a treadmill. One member of our group will perform real-time gait analysis of our subject based on the evaluator’s personal running experiences. Another member will change the color of the simulated LED accordingly. The group member in charge of gait analysis will observe the runner for gait alterations. We weighted the prototype components to approximate the weight of each real component, and will observe the prototype’s stability and attachment during running. Also, we will interview the user about the comfort of the system.

The athlete puts the foot pad in their shoe

The athlete straps the prototype on their ankle

The athlete runs on the treadmill

During the workout, the LEDs change color depending on the health of the gait

TASK 2: After an injury, evaluate what elements of gait can be changed in order to prevent recurrence.

The motivating story for this test is the story of a runner who has injured themselves while running. The injury may or may not be gait-related. They are working with a specialist in biomechanics and/or orthopedic medicine to alter their gait to reduce the chance of exacerbating the injury or re-injuring themselves.

We will attempt to find a runner with a current injury, and interview them about the nature of their injury ahead of time. Specialists in biomechanics have extensive demands on their time, so one group member will play that role instead, including brief research into prevention of the injury in question. After assisting the runner in placing and securing the system, we will have the runner run on a treadmill. After the run, we will simulate plugging the device into the computer, and will show simulated running data.

After the workout, the device is plugged into a computer.

The data is shown and aspects of the gait can be identified. Using the data, the doctor can see whether the gait has unhealthy characteristics and can suggest exercises that would help improve the gait for the athlete.

TASK 3: When buying running shoes, determine what sort of shoe is most compatible with your running style.

The motivating story for this test is the story of a customer who is going to a running store and looking for the perfect shoe for their gait. Even if the store allows the user to test the shoe on a treadmill in the store, it is difficult to find the right shoe from feel alone (we know this from personal experience after a team member bought a shoe in P2). A quantitative process would be less error-prone and would allow the in-store specialists to give more substantial advice to customers.

The shoe pad fits into various shoe models of the same size

A gait window for each shoe is opened on a computer in front of the treadmill. The user can see the results of the sensors as they run and the specialist, by the treadmill, can see it as well.

f) Prototype Discussion

i. How did you make it?

The prototype was constructed from found and brought materials in an ad-hoc manner. We worked to simulate the weight and feel of the actual system.

ii. Did you come up with any new prototyping techniques to make something more suitable for your assignment than straight-up paper?

Yes. Our prototype involves fewer interface screens than the projects we contemplated for the individual paper prototyping assignment. We used materials in a three-dimensional manner to model how they will fit on the body of the user.

iii. What was difficult?

It was difficult coming up with names for buttons and functions that would be intuitive for a user.

Building a simulation of physical objects is difficult because the feel of the objects is more important than the visual appearance. We couldn’t think about this physical shape and feel prototyping the same way we did about the paper prototyping we did for the individual assignment.

It was also difficult to determine the layout of the screens.  Specifically, designing the main analysis screen was a challenge because we wanted it to be as informative as possible without being cluttered.  This is clearly a central challenge of all data analysis tools, as it required us to really consider our distinct user groups and how each of them will interact with the data differently.  After a good deal of discussion, we decided that it would be effective to have a single desktop interface that all user groups interact with.  Our main concern here was that runners might be overwhelmed by the information that doctors or running company employees would want to see for their analysis.  However, we concluded from our previous interviews that the running users who would use this product would probably be relatively well informed, almost to the level of running store employees.

iv. What worked well?

We let group members who were immediately excited about prototyping and began building components without prompting continue with prototyping for the duration of the assignment. This produced prototyping material quickly, and let good ideas flow.

By connecting the intended uses of the tasks together, we were able to make an interface that addresses the multiple needs of each task simultaneously. This simplified our product and allowed us to make it more understandable and applicable to use cases we weren’t even thinking about.

The design of the physical device was also a success, as we found through testing that it did not significantly affect how we ran.  The weights added, which we made comparable to an Arduino with batteries, were not excessive.  The form factor was also acceptable, even in a context with a good deal of movement.

P3: NavBelt

Group 11 – Don’t Worry About It
Krithin, Amy, Daniel, Jonathan, Thomas

Mission Statement
Our purpose in building the NavBelt is to make navigating cityscapes safer and more convenient. People use their mobile phones to find directions and navigate through unfamiliar locations. However, constantly referencing a mobile phone with directions distracts the user from safely walking around obstacles and attracts unwanted attention. From our first low-fidelity prototype, we hope to learn how best to give feedback to the user in a haptic manner so they can navigate more effectively. This includes how long to vibrate, where to vibrate, and when to signal turns.

Krithin Sitaram (krithin@) – Hardware Expert
Amy Zhou (amyzhou@) – Front-end Expert
Daniel Chyan (dchyan@) – Integration Expert
Jonathan Neilan (jneilan@) – Human Expert
Thomas Truongchau (ttruongc@) – Navigation Expert

We are opening the eyes of people to the world by making navigation safer and more convenient by keeping their heads held high.

The Prototype
The prototype consists of a paper belt held together by staples and an alligator clip. Another alligator clip connects the NavBelt prototype to a mock mobile phone made of paper. The x’s and triangles mark where the vibrating elements will be placed. For clarity, the x’s mark forwards, backwards, left, and right while the triangles mark the directions in between the other four.
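As an illustration of the vibration logic the final belt might use (our own sketch; the pin numbers, the 300 ms pulse length, and where the heading and bearing come from are all assumptions), the belt could pulse whichever of the eight motors lies closest to the direction the user needs to move:

// Eight vibration motors at 45-degree intervals around the belt.
int motorPins[8] = {2, 3, 4, 5, 6, 7, 8, 9};   // front, front-right, right, ... (clockwise)

void setup() {
  for (int i = 0; i < 8; i++) {
    pinMode(motorPins[i], OUTPUT);
  }
}

void pulseTowards(float bearingToTarget, float currentHeading) {
  // Angle the user needs to turn through, normalized to [0, 360)
  float relative = bearingToTarget - currentHeading;
  while (relative < 0)    relative += 360;
  while (relative >= 360) relative -= 360;

  // Each motor covers a 45-degree slice; pick the nearest one
  int index = ((int)((relative + 22.5) / 45.0)) % 8;

  digitalWrite(motorPins[index], HIGH);
  delay(300);
  digitalWrite(motorPins[index], LOW);
}

void loop() {
  // Placeholder values: the real bearing would come from the phone and the
  // heading from a compass module on the belt.
  pulseTowards(90.0, 75.0);
  delay(700);
}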

The Tasks
1. Identify the correct destination. (Easy Task)
2. Provide information about immediate next steps. (Hard Task)
3. Reassure the user that he or she is on the right path. (Moderate Task)

Method
1. The user types his destination into his mobile phone, and verifies using a standard map interface that the destination has been correctly identified and that an appropriate route to it has been found.

2. One of the actuators on the NavBelt will constantly be vibrating to indicate the direction the user needs to move in; we simulated this by having one of our team repeatedly poke the user with a stapler at the appropriate point on the simulated belt. Vibration of one of the side actuators indicates that the user needs to make a turn at that point.

The following video shows how a normal user would use our prototype system to accomplish tasks 1 and 2. Observe that the user first enters his destination on his phone, then follows the direction indicated by the vibration on the belt.

http://www.youtube.com/watch?v=oeiPrTMWa0c&edit=vd

The following video demonstrates that the user truly can navigate solely based on the directions from the belt. Observe that the blindfolded user here is able to follow the black line on the floor using only feedback from the simulated belt.
http://www.youtube.com/watch?v=2ByxGkh11FA&edit=vd

3. In order to reassure the user that he or she is on the correct path, the NavBelt will pulsate in the direction of the final destination; if the actuator at the user’s front is vibrating that is reassurance that the user is on the right track. Again, a tester with a stapler will poke at one of the points on the belt to simulate the vibration.
http://www.youtube.com/watch?edit=vd&v=cH1OzO-7Swc

Discussion
We constructed the prototype from strips of paper and alligator clips to hold the belt together and to represent the connection between the mobile phone and the NavBelt. We also used a stapler to represent the vibrations that would direct the user where to walk. We encountered no real difficulties and the prototype was effective for guiding a user between two points.

%eiip — P3 (Low-Fidelity Prototype)

Group Number and Name

Group 18: %eiip

First Names

Bonnie, Erica, Mario, Valya

Netids (for our searches)

bmeisenm, eportnoy, mmcgil, vbarboy

Mission Statement

We are evaluating a bookshelf system for storing and retrieving books. Our system currently involves a bookshelf, with embedded RFID scanners; RFID tags, which must be inserted into each book; and a web application for mobile devices that allows a user to photograph their book to enter it into the database, as well as search through the database and view their books. The purpose of our project is to allow book owners the flexibility of avoiding a rigid organizational system while also being able to quickly find, retrieve, and store their books.

With our low-fidelity prototype, we want to learn if users think that using our system is natural enough that they would be willing to use it. We want to observe how users choose to interact with the system, and whether or not it frustrates them. Specifically, we also want to observe their physical motions in order to tailor the construction of our more high-fidelity prototypes. Based on this, we state our mission as follows: We believe that there’s something uniquely special about personal libraries. Rigid organizational systems remove some intangible, valuable experience from book collections; also, we’re lazy! We want to build a way for people to keep track of their books that’s as natural and easy as possible.

In this assignment, Mario is drawing interfaces for the mobile website and writing up descriptions; Bonnie is writing responses to questions from the first part (mission statement, etc.), writing the description of prototypes, and creating task walkthrough films; Valya is writing test user stories for each task and creating task walkthrough films; and Erica is constructing the cardboard prototype shelf, writing the prototype discussion, and putting together the blog post.

Description of Prototype, With Images

Our prototype consists of a two-shelf, cardboard “bookshelf” with paper “RFID sensors” taped to the back; index cards representing the mobile web interface for adding books to the system; some books; and some index cards representing RFID tag bookmarks.

Bookshelf with books in it.

“RFID sensors” on the back of the bookshelf.

RFID tag bookmark in book

Main book search and list screen. The user has searched for a book that is not in the collection.

Adding books, step 1: Screen that asks the user to take a picture of the book

Adding books, step 2: Asks the user to take a picture of the book’s ISBN number.

Adding books, step 3: User enters or verifies information about the book.

Adding books, steps 4 and 5: Instructs the user to insert the RFID bookmark or tag sticker and add the book to the shelf.

The user filed the book successfully!

Display all books in the user’s collection.

Detail screen for a particular book. The user can see the book’s information, and has the option to delete it from the collection.
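To make the shelf-sensing idea more concrete, the sketch below shows what one shelf slot’s reader could do. It is our own illustration and assumes an MFRC522-style RFID reader module and one LED per slot; the actual prototype does not specify reader hardware, and the pin numbers are placeholders.

// One shelf slot: read a tag's UID and flash the slot's LED.
#include <SPI.h>
#include <MFRC522.h>

#define SS_PIN  10
#define RST_PIN 9
MFRC522 reader(SS_PIN, RST_PIN);
int slotLedPin = 4;                 // LED that "lights up" to point out this slot

void setup() {
  Serial.begin(9600);
  SPI.begin();
  reader.PCD_Init();
  pinMode(slotLedPin, OUTPUT);
}

void loop() {
  if (!reader.PICC_IsNewCardPresent() || !reader.PICC_ReadCardSerial()) {
    return;                         // no tagged bookmark near this slot right now
  }

  // Report the tag's UID; a host program would map UID -> book -> slot
  for (byte i = 0; i < reader.uid.size; i++) {
    Serial.print(reader.uid.uidByte[i], HEX);
    Serial.print(i + 1 < reader.uid.size ? ":" : "\n");
  }

  // Flash the slot LED as a stand-in for "the bookshelf lights up"
  digitalWrite(slotLedPin, HIGH);
  delay(500);
  digitalWrite(slotLedPin, LOW);

  reader.PICC_HaltA();              // stop talking to this tag until it is re-presented
}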

Tasks

Easy Task:

We brainstormed a few tasks that a user interacting with our prototype could feasibly want to do. Some easy tasks that we thought of were putting a book that’s already in the system onto the shelf, checking if a given book is in a user’s collection, and retrieving a book from the shelf. Ultimately we decided that retrieving a book from the shelves was a more important task to test. Moreover, checking if a book is there could be part of the retrieval process. We would tell our testers the following: Imagine that you have acquired a new bookshelf, for storing your books. You also have a database which keeps track of the books that are on the bookshelf, and where they are, accessible via a mobile website. Suppose that you want to get a textbook for your Imagined Languages class, for example In the Land of Invented Languages by Arika Okrent. Can you get the textbook from the bookshelf? Having accomplished that, could you also get me John Milton’s Paradise Lost? We will ask the user to perform these tasks using our mobile web application. [Note: The Imagined Languages textbook is actually on the bookshelf, so getting it should be easy for the user. However, Paradise Lost is not. We want to see if the user can search the database and accurately understand the information given. If the book is on the bookshelf, the user should go and get it, if the book is not in the system we expect them to say something along the lines of “I do not have this book.”]

The bookshelf lights up to tell you where your book is.

Searching for a book that is not there.

Searching through your library.

Moderate Task:

For our moderate task we chose adding a book to our system, which would include tagging it, adding it to the database, and then placing it on the shelf. We would tell our testers the following: You’re starting a new semester, so you have lots of new classes. In particular, you just purchased Lotus Blossom of the Fine Dharma (The Lotus Sutra). You want to add this book to your collection so that you can find it in the future. We will have the user tag the book, add it to the database, and then place it on our prototype bookshelf.

Placing a tagged book onto the bookshelf.

The website tells you to RFID-tag your book.

Difficult Task:

Finally, for our difficult task we chose adding an entire collection to the system. The reason we’re concerned with this as a separate task is because it’s unclear to us how annoying this would be for a user, and we want to see if we can make it more natural and less tedious. We would tell our testers the following: Suppose that you just bought a new bookshelf that keeps track of all of your books for you. The problem is that you already own quite a few books, and you need to add all of them to the system. Here is a pile of the books you own. We will then have the user tag all of them, add them to the database, and add them to the bookshelf so that they can find them in the future.

Taking a picture of the cover to add the book to the database.


Prototype Discussion

While our prototype includes a software system, we are also extremely interested in seeing how the users interact with the physical objects in our system. Thus, we constructed a scaled-down cardboard bookshelf that can hold a few books. Since paper isn’t exactly sturdy enough, we used cardboard, duct tape, and index cards to put together a bookshelf. We constructed the bookshelf using a couple of disassembled cardboard boxes folded into a shelf-like shape. We added “RFID readers” by folding index cards into vaguely reader-like shapes and taping them onto the back. We are using index cards folded in half to simulate RFID tags. We used index cards to create a paper prototype, in the usual manner.

Putting together cardboard in a way that will hold a few books using minimal amounts of cardboard was slightly difficult but doable. We used some index cards to stabilize it, which was pretty cool. We prototyped our web application (the main interface to the system) using paper index cards, which we felt were appropriate given that the application is targeted primarily at mobile devices. Getting the specifics of all the workflows correct at first was somewhat difficult, since we had not fully fleshed them out before – for instance, our first implementation of the “add book” workflow did not allow users to verify the quality of the pictures they took before proceeding to the next step, which was an awkward design. We also had some initial struggles with conforming to standard mobile UI conventions and best-practices; thinking consciously and critically about the layout of UIs is difficult, especially for mobile contexts where screen real-estate is at a premium.

Group GARP – L3

Gene Mereweather
Philip Oasis
Alice Fuller
Rodrigo Menezes

Group #6

We built a tricycle that used three rolls of tape as wheels and wooden dowels as axles. One DC motor powered a cardboard fan that pushed the tricycle. We consider the prototype a success, but it definitely is not the most efficient mode of transportation! We tried to keep the tricycle as compact as possible, as weight played a very large part in its motion.

Brainstormed Ideas

  1. Continuous treads (like a tank!). Use rolls of aluminum foil for wheels and aluminum foil as the treads.
  2. Reeling-in robot with thread attached to motor and other end fixed
  3. Balls instead of wheels – allows for more directions (makes parallel parking a breeze!). Servo motors can change the direction of the DC motors.
  4. Hover car – fans on the bottom for hovering and fan on the top for direction
  5. Use two servos with wooden extensions to act as legs, with another “leg” trailing behind to keep it balanced
  6. Put an off-center weight on the DC motor, then set the whole assembly on top of angled toothbrush heads
  7. Use one servo to scoot the robot forward, and a DC motor for rotary
  8. Use the servo motor to change direction in the front two wheels and the DC motor for acceleration in the back two wheels.
  9. “parachuter” attach piece of cloth to catch wind, then use two servos to tug on the cloth so as to change motion
  10. Attach a magnet to the servo arm, secure another large magnet to the floor, and change the servo angle to attract or repel that other magnet
  11. have a little stone attached to a string. The servo will “throw” the ball out, the DC motor which has the string attached to it will pull on it to pull itself forward (the rock would have to be heavy enough)
  12. There will be two servos acting as breaks/balancers they will both lift up briefly then the DC motor which will be in the back will engage and move the robot forward, then the servos will lower to the ground to balance the robot again while the DC is stopped.
  13. Put it on wheels, but have a DC motor to power a fan.
  14. Attach three wheels that are on unpowered axels, have the robot moved forward by a fan that is attached to the back and is connected to a dc motor.
  15. Place a motor on top of a piece of metal and have it move at as fast a speed as possible so as to heat up the small piece of metal. Have a fan at the back. Place the whole thing over a piece of wax or ice. The thin hot metal will melt the wax and the fan will propel it forward. To change the direction you could place the DC fan on top of a structure that is attached to a servo.

Design sketches

 

 

Parts

– 3 rolls of tape of roughly the same diameter

– Cardboard, cut into hubs for the wheels, a base for the car and for the fan

– Wooden dowels for axles

– Aluminum foil, to hold the axles in place

– Tape, to keep everything together

– Arduino, breadboard and wires

– 1 DC motor

– 1 330 Ohm resistor

– 1 1N4001 diode, 1 PN2222 transistor

Instructions

– Cut out the cardboards so you have sturdy hubs for the wheels of tape and so you have a comfortable base for your tricycle.

– Fit the cardboard into the tape rolls and use the wooden dowels as axles. Tape aluminum foil to the base so that the axles can still rotate within the foil, but still keep the base up.

– Choose the smallest breadboard possible (to conserve weight), and use the diode/transistor/resistor to connect the DC motor that drives the fan on the base.

– Secure the arduino and breadboard on the base with tape.

– Turn on the robot and watch it move (slowly)

Video

video480

In the final version, we ended up separating the breadboard from the rest of the car to make it lighter.

Source

We used Adafruit’s default DC motor code:

 

/*
Adafruit Arduino - Lesson 13. DC Motor
*/


int motorPin = 3;
 
void setup() 
{ 
  pinMode(motorPin, OUTPUT);
  Serial.begin(9600);
  while (! Serial);
  Serial.println("Speed 0 to 255");
} 
 
 
void loop() 
{ 
  if (Serial.available())
  {
    int speed = Serial.parseInt();
    if (speed >= 0 && speed <= 255)
    {
      analogWrite(motorPin, speed);
    }
  }
} 

EyeWrist P2

Group 17-EyeWrist

Evan Strasnick, Joseph Bolling, Jacob Simon, and Xin Yang Yak

Evan organized and conducted the interviews for the assignment, brainstormed ideas, and wrote the descriptions of our three tasks.

Joseph helped conduct interviews, researched bluetooth and RFID as tech possibilities, and worked on the task analysis questions

Xin Yang took detailed notes during interviews, brainstormed ideas, wrote the summaries of the interviews, and helped write the task analysis questions.

Jacob brainstormed ideas, helped take notes during interviews, wrote the description of the idea, and drew the storyboards and diagrams.

Problem and solution overview
The blind and visually disabled face unique challenges when navigating. Without visual information, simply staying oriented to the environment enough to walk in a straight line can be challenging. Many blind people choose to use a cane when walking to avoid obstacles and follow straight features such as walls or sidewalk edges, but this limits their ability to use both hands for other tasks, such as carrying bags and interacting with the environment. When navigating in an unknown environment, the visually impaired are even further limited by the need to hold an audible GPS system with their second hand. Our device will integrate the GPS interface and cane into a single item, freeing an extra hand when navigating. When not actively navigating by GPS, the compass in our device will be used to help the user maintain their orientation and sense of direction. All of this will be achieved using an intuitive touch/haptic interface that will allow the blind to navigate discreetly and effectively.
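As an illustration of the compass-based orientation aid (our own sketch, not part of this assignment; the vibration pin, the drift tolerance, and the stubbed compass reading are all invented), the cane could record the heading the user starts walking on and buzz whenever they drift too far from it:

// Orientation aid: remember the starting heading and buzz on drift.
const float DRIFT_LIMIT_DEG = 20.0;
int vibePin = 6;
float referenceHeading = 0.0;

float readCompassDeg() {
  // Placeholder: substitute a real magnetometer reading in degrees (0-360).
  return 0.0;
}

void setup() {
  pinMode(vibePin, OUTPUT);
  referenceHeading = readCompassDeg();   // lock in the initial walking direction
}

void loop() {
  float drift = readCompassDeg() - referenceHeading;
  if (drift > 180)  drift -= 360;        // wrap to the range [-180, 180]
  if (drift < -180) drift += 360;

  // Buzz while the user is veering off their original line
  digitalWrite(vibePin, fabs(drift) > DRIFT_LIMIT_DEG ? HIGH : LOW);
  delay(200);
}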

Users:
We began our project hoping to improve the quality of life of the visually impaired, and thus our primary user group is the blind. This group provides great opportunities for our design purposes for a number of reasons: first, as discussed in class, “standard” technologies tend to interface by and large in a visual way, and therefore working with the blind encourages exploration of new modalities of interaction. In addition, the blind endure a series of problems of which the majority of the sighted population (ourselves included) are not even aware. This motivated us to attempt to identify and hopefully solve these problems to noticeably improve the lives of the visually impaired.
Despite the difficulty of encountering blind users around the Princeton area, we wanted to make sure that we did not compromise the quality of our contextual inquiry by observing users who were not actually familiar with the difficulties of visual impairment, and thus we contacted the New Jersey Commission for the Blind and Visually Impaired, utilizing their listserv to get in contact with various users. As we waited for IRB approval to actually interview these users, we gathered background information by watching YouTube videos of blind people and the strategies that they have adopted in order to navigate the world safely and autonomously. We taught ourselves about the various pros and cons of the traditional cane, and practiced the actual techniques that the blind themselves learn to navigate. This alone provided dozens of insights without which we could not have hoped to understand the tasks we are addressing. (one such video can be found here: http://www.youtube.com/watch?v=VV9XFzKo1aE)
Finally, after approval was given, we arranged interviews with three very promising users. All were themselves blind – two born without vision and one who lost vision later in life – and each not only faced the challenges of navigation in their own lives, but also had some interest or occupation in assisting other blind people with their autonomy. Their familiarity with technology varied from slight to expert. Because of the difficulties the users faced in traveling and their distances from campus, we were limited to phone interviews.
Our first user, blind since birth, teaches other visually impaired people how to use assistive technologies and basic Microsoft applications. He had a remarkable knowledge of existing technologies in the area of navigation, and was able to point us in many directions. His familiarity with other users’ ability and desire to adopt new technologies was invaluable in guiding our designs.
Our second user, whose decline in vision began in childhood, was the head of a foundation which advocates for the rights of the blind and helps the visually impaired learn to become more autonomous. While her technological knowledge was limited, she was able to teach us a great deal about the various ins and outs of cane travel, and which problems remained to be solved. She was a musician prior to losing her sight, and knew a great deal about dealing with the loss of certain capabilities through inventive solutions and a positive attitude.
Finally, our third user was directly involved with the Commission for the Blind and Visually Impaired, and worked in developing and adapting technologies for blind users. He already had a wealth of ideas regarding areas of need in blind technologies, and with his numerous connections to users with visual impairment, understood which features would be most needed in solving various tasks. In addition to his day job, he takes part in triathlons, refusing to believe that there is any opportunity in life which the blind were unable to enjoy.

The CI Interview

All of our CI interviews took place over the phone as none of our interviewees lived near campus or could travel easily. Prior to our interview, we watched an instructional video on cane travel to better understand the difficulties associated with cane travel, and identified that navigation and tasks requiring the use of both hands were tasks that we can potentially improve. Based on what we learned, we asked our users about general difficulties for the visually impaired, problems associated with cane travel and navigation, and how they would like to interact with their devices.

Our interviewees all indicated that indoor navigation is more difficult than outdoor navigation, as the GPS and iPhone have solved most of their outdoor navigation problems. Being blind also means having one less hand free, since one hand would almost always be holding the cane. Our interviewees also emphasized the importance of being able to hear ambient sound. The fact that all our interviewees are involved with teaching other blind users (through technology or otherwise) may have contributed to these similarities in their responses – one of our interviewees mentioned that people over the age of 50 tend to have more difficulty coping with going blind because of their reluctance to use new technologies.

There were also some differences in the issues the interviewees brought up. Our first user was particularly interested in an iPhone-app-based solution for indoor navigation. He described how the smartphone has drastically lowered the cost of assistive technologies for blind people. Our second interviewee felt that many problems associated with navigation can be overcome with better cane travel and more confidence; she mentioned, for example, that reading sheet music remains a problem. Our third user suggested a device to help blind swimmers swim straight. These differences likely reflect the interviewees’ backgrounds – for example, the second interviewee has a music background, while the third takes part in triathlons.

Task Analysis
1. Who is going to use system?
We are designing our product with the blind and visually impaired as our primary user group. The vast majority of computing interfaces today operate primarily in the visual data stream, making them effectively useless to the blind and visually impaired. We are hoping to improve the quality of life of those who have difficulties navigating their physical environment by developing a simple, unobtrusive, but effective touch-based navigation interface. These users will likely already have skills in navigating using haptic exploration (i.e. working with a cane), but they may not have much experience with the types of technology that we will be presenting to them. They can be of any age or education, and have any level of comfort with technology, but they share in common the fact that they wish to reduce the stress, inconvenience, and stigma involved with navigating their day-to-day space and thereby increase autonomy.

2. What tasks do they now perform?
The blind and visually impaired face numerous difficulties in performing tasks that seeing individuals might find trivial.  These include:
-Maintaining a sense of direction while walking
-Navigating through unfamiliar environments
-Walking while carrying bags or other items

Notably, it is impossible for seeing individuals like us to understand the wide range of tasks that we take for granted. Typically these tasks are solved using:
-A cane, with which the user may or may not already have an established “relationship”
-Audio-based dedicated GPS devices
-Navigation applications for smartphones, developed for use by the blind or used along with some form of access software, such as a screen reader
-Seeing-eye aides (e.g. dogs)
-Caretakers
-Friends/Relatives
-Strangers
The current solutions used in completing the tasks all require the use of more resources, in terms of auditory attention, hands, and help from others, than we believe are necessary.

3. What tasks are desired?
We wish to introduce a more intuitive, less obtrusive system that will allow the blind to navigate quickly and confidently.  Namely, users should be able to
-Traverse an area more safely and more quickly than before
-Maintain a sense of direction at all times while walking
-Navigate unfamiliar territory using only one hand to interact with navigation aides
-Navigate unfamiliar territory while maintaining auditory focus on the environment, and not on a GPS device
-Feel more confident in their ability to get around, allowing them a greater freedom to travel about their world

4. How are the tasks learned?
The blind and visually impaired spend years learning and practicing means of navigating and handling daily tasks without vision. These skills can be developed from birth for the congenitally blind, or can be taught by experts who specialize in providing services and teaching for the visually impaired. In the case of phone applications and handheld GPS devices, a friend or family member may help the user learn to interact with the technology. For this reason, it will be especially important that the users themselves guide the design of our system.

5. Where are the tasks performed?
The tasks are performed quite literally everywhere. There is a distinction between navigating familiar environments, where the user has been before, and navigating new spaces. The latter task, which may be performed on busy streets, in stores, in schools, or while traveling, involves a much greater degree of uncertainty, stress, and potential danger. There is also a distinction between indoor and outdoor navigation: outside, GPS technology can help the blind locate themselves and navigate, while indoor navigation is a much more difficult task. In both environments, the blind are frequently forced to request help from sighted bystanders, which can be embarrassing or inconvenient.

6. What’s the relationship between user & data?
Our current designs do not involve the storage and handling of user data per se, but as we proceed through the various phases of user testing, we believe that the visually impaired have a particular right to privacy, given the potential stress and embarrassment associated with their test results. For this reason, we hope to learn from specialists the proper way to interact with our users and to record data during testing.

7. What other tools does the user have?
The primary tool that makes navigation possible is the cane. This is an important and familiar tool to the user, and has the natural advantages of intuitive use and immediate haptic response. However, users must dedicate a hand to its use and transportation. Another tool that users frequently rely on is the GPS functionality of the smartphone: given the current location and a destination, the user can receive turn-by-turn auditory directions. This has the advantage of allowing the user to navigate to an unfamiliar destination without asking for directions. The disadvantages are that GPS is not always reliable, does not provide directions indoors, and requires yet another hand to hold the smartphone. Users might also rely on aides, whether human or animal, although this further decreases the user’s autonomy.

8. How do users communicate with each other?

Barring any other deficits in hearing, the blind are able to communicate in person through normal speech; however, they are unable to detect the visual cues and nuances that accompany it. The blind have a number of special accessibility modifications to technology (text-to-speech, etc.) that increasingly allow them to use the same communication devices (smartphones, computers) that seeing individuals employ.

9. How often are the tasks performed?
Blind people navigate every day. In day-to-day scenarios, the cane allows users to navigate safely and confidently in familiar surroundings. Blind people navigate to unfamiliar places less often, but doing so involves more uncertainty and is more intimidating, so it is a problem worth solving.

10. What are the time constraints on the tasks?
Navigation takes much longer without the use of vision. The specific time constraints on the task vary with where and why someone is travelling, but are often based on the times associated with events or meetings that the navigator wishes to attend. We hope to make the process of communicating with a navigation system much more efficient, in terms of both time and mental energy required. Ideally, we believe our system can allow a blind user to navigate as quickly as a sighted person equipped with a standard GPS system, by eliminating the delay associated with conveying information solely through the audio channel.

11. What happens when things go wrong?
The hazards of wrongly or unsafely navigating space are not merely inconvenient; they are potentially life-threatening. Safety is the number one consideration that limits the user’s confidence in their navigational skills. Beyond the safety concerns associated with being unable to navigate smoothly and efficiently, visually impaired people who become lost or disoriented often have to rely on the kindness of strangers to point them to their destination. This can be embarrassing and stressful, as the user cedes his or her autonomy to a complete stranger. Even when things “go right” and users manage to get from point A to point B, the psychological stress of staying found and oriented without visual information makes travel unpleasant.

Specific Tasks:
1) Walking home from a shopping trip: This task is moderately difficult with existing means, but will likely be much easier and safer with our proposed technology. We describe a scenario in which the user begins at a mall or grocery store and walks a potentially unfamiliar path that involves staying on sidewalks, crossing a few streets, and avoiding obstacles such as other pedestrians, all while staying on route to their destination by identifying particular landmarks. This must be accomplished while carrying the purchased items and possibly manipulating a GPS device.
Currently, such a task involves using a cane or similar tool to identify obstacles and landmarks. While basic cane travel is an art that has been developed over generations and has many invaluable merits, it also carries several drawbacks. First, the cane must be carried and kept track of throughout the day, leaving the user only one free hand to carry other items or interact with objects and people. If the user is relying on a GPS device to guide them to their destination (as most blind people have now become accustomed to doing with their smartphones), they must use that other hand to manipulate the device; unless they are willing to fumble with everything at once, they will have to stop and set things down simply to operate their GPS. On the other hand, because the cane can only guide the user relative to their mentally tracked position, a user without a GPS device who loses track of their cardinal orientation has few means of reorienting without asking for help.
With our proposed system, the user will no longer need to worry about tracking their cardinal orientation, because the system can immediately point them in the right direction through intuitive haptic feedback. Because the cane itself will integrate with GPS technology via Bluetooth, the user will not have to manipulate their phone in order to query directions or be guided along their route. This frees up a hand for the user to carry their items as needed.
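As a rough sketch of the phone-side computation this guidance would require – the function name and the sample coordinates below are our own illustrative assumptions, not part of the design – the app could derive the bearing it sends to the cane from the current GPS fix and the next waypoint:

    import math

    def bearing_to_waypoint(lat1, lon1, lat2, lon2):
        """Initial compass bearing in degrees (0 = north) from the user's
        current GPS fix (lat1, lon1) to the next waypoint (lat2, lon2)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360.0

    # The phone would recompute this as the GPS fix updates and push the
    # result to the cane over Bluetooth.
    print(round(bearing_to_waypoint(40.3573, -74.6672, 40.3500, -74.6590), 1))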

2) Following a route in a noisy area: This is another fairly difficult task that will become significantly easier with our system. An example is getting from one place to another in a busy city such as Manhattan. Because the user receives navigation directions as audible commands, they have trouble navigating if they cannot hear those commands. Currently, aside from straining to hear, the main workaround is to use headphones. However, most blind users prefer not to use headphones, as doing so diminishes their ability to hear the environmental cues on which they rely heavily to navigate.
Our system solves this problem by relaying directional commands in a tactile manner, allowing the person to clearly receive guidance even in the noisiest of environments. The need for headphones is likewise eliminated, so a GPS message never disrupts the user’s perception of environmental cues. Guidance is continuous and silent, allowing the user to know at all times where they are headed and how to get there.

3) Navigating an unfamiliar indoor space. Despite the preconceptions we might have about the outdoors being hazardous, this task is actually the most difficult of all. Currently, because most GPS technologies do not function well indoors, unfamiliar indoor spaces are generally agreed to be the most intimidating and difficult to navigate.
With current means, blind people typically must ask for help in locating features of an indoor space (the door leading somewhere, the water fountain, etc.), and build a mental map of where everything is relative to themselves and the entrance of the building. The cane can be used to tap along walls (“shorelining”) or to identify basic object features (e.g. locating a doorknob). Unfortunately, if the person loses their orientation even momentarily, their previous sense of location is entirely lost, and help must again be sought. For this reason, any indoor space from a bathroom to a ballroom poses the threat of getting “lost.”
Our system uses a built-in compass to keep the user constantly oriented to the cardinal directions, or toward a destination if they so choose. As a result, a user can build their mental schema relative to absolute directions and never worry about losing track of, e.g., where north is. The user need not draw attention to himself through audible means or carry a separate indoor device such as a compass (or manipulate their smartphone with their only free hand). Most importantly, the user’s autonomy is not limited, because the directional functionality integrated into the cane gives them the ability to navigate these otherwise intimidating spaces on their own.
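To make the compass-to-haptic mapping concrete, here is a minimal sketch, assuming a hypothetical onboard compass reading and a simple left/right/straight vibration cue (the function and parameter names are ours, not part of any finalized design):

    def haptic_cue(compass_heading, target_bearing, tolerance=10.0):
        """Map the difference between the cane's compass heading and the
        target bearing (both in degrees) to a left/right/straight cue."""
        # Wrap the error into (-180, 180]: negative means turn left.
        error = (target_bearing - compass_heading + 180.0) % 360.0 - 180.0
        if abs(error) <= tolerance:
            return "straight"  # e.g. no vibration, or a steady on-course pulse
        return "right" if error > 0 else "left"  # e.g. pulse the matching ridge

    print(haptic_cue(compass_heading=350.0, target_bearing=20.0))  # -> "right"

In an actual prototype, the same error value could also scale the vibration intensity, so that larger deviations produce stronger pulses.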

Interface Design

Description: It was apparent from our interviews that our device should not impede users’ ability to receive tactile and auditory feedback from their environment. By augmenting the cane with haptic feedback and directional intelligence, we hope to create a dramatically improved navigation interface while preserving those aspects of cane travel that have become customary for the blind. Specifically, the “Bluecane” will be able to intelligently identify cardinal directions and orient the user through vibration feedback. Bluetooth connectivity will enable the cane to communicate with the user’s existing Android or iOS device and receive information about the user’s destination. A series of multipurpose braille-like ridges could communicate any contextually relevant information, including simple navigational directions and other path-finding help. The greatest advantage of an improved cane is that, unlike an audible navigation system, it doesn’t disrupt or distract the user, and it leaves the user a free hand while walking.
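As one possible illustration of the Bluetooth link – the byte layout and field names below are purely our own assumptions, not a finalized protocol – the phone app could pack its guidance into a small payload that the cane decodes:

    import struct

    # Hypothetical 5-byte guidance payload the phone might write to the cane:
    # bearing to next waypoint (uint16, 0-359 degrees), distance to the next
    # turn in meters (uint16), and a turn code (uint8: 0 straight, 1 left, 2 right).
    PAYLOAD_FORMAT = "<HHB"

    def encode_guidance(bearing_deg, distance_m, turn_code):
        return struct.pack(PAYLOAD_FORMAT, int(bearing_deg) % 360, int(distance_m), turn_code)

    def decode_guidance(payload):
        bearing, distance, turn = struct.unpack(PAYLOAD_FORMAT, payload)
        return {"bearing_deg": bearing, "distance_m": distance, "turn_code": turn}

    msg = encode_guidance(bearing_deg=87, distance_m=25, turn_code=2)
    print(decode_guidance(msg))  # {'bearing_deg': 87, 'distance_m': 25, 'turn_code': 2}

Keeping the payload this small would let the cane act on each update immediately while leaving route computation entirely on the phone.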

Storyboard for Task 1

Storyboard for Task 2

Storyboard for Task 3

Design sketch

Design sketch