Final Blog Post – Group 11

Group 11: Don’t Worry About It







A one-sentence description of your project:

The NavBelt will make navigating unfamiliar places safer and more convenient.


Hyper-links to all previous project assignment blog posts:

P1 –

P2 –

P3 –

P4 –

P5 –

(also with P5) –

P6 –


Video Demo:


Bullet list of changes since P6:

  • We changed the signaling protocol between the phone and the Arduino

    • We used to signal 8 discrete desired headings, at 8 specific frequencies, but have changed that to a continuous mapping between heading and frequency.

    • This allows the app to specify directions much more accurately.

  • We added a compass calibration routine, activated via a button in the phone UI.

    • The lack of calibration turned out to be what had been limiting the accuracy of the directions we produced earlier.


How and why our goals/design evolved over the semester:

Our overall goal — to allow the user to navigate without constantly looking at a phone screen — has remained the same. However, we realized in the course of our testing that it is better for us to aim to complement the use of a visual map on the phone screen than to completely replace it. This is because some users still would like the map for information about their absolute position and the remaining distance to their destination, which is not easily learned through a tactile interface. In addition, we changed our final task from “reassure user that they’re on the right path” to “know when you’ve arrived”, because this was more concrete and far more measurable.

We have also made some concessions to practicality over the course of the semester. For instance, while we originally intended to use a Bluetooth connection between the Arduino and the phone, using an audio cable proved to be feasible and cheaper, and after realizing how much work installing each vibrating motor was going to involve, we ended up using only four buzzers instead of eight.

    We also flirted with a few changes that we did not stick with. For a while we considered building a web app rather than an Android app, an idea we discarded because of the difficulty of playing audio from a web page on the fly with low enough latency. We also considered building an induction coil for bidirectional communication between the Arduino and phone, but this turned out to be both impractical and unnecessary.

Critical evaluation of our project:

    With further iteration, the NavBelt can definitely be a useful real-world system. There is a clear use case for this device, as it provides directions to a user while being less demanding of the user’s attention, and our testers appreciated that aspect of the system. Users testing the last iteration of our prototype were able to follow its directions with just a few seconds of explanation and very minimal intervention from us, indicating that we have tackled most of the major usability hurdles at this point. One of the as-yet untested aspects of the system is whether users will be able to put on and take off the belt themselves, since we always helped them with this in testing, but we expect that with a slightly more robust version of the belt users will be able to quickly learn to do this right.

    In addition, we were able to build this very cheaply, even though we were building a first prototype and therefore were prioritizing speed of prototyping over budget. The main expenses are the compass and the Arduino; if we were to produce this commercially we could use a simpler (and cheaper) chip than the Arduino. With economies of scale, we would probably be able to produce this for under $50, which is not bad for cool new tech.

    We have learned several things about the application space as a result of this project. One thing we learned is that obtaining accurate geolocation data is a challenge: the resolution of the coordinates obtained through tools like Google Maps is only good enough for navigation in a vehicle, not fine-grained enough to give accurate turning directions to a user on foot. We had to manually measure the coordinates of our waypoints using a phone GPS to get around this.
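Since we measured waypoint coordinates manually, deciding when the wearer has reached one comes down to a distance check. A sketch using the standard haversine formula; the 10 m arrival threshold is a placeholder, not a value we tuned in testing:

```cpp
#include <cmath>

// Great-circle distance between two lat/long points (haversine formula).
double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;  // mean Earth radius, meters
    const double rad = 3.14159265358979323846 / 180.0;
    double dLat = (lat2 - lat1) * rad;
    double dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

// "Know when you've arrived": within 10 m (assumed threshold) of the waypoint.
bool arrivedAtWaypoint(double lat, double lon, double wlat, double wlon) {
    return haversineMeters(lat, lon, wlat, wlon) < 10.0;
}
```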

From the user testing of our implementations and our post-testing evaluations, we learned that users tended to rely completely on the belt's readings. Perhaps because of the haptic nature of the belt, some users treated its directions as though there were no uncertainty in the readings at all. This led to somewhat bizarre behavior, such as users wondering whether they should walk into or through a hedge rather than along the adjacent path, simply because the belt signaled them to turn slightly early – even though, had they been verbally told to turn right, they would have applied a little more common sense and stayed on the path. We believe this is surmountable with both more accurate direction signaling and some habituation on the users' part as they learn to interpret the belt's signals.


Specific proposals for moving forward:

    If we had more time, we would consider adding more buzzers to the NavBelt, to provide more fine-grained turning directions to the wearer. Of course, further testing with users would be required to determine whether this design improves usability. By repeating our tests and data analysis from P6 – analyzing completion times, time spent paused, and the number of navigation errors made – we are confident we could easily gauge whether more buzzers on the NavBelt improve usability.

We also have the leftover challenge of implementing a polished map user interface, one simple and intuitive enough for anyone to learn and use quickly. Ideally, a user would input a target destination in this interface, which would then generate a route to the target. Currently, the waypoints are hardcoded; a more complete design would compute a path to any location in real time. We plan to use the Bing Maps API to implement turn-by-turn navigation and to use those coordinates as waypoints for the NavBelt. With further user testing we could measure how long it takes users to input a destination, start the belt, and begin the trek.
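Once a routing API returns waypoint coordinates, the heading sent to the belt for each leg is just the initial bearing from the current position to the next waypoint. A sketch of that calculation (the function name is ours, not from any API):

```cpp
#include <cmath>

// Initial bearing (forward azimuth) from one lat/long point to the next,
// in degrees clockwise from north. This is the absolute heading the phone
// would send to the belt for each leg of a route.
double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
    const double rad = 3.14159265358979323846 / 180.0;
    double dLon = (lon2 - lon1) * rad;
    double y = std::sin(dLon) * std::cos(lat2 * rad);
    double x = std::cos(lat1 * rad) * std::sin(lat2 * rad) -
               std::sin(lat1 * rad) * std::cos(lat2 * rad) * std::cos(dLon);
    double b = std::atan2(y, x) / rad;   // -180..180
    return std::fmod(b + 360.0, 360.0);  // normalize to 0..360
}
```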


Link to .zip file – source code + readme file:

Third-party code used:

  • FFT code on the Arduino – to receive and translate tones from the Android
    ~ From the provided library


Links to PDF versions of all demo-session printed material:

PDF of poster:


P5: Expressive NavBelt – Working Prototype

Group 11: Don’t Worry About It

Krithin, Amy, Daniel, Thomas, Jonathan

1-Sentence Project Summary:

The NavBelt will make navigating unfamiliar places safer and more convenient.

Supported Tasks in this Working Prototype

Task 1. Hard: Choose the destination and start the navigation system. It should be obvious to the user when the belt “knows” where they want to go and they can start walking.

Task 2. Medium: Figure out when to turn. This information should be communicated accurately, unambiguously, and in a timely fashion (within a second or two).

Task 3. Easy: Know when you’ve reached the destination and should stop walking.

Rationale for Changing Tasks

We replaced Task 3 (reassure the user that they are on the right path without looking at the phone). Our lo-fi testers trusted the belt but kept referring to the map to check their absolute position. This is unnecessary for our system, but users like to know it, and we do not want to force them to change their habits for our system. The task is replaced with “know when you’ve reached the destination”, a more concrete task that will be completed often.

We’ve kept our second task unchanged, because it is clearly an integral part of finding your way around.

We thought Task 1 (choosing the destination) would be the easiest, but our user tests showed otherwise. All three test users were confused by the phone interface and how to choose a destination. Users had no idea whether the phone had accepted the destination or whether the belt had started giving directions. As a result, Task 1 is now our hardest task.

Revisions to Interface Design

Originally, we intended to use 8 buzzers, spaced evenly around the user’s waist. After testing in P4, however, where we simulated having just three buzzers, we decided that implementing four buzzers would be enough for this prototype. Four buzzers are sufficient for signaling forward motion, right and left turns, and reverse motion (for when the user has gone too far forward); this is enough to cover the majority of our use cases, so we decided that adding the four additional buzzers for intermediate turns was not worth the additional complexity it would bring to the prototype at this stage.
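With four buzzers, the belt only needs to quantize the relative heading into four sectors. A sketch of that selection logic; the 90-degree sectors centered on front/right/back/left are our interpretation of “forward, right, reverse, left”, not necessarily the exact boundaries in the prototype:

```cpp
#include <cmath>

// Map a relative heading (degrees, 0 = straight ahead, clockwise) to one
// of the four buzzers: 0 = front, 1 = right, 2 = back, 3 = left.
int buzzerForHeading(double relDeg) {
    // Normalize any input (including negatives) to 0..360.
    double h = std::fmod(std::fmod(relDeg, 360.0) + 360.0, 360.0);
    if (h < 45.0 || h >= 315.0) return 0;  // forward
    if (h < 135.0) return 1;               // right turn
    if (h < 225.0) return 2;               // reverse (gone too far)
    return 3;                              // left turn
}
```

Eight buzzers would simply halve the sector width to 45 degrees, which is why adding them later is a small change in software even though it is a large change in hardware.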

A second change we decided to make was to have the phone interface have an explicit switch to turn the belt navigation system on and off, in the form of a toggle button constantly displayed on the same screen as the route map. By default, the belt will start buzzing – and this switch on the display will indicate that fact – as soon as the user selects a destination and the route to it is computed. This at once overcomes an explicit problem that users told us about during P4 testing, where they were not sure whether the belt navigation system was supposed to have started buzzing, and an implicit problem where users might occasionally want to temporarily turn the buzzing off even while en route to a destination.


Updated Storyboards for 3 Tasks


Sketches for Still-Unimplemented Portions of the System, and Changes to Design

There are two key elements that are as yet unimplemented. The first of these is the map interface on the phone. We envision an interface on the phone where the user will select a destination, observe the computed route to that destination, and if necessary toggle the vibration on the NavBelt. We chose not to implement that at this stage, mostly because writing a phone UI is more familiar to us as a team of CS majors than working with hardware, so we wanted to get the harder problem of the phone–Arduino–buzzer signaling out of the way first. We do, however, have a detailed mockup of how this UI will look; see the pictures of this UI in the following gallery:

Another key functional element not yet implemented is the correction for user orientation. We envision doing this with an electronic compass module for the Arduino: the Arduino would compute the difference between the absolute heading it receives from the phone and the direction the user is facing (from the compass reading) to determine the direction the user needs to move in and the appropriate buzzer to activate. An alternative would be to have the phone itself compute the direction the user needs to move in based on its internal compass and send the appropriate signal to the Arduino. We chose not to implement this yet because we have these two candidate approaches, and the latter, though possibly less accurate, should be relatively easy for us to do.
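Either way, the orientation correction amounts to a signed, wrap-around difference between the desired heading and the compass reading. A minimal sketch (function name is ours; either the Arduino or the phone could run this):

```cpp
#include <cmath>

// Signed difference between the desired absolute heading (from the route)
// and the user's facing direction (from the compass), wrapped to
// (-180, 180] so that a positive result means "turn right".
double headingError(double desiredDeg, double facingDeg) {
    double d = std::fmod(desiredDeg - facingDeg, 360.0);
    if (d > 180.0) d -= 360.0;
    if (d <= -180.0) d += 360.0;
    return d;
}
```

The wrap-around matters: without it, a desired heading of 10° and a facing of 350° would look like a 340° left turn instead of a 20° right turn.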

Overview and Discussion For New Prototype

We have thus far implemented a belt with four buzzers along it, each of which is connected to an output pin on an Arduino controller. These four buzzers all share a common ground, which is sewn to the inside of the belt in a zig-zag shape in order to allow it to expand when the belt stretches. This elasticity allows the belt to accommodate different waist sizes without repositioning the buzzers, as well as holding the buzzers tightly against the user’s body so that they can be felt easily.

As mentioned above, we decided to drop the number of buzzers from eight to four, as it reduces complexity. For the most part this does not affect the users, as receiving directions from four buzzers is practically as easy as receiving directions from eight buzzers.

The Arduino is connected to an Android phone through an audio cable and, on receiving signals from the phone, causes the appropriate buzzers to vibrate. We also implemented a phone interface where we can press buttons to generate audio signals of differing frequencies that match what the Arduino expects as input. We can thus use the phone interface to control individual buzzers on the belt, and have tested that this works correctly (see video). This is a much quicker and more efficient method of activating the buzzers than our previous method of attaching/detaching separate wires to the battery source.
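On the Arduino side, decoding reduces to matching the dominant FFT frequency against the four expected tones. The tone frequencies and tolerance below are placeholders for illustration; the prototype's actual values may differ:

```cpp
#include <cmath>

// Decode a tone frequency (as measured by the Arduino's FFT) into a
// buzzer index 0-3. The four tone frequencies are assumed values, and
// the 100 Hz tolerance absorbs FFT bin quantization and cable noise.
// Returns -1 when no recognizable tone is present.
int decodeBuzzer(double freqHz) {
    const double tones[4] = {1000.0, 2000.0, 3000.0, 4000.0};  // assumed
    const double tolerance = 100.0;                            // Hz
    for (int i = 0; i < 4; ++i) {
        if (std::fabs(freqHz - tones[i]) < tolerance) return i;
    }
    return -1;
}
```

Spacing the tones well apart relative to the tolerance keeps the match unambiguous even when the FFT's frequency resolution is coarse.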

Video of Belt Being Worn by User

The only Wizard-of-Oz technique needed so far is to have a user press buttons corresponding to the directions of motion on a phone interface, causing the appropriate buzzer to start vibrating.

We used example code by Steve Pomery, posted on StackOverflow, to generate arbitrary tones on Android.


L3: The Expressive Rolly-bot


Thomas, Krithin, Amy, Daniel, Jonathan

Group Number:


We originally built an automaton that used two wheels powered by two DC motors. The wheels were thick, sturdy paper plates large enough to fit the Arduino and batteries snugly in between, like a sandwich. Unfortunately, after many attempts and tweaks we were unable to make it work successfully, and scrapped it for a smaller, simpler design using two halves of a yo-yo. The Arduino and batteries were removed from the robot itself and were attached via alligator clips.
We wanted it to move really fast and thought small paper plates would be an effective method. Alas, they weren’t; the yo-yo wheels were better. It rolled around quite nicely, and we like that it is basically a Segway without a steering pole: it goes on its own with just two wheels. Attaching the plates directly to the DC motors didn’t work, so we tried attaching bottle caps to the motors, and then the plates to the bottle caps. That didn’t work either, so we switched to a yo-yo.
Brainstormed Ideas:
  1. Spinning Screw
  2. Mechanical Bird
  3. Grappling Velcro Hook – uses the Velcro arm to pull itself
  4. Quad-copter
  5. Wobbly-walker – Two-armed “crawler” that has two parallel arms offset by 180 degrees rotating together.
  6. Balloon as Bellows – a motor controls a valve to release air from a balloon to propel it
  7. Submarine
  8. Unicycle
  9. Growing Seed – a seed planted beneath a robot, the motor waters the seed, as it grows it will move the robot skyward
  10. Ripstick
  11. AT-AT – a four-legged spider-walker
  12. Car – either a two-wheel or four-wheel variation
Photos of Design Sketches AND Video of Final System:
Kaltura media memory limit reached, so here’s a link to all of our pictures and videos, captions are provided there as well.
Parts List:
  • Arduino
  • AA Batteries (4)
  • DC Motors (2)
  • Yo-yo (1)
  • Stackable Pin Header (3)
  • Sticky Putty
  • Electrical tape
Instructions for Recreation:
  1. Tape two DC motors together, so the spindles are facing away from each other.
  2. Place the three pin headers around the two motors, like a splint, to keep them aligned. Tape it all together with electrical tape.
  3. Wire the motors to the Arduino (motors are wired in parallel to the 5V and Ground pins) and wire the Arduino to the batteries.
  4. Stick putty inside the yo-yo halves.
  5. Attach yo-yo halves to DC motor spindles.
  6. Plug in batteries and watch it go!

Source Code:

No code was required; we used the Arduino only to connect the motors to the battery source.



Bereket Abraham, Horia Radoi, Thomas Truongchau, David Lackey, Jonathan Neilan

A3 – SCORE analysis


i. Most severe problems with the site and which Nielsen’s heuristic it violates:

1 – problem: H9 – Informs the user of an error, but offers no solution and provides no information on how to fix it.

solution: Explain in more detail (like a question mark option) or inform the student of who to go to for more information/help

2 – problem: H2 – Difficulty in differentiating between pages: “Student Center” and “Main Menu” are two different pages, though for a student, the “Student Center” is treated like the main page. Also, the titles of the drop-down menus are non-intuitive.

solution: Have clearer links, and fewer options on the “main” page (offer question mark buttons that explain what the page is for)

3 – problem: H7 and H8 – Drop down menus have similarly named options that perform different tasks, but they all lead to the same page/link, and students have to continue to hunt and search. Also, there is no separation or marking between the most commonly used features and the rarely used features; all options are thrown together, and in small font to fit on one page.

Solution: Layout, place common options at top of page, and rare ones at bottom or not on main page at all, on another page/menu

4 – problem: H5 – The redundant “Enroll: Swap” option is unnecessary, causes an error, and offers no error prevention for schedules with conflicts or fewer than 3 classes.

Solution: A better course enrollment design in general; honestly, just scrap the whole system and rebuild it from the ground up.


ii. Problems made easier to find with Nielsen’s heuristics:

– We already knew what was annoying, but did not know how to classify or quantify it.

H2: The “mismatch” between language from the system, and more intuitive language from the real world.

H1: We didn’t have any problems with this, but we would not have thought of it as an interesting issue without the heuristics list.

H8: We had a problem with the aesthetics, but hadn’t thought of the “minimalist” design concept as a way to improve visibility for a user.

H6: Recognition vs. recall is a good way to differentiate and classify problems we recognized with the system but would otherwise not have known how to effectively list.


iii. Usability problems not mentioned in Nielsen’s heuristics:

– Navigation of the website. More specifically, we consider the order and number of links one has to go through to reach a desired page or option important enough to be its own heuristic.


iv. Useful class discussion points and/or potential final exam questions:

– What heuristics matter more (or less) based on the different interactive systems being used (e.g. a college student management site like SCORE versus a video game)?

– To what extent does the severity of certain heuristics matter based on differing interactive systems?

– How do people react when a system is changed, particularly if they have adjusted to and gotten used to the “bad” form of the interactive system (i.e. if we changed SCORE right now, how would seniors react)?


Individual Posts:
Thomas –
David –
Jonathan –
Horia –
Bereket –




A2 Jonathan Neilan


Person 1 – Early student – chats with friends, goes on facebook or gmail, etc.

Person 2 – Late student – does not/cannot do much other than find a seat and wonder what happened the first few minutes (came from Frick chem lab to COS building)

Person 3 – TA – chats with fellow TAs and reviews outline of today’s lecture or how many people e-mailed ahead for visits during office hours.


1-      Game review of last lecture or any previous lecture material

2-      Trivia quiz of next lecture or any future lecture material

3-      An app that updates when assignments announced in that class are posted

4-      An app that updates when lecture slides for THAT LECTURE DAY are posted

5-      An app to show dining hall food, how full it is, and where/when friends plan to eat

6-      Top (world) news in 10 minutes

7-      Social networking: an app where you enter info about yourself and your seat, and it suggests that you and others with something in common sit next to each other

8-      A program that prompts you to message someone you used to be close to (or your mother)

9-      A quick 10 minute analysis of how well you are faring on accomplishing your daily goals so far. Tracks what has been accomplished, what is there still to do, etc.

10-  Something that reminds you of just one of your goals; in the 10 minutes you have, you brainstorm how you will incorporate it into your day.

11-  Something that tracks what you do on your laptop every minute over a period of time, then during that 10 minutes, you can review how you ACTUALLY spend your time

12-  An interactive program to cheer you up, e.g. make you acknowledge good weather, or show you pictures of cute animals, etc.

13-  For irregular journal keepers: a mini-diary that provides prompts, something totally writeable in 10 minutes or less.

14-  Creative writing, but based on prompts that encourage positive thinking (to relieve stress)

15-  An app that provides a quick overview of your google calendar events, meetings, appointments, etc. remaining for that day.


2 Favorite Ideas:

1-      #8 – A program that prompts you, perhaps weekly, to write to family, friends, loved ones, etc. Why? Because I’m so bad at remembering to do that, which disappoints my mother, and that 5–10 minute window is perfect for writing a quick message to keep in touch with those you care about!

2-      # 7; why? Because it’s a new and funny way to potentially find new friends in a class you may not know anyone in, especially if it is out of your department. Potential study buddies!

Weekly Letter Reminder

Class Potential Friend Finder


From User Testing of Weekly Letter Reminder:

From early student:

– Works great, but 8-10 minutes doesn’t seem like enough sometimes

– Most people he keeps in touch with regularly on facebook anyways, but the reminder is nice.

From late student:

– no love…but she likes the idea


From TA:

– Enjoys it for keeping in touch with colleagues.

– Downside is there’s no point if you do it regularly already via e-mail, etc.