P6 – Team TFCS

Basic Information (a,b,c)

Group Number: 4
Group Name: Team TFCS
Group Members: Collin, Dale, Farhan, Raymond
Summary: We are making a hardware platform which receives and tracks data from sensors that users attach to objects around them, and sends them notifications, e.g. to help build and reinforce habits.

Introduction (d)

We are evaluating the user interface of an iOS application for setting up and managing sensors. (This is the most design-dependent part of the project: it is where we construct the affordances and mental models users need to interact with our sensors.) Our app uses Bluetooth Low Energy (BLE) to connect to physical sensors which users attach to everyday objects. Each sensor is associated with a task when it is set up within the app. The app then logs when the task is performed, sends reminders when it is not, and displays the user’s history of completing the task, and by extension how successful they have been at maintaining a habit or behavior. Our P6 experiment aims to evaluate the intuitiveness and accessibility of the sensor setup interface and to gain a better understanding of which tasks users would track using our app. This builds upon our P4 evaluation, which revealed several problems with the reminder model we presented to users, motivating us to rethink and simplify how sensors are set up.

Implementation and Improvements (e)

Link to P5 blog post

We made the following changes to the prototype since P5:

– A view to add new tasks was added. This view allows users to choose a sensor tag, specify the task they are trying to monitor, and set the frequency with which they want to perform it.
– A webview for viewing statistics on tasks was added. It shows logs of the tasks the user completed or missed as graphs and charts, helping the user track their progress in building the habit.
– Our backend server uses Twilio’s text-messaging service instead of APNS to send reminder notifications to the user. This simplification lets us get by without an Apple developer account (see the sketch after this list).
– In addition to sending alerts when a task is ignored, our backend tracks how frequently the user does complete tasks. We hope to use this data in the future for a proof-of-concept gamification feature.
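For reference, here is a minimal sketch of the kind of call our backend makes to send a reminder text through Twilio’s REST API. Our server is not written in C++; this is purely an illustration (using libcurl), and the function name, credentials, and phone numbers are placeholders:

```cpp
// Send one reminder SMS via Twilio's REST API (illustrative sketch).
// Twilio expects a form-encoded POST to /Messages.json, authenticated
// with the account SID and auth token via HTTP Basic auth.
#include <curl/curl.h>
#include <string>

bool sendReminderSms(const std::string& to, const std::string& body) {
  const std::string sid   = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";  // placeholder
  const std::string token = "auth_token_here";                     // placeholder
  const std::string url =
      "https://api.twilio.com/2010-04-01/Accounts/" + sid + "/Messages.json";

  CURL* curl = curl_easy_init();
  if (!curl) return false;

  // URL-encode the user-supplied fields before building the POST body.
  char* encTo   = curl_easy_escape(curl, to.c_str(), 0);
  char* encBody = curl_easy_escape(curl, body.c_str(), 0);
  std::string fields = std::string("From=%2B15555550100&To=") + encTo +
                       "&Body=" + encBody;  // "From" is a placeholder number
  curl_free(encTo);
  curl_free(encBody);

  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_USERPWD, (sid + ":" + token).c_str());
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, fields.c_str());

  CURLcode res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  return res == CURLE_OK;
}
```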

Method (f)

Participants: Our participants were students who came to the Terrace library on the night of our test. A large number of students of different demographics were present at the time, and choosing a single location enabled us to maintain a more consistent testing environment. We also found that the library was an excellent setting for finding target users: busy students who must maintain a consistent schedule amidst chaotic demands (see P4). To conduct the tests, we matched each task with a participant who expressed interest in developing a routine around that task. One participant wanted to make a habit of going to the gym, another wanted to do a better job of reading books regularly, and a final participant was selected randomly, since we did not find an appropriate way to ask students about their medication usage before inviting them to participate in our study. Two were club members and one was independent.

Apparatus: We conducted all tests on the large table in the Terrace library. Our equipment included:

– An iPhone with the app installed
– Three sensor tags
– Three custom-made enclosures for the sensor tags, one for connecting the tag to a book, another for clipping to a bag, and a third similar one for sticking onto a pill box

Custom-made enclosure for attaching sensor tag to a textbook.

Custom-made sensor tag enclosure to attach to bags.

The sensor tag is protected within the felt pouch.

– A textbook for our textbook tracking task
– A pencil case representing a pill box
– A computer which volunteers used to fill out our survey

Tasks: Our easy task tracks trips to the gym using the accelerometer built into our sensor tags. This task is relatively easy because the tracker only needs to be left in a gym bag, and it uses accelerometer motion to trigger an event. For our medium task, we track a user’s medication usage by tagging a pill box. This task is of medium difficulty because only certain changes in measured data actually correspond to task-related events; we have to filter our data to find out when events have really occurred, introducing ambiguity in the user experience. Finally, for our hard task, we track the user’s reading habits by tagging textbooks. This is the hardest task for us to implement because of the complexity of connecting a sensor tag to an arbitrary book in a way that is intuitive to the user. We also have to filter our data to trigger events only in the appropriate cases, just as in our medium task.

These three tasks are worth testing because they require different enclosures and setup procedures for pairing a sensor tag to the object of interest. Since each task’s setup process involves the same UI, we expected to learn something about the comparative difficulty of each physical sensor enclosure and physical interface, in addition to how intuitive users found the setup process and reminder model. (These tasks remain unchanged from P5.)

Procedure: Our testing process followed the script provided in the appendix. First, we explain the experiment to the user and obtain consent for their participation and for video recording. Then we ask them to sign our consent form and take a brief demographic survey. Next, one of the team members explains what the app tries to accomplish and introduces the user to the physical sensor tags. The user then goes through the three tasks outlined above. For each task, we explain the objective, for example: “the next task is to try and track how often you go to the gym by tracking movement of your gym bag”. The user is then given the sensor tags, the appropriate item (gym bag, book), and the iPhone with the app loaded. We observe and record the user’s interaction with the system as they attach the sensor tag to the object, pair it with the app, and add a new task. Because the users were not told exactly how to use the app or set up the tags, we noted how their actions differed from what we expected as they went through the setup stages. To simulate receiving a notification, we allowed the user to leave, sent them a notification within a 20-minute window, and used that as a reference point to get feedback about the notification system. Finally, we gave users the post-evaluation survey linked in the appendix.

Test Measures (g)

Our dependent variables originated from two parts of our test – timing of user interaction during the evaluation, and user-reported difficulty in the post-evaluation survey.

– Time taken by users to complete setup of sensor tag through iPhone app (but not to physically attach the sensor tags)
– Time taken by users to attach the sensor tag (NB: users attached/set up tags in different orders for counterbalancing purposes)
– Setup accuracy. We broke down the process of setting up tracking for a task into the following 8 stages (two of which apply only to the first task):
Step 1: Begin Pairing Screen
Step 2: Enable Bluetooth Screen
Step 3: Check Bluetooth is Enabled (only on first task)
Step 4: Navigate Back to Taskly (only on first task)
Step 5: Pair sensor
Step 6: Select sensor (sensor succeeded in pairing)
Step 7: Complete new task screen
Step 8: Actually attach sensor

We then gave each user an “accuracy” rating: the ratio of correct actions taken across all steps to the total number of actions taken. This measurement proved useful in telling us which setup stages users found most confusing (a worked example follows this list of measures).

– Satisfaction with physical form and attachment of the sensor tags (were they too intrusive?).

– Satisfaction with the notification system. Specifically, we wanted to measure how users felt about the intrusiveness, or lack of noticeability, of the notifications they received when they forgot to do a task.

– The difficulty in setting up different tasks, as surveyed. It was important to test if the process of setting up tasks and attaching physical sensors was too complicated, since that was an issue we ran into during P4.
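As a worked example of the accuracy metric above (the helper function and the numbers are hypothetical, not drawn from our data):

```cpp
// Setup "accuracy": correct actions divided by total actions taken
// across all setup stages. We tallied these counts by hand from our
// observation notes; this helper just restates the formula.
double setupAccuracy(int correctActions, int totalActions) {
  return static_cast<double>(correctActions) / totalActions;
}

// Example: a user who takes 10 actions to get through the stages,
// 8 of them correct, scores setupAccuracy(8, 10) == 0.8.
```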

Video: http://youtu.be/aJ6ZIN6jzLc

Results and Discussion (h)

– We found the time it took for users to set up each sensor to be within our desired ranges. Users took an average of 51.33 seconds to go through the iPhone application and set up tracking of a single task, with a standard deviation of 14.72 seconds. We also observed that the average setup time decreased with each task the user set up: users took 64 seconds on average to set up a task the first time, compared to 39.3 seconds on the third time through. Regardless of which task the user started their test with, the average time taken to set up the tasks decreased from the first task to the third. This reinforces our hypothesis from P5 that our application interface should be agnostic to the different tasks or actions being created. This was a change from P4, which had interface elements specific to each task; in P5 we instead tried to create a unified interface for creating any task.

[Chart: time to complete app setup]

– To physically attach sensor tags, users took an average of 35.33 seconds (with a standard deviation of 12.91 seconds) across all tasks. We found, however, that while users very quickly set up the sensor tag to track reading a book, they often did so incorrectly. We gave users an elastic band enclosure designed to keep the sensor tag attached to the cover of the book, but users were confused and slipped the band around the entire book. Most users said they would have preferred attaching the book sensor with a double-sided sticker, as they had with the pill box sensor. This was confirmed in the survey, where all users indicated the book sensor tag was “Slightly Intrusive.”

[Chart of survey responses: Q4]

– The notion of using different sensors on each tag to track individual tasks was confusing to most people; this was indicated in the “What did you find hardest to accomplish?” section of the survey. Users were unable to understand what each individual option meant, since we only provided the name of the sensor (accelerometer, gyroscope, or magnetometer) and asked the user to choose from them. Based on the results, we will simplify the motion-tracking options, combining the accelerometer and gyroscope into a rough class of motion-triggered sensing and offering the user a choice between motion-triggered and magnet-triggered sensors, along with roughly one sentence explaining the difference (see the sketch below). A more advanced approach would be to let the user train a machine learning algorithm by performing the action to be tracked several times; we could then learn which sensor values correspond to each task. We could also allow the user to specify types of tasks, to make training more effective, e.g. moving bags, book openings, etc. However, this introduces significant complexity for relatively little benefit to our project.
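To make the planned simplification concrete, here is a rough sketch of the two trigger classes we have in mind. The thresholds, units, and names are invented for illustration; the real filtering would be tuned against recorded sensor data:

```cpp
// Two user-facing trigger classes instead of raw sensor names.
// Thresholds are illustrative placeholders, not tuned values.
#include <cmath>

enum class TriggerType { Motion, Magnet };

// Accelerometer readings in g's; magnetometer reading in microteslas.
bool isTriggered(TriggerType type, float ax, float ay, float az,
                 float magneticField) {
  switch (type) {
    case TriggerType::Motion: {
      // Fire when total acceleration deviates noticeably from 1 g
      // (i.e. the tagged object is being moved, not sitting still).
      float magnitude = std::sqrt(ax * ax + ay * ay + az * az);
      return std::fabs(magnitude - 1.0f) > 0.25f;
    }
    case TriggerType::Magnet:
      // Fire when the field strength jumps past a baseline, as when a
      // magnet on a book cover moves toward or away from the tag.
      return magneticField > 100.0f;
  }
  return false;
}
```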

– Users found the physical tags to be generally unintrusive. One of the things we were trying to test was how comfortable people were with attaching these not-so-small tags to everyday objects, and the general consensus reinforces that users are willing to accommodate the tags despite their size and clunkiness. This might also reflect the preferences of the users we tested: most of them currently use text files, the backs of their hands, and email as task reminder systems. Those are the lowest-friction and most primitive of the reminder systems we asked about, which suggests these users would be interested in a low-friction system in which objects themselves remind the user that they have not been used.

[Charts of survey responses: Q2 and Q5]

[Chart of survey responses: Q3]

– Almost everyone was confused by the physical setup process in which the sensor tag was attached to the book. We intended for users to wrap the tag enclosure’s band around the outer cover of the book. People responded that they found setting up each tag on a book very easy, but performed the setup incorrectly, putting the band around the whole book so that they would have to remove it in order to read. Two did so vertically and one horizontally; this suggests that users did not go through our thought process of determining how the sensor tag would be used while reading the book. This could indicate a disparity between their understanding of how the system works and what they were using it for, but from observations during the tests, we found it more likely that users were simply not paying close attention to the task. The result could be that users would fix the sensor attachment upon actually reading their books, or that they would remove the enclosure when reading and forget to replace it. After the test, users suggested that we let them stick the tag onto the book instead of using the band. Based on this recommendation, and the fact that users set up the other two tasks exactly as expected, we will focus on lighter and simpler stick-on/clip-on enclosures for future applications. (This was one of the dependent variables we set out to measure.)

– No users indicated the setup process was “slightly hard” or “extremely hard”. However, users also only indicated that their likelihood of using our system was “somewhat likely” or “not likely”. We would have benefited from offering a broader spectrum of choices on both of these questions. Still, the results we gathered suggest that we are close to providing a significant benefit to users, and that we no longer need to make significant changes to the premise of our system or our reminder model; we are now at a point where we should focus on increasing the overall ease of use of the application and sensors, and on making the utility of individual use cases more apparent.

– Notification system

Another aspect of the product that we set out to test was the notification system. We asked users how intrusive they felt the reminders were, or whether the reminders were not noticeable enough.

[Chart of survey responses: Q6]

These survey responses indicated that people were generally satisfied with the information they received in reminders. One good suggestion was to include the time since the user’s last activity in the notification. Finally, the user who rated receiving reminders as “annoying” suggested that we use push notifications for reminders, since they did not expect to receive reminders as text messages. This is a change we plan on making for the final product: the notifications will carry the same information, but be sent to the user’s iPhone as push notifications (using Apple’s APNS) instead of text messages.

– Further tasks to incorporate

Finally, we tried to gain insight into additional use cases for our system. Users provided several good suggestions, including instrument cases and household appliances (e.g. lawnmowers). These could form the basis for future tasks that we could present as use cases for the application.

Appendix (i)

Consent Form
Demo Script
Raw Data
Demographic Survey
Post-Evaluation Survey

P4 – Team TFCS

Group Number: 4
Group Name: TFCS
Group Members: Farhan, Raymond, Dale, Collin

Project Description:  We are making a “habit reinforcement” app that receives data from sensors which users can attach to objects around them in order to track their usage.

Test Method:

  • Obtaining Consent: 

To obtain informed consent, we explained to potential testers the context of our project; the scope, duration, and degree of their potential involvement; and the possible consequences of testing, with a focus on privacy and disclosing what data we collected. First, we explained that this was an HCI class project, and that we were developing a task-tracking iPhone app using sensors to log specified actions. We explained how we expected the user to interact with it during the experiment: they would use a paper prototype to program 3 tasks, indicating actions with their finger, while we took photographs of the prototype in use. We also thought it was important to tell participants how long the experiment would take (10 minutes) and, most importantly, how their data would be used. We explained that we would take notes during the experiment which might contain identifying information, but not the user’s name. We would then compile data from multiple users and possibly share this information in a report, but keep users’ identities confidential. Finally, we mentioned that the data we collected would be available to participants afterwards on request.

Consent Script

  • Participants:

We attempted to find a diverse group of test users representing our target audience, including both its mainstream and its fringes. First, we looked for an organized user who uses organizational tools like to-do lists, calendars, and perhaps even other habit-tracking software. We hoped that this user would be a sort of “expert” on organizational software who could give us feedback on how our product compares to what he/she currently uses and on what works well in comparable products.

We also tested with a user who wasn’t particularly interested in organization and habit-tracking. This would let us see if our system was streamlined enough to convince someone who would otherwise not care about habit-tracking to use our app. We also hoped it would expose flaws and difficulties in using our product, and offer a new perspective.

Finally, we wanted an “average” user who was neither strongly interested in nor opposed to habit-tracking software, as we felt this would represent how the average person would interact with our product. We aimed for a user who was comfortable with technology and had a receptive attitude towards it, so they could represent the demographic of users of novel lifestyle applications and gadgets.

  • Testing Environment:

The testing environment was situated in working spaces, to feel natural to our testers. We used a paper prototype of the iPhone app to walk the user through the process of creating and configuring tasks. For the tags, which are USB-stick-sized Bluetooth-enabled sensor devices, we used small cardboard boxes of the same size and shape as the sensor and gave three of these to the user, one for each task. We also had a gym bag, a pill box, and a sample book as props for the tasks.

  • Testing Procedure:

After going through our consent script, we used our paper iPhone prototype to show the user how to program a simple task with Task.ly. We had a deck of paper screens, and Raymond led the user through this demo task by clicking icons, menu items, etc. Farhan changed the paper screen to reflect the result of Raymond’s actions. We then handed the paper prototype with a single screen to the test user. Farhan continued to change the paper screens in response to the user’s actions. When scheduling a task, the user had to set up a tag, which was described above.

The first task we asked users to complete was to add a new Task.ly task, “Going to the gym.” This involved the user navigating the Task.ly interface and selecting “Create a preset task.” We then gave the user a real gym bag, and the user had to properly install the sensor tag in the bag.

The second task we asked our user to do was track taking pills. This also required the user to create a new Task.ly preset task, and required the user to set up a phone reminder. Then, the user was given a pencil box to represent a pill box, and the user had to install a sensor tag underneath the lid of the pencil box.

Finally, the user had to add a “Track Reading” Task.ly task, which was the hardest task because it involved installing a sensor tag as well as a small, quarter-sized magnet on either cover of a textbook. The user was given a textbook, a cardboard sensor tag, and a magnet to perform this task.

While the user was performing these tasks, Farhan, Collin, and Dale took turns flipping the paper screens during each task and taking notes, while Raymond took continuous and comprehensive notes on the user’s experience.

Script: https://www.dropbox.com/s/f46suiuwml8qclv/script.rtf

User 1 tasked with tracking reading

Results Summary:

All three users managed to complete each task, though they each had difficulties along the way. During the first task, tracking trips to the gym, our first respondent looked at the home screen of our app and remarked that some of the premade tracking options seemed to be subsets of each other (Severity: 2). When he tried to create a new task, he was frustrated with the interface for making the weekly schedule for the task. Our menu allowed him to choose how many days apart to make each tracking checkpoint, but he realized that such a system made it impossible for him to track a habit twice a week (Severity: 4). Respondent #2 noted that he liked the screens explaining how the Bluetooth sensors paired to his phone, though he thought these should be fleshed out even more. Once he had to attach the sensor to his gym bag, however, he again expressed confusion when following our instructions (Severity: 4). He said that he thought the task was simple enough to forgo instructions.

Of the three tasks, our users performed best on tracking medication. Note, however, that this was not the last task we asked them to do, indicating that their performance was not merely a product of greater familiarity with the app after several trials. Respondent #3 remarked that tracking medication was the most useful of the precreated tasks. All three users navigated the GUI without running into problems beyond those experienced during the first task. All users attached the sensor tag to our demo pill box based on the directions given by the app; all performed the job as expected, and none expressed confusion. However, during the third task, tracking the opening and closing of books, new problems emerged with the sensor tags. Though two users navigated the GUI quickly (as they had during the second task), one respondent did not understand why there was a distinction made between tracking when a book was opened and tracking when a book was closed. He thought the distinction was unnecessary clutter in the GUI. We judge this a problem of Severity 2, a cosmetic problem. None of the users attached the sensor to our textbook in the way we expected: we thought the sensor should be attached to the spine of the book, but users attached the tags to the front or back covers, and one even tried to put the sensor inside the book. Users were also confused by the necessity of attaching a thin piece of metal to either inside cover (Severity: 3).

f. Results, Insights, Refinements

Our testers uniformly had problems while setting task schedules. There was no calendar functionality in the prototype; it only let the user set a number of times a task should be performed, over a certain time interval, so we are immediately considering changing this to a pop-up week/day selector, where the user highlights the day/times they want to do the task. Also, testers were confused by the sensors. The informational screens we provided to users to guide them through sensor setup were not complete enough, suggesting that we should make the sensor attachment instructions better phrased, visual, and possibly interactive. Because one user was confused by our having multiple sensor attachment pictures on one screen, we will instead offer the user a chance to swipe through different pictures of sensors being attached. Testers were also confused by the number of options for what the sensor could track, including in particular the option of being notified when a book is either open or closed. We can simply remove that choice.

Our users found the process of creating tasks to be cumbersome. Thus, we will simplify the overall process of creating a task, pre-populating more default information for general use cases, as that was the purpose of having presets in the first place. We will then remove the text options for choosing how a sensor may be triggered, and increase the emphasis on preset options, as above. Furthermore, we can accept feedback from the user each time he/she is reminded about a task (e.g. “remind me in two days” / “don’t remind me for a month”) to learn how they want to schedule the task, instead of asking them to set a schedule upfront. This is a more promising model of user behavior, as it distributes the work of setting a schedule over time and lets our users be more proactively engaged. Finally, while considering how to streamline our interface, we observed that the behavior of our system would be much more predictable to users if the reminder model were directly exposed. Rather than letting the user set a schedule, we could use a countdown timer as a simpler metaphor: for each sensor, the user would only have to set a minimum time between triggers, and if that time is exceeded, they would receive reminders (see the sketch below). This would be useful, e.g., for reminders about textbooks left lying on the floor. Users often forget simple, low-difficulty tasks like taking vitamins, and this would make remembering to complete such tasks easier. This could also be combined with deferring schedule-setting as discussed above.
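A rough sketch of this countdown metaphor, with hypothetical types and field names (our prototype does not implement it this way yet):

```cpp
// Countdown reminder model: each sensor stores when it last triggered
// and the user-set maximum gap between triggers; a periodic check
// sends a reminder once that gap is exceeded.
#include <ctime>

struct SensorTask {
  std::time_t lastTrigger;  // updated whenever the sensor fires
  long maxGapSeconds;       // user-set minimum time between triggers
  bool reminderSent;        // so we don't re-send for the same lapse
};

// Called periodically (e.g. once a minute) for each tracked task.
// Returns true when a reminder should go out; reminderSent is cleared
// elsewhere, whenever the sensor next triggers.
bool needsReminder(SensorTask& task, std::time_t now) {
  if (!task.reminderSent &&
      std::difftime(now, task.lastTrigger) > task.maxGapSeconds) {
    task.reminderSent = true;
    return true;
  }
  return false;
}
```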

g. Going Forward – Refinements

With a low-fidelity prototype, we plan on testing two parts of our app in the future. The first test will check whether the design changes that we make from the lo-fi prototype help users navigate the app better. This specifically pertains to the process of creating a task, including improvements regarding simpler presets, deferring schedule-setting, and exposing the reminder system as a countdown; the test will focus on whether creating a task has been made substantively easier. The second major redesign to test is our sensor setup pages, since we will need to validate that increased interactivity and changes in copy allow users to better understand how to attach their sensors.

With the high-fidelity prototype, we will test the user’s interaction with the reminder screens and the charts about their progress on different habits. This part can only really be tested with data about actual tasks, so we will defer it until the hi-fi prototype is ready. We also noticed that we could not get a very good idea of actual daily usage of the app, including whether a user would actually perform tasks (or not) and respond to notifications. That part of our project will be easier to test once we have a working prototype that gathers actual usage and reminder data.


L3 – Group 4 (Rumba)

Group 4 – Team TFCS – Collin, Dale, Raymond, Farhan

Short Description

We built a robot consisting of two “feet” connected by a paper towel tube. Each foot was a small breadboard plugged into a row of male header pins, with the exposed ends of the pins all bent at a 40-degree angle in the same direction. A DC motor was secured to the top of each foot, and each motor was attached to a single propeller blade with a small weight at the end. We put our Arduino in a “cockpit” built into the center of the paper towel tube. The motors were then plugged into the Arduino, and the Arduino was given a battery pack so that it could run without being tethered to a computer.

Our group first decided that we wanted to build a robot that moved by vibration, a decision motivated by the inherent weakness of DC motors. We didn’t want our robot to move randomly, however, so we encouraged it to move in a particular direction by angling the bristles on its feet so that the friction of moving forward would be less than that of moving in any other direction. Then, inspired by the neighboring car lab, we went a step further and made our robot steerable by having two independently running feet, a concept similar to that of a tank.

The final product was a success: the robot moved as desired, and it could run on its own via battery power. However, it is a little slow. If we were to redesign it, we would want to find a better way to get the feet to vibrate, which would probably involve a different configuration for the motor and its attached weights. We could also come up with a better design for the feet, perhaps using fewer bristles or some material other than metal.

List of Brainstormed Ideas

  1. Toothbrush Rumble Bot
  2. Three-Wheeled Vehicle
  3. Hovercraft
  4. Grappling-Hook Bot
  5. Ladder Crawler
  6. Magnetic Surface Crawler
  7. Segway
  8. Rudder Boat
  9. Fanboat
  10. Hot Air Balloon
  11. Blimp
  12. Hybrid Airship (Blimps connected to propellers)
  13. Flywheel Car

Sketches

Arduino-powered blimp is filled with helium. Fans on either side of the underbelly control which direction the blimp moves.

Back view of Arduino blimp.

Top-down view of Arduino tricycle.

Side view of Arduino tricycle.

Arduino hovercraft consists of a plastic ring with an Arduino at its center. Evenly spaced around the ring are four fans powered by motors that allow the hovercraft to “hover”.

Grappling hook bot launches a ball with a small magnet and rope attached. It attaches itself to a magnetic surface and pulls itself upwards by winding the rope around an axle.

Grappling hook consists of magnet attached to aerodynamic ball.

Air compression tube is compressed by motor and launches the grappling ball. Rope attached to motor axle pulls chassis upward along rope.

Ladder Crawler consists of two hooks attached to telescoping arms. An Arduino moves the arms in and out, and the hooks grab onto each subsequent rung of the ladder.

The magnetic surface climber moves by coordinating its arm movements with the turning on and off of two electromagnets.

This segway consists of two motors attached back to back with an Arduino hanging down beneath them.

This boat uses a servo motor to move a rudder back and forth, producing forward motion.

This boat has a backwards-facing propeller which pushes the boat over the surface of the water.

Many other robot ideas can be seen here, including the idea we finally selected, the rumble bot.

The Product

It’s Alive

Learning to Steer

From The Robot’s Perspective

A Cool Path

Parts List

-2 Small DC Motors
-1 Paper Towel Tube
-1 Arduino
-Jumper Wires
-Tape
-2 Small Weights (like screw nuts)
-2 Mini Breadboards
-2 Rows of Male Header Pins
-Victory Flag
-1 Battery Pack
-1 9V Battery

Instructions for Creation

a. The premise of Rumba is that its two “feet”, which consist of rows of angled wire, are designed in such a way that when they are vibrated, they move in a direction determined by the angle of the wire. Rumba has two feet connected by a paper-towel-tube body. When the left foot is vibrated and the right foot is not, Rumba pivots around its right foot; the reverse happens when its right foot alone is vibrated. In this way, we can control which direction Rumba moves in.

b. Thus, in order to make a Rumba, we have to find a way to create vibrations. To do this, attach an asymmetric servo horn to the axle of each of two small DC motors, then attach a very small weight (like a nut for a screw) to the end of each servo horn. When the asymmetric horns turn, they continuously move the center of mass of the system, so the motor vibrates. Securely attach each motor and servo horn (with strong tape) to one of two mini breadboards so that the servo horn hangs off the side and can rotate freely.

c. Now we must create the angled bristles for the feet of our Rumba, which will be attached to the bottom of the two breadboards. These will be made out of male-to-male headers. For each breadboard, measure one row of headers with enough pins to line the long outer edge of the breadboard. Attach the headers to the breadboard. Bend the pins that are now sticking out of the breadboard to be about 40 degrees from the breadboard. These are the breadboard feet.

d. Rumba’s body consists of a cardboard paper towel tube. Tape both breadboards to either end of this tube so that the angled feet face down, making sure to avoid the feet when taping the breadboard.

e. The brain of Rumba is an Arduino. Load the motor-control code onto the Arduino (an illustrative sketch appears after these instructions); it is programmed to turn each motor so that Rumba moves in an “interesting” way. To attach the Arduino to Rumba, first create a square tray chassis from the box that the Arduino comes in. Cut a hole in the chassis where the round power plug will attach to a battery pack. To attach the chassis, cut a square from the *top* of the center of the paper towel tube, so that the bottom half of the tube is intact and the chassis can securely slide into the opening. Attach a battery pack (we used a 9V battery) to the Arduino and place it along with the Arduino in the chassis. Tape the chassis to the paper towel roll.

f. Now we attach the motors to the Arduino by connecting the power wires to pins 3 and 5, and the ground wires to ground.

g. Turn on the power. Your Rumba should now be functional!

h. Optional: Add victory flag.
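The original sketch is not reproduced here, so below is an illustrative reconstruction of the kind of program described in step (e), matching the wiring in step (f); the timings and duty cycles are made up:

```cpp
// Rumba motor demo: alternate vibration between the two feet so the
// robot pivots left, pivots right, then drives roughly straight.
// Pins match step (f); delays and PWM levels are illustrative.

const int LEFT_MOTOR  = 3;  // PWM pin driving the left foot's motor
const int RIGHT_MOTOR = 5;  // PWM pin driving the right foot's motor

void setup() {
  pinMode(LEFT_MOTOR, OUTPUT);
  pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
  // Vibrate only the left foot: Rumba pivots around its right foot.
  analogWrite(LEFT_MOTOR, 255);
  analogWrite(RIGHT_MOTOR, 0);
  delay(3000);

  // Vibrate only the right foot: Rumba pivots the other way.
  analogWrite(LEFT_MOTOR, 0);
  analogWrite(RIGHT_MOTOR, 255);
  delay(3000);

  // Vibrate both feet: Rumba moves forward.
  analogWrite(LEFT_MOTOR, 255);
  analogWrite(RIGHT_MOTOR, 255);
  delay(5000);
}
```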

Assignment 2 – Collin Stedman

Observations:

I would have to say that there really wasn’t any time during the day when I failed to observe the people around me. I suppose that I was particularly careful to observe people’s activities while waiting for my Computer Networks lecture to start. Classroom settings gave me the opportunity to observe both undergraduates and professors. I decided not to spend much time observing graduate students or TAs. I also observed people walking to and from classes as I traveled between Forbes and my classes. I observed people mostly between classes, but I also decided to observe my own routine as I got ready for my first class each morning. I also paid attention to the way people behaved when entering and leaving dining halls.  I generally conducted my observations alone, as I don’t often walk to classes with friends or sit near other people. A few of my ideas were inspired by fortuitous conversations with friends, as I will describe below.

My first fruitful idea came when I observed my Computer Networks professor, Michael Freedman, before class. Professor Freedman holds office hours immediately before class, which I realized was quite unusual. Most of my professors spend the first ten or so minutes before class preparing for their lecture. I realized that Freedman didn’t have to worry about setting up for lecture because he uses PowerPoint to teach his material. I then realized that my math and physics professors don’t have this same luxury because of the difficulty of displaying complicated math in PowerPoint. It occurred to me that math and physics professors might like it if they could digitally save the notes they write on whiteboards and return them to the whiteboard at a later time. In other words, the whiteboard would be a screen with memory. Once an old whiteboard is loaded, it should be editable just like any normal whiteboard. This technology would allow professors to load previously created notes to a whiteboard in seconds.

My second idea came as I was walking through Wilson on my way to the E-Quad. I saw one boy ask another, presumably his friend, if he could borrow the other’s bike. Did this boy not have his own bike? If he did, was it broken or else unusable? While I do not know the answer to this question, it made me think about the possibility of Princeton having community bikes which could be shared among the entire student body. The bikes would be checked out of special racks using our proxes, and they would then be usable for a certain period of time before they would have to be returned at the risk of incurring charges on our proxes. I assume that these bikes would mostly benefit undergraduates, as they often need bikes to get to classes on time. Graduate students usually make use of the university buses, but they may also find the bikes useful from time to time. I imagine that these bikes could save students from being late to class or even exams!

The last observation I will list here took place when Jean Jacque, one of my friends, mentioned to me after our classics class that he needed to run two errands before his next precept. He wanted to grab coffee from the café in East Pyne, but he also needed to buy a ticket to a student theater production of No Exit / The Chairs. I decided that both of these errands needed to be made faster and easier to complete within ten minutes. Rather than going to Frist to purchase tickets to student performances, one ought to be able to purchase the tickets online from one’s phone. As a frequent theater-goer myself, I know that such an app would certainly benefit me. In fact, I suspect such an app would benefit both theater-goers and theater-producers, as making ticket purchases simpler would likely increase student attendance at shows.

Online purchasing could also make it easier for students like Jean Jacque to get their morning coffee. I envision an app which allows one to place an order for coffee to pick up from numerous café locations around campus, such as Small World, CaFe, or Starbucks. In order to prevent the student body from overwhelming these establishments with online orders, a quota would have to be put in place. However, given that the establishment of your choice is accepting online orders, one could select the coffee of one’s choice and then have it ready by the time one goes to pick it up and pay for it.

Brainstormed Ideas:

  1. A flashcard app which connects to your notes and converts them to cards
  2. A shared bike service with NFC or prox checkout
  3. A bike locator app
  4. Printing documents from a phone
  5. App for rating lectures and sending the results to professors
  6. App which alerts you to friends walking in the same area of campus
  7. App which takes pictures you snap with your phone and uploads them to a remote digital photo frame
  8. Bluetooth umbrella which flashes a light when the weather is rainy
  9. A battery which charges when you ride your bike and can plug into laptops
  10. App for remotely checking out books from Princeton libraries
  11. In-class social networks for meeting people in your classes
  12. Whiteboard screens which save and load what is written on them
  13. App for purchasing tickets to student productions
  14. Coffee app which lists available locations and allows for remote ordering and fast pickup
  15. A Princeton encyclopedia of eating clubs, extracurricular clubs, sports teams, classes, etc.

Paper Prototyped Ideas:

  1. I chose to paper prototype my flashcard app because I think there ought to be a way to transform the notes I take in class into a format that is more amenable to quick, piecemeal use between classes and at meals.
  2. I chose to paper prototype the app for uploading pictures to a remote digital photo frame because I often want to update my parents on how I am doing or what I am up to despite being too pressed for time to have a meaningful phone conversation.

Prototypes:

Notecard app:

The main screens of FlashNote. The left screen is simply the main screen. The middle screen is the screen of decks available to study prior to searching for new decks. The right screen is the screen of decks available after searching for new decks.

The COS 436 deck, showing both sides of each card.

The mythology deck, showing both sides of each card.

The various popup screens which alert the user to such events as the results of a search for decks or the completion of a deck.

The photo-frame app:

The main screen of the app.

The photo-taking interface. The left card shows the basic interface, including a target box, a slider for zoom, and a button to see the image currently displayed on the remote photo frame. The second card from the left shows the same interface with a man in focus and the zoom all the way out. The third card from the left shows the same man but with zoom all the way in. The card furthest right shows what the user sees when a photo has been snapped successfully.

The displayed images as seen on the app and on the photo frame. The first card from the left shows the picture that will be saved as the current image once the user takes the photo. The second card from the left shows the picture that is saved as the current image right now, before the user has taken the picture of the man. The third card from the left shows the picture of the man as it will be displayed in the remote photo frame. The card on the right shows the picture of the flowerpot as it is currently displayed in the remote photo frame.

User Testing:

Michelle Tan:

Michelle chose to study the COS 436 deck before searching for new decks. She then selected the mythology deck, completed that deck, and then exited the app. Michelle checked both sides of each flashcard before moving to the next.

Nate May:

Nate started by adding the mythology deck. He then studied COS 436, though he did not view both sides of each card. After viewing only the one deck, Nate exited the app.

As you can see from the second photo, Nate was confused by what to do with the first card he saw. Not only did he slide the card in the wrong direction, he never even flipped the card over.

Christina Noya:

Video of Christina

Christina first studied for COS 436, viewing both sides of each card before moving to the next. She then searched for the mythology deck and studied it, again viewing both sides of each card before moving on. After studying both decks, Christina exited the app.

As you can tell from this video, Christina didn’t understand how to interact with the flashcards either. I had to encourage her to slide finished flashcards to the left.

Insights:

  • Users did not realize that they were supposed to slide from one flashcard to the next by swiping their fingers from right to left.
  • Users were confused by what happened when they reached the end of the deck. Specifically, Nate didn’t understand that the deck ended automatically after each card had been flipped past.
  • Users were confused by not being able to go back to old cards. Nate wanted to be able to go back and see skipped cards.
  • Users didn’t know if they could quit the deck early.
  • Users were confused about where decks came from and how they related to the notes they had taken.
  • Users were unsure why the main screen existed.

If I were to go back and make a new prototype, I would make sure to include on-screen instructions for how to flip between cards. I would also allow users to flip between cards in both directions and return to skipped cards. I would add an ‘X’ to the cards which would be the only way to exit the deck. I would also expand the entire demo to make it obvious that the decks “found” during searches are constructed by the app from the user’s very own class notes. Not a single one of my testers understood the main point of the app because my prototype only demonstrated the GUI. I would consider making the user manually feed files to the app to be converted into flashcards. Though this may seem like a step backward, given that the app currently adds new decks automatically, the app’s functionality is currently so mysterious that it would probably require instructions teaching the user to drop notes into a particular folder a la Dropbox. By having the user add decks manually, there is no longer any confusion.