P3 – Name Redacted

Group 20 – Name Redacted

Brian worked on the project summary and the discussion of the three prototypes.

Ed worked on the prototype that modeled the TOY programming machine.

Matt worked on the prototype corresponding to the binary lesson.

Josh worked on the prototype for the tutorial to teach instructors how to use this system.

We all worked on the mission statement.

Rationale for the Assignment: Most school districts do not teach computer science and are constrained by technology costs and teacher training.  Our project hopes to rectify these two problems by creating an easy-to-use, fun, and interactive computer program that allows students to learn the fundamentals of computer science without the need for expensive hardware and software.  Our end goal is to create a product that allows users to “code” without typing on a computer.  This prototype therefore gives us a great opportunity to test the feasibility of the design.  From a user’s perspective, there is very little difference between taping paper to a whiteboard while we draw in the output, and placing magnets on the whiteboard while a projector shows the output generated by the computer.  We therefore hope that the low-fidelity prototype can give us not only an accurate depiction of how the user will interact with the final system but also valuable insight into how to improve the overall user experience (especially since the goal is to create a fun and interactive experience for middle school students).

Mission Statement:  Our group strives to provide every student with a method to learn the fundamentals of computer science in a tangible, fun, and interactive way.  Most schools in the country do not teach computer science because of resource and financial limitations.  However, computer science is one of the fastest-growing industries, creating a wide gap between the supply of and demand for computer programmers.  By creating a cheap and effective way to teach the fundamentals of computer science, we hope to give students from all socioeconomic backgrounds the ability to become computer programmers.

Updated Task: We switched our difficult task from Simplified Turtle Graphics to teaching the TOY lesson to instructors.  Since interviewing an instructor in P2, we have realized that a large part of the success of our project relies on teaching instructors how to use the system.  Since our user group expanded from only students to both students and teachers, it made sense to focus one task on how instructors would use our interface.

Description of Our Prototype: Our prototype uses paper cards with “fake” AR tags and labels that look similar to those in a real system.  We used tape on the back of the cards to mimic how users will put magnetic or otherwise adhesive cards onto a whiteboard.  Our prototype obviously does not rely on a projector or web camera, so we used whiteboard markers to emulate the information that the program would project onto the whiteboard.  For the tutorial, we drew what the whiteboard would look like as the user stepped through the program.  We have 16 number cards (for 0 to f) and labels for binary, octal, hexadecimal, decimal, LABEL, MOV, PRINT, ADD, SUB, and JNZ.

Completed program

This is where the program outputs.

This is where the memory is displayed. Our interviewed potential users said that understanding memory was a very difficult concept to grasp.

Completion of program with output.

This is where the code goes (the student places the magnetic blocks here).

Task 1 – Teaching Binary:

The objective of this task is to provide a simple, clean, and interactive teaching environment for students to learn the difference between number bases (specifically binary, octal, hex, and decimal).  For user testing, we would approach this from two angles.  First, we will test what it is like to teach with this tool; for that, we would have the tester imagine they are teaching a classroom of students and using this as an aid in the lesson.  Second, we can see the test through the eyes of the students.  Our tool is meant to be interactive, so after a quick lesson on what the tool is and how it works, we might ask students to complete quick challenges like trying to write 10 in binary.  The point of the system in both cases is to simplify the teaching process and increase engagement through interactivity.


Imagine you are teaching a lesson.  The basic idea of our UI is that there are 4 numerical base cards:



And 16 number cards:



They have adhesive on the back and stick to the whiteboard.  Users then place these cards on the board.  In our real system, we would project output on top of the board and cards, but for this lo-fi prototype, the tester will write the output in marker instead.


The first way that you could use the system is to put one numerical base card down and place a string of numbers after it.  The system will interpret that string of numbers in that base and provide a representation of that quantity.  In the example below, it displays balls.



Another way of using the system would be to put down more than one numerical base card and then place a string of numbers after just one of those cards.  The system would then populate strings of numbers after the other bases so that they are all equivalent.



If the user places down something that is not valid (such as a value beyond the base), we would use highlighting from the projector to let them know their error.
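To make the intended behavior concrete, here is a small Python sketch of the interpretation logic (for illustration only; the card names and error handling are placeholders, not final design decisions):

    # Sketch of the base-interpretation logic described above. Card names and
    # the ball-count output are placeholders, not the real system's API.

    BASES = {"binary": 2, "octal": 8, "decimal": 10, "hexadecimal": 16}

    def interpret(base_card, digit_cards):
        """Return the quantity encoded by the digit cards, or None if a card
        is invalid for the chosen base (the projector would highlight it)."""
        base = BASES[base_card]
        value = 0
        for card in digit_cards:
            digit = int(card, 16)   # the 16 number cards run 0 through f
            if digit >= base:
                return None         # e.g. a "9" card after the octal card
            value = value * base + digit
        return value

    print(interpret("binary", ["1", "0", "1", "0"]))  # 10 -> show ten balls
    print(interpret("octal", ["9"]))                  # None -> flag the error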



Task 2 – TOY Programming:

This lesson is used to teach the fundamentals of how all computers work. We provide users cards with a simplified version of assembly language and visual tags printed on them. The teacher or student will use this system to learn about computation and simple logic in a classroom setting. As the user places cards on the board, the projector will overlay syntax highlighting and other visual cues so the user gets feedback as he or she is writing the program. Then, when the user is done writing the program, they place the RUN card on the board. The system first checks if the program is syntactically correct and, if not, displays helpful messages on the board. Then, the system walks through the code step by step, showing the contents of memory and output of the program. As the commands are all very simple and straightforward, there is no confusing “magic” happening behind the scenes and it will be very easy for students to understand the core concepts of computation. However, the language is complete enough to make very complex programs. Our paper prototype very closely resembles the form of our final project. We created the visual tags out of paper and stuck them on a whiteboard using tape. We mimicked the actions of the projector by drawing on the board with markers.
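Because the command set is so small, its semantics fit in a short sketch. The following Python interpreter is our illustration only; the operand forms are assumptions, since the real card syntax is still being designed:

    # Sketch interpreter for the card language (MOV, PRINT, ADD, SUB, JNZ,
    # LABEL). Operand conventions here are assumptions, not a final spec.

    def run(program):
        """program: list of (op, args) tuples laid out like cards on the board."""
        memory, output = {}, []
        labels = {args[0]: i for i, (op, args) in enumerate(program) if op == "LABEL"}
        pc = 0
        while pc < len(program):
            op, args = program[pc]
            if op == "MOV":                       # MOV dest, constant
                memory[args[0]] = int(args[1])
            elif op == "ADD":                     # ADD dest, src
                memory[args[0]] += memory[args[1]]
            elif op == "SUB":                     # SUB dest, src
                memory[args[0]] -= memory[args[1]]
            elif op == "PRINT":
                output.append(memory[args[0]])
            elif op == "JNZ":                     # JNZ reg, label
                if memory[args[0]] != 0:
                    pc = labels[args[1]]
            pc += 1
        return memory, output

    # Count down from 3: the board would show memory changing and print 3, 2, 1.
    _, out = run([
        ("MOV",   ["x", "3"]),
        ("MOV",   ["one", "1"]),
        ("LABEL", ["loop"]),
        ("PRINT", ["x"]),
        ("SUB",   ["x", "one"]),
        ("JNZ",   ["x", "loop"]),
    ])
    print(out)  # [3, 2, 1]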


Task 3 – Teaching Tutorial:

The purpose of the tutorial would be to familiarize users – the teacher in particular – with the main display interface and teach them how to properly use the instruction cards. Prior to starting the tutorial, the user will need to be given basic instructions on what the system is, why it is useful, and how to set up the system and run the program. Once started, the user should not need any additional help. Text and prompts will be projected on the screen to guide the user through the tutorial, teaching them what the different parts of the display represent and how to properly use each instruction card. The tutorial will advance to the next step after a certain period of time has elapsed, when the user has completed a designated task, or when the user presses a continue key such as the space bar on the computer. If the user does something wrong, the system itself can recognize the mistake and help correct it.


Discussion of Prototype:

We made our prototype by first creating blocks similar to how they will appear on the magnets that users will use.  The blocks have AR tags (which for now we made up) and a label.  There are blocks for all of the numbers from 0 to f, and blocks with keywords that are supported in the “computer language” we are developing. Another part of our prototype was drawing on the whiteboard how the whiteboard will look during a lesson. This meant creating the three sections that will be projected – code, memory, and output. We wanted to draw these sections on the whiteboard for our prototype since they change in real time in our project, and thus we could emulate with markers what the user will see when they use our program. By using the whiteboard, we used a new prototyping technique. We believed it was more suitable than paper because of the increased flexibility it gives us. When we test our prototype with real users, we want the freedom to display the correct output, memory, and error messages. This would have required too much paper, since we have many different possible values in each of the four registers at any given moment, among other things. Also, since our system will rely on the whiteboard, it made sense to have the users interact with a whiteboard when testing our prototype.

One of the challenges that we had to confront arises from the primary user group being younger students.  We had to keep the tags simple and few enough that students could reasonably understand what they did, while still providing a reasonable amount of functionality.  It was difficult to come up with a good whiteboard interface for the users.  We wanted something simple that still conveyed all of the useful information.  One idea that we considered was an input tag that would allow the user to input data to the TOY program.  We decided, however, that this made the programming unnecessarily complex while not adding much benefit.  Most of the difficulty in creating the prototype was similar: the issue came from deciding what functionality to include that would offer a complete introduction to the material without overly complicating the learning process.  Even though using the whiteboard rather than paper presented some difficulties, I think it works very well in terms of simulating the program.  It was also important that our prototype not lie flat on a surface, since the final project will use a projector image on a whiteboard.  I think our prototype very closely resembles how we currently think the end product will look.

The Elite Four (#19) P3

The Elite Four (#19)
Jae (jyltwo)
Clay (cwhetung)
Jeff (jasnyder)
Michael (menewman)

Mission Statement:
We are developing a system that will ensure users do not leave their room/home without essential items such as keys, phones, or wallets. Our system will also assist users in locating lost tagged items. Currently, the burden of being prepared for the day is placed entirely on the user. Simple forgetfulness can often be troublesome in living situations with self-locking doors, such as dorms. Most users develop particular habits in order to try to remember their keys, but they often fail. By using a low-fidelity prototype, we hope to identify any obvious problems with our interface and establish how we want our system to generally be used. Hopefully, we can make this process easy and intuitive for the user.

Statement: We will develop a minimally inconvenient system to ensure that users remember to bring important items with them when they leave their residences; the system will also help users locate lost tagged items.

We all worked together to create the prototype and film the video. Jae provided the acting and product demo, Clay provided narration, Jeff was the wizard of Oz, and Michael was the cameraman. We answered the questions and wrote up the blog post together while we were still in the same room.

Prototype:
We created a cardboard prototype of our device. The device is meant to be mounted on the wall next to the exit door. Initially, the user will register separate RFID tags for each device he or she wants to keep track of. After that, the entire process will be automated. The device lights up blue in its natural state, and when the user walks past the device with all the registered RFID tags, the device lights up green and plays a happy noise. When the user walks past the device without some or any of the registered RFID tags, the device lights up red and plays a warning noise. The device is just a case that holds the Arduino, breadboard, speakers, RFID receiver, LEDs, and buttons for “Sync” and “Find” modes. The Arduino handles all of the RFID communication and will be programmed to control the LEDs and speakers. “Sync” mode will only be toggled when registering an RFID tag for the first time. “Find” mode will only be toggled when removing the device from the door in order to locate lost items.

The blue- and red-lit versions of the prototype, plus the card form-factor RFID tag

The green- and red-lit versions of the prototype, plus the card form-factor RFID tag

Task 1 Description:
The first task (easy difficulty) is alerting the user if they try to leave the room without carrying their RFID-tagged items. For our prototype, the first step is syncing the important tagged item(s), which can be done by holding the tag near the device and holding the sync button until the lights change color. Next, the user can open the door with or without the tagged items in close proximity. If the tagged items are within the sensor’s range when the door is opened, the prototype is switched from its neutral color (blue) to its happy color (green), and the device emits happy noises (provided by Jeff). If the tagged items are not in range, the prototype is switched to its unhappy color (red), and unhappy noises are emitted (also provided by Jeff). This functionality can be seen in the first video.

The device is in door-mounted “neutral mode”; the user has not opened the door yet

When the door is opened but tagged items are not in proximity, the device lights up red and plays a warning noise

When the tagged item(s) is/are in close proximity to the device and the door is opened, the device lights up green and plays a happy noise
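A minimal sketch of this door-open check (Python for illustration; on the real device the logic would run on the Arduino, and the tag names and output stubs are placeholders):

    # Sketch of the Task 1 check: compare tags in range against registered tags.

    registered_tags = {"keys", "wallet", "phone"}   # stored via the Sync button

    def set_led(color): print("LED:", color)        # stand-ins for the LEDs
    def play(sound):    print("sound:", sound)      # and speaker

    def on_door_open(tags_in_range):
        missing = registered_tags - set(tags_in_range)
        if missing:
            set_led("red")
            play("warning noise")
        else:
            set_led("green")
            play("happy noise")
        return missing

    print(on_door_open({"keys", "wallet", "phone"}))  # green; nothing missing
    print(on_door_open({"keys"}))                     # red; wallet, phone missing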

Task 2 Description:
The second task (moderate difficulty) is finding lost tagged items within one’s room/home. For our prototype, this is accomplished by removing the device from the wall, pressing the “Find” button, and walking around with the device. The speed of beeping (provided by Jeff in the video) indicates the distance to the tagged item and increases as the user gets closer. This functionality is demonstrated in the first video.

The device can be removed from the wall and used to locate missing tagged items

Task 3 Description:
The third task (hard difficulty) is finding lost items outside of one’s residence. As before, the user removes the device from the wall and uses the frequency of beeps to locate the item. This task presents the additional challenge that the item may not be within the range of our transmitter/receiver pair. In order to overcome this, the user must have a general idea of where the object is. Our system can then help them find the lost item, with a range of up to eight meters. This range should be sufficient for most cases. This functionality is shown in the second video.

(Visually, this is identical to Task 2, so no additional photos are provided.)
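For illustration, the mapping from detected distance to beep rate could be as simple as the Python sketch below; the rate curve is our assumption, and the slow "still searching" beep reflects an idea from our discussion section below:

    # Sketch of "Find" mode: beep faster as the tagged item gets closer.

    MAX_RANGE_M = 8.0   # approximate transmitter/receiver range

    def beep_interval(distance_m):
        """Seconds between beeps. A slow beep (rather than silence) when no
        tag is detected reminds the user the device is still in find mode."""
        if distance_m is None or distance_m > MAX_RANGE_M:
            return 2.0
        return 0.1 + 0.9 * (distance_m / MAX_RANGE_M)

    for d in (None, 8.0, 4.0, 1.0, 0.2):
        print(d, round(beep_interval(d), 2))   # intervals shrink as d shrinks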

Video Documentation:
Tasks 1 & 2: Syncing, forgotten item notification, & local item-finding

Task 3: Remote item-finding

Discussion:
Our project has a very simple user interface, since the device is intended to require as little user interaction as possible. There are no screens, so we used cardboard to build a lo-fi prototype of the device itself. There are three versions of the device; they differ only in the color of the LEDs as we have described above. “The device” is just a case (not necessarily closed) that holds the Arduino, breadboard, speakers, RFID receiver, LEDs, and buttons for “Sync” and “Find” modes. The functionality of each of these is described in the photos and videos. For our prototype we did not exactly come up with any new ways of prototyping, but we did rely heavily on “wizard of Oz” style prototyping, where one of our members provided sound effects and swapped different versions of the prototype in and out based on the situation.

It was somewhat difficult to find a way to effectively represent our system using only ourselves and cardboard. Since our system is not screen-based, “paper prototyping” wasn’t as easy as drawing up a web or mobile interface. The system’s interface consists mainly of button-pressing and proximity (for input) and LEDs/sound (for output), so we used a combination of cardboard/colored pen craftsmanship and human sound effects. The physical nature of the prototype worked well. It helped us visualize and understand how our device’s form factor would affect its usage. For example, using a credit card as an RFID tag (which is roughly the same size as the one we ordered) helped us understand the possible use cases for different tag form factors. While experimenting with different visual/auditory feedback for our item-finding mode, we realized that when no tagged item is detected, a slow beep, rather than no beeping at all, could help remind users that the device is still in item-finding mode.

P3: Dohan Yucht Cheong Saha

Group # 21: “Dohan Yucht Cheong Saha”


  • Miles Yucht

  • David Dohan

  • Shubhro Saha

  • Andrew Cheong


Mission Statement


We are evaluating user authentication with our system (hereafter called “Oz”). To use Oz, users make a unique series of hand gestures in front of their computer. If the hand-gesture sequence (hereafter called the “handshake”) is correct, the user is successfully authenticated. The prototype we’re building attempts to recreate the experience of what will be our final product. Using paper and cardboard prototypes, we present the user with screens that ask him/her to enter their handshake. Upon successful authentication, the Facebook News Feed is shown as a toy example.


Our mission in this project is to make user authentication on computers faster and more secure. We want to do away with text passwords, which are prone to hacking by brute force. At the same time, we’d like to make the process of logging into a computer system faster than typing on a keyboard. In this assignment, David Dohan and Miles Yucht will lead the LEAP application development. Andrew Cheong will head secure password management. Shubhro Saha will lead development of the interface between LEAP and common web sites for logging in during the demo.


Description of Prototype


Our prototype is composed of a box and a Leap controller. The box is shaped so that more of its volume is toward the top, with the Leap controller placed at the bottom so that it can detect the handshake gestures. The motivation behind this particular design is to encourage users to place their hands slightly higher in the box. For initial authentication, the user will select his or her profile either from the screen or via facial recognition. They can also reset their handshake through their computer.



Here is the box with the Leap Controller at the bottom. More volume is covered at the top of the box; therefore, the user naturally places his/her hand higher up in the box.


Using the Prototype for the Three Tasks


Task One: User Profile Selection / Handshake Authentication — In this scheme, most applicable to students at a university computer cluster, the user approaches the system and selects the user profile he/she wishes to authenticate into.


Our sample user is prepared to log in to Facebook


The user selects his/her account profile


Oz asks the user to enter their handshake


The user executes his/her handshake sequence inside a box that contains the LEAP controller


Our user is now happily logged in to Facebook.
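At its core, the verification step shown above reduces to comparing the observed gesture sequence against the stored one. A Python sketch for illustration (the gesture tokens are made up, and a real matcher would need tolerance for noisy LEAP recognition):

    # Sketch of handshake verification against a stored sequence.

    stored_handshake = ("fist", "open_palm", "two_fingers", "wave")

    def authenticate(observed):
        return tuple(observed) == stored_handshake

    print(authenticate(["fist", "open_palm", "two_fingers", "wave"]))  # True
    print(authenticate(["fist", "wave"]))                              # False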


Task Two: Facial Recognition / Handshake Authentication — As an alternative to user profile selection from the screen, Oz might automatically identify the user by facial recognition and ask them to enter their handshake.


The user walks up to the computer, and his/her profile is automatically pulled up


From this point on, interaction continues as described in Task One above.


Task Three: Handshake Reset — In this task, the user resets his/her secret handshake sequence for one of usually two reasons: (1) they forgot their previous handshake, or (2) they seem to remember the handshake, but the system is not recognizing it correctly.


At the handshake reset screen, the user is asked to check their email for reset instructions


Upon clicking the link in the email, the user is asked to enter their new handshake sequence


Prototype Discussion


We grabbed a file holder and made paper linings for the sides. Because the box is meant to prevent others from seeing your handshake, we had to cover up the holes along the sides of the file holder with the paper linings. These were taped on, and the Leap controller was placed at the bottom of the box.


No major prototyping innovations were created during this project. The file holder we found had a pretty neat form factor, though.


A few things were difficult. We had to procure a properly shaped box for Oz users to put their hands in that would also accommodate the LEAP Motion controller. Out of convenience, our first consideration was a FedEx shipping envelope (1.8 oz., 9.252”x13.189”). This solution was quickly ruled out because of its odd shape. Second, we found a box for WB Mason printing paper. This too was ruled out, this time because of bulkiness. Finally, we found a plastic file holder in the ELE lab that had an attractive form for our application. This solution was an instant hit.


Once we found the box, it worked really well for our application. In addition, putting the LEAP inside it was relatively straightforward. Black-marker sketches are always an enjoyable task. All in all, things came together quite well.


Group 10: P3

Group Number and Name

Group 10 – Team X

Group Members

  • Osman Khwaja (okhwaja)
  • JunJun Chen (junjunc)
  • Igor Zabukovec (iz)
  •  (av)

Mission Statement

Our project aims to provide a way for dancers to interact with recorded music through gesture recognition. By using a Kinect, we can eliminate any need for the dancers to press buttons, speak commands, or generally interrupt their movement when they want to modify the music’s playback in some way. Our motivation for developing this system is twofold: first of all, it can be used to make practice routines for dancers more efficient; second of all, it will have the potential to be integrated into improvisatory dance performances, as the gestural control can be seamlessly included as part of the dancer’s movement.

Prototype Description

Our prototype includes a few screens of a computer interface that would allow the user to set up and customize the software, as well as view current settings (and initial instructions and gestures/commands). The rest of the prototype depends heavily on Wizard of Oz components, in which one member of our team acts as the Kinect, recognizing gestures and responding to them by playing music on a laptop (using a standard music player such as iTunes).

Our prototype will have three different screens to set up the gestures. The first screen is a list of preprogrammed actions that the user can bind gestures to. These include stop, start, move to last breakpoint, set breakpoint, go to breakpoint, start “follow mode”, etc.

     set_gesture_main

Once the user selects a function, another screen pops up that instructs the user to make a gesture in front of the Kinect and hold it for 3 seconds or so.

capturing

Once the user creates a gesture, there will be a verification screen that basically reviews what functionality is being created and prompts the user to verify its correctness or re-try to create the gesture.

save_gesture

Tasks

(So that we can also test our setup interface, we will have the user test/customize the gestures of each task beforehand, as a “task 0”. In real use, the user would only have to do this as an initial setup.)

The user selects the function that they want to set the gesture for:

choose_gesture

The user holds the position for 3 seconds:

pause_gesture

The user confirms that the desired gesture has been recorded, and saves:

save_osman

An illustration of how our prototype will be tested is shown in the video below. For task 1, we will have the users set gestures for “play” and “pause”, using the simple menus shown. Then we will have them dance to a recorded piece of music, and pause/play it as desired. For task 2, we will have them set gestures for “set breakpoint” and “go to breakpoint”. Then they will dance to the piece of music, set a breakpoint (which will not interrupt the playback), and then, whenever desired, jump back to that breakpoint. For task 3, we will have the users set a gesture to “start following” and record a gesture at normal speed. We will then have the users dance to the piece of music, start following when desired, and then adapt the tempo of the music to the speed of the repeated gesture.
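For a sense of what the wizard simulates in tasks 1 and 2, here is a Python sketch of the gesture-to-command dispatch; the gesture names and the player stub are placeholders for this write-up:

    # Sketch of the dispatch our "wizard" performs by hand on the laptop.

    position_s = 42.0      # current playback position (stand-in value)
    breakpoint_s = None    # set by the "set breakpoint" gesture

    def player(action, *args): print("player:", action, *args)  # stand-in

    def on_gesture(name):
        global breakpoint_s
        if name == "play":
            player("play")
        elif name == "pause":
            player("pause")
        elif name == "set_breakpoint":
            breakpoint_s = position_s        # does not interrupt playback
        elif name == "goto_breakpoint" and breakpoint_s is not None:
            player("seek", breakpoint_s)

    for g in ("play", "set_breakpoint", "goto_breakpoint", "pause"):
        on_gesture(g)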

Our “Wizard in the Box”, controlling the audio playback:

wizard_in_box

Discussion

i. We made our initialization screens using paper, but the bulk of it was “Wizard of Oz”, and just responding to gestures.

ii. Since our project doesn’t really have a graphical user interface, except for setup and initial instructions, we relied heavily on the Wizard of Oz technique to recognize and respond to gestures and voice commands. Since what the user would mostly be interacting with is music and sound, which can’t be represented well on paper, we felt it was appropriate to have our prototype play music (the “wizard” would just push play/pause, etc., on a laptop).

iii. It was a little awkward to try to prototype without having the Kinect or even having a chance to get started creating an interface. Since users would interface with our system almost completely through the Kinect, paper prototypes didn’t work well for us. We had to figure out how to show interactions with the Kinect and music.

iv. The Wizard of Oz technique worked well, as we could recognize and respond to gestures. It helped us get an idea of how tasks 1 and 2 work, and we feel that those can definitely be implemented. However, task 3 might be too complicated to implement, and it might be better to replace it with a simpler “fast-forward / rewind” function.

Do You Even Lift- P3

Group Number and Name

Group 12 — Do you Even Lift?

First Names of Team

Andrew, Matt, Adam, and Peter

Mission Statement

We are evaluating a system designed to monitor the form of athletes lifting free weights and offer solutions to identified problems in technique.

Some lifts are difficult to do correctly, and errors in form can make those lifts ineffective and dangerous. Some people are able to address this by lifting with an experienced partner or personal trainer, but many gym-goers do not have anyone knowledgeable to watch their form and suffer from errors as a result. Our system seeks to help these gym-goers with nowhere else to turn. In this regard, we want our prototype to offer an approachable interface and to offer advice that is understandable and useful to lifters of all experience levels.

Concise Mission Statement: We aim to help athletes of all experience levels lift free weights with correct technique, enabling them to safely and effectively build fitness and good health.

Member Roles: This was a collaborative process. We basically worked together on a shared document in the same room for most of it.

Adam: Compiled blog, wrote descriptions of 3 tasks, discussion questions…

Andrew: Drew tutorial and feedback interface, mission statement, narrated videos…

Matt: Drew web interface, took pictures, mission statement…

Peter: Mission statement, discussion questions, filmed videos…

Clear Description of the Prototype w/ Images

Our prototype is a paper prototype of the touch screen interface for our system. Users first interact with our paper prototype to select the appropriate system function. They then perform whatever exercise that system function entails. Finally, users receive feedback on their performance.

We envision our system consisting of a Kinect, a computer for processing, and a touch-screen display. The touch-screen display will be the only component with which users physically interact. If we do not have a touch-screen computer for our prototype, we will substitute an ordinary laptop computer.

This is our proposed startup page. From this page, users can select the exercise which they are about to perform. They also have the option to click the “What is This?” button which will give them information about the system.
After selecting an exercise, users can enter either “Quick Lift” mode or “Teach Me” mode. In “Quick Lift,” our system will watch users lift weights and then provide technical feedback about their form at the end of each set. In “Teach Me” mode, the system will give the user instructions on how to perform the lift they selected. This page of the display will also show a live camera feed so users can see that the system is interactive.

In the top right corner of the display too, users can see that they have the option to log in. If they log in, we will track their progress so that they can view it in our web interface and so the system can remember their common mistakes for future workouts.
In “Quick Lift” mode, users have the option of receiving audio cues from our system (like “Good job!” or “Keep your back straight!”). Users will then start performing the exercise, with or without audio cues. Once they are finished with a set, we will show a screen like the one below. On the screen we will show users our analysis of each repetition in their previous set of exercises. We will highlight their worst mistakes and will allow them to see a video of themselves in action. This screen will also allow them to see their results from previous sets. Likewise, if a user is logged in, this information will be saved so that they can later reference it in our web interface.

If a user selects “Teach Me”, they are taken to a screen like the one below. This screen gives a text description, photos, and a video of the exercise. After reading the page, the user can press the “Got it!” button. The system will then encourage the user to try the exercise themselves using the unweighted bar. After the user successfully performs the exercise a number of times, the system will prompt the user to try that exercise in “Quick Lift” mode.

The picture below is our web interface. Here, workout data is organized by date. By clicking on a date, users can unfold the accordion-style menu to view detailed data from their workout, such as weight lifted, number of sets and repetitions, and video replay. Users can filter the data for a specific exercise using the search bar at the top. Searching for a specific exercise reveals a graph of the user’s performance on that exercise.


Descriptions of 3 Tasks

Provide users feedback on their lifting form

We intend the paper prototype to be used in this task by having users select an exercise and then the “quick lift” option. The user will then navigate through the on-screen instructions until he or she has completed the exercise. When the user performs the exercise, we will give them feedback by voice as well as simulate the type of data that would appear on the interface.

The opening page for “Quick Lift.” The Kinect is watching the user and will give feedback when the user begins exercising.

The user begins exercising.

As the user exercises, the side menu accumulates data. A live video of the user is displayed in the video pane.

After the lift, users can navigate through data from each repetition of their set of exercises. The text box tells users their errors, the severity of those errors, and explanations of why the errors are bad.
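As an illustration of that per-repetition summary, the “worst mistakes” view could be built from data like the following Python sketch (the error names and severity scores are invented for this write-up):

    # Sketch of the post-set summary: each rep carries (error, severity) pairs,
    # and the screen highlights the worst mistakes first.

    reps = [
        {"rep": 1, "errors": [("rounded back", 0.8), ("shallow depth", 0.3)]},
        {"rep": 2, "errors": []},
        {"rep": 3, "errors": [("knees caving in", 0.6)]},
    ]

    def worst_mistakes(reps, top_n=2):
        found = [(sev, err, r["rep"]) for r in reps for err, sev in r["errors"]]
        return sorted(found, reverse=True)[:top_n]

    for sev, err, rep in worst_mistakes(reps):
        print(f"rep {rep}: {err} (severity {sev})")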

Track users between sessions

We intend the paper prototype to be used in this task by having a user interact with the web interface.  First, a user will see the homepage of the web interface. The user will then click through the items on the page, and the page elements will unfold to reveal more content.

The web interface is a long page of unfolding accordion menus.

In the web interface, users can look back at data from each set and repetition and evaluate their performance.

Accordion menus unfold when the user clicks to display more data.

Create a full guided tutorial for new lifters

We intend the paper prototype to be used in this task by having users select an exercise and then the “teach me” option. The user will then navigate through the on-screen instructions until he or she has read through all the instructive content on the screen. Then, when the user is ready to perform the exercise, he or she will press the “got it!” button.

The user selects the “teach me” option

The user can go through the steps of the exercise to see pictures and descriptions of each step.

Discussion of Prototype

i. How did you make it?

We made the prototype by evaluating the tasks users perform with the system and coming up with an interface to allow for the completion of those tasks. It made most sense for us to create a paper interface that would display the information users would see on the display monitor. The idea is that users would use the paper interface to interact with the system as they would with the touch screen, and we would use our own voice commands to provide users with audio feedback about their form if they wanted it.

ii. Did you come up with any new prototyping techniques to make something more suitable for your assignment than straight-up paper? (It’s fine if you did not, as long as paper is suitable for your system.)

We used a combination of the standard paper prototype with “Wizard of Oz” style audio and visual cues.

iii. What was difficult?

 It was difficult to compactly represent the functionality offered by the Kinect on a paper prototype. As described above, we were able to partially account for this by adopting “Wizard of Oz” style audio and visual techniques. However, our system relies on users taking advice from a computer and it was difficult to test how receptive a user would be to our advice.

iv. What worked well?

We think our paper prototype interface is pretty intuitive and makes it easy for users to choose the functionality they want. The design seems pretty self explanatory which is especially helpful when new users interact with the system. We also were pleased with the methods we chose to give users realtime feedback without distracting them from their lifts.

GaitKeeper

a) Group number and name

Group 6 – GARP

b) Group first names and roles

Gene, Alice, Rodrigo, Phil

Team Member Roles:
Gene – Editor, D.J.
Alice – Writer, Punmaster
Rodrigo – Idea man/Photographer
Phil – Artiste/Builder

c) Mission Statement

Runners frequently get injuries. Some of these injuries could be fixed or prevented by proper gait and form. The problem is how to test a person’s running form and gait. Current tools for gait analysis do not provide a holistic picture of the user’s gait. Insole pressure sensors fail to account for other biomechanical factors, such as leg length differences and posture. While running, users have very little access to data; they are entirely dependent on their own perception of their form, and this perception can be rather inaccurate, especially if the runner is fatigued. We will build a system that solves these problems while causing minimal alteration to natural running movements. We hope to develop our plans for sensor placement, location of wearable components, and data visualization. We hope to discover whether people think this has enough or too many components. We will evaluate our product for comfort based on the intended size, weight, and shape. We will evaluate the effectiveness of depictions of the recorded data.  Our mission is to build a low-impact gait-analysis system that generates a more meaningful representation of data than existing systems. These metrics will facilitate sports medicine and the process of buying specialized athletic footwear.

d) Prototype Description

We implemented a basic version of the foot sensor, the ankle device, the wrist display, and the screens for the computer once the device has been hooked up. The foot sensor is made out of cardboard and is about the size and thickness that our insert will be. We drew the general layout of where the sensors will likely be on top. The ankle device is made out of foam, some weights, and material from a bubble-wrap envelope as the strap. The foam and weights were used to create a device similar in size and feel to an Arduino with a battery pack and accelerometer. The wrist display is just three circles drawn onto a strap, again made from a bubble-wrap envelope. The circles represent LEDs that will light up to indicate correct, so-so, and bad gait (tentatively chosen as green, yellow, and red).

For the screens, we have mapped out a welcome screen which can take you either to load new data or to view past analyses. Selecting to load new data will first take you to a waiting screen.  Once the device is connected, the listed device status will change to connected and the progress bar will begin to fill. From there it will go to the page that displays the analysis tools. We have depicted a heat-map image of the foot showing the pressure, and information about each sensor, the runner, the data, and any additional notes the user wants to input. Selecting history from the welcome screen will take you to a page with a list of users. You can either scroll through them or select the top text bar to type in a name to narrow down the results. Clicking on a user will open a drop-down menu of dates; selecting a date will take you to the analysis from that day, which is basically the same page as the one you go to after loading new data, but will load any previously made notes or user input.

The foot pad for GaitKeeper

The ankle holder for our prototype.

e) Prototype Testing Tasks

TASK 1: While running, assess gait and technique.

The motivating story for this test is the story of a runner who is worried that their form while running is unhealthy, placing them in danger of injuring themselves.

We will have the athlete install the system on themselves to determine the intuitiveness of placement of the various components. To facilitate testing, the subject will run on a treadmill. One member of our group will perform real-time gait analysis of our subject based on his or her own running experience. Another member will change the color of the simulated LED accordingly. The group member in charge of gait analysis will observe the runner for gait alterations. We will weight the prototype components to approximate the weight of the real components, and observe the prototype’s stability and attachment during running. Also, we will interview the user about the comfort of the system.

The athlete puts the foot pad in their shoe

The athlete straps the prototype on their ankle

The athlete runs on the treadmill

During the workout, the LEDs change color depending on the health of the gait
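For illustration, the LED color could be driven by a simple rule over the insole readings. The Python sketch below uses an invented heel-strike heuristic and thresholds; the real analysis would combine pressure with the ankle accelerometer:

    # Sketch of driving the wrist LEDs (green = correct, yellow = so-so,
    # red = bad gait). The heuristic and thresholds are placeholders.

    def gait_status(heel_pressure, forefoot_pressure):
        """Crude check for heavy heel striking from two pressure zones."""
        ratio = heel_pressure / (heel_pressure + forefoot_pressure)
        if ratio > 0.70:
            return "red"       # heavy heel strike
        if ratio > 0.55:
            return "yellow"    # borderline
        return "green"

    print(gait_status(80, 20))  # red
    print(gait_status(40, 60))  # green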

TASK 2: After an injury, evaluate what elements of gait can be changed in order to prevent recurrence.

The motivating story for this test is the story of a runner who has injured themselves while running. The injury may or may not be gait-related. They are working with a specialist in biomechanics and/or orthopedic medicine to alter their gait to reduce the chance of exacerbating the injury or re-injuring themselves.

We will attempt to find a runner with a current injury, and interview them about the nature of their injury ahead of time. Specialists in biomechanics have extensive demands on their time, so one group member will play that role instead, including brief research into prevention of the injury in question. After assisting the runner in placing and securing the system, we will have the runner run on a treadmill. After the run, we will simulate plugging the device into the computer, and will show simulated running data.

After the workout, the device is plugged into a computer.

The data is shown and aspects of the gait can be identified. Using the data, the doctor can see whether the gait has unhealthy characteristics and can suggest exercises that would help improve the gait for the athlete.

TASK 3: When buying running shoes, determine what sort of shoe is most compatible with your running style.

The motivating story for this test is the story of a customer who is going to a running store and looking for the perfect shoe for their gait. Even if the store allows the user to test the shoe on a treadmill in the store, it is difficult to find the right shoe from feel alone (we know this from personal experience after a team member bought a shoe in P2). A quantitative process would be less error-prone and would allow the in-store specialists to give more substantial advice to customers.

The shoe pad fits into various shoe models of the same size

A gait window for each shoe is opened on a computer in front of the treadmill. The user can see the results of the sensors as they run and the specialist, by the treadmill, can see it as well.

f) Prototype Discussion

i. How did you make it?

The prototype was constructed from found and brought materials in an ad-hoc manner. We worked to simulate the weight and feel of the actual system.

ii. Did you come up with any new prototyping techniques to make something more suitable for your assignment than straight-up paper?

Yes. Our prototype involves fewer interface screens than the projects we contemplated for the individual paper prototyping assignment. We used materials in a three-dimensional manner to model how they will fit on the body of the user.

iii. What was difficult?

It was difficult coming up with names for buttons and functions that would be intuitive for a user.

Building a simulation of physical objects is difficult because the feel of the objects is more important than the visual appearance. We couldn’t approach this shape-and-feel prototyping the same way we approached the paper prototyping we did for the individual assignment.

It was also difficult to determine the layout of the screens.  Specifically, designing the main analysis screen was a challenge because we wanted it to be as informative as possible without being cluttered.  This is clearly a central challenge of all data analysis tools, as it required us to really consider our distinct user groups and how each of them will interact with the data differently.  After a good deal of discussion, we decided that it would be effective to have a single desktop interface that all user groups interact with.  Our main concern here was that runners might be overwhelmed by the information that doctors or running company employees would want to see for their analysis.  However, we concluded from our previous interviews that the running users who would use this product would probably be relatively well informed, almost to the level of running store employees.

iv. What worked well?

We let group members who were immediately excited about prototyping, and who began building components without prompting, continue prototyping for the duration of the assignment. This produced prototyping material quickly, and let good ideas flow.

By connecting the intended uses of the tasks together, we were able to make an interface that addresses the multiple needs of each task simultaneously. This simplified our product and allowed us to make it more understandable and applicable to use cases we weren’t even thinking about.

The design of the physical device was also a success, as we found through testing that it did not significantly affect how we ran.  The weights added, which we made comparable to an Arduino with batteries, were not excessive.  The form factor was also acceptable, even in a context with a good deal of movement.

P3: NavBelt

Group 11 – Don’t Worry About It
Krithin, Amy, Daniel, Jonathan, Thomas

Mission Statement
Our purpose in building the NavBelt is to make navigating cityscapes safer and more convenient. People use their mobile phones to find directions and navigate through unfamiliar locations. However, constantly referencing a mobile phone with directions distracts the user from safely walking through obstacles and attracts unwanted attention. From our first low-fidelity prototype, we hope to learn how best to give haptic feedback to the user so they can navigate more effectively. This includes how long to vibrate, where to vibrate, and when to signal turns.

Krithin Sitaram (krithin@) – Hardware Expert
Amy Zhou (amyzhou@) – Front-end Expert
Daniel Chyan (dchyan@) – Integration Expert
Jonathan Neilan (jneilan@) – Human Expert
Thomas Truongchau (ttruongc@) – Navigation Expert

We are opening people’s eyes to the world, making navigation safer and more convenient while keeping their heads held high.

The Prototype
The prototype consists of a paper belt held together by staples and an alligator clip. Another alligator clip connects the NavBelt prototype to a mock mobile phone made of paper. The x’s and triangles mark where the vibrating elements will be placed. For clarity, the x’s mark forwards, backwards, left, and right while the triangles mark the directions in between the other four.

The Tasks
1. Identifying the correct destination (Easy Task)
2. Provide information about immediate next steps. (Hard Task)
3. Reassure user is on the right path. (Moderate Task)

Method
1. The user types his destination into his mobile phone and verifies, using a standard map interface, that the destination has been correctly identified and an appropriate route to it has been found.

2. One of the actuators on the NavBelt will constantly be vibrating to indicate the direction the user needs to move in; we simulated this by having one of our team repeatedly poke the user with a stapler at the appropriate point on the simulated belt. Vibration of one of the side actuators indicates that the user needs to make a turn at that point.

The following video shows how a normal user would use our prototype system to accomplish tasks 1 and 2. Observe that the user first enters his destination on his phone, then follows the direction indicated by the vibration on the belt.

The following video demonstrates that the user truly can navigate solely based on the directions from the belt. Observe that the blindfolded user here is able to follow the black line on the floor using only feedback from the simulated belt.

3. In order to reassure the user that he or she is on the correct path, the NavBelt will pulsate in the direction of the final destination; if the actuator at the user’s front is vibrating, that is reassurance that the user is on the right track. Again, a tester with a stapler will poke at one of the points on the belt to simulate the vibration.
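A sketch of how the phone might choose which of the eight actuators to pulse, given the bearing to the next waypoint and the user’s heading (Python for illustration; both angles in degrees, with names matching the x and triangle positions):

    # Sketch of actuator selection for the eight vibration points on the belt.

    ACTUATORS = ["front", "front-right", "right", "back-right",
                 "back", "back-left", "left", "front-left"]

    def actuator_for(bearing_to_target, user_heading):
        """Both angles in degrees, clockwise from north."""
        relative = (bearing_to_target - user_heading) % 360
        return ACTUATORS[round(relative / 45) % 8]

    print(actuator_for(90, 90))    # front -> reassurance: on the right track
    print(actuator_for(180, 90))   # right -> turn right here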

Discussion
We constructed the prototype from strips of paper and alligator clips to hold the belt together and to represent the connection between the mobile phone and the NavBelt. We also used a stapler to represent the vibrations that would direct the user where to walk. We encountered no real difficulties and the prototype was effective for guiding a user between two points.

%eiip — P3 (Low-Fidelity Prototype)

Group Number and Name

Group 18: %eiip

First Names

Bonnie, Erica, Mario, Valya

Netids (for our searches)

bmeisenm, eportnoy, mmcgil, vbarboy

Mission Statement

We are evaluating a bookshelf system for storing and retrieving books. Our system currently involves a bookshelf, with embedded RFID scanners; RFID tags, which must be inserted into each book; and a web application for mobile devices that allows a user to photograph their book to enter it into the database, as well as search through the database and view their books. The purpose of our project is to allow book owners the flexibility of avoiding a rigid organizational system while also being able to quickly find, retrieve, and store their books. With our low-fidelity prototype, we want to learn if users think that using our system is natural enough that they would be willing to use it. We want to observe how users choose to interact with the system, and whether or not it frustrates them. Specifically, we also want to observe their physical motions in order to tailor the construction of our more high-fidelity prototypes. Based on this, we state our mission as follows: We believe that there’s something uniquely special about personal libraries. Rigid organizational systems remove some intangible, valuable experience from book collections; also, we’re lazy! We want to build a way for people to keep track of their books that’s as natural and easy as possible. In this assignment, Mario is drawing interfaces for the mobile website and writing up descriptions; Bonnie is writing responses to questions from the first part (mission statement, etc), writing description of prototypes, and creating task walkthrough films; Valya is writing test user stories for each task and creating task walkthrough films; and Erica is constructing the cardboard prototype shelf, writing the prototype discussion, and putting together the blog post.

Description of Prototype, With Images

Our prototype consists of a two-shelf, cardboard “bookshelf” with paper “RFID sensors” taped to the back; index cards representing the mobile web interface for adding books to the system; some books; and some index cards representing RFID tag bookmarks.

Bookshelf with books in it.

“RFID sensors” on the back of the bookshelf.

RFID tag bookmark in book

Main book search and list screen. The user has searched for a book that is not in the collection.

Adding books, step 1: Screen that asks the user to take a picture of the book

Adding books, step 2: Asks the user to take a picture of the book’s ISBN number.

Adding books, step 3: User enters or verifies information about the book.

Adding books, steps 4 and 5: Instructs the user to insert the RFID bookmark or tag sticker and add the book to the shelf.

The user filed the book successfully!

Display all books in the user’s collection.

Detail screen for a particular book. The user can see the book’s information, and has the option to delete it from the collection.

Tasks

Easy Task:

We brainstormed a few tasks that a user interacting with our prototype could feasibly want to do. Some easy tasks that we thought of were putting a book that’s already in the system onto the shelf, checking if a given book is in a user’s collection, and retrieving a book from the shelf. Ultimately we decided that retrieving a book from the shelves was a more important task to test. Moreover, checking if a book is there could be part of the retrieval process. We would tell our testers the following: Imagine that you have acquired a new bookshelf for storing your books. You also have a database which keeps track of the books that are on the bookshelf, and where they are, accessible via a mobile website. Suppose that you want to get a textbook for your Imagined Languages class, for example, In the Land of Invented Languages by Arika Okrent. Can you get the textbook from the bookshelf? Having accomplished that, could you also get me John Milton’s Paradise Lost? We will ask the user to perform these tasks using our mobile web application. [Note: The Imagined Languages textbook is actually on the bookshelf, so getting it should be easy for the user. However, Paradise Lost is not. We want to see if the user can search the database and accurately understand the information given. If the book is on the bookshelf, the user should go and get it; if the book is not in the system, we expect them to say something along the lines of “I do not have this book.”]

The bookshelf lights up to tell you where your book is.

Searching for a book that is not there.

Searching through your library.
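A minimal sketch of the retrieval flow behind this task (Python for illustration; the titles come from the tasks in this post, while the tag IDs and shelf locations are invented):

    # Sketch of the lookup: the database maps each book's RFID tag to the shelf
    # sensor that last saw it, and the shelf lights up to guide the user.

    collection = {
        "In the Land of Invented Languages": ("tag-017", "shelf 1, left side"),
        "The Lotus Sutra": ("tag-042", "shelf 2, right side"),
    }

    def light_up(location): print("[shelf] lighting up", location)  # stand-in

    def find_book(title):
        entry = collection.get(title)
        if entry is None:
            return f"'{title}' is not in your collection."
        tag, location = entry
        light_up(location)
        return f"'{title}' is on {location}."

    print(find_book("In the Land of Invented Languages"))
    print(find_book("Paradise Lost"))  # expected response: not in collection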

Moderate Task:

For our moderate task we chose adding a book to our system, which would include tagging it, adding it to the database, and then placing it on the shelf. We would tell our testers the following: You’re starting a new semester, so you have lots of new classes. In particular, you just purchased Lotus Blossom of the Fine Dharma (The Lotus Sutra). You want to add this book to your collection so that you can find it in the future. We will have the user tag the book, add it to the database, and then place it on our prototype bookshelf.

Placing a tagged book onto the bookshelf.

The website tells you to RFID-tag your book.

Difficult Task:

Finally, for our difficult task we chose adding an entire collection to the system. The reason we’re concerned with this as a separate task is that it’s unclear to us how annoying it would be for a user, and we want to see if we can make it more natural and less tedious. We would tell our testers the following: Suppose that you just bought a new bookshelf that keeps track of all of your books for you. The problem is that you already own quite a few books, and you need to add all of them to the system. Here is a pile of the books you own. We will then have the user tag all of them, add them to the database, and add them to the bookshelf so that they can find them in the future.

Taking a picture of the cover to add the book to the database.


Prototype Discussion

While our prototype includes a software system, we are also extremely interested in seeing how the users interact with the physical objects in our system. Thus, we constructed a scaled-down cardboard bookshelf that can hold a few books. Since paper isn’t exactly sturdy enough, we used cardboard, duct tape, and index cards to put together a bookshelf. We constructed the bookshelf using a couple of disassembled cardboard boxes folded into a shelf-like shape. We added “RFID readers” by folding index cards into vaguely reader-like shapes and taping them onto the back. We are using index cards folded in half to simulate RFID tags. We used index cards to create a paper prototype, in the usual manner.

Putting together cardboard in a way that will hold a few books using minimal amounts of cardboard was slightly difficult but doable. We used some index cards to stabilize it, which was pretty cool. We prototyped our web application (the main interface to the system) using paper index cards, which we felt were appropriate given that the application is targeted primarily at mobile devices. Getting the specifics of all the workflows correct at first was somewhat difficult, since we had not fully fleshed them out before – for instance, our first implementation of the “add book” workflow did not allow users to verify the quality of the pictures they took before proceeding to the next step, which was an awkward design. We also had some initial struggles with conforming to standard mobile UI conventions and best-practices; thinking consciously and critically about the layout of UIs is difficult, especially for mobile contexts where screen real-estate is at a premium.