Final Project Documentation – The Cereal Killers Group 24

The Cereal Killers

Group 24: Andrew, Bereket, Lauren, Ryan


Our Project

A program to create computer shortcuts that can be replayed using hand gestures via a wireless gesture bracelet.

Previous Posts

  • P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/the-cereal-killers/
  • P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-contextual-inquiry-and-task-analysis/
  • P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-brisq-the-cereal-killers/
  • P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-group-24/
  • P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-cereal-killers-team-24/
  • P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/07/p6-usability-study/

 

Our Final Project in Action

GUI:

Gesture Bracelet Design:

Gesture Recognition Demo:

Bracelet Images:


Gesture Bracelet

Side View 1

Side View 2

Bracelet on Wrist

Changes since P6

  • In P6, since we did not have the gesture recognition or gesture recording working yet, we made another GUI with temporary buttons. Now we have a GUI with the final layout, which reflects the program’s ability to record gestures.
  • One button would “record the gesture,” but when it was pressed we used a Wizard of Oz technique, simply remembering which gesture the user had assigned to an action. We have since decided to go with six predefined gestures.
  • We had another button to replay the actions. Now, the actions are replayed when the gesture mapped to them is performed.
  • We have gesture recognition.
  • We created the gesture bracelet using wireless XBee devices, an Arduino Uno, and an accelerometer.
  • We wrote code in Python to receive data from the bracelet wirelessly, compare it with a library of recorded data, and find the closest match using a nearest-neighbor DTW (dynamic time warping) algorithm (see the sketch after this list).
  • We were also able to make the bracelet easier to use. Instead of shaking before and after a gesture to signal its start and stop, the user now simply jerks the bracelet in any direction fast enough to trigger the start of transmission from the bracelet to the laptop, which then receives roughly 3/4 of a second of data. Thus, the user no longer needs to perform an action to end the gesture.
  • The keylogger handles multiple keys. In P6, it would not record when multiple keys were pressed simultaneously.
  • Now that we have predefined gestures, our GUI maps gestures to actions by writing the mapping to a file.
  • The GUI therefore includes a page that lists the available gestures.
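A minimal sketch of the nearest-neighbor DTW matching mentioned above is shown below. This is illustrative only, not our actual code: it assumes mlpy’s dtw_std, (N, 3) NumPy arrays of accelerometer samples, and hypothetical template file names.

    # Illustrative sketch, not the project's actual code: nearest-neighbor gesture
    # classification with DTW, assuming mlpy's dtw_std and roughly 3/4 of a second
    # of (x, y, z) accelerometer samples per gesture. Template file names are
    # hypothetical.
    import numpy as np
    import mlpy

    GESTURES = ["swipe_left", "swipe_right", "swipe_up",
                "swipe_down", "circle_cw", "circle_ccw"]

    def load_templates():
        """Load one recorded example per gesture; each is an (N, 3) array."""
        return {name: np.load("templates/%s.npy" % name) for name in GESTURES}

    def dtw_distance(a, b):
        """Sum per-axis DTW distances between two (N, 3) accelerometer traces."""
        total = 0.0
        for axis in range(3):
            x = np.ascontiguousarray(a[:, axis], dtype=float)
            y = np.ascontiguousarray(b[:, axis], dtype=float)
            total += mlpy.dtw_std(x, y, dist_only=True)
        return total

    def classify(sample, templates):
        """Return the gesture whose template is nearest under the DTW distance."""
        return min(templates, key=lambda name: dtw_distance(sample, templates[name]))

    if __name__ == "__main__":
        templates = load_templates()
        sample = np.load("incoming_gesture.npy")  # hypothetical captured gesture
        print("Recognized gesture:", classify(sample, templates))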

 

 

Evolution of Our Goals and Design

Our goal of creating an easy-to-use system that makes users more connected to their laptops has not changed. However, due to feasibility constraints, module functionality, and user feedback, we made some changes to how we implemented this goal.

Six Predefined Gestures vs. User-Created Gestures

Based on feedback from our interviews, we went with six predefined gestures rather than having users create their own. Our users found it hard to think up gestures on the spot: in our usability tests, users spent quite a few minutes simply trying to think of a few gestures, and they often needed suggestions or were hesitant about their choice of movement. With predefined gestures, a user does not have to worry about creating a gesture, and all of the gestures are distinct, though this does limit the variety available to the user.

 

Bracelet size

Our initial plan was to have the bracelet small and light for comfort and ease of use. To do this, we had planned to use Bluetooth to transmit the accelerometer data to the computer. However, our Bluetooth module did not work properly, so we had to go with the bulkier radio frequency option using XBee devices. Additionally, after testing a Femtoduino and an Arduino Micro, we found that neither would work with the Bluetooth or XBee devices, so we had to use the larger Arduino Uno. Therefore, we had to modify the bracelet and increase its size.

 

Evaluation

We were happy with the GUI we were able to make and with the ability to log both keyboard and mouse input and replay them. We felt the GUI was intuitive and provided everything the user needed to understand and interact with the program. We were also very happy with our gesture recognition. We could recognize six gestures: left and right swipes, up and down swipes, and clockwise and counterclockwise circles. We felt that these covered a nice range of intuitive gestures and were distinct enough to be recognized effectively. The unfortunate part of our project was our inability to link the gesture recognition with the GUI. We had problems installing the necessary libraries for gesture recognition on Windows, where the GUI and shortcut processing were written, so we were left with a Mac that could recognize gestures and a Windows laptop that contained the GUI.

We definitely felt that this was applicable in the real world, and the users who tested the product agreed. It has a place in the market for expanding computer use, and we think the three scenarios we came up with are still applicable. It would be exciting to see how the device would do in the real world, and since we mostly used low-cost materials whose cost could be reduced further by buying in bulk, we think the device could be priced in an attractive range. Of course, the program would need to be linked to the gesture recognition so both could run on the same laptop, but we feel that this project is very applicable; some users even asked us to update them when we finished because of their interest in the product.

 

Moving Forward

We would of course like to link the gestures and the GUI on the same computer. We think that with some more time, we could either figure out how to install the problematic library (pygsl on Windows) or switch to a machine learning library that is compatible with Windows. We would also like to investigate more wireless and microcontroller options so we could reduce the size of the bracelet. We were happy with how compact the final product was, but we feel that it could be reduced even further for a sleeker design. We would also like to replace the XBees, which require a receiver hooked up to the computer, with a Bluetooth transmitter on the bracelet that could pair with any Bluetooth-compatible laptop.

Future testing would include observing users using the bracelet with the GUI and seeing how easily they are able to pair gestures with actions. Additionally, we would like to watch users wearing the bracelet itself. We were happy that we were able to put code on the bracelet that activates it with just a simple jerk, and we felt that this made it easier to use than the initial shake actions. We would like to see whether users agree, and whether they find it intuitive and easy to use.

Code

Zip file:

https://www.dropbox.com/s/cohxt7l9eo6h378/brisq.zip

The libraries we used with our code were numpy, scipy, pyserial, pygsl, and mlpy, all Python libraries. numpy was used to store and load data, mlpy was used to compare accelerometer vectors (x, y, and z), pyserial was used to read data from the USB port, and scipy and pygsl were required by the mlpy library. (A small sketch of how the serial-reading piece fits in is given below.)
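For concreteness, here is a minimal sketch of the serial-reading step, assuming a made-up port name, baud rate, and a comma-separated “x,y,z” line format (the real bracelet protocol may differ):

    # Minimal sketch of reading accelerometer data from the XBee receiver with
    # pyserial and storing it with numpy. The port name, baud rate, and line
    # format are assumptions; the real bracelet protocol may differ.
    import numpy as np
    import serial

    def read_gesture(port="/dev/ttyUSB0", baud=9600, n_samples=75):
        """Read roughly 3/4 of a second of accelerometer samples."""
        ser = serial.Serial(port, baud, timeout=1)
        samples = []
        while len(samples) < n_samples:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            try:
                x, y, z = (float(v) for v in line.split(","))
                samples.append((x, y, z))
            except ValueError:
                continue  # skip incomplete or garbled lines
        ser.close()
        return np.array(samples)  # shape (n_samples, 3)

    # np.save("incoming_gesture.npy", read_gesture())  # then match against templates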

Poster/Presentation Documents:

https://docs.google.com/presentation/d/1GAqPCLOdXqt-w-z_E4qPgnXzB4sn4m1c-HoclzbxlmE/edit?usp=sharing

Final Project — AcaKinect

Group Name: Deep Thought

Group #25: Vivian Q., Harvest Z., Neil C., Alan T.

1. A 1-SENTENCE DESCRIPTION OF YOUR PROJECT

AcaKinect is voice recording software that uses a Kinect for gesture-based control, allowing content creators to record multiple loops and beat sequences and organize them into four sections on the screen. This provides a more efficient and intuitive music recording interface for those less experienced with the technical side of music production.

2. HYPERLINKS TO PREVIOUS BLOG POSTS

P1: http://blogs.princeton.edu/humancomputerinterface/2013/02/22/a-cappella-solo-live/
P2: http://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-2/
P3: http://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-vahn-group-25/
P4: http://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-acakinect/
P5: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-acakinect/
P6: http://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-acakinect/

3. VIDEOS & PHOTOS

Videos:




Description:

Gesture controls:

  • Record Loop (Raise either arm above head) — Records a single loop of 8 beats at a set tempo. A block representing the loop appears in the column the user is standing in when recording is finished. A blue loading circle appears at the initial command.
  • Delete Loop (Put hands together in front of torso and swipe outward) — Deletes the last recorded loop in the column the user is standing in. If there are no loops, it does nothing. A green loading circle appears at the initial command.
  • Delete Column Loop (Put hands together above head and swipe straight down) — Deletes all recorded loops in the column the user is standing in. If there are no loops, it does nothing. A pink loading circle appears at the initial command.

Photos:

4. BULLETED LIST OF CHANGES

  • Blocks added to visually indicate the number of loops in each column: this is easier to read than a number in each column and provides a better visual indication of the structure and layering of loops across the sections.

  • More beautiful and colorful UI: since we couldn’t get OpenGL to run in Processing without distorting our loop-recording timing, we changed the UI using bare Processing to create a more visually engaging prototype. Each section is a different color (green, purple, blue, and red), the loop blocks have different colors from column to column, the beat count is more visible (a complaint from user testing!), and messages are displayed clearly and colorfully. We kept conventional recording colors in mind, so the “Begin recording in…” message is green and the “Recording!” message is red.

  • More prominent highlighting of the current section: some users needed additional guidance to figure out the fundamental structure of recording different parts in different sections so that each part can be edited or changed separately, so we now indicate the current section much more clearly by highlighting the entire column on screen instead of providing a small tab at the bottom.

  • More onscreen visual cues: a message prompting the user to calibrate, a message notifying the user when a column has reached its maximum number of loops (4), a countdown to the next loop recording, etc. We pop up a prominent and visually consistent notification in the middle of the screen that is easy to read and provides immediate feedback.

  • Better gesture recognition with a microphone in hand: the delete-loop and begin-recording gestures were both modified to be easier to perform while holding a microphone; the delete-loop gesture was entirely changed to be more intuitive and easier to perform with a mic, while the begin-recording gesture received a few code tweaks to prevent false positives.

  • A “delete column” gesture: during the demo, we noticed that people found it tedious to delete loops one at a time to clear a column. This gesture makes it faster and easier to delete all the loops in a section at once.

  • Gesture loading indicators: one problem we noticed during user testing for P6 was that users often didn’t know whether their gestures were being recognized. To make this clear, we based our UI fix on many existing Kinect games. When a command gesture is recognized (record/delete/delete-column), a small circle starts filling up on the hand that initiated the command. Once the circle fills completely, the command executes. This ensures that users aren’t triggering gestures accidentally (since they have to hold the pose for 1–2 seconds) and notifies the user that the Kinect recognizes their command. Each loading circle is a different color for clarity and differentiation between commands. (A minimal sketch of this dwell-to-confirm logic is given below.)
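The loading-circle behavior boils down to a dwell-time confirmation. The sketch below is illustrative only (our implementation is in Processing with SimpleOpenNI); the 1.5-second dwell and the names are assumptions.

    # Illustrative dwell-to-confirm sketch (the real implementation is in
    # Processing). The 1.5 s dwell time and names are assumptions.
    import time

    DWELL_SECONDS = 1.5  # how long a pose must be held before the command fires

    class DwellConfirm:
        def __init__(self):
            self.current = None  # gesture currently being held
            self.started = None  # when the hold began

        def update(self, gesture):
            """Call once per frame with the detected gesture (or None).
            Returns (command, progress); progress in [0, 1] drives the circle."""
            now = time.time()
            if gesture != self.current:
                self.current, self.started = gesture, now
                return None, 0.0
            if gesture is None:
                return None, 0.0
            progress = min((now - self.started) / DWELL_SECONDS, 1.0)
            if progress >= 1.0:
                self.current, self.started = None, None
                return gesture, 1.0  # fire the command exactly once
            return None, progress    # keep filling the loading circle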

5. HOW AND WHY YOUR GOALS/DESIGN EVOLVED

The original concept for this project was much more complex. Over the course of development and testing, the specification was drastically simplified — coincidentally making development simpler, but more importantly, reaching our target audience more effectively. The original idea was to create a gesture-based system that enables highly complex music creation and manipulation, intended for both live performance and recording. With some testing, we discovered that what we really should create is a tool for those with little or no music production experience. For those with experience, there are hugely complex systems that offer all the detailed functionality needed. However, there are a large number of musicians who don’t even want to worry about music production at the level of GarageBand; these may be very talented musicians who want to use loops and effects, but they may not be interested enough in the technical side of music production to go learn any specific piece of software or hardware. We decided that AcaKinect would slot in at the very bottom of this technological chain: simple enough to pick up and use immediately, and yet fun and flexible enough to retain users and potentially allow them to develop an interest in learning more complex music production tools.

We also realized that the format does not particularly suit recording well; if a user has progressed to the point of wanting to record, edit, and polish these creations, there are a huge number of software loopers available that offer much more flexibility, as previously mentioned; in addition, more experienced musicians who are looking to produce a record will probably turn to a full-fledged Digital Audio Workstation that allows maximum control at the cost of a rather higher learning curve. Thus, we see this as an experimental tool. One tester, who is the music director for an a cappella group on campus, commonly uses a midi keyboard for just this purpose; when arranging a piece for a cappella, he writes down each part and then plays them together on a keyboard to see how they sound. In these cases, it may be easier and more flexible to just test out these parts using AcaKinect, encouraging more exploration and experimentation. To that end, we pared down the specification to the bare minimum needed to provide a flexible looping environment with spatial awareness (columns) to suggest to the user how a piece of music might be structured. There are only two primary functions – record and delete – so the user is not confronted with anything approaching “mixing board” levels of complexity. The division into four sections, and the physical requirement of moving left and right in order to record in each section, suggests a natural split in what to record in each section; users naturally choose to record all percussive sounds in one, all of the bassline in a second, and then maybe a more prominent harmony in a third. This sets them up well for moving on to more complex musical production tools, where structuring loops and tracks in a hierarchy is very important while organizing, say, twenty tracks all at once.

6. CRITICAL EVALUATION OF PROJECT 

We believe that with further work and development, this could be a viable and useful real-world system that fills a possible gap in the market that has never really been touched. We’ve already discussed how, in terms of simplicity and feature set, AcaKinect would be slotting in under all the music production software currently available; what we haven’t really covered is the opposite end of the music spectrum, which is currently occupied by games like Rock Band and Guitar Hero. These games do introduce a musical element in the form of rhythmic competence (and in the case of Rock Band, fairly lenient vocal pitch competence), but fundamentally the music is still generated by the game, and the user just kind of tags along for the ride. The goal for AcaKinect with further iteration is a product that is almost as game-like as Rock Band; a fun system for testing out loops and riffs in the living room, and a useful tool for playing with sound and prototyping arrangements. It’s important that AcaKinect is seen as more of an exploratory product; unlike working in a DAW where the user may have a very good idea what she wants to create, AcaKinect would be a live prototyping tool that enables a lot of exploration and iteration of various sounds and harmonies. The simplicity of the controls and the lack of any real learning curve only helps to make that a fairly easy task.

The application space, as far as music production (and even more loosely, music interaction) tools go, is giant, and spans all the way from Rock Band to Logic Pro. There is no real point going for new or extra functionality; whatever arcane feature you’re looking for, it probably already exists, or if it doesn’t, it would probably be best suited to a Logic plugin rather than a whole new product, since the pros are the ones looking for new features. Those who are intimidated by the sheer number of buttons and knobs and controls in a typical piece of music software or hardware seem to much prefer a great simplification of the whole music production process (as would make sense), and we found that there is a big opening in this space for software that can be picked up and used immediately by any musician, no matter whether she has technical background in music production or not.

7.  HOW WE MIGHT MOVE FORWARD WITH THE PROJECT 

There are still quite a few implementation challenges involved in making this product one that is truly easy to use for any musician. Firstly, given its intended use as a fun and exploratory product in the living room, it’s a little problematic that it only works for one person. If, say, a family is playing with it, it would be much better to allow several people to be recognized at once (even if only one loop is recorded at a time), so that several people may collaborate on a project. SimpleOpenNI is capable of tracking limbs for two people, which is generally what Kinects are used for in Xbox games as well; we could thus support two people without too much extra trouble, but supporting more may be difficult. Secondly, this prototype uses Processing and Minim for ease of coding, testing, and iteration; however, since Minim really wasn’t designed for low-latency audio looping, it has certain issues with timing and tempo that we have implemented hacks to get around. In an actual polished product, however, latency would have to be much lower and the rhythm would have to be much more solid; to that end, a more robust audio framework (hooking directly into Apple’s CoreAudio libraries, for example) would allow us to achieve much better results.

Finally, there’s the issue of physical setup; the user needs to be able to hook up a microphone and play sound out of speakers such that the balance between the current vocal and the sound from the speakers is about right, but without triggering any feedback or recording the speaker output back in the microphone. There are advanced noise-cancellation techniques that can be implemented to signal-process away the feedback, but these will sometimes add artifacts to the recording; one way is just to require that the user use a highly directional microphone with a sharp signal strength falloff, so as to reduce the chance of feedback. An on-screen notification that informs the user of feedback and turns down the levels of the speakers when feedback occurs might also be convenient. Alternate setups may also be a good thing to test; a wireless clip-on mic of the sort used on stage during live performances, for example, may prove slightly more expensive, but it may let users feel less awkward and make gestures easier.

8. LINK TO SOURCE CODE 

AcaKinect Source Code (.zip file)

9. THIRD-PARTY CODE USED

10. LINKS TO PRINTED MATERIALS

AcaKinect poster (.pdf file)

 

Final Blog Post: The GaitKeeper

Group 6 (GARP): The GaitKeeper

Group members:

Phil, Gene, Alice, Rodrigo

One sentence description:

Our project uses a shoe insole with pressure sensors to measure and track a runner’s gait, offering opportunities for live feedback and detailed post-run analysis.

Links to previous blog posts:

P1 – http://blogs.princeton.edu/humancomputerinterface/2013/02/22/team-garp-project-1/

P2 – http://blogs.princeton.edu/humancomputerinterface/2013/03/11/group-6-team-garp/

P3 – http://blogs.princeton.edu/humancomputerinterface/2013/03/27/gaitkeeper/

P4 – http://blogs.princeton.edu/humancomputerinterface/2013/04/08/gaitkeeper-p4/

P5 – http://blogs.princeton.edu/humancomputerinterface/2013/04/21/p5-garp/

P6 – http://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-the-gaitkeeper/

Pictures and Videos with Captions:

Pictures of the prototype – https://drive.google.com/folderview?id=0B4_S-8qAp4jyYk9NSjJweXBkN0E&usp=sharing . These photos illustrate the basic use of the prototype, as well as its physical form factor. You can see from them how the insole and wires fit in the shoe, and how they fit to the user’s body. The prototype was designed to have a minimal effect on the user’s running patterns, so these aspects of the user interaction are especially important.

Video of computer-based user interface – Computer-Based UI. This video (with voiceover) demonstrates the use of our user interface for saving and viewing past runs.

Video of live feedback based on machine learning – Live Feedback from Machine Learning. This video (also with voiceover) demonstrates the live feedback element of the GaitKeeper, which tells the user whether their gait is good or not.

Changes since P6:

  • Slightly thicker insole with a stronger internal structure – Thickness did not appear to be an issue for the testers, since the insole was made of a single sheet of paper. However, we observed some difficulty in getting the insole into the shoe and felt that making it slightly thicker would help solve this issue.

  • Laminated insole – One of our testers had run earlier that day, and his shoes were still slightly sweaty. The sweat from his shoe and sock made the insole stick to his foot, and the insole tore slightly when he removed it. We noticed that the taped part did not stick, and felt that making the entire insole out of a similar material would solve this issue.

  • Color changes in the UI heatmap – One of our testers noted that he found the colors in the heatmap visually distracting and different from traditional heatmaps. We corrected this by choosing a new color palette.

  • Enhanced structural support for the Arduino on the waist – After user testing, we found significant wear and tear on the Arduino box, which is attached to the user with a waistband. It was reinforced to make it more durable. It was made slightly larger, which we felt was not an issue, since users indicated that they found the previous implementation acceptably small and this change did not significantly affect the form factor.

  • Ability to run without USB connection – This was an element we had originally planned for the product but were not able to fully execute for P6. We used Wizard of Oz techniques at the time, and this implementation replaces that. Now, data can be imported into the computer from the Arduino for analysis after a run. Unfortunately, live feedback still requires a computer connection, but with further iteration it could possibly be made mobile as well.

  • Wekinator training of live feedback during running – During testing, this was a Wizard of Oz element, where the lights went on and off for predetermined amounts of time to simulate feedback from the system. This has been replaced with true live feedback informed by the Wekinator’s machine learning (a minimal sketch of the data flow follows this list).

  • Ability to save and view saved data in the UI – User testing was done with a simulated run from our own testing data rather than from actual saved runs. We have added the ability for the user to save and view their own data imported from the Arduino.

  • Ability to import Arduino data – User testing relied on the user simulating the data upload process. This is now fully implemented and allows users to see the results of their running.
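A minimal sketch of the live-feedback data flow, assuming Wekinator’s default OSC setup (inputs on port 6448 at /wek/inputs, outputs on port 12000 at /wek/outputs) and the python-osc package; read_insole() is a hypothetical stand-in for the Arduino pressure readings:

    # Sketch only: send insole pressure readings to Wekinator over OSC and react
    # to its classifier output. Assumes Wekinator defaults (/wek/inputs on port
    # 6448, /wek/outputs on port 12000) and python-osc; read_insole() is hypothetical.
    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    client = SimpleUDPClient("127.0.0.1", 6448)

    def read_insole():
        """Hypothetical stand-in for pressure values read from the Arduino."""
        return [0.2, 0.7, 0.4]  # e.g. heel, arch, toe

    def send_one_frame():
        client.send_message("/wek/inputs", [float(v) for v in read_insole()])

    def on_output(address, *values):
        """Wekinator's trained output; here 1.0 is taken to mean 'good gait'."""
        good = bool(values) and values[0] >= 0.5
        print("gait OK" if good else "adjust gait")  # would drive the LEDs

    if __name__ == "__main__":
        send_one_frame()
        dispatcher = Dispatcher()
        dispatcher.map("/wek/outputs", on_output)
        BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()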

Explanation of goal and design evolution:

We began the semester with very little information about how a runner’s gait is actually assessed, but with the understanding that it is generally based on direct observation by a trained professional. We originally planned to have a device that bridged the gait analysis demands of store employees, medical professionals, and runners themselves. Over time, we realized that one of those three user groups had a very different set of needs, which led us to focus on just store employees and frequent runners. We considered both user groups to be well informed about running; they would use the product to observe gait over the course of a run for technique modification and product selection. Our goals were then modified to better serve those user groups by focusing on the post-run analysis features, such as the ability to save and access old data.

Also, at the beginning of the semester, we had wanted to design the device to provide live feedback.  Over time, we came to realize that meaningful live feedback required a machine learning tool like Wekinator.  As a result, we were forced to maintain a computer connection for live feedback, which was a change from the fully mobile vision we had at the beginning.  This has slightly changed our vision for how the live feedback element of the product would be used; given the tethering requirement, live feedback would probably be most useful in a situation where the runner is on a treadmill and is trying to actively change their gait.  Other changes in design included a remake of the pressure-sensing insole, which our testers originally found to be sticky, difficult to place in a shoe, and overly fragile.  We moved from a paper-based structure to a design of mostly electrical tape, which increased durability without a significant cost in thickness.

 

Critical evaluation of project:

It is difficult to say whether this product could become a useful real-world system.  In testing, our users often found the product to be interesting, but many of the frequent runners had difficulty in really making use of the data.  They were able to accurately identify the striking features of their gait, which was the main information goal of the project.  One thing we observed, however, was that there were not many changes in gait between runs, with most changes occurring due to fatigue or natural compensation for small injuries.  That led us to conclude that the product might be better suited for the running store environment, where new users are seen frequently.  Given the relatively small number of running stores, we believe the most promising market for this product would be small, and focused on the post-run analysis features.  Live feedback was much less important to the running store employees, who were willing to tolerate a slight delay to get more detailed results.  We found that this space enjoys using technology already (such as slow motion video from multiple angles), and was quite enthusiastic about being able to show customers a new way to scientifically gather information about their gait and properly fit them for shoes.  Their main areas of focus on the product were reusability, the ability to fit multiple shoe sizes, accuracy of information, and small form factor.

We feel confident that further iteration would make the product easier to use and more focused on the running store employee user group, since they appear to be the ones most likely to purchase the product. That being said, we are unsure whether this device could ever be more than a replacement for existing video systems. Despite several conversations with running store employees, including contextual interviews while they met with actual customers, we were unable to identify any real information uses beyond those currently served by visual video analysis. While our product is more accurate and takes a more scientific approach, achieving adoption would likely be a major hurdle given the money such stores have already invested in video systems.

While the live feedback functionality is a quite interesting element of the project, it seems to have a less clear marketable use. The runners we spoke to felt that live feedback was an interesting and cool feature, but not one that they would be willing to pay for. Most (before testing) felt that their gait did not change significantly while running, and in surveys indicated that they already use a variety of electronics to track themselves while running, including GPS devices, pedometers, and Nike+. The runners consistently rated information such as distance, location, pace, and comparison to past runs as more important than gait, running style, and foot pressure. They also indicated an unwillingness to add more electronic devices to their running, which already often involves carrying a large phone or mp3 player. As a result, one avenue with some potential would be integration into an existing system. The most likely option in this field would probably be Nike+, which is already built around a shoe. Designing a special insole that communicates with the shoe (and through it, the iPod or iPhone) would be a potential way to implement the gait feedback device as a viable product for sale. Clearly, this would involve significant licensing and product integration issues (with both Nike and Apple), but without such an integration there does not appear to be a real opportunity in the consumer space. As a result, we concluded that the product’s future would almost certainly require a stronger focus on the running store employee demographic.

 

Future steps if we had more time:

With more time, one of the things we would focus on is the training of the Arduino-based live feedback. Our users commented several times that the two-light system was not enough to really guide changes in gait, especially given that many changes in running style happen subconsciously over time as the runner gets tired. The system did not give enough indication of how to fix the problem, only that a problem existed. This could be solved through integration with a system like Nike+ or other phone apps, which would allow a heatmap GUI to give directions to the runner. Before implementing such a system, we would like to speak more with runners about how they would interact with this format of live feedback, and whether they would want it at all. Following that, more testing would be done on the most effective ways to convey problems and solutions in gait through a mobile system.

Although live feedback is likely the area with the most opportunity for improvement in our prototype, our understanding of the targeted users indicates a stronger demand for the analysis portion for use in running stores. Therefore, we would likely focus more on areas such as reusability and durability, to ensure that multiple users with different characteristics could use the product. Furthermore, we would revisit the idea of resizing, which is currently done by folding the insole. It is possible that multiple sizes could be made, but resizing is a more attractive option (if it is feasible) because it allows running stores to purchase only one. This would likely involve more testing along the lines of what we already completed: having users of different shoe sizes attempt to use the product, either with or without instructions on resizing. Additionally, for the running store application, we would seriously consider doing something to limit the amount of wiring running along the user’s leg. This could be done using a Bluetooth transmitter strapped to the ankle, or through a wired connection to a treadmill. While this is a significant implementation challenge, it seems that a feasible solution likely exists. Lastly, we found the machine learning tools to be quite interesting, and would also consider exploring the use of a veteran employee’s shoe recommendations to train our device to select shoes for the runner. This would allow the store to hire less experienced employees and save money. Such a system would also likely require testing, in which we would gain a better understanding of how it would affect the interaction between the store employee and the customer. It would be very interesting to see whether such a design undermined the authority of the employee, or whether it made the customer more likely to buy the recommended shoe.


Source code and README zip file: 

 https://docs.google.com/file/d/0B4_S-8qAp4jyZFdNTEFkWnI2eG8/edit?usp=sharing

 

Third-party code list:

PDF Demo Materials:

https://docs.google.com/file/d/0ByIgykOGv7CCbGM3RXFPNTNVZjA/edit?usp=sharing

 

PFinal – Name Redacted

Group 20 – Name Redacted

Brian, Ed, Matt, and Josh

Description: Our group is creating an interactive and fun way for students and others who do not know computer science to learn the fundamentals of computer science without the need for expensive software and/or hardware.

P1 – https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-name-redacted/

P2 – https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-name-redacted/

P3 – https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-name-redacted/

P4 – https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-name-redacted/

P5 – https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-name-redacted/

P6 – https://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-name-redacted/

Video:

http://www.youtube.com/watch?v=qSJOHOulwMI

This video shows a compilation error that is detected by the TOY Program.  It tells the user where the error occurred and what the arguments of the command should be.

http://www.youtube.com/watch?v=YD4E56iZWfI

This video shows a TOY Program that prints out the numbers from 5 to 1 in descending order. The program takes 1.5 seconds to execute each step so that a user can see the program progress. An arrow sits over the line that is currently being executed.

http://www.youtube.com/watch?v=RO5azUA_Vtw

Number Bases

Bulleted List of Changes:

We made these changes after watching the testers use our code.

  • Debugged some of the assembly code.

  • There is application persistence – walking in front of a tag does not cause the application to restart.

  • We wrote on the back of each tag what is printed on the front so that users can easily pick up the tags they need.

We made these changes based on the suggestions of the users.

  • We added debugging messages to the “Assembly” program to provide feedback when a user creates a compile-time or runtime error. Before, we just highlighted rows.

  • We cleaned up the numbers display so that decimal, binary, and hexadecimal each have their own section.

  • The “Numbers” program shows how the numbers are generated.

  • Initialization does not require users to cover up tags.

Evolution of the Goals/Design (1-2 Paragraphs):

Goals

Our idea began with a problem: students in America are not learning enough about computer science. The goal that we set during P1 was to create a technology that helps facilitate CS education for students of all ages. The first step we took toward that goal was to refine it further. At first we were thinking about how we could make learning CS more engaging, and we toyed with games and other teaching tools. Ultimately, however, we decided that our goal was to make a lightweight, relatively inexpensive teaching tool that teachers could use to engage students in the classroom and give students a tactile interaction with the concepts they are learning.

Design

With this goal in mind, we set out on the process of design. We immediately took up the technology of AR tags. We really liked the fact that you can print them out for essentially pennies and make new lessons very cheaply. We also liked the idea that teachers could throw this tool together from technology they might already have floating around a classroom (e.g., an old webcam or projector). The process of design, like the process of goal-setting above, was about specificity. Once we had the basic design of AR tags, we delved deeper into how this design would actually look. We created metaphors such as “Tag Rows,” where our system identifies horizontal rows of tags automatically, and we played with various types of user feedback, displaying lines and debug messages on top of the tags. (A sketch of the row-grouping idea is given below.)
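A minimal sketch of the “Tag Rows” idea, grouping detected tags into horizontal rows by their y coordinate. The Tag tuple and the pixel tolerance are illustrative assumptions, not our actual data structures:

    # Illustrative sketch of the "Tag Rows" metaphor: group detected AR tags into
    # horizontal rows by y coordinate. The Tag tuple and the 40 px tolerance are
    # assumptions for illustration.
    from collections import namedtuple

    Tag = namedtuple("Tag", ["id", "x", "y"])  # tag center in camera pixels
    ROW_TOLERANCE = 40  # tags within 40 px vertically are treated as one row

    def group_into_rows(tags):
        """Return rows from top to bottom, each sorted left to right."""
        rows = []
        for tag in sorted(tags, key=lambda t: t.y):
            if rows and abs(tag.y - rows[-1][-1].y) <= ROW_TOLERANCE:
                rows[-1].append(tag)
            else:
                rows.append([tag])
        return [sorted(row, key=lambda t: t.x) for row in rows]

    # Example: two rows of tags laid out on the desk
    tags = [Tag(3, 120, 410), Tag(7, 300, 402), Tag(1, 110, 200), Tag(9, 280, 195)]
    for row in group_into_rows(tags):
        print([t.id for t in row])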

HCI as a Platform

Moving beyond the scope of the required sections of our project, we think one of the areas where we went above and beyond is not only creating a stellar interface but actually creating a usable platform for interface development. We created sandboxes, metaphors, and ways that developers can safely and quickly create applications on our platform. We expose data structures (like the Tag Library) that give developers a myriad of options for accessing the data on the page. Although clearly not as advanced or fleshed out, our platform is akin to iOS, which takes the complicated problems of capacitive touch and exposes them as simple events for developers. In the process of making this platform we learned a lot about the possibilities of our project.

Critical Evaluation of the Project (2-3 Paragraphs):

After working on this project for a few months, we believe a future iteration of it could be turned into a useful real-world system. There is certainly a need for teaching computer science, particularly to young students. Very few middle schools teach computer science, since they do not have the resources or the teacher training. Many middle schools cannot afford to have computers for every student, or even multiple computers per classroom. These schools do usually have one projector per classroom and one laptop for the teacher. Our project combines the projector, the laptop, and a $30 webcam, and would allow schools to teach computer science without expensive hardware and/or software. There is a clear need for computer science instruction, and a future iteration of our project could help eliminate some of the obstacles that currently prevent it from being taught.

We made our program modular so that future users could develop programs to teach what they wanted. Just as we created the “Numbers” program and the “Assembly” program, a teacher could create another program very easily. All they would have to do is create a few tags corresponding to their lesson and some source code to interpret the tags as needed. Our program provides them with a list of tag rows and a graphic; they can change the graphic from their program and parse the tag rows to create a reasonable image. If people used our program in the real world, they could easily customize the interfaces we provided to suit their needs. Thus, although we only created a few lessons, other users could create many lessons to teach multiple topics.

Even outside the classroom, there is a great need for computer scientists and people who know how to program. Some of these people have taken computer science courses in college but want to further their knowledge and understanding of the subject. Our testers varied in their computer science backgrounds, yet they were all able to learn and practice by using our system. By creating this project and by interviewing and testing potential users, we learned a lot about the difficulties of computer science and the topics that students find most difficult, and we geared our project towards teaching those topics better. We did this by creating a binary program and a TOY program that clearly shows memory management with the registers.

 

Moving forward with the Project (1-2 Paragraphs):

Looking ahead, there are many ways we could improve our project design and develop it into a fully functional retail product. Currently our project is still a basic proof of concept demonstrating its potential as a useful teaching tool. Keeping in mind our initial goal to develop an interface that would be used in a classroom environment by both teachers and students, our primary focus in developing the project further would be to improve the interface, create more built-in lessons, and develop a platform for teachers to easily create new lessons.

In addition to making the overall presentation cleaner and more professional looking, the interface improvements would primarily be graphical additions such as animations and indicators that communicate to the user what is currently happening as they interact with the system. These improvements would not only make the system more intuitive, but also more entertaining for students. Additional useful features might include voiceovers of instructions and demonstration videos on how to use various features of the system.

One of the most important things we would like to improve is the library of built-in lessons available for users to choose from. Currently we only have two basic lessons: base conversion and a TOY programming environment. A more developed lesson library might contain tasks involving more advanced programming topics, teach various programming languages, or even include educational games. Lesson topics also would not need to be restricted to CS; our system could effectively be used to teach virtually any subject, provided the right lessons are created.

Another key improvement we would like to make is developing a lesson creation environment where teachers can quickly and easily design their own lessons for their students. Any lesson library we make on our own will inevitably be lacking in some respect. Giving users the power to create their own lessons, and even share them online with each other, will vastly improve the potential of our system as a teaching tool.

Our code base can be found at: https://docs.google.com/file/d/0B9H9YRZVb0bMY2JCZzhPa19ZZE0/edit?usp=sharing

Our README can be found at: https://docs.google.com/file/d/0B9H9YRZVb0bMNlllTElqdHdBYnc/edit?usp=sharing

Bulleted list of Third Party Code (including libraries):

  • Homography

  • Used to calculate the transformation between camera coordinates and projector coordinates (a sketch of this mapping follows)
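For illustration, here is a sketch of how a homography maps camera coordinates to projector coordinates using OpenCV; the four point correspondences are placeholders that would come from a calibration step:

    # Sketch of the camera-to-projector mapping via a homography, using OpenCV.
    # The four point correspondences are placeholders for a real calibration.
    import numpy as np
    import cv2

    # The same physical locations as seen by the camera...
    camera_pts = np.float32([[102, 88], [548, 95], [560, 410], [95, 402]])
    # ...and where they should land in projector coordinates.
    projector_pts = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])

    H, _ = cv2.findHomography(camera_pts, projector_pts)

    def camera_to_projector(x, y):
        """Map a tag center detected in the camera image to projector coordinates."""
        pt = np.float32([[[x, y]]])  # shape (1, 1, 2), as OpenCV expects
        px, py = cv2.perspectiveTransform(pt, H)[0, 0]
        return float(px), float(py)

    print(camera_to_projector(320, 240))  # roughly the middle of the projection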

 

Our Poster:

https://docs.google.com/file/d/0B9H9YRZVb0bMb2NvbUFhZWZ6VTA/edit?usp=sharing

Additional Demo Print Outs:

https://docs.google.com/file/d/0B9H9YRZVb0bMS2pxZmQ1cjJZSTg/edit?usp=sharing

The Elite Four – Final Project Documentation

The Elite Four (#19)
Jae (jyltwo)
Clay (cwhetung)
Jeff (jasnyder)
Michael (menewman)

Project Summary

We have developed the Beepachu, a minimally intrusive system to ensure that users remember to bring important items with them when they leave their residences; the system also helps users locate lost tagged items, either in their room or in the world at large.

Our Journey

P1: http://blogs.princeton.edu/humancomputerinterface/2013/02/22/elite-four-brainstorming/
P2: http://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-elite-four/
P3: http://blogs.princeton.edu/humancomputerinterface/2013/03/29/the-elite-four-19-p3/
P4: http://blogs.princeton.edu/humancomputerinterface/2013/04/08/the-elite-four-19-p4/
P5: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/the-elite-four-19-p5/
P6: http://blogs.princeton.edu/humancomputerinterface/2013/05/06/6008/

Videos

Photos

Changes Since P6

  • We added sound to the prototype’s first function: alerting the user if important items are left behind when s/he tries to leave the room. The system now plays a happy noise when the user opens the door with tagged items nearby; it plays a warning noise when the user opens the door without tagged items in proximity.

  • We added sound to the prototype’s item-finding feature. At first, we had used the blinking rate of the two LEDs to indicate how close the user was to the lost item. We improved that during P6 by lighting up the red LED when the item was completely out of range, and using the green LED’s blinking rate to indicate proximity. We now have sound to accompany this. The speaker only starts beeping when the lost item is in range, and the beeping rate increases as the user gets closer to the lost item.

By adding sound to the prototype, we made our system better able to get the user’s attention; after all, it is easy to overlook a small, flashing LED, but it is much harder to both overlook the LED and ignore the beeping. For the second function, sound allows the user to operate the prototype without taking their eyes off their surroundings, improving their ability to visually search for missing items. (A minimal sketch of the proximity-to-beep mapping is given below.)
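The sketch below is illustrative only (the real code runs on the Arduino); read_signal_strength(), the thresholds, and the beep intervals are hypothetical:

    # Illustrative sketch of the item-finding feedback (the real code runs on the
    # Arduino): map RFID signal strength to LED state and beep interval.
    # read_signal_strength(), the thresholds, and the intervals are hypothetical.
    OUT_OF_RANGE = 0.1  # below this, light the red LED and stay silent

    def read_signal_strength():
        """Hypothetical 0..1 reading from the active RFID receiver."""
        return 0.6

    def feedback_step():
        strength = read_signal_strength()
        if strength < OUT_OF_RANGE:
            return "red LED on, no beep", None
        # Closer item -> stronger signal -> shorter gap between beeps (1.0 s .. 0.1 s)
        interval = 1.0 - 0.9 * min(strength, 1.0)
        return "green LED blinking, beep", interval

    print(feedback_step())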

Goals and Design

Our primary goal has not changed since the beginning of the semester. Our prototype’s main function, as before, is to save users from their own faulty memories by reminding them not to forget important items when they leave their residences. However, our prototype has evolved to include a related secondary goal/task, which is finding items once they are already lost/forgotten. Because our hardware already incorporated item-detection via RFID, this was a natural extension not only of our stated goals, but also of our hardware’s capabilities.

Critical Evaluation

From our work on this project, it appears that a system such as ours could become a viable real-world system. The feedback we received from users about the concept shows that this has the potential to become a valuable product. We were able to address most of our testers’ interface concerns with little difficulty, but we still suffered from hardware-based issues. With improved technology and further iterations we could create a valuable real-world system.

The primary issue with the feasibility of developing this project into a real world system comes from hardware and cost constraints, rather than user interest. The current state of RFID presents two significant issues for our system. The first is that, in order for our system to function optimally, it needs a range achievable only with active RFID or extremely sophisticated passive RFID. That level of passive RFID would be unreasonable in our system due to its astronomical cost (many thousands of dollars). Active RFID, which our prototype used, is also quite expensive but feasible. Its primary issue is that the sizable form factor of most high quality transmitters does not allow for easy attachment to keys, phones, wallets, etc. Therefore, ideally, our system would have a high-powered and affordable passive RFID, but currently that technology appears to be unavailable. EAS, the anti-theft system commonly used in stores, is another feasible alternative, but its high cost is also prohibitive.

Moving Forward

As stated in the previous section, the best option for moving forward would be to improve the hardware, specifically by switching to more expensive high-powered passive RFID. Other avenues of exploration include refinement of range detection for our first task, which would become increasingly important with increasingly powerful RFID detection systems, and implementation of tag syncing. Range limiting matters because, if our system can detect tags from a significant distance, it must not give a “false positive” when a user has left items somewhere else in their relatively small room but does not actually have them at the door. Syncing of tagged items would become important for a system with multiple tags; it would allow users to intentionally leave behind certain items, or allow multiple residents of a single room/residence to have different “tag profiles.” Syncing could also permit detection of particular items, which would allow greater specificity for our second and third tasks. Finally, for most of our testing we used laptops as a power source for the Arduino. This was fine for the prototype, but in a real version of this product, the system would ideally have its own power source (e.g., a rechargeable battery).

Code

https://drive.google.com/folderview?id=0B5LpfiwuMzttMHhna0tQZ1hXM0E&usp=sharing

Third Party Code

Demo Materials

https://drive.google.com/folderview?id=0B5LpfiwuMzttX2ZPRHBVcmlhMUk&usp=sharing

The Backend Cleaning Inspectors: Final Project

Group 8 – Backend Cleaning Inspectors

Dylan

Tae Jun

Green

Peter 

One-sentence description
Our project is to make a device to improve security and responsibility in the laundry room.

Links to previous projects

P1, P2, P3, P4, P5, P6

Videos

Task 1: Current User Locking the Machine

http://www.youtube.com/watch?v=U3c_S24bCTs&feature=youtu.be

Task 2.1: Waiting User Sending Alert During Wash Cycle

http://www.youtube.com/watch?v=0TzK4zgQg28&feature=youtu.be

-If the waiting user sends an alert during the current wash cycle, the alert will queue until the cycle is done. When the cycle is done, the current washing user will immediately receive a waiting user alert in addition to the wash cycle complete notification. The grace period will begin immediately after the wash cycle ends.

Task 2.2: Waiting User Sending Alert When Wash Cycle Complete

http://www.youtube.com/watch?v=JoSIdzqKBC4&feature=youtu.be

-If the waiting user sends an alert after the current wash cycle is complete, the current washing user will immediately receive a waiting-user alert and the grace period will begin. (A minimal sketch of this alert and grace-period logic follows the task list.)

Task 3: Current User Unlocking Machine to Retrieve Laundry

http://www.youtube.com/watch?v=UP3rVQB4EqM&feature=youtu.be
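A minimal sketch of the alert and grace-period logic described in the tasks above (the real implementation runs on the Arduino); the five-minute grace period and the commented-out notification calls are hypothetical:

    # Sketch of the alert/grace-period logic; the real implementation runs on the
    # Arduino. The 5-minute grace period and notification hooks are hypothetical.
    GRACE_PERIOD = 5 * 60  # seconds the owner has before the machine unlocks

    class LaundryLock:
        def __init__(self):
            self.cycle_running = True
            self.alert_queued = False
            self.grace_deadline = None

        def waiting_user_alert(self, now):
            if self.cycle_running:
                self.alert_queued = True  # hold the alert until the cycle ends
            else:
                self.start_grace(now)

        def cycle_finished(self, now):
            self.cycle_running = False
            # notify_owner("wash cycle complete")  -- e.g. via the Django email server
            if self.alert_queued:
                self.alert_queued = False
                self.start_grace(now)

        def start_grace(self, now):
            # notify_owner("a user is waiting; please pick up your laundry")
            self.grace_deadline = now + GRACE_PERIOD

        def should_unlock(self, now):
            return self.grace_deadline is not None and now >= self.grace_deadline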

Changes made since the working prototype used in P6

  • Improved, yet again, the instructions displayed on the LCD screen for each task in order to enhance usability, based on the feedback we received from the test users

How and why your goals and/or design evolved

Since the beginning, we knew pretty much what our final prototype was going to look like. Along the way, though, we evolved it little by little according to feedback from the user studies. One important thing we developed over the semester is the manner in which our locking mechanism physically locks the machine. We thought of a lot of creative ideas at the beginning, but eventually narrowed it down to a simple servo motor inside a carved piece of balsa wood that rotates the lock closed when the user locks the machine. This made the most sense; however, it would have to be improved if the product were to be commercially produced and sold, as it would not be that hard to break the lock and gain access to the clothes within. Another important feature that we improved along the way is the set of directions displayed on the LCD screen as the user attempts each of the three tasks. The instructions and information displayed have grown from the bare minimum at the beginning to being significantly helpful now, with timers, error recognition, and more.

The most significant development in terms of our overall goals was that, as we implemented our system and received feedback from our test users, we shifted our focus more from the security of the laundry itself to the responsibility of the interacting users. In the beginning, our priority was mainly to keep the current laundry in the machine from being tampered with, even at the expense of the waiting next user. However, we began to realize that our target audience preferred that we focus more on responsibility between users. This resulted in shorter grace periods for current laundry users, as our testers generally expressed that they would be fine with their laundry being opened if they had warnings that they failed to heed. They expressed that, from the standpoint of the waiting user, it was more fair that they waited less time for the machine to unlock, given that the responsibility lay with the late current user.

Critical evaluation of our project

Overall, we view our project as a definite success. In such a short period of time, we have developed a working prototype that accurately resembles the system we envisioned in the first planning stages, complete with improvements and minor revisions along the way. With further iteration and development this could definitely be made into a useful real-world system. The only parts that are lacking in terms of production viability are an improved physical locking mechanism that would guarantee the machine remains locked, and a more robust backend setup that would coordinate user interactions (through email servers, etc.) more effectively. Once these components have been implemented, our code would take care of the rest, and the product would be close to complete for commercial use, with a few more user tests along the way.

We have learned that our original idea was actually a very good one. The application space definitely exists, as there is a high demand for security and responsibility in the laundry room. We have observed many emails on the residential college listservs about lost/stolen/moved laundry, and this is the exact problem our system sets out to solve. Our test users have also expressed enthusiasm and support for our project, and at one point we were even offered an interview by the Daily Princetonian. Our system seems quite intuitive based on the user tests, and we think we have designed a good prototype for a system that serves the purpose it set out to accomplish: protecting students’ laundry from irresponsible users and giving users peace of mind.

Future plans given more time

The most important implementation challenge to address before the product is close to commercially viable is the physical locking mechanism. Right now it is just a weak servo motor encased in a block of balsa wood. This would need to be improved or changed entirely, for example by using an industrial electromagnetic lock. Upon finding an appropriate production lock, we would also need to find a secure, minimally invasive way to mount the system to existing laundry machines.

Another component that we would improve is our backend setup. We are currently hosting our email warning system on a Django server running on a free Heroku trial account. This system currently only sends warnings to a few hard-coded email addresses, as we do not yet have access to the school’s database of current student account numbers and their corresponding NetIDs.

Code

https://www.dropbox.com/s/eoi09q30m2r4mvs/laundry_protector_program.zip

 

List of third-party code used in our project

  • Keypad: to control our keypad (http://playground.arduino.cc/Code/Keypad)

  • LiquidCrystal: to control our LCD screen. It comes with Arduino

  • Servo: to control our servo motor used in the locking mechanism. Included in Arduino

  • WiFly Shield: to control our WiFi shield. https://github.com/sparkfun/WiFly-Shield

  • Django: to send out emails to users upon requests from Arduino. https://www.djangoproject.com/

  • Heroku: to host our email server. https://www.heroku.com/

Links to PDF versions of all printed materials for demo

https://www.dropbox.com/s/x02pwoij7sdhoob/436Poster.pdf

Final Project – Team X

Group 10 — Team X
– Junjun Chen (junjunc)
– Osman Khwaja (okhwaja)
– Igor Zabukovec (iz)
– Alejandro Van Zandt-Escobar (av)

Description:
A “Kinect Jukebox” that lets you control music using gestures.

Previous Posts:
P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-team-x/
P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-team-x/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/group-10-p3/
P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/09/group-10-p4/
P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-10-team-x/
P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/06/6085/

Video Demo:

Our system uses gesture recognition to control music. The user can choose a music file with our Netbeans GUI. Then, they can use gestures for controls such as pause and play. We also have functionality for setting “breakpoints” with a gesture. When the dancer reaches a point in the music that he may want to go back to, he uses a gesture to set a breakpoint. Later, he can use another gesture to jump back to that point in the music easily. The user is also able to change the speed of the music on the fly, by having the system follow gestures for speed up, slow down, and return to normal. Every time the slow down or speed up gesture is performed, the music incrementally slows down or speeds up.
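
For concreteness, here is a minimal Python sketch, not our actual Max/MSP patch, of the control logic that maps recognized gesture names to playback actions; the gesture names, the rate step, and the stand-in Player class are assumptions for illustration.

    class Player(object):
        """Stand-in for the real audio backend; tracks position and playback rate only."""
        def __init__(self):
            self.position = 0.0   # seconds
            self.rate = 1.0
            self.playing = False

    class GestureJukebox(object):
        def __init__(self, player, rate_step=0.1):
            self.player = player
            self.rate_step = rate_step   # how much each speed gesture changes the tempo
            self.breakpoint = None       # stored playback position

        def on_gesture(self, name):
            if name == "play":
                self.player.playing = True
            elif name == "pause":
                self.player.playing = False
            elif name == "set_breakpoint":
                self.breakpoint = self.player.position
            elif name == "goto_breakpoint" and self.breakpoint is not None:
                self.player.position = self.breakpoint
            elif name == "speed_up":
                self.player.rate += self.rate_step
            elif name == "slow_down":
                self.player.rate = max(0.1, self.player.rate - self.rate_step)
            elif name == "normal_speed":
                self.player.rate = 1.0

    # Example: set a breakpoint, speed up twice, then jump back to the breakpoint
    jukebox = GestureJukebox(Player())
    for gesture in ["play", "set_breakpoint", "speed_up", "speed_up", "goto_breakpoint"]:
        jukebox.on_gesture(gesture)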

Youtube Video Link

Changes

  • Improved the weights for predefined gestures in Kinetic Space, as our testing in P6 indicated that our system struggled to recognize some gestures, which was the main source of frustration for our users. By changing the weights Kinetic Space places on certain parts of the body (for example, weighting the arms more when a gesture is mainly arm movement), we were able to improve recognition.
  • Finished and connected an improved GUI made in Netbeans to our MAX/MSP controller. We want the interface to be as simple and easy to use as possible.

Goals and Design Evolution:
While our main goal (to create a system making it easier for dancers to interact with their music during practices) and design (using a Kinect and gesture recognition) have not changed much over the semester, there has been some evolution in the lower-level tasks we wanted the system to accomplish. The main reasons for this have been technical: we found early on through testing that for the system to be useful, rather than a hindrance, it must be able to recognize gestures with very high fidelity. This task is further complicated by the fact that the dancer is moving a lot during practice.
For example, one of our original goals was to have our system follow the speed of the dancer, without the dancer having to make any specific gestures. We found that this was not feasible within the timeframe of the semester, however, as many of the recognition algorithms we looked at used machine learning (so they worked better with many examples of a gesture) and many required knowing, roughly, the beginning and end of the gesture (so they would not work well with gestures tied into a dance, for example).

Also, we had to essentially abandon one of our proposed functionalities. We thought we would be able to implement a system that would make configuring a recognizable gesture a simple task, but after working with the gesture recognition software, we saw that setting up a gesture requires finely tuning the customizable weights of the different body parts to get even basic functionality. Implementing a system that automated that customization, we quickly realized, would take a very long time.

Critical Evaluation:
Based on the positive feedback we received during testing, we feel that this could be turned into a useful real-world system. Many of our users have said that being able to control music easily would be useful for dancers and choreographers, and as a proof of concept, we believe our prototype has worked well. However, from our final testing in P6, we found that users’ frustration levels increased if they had to repeat a gesture even once. Therefore, there is a large gap between our current prototype and a real-world system. Despite the users’ frustrations during testing, they did indicate in the post-evaluation survey that they would be interested in trying an improved iteration of our system and that they thought it could be useful for dancers.
We’ve learned several things from our design, implementation, and evaluation efforts. Firstly, we’ve learned that while the Kinect was launched in 2010, there actually isn’t great familiarity with it in the general population. Secondly, we’ve found that the Kinect development community, while not small, is quite new. Microsoft’s development support, with official SDKs, is mainly for Windows, though there are community SDKs for Mac. From testing, we’ve found that users are less familiar with this application space than with Windows-based GUIs, but that they are generally very interested in gesture-based applications.

Next Steps:

  • In moving forward, we’d like to make the system more customizable. We have not found a way to do so with the Kinetic Space gesture recognition software we’re using (we don’t see any way to pass information, such as user-defined gestures, into the system), so we may have to implement our own gesture recognition. The basic structure of the gesture recognition algorithms we looked at seemed to involve tracking the x, y, z positions of various joints and limbs and comparing their movement (with margins of error perhaps determined through machine learning); a simple sketch of this kind of comparison follows this list. We did not tackle this implementation challenge for the prototype, as we realized that the gesture recognition would need to be rather sophisticated for the system to work well. With more time, however, we would like to do our gesture recognition and recording in Max/MSP so that we could integrate it with our music-playing software, and then perhaps embed the Kinect video feed in the Netbeans interface.
  • We still like the idea of having the music follow the dancer, without any other input, and that would be something we’d like to implement if we had more time. To do so, we would need the user to provide a “prototype” of the dance at regular speed. Then, we might extract specific “gestures” or moves from the dance, and change the speed of the music according to the speed of those gestures.
  • As mentioned before, we would also like to implement a configure-gesture functionality. It may even be easier once we move to Max/MSP for gesture recognition, but at this point, that is only speculation.
  • We’d also like to do some further testing. In the testing we’ve done so far, we’ve had users come to a room we’ve set up and use the system as we’ve asked them to. We’d like to ask dancers if we can go into their actual practice sessions, and see how they use the system without our guidance. It would be informative to even leave the system with users for a few days, have them use it, and then get any feedback they have.
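
As referenced above, here is a minimal Python sketch of the weighted-comparison idea, assuming gestures are recorded as equal-length sequences of x, y, z joint positions; the joint names, weights, and threshold are illustrative assumptions, not Kinetic Space’s actual values.

    import numpy as np

    # Illustrative per-joint weights: arms count more for arm-based gestures
    JOINT_WEIGHTS = {"left_hand": 2.0, "right_hand": 2.0, "torso": 0.5}

    def gesture_distance(sample, template, weights=JOINT_WEIGHTS):
        """sample/template: dicts mapping joint name -> (frames, 3) array of x, y, z positions."""
        total = 0.0
        for joint, w in weights.items():
            a = np.asarray(sample[joint], dtype=float)
            b = np.asarray(template[joint], dtype=float)
            total += w * np.linalg.norm(a - b)   # weighted trajectory difference
        return total

    def recognize(sample, templates, threshold=5.0):
        """Return the name of the closest template gesture, or None if nothing is close enough."""
        best_name, best_dist = None, float("inf")
        for name, template in templates.items():
            d = gesture_distance(sample, template)
            if d < best_dist:
                best_name, best_dist = name, d
        return best_name if best_dist < threshold else None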

Source Code:
Source Code

Third Party Code:

Demo Materials:
Demo Materials

Team Colonial Club Final Project – PostureParrot

Team Colonial Club, Group 7

David, John and Horia

Project Description

The PostureParrot helps users maintain good back posture while sitting at a desk with a computer.

Project Blog Posts

P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/team-colonial-p1/
P2: http://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-3/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-backtracker/
P4: http://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-backtracker/
P5: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-postureparrot/
P6: http://blogs.princeton.edu/humancomputerinterface/2013/05/07/p6-postureparrot/

Video

  1. 0:00 – 0:05
    1. User positions back into desired default posture and hits “Set Default Posture” on the GUI
  2. 0:00 – 0:20
    1. User deviates back from default posture and responds to the device’s audible feedback to correct his back posture.
  3. 0:20 – 0:35
    1. User adjusts the device’s “Wiggle Room” through the GUI.
  4. 0:35 – 0:50
    1. User tests the limits of back posture deviation that are allowed by the new “Wiggle Room” setting.  It is noticeably more lenient.
  5. 0:50 – 1:00
    1. User adjusts the device’s “Time Allowance” through the GUI.
  6. 1:00 – 1:11
    1. User tests the time it takes to receive feedback from the device with the new “Time Allowance” setting.  It takes slightly longer to receive feedback than before, meaning that short back posture deviations are tolerated more often.

Changes in Working Prototype

Bulleted list of changes that we have made since P6 and a brief explanation of why we made each change

  • We now play an audible confirmation when a new default back posture is set.  The confirmation is played through the piezo element at a different pitch than the posture-deviation feedback tone.

    • Through our observational notes, we noticed that users were confused when asked to set the default back posture. This could potentially be because there was no confirmation when a user selected the button to set their desired back posture.

  • Wiggle Room and Time Allowance increment and decrement buttons are now working correctly. We also added color changes to the buttons for user-feedback.

    • Previously, values for both wiggle room and time allowance became distorted when the increment / decrement buttons were quickly selected.

  • Added text snippets to explain wiggle room and time allowance.

    • Through our observational notes, we learned that wiggle room and time allowance were not intrinsically intuitive and required additional explanation.

  • Refined default wiggle room and time allowance values.

    • We also found that our default value for wiggle room was far too lenient.

  • Added multi-toned feedback based on the amount of deviation from the default back posture.  In other words, one tone plays from the device when you are close to the default back posture, and another plays when you are further away (a sketch of this logic follows this list).

    • We also discovered that it sometimes becomes difficult to find your original posture, especially when the wiggle room is relatively unforgiving.
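
As referenced above, here is a minimal Python sketch of the wiggle room / time allowance logic described in this list; the actual device runs equivalent logic on the Arduino (see Source Code), and the threshold values, sensor reading, and tone playback here are placeholder assumptions.

    import time

    WIGGLE_ROOM = 30       # allowed deviation from the default posture reading
    TIME_ALLOWANCE = 2.0   # seconds a deviation is tolerated before feedback starts
    FAR_FACTOR = 2         # beyond this multiple of the wiggle room, play the "far" tone

    def monitor(read_posture, play_tone, default):
        """read_posture() returns the current sensor value; play_tone("near"/"far") is a feedback stub."""
        deviation_started = None
        while True:
            deviation = abs(read_posture() - default)
            if deviation <= WIGGLE_ROOM:
                deviation_started = None            # back within tolerance, reset the timer
            elif deviation_started is None:
                deviation_started = time.time()     # deviation just began
            elif time.time() - deviation_started >= TIME_ALLOWANCE:
                # one tone when close to the default posture, a different one when far from it
                play_tone("near" if deviation <= FAR_FACTOR * WIGGLE_ROOM else "far")
            time.sleep(0.05)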

Evolution of Project Goals

Overall, our goal has always been to help people maintain good back posture while they are seated.  However, the design of our product has evolved greatly since its early conceptualization.  Initially, we were really interested in having the system simultaneously evaluate and provide feedback for the three main regions of the back: top, middle and lower.  We wanted the system to give vibrational feedback to the area of the back that was causing bad back posture and we wanted our GUI to display their performance in each of these areas over time.

What we found was that these objectives added complexity to the physical product and the GUI.  In particular, they required that the device span the entire back, which would correspond with a complicated procedure for attaching a large wearable device and taking it off.  This could keep users from wanting to use the product in the first place.  We also found that users quickly responded to feedback from the device and positioned their back to the correct posture.  Given this, it didn’t make sense to display a visualization of the user’s back posture over time, since it would be very close to the correct posture as long as the user wore the device and responded to the feedback.  What we ended up with was a device that achieves our overall goal without adding unnecessary complexity, and a GUI that simply helps with usage of the device.

Critical Evaluation of Project

Our work suggests that, with further iteration, this system could be turned into a useful real-world system, especially given that back posture is such a relevant issue for today’s society.  Many people spend most of their day sitting at desks.  From our observations, we found that people who would start with good back posture did not stay that way for long.  When our test users attached the device, they were pleasantly surprised about how useful a simple reminder is for maintaining good back posture.

When we first described the concept to our test users, they often responded with “I would buy that!”  However, one of our most challenging tasks was to find an implementation that was very unobtrusive to the experience of sitting down and working at a desk.  The user was often distracted by complexity and device attachment issues.  Fortunately, we have gotten very close to making the system part of a seamless effort by the user to maintain good back posture.  Once the device is standalone, has vibrational motors and has a simple shoulder clip-on mechanism, the PostureParrot will be a useful real-world system.

Moving Forward

There is certainly room for improvement for the PostureParrot in terms of implementation.  In particular, we were limited by the Arduino’s size, a lack of vibrational motors, and battery-power options.  It would have been ideal to drastically shrink the size and weight of the device by building it around something smaller than a general-purpose Arduino, which forced the entire system to be much larger.  In addition to making the device very small and light for the user’s shoulder, the system could have been improved by providing vibrational rather than audible feedback through vibration motors.  This would let the user use the device in libraries and other public areas.

To make the perfect system, we would also want to make the PostureParrot wireless.  Having a long USB cable extend from the user’s shoulder to a laptop is not only messy, but also limiting in that it requires the user to be near a computer.  A better setup would be to have the PostureParrot communicate with a desktop or mobile application via Wi-Fi or Bluetooth.  Any additional configuration could be done through the application.  The resulting stand-alone, small, and light device could be very appealing.  Other than these optimizations, it would be nice to come up with a solid adhesive solution for sticking the device to a user’s shoulder.

Source Code

https://docs.google.com/file/d/0B_LCk7FghqrJdDUyblFpSFlqa2M/edit?usp=sharing

Third-Party Code

Third-Party Code was not used in our project.

Demo Session Materials

https://docs.google.com/file/d/0B_LCk7FghqrJSExTd000TndiTEE/edit?usp=sharing

Team Epple Final Project – Portal

Group 16 – Epple
Andrew, Brian, Kevin, Saswathi

Project Summary:
Our project uses the Kinect to make an intuitive interface for controlling web cameras using body orientation.

Blog post links:

P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-epple/
P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-group-epple-16/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-epple-group-16/
P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-epple-portal/
P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-16-epple/
P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-epple/

Videos and Images:

Remote webcam side of interface

Arduino controlling webcam

Team Epple with Kinect side of interface

Changes since P6:

  • Added networking code to send face tracking data from the computer attached to the Kinect to the computer attached to the Arduino/webcam. This was necessary to allow our system to work with remote webcams.
  • Mounted the mobile screen on a swivel chair so that the user is not required to hold it in front of them while changing their body orientation. This was due to comments from P6 indicating that it was tiring and confusing to change body orientation while also moving the mobile screen into view.

Design Evolution:

Over the course of the semester, both our design and project goals have been modified. We started with an idea for a type of camera that would help us scan a public place, such as the Frist Student Center. Our initial goal was to create something that would let a person remain in their room and check whether a person of interest was in a certain remote area without having to physically walk there. From identification of other relevant tasks, we have since broadened the goal of the project to improving the web chat experience in general, in addition to being able to find people in a remote room. We changed this goal because we found that the single function of searching for a distant friend was too narrow, and that a rotating camera could support many other unique tasks, such as letting a web chat user follow a chat partner who is moving around or talk to multiple people.

On the design side, we originally envisioned that the camera would be moved by turning one’s head instead of clicking buttons. This was intended to make the interface more intuitive. The main function of turning one’s head to rotate the camera has remained the same, but through user testing, we learned that users found the act of constantly keeping a mobile screen in front of them while changing their head orientation confusing and tiring. Most would rather have the mobile screen automatically move into view as they changed their head orientation. For this reason, we decided to mount the mobile screen on a swivel chair so that the user can swivel to change their body orientation, and so control the remote camera, while having the mobile screen constantly mounted in front of them.

Also, we initially intended to implement both horizontal and vertical motion, but we decided that for the prototype, implementing only the horizontal motion would be sufficient to show a working product. This simplified our design to a single motor instead of two motors attached to each other, and we also did not have to handle vertical head motion in our code. We chose to implement only horizontal motion instead of only vertical motion because horizontal motion gives the user a more realistic experience of how the device will be used. The user can currently use our system to swivel left or right and turn a remote camera to scan a room at a single height, allowing the user to see different people spread around the room or moving around at the same height. Vertical motion would have restricted users to seeing only one person or space from top to bottom, which is not as useful or representative of the product’s intended function.
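
To illustrate how these pieces fit together, here is a minimal Python sketch, not our actual Processing/Arduino code (see the .zip below), of the webcam-side receiver: it takes the head yaw sent over the network from the Kinect machine and forwards a horizontal servo angle to the Arduino over serial. The port name, baud rate, angle range, and the newline-delimited protocol are all assumptions.

    import socket
    import serial  # pyserial

    HOST, PORT = "0.0.0.0", 9000                   # where the Kinect machine connects
    arduino = serial.Serial("/dev/ttyUSB0", 9600)  # placeholder serial port for the Arduino

    def yaw_to_servo(yaw_degrees):
        """Map a head/body yaw of roughly -90..90 degrees onto the servo's 0..180 range."""
        return max(0, min(180, int(yaw_degrees) + 90))

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()
        buffer = b""
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buffer += data
            while b"\n" in buffer:                 # one yaw value per line
                line, buffer = buffer.split(b"\n", 1)
                angle = yaw_to_servo(float(line.decode()))
                arduino.write(("%d\n" % angle).encode())  # the Arduino writes this angle to the servo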

Critical Evaluation:

With further iteration, our product can definitely be turned into a useful real-world system. We believe that our product serves a purpose that is currently not implemented in the mainstream web camera and video chat userspace. We know that there are some cameras that are controlled remotely through use of keys, buttons, or some sort of hand control, but we have never encountered them in the mainstream market. We also believe that our product is more intuitive, as the user simply has to turn their head to control the camera. Based on user testing, we have observed that users are able to easily master use of our product. After a small initial learning curve, users are able to accomplish tasks involving rotating a remote camera almost as easily as if they were in the remote area, in person, and turning their heads. We thus strongly believe that our product will have a user base if we choose to develop and market it further.

As a result of our design work, we have learned quite a bit about this specific application space of detecting the movement of one’s head and moving a camera to match. We found that, on the mechanical side of the design, it is difficult to implement both horizontal and vertical motion for the camera; we are still trying to figure out the most effective way to combine two servo motors and a webcam into a single mechanical device. On the other hand, we found that implementing the Kinect, Processing, and Arduino code necessary in this application space was fairly straightforward, as there is an abundance of tutorials and examples for these on the internet.

From evaluation, we found that computer users are very used to sitting statically in front of their computers. Changing the way they web chat to accommodate our system thus involves a small learning curve, as there is quite simply nothing similar to our system in the application space aside from the still highly experimental Google Glass and Oculus Rift. Users are particularly not accustomed to rotating their head while keeping a mobile screen in front of them; they instead expect the mobile screen to automatically move on its own to stay in front of them. One user would also occasionally expect the remote camera to turn on its own and track people on the other end without the user turning his head at all. We suspect that this may be due to the way many video game interfaces work with regard to automatic locking in first-person shooters.

Based on users’ initial reactions, we realized that if we were to market our product, we would have to work very hard to make sure users understand the intended use of the product and do not see it as a breach of privacy. Users usually don’t initially see that we are trying to make an intuitive web chat experience; instead, they suspect our system is for controlling spy cameras and invading personal spaces. A lot of what we have learned comes from user testing and interviews, so we found that the evaluation process is just as important to the development of a product as the physical design, if not more so.

Future Work:

There are quite a few things we would still need to do if we were to move forward with this project and make it into a full-fledged final product. One is the implementation challenge of adding support for rotating the camera vertically, as we currently only have one motor moving the camera horizontally. Another would be to create a custom swivel chair tailored for Portal with a movable arm on which the mobile screen could be attached. This would keep the screen in front of the user naturally, rather than our current implementation of taping the screen onto the back of the chair. If Google Glass or the Oculus Rift ever become affordable, we intend to explore incorporating such wearable screens into Portal as a replacement for our mobile iPad screen. We could also implement 3D sound so the user actually feels like they are in the remote space, with directional audio cues rather than normal, non-directional sound coming from speakers. This would be a great addition to our custom chair design, something like surround sound that makes the user feel transported to a different place. We might also want to implement a function that suggests the direction for the user to turn based on where a sound is coming from. We also should make the packaging for the product much more robust.

Beyond the design end, we also expect that further user testing would help us evaluate how users react to living with a moving camera in their rooms and spaces. If necessary, we could implement a function for people on the remote side to prevent camera motion if they so choose, for reasons such as privacy concerns. Most of the suggestions on this list for future work, such as the chair, the sound system, and the screen, were brought to our attention through user evaluations, and our future work would definitely benefit from more of them. These user evaluations would focus on how users react to actually living with the system, what they find inconvenient or unintuitive, and how to improve these aspects of the system.

.zip File:
https://dl.dropboxusercontent.com/u/801068/HCI%20final%20code.zip

Third-Party Code:

Printed Materials:
https://www.dropbox.com/s/r9337w7ycnwu01r/FinalDocumentationPics.pdf