Final Project Documentation – The Cereal Killers Group 24

The Cereal Killers

Group 24: Andrew, Bereket, Lauren, Ryan


Our Project

A program to create computer shortcuts that can be replayed using hand gestures via a wireless gesture bracelet.

Previous Posts

  • P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/the-cereal-killers/
  • P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-contextual-inquiry-and-task-analysis/
  • P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-brisq-the-cereal-killers/
  • P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-group-24/
  • P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-cereal-killers-team-24/
  • P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/07/p6-usability-study/

 

Our Final Project in Action

GUI:

Gesture Bracelet Design:

Gesture Recognition Demo:

Bracelet Images:


Gesture Bracelet

Side View 1

Side View 2

Bracelet on Wrist

Changes since P6

  • In P6, since we did not yet have gesture recording or recognition working, we used a GUI with temporary buttons. We now have a GUI with the final layout, which reflects the program’s ability to record gestures.
  • One button would “record the gesture,” but when it was pressed we used a Wizard of Oz technique and simply remembered which gesture the user had assigned to an action. We have since decided to go with six predefined gestures.
  • Another button replayed the recorded actions. Now, the actions are replayed when the gesture mapped to them is performed.
  • We have working gesture recognition.
  • We created the gesture bracelet using wireless XBee devices, an Arduino Uno, and an accelerometer.
  • We wrote Python code to receive data from the bracelet wirelessly, compare it with a library of recorded data, and find the closest match using a nearest-neighbor DTW algorithm (a minimal sketch of this matching step follows this list).
  • We also made the bracelet easier to use. Instead of shaking before and after a gesture to signal its start and stop, the user now just jerks the bracelet in any direction fast enough to trigger transmission from the bracelet to the laptop, which then sends roughly three-quarters of a second of data to the computer. The user no longer needs a separate action to end the gesture.
  • The keylogger now handles multiple keys. In P6, it would not record when multiple keys were pressed simultaneously.
  • Now that we have predefined gestures, our GUI maps gestures to actions by writing the mapping to a file.
  • The GUI therefore includes a page that lists the available gestures.
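The following is a minimal, illustrative sketch (not our actual code) of the nearest-neighbor DTW matching step described above: each stored gesture is a template of (x, y, z) accelerometer samples, and an incoming window is labeled with the closest template. The gesture names and file layout are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping cost between two (n, 3) accelerometer sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean step cost
            cost[i, j] = step + min(cost[i - 1, j],      # insertion
                                    cost[i, j - 1],      # deletion
                                    cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(window, templates):
    """Return the label of the nearest template (1-nearest neighbor)."""
    return min(templates, key=lambda label: dtw_distance(window, templates[label]))

# templates maps a gesture name to one recorded (n, 3) array, e.g.
# templates = {"swipe_left": np.load("swipe_left.npy"), ...}
```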

 

 

Evolution of Our Goals and Design

Our goal of creating an easy-to-use system that makes users more connected to their laptops has not changed. However, due to feasibility constraints, module functionality, and user feedback, we made some changes to how we implemented this goal.

Six Predefined Gestures vs. User-Created Gestures

We changed this design decision based on feedback from users. Based on our interviews, we went with six predefined gestures rather than having users create their own. Our users found it hard to think up gestures on the spot: in our usability tests, they spent several minutes simply trying to come up with a few gestures, often needed suggestions, and were hesitant about their choice of movement. With predefined gestures, a user does not have to worry about inventing a gesture, and all the gestures are distinct. The trade-off is less variety for the user.

 

Bracelet size

Our initial plan was to make the bracelet small and light for comfort and ease of use. To do this, we had planned to use Bluetooth to transmit the accelerometer data to the computer. However, our Bluetooth module did not work properly, so we had to go with the bulkier radio-frequency option using XBee devices. Additionally, after testing a Femtoduino and an Arduino Micro, neither of which would work with the Bluetooth or XBee devices, we had to use the larger Arduino Uno. Therefore, we had to modify the bracelet and increase its size.

 

Evaluation

We were happy with the GUI we were able to make and with the ability to log both keyboard and mouse input and replay them. We felt the GUI was intuitive and provided everything the user needed to understand and interact with the program. We were also very happy with our gesture recognition. We could perform and recognize six gestures: left and right swipes, up and down swipes, and clockwise and counterclockwise circles. We felt that these covered a nice range of intuitive gestures and were different enough to be recognized reliably. The unfortunate part of our project was our inability to link the gesture recognition with the GUI. We had problems installing the necessary libraries for gesture recognition on Windows, where the GUI and shortcut processing were written, so we were left with a Mac that could recognize gestures and a Windows laptop that contained the GUI.

We definitely felt that this is applicable in the real world, and the users who tested the product agreed. It has a place in the market for expanding computer use, and we think that the three scenarios we came up with are still applicable. It would be exciting to see how the device would do in the real world, and since we mostly used low-cost materials whose cost could be reduced further by buying in bulk, we think the device could be priced in an attractive range. Of course, the program would need to be linked to the gesture recognition so both could run on the same laptop, but we feel that this project is very applicable; some users even asked us to update them when we finished because of their interest in the product.

 

Moving Forward

We would of course like to link the gesture recognition and the GUI on the same computer. We think that with some more time, we could either figure out how to install the problematic library (PyGSL on Windows) or switch to a machine learning library that is compatible with Windows. We would also like to investigate more wireless and microcontroller options so we could reduce the size of the bracelet. We were happy with how compact the final product was, but we feel it could be reduced even further for a sleeker design. We would also like to replace the XBees, which require a receiving unit hooked up to the computer, with just a Bluetooth transmitter on the bracelet that could pair with any Bluetooth-compatible laptop.

Future testing would include observing users pairing gestures with actions through the GUI, and watching users work with the bracelet itself. We were happy that we were able to put code on the bracelet that activates it with just a simple jerk, and felt that this made it easier to use than the original shake actions. We would like to see whether users agree, and whether they find the bracelet intuitive and easy to use.

Code

Zip file:

https://www.dropbox.com/s/cohxt7l9eo6h378/brisq.zip

The libraries we used with our code were NumPy, SciPy, pySerial, PyGSL, and mlpy, all Python libraries. NumPy was used to store and load data, mlpy was used to compare accelerometer vectors (x, y, and z), pySerial was used to read data from the USB port, and SciPy and PyGSL were required by the mlpy library.
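For reference, here is a minimal sketch of how the serial link and jerk trigger fit together, assuming the bracelet streams comma-separated “x,y,z” accelerometer lines; the port name, baud rate, threshold, and sample count are placeholders rather than our exact values.

```python
import numpy as np
import serial  # pySerial

PORT = "/dev/ttyUSB0"      # assumed: where the XBee receiver enumerates
JERK_THRESHOLD = 2.0       # assumed: acceleration magnitude that starts a gesture
SAMPLES_PER_GESTURE = 40   # roughly three-quarters of a second at ~50 Hz

def read_sample(link):
    """Read one 'x,y,z' line from the bracelet; returns None on a timeout."""
    line = link.readline().decode(errors="ignore").strip()
    if not line:
        return None
    x, y, z = (float(v) for v in line.split(","))
    return np.array([x, y, z])

with serial.Serial(PORT, 9600, timeout=1) as link:
    while True:
        sample = read_sample(link)
        if sample is None:
            continue
        if np.linalg.norm(sample) > JERK_THRESHOLD:   # the jerk that starts transmission
            window = [read_sample(link) for _ in range(SAMPLES_PER_GESTURE)]
            window = np.array([s for s in window if s is not None])
            # hand `window` to the DTW nearest-neighbor classifier shown earlier
            print("captured gesture window:", window.shape)
```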

Poster/Presentation Documents:

https://docs.google.com/presentation/d/1GAqPCLOdXqt-w-z_E4qPgnXzB4sn4m1c-HoclzbxlmE/edit?usp=sharing

Final Project — AcaKinect

Group Name: Deep Thought

Group #25: Vivian Q., Harvest Z., Neil C., Alan T.

I. A 1-SENTENCE DESCRIPTION OF YOUR PROJECT

AcaKinect is voice recording software that uses a Kinect for gesture-based control, which allows content creators to record multiple loops and beat sequences and organize them into four sections on the screen, creating a more efficient and intuitive way of presenting a music recording interface for those less experienced with the technical side of music production.

2. HYPERLINKS TO PREVIOUS BLOG POSTS

P1: http://blogs.princeton.edu/humancomputerinterface/2013/02/22/a-cappella-solo-live/
P2: http://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-2/
P3: http://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-vahn-group-25/
P4: http://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-acakinect/
P5: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-acakinect/
P6: http://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-acakinect/

3. VIDEOS & PHOTOS

Videos:




Description:

Gesture controls:

  • Record Loop (Raise either arm above head) — Records a single loop of 8 beats at a set tempo. When recording finishes, a block representing the loop appears in the column the user is standing in. A blue loading circle appears at the initial command.
  • Delete Loop (Put hands together in front of torso and swipe outward) — Deletes the last recorded loop in the column the user is standing in. If there are no loops, does nothing. A green loading circle appears at the initial command.
  • Delete Column Loop (Put hands together above the head and swipe straight down) — Deletes all recorded loops in the column the user is standing in. If there are no loops, does nothing. A pink loading circle appears at the initial command. (A minimal sketch of the column/loop model these gestures operate on follows this list.)
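Below is a hypothetical Python sketch (the project itself is written in Processing) of the four-column loop model these gestures manipulate; the class and method names are illustrative only.

```python
MAX_LOOPS_PER_COLUMN = 4  # matches the 4-loop cap described later in this post

class LoopBoard:
    def __init__(self, columns=4):
        self.columns = [[] for _ in range(columns)]   # each column holds recorded loops

    def record(self, col, loop):
        """Record Loop: append an 8-beat loop to the column the user is standing in."""
        if len(self.columns[col]) < MAX_LOOPS_PER_COLUMN:
            self.columns[col].append(loop)

    def delete_last(self, col):
        """Delete Loop: remove the most recently recorded loop in the column, if any."""
        if self.columns[col]:
            self.columns[col].pop()

    def delete_column(self, col):
        """Delete Column Loop: clear every loop in the column."""
        self.columns[col].clear()
```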

Photos:

4. BULLETED LIST OF CHANGES

  • Blocks added to visually indicate the number of loops in each column: this is easier to read than a number in each column and provides better visual indication of the structure and layering of loops across the sections.

  • More beautiful and colorful UI: since we couldn’t get OpenGL to run in Processing without distorting our loop-recording timing, we just changed the UI using bare Processing to make a more visually engaging prototype. Each section is a different color (green, purple, blue, and red), the loop blocks are different colors between columns, the beat count is more visible (a complaint from user testing!), and messages are displayed clearly and colorfully. We kept conventional recording colors in mind, so the “Begin recording in…” message is green and the “Recording!” message is red.

  • More prominent highlighting of the current section: Since some users needed additional guidance to figure out the fundamental structure of recording different parts in different sections so that each part can be edited or changed separately, we indicate the current section the user is in much more clearly by highlighting the entire column on screen instead of providing a small tab on the bottom.

  • More onscreen visual cues: a message prompting the user to calibrate, a message when the user has reached the maximum number of loops in a column (4 loops maximum), a countdown to the next loop recording, etc. We pop up a prominent and visually consistent notification in the middle of the screen that is easy to read and provides immediate feedback.

  • Better gesture recognition with microphone in hand: the delete loop and begin recording gestures were both modified in order to be easier to do while holding a microphone; the delete loop gesture was entirely changed to be more intuitive and easy to perform with a mic while the begin recording gesture received a few code tweaks to prevent false positives.

  • A “delete column” gesture: during the demo, we noticed that people felt it was tedious when they were deleting repeatedly to clear a column. This gesture makes it faster and easier to delete all loops in the section rather than one by one.

  • Gesture loading indicators: one problem we noticed during user testing for P6 was that users often didn’t know whether their gestures were being recognized. To make this clear, we based our UI fix on many existing Kinect games. When a command gesture is recognized (record/delete/delete-column), a small circle starts filling up on the hand that started the command. Once the circle fills up completely, the command executes. This ensures that users aren’t triggering gestures accidentally (since they have to hold the pose for 1–2 seconds) and notifies the user that the Kinect recognizes their command. Each loading circle is a different color for clarity and differentiation between commands. (A minimal sketch of this dwell logic follows this list.)
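Here is a minimal Python sketch (the project uses Processing) of the hold-to-confirm idea behind the loading circles: a command only fires once its pose has been held continuously for a dwell period, and the fraction returned is what the filling circle would display. The 1.5-second dwell value is an assumption within the 1–2 second range described above.

```python
import time

DWELL_SECONDS = 1.5   # assumed dwell time before a command fires

class DwellGesture:
    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.started_at = None

    def update(self, pose_detected, now=None):
        """Call once per frame; returns (progress in [0, 1], whether to fire now)."""
        now = time.monotonic() if now is None else now
        if not pose_detected:
            self.started_at = None        # pose broken: the circle empties
            return 0.0, False
        if self.started_at is None:
            self.started_at = now         # pose first seen: the circle starts filling
        progress = min((now - self.started_at) / self.dwell, 1.0)
        fired = progress >= 1.0
        if fired:
            self.started_at = None        # fire once per hold, then reset
        return progress, fired
```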

5. HOW AND WHY YOUR GOALS/DESIGN EVOLVED

The original concept for this project was much more complex. Over the course of development and testing, the specification was drastically simplified — coincidentally making development simpler, but more importantly, reaching our target audience more effectively. The original idea was to create a gesture-based system that enables highly complex music creation and manipulation, intended for both live performance and recording. With some testing, we discovered that what we really should create is a tool for those with little or no music production experience. For those with experience, there are hugely complex systems that offer all the detailed functionality needed. However, there are a large number of musicians who don’t even want to worry about music production at the level of GarageBand; these may be very talented musicians who want to use loops and effects, but they may not be interested enough in the technical side of music production to go learn any specific piece of software or hardware. We decided that AcaKinect would slot in at the very bottom of this technological chain: simple enough to pick up and use immediately, and yet fun and flexible enough to retain users and potentially allow them to develop an interest in learning more complex music production tools.

We also realized that the format does not particularly suit recording well; if a user has progressed to the point of wanting to record, edit, and polish these creations, there are a huge number of software loopers available that offer much more flexibility, as previously mentioned; in addition, more experienced musicians who are looking to produce a record will probably turn to a full-fledged Digital Audio Workstation that allows maximum control at the cost of a rather higher learning curve. Thus, we see this as an experimental tool. One tester, who is the music director for an a cappella group on campus, commonly uses a MIDI keyboard for just this purpose; when arranging a piece for a cappella, he writes down each part and then plays the parts together on a keyboard to see how they sound. In these cases, it may be easier and more flexible to just test out the parts using AcaKinect, encouraging more exploration and experimentation. To that end, we pared down the specification to the bare minimum needed to provide a flexible looping environment with spatial awareness (columns) to suggest to the user how a piece of music might be structured. There are only two primary functions – record and delete – so the user is not confronted with anything approaching “mixing board” levels of complexity. The division into four sections, and the physical requirement of moving left and right in order to record in each section, suggests a natural split in what to record in each section; users naturally choose to record all percussive sounds in one, all of the bassline in a second, and then maybe a more prominent harmony in a third. This sets them up well for moving on to more complex music production tools, where structuring loops and tracks in a hierarchy is very important when organizing, say, twenty tracks all at once.

6. CRITICAL EVALUATION OF PROJECT 

We believe that with further work and development, this could be a viable and useful real-world system that fills a possible gap in the market that has never really been touched. We’ve already discussed how, in terms of simplicity and feature set, AcaKinect would be slotting in under all the music production software currently available; what we haven’t really covered is the opposite end of the music spectrum, which is currently occupied by games like Rock Band and Guitar Hero. These games do introduce a musical element in the form of rhythmic competence (and in the case of Rock Band, fairly lenient vocal pitch competence), but fundamentally the music is still generated by the game, and the user just kind of tags along for the ride. The goal for AcaKinect with further iteration is a product that is almost as game-like as Rock Band; a fun system for testing out loops and riffs in the living room, and a useful tool for playing with sound and prototyping arrangements. It’s important that AcaKinect is seen as more of an exploratory product; unlike working in a DAW where the user may have a very good idea what she wants to create, AcaKinect would be a live prototyping tool that enables a lot of exploration and iteration of various sounds and harmonies. The simplicity of the controls and the lack of any real learning curve only helps to make that a fairly easy task.

The application space, as far as music production (and even more loosely, music interaction) tools go, is giant, and spans all the way from Rock Band to Logic Pro. There is no real point going for new or extra functionality; whatever arcane feature you’re looking for, it probably already exists, or if it doesn’t, it would probably be best suited to a Logic plugin rather than a whole new product, since the pros are the ones looking for new features. Those who are intimidated by the sheer number of buttons and knobs and controls in a typical piece of music software or hardware seem to much prefer a great simplification of the whole music production process (as would make sense), and we found that there is a big opening in this space for software that can be picked up and used immediately by any musician, no matter whether she has technical background in music production or not.

7.  HOW WE MIGHT MOVE FORWARD WITH THE PROJECT 

There are still quite a few implementation challenges involved in making this product one that is truly easy to use for any musician. Firstly, given its intended use as a fun and exploratory product in the living room, it’s a little problematic that it only works for one person. If, say, a family is playing with it, it would be much better to allow several people to be recognized at once (even if only one loop is recorded at a time), so that several people may collaborate on a project. SimpleOpenNI is capable of tracking limbs for two people, which is generally what Kinects are used for in Xbox games as well; we could thus implement two people without too much extra trouble, but to do more may be difficult. Secondly, this prototype uses Processing and Minim for ease of coding, testing, and iteration; however, since Minim really wasn’t designed for low-latency audio looping, it has certain issues with timing and tempo for which we have implemented hacks to “get around.” However, in an actual polished product, latency would have to be much lower and the rhythm would have to be much more solid; to that end, a more robust audio framework (hooking directly into Apple’s CoreAudio libraries, for example) would allow us to achieve much better results.

Finally, there’s the issue of physical setup; the user needs to be able to hook up a microphone and play sound out of speakers such that the balance between the current vocal and the sound from the speakers is about right, but without triggering any feedback or recording the speaker output back in the microphone. There are advanced noise-cancellation techniques that can be implemented to signal-process away the feedback, but these will sometimes add artifacts to the recording; one way is just to require that the user use a highly directional microphone with a sharp signal strength falloff, so as to reduce the chance of feedback. An on-screen notification that informs the user of feedback and turns down the levels of the speakers when feedback occurs might also be convenient. Alternate setups may also be a good thing to test; a wireless clip-on mic of the sort used on stage during live performances, for example, may prove slightly more expensive, but it may let users feel less awkward and make gestures easier.

8. LINK TO SOURCE CODE 

AcaKinect Source Code (.zip file)

9. THIRD-PARTY CODE USED

10. LINKS TO PRINTED MATERIALS

AcaKinect poster (.pdf file)

 

Final Blog Post: The GaitKeeper

Group 6 (GARP): The GaitKeeper

Group members:

Phil, Gene, Alice, Rodrigo

One sentence description:

Our project uses a shoe insole with pressure sensors to measure and track a runner’s gait, offering opportunities for live feedback and detailed post-run analysis.

Links to previous blog posts:

P1 – http://blogs.princeton.edu/humancomputerinterface/2013/02/22/team-garp-project-1/

P2 – http://blogs.princeton.edu/humancomputerinterface/2013/03/11/group-6-team-garp/

P3 – http://blogs.princeton.edu/humancomputerinterface/2013/03/27/gaitkeeper/

P4 – http://blogs.princeton.edu/humancomputerinterface/2013/04/08/gaitkeeper-p4/

P5 – http://blogs.princeton.edu/humancomputerinterface/2013/04/21/p5-garp/

P6 – http://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-the-gaitkeeper/

Pictures and Videos with Captions:

Pictures of the prototype – https://drive.google.com/folderview?id=0B4_S-8qAp4jyYk9NSjJweXBkN0E&usp=sharing . These photos illustrate the basic use of the prototype, as well as its physical form factor. You can see from them how the insole and wires fit in the shoe, and how they fit to the user’s body. This was designed to make the product have a minimal effect on the user’s running patterns, so these aspects of the user interaction are especially important.

Video of computer-based user interface – Computer-Based UI. This video (with voiceover) demonstrates the use of our user interface for saving and viewing past runs.

Video of live feedback based on machine learning – Live Feedback from Machine Learning. This video (also with voiceover) demonstrates the live feedback element of the GaitKeeper, which tells the user whether their gait is good or not.

Changes since P6:

  • Slightly thicker insole with a stronger internal structure – Thickness did not appear to be an issue for the testers, since the insole was made of a single sheet of paper.  We observed some difficulty in getting the insole into the shoe, however, and felt that making it slightly thicker would be useful in solving this issue.

  • Laminated insole – One of our testers had run earlier that day, and his shoes were still slightly sweaty. When he took the insole out, the sweat from his shoe and sock made it stick to his foot, and the insole tore slightly as he removed it. We noticed that the taped part didn’t stick well, and felt that making the entire insole out of a similar material would solve this issue.

  • Color changes in the UI heatmap – One of our testers noted that he found the colors in the heatmap visually distracting and different from traditional heatmaps. We corrected this by choosing a new color palette.

  • Enhanced structural support for the Arduino on the waist – After user testing, we found significant wear and tear on the Arduino box, which is attached to the user with a waistband. We reinforced it to make it more durable. It became slightly larger, which we felt was not an issue since users indicated that they found the previous implementation acceptably small, and the change did not significantly affect the form factor.

  • Ability to run without a USB connection – This was an element we had originally planned, but were not able to fully execute for P6. We used Wizard of Oz techniques at the time, and this replaced that. Now, data can be imported into the computer from the Arduino for analysis after a run. Unfortunately, live feedback still requires a computer connection, but with further iteration it could possibly be made mobile as well.

  • Wekinator training of live feedback during running – During testing, this was a Wizard of Oz element, where the lights went on and off for predetermined amounts of time to simulate feedback from the system. This has been replaced with true live feedback informed by the Wekinator’s machine learning (a minimal sketch of streaming sensor frames to Wekinator follows this list).

  • Ability to save and view saved data in the UI – User testing was done with a simulated run from our own testing data, rather than from actual saved runs. We have added the ability for users to save and view their own data imported from the Arduino.

  • Ability to import Arduino data – User testing relied on user simulation of the data-upload process. This is now fully implemented, and allows users to see the results of their running.
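Since the live feedback is driven by Wekinator, here is an illustrative sketch (not our actual code) of streaming insole pressure frames to it over OSC using the python-osc package. Wekinator conventionally listens for inputs on port 6448 at the address “/wek/inputs”; the serial port name and frame format are assumptions.

```python
import serial  # pySerial
from pythonosc.udp_client import SimpleUDPClient

wekinator = SimpleUDPClient("127.0.0.1", 6448)   # Wekinator's default input port

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # assume the Arduino prints one comma-separated frame of pressure readings
        frame = [float(v) for v in line.split(",")]
        wekinator.send_message("/wek/inputs", frame)
```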

Explanation of goal and design evolution:

We began the semester with very little information about how a runner’s gait is actually assessed, but with the understanding that it is generally based on direct observation by a trained professional. We originally planned to build a device that bridged the gait-analysis demands of store employees, medical professionals, and runners themselves. Over time, we realized that one of those three user groups had a very different set of needs, which led us to focus on just store employees and frequent runners. We considered both user groups to be well informed about running, and expected them to use the product to observe gait over a run for technique modification and product selection. Our goals were then modified to better serve those user groups by focusing on the post-run analysis features, such as the ability to save and access old data.

Also, at the beginning of the semester, we had wanted to design the device to provide live feedback.  Over time, we came to realize that meaningful live feedback required a machine learning tool like Wekinator.  As a result, we were forced to maintain a computer connection for live feedback, which was a change from the fully mobile vision we had at the beginning.  This has slightly changed our vision for how the live feedback element of the product would be used; given the tethering requirement, live feedback would probably be most useful in a situation where the runner is on a treadmill and is trying to actively change their gait.  Other changes in design included a remake of the pressure-sensing insole, which our testers originally found to be sticky, difficult to place in a shoe, and overly fragile.  We moved from a paper-based structure to a design of mostly electrical tape, which increased durability without a significant cost in thickness.

 

Critical evaluation of project:

It is difficult to say whether this product could become a useful real-world system.  In testing, our users often found the product to be interesting, but many of the frequent runners had difficulty in really making use of the data.  They were able to accurately identify the striking features of their gait, which was the main information goal of the project.  One thing we observed, however, was that there were not many changes in gait between runs, with most changes occurring due to fatigue or natural compensation for small injuries.  That led us to conclude that the product might be better suited for the running store environment, where new users are seen frequently.  Given the relatively small number of running stores, we believe the most promising market for this product would be small, and focused on the post-run analysis features.  Live feedback was much less important to the running store employees, who were willing to tolerate a slight delay to get more detailed results.  We found that this space enjoys using technology already (such as slow motion video from multiple angles), and was quite enthusiastic about being able to show customers a new way to scientifically gather information about their gait and properly fit them for shoes.  Their main areas of focus on the product were reusability, the ability to fit multiple shoe sizes, accuracy of information, and small form factor.

We feel confident that further iteration would make the product easier to use, and also more focused on the running store employee user group, since they appear to be the ones most likely to purchase the product. That being said, we are unsure that this device could progress beyond being a replacement for existing video systems. Despite several conversations with running store employees, including contextual interviews while they met with actual customers, we were unable to identify any real information uses beyond the ones currently served by visual video analysis. While our product is more accurate and takes a more scientific approach, achieving adoption would likely be a major hurdle because of the money such stores have already invested in video systems.

While the live feedback functionality is a quite interesting element of the project, it seems to have a less clear marketable use. The runners we spoke to seemed to feel that live feedback was an interesting and cool feature, but not one they would be willing to pay for. Most (before testing) felt that their gait did not change significantly while running, and in surveys indicated that they already use a variety of electronics to track themselves while running, including GPS units, pedometers, and Nike+. The runners consistently rated feedback such as distance, location, pace, and comparison to past runs as more important than gait, running style, and foot pressure. They also indicated an unwillingness to add additional electronic devices to their running, which already often involves the hassle of carrying a large phone or mp3 player. As a result, one avenue with some potential would be integration into an existing system. The most likely option in this field would probably be Nike+, which is already built around a shoe. Designing a special insole that communicates with the shoe (and through it, the iPod or iPhone) would be a potential way to turn the gait feedback device into a viable product for sale. Clearly, this would have significant issues with licensing and product integration (with both Nike and Apple), but aside from such an integration there does not appear to be a real standalone opportunity. As a result, we concluded that the product’s future would almost certainly require a stronger focus on the running store employee demographic.

 

Future steps if we had more time:

With more time, one of the things we would spend a great deal of effort on would be the training of the Arduino for live feedback. Our users told us several times that the two-light system was not enough to really guide changes in gait, especially given that many changes in running style happen subconsciously over time as the runner gets tired. The system did not give enough indication of how to fix the problem, only that a problem existed. This could be solved through integration with a system like Nike+ or other phone apps, which would allow a heatmap GUI to give directions to the runner. Before implementing such a system, we would like to speak more with runners about how they would interact with this format of live feedback, and whether they would want it at all. Following that, more testing would be done on the most effective ways to convey problems and solutions in gait through a mobile system.

Although live feedback is likely the area with the most opportunity for improvement in our prototype, our understanding of the targeted users indicates a stronger demand for the analysis portion for use in running stores. Therefore, we would likely focus more on areas such as reusability and durability, to ensure that multiple users with different characteristics could use the product. Furthermore, we would revisit the idea of resizing, which is currently done by folding the insole. It is possible that multiple sizes could be made, but resizing is a more attractive option (if feasible) because it allows running stores to purchase only one. This would likely involve more testing along the lines of what we already completed: having users of different shoe sizes attempt to use the product, either with or without instructions on resizing. Additionally, for the running store application, we would seriously consider limiting the number of wires running along the user’s leg. This could be done using a Bluetooth transmitter strapped to the ankle, or through a wired connection to a treadmill. While this is a significant implementation challenge, it seems that a feasible solution likely exists. Lastly, we found the machine learning tools to be quite interesting, and would also consider exploring the use of a veteran employee’s shoe recommendations to train our device to select shoes for the runner. This would allow the store to hire less experienced employees and save money. Such a system would also require testing, through which we would gain a better understanding of how it affects the interaction between the store employee and the customer. It would be very interesting to see whether such a design undermined the authority of the employee, or made the customer more likely to buy the recommended shoe.


Source code and README zip file: 

 https://docs.google.com/file/d/0B4_S-8qAp4jyZFdNTEFkWnI2eG8/edit?usp=sharing

 

Third-party code list:

PDF Demo Materials:

https://docs.google.com/file/d/0ByIgykOGv7CCbGM3RXFPNTNVZjA/edit?usp=sharing

 

PFinal – Name Redacted

Group 20 – Name Redacted

Brian, Ed, Matt, and Josh

Description: Our group is creating an interactive and fun way for students and others who do not know computer science to learn the fundamentals of computer science without the need for expensive software and/or hardware.

P1 – https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-name-redacted/

P2 – https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-name-redacted/

P3 – https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-name-redacted/

P4 – https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-name-redacted/

P5 – https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-name-redacted/

P6 – https://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-name-redacted/

Video:

http://www.youtube.com/watch?v=qSJOHOulwMI

This video shows a compilation error that is detected by the TOY Program.  It tells the user where the error occurred and what the arguments of the command should be.

http://www.youtube.com/watch?v=YD4E56iZWfI

This video shows a TOY Program that prints out the numbers from 5 to 1 in descending order. The program takes 1.5 seconds to execute each step so that a user can see the program progress. An arrow is displayed over the line that is currently being executed.

http://www.youtube.com/watch?v=RO5azUA_Vtw

Number Bases

Bulleted List of Changes:

We made these changes after watching the testers use our code.

  • We debugged some of the assembly code.

  • There is now application persistence – walking in front of a tag does not cause the application to restart.

  • We wrote on the back of each tag what it says on the front, so that users can easily pick up the tags they need.

We made these changes based on the suggestions of the users.

  • We added debugging messages to the “Assembly” program to provide feedback when a user creates a compile-time or runtime error. Previously, we just highlighted rows.

  • We cleaned up the numbers display so that the decimal, binary, and hexadecimal values each have their own section.

  • The “Numbers” program now shows how the numbers are generated (a small base-conversion sketch follows this list).

  • Initialization no longer requires users to cover up tags.
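As a point of reference, here is a tiny Python sketch of the kind of display the “Numbers” lesson builds: the same value shown in separate decimal, binary, and hexadecimal sections. Our actual lesson renders this with the projector rather than in Python.

```python
def base_sections(value):
    """Return the same integer formatted for each section of the display."""
    return {
        "decimal": str(value),
        "binary": bin(value)[2:],
        "hexadecimal": hex(value)[2:].upper(),
    }

print(base_sections(46))   # {'decimal': '46', 'binary': '101110', 'hexadecimal': '2E'}
```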

Evolution of the Goals/Design (1-2 Paragraphs):

Goals

Our idea began with a problem: students in America are not learning enough about Computer Science. The goal we set during P1 was to create a technology that helps facilitate CS education for students of all ages. The first step we took toward that goal was to refine it further. At first we were thinking about how we could make learning CS more engaging, and we toyed with games and other teaching tools. Ultimately, however, we decided that our goal was to make a lightweight, relatively inexpensive teaching tool that teachers could use to engage students in the classroom and give students a tactile interaction with the concepts they are learning.

Design

With this goal in mind, we set out on the process of design. We immediately took up the technology of AR tags. We really liked the fact that you can print them out for essentially pennies and make new lessons very cheaply. We also liked the idea that teachers could throw this tool together from technology they might already have floating around a classroom (i.e., an old webcam or projector). Like the goal-setting process above, the design process was about specificity. Once we had the basic design around AR tags, we delved deeper into how the design would actually look. We created metaphors such as “Tag Rows,” where our system identifies horizontal rows of tags automatically, and we played with various types of user feedback, such as displaying lines and debug messages on top of the tags.
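To make the “Tag Rows” metaphor concrete, here is a hypothetical sketch of grouping detected tag centers into horizontal rows by clustering their vertical positions; the tuple format and pixel tolerance are illustrative, not our platform’s real API.

```python
def group_into_rows(tags, y_tolerance=30):
    """tags: list of (tag_id, x, y) centers; returns rows ordered top to bottom."""
    rows = []
    for tag in sorted(tags, key=lambda t: t[2]):              # scan top to bottom
        if rows and abs(rows[-1][-1][2] - tag[2]) <= y_tolerance:
            rows[-1].append(tag)                              # close enough: same row
        else:
            rows.append([tag])                                # start a new row
    for row in rows:
        row.sort(key=lambda t: t[1])                          # order left to right
    return rows
```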

HCI as a Platform

Moving beyond the scope of the required sections of our project, we think one of the areas where we went above and beyond is not only creating a stellar interface but actually creating a usable platform for interface development. We created sandboxes, metaphors, and ways that developers could safely and quickly create applications on our platform. We expose data structures (like the Tag Library) that give our developer users a myriad of options for accessing the data on the page. Although clearly not as advanced or fleshed out, our platform is akin to the iOS platform, which takes the complicated problems of capacitive touch and exposes them as simple events for developers. In the process of making this platform, we learned a lot about the possibilities of our project.

Critical Evaluation of the Project (2-3 Paragraphs):

After working on this project for a few months, we believe a future iteration of it could be turned into a useful real-world system. There is certainly a need for teaching computer science, particularly to young students. Very few middle schools teach computer science, since they do not have the resources or the teacher training. Many middle schools cannot afford computers for every student, or even multiple computers per classroom. These schools do usually have one projector per classroom and one laptop for the teacher. Our project combines the projector, the laptop, and a $30 webcam, and would allow schools to teach computer science without expensive hardware and/or software. There is certainly a need for computer science instruction, and a future iteration of our project could help eliminate some of the obstacles that currently prevent it from being taught.

We made our program modular so that future users can develop programs to teach what they want. Just as we created the “Numbers” program and the “Assembly” program, a teacher could create another program very easily. All they would have to do is create a few tags corresponding to their lesson and some source code that interprets the tags as needed. Our program provides them with a list of tag rows and a graphic; their program can change the graphic and parse the tag rows to create a reasonable image. If people used our program in the real world, they could easily customize the interfaces we provided to suit their needs. Thus, although we only created a few lessons, other users could create many lessons to teach multiple topics.

Even outside of the classroom, there is a great need for computer scientists and people who know how to program.  Some of these people have taken some computer science courses in college but want to further their knowledge and understanding of the subject.  Many of our testers varied in their backgrounds in computer science.  However, they all were able to learn and practice by using our system.  By creating this project and interviewing and testing potential users, we have learned a lot about the difficulties of computer science and the topics that students find most difficult.  We geared our project towards teaching these topics better.  We did this by creating a binary program and by creating a TOY program which very easily shows memory management with the registers.  Thus, with our interviews and tests, we learned a great deal about the trouble with learning computer science and geared our project towards teaching those concepts.

 

Moving forward with the Project (1-2 Paragraphs):

Looking ahead, there are many ways we could improve our project design and develop it into a fully functional retail product. Currently our project is still a basic proof of concept, demonstrating its capability as a useful teaching tool. Keeping in mind our initial goal to develop an interface that would be used in a classroom environment by both teachers and students, our primary focus in developing the project further would be to improve the interface, build more built-in lessons, and develop a platform for teachers to easily create new lessons.

In addition to making the overall presentation cleaner and more professional looking, the interface improvements would primarily be graphical additions such as animations and indicators that communicate to the user what is currently happening as they interact with the system. These improvements would make the system not only more intuitive but also more entertaining for students. Additional useful features might include voice-overs of instructions and demonstration videos on how to use various features of the system.

One of the most important things we would like to improve is the library of built-in lessons available for users to choose from. Currently we only have two basic lessons: base conversion and a TOY-programming environment. A more developed lesson library might contain tasks involving more advanced programming topics, teach various programming languages, or even include educational games. There is also no need for lessons to be restricted to CS-related topics; our system could effectively be used to teach virtually any subject, provided the right lessons are created.

Another key improvement we would like to make is developing a lesson-creation environment where teachers can quickly and easily design their own lessons for their students. Any lesson library we make on our own will inevitably be lacking in some respect. Giving users the power to create their own lessons, and even share them online with each other, would vastly improve the potential of our system as a teaching tool.

Our code base can be found at: https://docs.google.com/file/d/0B9H9YRZVb0bMY2JCZzhPa19ZZE0/edit?usp=sharing

Our README can be found at: https://docs.google.com/file/d/0B9H9YRZVb0bMNlllTElqdHdBYnc/edit?usp=sharing

Bulleted list of Third Party Code (including libraries):

  • Homography – used to calculate the transformation between camera coordinates and projector coordinates
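For context, this is a minimal sketch of what the homography is used for: mapping a point detected in camera coordinates into projector coordinates with a 3x3 matrix H. The matrix values below are placeholders; the real H comes from calibration.

```python
import numpy as np

H = np.array([[1.02, 0.01, -14.0],    # placeholder calibration values
              [0.00, 0.98,   9.5],
              [0.00, 0.00,   1.0]])

def camera_to_projector(x, y, H=H):
    px, py, w = H @ np.array([x, y, 1.0])   # apply H in homogeneous coordinates
    return px / w, py / w                   # divide out the scale factor

print(camera_to_projector(320, 240))
```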

 

Our Poster:

https://docs.google.com/file/d/0B9H9YRZVb0bMb2NvbUFhZWZ6VTA/edit?usp=sharing

Additional Demo Print Outs:

https://docs.google.com/file/d/0B9H9YRZVb0bMS2pxZmQ1cjJZSTg/edit?usp=sharing

Final Submission – Team TFCS

Group Number: 4
Group Name: Team TFCS
Group Members: Collin, Dale, Farhan, Raymond

Summary: We are making a hardware platform which receives and tracks data from sensors that users attach to objects around them, and sends them notifications, e.g. to build and reinforce habits and track activity.

Previous Posts:

P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/06/p6-team-tfcs/
P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/5642/
P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/08/p4-team-tfcs/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/p3-2/
P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-tfcs/
P1: http://blogs.princeton.edu/humancomputerinterface/2013/02/22/964/

Final Video

http://www.youtube.com/watch?v=1j8ZQd-cJJw&feature=em-upload_owner

Changes from P6

  • Added a “delete” function and prompt to remove tracked tasks. This addressed a usability issue we discovered while testing the app.

  • Improved the algorithm that decides when a user has performed a task. The previous version had a very sensitive threshold for detecting tasks; we improved the threshold and now use a vector of multiple sensor values, rather than a single sensor reading, to decide what to use as a cutoff (a minimal sketch of this idea follows this list).

  • Simplified the choice of sensors to include only the accelerometer and magnetometer. User testing indicated that the multiple sensor choices vastly confused people, so we simplified it to two straightforward choices.

  • Updated the text, descriptions, and tutorial within the app to be clearer, based on user input from P6.

  • Updated each individual SensorTag page to display an icon representing the sensor type, simplified the information received from the SensorTag in real time, and added a view of the user’s progress in completing tasks.
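The sketch below illustrates the multi-sensor cutoff idea in Python rather than in the iOS app’s own code; the cutoff values, units, and function names are assumptions for illustration only.

```python
import math

ACCEL_CUTOFF = 1.4   # assumed change in acceleration magnitude that suggests motion
MAG_CUTOFF = 25.0    # assumed change in magnetometer magnitude (e.g., a lid opening)

def magnitude(vector):
    return math.sqrt(sum(component * component for component in vector))

def task_performed(accel_delta, mag_delta):
    """accel_delta, mag_delta: (x, y, z) changes since the previous reading."""
    return magnitude(accel_delta) > ACCEL_CUTOFF and magnitude(mag_delta) > MAG_CUTOFF
```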

Goal/Design Evolution

At the beginning of the semester, our goal was to make an iPhone application that allowed users to track tasks with TI SensorTags, but in a much more general way than we actually implemented. For example, we wanted users to decide which sensors on the SensorTag – gyroscope, magnetometer, barometer, thermometer, etc. – they would use and how, and we simply assumed that users would be able to figure out how best to use these readings to fit their needs. This proved to be a poor assumption, because it was not obvious to nontechnical users how these sensors could be used to track the tasks they cared about.

We quickly reoriented ourselves to provide not a SensorTag-tracking app but a *task*-tracking app, where the focus is on registering when users take certain actions – opening a book, taking a pill from a pillbox, going to the gym with a gym bag – rather than on activating the sensors on certain SensorTags. Within this framework, however, we made the model for our application more general, exposing more of how the system functions by allowing users to set up sensors for each task, rather than choose from a menu of tasks within the application. This made our system’s function easier for the end user to understand, which was reflected in our second set of interviews.

Critical Evaluation
Our work over the semester provided strong evidence that this type of HCI device is quite feasible and useful. Most of the users we tested expressed an interest in an automated task-tracking application and said that they would use Taskly personally. Still, one of the biggest problems with our implementation of a sensor-based habit-tracking system was the size and shape of the sensors themselves. We used a SensorTag designed by TI which was large and clunky, and although we built custom enclosures to make the devices less intrusive and easier to use, they were still not “ready for production.” However, as mentioned above, this is something that could easily be fixed in more mature iterations of Taskly. One reason to believe that our system might function well in the real world is that the biggest problems we encountered – the SensorTag enclosures and the lack of a fully featured iPhone app – are things we would naturally fix if we were to continue to develop Taskly. We learned a significant amount about the Bluetooth APIs through implementing this project, as well as about the specific microcontroller we used; we expect BLE devices, currently supported only by the iPhone 4S and later phones, to gain significant adoption.

The project ended up being short on time; our initial lack of iOS experience made it difficult to build a substantively complex system. The iPhone application, for example, does not have all of the features we showed in our early paper prototypes. This was partly because those interfaces revealed themselves to be excessively complicated for a system that was simple on the hardware side; however, we lost configurability and certain features in the process. On the other hand, we found that learning new platforms (both iOS and the SensorTag) could definitely be accomplished over the course of weeks, especially when making use of previous computer science knowledge.

One final observation that was reinforced by our user testing was that the market for habit-forming apps is very narrow. People were very satisfied with the use cases we presented to them, and their recommendations for future applications of the system aligned very closely with the tasks we believed to be applicable for Taskly. Working on this project helped us recognize the diversity of people and needs that exist for assistive HCI-type technologies like this one, and gave us a better idea of what kind of people would be most receptive to systems where they interact with embedded computers.

Moving Forward

One of the things we’d most like to improve in later iterations of Taskly is building custom sensor tags. The current SensorTags we use are made by Texas Instruments as a prototyping platform, but they’re rather clunky. Even though we’ve made custom enclosures for attaching these sensors to textbooks, bags, and pillboxes, they are likely still too intrusive to be useful. In a later iteration, we could create a custom sensor that uses just the Bluetooth microcontroller core of the current SensorTag (the CC2541) and the relevant onboard sensors like the gyroscope, accelerometer, and magnetometer. We could fabricate our own PCB and make the entire tag only slightly larger than the coin-cell battery that powers it. We could then 3D print a tiny, streamlined custom case so that the tag would be truly nonintrusive.

Beyond the sensor tags, we can move forward by continuing to build the Taskly iPhone application using the best APIs Apple provides. For example, we currently notify users of overdue tasks by texting them with Twilio. We would eventually like to send push notifications using the Apple Push Notification service instead, since text messages are typically used for personal communication. We could also expand what information the app makes available, increasing the depth and sophistication of the historical data we expose. Finally, we could make the SensorTag more sophisticated in its recognition of movements like the opening of a book or pillbox by applying machine learning to interpret these motions (perhaps, for example, using Weka). This would involve a learning phase in which the user performs the task with the SensorTag attached to the object, and the system learns what sensor information corresponds to the task being performed.

Another thing we need to implement before the system can go public is offline storage. Currently the sensor only logs data when the phone is in range of the sensortag. By accessing the firmware on the sensortag, it is possible to make it store data even when the phone is not in range and then transmit it when a device becomes available. We focused on the iOS application and interfacing to Bluetooth, because the demonstration firmware already supported sending all the data we needed and none of us knew iOS programming at the start of the project. Now that we have developed a basic application, we can start looking into optimizing microcontroller firmware specifically for our system, and implementing as much as we can on the SensorTag rather than sending all data upstream (which is more power-hungry). A final change to make would be to reverse the way Bluetooth connects the phone and sensor: currently, the phone initiates connections to the Bluetooth tag; reversing this relationship (which is possible using the Bluetooth Low Energy host API) would make the platform far more interesting, since tags would now be able to push information to the phone all the time, and not just when a phone initiates a connection.

iOS and Server Code

https://www.dropbox.com/s/3xu3q92bekn3hf5/taskly.tar.gz

Third Party Code

1. TI offers a basic iOS application for connecting to SensorTags. We used it as a launching point for our app. http://www.ti.com/tool/sensortag-ios

2. We used a jQuery-based graphing library, Highcharts, for visualization. http://www.highcharts.com

Demo Poster

https://www.dropbox.com/s/qspkoob9tggp6yd/Taskly_final.pdf

The Elite Four – Final Project Documentation

The Elite Four (#19)
Jae (jyltwo)
Clay (cwhetung)
Jeff (jasnyder)
Michael (menewman)

Project Summary

We have developed Beepachu, a minimally intrusive system that helps ensure users remember to bring important items with them when they leave their residences; the system also helps users locate lost tagged items, either in their room or in the world at large.

Our Journey

P1: http://blogs.princeton.edu/humancomputerinterface/2013/02/22/elite-four-brainstorming/
P2: http://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-elite-four/
P3: http://blogs.princeton.edu/humancomputerinterface/2013/03/29/the-elite-four-19-p3/
P4: http://blogs.princeton.edu/humancomputerinterface/2013/04/08/the-elite-four-19-p4/
P5: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/the-elite-four-19-p5/
P6: http://blogs.princeton.edu/humancomputerinterface/2013/05/06/6008/

Videos

https://www.youtube.com/watch?v=88m7W5qVMsc&feature=youtu.be

https://www.youtube.com/watch?v=pkNwCs1Jv0w&feature=youtu.be

https://www.youtube.com/watch?v=bOmNP0gQXmI&feature=youtu.be

Photos

Changes Since P6

  • We added sound to the prototype’s first function: alerting the user if important items are left behind when s/he tries to leave the room. The system now plays a happy noise when the user opens the door with tagged items nearby; it plays a warning noise when the user opens the door without tagged items in proximity.

  • We added sound to the prototype’s item-finding feature. At first, we had used the blinking rate of the two LEDs to indicate how close the user was to the lost item. We improved that during P6 by lighting up the red LED when the item was completely out of range, and using the green LED’s blinking rate to indicate proximity. We now have sound to accompany this. The speaker only starts beeping when the lost item is in range, and the beeping rate increases as the user gets closer to the lost item.

By adding sound to the prototype, we made our system better able to get the user’s attention – after all, it is easy to overlook a small, flashing LED, but it is much harder to overlook the LED and ignore the beeping at the same time. For the second function, sound also allows the user to operate the prototype without taking their eyes off their surroundings, improving their ability to visually search for missing items.
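As a rough illustration of the proximity feedback described above, the hypothetical sketch below maps a tag’s signal strength to a beep interval once the item is in range; the signal scale and thresholds are placeholders, and the real logic runs on the Arduino rather than in Python.

```python
def beep_interval(signal_strength, in_range_threshold=20, max_strength=100):
    """Return seconds between beeps, or None when the item is out of range."""
    if signal_strength < in_range_threshold:
        return None                              # out of range: red LED, no beeping
    # map a stronger signal to a shorter interval (1.0 s down to 0.1 s)
    closeness = (signal_strength - in_range_threshold) / (max_strength - in_range_threshold)
    return 1.0 - 0.9 * min(max(closeness, 0.0), 1.0)
```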

Goals and Design

Our primary goal has not changed since the beginning of the semester. Our prototype’s main function, as before, is to save users from their own faulty memories by reminding them not to forget important items when they leave their residences. However, our prototype has evolved to include a related secondary goal/task, which is finding items once they are already lost/forgotten. Because our hardware already incorporated item-detection via RFID, this was a natural extension not only of our stated goals, but also of our hardware’s capabilities.

Critical Evaluation

From our work on this project, it appears that a system such as ours could become a viable real-world system. The feedback we received from users about the concept shows that this has the potential to become a valuable product. We were able to address most of our testers’ interface concerns with little difficulty, but we still suffered from hardware-based issues. With improved technology and further iterations we could create a valuable real-world system.

The primary issue with the feasibility of developing this project into a real world system comes from hardware and cost constraints, rather than user interest. The current state of RFID presents two significant issues for our system. The first is that, in order for our system to function optimally, it needs a range achievable only with active RFID or extremely sophisticated passive RFID. That level of passive RFID would be unreasonable in our system due to its astronomical cost (many thousands of dollars). Active RFID, which our prototype used, is also quite expensive but feasible. Its primary issue is that the sizable form factor of most high quality transmitters does not allow for easy attachment to keys, phones, wallets, etc. Therefore, ideally, our system would have a high-powered and affordable passive RFID, but currently that technology appears to be unavailable. EAS, the anti-theft system commonly used in stores, is another feasible alternative, but its high cost is also prohibitive.

Moving Forward

As stated in the previous section, the best option for moving forward would be to improve the hardware, specifically by switching to more expensive high-powered passive RFID. Other avenues of exploration include refining range detection for our first task, which would become increasingly important with more powerful RFID detection systems, and implementing tag syncing. Range limiting matters because, if our system can detect tags from a significant distance, it must not give a “false positive” when a user has left items somewhere else in their relatively small room but does not have them at the door. Syncing of tagged items would become important for a system with multiple tags; it would allow users to intentionally leave behind certain items, or allow multiple residents of a single room/residence to have different “tag profiles.” Syncing could also permit detection of particular items, which would allow greater specificity for our second and third tasks. Finally, for most of our testing we used laptops as a power source for the Arduino. This was fine for the prototype, but a real version of this product would ideally have its own power source (e.g., a rechargeable battery).

Code

https://drive.google.com/folderview?id=0B5LpfiwuMzttMHhna0tQZ1hXM0E&usp=sharing

Third Party Code

Demo Materials

https://drive.google.com/folderview?id=0B5LpfiwuMzttX2ZPRHBVcmlhMUk&usp=sharing

The Backend Cleaning Inspectors: Final Project

Group 8 – Backend Cleaning Inspectors

Dylan

Tae Jun

Green

Peter 

One-sentence description
Our project is to make a device to improve security and responsibility in the laundry room.

Links to previous projects

P1, P2, P3, P4, P5, P6

Videos

Task 1: Current User Locking the Machine

http://www.youtube.com/watch?v=U3c_S24bCTs&feature=youtu.be

Task 2.1: Waiting User Sending Alert During Wash Cycle

http://www.youtube.com/watch?v=0TzK4zgQg28&feature=youtu.be

-If the waiting user sends an alert during the current wash cycle, the alert will queue until the cycle is done. When the cycle is done, the current washing user will immediately receive a waiting user alert in addition to the wash cycle complete notification. The grace period will begin immediately after the wash cycle ends.

Task 2.2: Waiting User Sending Alert When Wash Cycle Complete

http://www.youtube.com/watch?v=JoSIdzqKBC4&feature=youtu.be

-If the waiting user sends an alert after the current wash cycle is complete, the current washing user will immediately receive a waiting user alert and the grace period will begin.
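
For clarity, the alert handling in Tasks 2.1 and 2.2 boils down to the following state logic (an illustrative Python sketch with hypothetical names; the real behavior is split between the Arduino and our backend):

    # Illustrative sketch of the waiting-user alert logic (Tasks 2.1 and 2.2).
    # Hypothetical names; the real implementation is on the Arduino/backend.

    class LaundryMachine:
        def __init__(self):
            self.cycle_running = True
            self.pending_alert = False
            self.in_grace_period = False

        def waiting_user_alert(self):
            """A waiting user presses the alert button."""
            if self.cycle_running:
                self.pending_alert = True        # Task 2.1: queue until the cycle ends
            else:
                self.notify("A user is waiting for this machine.")
                self.start_grace_period()        # Task 2.2: grace period starts now

        def cycle_complete(self):
            """The wash cycle finishes."""
            self.cycle_running = False
            self.notify("Your wash cycle is complete.")
            if self.pending_alert:               # deliver any queued alert immediately
                self.notify("A user is waiting for this machine.")
                self.start_grace_period()
                self.pending_alert = False

        def start_grace_period(self):
            self.in_grace_period = True          # machine unlocks when this expires

        def notify(self, message):
            print(message)                       # stand-in for the email/LCD notification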

Task 3: Current User Unlocking Machine to Retrieve Laundry

http://www.youtube.com/watch?v=UP3rVQB4EqM&feature=youtu.be

Changes made since the working prototype used in P6

  • Further improved the instructions displayed on the LCD screen for each task in order to enhance usability, based on the feedback we received from test users

How and why your goals and/or design evolved

Since the beginning, we knew pretty much what our final prototype was going to look like. Along the way, though, we evolved it little by little according to feedback from our user studies. One important thing we developed over the semester is the manner in which our locking mechanism physically locks the machine. We thought of many creative ideas at the beginning, but eventually narrowed them down to a simple servo motor inside a carved piece of balsa wood that rotates the lock closed when the user locks the machine. This made the most sense; however, it would have to be improved if the product were to be commercially produced and sold, as it would not be that hard to break the lock and gain access to the clothes within. Another feature we improved along the way is the set of directions displayed on the LCD screen as the user attempts each of the three tasks. The instructions and information displayed have grown from the bare minimum at the start to something significantly more helpful, with timers, error recognition, and more.

The most significant development in terms of our overall goals was that, as we implemented our system and received feedback from our test users, we shifted our focus more from the security of the laundry itself to the responsibility of the interacting users. In the beginning, our priority was mainly to keep the current laundry in the machine from being tampered with, even at the expense of the waiting next user. However, we began to realize that our target audience preferred that we focus more on responsibility between users. This resulted in shorter grace periods for current laundry users, as our testers generally expressed that they would be fine with their laundry being opened if they had warnings that they failed to heed. They expressed that, from the standpoint of the waiting user, it was more fair that they waited less time for the machine to unlock, given that the responsibility lay with the late current user.

Critical evaluation of our project

Overall, we view our project as a definite success. In a short period of time, we developed a working prototype that closely resembles the system we envisioned in the first planning stages, complete with improvements and minor revisions along the way. With further iteration and development, this could definitely be made into a useful real-world system. The only parts lacking in terms of production viability are the physical locking mechanism, which would need to be strengthened to guarantee the machine remained locked, and a more robust backend setup that would coordinate user interactions (through email servers, etc.) more effectively. Once these components have been implemented, our code would take care of the rest, and the product would be close to ready for commercial use after a few more rounds of user testing.

We have learned that our original idea was actually a very good one. The application space definitely exists, as there is high demand for security and responsibility in the laundry room. We have seen many emails on the residential college listservs about lost, stolen, or moved laundry, and this is exactly the problem our system sets out to solve. Our test users have also expressed enthusiasm and support for our project, and at one point we were even offered an interview by the Daily Princetonian. Our system proved fairly intuitive in user tests, and we think we have designed a good prototype for a system that serves the purpose it set out to accomplish: protecting students’ laundry from irresponsible users and giving users peace of mind.

Future plans given more time

The most important implementation challenge to address before the product is commercially viable is the physical locking mechanism. Right now it is just a weak servo motor encased in a block of balsa wood. This would need to be improved or replaced entirely, for example with an industrial electromagnetic lock. Upon finding an appropriate production lock, we would also need a secure, minimally invasive way to mount the system to existing laundry machines.

Another component we would improve is our backend setup. We are currently hosting our email warning system as a Django application on a free Heroku trial server. It currently sends warnings only to a few hard-coded email addresses, as we do not yet have access to the school’s database of current student account numbers and their corresponding NetIDs.
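
Sending such a warning from Django amounts to little more than the call below (a minimal sketch with hypothetical view and address names, not our actual server code):

    # Minimal sketch of an email-warning endpoint; names and addresses are hypothetical.
    # The Arduino requests this URL when a warning needs to go out.
    from django.core.mail import send_mail
    from django.http import HttpResponse

    WARNING_RECIPIENT = "netid@princeton.edu"   # hard-coded for now

    def send_warning(request):
        send_mail(
            subject="Laundry alert: a user is waiting for your machine",
            message="Your grace period has started. Please collect your laundry.",
            from_email="laundry-protector@example.com",
            recipient_list=[WARNING_RECIPIENT],
        )
        return HttpResponse("sent")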

Code

https://www.dropbox.com/s/eoi09q30m2r4mvs/laundry_protector_program.zip

 

List of third-party code used in our project

  • Keypad: to control our keypad (http://playground.arduino.cc/Code/Keypad)

  • LiquidCrystal: to control our LCD screen. It comes with Arduino

  • Servo: to control our servo motor used in the locking mechanism. Included in Arduino

  • WiFly Shield: to control our WiFi shield. https://github.com/sparkfun/WiFly-Shield

  • Django: to send out emails to users upon requests from Arduino. https://www.djangoproject.com/

  • Heroku: to host our email server. https://www.heroku.com/

Links to PDF versions of all printed materials for demo

https://www.dropbox.com/s/x02pwoij7sdhoob/436Poster.pdf

Final Project – Team VARPEX

Richter Threads

Group Name and Number: VARPEX, Group 9

Group Members: Abbi, Dillon, Prerna and Sam

Project Description

Richter Threads is a jacket that creates a new private music listening experience through motors which vibrate with bass.

Previous Blog Posts:

Video

Changes Since P6

  • New protoboard with components soldered on: In our feedback from P5, many of our subjects complained that the box they had to carry around containing the system was cumbersome. To create a sleeker and more integrated experience, we soldered our system onto a small protoboard that could be affixed with velcro inside the jacket. The prototype box is no longer necessary.
  • Velcro fasteners for motors: The cloth pockets were not effective at holding the motors; we had to sew them in place for our last prototype. For this iteration, we put velcro inside the jacket and on the motors to make them easier to attach and detach. This also enhances the sensation from the motors, since there is no longer an additional layer of cloth over them.
  • Threshold potentiometer: We discovered in P6 that users liked being able to use their own music, but different music produces different amounts of bass. To accommodate this, we included a knob that lets users adjust the threshold of bass volume required to vibrate the motors while listening to their music.
  • Volume knob: We reversed the direction of the volume knob to make it more intuitive (turning clockwise increases the volume now).
  • New sweater: We chose a much more stylish and, more importantly, a better-fitting sweater for our system, since no one found our original sweater particularly attractive or comfortable.
  • Power switch: We added a power switch for just the batteries, to allow people to more easily listen to their music without the jacket on.
  • Length of motor wires: We lengthened some of the wires driving the motors, so that the jacket would be easier to put on despite the presence of a lot of wires.
  • “Internalizing” components: The board is affixed to the inside of the jacket, while the controls (volume and threshold knobs, power switch) are embedded in the pocket of the jacket, keeping the entire system inconspicuously hidden.
  • Batteries consolidated: The battery pack has been consolidated and secured in a different part of the jacket. The batteries impose a significant space bottleneck on our design (the power supply is harder to miniaturize than the board was), so they require special placement within the jacket.

Evolution of Goals and Design

Our goals over the course of prototyping definitely broadened as we received feedback from users. The original inspiration for the jacket was the dubstep and electronica genres, which are known for being bass-heavy. In testing, however, users responded positively to listening to other genres of music through the headphones; by limiting our goals to recreating the concert-going experience for dubstep/electronica, we were missing out on the jacket’s potential in other musical genres. This realization led to a concrete design change when we added a threshold knob that lets users adjust the sensitivity of the jacket. It also points to a broader shift in our goals: though we set out to replicate the concert-going experience, we ended up creating a unique experience in its own right. Our jacket offers users a tactile experience with music that goes beyond mimicking large subwoofers.

As for our design, our early plans called for an Arduino to perform the audio processing needed to filter out the low bass frequencies that actuate the motors. We quickly learned that this introduced a delay, and in music-related applications any perceptible delay can make the application useless. We found an excellent solution, however, by implementing a low-pass filter with a threshold that actuates the motors in an analog circuit. This design change also allowed us to make the circuit more compact and portable with every iteration. It also eliminates the need for complicated microcontroller setup; it is hard to imagine that an Arduino-based setup would have been as easy as plugging in your mp3 player and headphones and turning the jacket on, as it is in our current design.

 Project Evaluation

We will continue to create improved iterations of this jacket in the future. The users who tested our project expressed interest in the experience the jacket has to offer, and through the prototyping process we worked on making the system as portable and inconspicuous as possible. In our last two iterations alone, we moved from a bulky box containing our hardware to a much smaller protoboard embedded inside the jacket. If we ever wanted to produce this jacket for sale, surface-mount circuits could shrink the electronics even further. As it stands, the jacket does not outwardly appear to be anything but a normal hoodie. It is this sort of form factor that we think people could be interested in owning.

We worked in an application space we would label “entertainment electronics.” The challenge this space poses is that user ratings are, at their core, almost entirely subjective. We had to evaluate areas like user comfort and the quality of a “sensation,” none of which can really be independently tested or verified. In a way, our design goals boil down to pursuing a certain “coolness factor,” and our real challenge was making the jacket as interesting an experience as possible while minimizing the obstacles to usage. For instance, in P6, people reported difficulty in having to carry around a box that contained the system’s hardware; while they liked the sensation of the jacket, they found the form factor cumbersome. We sought to fix this in the final iteration of the jacket’s design: it is now as comfortable and inconspicuous as a normal jacket, which we hope allows people to see the jacket as something purely fun at no cost to convenience.

We set out with the goal of replicating a concert-going experience, but in the process created something entirely new. We believe we succeeded in giving people something “cool” that brought them joy. It is not clear to us what sort of objective measures alone could have captured this. While we could have strictly timed how long it took people to figure out how our jacket worked, for example, this would not have told us what people really want out of our system. And while we had tasks in mind when we set out to build and test our jacket, the users themselves came up with their own tasks (the jacket’s benefits at silent discos or sculpture gardens, its uses in a variety of musical genres, etc.), which forced us to consider the jacket beyond the narrow goals we had defined for ourselves. It is this sort of feedback that makes designing for entertainment an exciting and dynamic application space.

 Future Plans

If we had more time, we would refine the product to be as sleek, comfortable, and safe as a normal jacket. We would address problems introduced by moisture (sweat or rain) to ensure user safety and proper functioning; for this to become a product used in the long term, those concerns must be addressed. Next, we would minimize the impact of the hardware within the jacket: smaller, more flexible wires to the motors, printed circuit boards to shrink the circuitry, and an optimized component layout. In further iterations, we would use surface-mount electronics to reduce the size of the electronics even more. We would also like to find a better power solution; for instance, a 9.6V NiCd rechargeable battery could supply the necessary current for each of the voltage regulators and would fit in a better form factor, and it would save users from having to periodically buy new batteries. We would also test the battery life of the device to determine acceptable trade-offs between physical battery size and total battery energy. The motors would be integrated into a removable lining that holds down the wires and improves the washability and durability of the jacket. Finally, we would like to add features such as automatic threshold adjustment and a pause button; these would require additional understanding of analog electronics and of the audio-signal standards on different mobile devices.

The next stages of user testing would inform our decisions about battery type and the long-term usability of the jacket. We would have different sizes of the jacket so we can accommodate users of many body sizes, and we would ideally give the jacket to users for several hours or over the course of days to get quality feedback about practicality and durability. These experiments would also help us understand how users react to the jacket’s sensations after their novelty has worn off. This user testing could further shed light on additional features we might add to the jacket.

 README

How it works:

A male auxiliary cable plugs into the user’s music player and receives the audio signal. The signals from the two channels are fed to a dual potentiometer to reduce the volume for the user. The ground of this potentiometer sits at VREF (1.56 V) because a DC offset is needed to preserve all of the audio information with single-supply op-amps. The output of this potentiometer is fed to a female auxiliary cable where the user plugs in headphones. One channel of the audio is also fed to a voltage follower, which isolates the signal from the rest of our circuit. This signal then goes to a low-pass filter that removes frequencies above 160 Hz. The filtered signal is compared to a threshold value to determine the drive signal for the transistors. When each transistor turns on, current flows through its corresponding motor pair. The circuit is powered by three 9V batteries, each attached to a 5V voltage regulator (L7805CV). Two of the batteries power the motors, which draw significant current, and the third powers the operational amplifiers used for the comparator and voltage follower.
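
For reference, the cutoff of a first-order RC low-pass filter is set by the resistor and capacitor values; with a 0.1 µF capacitor from our parts list, a roughly 10 kΩ resistance lands near the 160 Hz target (this pairing is illustrative, not necessarily the exact values on the board):

    f_c = \frac{1}{2\pi RC} \approx \frac{1}{2\pi \cdot 10\,\mathrm{k\Omega} \cdot 0.1\,\mu\mathrm{F}} \approx 160\ \mathrm{Hz}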

Schematics


 Budget

Since our project is mostly hardware based, we’ve included a parts list and final budget below.

Item (quantity × unit price = line total):

  • L7805CV Voltage Regulator: 3 × $0.59 = $1.77

  • PN2222A Transistor: 6 × $0.32 = $1.92

  • LM358 Dual Op-amp: 2 × $0.58 = $1.16

  • Motor: 12 × $1.95 = $23.40

  • Protoboard: 1 × $6.50 = $6.50

  • 10k Dual Potentiometer: 2 × $1.99 = $3.98

  • Switch: 2 × $1.03 = $2.06

  • Battery clips: 3 × $1.20 = $3.60

  • Capacitor (0.1 µF): 15 × $0.30 = $4.50

  • Capacitor (1.0 µF): 2 × $0.30 = $0.60

  • 1N4001 diode: 7 × $0.08 = $0.56

  • Miscellaneous: 1 × $1.00 = $1.00

Total: $51.05

 Third Party Code

In earlier iterations, we attempted to use the Arduino to do the audio processing.

http://wiki.openmusiclabs.com/wiki/ArduinoFFT

Demo Materials

https://www.dropbox.com/sh/qxxsu30yclbh7l1/wv1qeXYeM-

 

Final Project – Team X

Group 10 — Team X
–Junjun Chen (junjunc),
–Osman Khwaja (okhwaja),
–Igor Zabukovec (iz),
–Alejandro Van Zandt-Escobar (av)

Description:
A “Kinect Jukebox” that lets you control music using gestures.

Previous Posts:
P1: https://blogs.princeton.edu/humancomputerinterface/2013/02/22/p1-team-x/
P2: https://blogs.princeton.edu/humancomputerinterface/2013/03/11/p2-team-x/
P3: https://blogs.princeton.edu/humancomputerinterface/2013/03/29/group-10-p3/
P4: https://blogs.princeton.edu/humancomputerinterface/2013/04/09/group-10-p4/
P5: https://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-group-10-team-x/
P6: https://blogs.princeton.edu/humancomputerinterface/2013/05/06/6085/

Video Demo:

Our system uses gesture recognition to control music. The user chooses a music file with our Netbeans GUI and can then use gestures for controls such as pause and play. We also support setting “breakpoints” with a gesture: when the dancer reaches a point in the music that he may want to return to, he uses a gesture to set a breakpoint, and later he can use another gesture to jump back to that point in the music easily. The user is also able to change the speed of the music on the fly by having the system follow gestures for speed up, slow down, and return to normal. Every time the slow-down or speed-up gesture is performed, the music incrementally slows down or speeds up.

Youtube Video Link
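
The playback control itself lives in our Max/MSP patch; the Python sketch below only illustrates the breakpoint and incremental-speed logic described above (the names and the 10% step size are assumptions):

    # Illustrative sketch of the gesture-driven jukebox state. The real playback
    # happens in Max/MSP; names and the 10% speed step are assumptions.

    class JukeboxState:
        SPEED_STEP = 0.1                     # each speed gesture changes the rate by 10%

        def __init__(self):
            self.playing = False
            self.rate = 1.0                  # 1.0 = normal speed
            self.position = 0.0              # seconds into the track
            self.breakpoint = None

        def on_gesture(self, gesture):
            if gesture == "play":
                self.playing = True
            elif gesture == "pause":
                self.playing = False
            elif gesture == "speed_up":
                self.rate += self.SPEED_STEP
            elif gesture == "slow_down":
                self.rate = max(self.SPEED_STEP, self.rate - self.SPEED_STEP)
            elif gesture == "normal_speed":
                self.rate = 1.0
            elif gesture == "set_breakpoint":
                self.breakpoint = self.position
            elif gesture == "goto_breakpoint" and self.breakpoint is not None:
                self.position = self.breakpoint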

Changes

  • Improved the weights for predefined gestures in Kinetic Space, as our testing in P6 indicated that our system struggled to recognize some gestures, which was the main source of frustration for our users. By changing the weights Kinetic Space places on certain parts of the body (for example, weighting the arms more when a gesture is mainly arm movement), we were able to improve recognition.
  • Finished and connected an improved GUI made in Netbeans to our MAX/MSP controller. We want the interface to be as simple and easy to use as possible.

Goals and Design Evolution:
While our main goal (to create a system that makes it easier for dancers to interact with their music during practice) and design (using a Kinect and gesture recognition) have not changed much over the semester, there has been some evolution in the lower-level tasks we wanted the system to accomplish. The main reasons have been technical: we found early on through testing that, for the system to be useful rather than a hindrance, it must recognize gestures with very high fidelity. This task is further complicated by the fact that the dancer is moving a lot during practice.
For example, one of our original goals was to have the system follow the speed of the dancer without the dancer having to make any specific gestures. We found that this was not feasible within the timeframe of the semester, as many of the recognition algorithms we looked at used machine learning (so they worked better with many examples of a gesture) and many required knowing, at least roughly, the beginning and end of the gesture (so they would not work well with gestures embedded in a dance).

Also, we essentially had to abandon one of our proposed functionalities. We thought we would be able to implement a system that made configuring a recognizable gesture a simple task, but after working with the gesture recognition software, we saw that setting up a gesture requires finely tuning the customizable weights of the different body parts to get even basic functionality. Implementing a system that automated that customization, we quickly realized, would take a very long time.

Critical Evaluation:
Based on the positive feedback we received during testing, we feel that this could be turned into a useful real-world system. Many of our users said that being able to control music easily would be useful for dancers and choreographers, and as a proof of concept we believe our prototype worked well. However, our final testing in P6 showed that users’ frustration levels increased if they had to repeat a gesture even once, so there is still a large gap between our current prototype and a real-world system. Despite their frustrations during testing, users indicated in the post-evaluation survey that they would be interested in trying an improved iteration of our system and that they thought it could be useful for dancers.
We’ve learned several things from our design, implementation, and evaluation efforts. First, although the Kinect launched in 2010, there isn’t great familiarity with it in the general population. Second, the Kinect development community, while not small, is quite new. Microsoft’s development support, with official SDKs, is mainly for Windows, though there are community SDKs for Mac. From testing, we’ve found that users are less familiar with this application space than with conventional window-based GUIs, but that they are generally very interested in gesture-based applications.

Next Steps:

  • Moving forward, we’d like to make the system more customizable. We have not found a way to do so with the Kinetic Space gesture recognition software we’re using (we don’t see any way to pass information, such as user-defined gestures, into the system), so we may have to implement our own gesture recognition. The basic structure of the gesture recognition algorithms we looked at seemed to involve tracking the x, y, z positions of various joints and limbs and comparing their movement (with margins of error perhaps determined through machine learning); a rough sketch of this kind of comparison appears after this list. We did not tackle this implementation challenge for the prototype, as we realized the gesture recognition would need to be rather sophisticated for the system to work well. With more time, however, we would like to do our gesture recognition and recording in Max/MSP so that we could integrate it with our music-playing software, and perhaps embed the Kinect video feed in the Netbeans interface.
  • We still like the idea of having the music follow the dancer, without any other input, and that would be something we’d like to implement if we had more time. To do so, we would need the user to provide a “prototype” of the dance at regular speed. Then, we might extract specific “gestures” or moves from the dance, and change the speed of the music according to the speed of those gestures.
  • As mentioned before, we would also like to implement gesture-configuration functionality. It may even be easier once we move to Max/MSP for gesture recognition, but at this point that is only speculation.
  • We’d also like to do some further testing. In the testing we’ve done so far, we’ve had users come to a room we’ve set up and use the system as we’ve asked them to. We’d like to ask dancers if we can go into their actual practice sessions, and see how they use the system without our guidance. It would be informative to even leave the system with users for a few days, have them use it, and then get any feedback they have.
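
As a rough sketch of the joint-position comparison mentioned in the first bullet above (illustrative only: the joint names, weights, and error margin are assumptions, not Kinetic Space internals, and a real implementation would also need temporal alignment, e.g. dynamic time warping):

    # Illustrative comparison of a live pose sequence against a recorded gesture
    # template using weighted per-joint distances. Joint names, weights, and the
    # margin are assumptions.
    import math

    # Weight arm joints more heavily for arm-based gestures.
    JOINT_WEIGHTS = {"left_hand": 2.0, "right_hand": 2.0,
                     "left_elbow": 1.5, "right_elbow": 1.5,
                     "torso": 0.5, "head": 0.5}

    def frame_distance(frame_a, frame_b):
        """Weighted Euclidean distance between two frames of (x, y, z) joints."""
        return sum(w * math.dist(frame_a[j], frame_b[j])
                   for j, w in JOINT_WEIGHTS.items())

    def matches(template, observed, margin=0.8):
        """True if the average per-frame distance stays within the margin.
        Assumes the two sequences are already the same length / aligned."""
        if len(template) != len(observed):
            return False
        avg = sum(frame_distance(t, o) for t, o in zip(template, observed)) / len(template)
        return avg < margin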

Source Code:
Source Code

Third Party Code:

Demo Materials:
Demo Materials